coroutine loop.sock_recv_into(sock, buf) Receive data from sock into the buf buffer. Modeled after the blocking socket.recv_into() method. Return the number of bytes written to the buffer. sock must be a non-blocking socket. New in version 3.7.
python.library.asyncio-eventloop#asyncio.loop.sock_recv_into
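A minimal sketch of sock_recv_into(); the socketpair() standing in for a network peer and the 16-byte buffer size are illustrative choices, not part of the API:

```python
import asyncio
import socket

async def main():
    loop = asyncio.get_running_loop()
    # socketpair() gives two connected SOCK_STREAM sockets on one host.
    left, right = socket.socketpair()
    left.setblocking(False)   # sock_recv_into requires a non-blocking socket
    right.setblocking(False)

    right.send(b"ping")       # the "peer" writes 4 bytes
    buf = bytearray(16)
    nbytes = await loop.sock_recv_into(left, buf)
    left.close()
    right.close()
    return bytes(buf[:nbytes])

result = asyncio.run(main())
print(result)   # b'ping'
```

Receiving into a preallocated buffer avoids allocating a new bytes object per read, which matters in tight receive loops.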
coroutine loop.sock_sendall(sock, data) Send data to the sock socket. Asynchronous version of socket.sendall(). This method continues to send to the socket until either all data in data has been sent or an error occurs. None is returned on success. On error, an exception is raised. Additionally, there is no way to determine how much data, if any, was successfully processed by the receiving end of the connection. sock must be a non-blocking socket. Changed in version 3.7: Even though the method was always documented as a coroutine method, before Python 3.7 it returned a Future. Since Python 3.7, this is an async def method.
python.library.asyncio-eventloop#asyncio.loop.sock_sendall
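A short sketch of sock_sendall() paired with sock_recv(); the socketpair() and the 1000-byte payload are arbitrary stand-ins for a real connection:

```python
import asyncio
import socket

async def main():
    loop = asyncio.get_running_loop()
    left, right = socket.socketpair()
    left.setblocking(False)
    right.setblocking(False)

    payload = b"x" * 1000
    # Returns None on success; keeps sending until every byte is written.
    await loop.sock_sendall(left, payload)
    left.close()

    received = b""
    while len(received) < len(payload):
        chunk = await loop.sock_recv(right, 2000)
        if not chunk:            # EOF: the peer closed its end
            break
        received += chunk
    right.close()
    return received

data = asyncio.run(main())
print(len(data))   # 1000
```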
coroutine loop.sock_sendfile(sock, file, offset=0, count=None, *, fallback=True) Send a file using high-performance os.sendfile if possible. Return the total number of bytes sent. Asynchronous version of socket.sendfile(). sock must be a non-blocking socket.SOCK_STREAM socket. file must be a regular file object open in binary mode. offset tells where to start reading the file. If specified, count is the total number of bytes to transmit as opposed to sending the file until EOF is reached. File position is always updated, even when this method raises an error, and file.tell() can be used to obtain the actual number of bytes sent. fallback, when set to True, makes asyncio manually read and send the file when the platform does not support the sendfile syscall (e.g. Windows or SSL socket on Unix). Raise SendfileNotAvailableError if the system does not support the sendfile syscall and fallback is False. New in version 3.7.
python.library.asyncio-eventloop#asyncio.loop.sock_sendfile
coroutine loop.start_tls(transport, protocol, sslcontext, *, server_side=False, server_hostname=None, ssl_handshake_timeout=None) Upgrade an existing transport-based connection to TLS. Return a new transport instance that the protocol must start using immediately after the await. The transport instance passed to the start_tls method should never be used again. Parameters: transport and protocol: instances that methods like create_server() and create_connection() return. sslcontext: a configured instance of SSLContext. server_side: pass True when a server-side connection is being upgraded (like the one created by create_server()). server_hostname: sets or overrides the host name that the target server’s certificate will be matched against. ssl_handshake_timeout is (for a TLS connection) the time in seconds to wait for the TLS handshake to complete before aborting the connection. 60.0 seconds if None (default). New in version 3.7.
python.library.asyncio-eventloop#asyncio.loop.start_tls
loop.stop() Stop the event loop.
python.library.asyncio-eventloop#asyncio.loop.stop
coroutine loop.subprocess_exec(protocol_factory, *args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kwargs) Create a subprocess from one or more string arguments specified by args. args must be a list of strings represented by: str; or bytes, encoded to the filesystem encoding. The first string specifies the program executable, and the remaining strings specify the arguments. Together, string arguments form the argv of the program. This is similar to the standard library subprocess.Popen class called with shell=False and the list of strings passed as the first argument; however, where Popen takes a single argument which is a list of strings, subprocess_exec takes multiple string arguments. The protocol_factory must be a callable returning a subclass of the asyncio.SubprocessProtocol class. Other parameters: stdin can be any of these: a file-like object representing a pipe to be connected to the subprocess’s standard input stream using connect_write_pipe(); the subprocess.PIPE constant (default), which will create a new pipe and connect it; the value None, which will make the subprocess inherit the file descriptor from this process; or the subprocess.DEVNULL constant, which indicates that the special os.devnull file will be used. stdout can be any of these: a file-like object representing a pipe to be connected to the subprocess’s standard output stream using connect_write_pipe(); the subprocess.PIPE constant (default), which will create a new pipe and connect it; the value None, which will make the subprocess inherit the file descriptor from this process; or the subprocess.DEVNULL constant, which indicates that the special os.devnull file will be used. stderr can be any of these: a file-like object representing a pipe to be connected to the subprocess’s standard error stream using connect_write_pipe(); the subprocess.PIPE constant (default), which will create a new pipe and connect it; the value None, which will make the subprocess inherit the file descriptor from this process; the subprocess.DEVNULL constant, which indicates that the special os.devnull file will be used; or the subprocess.STDOUT constant, which will connect the standard error stream to the process’ standard output stream. All other keyword arguments are passed to subprocess.Popen without interpretation, except for bufsize, universal_newlines, shell, text, encoding and errors, which should not be specified at all. The asyncio subprocess API does not support decoding the streams as text. bytes.decode() can be used to convert the bytes returned from the stream to text. See the constructor of the subprocess.Popen class for documentation on other arguments. Returns a pair of (transport, protocol), where transport conforms to the asyncio.SubprocessTransport base class and protocol is an object instantiated by the protocol_factory.
python.library.asyncio-eventloop#asyncio.loop.subprocess_exec
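A hedged sketch of subprocess_exec() with a minimal SubprocessProtocol; the OutputCollector class and the choice of running sys.executable -c are illustrative, not part of the API:

```python
import asyncio
import sys

class OutputCollector(asyncio.SubprocessProtocol):
    """Collects the child's stdout and resolves a future when done."""

    def __init__(self, done):
        self.done = done
        self.output = bytearray()
        self.pipe_closed = False
        self.exited = False

    def pipe_data_received(self, fd, data):
        if fd == 1:                      # 1 is the child's stdout
            self.output.extend(data)

    def pipe_connection_lost(self, fd, exc):
        if fd == 1:
            self.pipe_closed = True
            self._maybe_finish()

    def process_exited(self):
        self.exited = True
        self._maybe_finish()

    def _maybe_finish(self):
        # Wait for both events: the exit can be observed before the last read.
        if self.pipe_closed and self.exited and not self.done.done():
            self.done.set_result(bytes(self.output))

async def main():
    loop = asyncio.get_running_loop()
    done = loop.create_future()
    transport, protocol = await loop.subprocess_exec(
        lambda: OutputCollector(done),
        sys.executable, "-c", "print('hello')",
        stdin=None,
    )
    output = await done
    transport.close()
    return output

output = asyncio.run(main())
print(output)   # b'hello\n' on POSIX
```

For most applications the higher-level asyncio.create_subprocess_exec() streams API is simpler; the protocol form shown here is the low-level building block it sits on.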
coroutine loop.subprocess_shell(protocol_factory, cmd, *, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kwargs) Create a subprocess from cmd, which can be a str or a bytes string encoded to the filesystem encoding, using the platform’s “shell” syntax. This is similar to the standard library subprocess.Popen class called with shell=True. The protocol_factory must be a callable returning a subclass of the SubprocessProtocol class. See subprocess_exec() for more details about the remaining arguments. Returns a pair of (transport, protocol), where transport conforms to the SubprocessTransport base class and protocol is an object instantiated by the protocol_factory.
python.library.asyncio-eventloop#asyncio.loop.subprocess_shell
loop.time() Return the current time, as a float value, according to the event loop’s internal monotonic clock.
python.library.asyncio-eventloop#asyncio.loop.time
class asyncio.MultiLoopChildWatcher This implementation registers a SIGCHLD signal handler on instantiation. That can break third-party code that installs a custom handler for the SIGCHLD signal. The watcher avoids disrupting other code spawning processes by polling every process explicitly on a SIGCHLD signal. There is no limitation for running subprocesses from different threads once the watcher is installed. The solution is safe but it has a significant overhead when handling a large number of processes (O(n) each time a SIGCHLD is received). New in version 3.8.
python.library.asyncio-policy#asyncio.MultiLoopChildWatcher
asyncio.new_event_loop() Create a new event loop object.
python.library.asyncio-eventloop#asyncio.new_event_loop
coroutine asyncio.open_connection(host=None, port=None, *, loop=None, limit=None, ssl=None, family=0, proto=0, flags=0, sock=None, local_addr=None, server_hostname=None, ssl_handshake_timeout=None) Establish a network connection and return a pair of (reader, writer) objects. The returned reader and writer objects are instances of StreamReader and StreamWriter classes. The loop argument is optional and can always be determined automatically when this function is awaited from a coroutine. limit determines the buffer size limit used by the returned StreamReader instance. By default the limit is set to 64 KiB. The rest of the arguments are passed directly to loop.create_connection(). New in version 3.7: The ssl_handshake_timeout parameter.
python.library.asyncio-stream#asyncio.open_connection
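A self-contained sketch combining open_connection() with start_server(); the uppercasing echo handler and port 0 (ask the OS for a free port) are illustrative choices:

```python
import asyncio

async def handle(reader, writer):
    data = await reader.readline()
    writer.write(data.upper())          # echo the line back, uppercased
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # Port 0 asks the OS for any free port.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"hello\n")
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    await writer.wait_closed()

    server.close()
    await server.wait_closed()
    return reply

reply = asyncio.run(main())
print(reply)   # b'HELLO\n'
```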
coroutine asyncio.open_unix_connection(path=None, *, loop=None, limit=None, ssl=None, sock=None, server_hostname=None, ssl_handshake_timeout=None) Establish a Unix socket connection and return a pair of (reader, writer). Similar to open_connection() but operates on Unix sockets. See also the documentation of loop.create_unix_connection(). Availability: Unix. New in version 3.7: The ssl_handshake_timeout parameter. Changed in version 3.7: The path parameter can now be a path-like object.
python.library.asyncio-stream#asyncio.open_unix_connection
class asyncio.PidfdChildWatcher This implementation polls process file descriptors (pidfds) to await child process termination. In some respects, PidfdChildWatcher is a “Goldilocks” child watcher implementation. It doesn’t require signals or threads, doesn’t interfere with any processes launched outside the event loop, and scales linearly with the number of subprocesses launched by the event loop. The main disadvantage is that pidfds are specific to Linux, and only work on recent (5.3+) kernels. New in version 3.9.
python.library.asyncio-policy#asyncio.PidfdChildWatcher
class asyncio.PriorityQueue A variant of Queue; retrieves entries in priority order (lowest first). Entries are typically tuples of the form (priority_number, data).
python.library.asyncio-queue#asyncio.PriorityQueue
class asyncio.ProactorEventLoop An event loop for Windows that uses “I/O Completion Ports” (IOCP). Availability: Windows. See also MSDN documentation on I/O Completion Ports.
python.library.asyncio-eventloop#asyncio.ProactorEventLoop
class asyncio.Protocol(BaseProtocol) The base class for implementing streaming protocols (TCP, Unix sockets, etc).
python.library.asyncio-protocol#asyncio.Protocol
Protocol.data_received(data) Called when some data is received. data is a non-empty bytes object containing the incoming data. Whether the data is buffered, chunked or reassembled depends on the transport. In general, you shouldn’t rely on specific semantics and instead make your parsing generic and flexible. However, data is always received in the correct order. The method can be called an arbitrary number of times while a connection is open. However, protocol.eof_received() is called at most once. Once eof_received() is called, data_received() is not called anymore.
python.library.asyncio-protocol#asyncio.Protocol.data_received
Protocol.eof_received() Called when the other end signals it won’t send any more data (for example by calling transport.write_eof(), if the other end also uses asyncio). This method may return a false value (including None), in which case the transport will close itself. Conversely, if this method returns a true value, the protocol used determines whether to close the transport. Since the default implementation returns None, it implicitly closes the connection. Some transports, including SSL, don’t support half-closed connections, in which case returning true from this method will result in the connection being closed.
python.library.asyncio-protocol#asyncio.Protocol.eof_received
class asyncio.Queue(maxsize=0, *, loop=None) A first in, first out (FIFO) queue. If maxsize is less than or equal to zero, the queue size is infinite. If it is an integer greater than 0, then await put() blocks when the queue reaches maxsize until an item is removed by get(). Unlike the standard library threading queue, the size of the queue is always known and can be returned by calling the qsize() method. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. This class is not thread safe. maxsize Number of items allowed in the queue. empty() Return True if the queue is empty, False otherwise. full() Return True if there are maxsize items in the queue. If the queue was initialized with maxsize=0 (the default), then full() never returns True. coroutine get() Remove and return an item from the queue. If the queue is empty, wait until an item is available. get_nowait() Return an item if one is immediately available, else raise QueueEmpty. coroutine join() Block until all items in the queue have been received and processed. The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer coroutine calls task_done() to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks. coroutine put(item) Put an item into the queue. If the queue is full, wait until a free slot is available before adding the item. put_nowait(item) Put an item into the queue without blocking. If no free slot is immediately available, raise QueueFull. qsize() Return the number of items in the queue. task_done() Indicate that a formerly enqueued task is complete. Used by queue consumers. For each get() used to fetch a task, a subsequent call to task_done() tells the queue that the processing on the task is complete. 
If a join() is currently blocking, it will resume when all items have been processed (meaning that a task_done() call was received for every item that had been put() into the queue). Raises ValueError if called more times than there were items placed in the queue.
python.library.asyncio-queue#asyncio.Queue
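The maxsize, join() and task_done() behavior above can be sketched with a single producer and consumer; the worker function and its doubling "processing" step are invented for illustration:

```python
import asyncio

async def worker(queue, results):
    while True:
        item = await queue.get()
        results.append(item * 2)        # stand-in for real processing
        queue.task_done()

async def main():
    queue = asyncio.Queue(maxsize=2)    # put() blocks once 2 items are waiting
    results = []
    consumer = asyncio.create_task(worker(queue, results))
    for n in range(5):
        await queue.put(n)
    await queue.join()                  # returns once every item is task_done()
    consumer.cancel()
    return results

results = asyncio.run(main())
print(results)   # [0, 2, 4, 6, 8]
```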
empty() Return True if the queue is empty, False otherwise.
python.library.asyncio-queue#asyncio.Queue.empty
full() Return True if there are maxsize items in the queue. If the queue was initialized with maxsize=0 (the default), then full() never returns True.
python.library.asyncio-queue#asyncio.Queue.full
coroutine get() Remove and return an item from the queue. If the queue is empty, wait until an item is available.
python.library.asyncio-queue#asyncio.Queue.get
get_nowait() Return an item if one is immediately available, else raise QueueEmpty.
python.library.asyncio-queue#asyncio.Queue.get_nowait
coroutine join() Block until all items in the queue have been received and processed. The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer coroutine calls task_done() to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks.
python.library.asyncio-queue#asyncio.Queue.join
maxsize Number of items allowed in the queue.
python.library.asyncio-queue#asyncio.Queue.maxsize
coroutine put(item) Put an item into the queue. If the queue is full, wait until a free slot is available before adding the item.
python.library.asyncio-queue#asyncio.Queue.put
put_nowait(item) Put an item into the queue without blocking. If no free slot is immediately available, raise QueueFull.
python.library.asyncio-queue#asyncio.Queue.put_nowait
qsize() Return the number of items in the queue.
python.library.asyncio-queue#asyncio.Queue.qsize
task_done() Indicate that a formerly enqueued task is complete. Used by queue consumers. For each get() used to fetch a task, a subsequent call to task_done() tells the queue that the processing on the task is complete. If a join() is currently blocking, it will resume when all items have been processed (meaning that a task_done() call was received for every item that had been put() into the queue). Raises ValueError if called more times than there were items placed in the queue.
python.library.asyncio-queue#asyncio.Queue.task_done
exception asyncio.QueueEmpty This exception is raised when the get_nowait() method is called on an empty queue.
python.library.asyncio-queue#asyncio.QueueEmpty
exception asyncio.QueueFull Exception raised when the put_nowait() method is called on a queue that has reached its maxsize.
python.library.asyncio-queue#asyncio.QueueFull
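A small sketch showing both exceptions with a maxsize=1 queue; the event labels are invented for illustration:

```python
import asyncio

async def main():
    queue = asyncio.Queue(maxsize=1)
    events = []
    queue.put_nowait("only item")
    try:
        queue.put_nowait("overflow")    # queue already holds maxsize items
    except asyncio.QueueFull:
        events.append("full")
    queue.get_nowait()
    try:
        queue.get_nowait()              # nothing left to return
    except asyncio.QueueEmpty:
        events.append("empty")
    return events

events = asyncio.run(main())
print(events)   # ['full', 'empty']
```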
class asyncio.ReadTransport(BaseTransport) A base transport for read-only connections. Instances of the ReadTransport class are returned from the loop.connect_read_pipe() event loop method and are also used by subprocess-related methods like loop.subprocess_exec().
python.library.asyncio-protocol#asyncio.ReadTransport
ReadTransport.is_reading() Return True if the transport is receiving new data. New in version 3.7.
python.library.asyncio-protocol#asyncio.ReadTransport.is_reading
ReadTransport.pause_reading() Pause the receiving end of the transport. No data will be passed to the protocol’s protocol.data_received() method until resume_reading() is called. Changed in version 3.7: The method is idempotent, i.e. it can be called when the transport is already paused or closed.
python.library.asyncio-protocol#asyncio.ReadTransport.pause_reading
ReadTransport.resume_reading() Resume the receiving end. The protocol’s protocol.data_received() method will be called once again if some data is available for reading. Changed in version 3.7: The method is idempotent, i.e. it can be called when the transport is already reading.
python.library.asyncio-protocol#asyncio.ReadTransport.resume_reading
asyncio.run(coro, *, debug=False) Execute the coroutine coro and return the result. This function runs the passed coroutine, taking care of managing the asyncio event loop, finalizing asynchronous generators, and closing the threadpool. This function cannot be called when another asyncio event loop is running in the same thread. If debug is True, the event loop will be run in debug mode. This function always creates a new event loop and closes it at the end. It should be used as a main entry point for asyncio programs, and should ideally only be called once. Example:

async def main():
    await asyncio.sleep(1)
    print('hello')

asyncio.run(main())

New in version 3.7. Changed in version 3.9: Updated to use loop.shutdown_default_executor(). Note The source code for asyncio.run() can be found in Lib/asyncio/runners.py.
python.library.asyncio-task#asyncio.run
asyncio.run_coroutine_threadsafe(coro, loop) Submit a coroutine to the given event loop. Thread-safe. Return a concurrent.futures.Future to wait for the result from another OS thread. This function is meant to be called from a different OS thread than the one where the event loop is running. Example:

# Create a coroutine
coro = asyncio.sleep(1, result=3)

# Submit the coroutine to a given loop
future = asyncio.run_coroutine_threadsafe(coro, loop)

# Wait for the result with an optional timeout argument
assert future.result(timeout) == 3

If an exception is raised in the coroutine, the returned Future will be notified. It can also be used to cancel the task in the event loop:

try:
    result = future.result(timeout)
except asyncio.TimeoutError:
    print('The coroutine took too long, cancelling the task...')
    future.cancel()
except Exception as exc:
    print(f'The coroutine raised an exception: {exc!r}')
else:
    print(f'The coroutine returned: {result!r}')

See the concurrency and multithreading section of the documentation. Unlike other asyncio functions this function requires the loop argument to be passed explicitly. New in version 3.5.1.
python.library.asyncio-task#asyncio.run_coroutine_threadsafe
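A runnable sketch of the full pattern: one thread runs the loop, another submits a coroutine. The start_loop helper and the 0.1-second sleep are illustrative choices:

```python
import asyncio
import threading

def start_loop(loop):
    asyncio.set_event_loop(loop)
    loop.run_forever()

loop = asyncio.new_event_loop()
thread = threading.Thread(target=start_loop, args=(loop,), daemon=True)
thread.start()

# Called from the main thread; result() blocks this OS thread, not the loop.
future = asyncio.run_coroutine_threadsafe(asyncio.sleep(0.1, result=42), loop)
print(future.result(timeout=5))   # 42

loop.call_soon_threadsafe(loop.stop)
thread.join()
loop.close()
```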
class asyncio.SafeChildWatcher This implementation uses the active event loop from the main thread to handle the SIGCHLD signal. If the main thread has no running event loop another thread cannot spawn a subprocess (RuntimeError is raised). The watcher avoids disrupting other code spawning processes by polling every process explicitly on a SIGCHLD signal. This solution is as safe as MultiLoopChildWatcher and has the same O(N) complexity but requires a running event loop in the main thread to work.
python.library.asyncio-policy#asyncio.SafeChildWatcher
class asyncio.SelectorEventLoop An event loop based on the selectors module. Uses the most efficient selector available for the given platform. It is also possible to manually configure the exact selector implementation to be used:

import asyncio
import selectors

selector = selectors.SelectSelector()
loop = asyncio.SelectorEventLoop(selector)
asyncio.set_event_loop(loop)

Availability: Unix, Windows.
python.library.asyncio-eventloop#asyncio.SelectorEventLoop
class asyncio.Semaphore(value=1, *, loop=None) A Semaphore object. Not thread-safe. A semaphore manages an internal counter which is decremented by each acquire() call and incremented by each release() call. The counter can never go below zero; when acquire() finds that it is zero, it blocks, waiting until some task calls release(). The optional value argument gives the initial value for the internal counter (1 by default). If the given value is less than 0 a ValueError is raised. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. The preferred way to use a Semaphore is an async with statement:

sem = asyncio.Semaphore(10)

# ... later
async with sem:
    # work with shared resource

which is equivalent to:

sem = asyncio.Semaphore(10)

# ... later
await sem.acquire()
try:
    # work with shared resource
finally:
    sem.release()

coroutine acquire() Acquire a semaphore. If the internal counter is greater than zero, decrement it by one and return True immediately. If it is zero, wait until a release() is called and return True. locked() Return True if the semaphore cannot be acquired immediately. release() Release a semaphore, incrementing the internal counter by one. Can wake up a task waiting to acquire the semaphore. Unlike BoundedSemaphore, Semaphore allows making more release() calls than acquire() calls.
python.library.asyncio-sync#asyncio.Semaphore
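A sketch of using a Semaphore to cap concurrency; the active counter that records peak concurrency is instrumentation invented for illustration:

```python
import asyncio

async def fetch(sem, i, active):
    async with sem:                              # at most 2 tasks inside at once
        active[0] += 1
        active[1] = max(active[1], active[0])    # record peak concurrency
        await asyncio.sleep(0.01)                # stand-in for real work
        active[0] -= 1
        return i

async def main():
    sem = asyncio.Semaphore(2)
    active = [0, 0]                              # [current, peak]
    results = await asyncio.gather(*(fetch(sem, i, active) for i in range(6)))
    return results, active[1]

results, peak = asyncio.run(main())
print(results, peak)   # [0, 1, 2, 3, 4, 5] 2
```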
coroutine acquire() Acquire a semaphore. If the internal counter is greater than zero, decrement it by one and return True immediately. If it is zero, wait until a release() is called and return True.
python.library.asyncio-sync#asyncio.Semaphore.acquire
locked() Return True if the semaphore cannot be acquired immediately.
python.library.asyncio-sync#asyncio.Semaphore.locked
release() Release a semaphore, incrementing the internal counter by one. Can wake up a task waiting to acquire the semaphore. Unlike BoundedSemaphore, Semaphore allows making more release() calls than acquire() calls.
python.library.asyncio-sync#asyncio.Semaphore.release
exception asyncio.SendfileNotAvailableError The “sendfile” syscall is not available for the given socket or file type. A subclass of RuntimeError.
python.library.asyncio-exceptions#asyncio.SendfileNotAvailableError
class asyncio.Server Server objects are asynchronous context managers. When used in an async with statement, it’s guaranteed that the Server object is closed and not accepting new connections when the async with statement is completed:

srv = await loop.create_server(...)

async with srv:
    # some code

# At this point, srv is closed and no longer accepts new connections.

Changed in version 3.7: Server object is an asynchronous context manager since Python 3.7. close() Stop serving: close listening sockets and set the sockets attribute to None. The sockets that represent existing incoming client connections are left open. The server is closed asynchronously; use the wait_closed() coroutine to wait until the server is closed. get_loop() Return the event loop associated with the server object. New in version 3.7. coroutine start_serving() Start accepting connections. This method is idempotent, so it can be called when the server is already serving. The start_serving keyword-only parameter to loop.create_server() and asyncio.start_server() allows creating a Server object that is not accepting connections initially. In this case Server.start_serving(), or Server.serve_forever() can be used to make the Server start accepting connections. New in version 3.7. coroutine serve_forever() Start accepting connections until the coroutine is cancelled. Cancellation of the serve_forever task causes the server to be closed. This method can be called if the server is already accepting connections. Only one serve_forever task can exist per Server object. Example:

async def client_connected(reader, writer):
    # Communicate with the client with
    # reader/writer streams. For example:
    await reader.readline()

async def main(host, port):
    srv = await asyncio.start_server(
        client_connected, host, port)
    await srv.serve_forever()

asyncio.run(main('127.0.0.1', 0))

New in version 3.7. is_serving() Return True if the server is accepting new connections. New in version 3.7. coroutine wait_closed() Wait until the close() method completes. sockets List of socket.socket objects the server is listening on. Changed in version 3.7: Prior to Python 3.7 Server.sockets used to return an internal list of server sockets directly. In 3.7 a copy of that list is returned.
python.library.asyncio-eventloop#asyncio.Server
close() Stop serving: close listening sockets and set the sockets attribute to None. The sockets that represent existing incoming client connections are left open. The server is closed asynchronously; use the wait_closed() coroutine to wait until the server is closed.
python.library.asyncio-eventloop#asyncio.Server.close
get_loop() Return the event loop associated with the server object. New in version 3.7.
python.library.asyncio-eventloop#asyncio.Server.get_loop
is_serving() Return True if the server is accepting new connections. New in version 3.7.
python.library.asyncio-eventloop#asyncio.Server.is_serving
coroutine serve_forever() Start accepting connections until the coroutine is cancelled. Cancellation of the serve_forever task causes the server to be closed. This method can be called if the server is already accepting connections. Only one serve_forever task can exist per Server object. Example:

async def client_connected(reader, writer):
    # Communicate with the client with
    # reader/writer streams. For example:
    await reader.readline()

async def main(host, port):
    srv = await asyncio.start_server(
        client_connected, host, port)
    await srv.serve_forever()

asyncio.run(main('127.0.0.1', 0))

New in version 3.7.
python.library.asyncio-eventloop#asyncio.Server.serve_forever
sockets List of socket.socket objects the server is listening on. Changed in version 3.7: Prior to Python 3.7 Server.sockets used to return an internal list of server sockets directly. In 3.7 a copy of that list is returned.
python.library.asyncio-eventloop#asyncio.Server.sockets
coroutine start_serving() Start accepting connections. This method is idempotent, so it can be called when the server is already serving. The start_serving keyword-only parameter to loop.create_server() and asyncio.start_server() allows creating a Server object that is not accepting connections initially. In this case Server.start_serving(), or Server.serve_forever() can be used to make the Server start accepting connections. New in version 3.7.
python.library.asyncio-eventloop#asyncio.Server.start_serving
coroutine wait_closed() Wait until the close() method completes.
python.library.asyncio-eventloop#asyncio.Server.wait_closed
asyncio.set_child_watcher(watcher) Set the current child watcher to watcher for the current policy. watcher must implement methods defined in the AbstractChildWatcher base class.
python.library.asyncio-policy#asyncio.set_child_watcher
asyncio.set_event_loop(loop) Set loop as a current event loop for the current OS thread.
python.library.asyncio-eventloop#asyncio.set_event_loop
asyncio.set_event_loop_policy(policy) Set the current process-wide policy to policy. If policy is set to None, the default policy is restored.
python.library.asyncio-policy#asyncio.set_event_loop_policy
awaitable asyncio.shield(aw, *, loop=None) Protect an awaitable object from being cancelled. If aw is a coroutine it is automatically scheduled as a Task. The statement:

res = await shield(something())

is equivalent to:

res = await something()

except that if the coroutine containing it is cancelled, the Task running in something() is not cancelled. From the point of view of something(), the cancellation did not happen. Although its caller is still cancelled, so the “await” expression still raises a CancelledError. If something() is cancelled by other means (i.e. from within itself) that would also cancel shield(). If it is desired to completely ignore cancellation (not recommended) the shield() function should be combined with a try/except clause, as follows:

try:
    res = await shield(something())
except CancelledError:
    res = None

Deprecated since version 3.8, will be removed in version 3.10: The loop parameter.
python.library.asyncio-task#asyncio.shield
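A sketch of what shield() guarantees: cancelling the caller does not cancel the shielded task. The critical/caller names and the timings are invented for illustration:

```python
import asyncio

async def critical():
    await asyncio.sleep(0.05)
    return "committed"

async def caller(holder):
    inner = asyncio.create_task(critical())
    holder.append(inner)
    # shield(): cancelling *this* coroutine will not cancel `inner`.
    return await asyncio.shield(inner)

async def main():
    holder = []
    outer = asyncio.create_task(caller(holder))
    await asyncio.sleep(0.01)
    outer.cancel()                      # cancels caller(), not critical()
    try:
        await outer
    except asyncio.CancelledError:
        pass
    return await holder[0]              # the shielded task still completes

result = asyncio.run(main())
print(result)   # committed
```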
coroutine asyncio.sleep(delay, result=None, *, loop=None) Block for delay seconds. If result is provided, it is returned to the caller when the coroutine completes. sleep() always suspends the current task, allowing other tasks to run. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. Example of coroutine displaying the current date every second for 5 seconds:

import asyncio
import datetime

async def display_date():
    loop = asyncio.get_running_loop()
    end_time = loop.time() + 5.0
    while True:
        print(datetime.datetime.now())
        if (loop.time() + 1.0) >= end_time:
            break
        await asyncio.sleep(1)

asyncio.run(display_date())
python.library.asyncio-task#asyncio.sleep
coroutine asyncio.start_server(client_connected_cb, host=None, port=None, *, loop=None, limit=None, family=socket.AF_UNSPEC, flags=socket.AI_PASSIVE, sock=None, backlog=100, ssl=None, reuse_address=None, reuse_port=None, ssl_handshake_timeout=None, start_serving=True) Start a socket server. The client_connected_cb callback is called whenever a new client connection is established. It receives a (reader, writer) pair as two arguments, instances of the StreamReader and StreamWriter classes. client_connected_cb can be a plain callable or a coroutine function; if it is a coroutine function, it will be automatically scheduled as a Task. The loop argument is optional and can always be determined automatically when this method is awaited from a coroutine. limit determines the buffer size limit used by the returned StreamReader instance. By default the limit is set to 64 KiB. The rest of the arguments are passed directly to loop.create_server(). New in version 3.7: The ssl_handshake_timeout and start_serving parameters.
python.library.asyncio-stream#asyncio.start_server
coroutine asyncio.start_unix_server(client_connected_cb, path=None, *, loop=None, limit=None, sock=None, backlog=100, ssl=None, ssl_handshake_timeout=None, start_serving=True) Start a Unix socket server. Similar to start_server() but works with Unix sockets. See also the documentation of loop.create_unix_server(). Availability: Unix. New in version 3.7: The ssl_handshake_timeout and start_serving parameters. Changed in version 3.7: The path parameter can now be a path-like object.
python.library.asyncio-stream#asyncio.start_unix_server
class asyncio.StreamReader Represents a reader object that provides APIs to read data from the IO stream. It is not recommended to instantiate StreamReader objects directly; use open_connection() and start_server() instead. coroutine read(n=-1) Read up to n bytes. If n is not provided, or set to -1, read until EOF and return all read bytes. If EOF was received and the internal buffer is empty, return an empty bytes object. coroutine readline() Read one line, where “line” is a sequence of bytes ending with \n. If EOF is received and \n was not found, the method returns partially read data. If EOF is received and the internal buffer is empty, return an empty bytes object. coroutine readexactly(n) Read exactly n bytes. Raise an IncompleteReadError if EOF is reached before n can be read. Use the IncompleteReadError.partial attribute to get the partially read data. coroutine readuntil(separator=b'\n') Read data from the stream until separator is found. On success, the data and separator will be removed from the internal buffer (consumed). Returned data will include the separator at the end. If the amount of data read exceeds the configured stream limit, a LimitOverrunError exception is raised, and the data is left in the internal buffer and can be read again. If EOF is reached before the complete separator is found, an IncompleteReadError exception is raised, and the internal buffer is reset. The IncompleteReadError.partial attribute may contain a portion of the separator. New in version 3.5.2. at_eof() Return True if the buffer is empty and feed_eof() was called.
python.library.asyncio-stream#asyncio.StreamReader
at_eof() Return True if the buffer is empty and feed_eof() was called.
python.library.asyncio-stream#asyncio.StreamReader.at_eof
coroutine read(n=-1) Read up to n bytes. If n is not provided, or set to -1, read until EOF and return all read bytes. If EOF was received and the internal buffer is empty, return an empty bytes object.
python.library.asyncio-stream#asyncio.StreamReader.read
coroutine readexactly(n) Read exactly n bytes. Raise an IncompleteReadError if EOF is reached before n can be read. Use the IncompleteReadError.partial attribute to get the partially read data.
python.library.asyncio-stream#asyncio.StreamReader.readexactly
coroutine readline() Read one line, where “line” is a sequence of bytes ending with \n. If EOF is received and \n was not found, the method returns partially read data. If EOF is received and the internal buffer is empty, return an empty bytes object.
python.library.asyncio-stream#asyncio.StreamReader.readline
coroutine readuntil(separator=b'\n') Read data from the stream until separator is found. On success, the data and separator will be removed from the internal buffer (consumed). Returned data will include the separator at the end. If the amount of data read exceeds the configured stream limit, a LimitOverrunError exception is raised, and the data is left in the internal buffer and can be read again. If EOF is reached before the complete separator is found, an IncompleteReadError exception is raised, and the internal buffer is reset. The IncompleteReadError.partial attribute may contain a portion of the separator. New in version 3.5.2.
python.library.asyncio-stream#asyncio.StreamReader.readuntil
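A compact way to see read(), readline(), readexactly(), readuntil(), and at_eof() together is against a throwaway local TCP server; every name below is local to this sketch:

```python
import asyncio

async def serve(reader, writer):
    # One payload exercising each read method on the client side.
    writer.write(b"first line\nabcdEND;trailing")
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(serve, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    line = await reader.readline()       # separator b'\n' is included
    exact = await reader.readexactly(4)  # exactly 4 bytes
    upto = await reader.readuntil(b";")  # separator is included
    rest = await reader.read()           # remaining bytes up to EOF
    eof = reader.at_eof()                # True: buffer empty and EOF received
    writer.close()
    server.close()
    await server.wait_closed()
    return line, exact, upto, rest, eof

line, exact, upto, rest, eof = asyncio.run(main())
```

Note that readline() and readuntil() keep the separator in the returned data, while read() with no argument consumes everything up to EOF.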
class asyncio.StreamWriter Represents a writer object that provides APIs to write data to the IO stream. It is not recommended to instantiate StreamWriter objects directly; use open_connection() and start_server() instead. write(data) The method attempts to write the data to the underlying socket immediately. If that fails, the data is queued in an internal write buffer until it can be sent. The method should be used along with the drain() method:

stream.write(data)
await stream.drain()

writelines(data) The method writes a list (or any iterable) of bytes to the underlying socket immediately. If that fails, the data is queued in an internal write buffer until it can be sent. The method should be used along with the drain() method:

stream.writelines(lines)
await stream.drain()

close() The method closes the stream and the underlying socket. The method should be used along with the wait_closed() method:

stream.close()
await stream.wait_closed()

can_write_eof() Return True if the underlying transport supports the write_eof() method, False otherwise. write_eof() Close the write end of the stream after the buffered write data is flushed. transport Return the underlying asyncio transport. get_extra_info(name, default=None) Access optional transport information; see BaseTransport.get_extra_info() for details. coroutine drain() Wait until it is appropriate to resume writing to the stream. Example:

writer.write(data)
await writer.drain()

This is a flow control method that interacts with the underlying IO write buffer. When the size of the buffer reaches the high watermark, drain() blocks until the size of the buffer is drained down to the low watermark and writing can be resumed. When there is nothing to wait for, drain() returns immediately. is_closing() Return True if the stream is closed or in the process of being closed. New in version 3.7. coroutine wait_closed() Wait until the stream is closed. Should be called after close() to wait until the underlying connection is closed. New in version 3.7.
python.library.asyncio-stream#asyncio.StreamWriter
can_write_eof() Return True if the underlying transport supports the write_eof() method, False otherwise.
python.library.asyncio-stream#asyncio.StreamWriter.can_write_eof
close() The method closes the stream and the underlying socket. The method should be used along with the wait_closed() method:

stream.close()
await stream.wait_closed()
python.library.asyncio-stream#asyncio.StreamWriter.close
coroutine drain() Wait until it is appropriate to resume writing to the stream. Example:

writer.write(data)
await writer.drain()

This is a flow control method that interacts with the underlying IO write buffer. When the size of the buffer reaches the high watermark, drain() blocks until the size of the buffer is drained down to the low watermark and writing can be resumed. When there is nothing to wait for, drain() returns immediately.
python.library.asyncio-stream#asyncio.StreamWriter.drain
get_extra_info(name, default=None) Access optional transport information; see BaseTransport.get_extra_info() for details.
python.library.asyncio-stream#asyncio.StreamWriter.get_extra_info
is_closing() Return True if the stream is closed or in the process of being closed. New in version 3.7.
python.library.asyncio-stream#asyncio.StreamWriter.is_closing
transport Return the underlying asyncio transport.
python.library.asyncio-stream#asyncio.StreamWriter.transport
coroutine wait_closed() Wait until the stream is closed. Should be called after close() to wait until the underlying connection is closed. New in version 3.7.
python.library.asyncio-stream#asyncio.StreamWriter.wait_closed
write(data) The method attempts to write the data to the underlying socket immediately. If that fails, the data is queued in an internal write buffer until it can be sent. The method should be used along with the drain() method:

stream.write(data)
await stream.drain()
python.library.asyncio-stream#asyncio.StreamWriter.write
writelines(data) The method writes a list (or any iterable) of bytes to the underlying socket immediately. If that fails, the data is queued in an internal write buffer until it can be sent. The method should be used along with the drain() method:

stream.writelines(lines)
await stream.drain()
python.library.asyncio-stream#asyncio.StreamWriter.writelines
write_eof() Close the write end of the stream after the buffered write data is flushed.
python.library.asyncio-stream#asyncio.StreamWriter.write_eof
class asyncio.SubprocessProtocol(BaseProtocol) The base class for implementing protocols communicating with child processes (unidirectional pipes).
python.library.asyncio-protocol#asyncio.SubprocessProtocol
SubprocessProtocol.pipe_connection_lost(fd, exc) Called when one of the pipes communicating with the child process is closed. fd is the integer file descriptor that was closed.
python.library.asyncio-protocol#asyncio.SubprocessProtocol.pipe_connection_lost
SubprocessProtocol.pipe_data_received(fd, data) Called when the child process writes data into its stdout or stderr pipe. fd is the integer file descriptor of the pipe. data is a non-empty bytes object containing the received data.
python.library.asyncio-protocol#asyncio.SubprocessProtocol.pipe_data_received
SubprocessProtocol.process_exited() Called when the child process has exited.
python.library.asyncio-protocol#asyncio.SubprocessProtocol.process_exited
class asyncio.SubprocessTransport(BaseTransport) An abstraction to represent a connection between a parent and its child OS process. Instances of the SubprocessTransport class are returned from event loop methods loop.subprocess_shell() and loop.subprocess_exec().
python.library.asyncio-protocol#asyncio.SubprocessTransport
SubprocessTransport.close() Close the transport: if the subprocess hasn’t returned yet, kill it by calling the kill() method, and close the transports of the stdin, stdout, and stderr pipes.
python.library.asyncio-protocol#asyncio.SubprocessTransport.close
SubprocessTransport.get_pid() Return the subprocess process id as an integer.
python.library.asyncio-protocol#asyncio.SubprocessTransport.get_pid
SubprocessTransport.get_pipe_transport(fd) Return the transport for the communication pipe corresponding to the integer file descriptor fd:

0: readable streaming transport of the standard input (stdin), or None if the subprocess was not created with stdin=PIPE
1: writable streaming transport of the standard output (stdout), or None if the subprocess was not created with stdout=PIPE
2: writable streaming transport of the standard error (stderr), or None if the subprocess was not created with stderr=PIPE
other fd: None
python.library.asyncio-protocol#asyncio.SubprocessTransport.get_pipe_transport
SubprocessTransport.get_returncode() Return the subprocess return code as an integer or None if it hasn’t returned, which is similar to the subprocess.Popen.returncode attribute.
python.library.asyncio-protocol#asyncio.SubprocessTransport.get_returncode
SubprocessTransport.kill() Kill the subprocess. On POSIX systems, the function sends SIGKILL to the subprocess. On Windows, this method is an alias for terminate(). See also subprocess.Popen.kill().
python.library.asyncio-protocol#asyncio.SubprocessTransport.kill
SubprocessTransport.send_signal(signal) Send the signal number to the subprocess, as in subprocess.Popen.send_signal().
python.library.asyncio-protocol#asyncio.SubprocessTransport.send_signal
SubprocessTransport.terminate() Stop the subprocess. On POSIX systems, this method sends SIGTERM to the subprocess. On Windows, the Windows API function TerminateProcess() is called to stop the subprocess. See also subprocess.Popen.terminate().
python.library.asyncio-protocol#asyncio.SubprocessTransport.terminate
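A minimal SubprocessProtocol that buffers the child’s stdout and resolves a future from process_exited(), wired up through loop.subprocess_exec(); the class and future names are illustrative, not part of the API:

```python
import asyncio
import sys

class CollectProtocol(asyncio.SubprocessProtocol):
    """Illustrative protocol: buffer stdout, resolve a future on exit."""

    def __init__(self, exited):
        self.exited = exited
        self.chunks = []

    def pipe_data_received(self, fd, data):
        if fd == 1:                      # fd 1 is the stdout pipe
            self.chunks.append(data)

    def process_exited(self):
        self.exited.set_result(True)

async def main():
    loop = asyncio.get_running_loop()
    exited = loop.create_future()
    transport, protocol = await loop.subprocess_exec(
        lambda: CollectProtocol(exited),
        sys.executable, "-c", "print('hi')")
    await exited                          # process_exited() has fired
    returncode = transport.get_returncode()
    transport.close()
    return b"".join(protocol.chunks), returncode

out, rc = asyncio.run(main())
```

After the future resolves, get_returncode() reports the child’s exit status, and close() releases the pipe transports.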
class asyncio.Task(coro, *, loop=None, name=None) A Future-like object that runs a Python coroutine. Not thread-safe. Tasks are used to run coroutines in event loops. If a coroutine awaits on a Future, the Task suspends the execution of the coroutine and waits for the completion of the Future. When the Future is done, the execution of the wrapped coroutine resumes. Event loops use cooperative scheduling: an event loop runs one Task at a time. While a Task awaits the completion of a Future, the event loop runs other Tasks, callbacks, or performs IO operations. Use the high-level asyncio.create_task() function to create Tasks, or the low-level loop.create_task() or ensure_future() functions. Manual instantiation of Tasks is discouraged. To cancel a running Task use the cancel() method. Calling it will cause the Task to throw a CancelledError exception into the wrapped coroutine. If a coroutine is awaiting on a Future object during cancellation, the Future object will be cancelled. cancelled() can be used to check if the Task was cancelled. The method returns True if the wrapped coroutine did not suppress the CancelledError exception and was actually cancelled. asyncio.Task inherits from Future all of its APIs except Future.set_result() and Future.set_exception(). Tasks support the contextvars module. When a Task is created it copies the current context and later runs its coroutine in the copied context. Changed in version 3.7: Added support for the contextvars module. Changed in version 3.8: Added the name parameter. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. cancel(msg=None) Request the Task to be cancelled. This arranges for a CancelledError exception to be thrown into the wrapped coroutine on the next cycle of the event loop. The coroutine then has a chance to clean up or even deny the request by suppressing the exception with a try: … except CancelledError: … finally: block.
Therefore, unlike Future.cancel(), Task.cancel() does not guarantee that the Task will be cancelled, although suppressing cancellation completely is not common and is actively discouraged. Changed in version 3.9: Added the msg parameter. The following example illustrates how coroutines can intercept the cancellation request:

async def cancel_me():
    print('cancel_me(): before sleep')
    try:
        # Wait for 1 hour
        await asyncio.sleep(3600)
    except asyncio.CancelledError:
        print('cancel_me(): cancel sleep')
        raise
    finally:
        print('cancel_me(): after sleep')

async def main():
    # Create a "cancel_me" Task
    task = asyncio.create_task(cancel_me())

    # Wait for 1 second
    await asyncio.sleep(1)

    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        print("main(): cancel_me is cancelled now")

asyncio.run(main())

# Expected output:
#
#     cancel_me(): before sleep
#     cancel_me(): cancel sleep
#     cancel_me(): after sleep
#     main(): cancel_me is cancelled now

cancelled() Return True if the Task is cancelled. The Task is cancelled when the cancellation was requested with cancel() and the wrapped coroutine propagated the CancelledError exception thrown into it. done() Return True if the Task is done. A Task is done when the wrapped coroutine either returned a value, raised an exception, or the Task was cancelled. result() Return the result of the Task. If the Task is done, the result of the wrapped coroutine is returned (or, if the coroutine raised an exception, that exception is re-raised). If the Task has been cancelled, this method raises a CancelledError exception. If the Task’s result isn’t yet available, this method raises an InvalidStateError exception. exception() Return the exception of the Task. If the wrapped coroutine raised an exception, that exception is returned. If the wrapped coroutine returned normally, this method returns None. If the Task has been cancelled, this method raises a CancelledError exception.
If the Task isn’t done yet, this method raises an InvalidStateError exception. add_done_callback(callback, *, context=None) Add a callback to be run when the Task is done. This method should only be used in low-level callback-based code. See the documentation of Future.add_done_callback() for more details. remove_done_callback(callback) Remove callback from the callbacks list. This method should only be used in low-level callback-based code. See the documentation of Future.remove_done_callback() for more details. get_stack(*, limit=None) Return the list of stack frames for this Task. If the wrapped coroutine is not done, this returns the stack where it is suspended. If the coroutine has completed successfully or was cancelled, this returns an empty list. If the coroutine was terminated by an exception, this returns the list of traceback frames. The frames are always ordered from oldest to newest. Only one stack frame is returned for a suspended coroutine. The optional limit argument sets the maximum number of frames to return; by default all available frames are returned. The ordering of the returned list differs depending on whether a stack or a traceback is returned: the newest frames of a stack are returned, but the oldest frames of a traceback are returned. (This matches the behavior of the traceback module.) print_stack(*, limit=None, file=None) Print the stack or traceback for this Task. This produces output similar to that of the traceback module for the frames retrieved by get_stack(). The limit argument is passed to get_stack() directly. The file argument is an I/O stream to which the output is written; by default output is written to sys.stderr. get_coro() Return the coroutine object wrapped by the Task. New in version 3.8. get_name() Return the name of the Task. If no name has been explicitly assigned to the Task, the default asyncio Task implementation generates a default name during instantiation. New in version 3.8. 
set_name(value) Set the name of the Task. The value argument can be any object, which is then converted to a string. In the default Task implementation, the name will be visible in the repr() output of a task object. New in version 3.8.
python.library.asyncio-task#asyncio.Task
add_done_callback(callback, *, context=None) Add a callback to be run when the Task is done. This method should only be used in low-level callback-based code. See the documentation of Future.add_done_callback() for more details.
python.library.asyncio-task#asyncio.Task.add_done_callback
cancel(msg=None) Request the Task to be cancelled. This arranges for a CancelledError exception to be thrown into the wrapped coroutine on the next cycle of the event loop. The coroutine then has a chance to clean up or even deny the request by suppressing the exception with a try: … except CancelledError: … finally: block. Therefore, unlike Future.cancel(), Task.cancel() does not guarantee that the Task will be cancelled, although suppressing cancellation completely is not common and is actively discouraged. Changed in version 3.9: Added the msg parameter. The following example illustrates how coroutines can intercept the cancellation request:

async def cancel_me():
    print('cancel_me(): before sleep')
    try:
        # Wait for 1 hour
        await asyncio.sleep(3600)
    except asyncio.CancelledError:
        print('cancel_me(): cancel sleep')
        raise
    finally:
        print('cancel_me(): after sleep')

async def main():
    # Create a "cancel_me" Task
    task = asyncio.create_task(cancel_me())

    # Wait for 1 second
    await asyncio.sleep(1)

    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        print("main(): cancel_me is cancelled now")

asyncio.run(main())

# Expected output:
#
#     cancel_me(): before sleep
#     cancel_me(): cancel sleep
#     cancel_me(): after sleep
#     main(): cancel_me is cancelled now
python.library.asyncio-task#asyncio.Task.cancel
cancelled() Return True if the Task is cancelled. The Task is cancelled when the cancellation was requested with cancel() and the wrapped coroutine propagated the CancelledError exception thrown into it.
python.library.asyncio-task#asyncio.Task.cancelled
done() Return True if the Task is done. A Task is done when the wrapped coroutine either returned a value, raised an exception, or the Task was cancelled.
python.library.asyncio-task#asyncio.Task.done
exception() Return the exception of the Task. If the wrapped coroutine raised an exception, that exception is returned. If the wrapped coroutine returned normally, this method returns None. If the Task has been cancelled, this method raises a CancelledError exception. If the Task isn’t done yet, this method raises an InvalidStateError exception.
python.library.asyncio-task#asyncio.Task.exception
get_coro() Return the coroutine object wrapped by the Task. New in version 3.8.
python.library.asyncio-task#asyncio.Task.get_coro
get_name() Return the name of the Task. If no name has been explicitly assigned to the Task, the default asyncio Task implementation generates a default name during instantiation. New in version 3.8.
python.library.asyncio-task#asyncio.Task.get_name
get_stack(*, limit=None) Return the list of stack frames for this Task. If the wrapped coroutine is not done, this returns the stack where it is suspended. If the coroutine has completed successfully or was cancelled, this returns an empty list. If the coroutine was terminated by an exception, this returns the list of traceback frames. The frames are always ordered from oldest to newest. Only one stack frame is returned for a suspended coroutine. The optional limit argument sets the maximum number of frames to return; by default all available frames are returned. The ordering of the returned list differs depending on whether a stack or a traceback is returned: the newest frames of a stack are returned, but the oldest frames of a traceback are returned. (This matches the behavior of the traceback module.)
python.library.asyncio-task#asyncio.Task.get_stack
print_stack(*, limit=None, file=None) Print the stack or traceback for this Task. This produces output similar to that of the traceback module for the frames retrieved by get_stack(). The limit argument is passed to get_stack() directly. The file argument is an I/O stream to which the output is written; by default output is written to sys.stderr.
python.library.asyncio-task#asyncio.Task.print_stack
remove_done_callback(callback) Remove callback from the callbacks list. This method should only be used in low-level callback-based code. See the documentation of Future.remove_done_callback() for more details.
python.library.asyncio-task#asyncio.Task.remove_done_callback
result() Return the result of the Task. If the Task is done, the result of the wrapped coroutine is returned (or, if the coroutine raised an exception, that exception is re-raised). If the Task has been cancelled, this method raises a CancelledError exception. If the Task’s result isn’t yet available, this method raises an InvalidStateError exception.
python.library.asyncio-task#asyncio.Task.result
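The state-inspection methods above (done(), result(), exception(), cancelled(), plus get_name()/set_name()) can all be observed on one simple Task; the coroutine below is purely illustrative:

```python
import asyncio

async def work():
    await asyncio.sleep(0)
    return 42

async def main():
    task = asyncio.create_task(work(), name="worker")
    assert not task.done()               # scheduled, but not finished yet
    try:
        task.result()                    # result not yet available
    except asyncio.InvalidStateError:
        pass
    task.set_name("renamed")             # any object; converted to a string
    await task
    return (task.done(), task.result(), task.exception(),
            task.cancelled(), task.get_name())

done, result, exc, cancelled, name = asyncio.run(main())
```

Once the Task has completed normally, done() is True, result() returns the coroutine’s return value, exception() returns None, and cancelled() is False.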