notify(n=1) Wake up at most n tasks (1 by default) waiting on this condition. This method is a no-op if no tasks are waiting. The lock must be acquired before this method is called and released shortly after. If called on an unlocked lock, a RuntimeError is raised.
python.library.asyncio-sync#asyncio.Condition.notify
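A minimal sketch of the notify()/wait() handshake described above; the task names and the "ready" item are illustrative only:

```python
import asyncio

# One task waits on a Condition until a producer makes an item
# available; the producer holds the lock while calling notify().
async def main():
    cond = asyncio.Condition()
    items = []

    async def consumer():
        async with cond:           # acquire the underlying lock
            while not items:
                await cond.wait()  # releases the lock while waiting
            return items.pop()

    async def producer():
        await asyncio.sleep(0.01)
        async with cond:           # lock must be held when notifying
            items.append("ready")
            cond.notify()          # wake up at most one waiter

    got, _ = await asyncio.gather(consumer(), producer())
    return got

result = asyncio.run(main())
```

Note the `while not items` loop: waiters should always re-check their condition after waking, since notification does not guarantee the state still holds when the lock is re-acquired.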
notify_all() Wake up all tasks waiting on this condition. This method acts like notify(), but wakes up all waiting tasks. The lock must be acquired before this method is called and released shortly after. If called on an unlocked lock, a RuntimeError is raised.
python.library.asyncio-sync#asyncio.Condition.notify_all
release() Release the underlying lock. When invoked on an unlocked lock, a RuntimeError is raised.
python.library.asyncio-sync#asyncio.Condition.release
coroutine wait() Wait until notified. If the calling task has not acquired the lock when this method is called, a RuntimeError is raised. This method releases the underlying lock, and then blocks until it is awakened by a notify() or notify_all() call. Once awakened, the Condition re-acquires its lock and this method returns True.
python.library.asyncio-sync#asyncio.Condition.wait
coroutine wait_for(predicate) Wait until a predicate becomes true. The predicate must be a callable whose result will be interpreted as a boolean value. The final predicate value is the return value.
python.library.asyncio-sync#asyncio.Condition.wait_for
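A small sketch of wait_for() blocking on a predicate over shared state; the counter and threshold here are made-up example values:

```python
import asyncio

# wait_for() re-evaluates the predicate after every wakeup and
# returns the final (truthy) predicate value.
async def main():
    cond = asyncio.Condition()
    state = {"count": 0}

    async def waiter():
        async with cond:
            return await cond.wait_for(lambda: state["count"] >= 3)

    async def incrementer():
        for _ in range(3):
            async with cond:
                state["count"] += 1
                cond.notify_all()   # let the waiter re-check
            await asyncio.sleep(0)

    done, _ = await asyncio.gather(waiter(), incrementer())
    return done, state["count"]

flag, count = asyncio.run(main())
```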
@asyncio.coroutine Decorator to mark generator-based coroutines. This decorator enables legacy generator-based coroutines to be compatible with async/await code: @asyncio.coroutine def old_style_coroutine(): yield from asyncio.sleep(1) async def main(): await old_style_coroutine() This decorator should not be used for async def coroutines. Deprecated since version 3.8, will be removed in version 3.10: Use async def instead.
python.library.asyncio-task#asyncio.coroutine
coroutine asyncio.create_subprocess_exec(program, *args, stdin=None, stdout=None, stderr=None, loop=None, limit=None, **kwds) Create a subprocess. The limit argument sets the buffer limit for StreamReader wrappers for Process.stdout and Process.stderr (if subprocess.PIPE is passed to stdout and stderr arguments). Return a Process instance. See the documentation of loop.subprocess_exec() for other parameters. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter.
python.library.asyncio-subprocess#asyncio.create_subprocess_exec
coroutine asyncio.create_subprocess_shell(cmd, stdin=None, stdout=None, stderr=None, loop=None, limit=None, **kwds) Run the cmd shell command. The limit argument sets the buffer limit for StreamReader wrappers for Process.stdout and Process.stderr (if subprocess.PIPE is passed to stdout and stderr arguments). Return a Process instance. See the documentation of loop.subprocess_shell() for other parameters. Important It is the application’s responsibility to ensure that all whitespace and special characters are quoted appropriately to avoid shell injection vulnerabilities. The shlex.quote() function can be used to properly escape whitespace and special shell characters in strings that are going to be used to construct shell commands. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter.
python.library.asyncio-subprocess#asyncio.create_subprocess_shell
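A hedged sketch of the shlex.quote() advice above: the command is built from an untrusted string, and quoting keeps shell metacharacters inert. The input string is a made-up example:

```python
import asyncio
import shlex

# Build a shell command from untrusted input, quoting it with
# shlex.quote() to avoid shell injection.
async def run_echo(untrusted: str) -> str:
    cmd = "echo " + shlex.quote(untrusted)
    proc = await asyncio.create_subprocess_shell(
        cmd,
        stdout=asyncio.subprocess.PIPE,
    )
    stdout, _ = await proc.communicate()  # read Process.stdout fully
    return stdout.decode().strip()

# The ';' stays inert: quoting makes the whole string one literal
# argument to echo rather than a second shell command.
output = asyncio.run(run_echo("hello; echo injected"))
```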
asyncio.create_task(coro, *, name=None) Wrap the coro coroutine into a Task and schedule its execution. Return the Task object. If name is not None, it is set as the name of the task using Task.set_name(). The task is executed in the loop returned by get_running_loop(); RuntimeError is raised if there is no running loop in the current thread. This function was added in Python 3.7. Prior to Python 3.7, the low-level asyncio.ensure_future() function can be used instead: async def coro(): ... # In Python 3.7+ task = asyncio.create_task(coro()) ... # This works in all Python versions but is less readable task = asyncio.ensure_future(coro()) ... New in version 3.7. Changed in version 3.8: Added the name parameter.
python.library.asyncio-task#asyncio.create_task
asyncio.current_task(loop=None) Return the currently running Task instance, or None if no task is running. If loop is None, get_running_loop() is used to get the current loop. New in version 3.7.
python.library.asyncio-task#asyncio.current_task
class asyncio.DatagramProtocol(BaseProtocol) The base class for implementing datagram (UDP) protocols.
python.library.asyncio-protocol#asyncio.DatagramProtocol
DatagramProtocol.datagram_received(data, addr) Called when a datagram is received. data is a bytes object containing the incoming data. addr is the address of the peer sending the data; the exact format depends on the transport.
python.library.asyncio-protocol#asyncio.DatagramProtocol.datagram_received
DatagramProtocol.error_received(exc) Called when a previous send or receive operation raises an OSError. exc is the OSError instance. This method is called in rare conditions, when the transport (e.g. UDP) detects that a datagram could not be delivered to its recipient. In many conditions though, undeliverable datagrams will be silently dropped.
python.library.asyncio-protocol#asyncio.DatagramProtocol.error_received
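An illustrative DatagramProtocol subclass implementing the two callbacks above. To keep the sketch self-contained, datagram_received() is invoked directly rather than through a real create_datagram_endpoint() transport:

```python
import asyncio

# Collects incoming datagrams; class and attribute names are
# illustrative, not part of the asyncio API.
class EchoCollector(asyncio.DatagramProtocol):
    def __init__(self):
        self.received = []
        self.last_error = None

    def datagram_received(self, data, addr):
        # data is a bytes object; addr's format depends on the transport
        self.received.append((data, addr))

    def error_received(self, exc):
        # Called in rare OSError conditions, e.g. an ICMP
        # "port unreachable" reply to a previous send
        self.last_error = exc

proto = EchoCollector()
proto.datagram_received(b"ping", ("127.0.0.1", 9999))
```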
class asyncio.DatagramTransport(BaseTransport) A transport for datagram (UDP) connections. Instances of the DatagramTransport class are returned from the loop.create_datagram_endpoint() event loop method.
python.library.asyncio-protocol#asyncio.DatagramTransport
DatagramTransport.abort() Close the transport immediately, without waiting for pending operations to complete. Buffered data will be lost. No more data will be received. The protocol’s protocol.connection_lost() method will eventually be called with None as its argument.
python.library.asyncio-protocol#asyncio.DatagramTransport.abort
DatagramTransport.sendto(data, addr=None) Send the data bytes to the remote peer given by addr (a transport-dependent target address). If addr is None, the data is sent to the target address given on transport creation. This method does not block; it buffers the data and arranges for it to be sent out asynchronously.
python.library.asyncio-protocol#asyncio.DatagramTransport.sendto
class asyncio.DefaultEventLoopPolicy The default asyncio policy. Uses SelectorEventLoop on Unix and ProactorEventLoop on Windows. There is no need to install the default policy manually. asyncio is configured to use the default policy automatically. Changed in version 3.8: On Windows, ProactorEventLoop is now used by default.
python.library.asyncio-policy#asyncio.DefaultEventLoopPolicy
asyncio.ensure_future(obj, *, loop=None) Return: obj argument as is, if obj is a Future, a Task, or a Future-like object (isfuture() is used for the test); a Task object wrapping obj, if obj is a coroutine (iscoroutine() is used for the test); in this case the coroutine will be scheduled by ensure_future(); a Task object that would await on obj, if obj is an awaitable (inspect.isawaitable() is used for the test). If obj is none of the above, a TypeError is raised. Important See also the create_task() function which is the preferred way for creating new Tasks. Changed in version 3.5.1: The function accepts any awaitable object.
python.library.asyncio-future#asyncio.ensure_future
class asyncio.Event(*, loop=None) An event object. Not thread-safe. An asyncio event can be used to notify multiple asyncio tasks that some event has happened. An Event object manages an internal flag that can be set to true with the set() method and reset to false with the clear() method. The wait() method blocks until the flag is set to true. The flag is set to false initially. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. Example: async def waiter(event): print('waiting for it ...') await event.wait() print('... got it!') async def main(): # Create an Event object. event = asyncio.Event() # Spawn a Task to wait until 'event' is set. waiter_task = asyncio.create_task(waiter(event)) # Sleep for 1 second and set the event. await asyncio.sleep(1) event.set() # Wait until the waiter task is finished. await waiter_task asyncio.run(main()) coroutine wait() Wait until the event is set. If the event is set, return True immediately. Otherwise block until another task calls set(). set() Set the event. All tasks waiting for event to be set will be immediately awakened. clear() Clear (unset) the event. Tasks awaiting on wait() will now block until the set() method is called again. is_set() Return True if the event is set.
python.library.asyncio-sync#asyncio.Event
clear() Clear (unset) the event. Tasks awaiting on wait() will now block until the set() method is called again.
python.library.asyncio-sync#asyncio.Event.clear
is_set() Return True if the event is set.
python.library.asyncio-sync#asyncio.Event.is_set
set() Set the event. All tasks waiting for event to be set will be immediately awakened.
python.library.asyncio-sync#asyncio.Event.set
coroutine wait() Wait until the event is set. If the event is set, return True immediately. Otherwise block until another task calls set().
python.library.asyncio-sync#asyncio.Event.wait
class asyncio.FastChildWatcher This implementation reaps every terminated process by calling os.waitpid(-1) directly, possibly breaking other code spawning processes and waiting for their termination. There is no noticeable overhead when handling a large number of children (O(1) each time a child terminates). This solution requires a running event loop in the main thread to work, like SafeChildWatcher.
python.library.asyncio-policy#asyncio.FastChildWatcher
class asyncio.Future(*, loop=None) A Future represents an eventual result of an asynchronous operation. Not thread-safe. Future is an awaitable object. Coroutines can await on Future objects until they either have a result or an exception set, or until they are cancelled. Typically Futures are used to enable low-level callback-based code (e.g. in protocols implemented using asyncio transports) to interoperate with high-level async/await code. The rule of thumb is to never expose Future objects in user-facing APIs, and the recommended way to create a Future object is to call loop.create_future(). This way alternative event loop implementations can inject their own optimized implementations of a Future object. Changed in version 3.7: Added support for the contextvars module. result() Return the result of the Future. If the Future is done and has a result set by the set_result() method, the result value is returned. If the Future is done and has an exception set by the set_exception() method, this method raises the exception. If the Future has been cancelled, this method raises a CancelledError exception. If the Future’s result isn’t yet available, this method raises an InvalidStateError exception. set_result(result) Mark the Future as done and set its result. Raises an InvalidStateError if the Future is already done. set_exception(exception) Mark the Future as done and set an exception. Raises an InvalidStateError if the Future is already done. done() Return True if the Future is done. A Future is done if it was cancelled or if it has a result or an exception set with set_result() or set_exception() calls. cancelled() Return True if the Future was cancelled. The method is usually used to check if a Future is not cancelled before setting a result or an exception for it: if not fut.cancelled(): fut.set_result(42) add_done_callback(callback, *, context=None) Add a callback to be run when the Future is done.
The callback is called with the Future object as its only argument. If the Future is already done when this method is called, the callback is scheduled with loop.call_soon(). An optional keyword-only context argument allows specifying a custom contextvars.Context for the callback to run in. The current context is used when no context is provided. functools.partial() can be used to pass parameters to the callback, e.g.: # Call 'print("Future:", fut)' when "fut" is done. fut.add_done_callback( functools.partial(print, "Future:")) Changed in version 3.7: The context keyword-only parameter was added. See PEP 567 for more details. remove_done_callback(callback) Remove callback from the callbacks list. Returns the number of callbacks removed, which is typically 1, unless a callback was added more than once. cancel(msg=None) Cancel the Future and schedule callbacks. If the Future is already done or cancelled, return False. Otherwise, change the Future’s state to cancelled, schedule the callbacks, and return True. Changed in version 3.9: Added the msg parameter. exception() Return the exception that was set on this Future. The exception (or None if no exception was set) is returned only if the Future is done. If the Future has been cancelled, this method raises a CancelledError exception. If the Future isn’t done yet, this method raises an InvalidStateError exception. get_loop() Return the event loop the Future object is bound to. New in version 3.7.
python.library.asyncio-future#asyncio.Future
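A short sketch of the Future lifecycle described above: create one via loop.create_future(), attach a done callback, set a result from a plain callback, then inspect state. The value 42 and the variable names are illustrative:

```python
import asyncio

results = []  # filled by the done callback

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()             # recommended constructor
    fut.add_done_callback(lambda f: results.append(f.result()))

    loop.call_soon(fut.set_result, 42)     # mark done from a callback
    value = await fut                      # awaiting yields the result
    return value, fut.done(), fut.cancelled()

value, done, cancelled = asyncio.run(main())
```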
add_done_callback(callback, *, context=None) Add a callback to be run when the Future is done. The callback is called with the Future object as its only argument. If the Future is already done when this method is called, the callback is scheduled with loop.call_soon(). An optional keyword-only context argument allows specifying a custom contextvars.Context for the callback to run in. The current context is used when no context is provided. functools.partial() can be used to pass parameters to the callback, e.g.: # Call 'print("Future:", fut)' when "fut" is done. fut.add_done_callback( functools.partial(print, "Future:")) Changed in version 3.7: The context keyword-only parameter was added. See PEP 567 for more details.
python.library.asyncio-future#asyncio.Future.add_done_callback
cancel(msg=None) Cancel the Future and schedule callbacks. If the Future is already done or cancelled, return False. Otherwise, change the Future’s state to cancelled, schedule the callbacks, and return True. Changed in version 3.9: Added the msg parameter.
python.library.asyncio-future#asyncio.Future.cancel
cancelled() Return True if the Future was cancelled. The method is usually used to check if a Future is not cancelled before setting a result or an exception for it: if not fut.cancelled(): fut.set_result(42)
python.library.asyncio-future#asyncio.Future.cancelled
done() Return True if the Future is done. A Future is done if it was cancelled or if it has a result or an exception set with set_result() or set_exception() calls.
python.library.asyncio-future#asyncio.Future.done
exception() Return the exception that was set on this Future. The exception (or None if no exception was set) is returned only if the Future is done. If the Future has been cancelled, this method raises a CancelledError exception. If the Future isn’t done yet, this method raises an InvalidStateError exception.
python.library.asyncio-future#asyncio.Future.exception
get_loop() Return the event loop the Future object is bound to. New in version 3.7.
python.library.asyncio-future#asyncio.Future.get_loop
remove_done_callback(callback) Remove callback from the callbacks list. Returns the number of callbacks removed, which is typically 1, unless a callback was added more than once.
python.library.asyncio-future#asyncio.Future.remove_done_callback
result() Return the result of the Future. If the Future is done and has a result set by the set_result() method, the result value is returned. If the Future is done and has an exception set by the set_exception() method, this method raises the exception. If the Future has been cancelled, this method raises a CancelledError exception. If the Future’s result isn’t yet available, this method raises an InvalidStateError exception.
python.library.asyncio-future#asyncio.Future.result
set_exception(exception) Mark the Future as done and set an exception. Raises an InvalidStateError if the Future is already done.
python.library.asyncio-future#asyncio.Future.set_exception
set_result(result) Mark the Future as done and set its result. Raises an InvalidStateError if the Future is already done.
python.library.asyncio-future#asyncio.Future.set_result
awaitable asyncio.gather(*aws, loop=None, return_exceptions=False) Run awaitable objects in the aws sequence concurrently. If any awaitable in aws is a coroutine, it is automatically scheduled as a Task. If all awaitables are completed successfully, the result is an aggregate list of returned values. The order of result values corresponds to the order of awaitables in aws. If return_exceptions is False (default), the first raised exception is immediately propagated to the task that awaits on gather(). Other awaitables in the aws sequence won’t be cancelled and will continue to run. If return_exceptions is True, exceptions are treated the same as successful results, and aggregated in the result list. If gather() is cancelled, all submitted awaitables (that have not completed yet) are also cancelled. If any Task or Future from the aws sequence is cancelled, it is treated as if it raised CancelledError – the gather() call is not cancelled in this case. This is to prevent the cancellation of one submitted Task/Future from causing other Tasks/Futures to be cancelled. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. Example: import asyncio async def factorial(name, number): f = 1 for i in range(2, number + 1): print(f"Task {name}: Compute factorial({i})...") await asyncio.sleep(1) f *= i print(f"Task {name}: factorial({number}) = {f}") async def main(): # Schedule three calls *concurrently*: await asyncio.gather( factorial("A", 2), factorial("B", 3), factorial("C", 4), ) asyncio.run(main()) # Expected output: # # Task A: Compute factorial(2)... # Task B: Compute factorial(2)... # Task C: Compute factorial(2)... # Task A: factorial(2) = 2 # Task B: Compute factorial(3)... # Task C: Compute factorial(3)... # Task B: factorial(3) = 6 # Task C: Compute factorial(4)... # Task C: factorial(4) = 24 Note If return_exceptions is False, cancelling gather() after it has been marked done won’t cancel any submitted awaitables.
For instance, gather can be marked done after propagating an exception to the caller; therefore, calling gather.cancel() after catching an exception (raised by one of the awaitables) from gather won’t cancel any other awaitables. Changed in version 3.7: If the gather itself is cancelled, the cancellation is propagated regardless of return_exceptions.
python.library.asyncio-task#asyncio.gather
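The entry's example covers the success path; a small sketch of return_exceptions=True, where a failure is returned in the result list (in input order) instead of propagating, may also help. The coroutine names are illustrative:

```python
import asyncio

async def ok():
    return "ok"

async def boom():
    raise ValueError("boom")

async def main():
    # With return_exceptions=True, the ValueError is aggregated
    # into the result list rather than raised at the await site.
    return await asyncio.gather(ok(), boom(), return_exceptions=True)

res = asyncio.run(main())
```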
asyncio.get_child_watcher() Return the current child watcher for the current policy.
python.library.asyncio-policy#asyncio.get_child_watcher
asyncio.get_event_loop() Get the current event loop. If there is no current event loop set in the current OS thread, the OS thread is main, and set_event_loop() has not yet been called, asyncio will create a new event loop and set it as the current one. Because this function has rather complex behavior (especially when custom event loop policies are in use), using the get_running_loop() function is preferred to get_event_loop() in coroutines and callbacks. Consider also using the asyncio.run() function instead of using lower level functions to manually create and close an event loop.
python.library.asyncio-eventloop#asyncio.get_event_loop
asyncio.get_event_loop_policy() Return the current process-wide policy.
python.library.asyncio-policy#asyncio.get_event_loop_policy
asyncio.get_running_loop() Return the running event loop in the current OS thread. If there is no running event loop a RuntimeError is raised. This function can only be called from a coroutine or a callback. New in version 3.7.
python.library.asyncio-eventloop#asyncio.get_running_loop
class asyncio.Handle A callback wrapper object returned by loop.call_soon(), loop.call_soon_threadsafe(). cancel() Cancel the callback. If the callback has already been canceled or executed, this method has no effect. cancelled() Return True if the callback was cancelled. New in version 3.7.
python.library.asyncio-eventloop#asyncio.Handle
cancel() Cancel the callback. If the callback has already been canceled or executed, this method has no effect.
python.library.asyncio-eventloop#asyncio.Handle.cancel
cancelled() Return True if the callback was cancelled. New in version 3.7.
python.library.asyncio-eventloop#asyncio.Handle.cancelled
exception asyncio.IncompleteReadError The requested read operation did not complete fully. Raised by the asyncio stream APIs. This exception is a subclass of EOFError. expected The total number (int) of expected bytes. partial A string of bytes read before the end of stream was reached.
python.library.asyncio-exceptions#asyncio.IncompleteReadError
expected The total number (int) of expected bytes.
python.library.asyncio-exceptions#asyncio.IncompleteReadError.expected
partial A string of bytes read before the end of stream was reached.
python.library.asyncio-exceptions#asyncio.IncompleteReadError.partial
exception asyncio.InvalidStateError Invalid internal state of Task or Future. Can be raised in situations like setting a result value for a Future object that already has a result value set.
python.library.asyncio-exceptions#asyncio.InvalidStateError
asyncio.iscoroutine(obj) Return True if obj is a coroutine object. This function is different from inspect.iscoroutine() because it returns True for generator-based coroutines.
python.library.asyncio-task#asyncio.iscoroutine
asyncio.iscoroutinefunction(func) Return True if func is a coroutine function. This function is different from inspect.iscoroutinefunction() because it returns True for generator-based coroutine functions decorated with @coroutine.
python.library.asyncio-task#asyncio.iscoroutinefunction
asyncio.isfuture(obj) Return True if obj is either of: an instance of asyncio.Future, an instance of asyncio.Task, a Future-like object with a _asyncio_future_blocking attribute. New in version 3.5.
python.library.asyncio-future#asyncio.isfuture
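A sketch contrasting isfuture() with ensure_future(): a bare coroutine object is not Future-like, but wrapping it in a Task makes the test pass. Names are illustrative:

```python
import asyncio

async def work():
    return 1

async def main():
    coro = work()
    before = asyncio.isfuture(coro)     # plain coroutine: False
    task = asyncio.ensure_future(coro)  # wraps it into a Task
    after = asyncio.isfuture(task)      # Task is Future-like: True
    await task                          # let the wrapped coroutine run
    return before, after

before, after = asyncio.run(main())
```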
class asyncio.LifoQueue A variant of Queue that retrieves most recently added entries first (last in, first out).
python.library.asyncio-queue#asyncio.LifoQueue
exception asyncio.LimitOverrunError Reached the buffer size limit while looking for a separator. Raised by the asyncio stream APIs. consumed The total number of bytes to be consumed.
python.library.asyncio-exceptions#asyncio.LimitOverrunError
consumed The total number of bytes to be consumed.
python.library.asyncio-exceptions#asyncio.LimitOverrunError.consumed
class asyncio.Lock(*, loop=None) Implements a mutex lock for asyncio tasks. Not thread-safe. An asyncio lock can be used to guarantee exclusive access to a shared resource. The preferred way to use a Lock is an async with statement: lock = asyncio.Lock() # ... later async with lock: # access shared state which is equivalent to: lock = asyncio.Lock() # ... later await lock.acquire() try: # access shared state finally: lock.release() Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. coroutine acquire() Acquire the lock. This method waits until the lock is unlocked, sets it to locked and returns True. When more than one coroutine is blocked in acquire() waiting for the lock to be unlocked, only one coroutine eventually proceeds. Acquiring a lock is fair: the coroutine that proceeds will be the first coroutine that started waiting on the lock. release() Release the lock. When the lock is locked, reset it to unlocked and return. If the lock is unlocked, a RuntimeError is raised. locked() Return True if the lock is locked.
python.library.asyncio-sync#asyncio.Lock
coroutine acquire() Acquire the lock. This method waits until the lock is unlocked, sets it to locked and returns True. When more than one coroutine is blocked in acquire() waiting for the lock to be unlocked, only one coroutine eventually proceeds. Acquiring a lock is fair: the coroutine that proceeds will be the first coroutine that started waiting on the lock.
python.library.asyncio-sync#asyncio.Lock.acquire
locked() Return True if the lock is locked.
python.library.asyncio-sync#asyncio.Lock.locked
release() Release the lock. When the lock is locked, reset it to unlocked and return. If the lock is unlocked, a RuntimeError is raised.
python.library.asyncio-sync#asyncio.Lock.release
loop.add_reader(fd, callback, *args) Start monitoring the fd file descriptor for read availability and invoke callback with the specified arguments once fd is available for reading.
python.library.asyncio-eventloop#asyncio.loop.add_reader
loop.add_signal_handler(signum, callback, *args) Set callback as the handler for the signum signal. The callback will be invoked by loop, along with other queued callbacks and runnable coroutines of that event loop. Unlike signal handlers registered using signal.signal(), a callback registered with this function is allowed to interact with the event loop. Raise ValueError if the signal number is invalid or uncatchable. Raise RuntimeError if there is a problem setting up the handler. Use functools.partial() to pass keyword arguments to callback. Like signal.signal(), this function must be invoked in the main thread.
python.library.asyncio-eventloop#asyncio.loop.add_signal_handler
loop.add_writer(fd, callback, *args) Start monitoring the fd file descriptor for write availability and invoke callback with the specified arguments once fd is available for writing. Use functools.partial() to pass keyword arguments to callback.
python.library.asyncio-eventloop#asyncio.loop.add_writer
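A hedged sketch of fd monitoring with add_reader(), assuming a selector-based event loop (the default on Unix; ProactorEventLoop on Windows does not support these methods). A socketpair stands in for a real connection:

```python
import asyncio
import socket

# Watch one end of a socketpair and deliver data through a Future
# once the fd becomes readable.
async def main():
    loop = asyncio.get_running_loop()
    rsock, wsock = socket.socketpair()
    rsock.setblocking(False)
    fut = loop.create_future()

    def on_readable():
        loop.remove_reader(rsock)        # stop monitoring the fd
        fut.set_result(rsock.recv(1024))

    loop.add_reader(rsock, on_readable)
    wsock.send(b"hello")                 # makes rsock readable
    data = await fut
    rsock.close()
    wsock.close()
    return data

data = asyncio.run(main())
```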
loop.call_at(when, callback, *args, context=None) Schedule callback to be called at the given absolute timestamp when (an int or a float), using the same time reference as loop.time(). This method’s behavior is the same as call_later(). An instance of asyncio.TimerHandle is returned which can be used to cancel the callback. Changed in version 3.7: The context keyword-only parameter was added. See PEP 567 for more details. Changed in version 3.8: In Python 3.7 and earlier with the default event loop implementation, the difference between when and the current time could not exceed one day. This has been fixed in Python 3.8.
python.library.asyncio-eventloop#asyncio.loop.call_at
loop.call_exception_handler(context) Call the current event loop exception handler. context is a dict object containing the following keys (new keys may be introduced in future Python versions): ‘message’: Error message; ‘exception’ (optional): Exception object; ‘future’ (optional): asyncio.Future instance; ‘handle’ (optional): asyncio.Handle instance; ‘protocol’ (optional): Protocol instance; ‘transport’ (optional): Transport instance; ‘socket’ (optional): socket.socket instance. Note This method should not be overloaded in subclassed event loops. For custom exception handling, use the set_exception_handler() method.
python.library.asyncio-eventloop#asyncio.loop.call_exception_handler
loop.call_later(delay, callback, *args, context=None) Schedule callback to be called after the given delay number of seconds (can be either an int or a float). An instance of asyncio.TimerHandle is returned which can be used to cancel the callback. callback will be called exactly once. If two callbacks are scheduled for exactly the same time, the order in which they are called is undefined. The optional positional args will be passed to the callback when it is called. If you want the callback to be called with keyword arguments use functools.partial(). An optional keyword-only context argument allows specifying a custom contextvars.Context for the callback to run in. The current context is used when no context is provided. Changed in version 3.7: The context keyword-only parameter was added. See PEP 567 for more details. Changed in version 3.8: In Python 3.7 and earlier with the default event loop implementation, the delay could not exceed one day. This has been fixed in Python 3.8.
python.library.asyncio-eventloop#asyncio.loop.call_later
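A small sketch of call_later(): one callback is kept, and a second is cancelled through its returned TimerHandle before its delay elapses. The delays and labels are arbitrary example values:

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fired = []
    done = loop.create_future()

    # Fires after 10 ms, records a label, and unblocks main().
    loop.call_later(0.01, lambda: (fired.append("kept"),
                                   done.set_result(None)))

    # Scheduled earlier (5 ms) but cancelled via its TimerHandle,
    # so it never runs.
    handle = loop.call_later(0.005, fired.append, "cancelled")
    handle.cancel()

    await done
    return fired

fired = asyncio.run(main())
```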
loop.call_soon(callback, *args, context=None) Schedule the callback callback to be called with args arguments at the next iteration of the event loop. Callbacks are called in the order in which they are registered. Each callback will be called exactly once. An optional keyword-only context argument allows specifying a custom contextvars.Context for the callback to run in. The current context is used when no context is provided. An instance of asyncio.Handle is returned, which can be used later to cancel the callback. This method is not thread-safe.
python.library.asyncio-eventloop#asyncio.loop.call_soon
loop.call_soon_threadsafe(callback, *args, context=None) A thread-safe variant of call_soon(). Must be used to schedule callbacks from another thread. See the concurrency and multithreading section of the documentation.
python.library.asyncio-eventloop#asyncio.loop.call_soon_threadsafe
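A sketch of handing a result from a plain thread back to the event loop; call_soon_threadsafe() is required here because call_soon() itself must not be called from another thread:

```python
import asyncio
import threading

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()

    def worker():
        # Runs in another thread; must not touch the loop directly,
        # so the result is delivered via call_soon_threadsafe().
        loop.call_soon_threadsafe(fut.set_result, "from-thread")

    t = threading.Thread(target=worker)
    t.start()
    result = await fut
    t.join()
    return result

msg = asyncio.run(main())
```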
loop.close() Close the event loop. The loop must not be running when this function is called. Any pending callbacks will be discarded. This method clears all queues and shuts down the executor, but does not wait for the executor to finish. This method is idempotent and irreversible. No other methods should be called after the event loop is closed.
python.library.asyncio-eventloop#asyncio.loop.close
coroutine loop.connect_accepted_socket(protocol_factory, sock, *, ssl=None, ssl_handshake_timeout=None) Wrap an already accepted connection into a transport/protocol pair. This method can be used by servers that accept connections outside of asyncio but that use asyncio to handle them. Parameters: protocol_factory must be a callable returning a protocol implementation. sock is a preexisting socket object returned from socket.accept. ssl can be set to an SSLContext to enable SSL over the accepted connections. ssl_handshake_timeout is (for an SSL connection) the time in seconds to wait for the SSL handshake to complete before aborting the connection. 60.0 seconds if None (default). Returns a (transport, protocol) pair. New in version 3.7: The ssl_handshake_timeout parameter. New in version 3.5.3.
python.library.asyncio-eventloop#asyncio.loop.connect_accepted_socket
coroutine loop.connect_read_pipe(protocol_factory, pipe) Register the read end of pipe in the event loop. protocol_factory must be a callable returning an asyncio protocol implementation. pipe is a file-like object. Return a (transport, protocol) pair, where transport supports the ReadTransport interface and protocol is an object instantiated by the protocol_factory. With SelectorEventLoop event loop, the pipe is set to non-blocking mode.
python.library.asyncio-eventloop#asyncio.loop.connect_read_pipe
coroutine loop.connect_write_pipe(protocol_factory, pipe) Register the write end of pipe in the event loop. protocol_factory must be a callable returning an asyncio protocol implementation. pipe is a file-like object. Return a (transport, protocol) pair, where transport supports the WriteTransport interface and protocol is an object instantiated by the protocol_factory. With SelectorEventLoop event loop, the pipe is set to non-blocking mode.
python.library.asyncio-eventloop#asyncio.loop.connect_write_pipe
coroutine loop.create_connection(protocol_factory, host=None, port=None, *, ssl=None, family=0, proto=0, flags=0, sock=None, local_addr=None, server_hostname=None, ssl_handshake_timeout=None, happy_eyeballs_delay=None, interleave=None) Open a streaming transport connection to a given address specified by host and port. The socket family can be either AF_INET or AF_INET6 depending on host (or the family argument, if provided). The socket type will be SOCK_STREAM. protocol_factory must be a callable returning an asyncio protocol implementation. This method will try to establish the connection in the background. When successful, it returns a (transport, protocol) pair. The chronological synopsis of the underlying operation is as follows: The connection is established and a transport is created for it. protocol_factory is called without arguments and is expected to return a protocol instance. The protocol instance is coupled with the transport by calling its connection_made() method. A (transport, protocol) tuple is returned on success. The created transport is an implementation-dependent bidirectional stream. Other arguments: ssl: if given and not false, an SSL/TLS transport is created (by default a plain TCP transport is created). If ssl is an ssl.SSLContext object, this context is used to create the transport; if ssl is True, a default context returned from ssl.create_default_context() is used. See also SSL/TLS security considerations server_hostname sets or overrides the hostname that the target server’s certificate will be matched against. Should only be passed if ssl is not None. By default the value of the host argument is used. If host is empty, there is no default and you must pass a value for server_hostname. If server_hostname is an empty string, hostname matching is disabled (which is a serious security risk, allowing for potential man-in-the-middle attacks).
family, proto, flags are the optional address family, protocol and flags to be passed through to getaddrinfo() for host resolution. If given, these should all be integers from the corresponding socket module constants. happy_eyeballs_delay, if given, enables Happy Eyeballs for this connection. It should be a floating-point number representing the amount of time in seconds to wait for a connection attempt to complete, before starting the next attempt in parallel. This is the “Connection Attempt Delay” as defined in RFC 8305. A sensible default value recommended by the RFC is 0.25 (250 milliseconds). interleave controls address reordering when a host name resolves to multiple IP addresses. If 0 or unspecified, no reordering is done, and addresses are tried in the order returned by getaddrinfo(). If a positive integer is specified, the addresses are interleaved by address family, and the given integer is interpreted as “First Address Family Count” as defined in RFC 8305. The default is 0 if happy_eyeballs_delay is not specified, and 1 if it is. sock, if given, should be an existing, already connected socket.socket object to be used by the transport. If sock is given, none of host, port, family, proto, flags, happy_eyeballs_delay, interleave and local_addr should be specified. local_addr, if given, is a (local_host, local_port) tuple used to bind the socket to locally. The local_host and local_port are looked up using getaddrinfo(), similarly to host and port. ssl_handshake_timeout is (for a TLS connection) the time in seconds to wait for the TLS handshake to complete before aborting the connection. 60.0 seconds if None (default). New in version 3.8: Added the happy_eyeballs_delay and interleave parameters. Happy Eyeballs Algorithm: Success with Dual-Stack Hosts. 
When a server’s IPv4 path and protocol are working, but the server’s IPv6 path and protocol are not working, a dual-stack client application experiences significant connection delay compared to an IPv4-only client. This is undesirable because it causes the dual-stack client to have a worse user experience. This document specifies requirements for algorithms that reduce this user-visible delay and provides an algorithm. For more information: https://tools.ietf.org/html/rfc6555 New in version 3.7: The ssl_handshake_timeout parameter. Changed in version 3.6: The socket option TCP_NODELAY is set by default for all TCP connections. Changed in version 3.5: Added support for SSL/TLS in ProactorEventLoop. See also The open_connection() function is a high-level alternative API. It returns a pair of (StreamReader, StreamWriter) that can be used directly in async/await code.
python.library.asyncio-eventloop#asyncio.loop.create_connection
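A minimal sketch of the (transport, protocol) flow described above: a throwaway echo server is started on an ephemeral port (port 0) so the client has something to connect to, and loop.create_connection() couples a client protocol with the new transport. The EchoServerProtocol and EchoClientProtocol helper names are illustrative, not part of asyncio.

```python
import asyncio

class EchoServerProtocol(asyncio.Protocol):
    # Illustrative helper: echoes back whatever it receives.
    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        self.transport.write(data)

class EchoClientProtocol(asyncio.Protocol):
    # Illustrative helper: sends one message, resolves a Future on reply.
    def __init__(self, message, on_reply):
        self.message = message
        self.on_reply = on_reply

    def connection_made(self, transport):
        # connection_made() is the coupling step from the synopsis above.
        transport.write(self.message)

    def data_received(self, data):
        if not self.on_reply.done():
            self.on_reply.set_result(data)

async def main():
    loop = asyncio.get_running_loop()
    # Throwaway echo server on an ephemeral port.
    server = await loop.create_server(EchoServerProtocol, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    on_reply = loop.create_future()
    transport, protocol = await loop.create_connection(
        lambda: EchoClientProtocol(b"hello", on_reply), "127.0.0.1", port)
    reply = await on_reply

    transport.close()
    server.close()
    await server.wait_closed()
    return reply

reply = asyncio.run(main())
```

Note that protocol_factory is called with no arguments, which is why the client message is bound in via a lambda.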
coroutine loop.create_datagram_endpoint(protocol_factory, local_addr=None, remote_addr=None, *, family=0, proto=0, flags=0, reuse_address=None, reuse_port=None, allow_broadcast=None, sock=None) Note The parameter reuse_address is no longer supported, as using SO_REUSEADDR poses a significant security concern for UDP. Explicitly passing reuse_address=True will raise an exception. When multiple processes with differing UIDs assign sockets to an identical UDP socket address with SO_REUSEADDR, incoming packets can become randomly distributed among the sockets. For supported platforms, reuse_port can be used as a replacement for similar functionality. With reuse_port, SO_REUSEPORT is used instead, which specifically prevents processes with differing UIDs from assigning sockets to the same socket address. Create a datagram connection. The socket family can be either AF_INET, AF_INET6, or AF_UNIX, depending on host (or the family argument, if provided). The socket type will be SOCK_DGRAM. protocol_factory must be a callable returning a protocol implementation. A tuple of (transport, protocol) is returned on success. Other arguments: local_addr, if given, is a (local_host, local_port) tuple used to bind the socket to locally. The local_host and local_port are looked up using getaddrinfo(). remote_addr, if given, is a (remote_host, remote_port) tuple used to connect the socket to a remote address. The remote_host and remote_port are looked up using getaddrinfo(). family, proto, flags are the optional address family, protocol and flags to be passed through to getaddrinfo() for host resolution. If given, these should all be integers from the corresponding socket module constants. reuse_port tells the kernel to allow this endpoint to be bound to the same port as other existing endpoints are bound to, so long as they all set this flag when being created. This option is not supported on Windows and some Unixes. 
If the SO_REUSEPORT constant is not defined then this capability is unsupported. allow_broadcast tells the kernel to allow this endpoint to send messages to the broadcast address. sock can optionally be specified in order to use a preexisting, already connected socket.socket object to be used by the transport. If specified, local_addr and remote_addr should be omitted (must be None). See UDP echo client protocol and UDP echo server protocol examples. Changed in version 3.4.4: The family, proto, flags, reuse_address, reuse_port, allow_broadcast, and sock parameters were added. Changed in version 3.8.1: The reuse_address parameter is no longer supported due to security concerns. Changed in version 3.8: Added support for Windows.
python.library.asyncio-eventloop#asyncio.loop.create_datagram_endpoint
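A minimal sketch of local_addr versus remote_addr: the server endpoint binds locally with local_addr, the client endpoint connects with remote_addr, and each side implements asyncio.DatagramProtocol. The UdpEchoServer and UdpEchoClient names are illustrative.

```python
import asyncio

class UdpEchoServer(asyncio.DatagramProtocol):
    # Illustrative helper: echoes each datagram back to its sender.
    def connection_made(self, transport):
        self.transport = transport

    def datagram_received(self, data, addr):
        self.transport.sendto(data, addr)

class UdpEchoClient(asyncio.DatagramProtocol):
    def __init__(self, message, on_reply):
        self.message = message
        self.on_reply = on_reply

    def connection_made(self, transport):
        # remote_addr was given, so sendto() needs no explicit address.
        transport.sendto(self.message)

    def datagram_received(self, data, addr):
        if not self.on_reply.done():
            self.on_reply.set_result(data)

async def main():
    loop = asyncio.get_running_loop()
    server_transport, _ = await loop.create_datagram_endpoint(
        UdpEchoServer, local_addr=("127.0.0.1", 0))
    port = server_transport.get_extra_info("sockname")[1]

    on_reply = loop.create_future()
    client_transport, _ = await loop.create_datagram_endpoint(
        lambda: UdpEchoClient(b"ping", on_reply),
        remote_addr=("127.0.0.1", port))
    data = await on_reply

    client_transport.close()
    server_transport.close()
    return data

data = asyncio.run(main())
```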
loop.create_future() Create an asyncio.Future object attached to the event loop. This is the preferred way to create Futures in asyncio. This lets third-party event loops provide alternative implementations of the Future object (with better performance or instrumentation). New in version 3.5.2.
python.library.asyncio-eventloop#asyncio.loop.create_future
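A minimal sketch: a future created with loop.create_future() can be completed from a plain callback, and awaiting it suspends the caller until set_result() fires.

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # A plain callback completes the future; awaiting it suspends
    # this coroutine until set_result() is called.
    loop.call_soon(fut.set_result, 42)
    return await fut

value = asyncio.run(main())
```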
coroutine loop.create_server(protocol_factory, host=None, port=None, *, family=socket.AF_UNSPEC, flags=socket.AI_PASSIVE, sock=None, backlog=100, ssl=None, reuse_address=None, reuse_port=None, ssl_handshake_timeout=None, start_serving=True) Create a TCP server (socket type SOCK_STREAM) listening on port of the host address. Returns a Server object. Arguments: protocol_factory must be a callable returning a protocol implementation. The host parameter can be set to several types which determine where the server would be listening: If host is a string, the TCP server is bound to a single network interface specified by host. If host is a sequence of strings, the TCP server is bound to all network interfaces specified by the sequence. If host is an empty string or None, all interfaces are assumed and a list of multiple sockets will be returned (most likely one for IPv4 and another one for IPv6). family can be set to either socket.AF_INET or socket.AF_INET6 to force the socket to use IPv4 or IPv6. If not set, the family will be determined from the host name (defaults to AF_UNSPEC). flags is a bitmask for getaddrinfo(). sock can optionally be specified in order to use a preexisting socket object. If specified, host and port must not be specified. backlog is the maximum number of queued connections passed to listen() (defaults to 100). ssl can be set to an SSLContext instance to enable TLS over the accepted connections. reuse_address tells the kernel to reuse a local socket in TIME_WAIT state, without waiting for its natural timeout to expire. If not specified, it will automatically be set to True on Unix. reuse_port tells the kernel to allow this endpoint to be bound to the same port as other existing endpoints are bound to, so long as they all set this flag when being created. This option is not supported on Windows. ssl_handshake_timeout is (for a TLS server) the time in seconds to wait for the TLS handshake to complete before aborting the connection. 60.0 seconds if None (default).
start_serving set to True (the default) causes the created server to start accepting connections immediately. When set to False, the user should await on Server.start_serving() or Server.serve_forever() to make the server start accepting connections. New in version 3.7: Added the ssl_handshake_timeout and start_serving parameters. Changed in version 3.6: The socket option TCP_NODELAY is set by default for all TCP connections. Changed in version 3.5: Added support for SSL/TLS in ProactorEventLoop. Changed in version 3.5.1: The host parameter can be a sequence of strings. See also The start_server() function is a higher-level alternative API that returns a pair of StreamReader and StreamWriter that can be used in async/await code.
python.library.asyncio-eventloop#asyncio.loop.create_server
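A minimal sketch of start_serving=False using the higher-level wrappers mentioned above: asyncio.start_server() (which builds on loop.create_server()) creates the server without accepting connections, and Server.start_serving() is awaited explicitly before a client connects. The upper-casing handler is illustrative.

```python
import asyncio

async def handle(reader, writer):
    # Illustrative handler: upper-case whatever the client sends.
    data = await reader.read(100)
    writer.write(data.upper())
    await writer.drain()
    writer.close()

async def main():
    # start_serving=False defers accepting connections until
    # Server.start_serving() is awaited.
    server = await asyncio.start_server(
        handle, "127.0.0.1", 0, start_serving=False)
    await server.start_serving()
    # Port 0 asks the OS for an ephemeral port; read it back.
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"hello")
    await writer.drain()
    reply = await reader.read(100)

    writer.close()
    server.close()
    await server.wait_closed()
    return reply

reply = asyncio.run(main())
```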
loop.create_task(coro, *, name=None) Schedule the execution of a coroutine. Return a Task object. Third-party event loops can use their own subclass of Task for interoperability. In this case, the result type is a subclass of Task. If the name argument is provided and not None, it is set as the name of the task using Task.set_name(). Changed in version 3.8: Added the name parameter.
python.library.asyncio-eventloop#asyncio.loop.create_task
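A minimal sketch of loop.create_task() with the name parameter (which, as noted above, assumes Python 3.8+):

```python
import asyncio

async def double(x):
    await asyncio.sleep(0)
    return x * 2

async def main():
    loop = asyncio.get_running_loop()
    # name= requires Python 3.8+; it is applied via Task.set_name().
    task = loop.create_task(double(21), name="doubler")
    assert task.get_name() == "doubler"
    return await task

value = asyncio.run(main())
```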
coroutine loop.create_unix_connection(protocol_factory, path=None, *, ssl=None, sock=None, server_hostname=None, ssl_handshake_timeout=None) Create a Unix connection. The socket family will be AF_UNIX; socket type will be SOCK_STREAM. A tuple of (transport, protocol) is returned on success. path is the name of a Unix domain socket and is required, unless a sock parameter is specified. Abstract Unix sockets, str, bytes, and Path paths are supported. See the documentation of the loop.create_connection() method for information about arguments to this method. Availability: Unix. New in version 3.7: The ssl_handshake_timeout parameter. Changed in version 3.7: The path parameter can now be a path-like object.
python.library.asyncio-eventloop#asyncio.loop.create_unix_connection
coroutine loop.create_unix_server(protocol_factory, path=None, *, sock=None, backlog=100, ssl=None, ssl_handshake_timeout=None, start_serving=True) Similar to loop.create_server() but works with the AF_UNIX socket family. path is the name of a Unix domain socket, and is required, unless a sock argument is provided. Abstract Unix sockets, str, bytes, and Path paths are supported. See the documentation of the loop.create_server() method for information about arguments to this method. Availability: Unix. New in version 3.7: The ssl_handshake_timeout and start_serving parameters. Changed in version 3.7: The path parameter can now be a Path object.
python.library.asyncio-eventloop#asyncio.loop.create_unix_server
loop.default_exception_handler(context) Default exception handler. This is called when an exception occurs and no exception handler is set. This can be called by a custom exception handler that wants to defer to the default handler behavior. context parameter has the same meaning as in call_exception_handler().
python.library.asyncio-eventloop#asyncio.loop.default_exception_handler
coroutine loop.getaddrinfo(host, port, *, family=0, type=0, proto=0, flags=0) Asynchronous version of socket.getaddrinfo().
python.library.asyncio-eventloop#asyncio.loop.getaddrinfo
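A minimal sketch: the result has the same shape as socket.getaddrinfo(), a list of (family, type, proto, canonname, sockaddr) 5-tuples. A numeric host is used here so the example does not depend on a working DNS resolver.

```python
import asyncio
import socket

async def main():
    loop = asyncio.get_running_loop()
    # Same 5-tuples as socket.getaddrinfo():
    # (family, type, proto, canonname, sockaddr).
    return await loop.getaddrinfo(
        "127.0.0.1", 8080, family=socket.AF_INET, type=socket.SOCK_STREAM)

infos = asyncio.run(main())
```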
coroutine loop.getnameinfo(sockaddr, flags=0) Asynchronous version of socket.getnameinfo().
python.library.asyncio-eventloop#asyncio.loop.getnameinfo
loop.get_debug() Get the debug mode (bool) of the event loop. The default value is True if the environment variable PYTHONASYNCIODEBUG is set to a non-empty string, False otherwise.
python.library.asyncio-eventloop#asyncio.loop.get_debug
loop.get_exception_handler() Return the current exception handler, or None if no custom exception handler was set. New in version 3.5.2.
python.library.asyncio-eventloop#asyncio.loop.get_exception_handler
loop.get_task_factory() Return a task factory or None if the default one is in use.
python.library.asyncio-eventloop#asyncio.loop.get_task_factory
loop.is_closed() Return True if the event loop was closed.
python.library.asyncio-eventloop#asyncio.loop.is_closed
loop.is_running() Return True if the event loop is currently running.
python.library.asyncio-eventloop#asyncio.loop.is_running
loop.remove_reader(fd) Stop monitoring the fd file descriptor for read availability.
python.library.asyncio-eventloop#asyncio.loop.remove_reader
loop.remove_signal_handler(sig) Remove the handler for the sig signal. Return True if the signal handler was removed, or False if no handler was set for the given signal. Availability: Unix.
python.library.asyncio-eventloop#asyncio.loop.remove_signal_handler
loop.remove_writer(fd) Stop monitoring the fd file descriptor for write availability.
python.library.asyncio-eventloop#asyncio.loop.remove_writer
loop.run_forever() Run the event loop until stop() is called. If stop() is called before run_forever() is called, the loop will poll the I/O selector once with a timeout of zero, run all callbacks scheduled in response to I/O events (and those that were already scheduled), and then exit. If stop() is called while run_forever() is running, the loop will run the current batch of callbacks and then exit. Note that new callbacks scheduled by callbacks will not run in this case; instead, they will run the next time run_forever() or run_until_complete() is called.
python.library.asyncio-eventloop#asyncio.loop.run_forever
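A minimal sketch of the stop() semantics described above: both callbacks are scheduled before run_forever() is called, so the loop runs that batch (the append, then stop()) and exits.

```python
import asyncio

loop = asyncio.new_event_loop()
ran = []
# Both callbacks land in the first batch: the append runs,
# then stop() makes run_forever() return after that batch.
loop.call_soon(ran.append, "callback")
loop.call_soon(loop.stop)
loop.run_forever()
loop.close()
```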
awaitable loop.run_in_executor(executor, func, *args) Arrange for func to be called in the specified executor. The executor argument should be a concurrent.futures.Executor instance. The default executor is used if executor is None. Example:

import asyncio
import concurrent.futures

def blocking_io():
    # File operations (such as logging) can block the
    # event loop: run them in a thread pool.
    with open('/dev/urandom', 'rb') as f:
        return f.read(100)

def cpu_bound():
    # CPU-bound operations will block the event loop:
    # in general it is preferable to run them in a
    # process pool.
    return sum(i * i for i in range(10 ** 7))

async def main():
    loop = asyncio.get_running_loop()

    ## Options:

    # 1. Run in the default loop's executor:
    result = await loop.run_in_executor(
        None, blocking_io)
    print('default thread pool', result)

    # 2. Run in a custom thread pool:
    with concurrent.futures.ThreadPoolExecutor() as pool:
        result = await loop.run_in_executor(
            pool, blocking_io)
        print('custom thread pool', result)

    # 3. Run in a custom process pool:
    with concurrent.futures.ProcessPoolExecutor() as pool:
        result = await loop.run_in_executor(
            pool, cpu_bound)
        print('custom process pool', result)

asyncio.run(main())

This method returns an asyncio.Future object. Use functools.partial() to pass keyword arguments to func. Changed in version 3.5.3: loop.run_in_executor() no longer configures the max_workers of the thread pool executor it creates, instead leaving it up to the thread pool executor (ThreadPoolExecutor) to set the default.
python.library.asyncio-eventloop#asyncio.loop.run_in_executor
loop.run_until_complete(future) Run until the future (an instance of Future) has completed. If the argument is a coroutine object it is implicitly scheduled to run as a asyncio.Task. Return the Future’s result or raise its exception.
python.library.asyncio-eventloop#asyncio.loop.run_until_complete
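A minimal sketch: a coroutine object passed to run_until_complete() is implicitly wrapped in a Task, and its return value becomes the method's return value.

```python
import asyncio

async def compute():
    await asyncio.sleep(0)   # the coroutine is run as a Task
    return "done"

loop = asyncio.new_event_loop()
try:
    result = loop.run_until_complete(compute())
finally:
    loop.close()
```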
coroutine loop.sendfile(transport, file, offset=0, count=None, *, fallback=True) Send a file over a transport. Return the total number of bytes sent. The method uses high-performance os.sendfile() if available. file must be a regular file object opened in binary mode. offset tells from where to start reading the file. If specified, count is the total number of bytes to transmit, as opposed to sending the file until EOF is reached. File position is always updated, even when this method raises an error, and file.tell() can be used to obtain the actual number of bytes sent. fallback set to True makes asyncio manually read and send the file when the platform does not support the sendfile system call (e.g. Windows or an SSL socket on Unix). Raise SendfileNotAvailableError if the system does not support the sendfile syscall and fallback is False. New in version 3.7.
python.library.asyncio-eventloop#asyncio.loop.sendfile
loop.set_debug(enabled: bool) Set the debug mode of the event loop. Changed in version 3.7: The new Python Development Mode can now also be used to enable the debug mode.
python.library.asyncio-eventloop#asyncio.loop.set_debug
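A minimal sketch showing that an explicit set_debug() call overrides the PYTHONASYNCIODEBUG-derived default reported by get_debug():

```python
import asyncio

loop = asyncio.new_event_loop()
# An explicit set_debug() overrides the PYTHONASYNCIODEBUG default.
loop.set_debug(True)
debug_on = loop.get_debug()
loop.set_debug(False)
debug_off = loop.get_debug()
loop.close()
```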
loop.set_default_executor(executor) Set executor as the default executor used by run_in_executor(). executor should be an instance of ThreadPoolExecutor. Deprecated since version 3.8: Using an executor that is not an instance of ThreadPoolExecutor is deprecated and will trigger an error in Python 3.9. executor must be an instance of concurrent.futures.ThreadPoolExecutor.
python.library.asyncio-eventloop#asyncio.loop.set_default_executor
loop.set_exception_handler(handler) Set handler as the new event loop exception handler. If handler is None, the default exception handler will be set. Otherwise, handler must be a callable with the signature matching (loop, context), where loop is a reference to the active event loop, and context is a dict object containing the details of the exception (see call_exception_handler() documentation for details about context).
python.library.asyncio-eventloop#asyncio.loop.set_exception_handler
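A minimal sketch of the (loop, context) handler signature: an exception raised inside a plain callback is not propagated to the awaiting code but routed to the installed handler, with the exception object available under the context dict's "exception" key.

```python
import asyncio

captured = []

def handler(loop, context):
    # context always carries a "message" key; "exception" is present
    # when an exception object is available.
    captured.append(context.get("exception"))

def boom():
    raise RuntimeError("boom")

async def main():
    loop = asyncio.get_running_loop()
    loop.set_exception_handler(handler)
    loop.call_soon(boom)        # the callback's exception is routed
    await asyncio.sleep(0)      # to the handler, not raised here
    return loop.get_exception_handler() is handler

installed = asyncio.run(main())
```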
loop.set_task_factory(factory) Set a task factory that will be used by loop.create_task(). If factory is None the default task factory will be set. Otherwise, factory must be a callable with the signature matching (loop, coro), where loop is a reference to the active event loop, and coro is a coroutine object. The callable must return an asyncio.Future-compatible object.
python.library.asyncio-eventloop#asyncio.loop.set_task_factory
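A minimal sketch of the (loop, coro) factory signature: this factory mirrors the default behavior by returning an asyncio.Task, but also records each coroutine's name, which is one plausible instrumentation use.

```python
import asyncio

created = []

def factory(loop, coro):
    # Mirrors the default factory, but records each coroutine's name.
    created.append(coro.__name__)
    return asyncio.Task(coro, loop=loop)

async def work():
    return "ok"

async def main():
    loop = asyncio.get_running_loop()
    loop.set_task_factory(factory)
    assert loop.get_task_factory() is factory
    result = await loop.create_task(work())
    loop.set_task_factory(None)    # restore the default factory
    return result

result = asyncio.run(main())
```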
coroutine loop.shutdown_asyncgens() Schedule all currently open asynchronous generator objects to close with an aclose() call. After calling this method, the event loop will issue a warning if a new asynchronous generator is iterated. This should be used to reliably finalize all scheduled asynchronous generators. Note that there is no need to call this function when asyncio.run() is used. Example:

try:
    loop.run_forever()
finally:
    loop.run_until_complete(loop.shutdown_asyncgens())
    loop.close()

New in version 3.6.
python.library.asyncio-eventloop#asyncio.loop.shutdown_asyncgens
coroutine loop.shutdown_default_executor() Schedule the closure of the default executor and wait for it to join all of the threads in the ThreadPoolExecutor. After calling this method, a RuntimeError will be raised if loop.run_in_executor() is called while using the default executor. Note that there is no need to call this function when asyncio.run() is used. New in version 3.9.
python.library.asyncio-eventloop#asyncio.loop.shutdown_default_executor
coroutine loop.sock_accept(sock) Accept a connection. Modeled after the blocking socket.accept() method. The socket must be bound to an address and listening for connections. The return value is a pair (conn, address) where conn is a new socket object usable to send and receive data on the connection, and address is the address bound to the socket on the other end of the connection. sock must be a non-blocking socket. Changed in version 3.7: Even though the method was always documented as a coroutine method, before Python 3.7 it returned a Future. Since Python 3.7, this is an async def method. See also loop.create_server() and start_server().
python.library.asyncio-eventloop#asyncio.loop.sock_accept
coroutine loop.sock_connect(sock, address) Connect sock to a remote socket at address. Asynchronous version of socket.connect(). sock must be a non-blocking socket. Changed in version 3.5.2: address no longer needs to be resolved. sock_connect will try to check if the address is already resolved by calling socket.inet_pton(). If not, loop.getaddrinfo() will be used to resolve the address. See also loop.create_connection() and asyncio.open_connection().
python.library.asyncio-eventloop#asyncio.loop.sock_connect
coroutine loop.sock_recv(sock, nbytes) Receive up to nbytes from sock. Asynchronous version of socket.recv(). Return the received data as a bytes object. sock must be a non-blocking socket. Changed in version 3.7: Even though this method was always documented as a coroutine method, releases before Python 3.7 returned a Future. Since Python 3.7 this is an async def method.
python.library.asyncio-eventloop#asyncio.loop.sock_recv
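A minimal sketch tying the three sock_*() entries above together over a loopback TCP connection: a listening socket is set non-blocking (required by all the sock_*() methods), sock_connect() and sock_accept() establish the connection, and sock_recv() reads the reply. The send side uses loop.sock_sendall(), the asynchronous counterpart of socket.sendall(), which is not documented in this section.

```python
import asyncio
import socket

async def main():
    loop = asyncio.get_running_loop()

    # Listening socket on an ephemeral port; every socket handed to
    # the sock_*() methods must be non-blocking.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen()
    srv.setblocking(False)

    cli = socket.socket()
    cli.setblocking(False)
    await loop.sock_connect(cli, srv.getsockname())
    conn, _peer = await loop.sock_accept(srv)

    # sock_sendall(): asynchronous counterpart of socket.sendall().
    await loop.sock_sendall(conn, b"hi")
    data = await loop.sock_recv(cli, 1024)

    for s in (conn, cli, srv):
        s.close()
    return data

data = asyncio.run(main())
```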