set_name(value)
Set the name of the Task. The value argument can be any object, which is then converted to a string. In the default Task implementation, the name will be visible in the repr() output of a task object. New in version 3.8. | python.library.asyncio-task#asyncio.Task.set_name |
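A short usage sketch of set_name() (the name "fetch-user" and the work() coroutine are arbitrary; requires Python 3.8+):

```python
import asyncio

async def work():
    await asyncio.sleep(0)

async def main():
    task = asyncio.create_task(work())
    # Any object is accepted; it is converted to a string.
    task.set_name("fetch-user")
    name = task.get_name()
    await task
    return name

result = asyncio.run(main())
print(result)
```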
class asyncio.ThreadedChildWatcher
This implementation starts a new waiting thread for every subprocess spawn. It works reliably even when the asyncio event loop is run in a non-main OS thread. There is no noticeable overhead when handling a big number of children (O(1) each time a child terminates), but starting a thread per process requires extra memory. This watcher is used by default. New in version 3.8. | python.library.asyncio-policy#asyncio.ThreadedChildWatcher |
exception asyncio.TimeoutError
The operation has exceeded the given deadline. Important This exception is different from the builtin TimeoutError exception. | python.library.asyncio-exceptions#asyncio.TimeoutError |
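A brief sketch of catching this exception, here produced by asyncio.wait_for() (the timings are arbitrary; note that at this documented version asyncio.TimeoutError is distinct from the builtin TimeoutError, although Python 3.11 later made them aliases):

```python
import asyncio

async def main():
    try:
        # The sleep will never finish within the 0.01 s deadline.
        await asyncio.wait_for(asyncio.sleep(10), timeout=0.01)
    except asyncio.TimeoutError:
        return "timed out"
    return "completed"

result = asyncio.run(main())
print(result)
```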
class asyncio.TimerHandle
A callback wrapper object returned by loop.call_later(), and loop.call_at(). This class is a subclass of Handle.
when()
Return a scheduled callback time as float seconds. The time is an absolute timestamp, using the same time reference as loop.time(). New in version 3.7. | python.library.asyncio-eventloop#asyncio.TimerHandle |
when()
Return a scheduled callback time as float seconds. The time is an absolute timestamp, using the same time reference as loop.time(). New in version 3.7. | python.library.asyncio-eventloop#asyncio.TimerHandle.when |
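A sketch of the relationship between when() and loop.time() (the 5-second delay and the print callback are arbitrary):

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    # call_later() returns a TimerHandle scheduled 5 seconds from now.
    handle = loop.call_later(5, print, "fired")
    # when() is an absolute timestamp on the loop.time() clock,
    # so subtracting loop.time() recovers the remaining delay.
    remaining = handle.when() - loop.time()
    handle.cancel()  # don't actually wait for the callback
    return remaining

result = asyncio.run(main())
print(round(result, 1))
```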
coroutine asyncio.to_thread(func, /, *args, **kwargs)
Asynchronously run function func in a separate thread. Any *args and **kwargs supplied for this function are directly passed to func. Also, the current contextvars.Context is propagated, allowing context variables from the event loop thread to be accessed in the separate thread. Return a coroutine that can be awaited to get the eventual result of func. This coroutine function is primarily intended to be used for executing IO-bound functions/methods that would otherwise block the event loop if they were run in the main thread. For example:

import asyncio
import time

def blocking_io():
    print(f"start blocking_io at {time.strftime('%X')}")
    # Note that time.sleep() can be replaced with any blocking
    # IO-bound operation, such as file operations.
    time.sleep(1)
    print(f"blocking_io complete at {time.strftime('%X')}")

async def main():
    print(f"started main at {time.strftime('%X')}")

    await asyncio.gather(
        asyncio.to_thread(blocking_io),
        asyncio.sleep(1))

    print(f"finished main at {time.strftime('%X')}")

asyncio.run(main())

# Expected output:
#
# started main at 19:50:53
# start blocking_io at 19:50:53
# blocking_io complete at 19:50:54
# finished main at 19:50:54
Directly calling blocking_io() in any coroutine would block the event loop for its duration, resulting in an additional 1 second of run time. Instead, by using asyncio.to_thread(), we can run it in a separate thread without blocking the event loop. Note Due to the GIL, asyncio.to_thread() can typically only be used to make IO-bound functions non-blocking. However, for extension modules that release the GIL or alternative Python implementations that don’t have one, asyncio.to_thread() can also be used for CPU-bound functions. New in version 3.9. | python.library.asyncio-task#asyncio.to_thread |
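The description above also notes that the current contextvars.Context is propagated into the worker thread. A minimal sketch of that behavior (the request_id variable and blocking_work function are illustrative, not part of the API):

```python
import asyncio
import contextvars

# Hypothetical context variable, for illustration only.
request_id = contextvars.ContextVar("request_id")

def blocking_work():
    # Runs in a worker thread, but sees the context that was
    # active in the event loop thread when to_thread() was called.
    return request_id.get()

async def main():
    request_id.set("req-42")
    return await asyncio.to_thread(blocking_work)

result = asyncio.run(main())
print(result)
```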
class asyncio.Transport(WriteTransport, ReadTransport)
Interface representing a bidirectional transport, such as a TCP connection. The user does not instantiate a transport directly; they call a utility function, passing it a protocol factory and other information necessary to create the transport and protocol. Instances of the Transport class are returned from or used by event loop methods like loop.create_connection(), loop.create_unix_connection(), loop.create_server(), loop.sendfile(), etc. | python.library.asyncio-protocol#asyncio.Transport |
coroutine asyncio.wait(aws, *, loop=None, timeout=None, return_when=ALL_COMPLETED)
Run awaitable objects in the aws iterable concurrently and block until the condition specified by return_when. The aws iterable must not be empty. Returns two sets of Tasks/Futures: (done, pending). Usage: done, pending = await asyncio.wait(aws)
timeout (a float or int), if specified, can be used to control the maximum number of seconds to wait before returning. Note that this function does not raise asyncio.TimeoutError. Futures or Tasks that aren’t done when the timeout occurs are simply returned in the second set. return_when indicates when this function should return. It must be one of the following constants:
Constant Description
FIRST_COMPLETED The function will return when any future finishes or is cancelled.
FIRST_EXCEPTION The function will return when any future finishes by raising an exception. If no future raises an exception then it is equivalent to ALL_COMPLETED.
ALL_COMPLETED The function will return when all futures finish or are cancelled. Unlike wait_for(), wait() does not cancel the futures when a timeout occurs. Deprecated since version 3.8: If any awaitable in aws is a coroutine, it is automatically scheduled as a Task. Passing coroutine objects to wait() directly is deprecated as it leads to confusing behavior. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. Note wait() schedules coroutines as Tasks automatically and later returns those implicitly created Task objects in (done, pending) sets. Therefore the following code won’t work as expected:

async def foo():
    return 42

coro = foo()
done, pending = await asyncio.wait({coro})

if coro in done:
    # This branch will never be run!
    pass

Here is how the above snippet can be fixed:

async def foo():
    return 42

task = asyncio.create_task(foo())
done, pending = await asyncio.wait({task})

if task in done:
    # Everything will work as expected now.
    pass
Deprecated since version 3.8, will be removed in version 3.11: Passing coroutine objects to wait() directly is deprecated. | python.library.asyncio-task#asyncio.wait |
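As a sketch of the return_when constants, the following uses FIRST_COMPLETED with two tasks of different durations (the durations and result values are arbitrary):

```python
import asyncio

async def main():
    fast = asyncio.create_task(asyncio.sleep(0.01, result="fast"))
    slow = asyncio.create_task(asyncio.sleep(10, result="slow"))
    # Return as soon as any task finishes; the other stays pending.
    done, pending = await asyncio.wait(
        {fast, slow}, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return {t.result() for t in done}

result = asyncio.run(main())
print(result)
```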
coroutine asyncio.wait_for(aw, timeout, *, loop=None)
Wait for the aw awaitable to complete with a timeout. If aw is a coroutine it is automatically scheduled as a Task. timeout can either be None or a float or int number of seconds to wait for. If timeout is None, block until the future completes. If a timeout occurs, it cancels the task and raises asyncio.TimeoutError. To avoid the task cancellation, wrap it in shield(). The function will wait until the future is actually cancelled, so the total wait time may exceed the timeout. If an exception happens during cancellation, it is propagated. If the wait is cancelled, the future aw is also cancelled. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. Example:

async def eternity():
    # Sleep for one hour
    await asyncio.sleep(3600)
    print('yay!')

async def main():
    # Wait for at most 1 second
    try:
        await asyncio.wait_for(eternity(), timeout=1.0)
    except asyncio.TimeoutError:
        print('timeout!')

asyncio.run(main())

# Expected output:
#
# timeout!
Changed in version 3.7: When aw is cancelled due to a timeout, wait_for waits for aw to be cancelled. Previously, it raised asyncio.TimeoutError immediately. | python.library.asyncio-task#asyncio.wait_for |
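The shield() behavior mentioned above can be sketched as follows (the timings and the "done" result value are arbitrary): when wait_for() times out, only the shield is cancelled and the inner task keeps running:

```python
import asyncio

async def main():
    task = asyncio.create_task(asyncio.sleep(0.1, result="done"))
    try:
        # The timeout cancellation hits the shield, not the task itself.
        await asyncio.wait_for(asyncio.shield(task), timeout=0.01)
    except asyncio.TimeoutError:
        pass
    # The shielded task is still running and can be awaited normally.
    return await task

result = asyncio.run(main())
print(result)
```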
class asyncio.WindowsProactorEventLoopPolicy
An alternative event loop policy that uses the ProactorEventLoop event loop implementation. Availability: Windows. | python.library.asyncio-policy#asyncio.WindowsProactorEventLoopPolicy |
class asyncio.WindowsSelectorEventLoopPolicy
An alternative event loop policy that uses the SelectorEventLoop event loop implementation. Availability: Windows. | python.library.asyncio-policy#asyncio.WindowsSelectorEventLoopPolicy |
asyncio.wrap_future(future, *, loop=None)
Wrap a concurrent.futures.Future object in an asyncio.Future object. | python.library.asyncio-future#asyncio.wrap_future |
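A minimal sketch bridging a concurrent.futures future into a coroutine (the thread pool and the pow() call are illustrative):

```python
import asyncio
import concurrent.futures

async def main():
    with concurrent.futures.ThreadPoolExecutor() as pool:
        # Submit work to the pool, then await its result from asyncio.
        cf_future = pool.submit(pow, 2, 10)
        return await asyncio.wrap_future(cf_future)

result = asyncio.run(main())
print(result)
```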
class asyncio.WriteTransport(BaseTransport)
A base transport for write-only connections. Instances of the WriteTransport class are returned from the loop.connect_write_pipe() event loop method and are also used by subprocess-related methods like loop.subprocess_exec(). | python.library.asyncio-protocol#asyncio.WriteTransport |
WriteTransport.abort()
Close the transport immediately, without waiting for pending operations to complete. Buffered data will be lost. No more data will be received. The protocol’s protocol.connection_lost() method will eventually be called with None as its argument. | python.library.asyncio-protocol#asyncio.WriteTransport.abort |
WriteTransport.can_write_eof()
Return True if the transport supports write_eof(), False if not. | python.library.asyncio-protocol#asyncio.WriteTransport.can_write_eof |
WriteTransport.get_write_buffer_limits()
Get the high and low watermarks for write flow control. Return a tuple (low, high) where low and high are positive number of bytes. Use set_write_buffer_limits() to set the limits. New in version 3.4.2. | python.library.asyncio-protocol#asyncio.WriteTransport.get_write_buffer_limits |
WriteTransport.get_write_buffer_size()
Return the current size of the output buffer used by the transport. | python.library.asyncio-protocol#asyncio.WriteTransport.get_write_buffer_size |
WriteTransport.set_write_buffer_limits(high=None, low=None)
Set the high and low watermarks for write flow control. These two values (measured in number of bytes) control when the protocol’s protocol.pause_writing() and protocol.resume_writing() methods are called. If specified, the low watermark must be less than or equal to the high watermark. Neither high nor low can be negative. pause_writing() is called when the buffer size becomes greater than or equal to the high value. If writing has been paused, resume_writing() is called when the buffer size becomes less than or equal to the low value. The defaults are implementation-specific. If only the high watermark is given, the low watermark defaults to an implementation-specific value less than or equal to the high watermark. Setting high to zero forces low to zero as well, and causes pause_writing() to be called whenever the buffer becomes non-empty. Setting low to zero causes resume_writing() to be called only once the buffer is empty. Use of zero for either limit is generally sub-optimal as it reduces opportunities for doing I/O and computation concurrently. Use get_write_buffer_limits() to get the limits. | python.library.asyncio-protocol#asyncio.WriteTransport.set_write_buffer_limits |
WriteTransport.write(data)
Write some data bytes to the transport. This method does not block; it buffers the data and arranges for it to be sent out asynchronously. | python.library.asyncio-protocol#asyncio.WriteTransport.write |
WriteTransport.writelines(list_of_data)
Write a list (or any iterable) of data bytes to the transport. This is functionally equivalent to calling write() on each element yielded by the iterable, but may be implemented more efficiently. | python.library.asyncio-protocol#asyncio.WriteTransport.writelines |
WriteTransport.write_eof()
Close the write end of the transport after flushing all buffered data. Data may still be received. This method can raise NotImplementedError if the transport (e.g. SSL) doesn’t support half-closed connections. | python.library.asyncio-protocol#asyncio.WriteTransport.write_eof |
asyncore — Asynchronous socket handler Source code: Lib/asyncore.py Deprecated since version 3.6: Please use asyncio instead. Note This module exists for backwards compatibility only. For new code we recommend using asyncio. This module provides the basic infrastructure for writing asynchronous socket service clients and servers. There are only two ways to have a program on a single processor do “more than one thing at a time.” Multi-threaded programming is the simplest and most popular way to do it, but there is another very different technique, that lets you have nearly all the advantages of multi-threading, without actually using multiple threads. It’s really only practical if your program is largely I/O bound. If your program is processor bound, then pre-emptive scheduled threads are probably what you really need. Network servers are rarely processor bound, however. If your operating system supports the select() system call in its I/O library (and nearly all do), then you can use it to juggle multiple communication channels at once; doing other work while your I/O is taking place in the “background.” Although this strategy can seem strange and complex, especially at first, it is in many ways easier to understand and control than multi-threaded programming. The asyncore module solves many of the difficult problems for you, making the task of building sophisticated high-performance network servers and clients a snap. For “conversational” applications and protocols the companion asynchat module is invaluable. The basic idea behind both modules is to create one or more network channels, instances of class asyncore.dispatcher and asynchat.async_chat. Creating the channels adds them to a global map, used by the loop() function if you do not provide it with your own map. 
Once the initial channel(s) have been created, calling the loop() function activates channel service, which continues until the last channel (including any that have been added to the map during asynchronous service) is closed.
asyncore.loop([timeout[, use_poll[, map[, count]]]])
Enter a polling loop that terminates after count passes or all open channels have been closed. All arguments are optional. The count parameter defaults to None, resulting in the loop terminating only when all channels have been closed. The timeout argument sets the timeout parameter for the appropriate select() or poll() call, measured in seconds; the default is 30 seconds. The use_poll parameter, if true, indicates that poll() should be used in preference to select() (the default is False). The map parameter is a dictionary whose items are the channels to watch. As channels are closed they are deleted from their map. If map is omitted, a global map is used. Channels (instances of asyncore.dispatcher, asynchat.async_chat and subclasses thereof) can freely be mixed in the map.
class asyncore.dispatcher
The dispatcher class is a thin wrapper around a low-level socket object. To make it more useful, it has a few methods for event-handling which are called from the asynchronous loop. Otherwise, it can be treated as a normal non-blocking socket object. The firing of low-level events at certain times or in certain connection states tells the asynchronous loop that certain higher-level events have taken place. For example, if we have asked for a socket to connect to another host, we know that the connection has been made when the socket becomes writable for the first time (at this point you know that you may write to it with the expectation of success). The implied higher-level events are:
Event Description
handle_connect() Implied by the first read or write event
handle_close() Implied by a read event with no data available
handle_accepted() Implied by a read event on a listening socket During asynchronous processing, each mapped channel’s readable() and writable() methods are used to determine whether the channel’s socket should be added to the list of channels select()ed or poll()ed for read and write events. Thus, the set of channel events is larger than the basic socket events. The full set of methods that can be overridden in your subclass follows:
handle_read()
Called when the asynchronous loop detects that a read() call on the channel’s socket will succeed.
handle_write()
Called when the asynchronous loop detects that a writable socket can be written. Often this method will implement the necessary buffering for performance. For example:

def handle_write(self):
    sent = self.send(self.buffer)
    self.buffer = self.buffer[sent:]
handle_expt()
Called when there is out of band (OOB) data for a socket connection. This will almost never happen, as OOB is tenuously supported and rarely used.
handle_connect()
Called when the active opener’s socket actually makes a connection. Might send a “welcome” banner, or initiate a protocol negotiation with the remote endpoint, for example.
handle_close()
Called when the socket is closed.
handle_error()
Called when an exception is raised and not otherwise handled. The default version prints a condensed traceback.
handle_accept()
Called on listening channels (passive openers) when a connection can be established with a new remote endpoint that has issued a connect() call for the local endpoint. Deprecated since version 3.2: use handle_accepted() instead.
handle_accepted(sock, addr)
Called on listening channels (passive openers) when a connection has been established with a new remote endpoint that has issued a connect() call for the local endpoint. sock is a new socket object usable to send and receive data on the connection, and addr is the address bound to the socket on the other end of the connection. New in version 3.2.
readable()
Called each time around the asynchronous loop to determine whether a channel’s socket should be added to the list on which read events can occur. The default method simply returns True, indicating that by default, all channels will be interested in read events.
writable()
Called each time around the asynchronous loop to determine whether a channel’s socket should be added to the list on which write events can occur. The default method simply returns True, indicating that by default, all channels will be interested in write events.
In addition, each channel delegates or extends many of the socket methods. Most of these are nearly identical to their socket partners.
create_socket(family=socket.AF_INET, type=socket.SOCK_STREAM)
This is identical to the creation of a normal socket, and will use the same options for creation. Refer to the socket documentation for information on creating sockets. Changed in version 3.3: family and type arguments can be omitted.
connect(address)
As with the normal socket object, address is a tuple with the first element the host to connect to, and the second the port number.
send(data)
Send data to the remote end-point of the socket.
recv(buffer_size)
Read at most buffer_size bytes from the socket’s remote end-point. An empty bytes object implies that the channel has been closed from the other end. Note that recv() may raise BlockingIOError, even though select.select() or select.poll() has reported the socket ready for reading.
listen(backlog)
Listen for connections made to the socket. The backlog argument specifies the maximum number of queued connections and should be at least 1; the maximum value is system-dependent (usually 5).
bind(address)
Bind the socket to address. The socket must not already be bound. (The format of address depends on the address family — refer to the socket documentation for more information.) To mark the socket as re-usable (setting the SO_REUSEADDR option), call the dispatcher object’s set_reuse_addr() method.
accept()
Accept a connection. The socket must be bound to an address and listening for connections. The return value can be either None or a pair (conn, address) where conn is a new socket object usable to send and receive data on the connection, and address is the address bound to the socket on the other end of the connection. When None is returned it means the connection didn’t take place, in which case the server should just ignore this event and keep listening for further incoming connections.
close()
Close the socket. All future operations on the socket object will fail. The remote end-point will receive no more data (after queued data is flushed). Sockets are automatically closed when they are garbage-collected.
class asyncore.dispatcher_with_send
A dispatcher subclass which adds simple buffered output capability, useful for simple clients. For more sophisticated usage use asynchat.async_chat.
class asyncore.file_dispatcher
A file_dispatcher takes a file descriptor or file object along with an optional map argument and wraps it for use with the poll() or loop() functions. If provided a file object or anything with a fileno() method, that method will be called and passed to the file_wrapper constructor. Availability: Unix.
class asyncore.file_wrapper
A file_wrapper takes an integer file descriptor and calls os.dup() to duplicate the handle so that the original handle may be closed independently of the file_wrapper. This class implements sufficient methods to emulate a socket for use by the file_dispatcher class. Availability: Unix.
asyncore Example basic HTTP client Here is a very basic HTTP client that uses the dispatcher class to implement its socket handling:

import asyncore

class HTTPClient(asyncore.dispatcher):

    def __init__(self, host, path):
        asyncore.dispatcher.__init__(self)
        self.create_socket()
        self.connect((host, 80))
        self.buffer = bytes('GET %s HTTP/1.0\r\nHost: %s\r\n\r\n' %
                            (path, host), 'ascii')

    def handle_connect(self):
        pass

    def handle_close(self):
        self.close()

    def handle_read(self):
        print(self.recv(8192))

    def writable(self):
        return (len(self.buffer) > 0)

    def handle_write(self):
        sent = self.send(self.buffer)
        self.buffer = self.buffer[sent:]

client = HTTPClient('www.python.org', '/')
asyncore.loop()
asyncore Example basic echo server Here is a basic echo server that uses the dispatcher class to accept connections and dispatches the incoming connections to a handler:

import asyncore

class EchoHandler(asyncore.dispatcher_with_send):

    def handle_read(self):
        data = self.recv(8192)
        if data:
            self.send(data)

class EchoServer(asyncore.dispatcher):

    def __init__(self, host, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket()
        self.set_reuse_addr()
        self.bind((host, port))
        self.listen(5)

    def handle_accepted(self, sock, addr):
        print('Incoming connection from %s' % repr(addr))
        handler = EchoHandler(sock)

server = EchoServer('localhost', 8080)
asyncore.loop() | python.library.asyncore |
class asyncore.dispatcher
The dispatcher class is a thin wrapper around a low-level socket object. To make it more useful, it has a few methods for event-handling which are called from the asynchronous loop. Otherwise, it can be treated as a normal non-blocking socket object. The firing of low-level events at certain times or in certain connection states tells the asynchronous loop that certain higher-level events have taken place. For example, if we have asked for a socket to connect to another host, we know that the connection has been made when the socket becomes writable for the first time (at this point you know that you may write to it with the expectation of success). The implied higher-level events are:
Event Description
handle_connect() Implied by the first read or write event
handle_close() Implied by a read event with no data available
handle_accepted() Implied by a read event on a listening socket During asynchronous processing, each mapped channel’s readable() and writable() methods are used to determine whether the channel’s socket should be added to the list of channels select()ed or poll()ed for read and write events. Thus, the set of channel events is larger than the basic socket events. The full set of methods that can be overridden in your subclass follows:
handle_read()
Called when the asynchronous loop detects that a read() call on the channel’s socket will succeed.
handle_write()
Called when the asynchronous loop detects that a writable socket can be written. Often this method will implement the necessary buffering for performance. For example:

def handle_write(self):
    sent = self.send(self.buffer)
    self.buffer = self.buffer[sent:]
handle_expt()
Called when there is out of band (OOB) data for a socket connection. This will almost never happen, as OOB is tenuously supported and rarely used.
handle_connect()
Called when the active opener’s socket actually makes a connection. Might send a “welcome” banner, or initiate a protocol negotiation with the remote endpoint, for example.
handle_close()
Called when the socket is closed.
handle_error()
Called when an exception is raised and not otherwise handled. The default version prints a condensed traceback.
handle_accept()
Called on listening channels (passive openers) when a connection can be established with a new remote endpoint that has issued a connect() call for the local endpoint. Deprecated since version 3.2: use handle_accepted() instead.
handle_accepted(sock, addr)
Called on listening channels (passive openers) when a connection has been established with a new remote endpoint that has issued a connect() call for the local endpoint. sock is a new socket object usable to send and receive data on the connection, and addr is the address bound to the socket on the other end of the connection. New in version 3.2.
readable()
Called each time around the asynchronous loop to determine whether a channel’s socket should be added to the list on which read events can occur. The default method simply returns True, indicating that by default, all channels will be interested in read events.
writable()
Called each time around the asynchronous loop to determine whether a channel’s socket should be added to the list on which write events can occur. The default method simply returns True, indicating that by default, all channels will be interested in write events.
In addition, each channel delegates or extends many of the socket methods. Most of these are nearly identical to their socket partners.
create_socket(family=socket.AF_INET, type=socket.SOCK_STREAM)
This is identical to the creation of a normal socket, and will use the same options for creation. Refer to the socket documentation for information on creating sockets. Changed in version 3.3: family and type arguments can be omitted.
connect(address)
As with the normal socket object, address is a tuple with the first element the host to connect to, and the second the port number.
send(data)
Send data to the remote end-point of the socket.
recv(buffer_size)
Read at most buffer_size bytes from the socket’s remote end-point. An empty bytes object implies that the channel has been closed from the other end. Note that recv() may raise BlockingIOError, even though select.select() or select.poll() has reported the socket ready for reading.
listen(backlog)
Listen for connections made to the socket. The backlog argument specifies the maximum number of queued connections and should be at least 1; the maximum value is system-dependent (usually 5).
bind(address)
Bind the socket to address. The socket must not already be bound. (The format of address depends on the address family — refer to the socket documentation for more information.) To mark the socket as re-usable (setting the SO_REUSEADDR option), call the dispatcher object’s set_reuse_addr() method.
accept()
Accept a connection. The socket must be bound to an address and listening for connections. The return value can be either None or a pair (conn, address) where conn is a new socket object usable to send and receive data on the connection, and address is the address bound to the socket on the other end of the connection. When None is returned it means the connection didn’t take place, in which case the server should just ignore this event and keep listening for further incoming connections.
close()
Close the socket. All future operations on the socket object will fail. The remote end-point will receive no more data (after queued data is flushed). Sockets are automatically closed when they are garbage-collected. | python.library.asyncore#asyncore.dispatcher |
accept()
Accept a connection. The socket must be bound to an address and listening for connections. The return value can be either None or a pair (conn, address) where conn is a new socket object usable to send and receive data on the connection, and address is the address bound to the socket on the other end of the connection. When None is returned it means the connection didn’t take place, in which case the server should just ignore this event and keep listening for further incoming connections. | python.library.asyncore#asyncore.dispatcher.accept |
bind(address)
Bind the socket to address. The socket must not already be bound. (The format of address depends on the address family — refer to the socket documentation for more information.) To mark the socket as re-usable (setting the SO_REUSEADDR option), call the dispatcher object’s set_reuse_addr() method. | python.library.asyncore#asyncore.dispatcher.bind |
close()
Close the socket. All future operations on the socket object will fail. The remote end-point will receive no more data (after queued data is flushed). Sockets are automatically closed when they are garbage-collected. | python.library.asyncore#asyncore.dispatcher.close |
connect(address)
As with the normal socket object, address is a tuple with the first element the host to connect to, and the second the port number. | python.library.asyncore#asyncore.dispatcher.connect |
create_socket(family=socket.AF_INET, type=socket.SOCK_STREAM)
This is identical to the creation of a normal socket, and will use the same options for creation. Refer to the socket documentation for information on creating sockets. Changed in version 3.3: family and type arguments can be omitted. | python.library.asyncore#asyncore.dispatcher.create_socket |
handle_accept()
Called on listening channels (passive openers) when a connection can be established with a new remote endpoint that has issued a connect() call for the local endpoint. Deprecated since version 3.2: use handle_accepted() instead. | python.library.asyncore#asyncore.dispatcher.handle_accept |
handle_accepted(sock, addr)
Called on listening channels (passive openers) when a connection has been established with a new remote endpoint that has issued a connect() call for the local endpoint. sock is a new socket object usable to send and receive data on the connection, and addr is the address bound to the socket on the other end of the connection. New in version 3.2. | python.library.asyncore#asyncore.dispatcher.handle_accepted |
handle_close()
Called when the socket is closed. | python.library.asyncore#asyncore.dispatcher.handle_close |
handle_connect()
Called when the active opener’s socket actually makes a connection. Might send a “welcome” banner, or initiate a protocol negotiation with the remote endpoint, for example. | python.library.asyncore#asyncore.dispatcher.handle_connect |
handle_error()
Called when an exception is raised and not otherwise handled. The default version prints a condensed traceback. | python.library.asyncore#asyncore.dispatcher.handle_error |
handle_expt()
Called when there is out of band (OOB) data for a socket connection. This will almost never happen, as OOB is tenuously supported and rarely used. | python.library.asyncore#asyncore.dispatcher.handle_expt |
handle_read()
Called when the asynchronous loop detects that a read() call on the channel’s socket will succeed. | python.library.asyncore#asyncore.dispatcher.handle_read |
handle_write()
Called when the asynchronous loop detects that a writable socket can be written. Often this method will implement the necessary buffering for performance. For example:

def handle_write(self):
    sent = self.send(self.buffer)
    self.buffer = self.buffer[sent:] | python.library.asyncore#asyncore.dispatcher.handle_write |
listen(backlog)
Listen for connections made to the socket. The backlog argument specifies the maximum number of queued connections and should be at least 1; the maximum value is system-dependent (usually 5). | python.library.asyncore#asyncore.dispatcher.listen |
readable()
Called each time around the asynchronous loop to determine whether a channel’s socket should be added to the list on which read events can occur. The default method simply returns True, indicating that by default, all channels will be interested in read events. | python.library.asyncore#asyncore.dispatcher.readable |
recv(buffer_size)
Read at most buffer_size bytes from the socket’s remote end-point. An empty bytes object implies that the channel has been closed from the other end. Note that recv() may raise BlockingIOError, even though select.select() or select.poll() has reported the socket ready for reading. | python.library.asyncore#asyncore.dispatcher.recv
send(data)
Send data to the remote end-point of the socket. | python.library.asyncore#asyncore.dispatcher.send |
writable()
Called each time around the asynchronous loop to determine whether a channel’s socket should be added to the list on which write events can occur. The default method simply returns True, indicating that by default, all channels will be interested in write events. | python.library.asyncore#asyncore.dispatcher.writable |
class asyncore.dispatcher_with_send
A dispatcher subclass which adds simple buffered output capability, useful for simple clients. For more sophisticated usage use asynchat.async_chat. | python.library.asyncore#asyncore.dispatcher_with_send |
class asyncore.file_dispatcher
A file_dispatcher takes a file descriptor or file object along with an optional map argument and wraps it for use with the poll() or loop() functions. If provided a file object or anything with a fileno() method, that method will be called and passed to the file_wrapper constructor. Availability: Unix. | python.library.asyncore#asyncore.file_dispatcher |
class asyncore.file_wrapper
A file_wrapper takes an integer file descriptor and calls os.dup() to duplicate the handle so that the original handle may be closed independently of the file_wrapper. This class implements sufficient methods to emulate a socket for use by the file_dispatcher class. Availability: Unix. | python.library.asyncore#asyncore.file_wrapper |
asyncore.loop([timeout[, use_poll[, map[, count]]]])
Enter a polling loop that terminates after count passes or all open channels have been closed. All arguments are optional. The count parameter defaults to None, resulting in the loop terminating only when all channels have been closed. The timeout argument sets the timeout parameter for the appropriate select() or poll() call, measured in seconds; the default is 30 seconds. The use_poll parameter, if true, indicates that poll() should be used in preference to select() (the default is False). The map parameter is a dictionary whose items are the channels to watch. As channels are closed they are deleted from their map. If map is omitted, a global map is used. Channels (instances of asyncore.dispatcher, asynchat.async_chat and subclasses thereof) can freely be mixed in the map. | python.library.asyncore#asyncore.loop |
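A minimal sketch of driving loop() with an explicit map and a bounded count: two invented dispatcher subclasses are wired together over a socketpair, one sending a buffered payload and the other collecting it. All class and variable names here are illustrative, and since asyncore was removed in Python 3.12 this assumes an older interpreter:

```python
import asyncore
import socket

channel_map = {}   # explicit map instead of the module-global one
received = []

class Reader(asyncore.dispatcher):
    def handle_read(self):
        # Called when the loop detects the socket is readable.
        received.append(self.recv(1024))

    def writable(self):
        return False   # only interested in read events

class Writer(asyncore.dispatcher):
    def __init__(self, sock, map):
        super().__init__(sock, map)
        self.buffer = b"hello"

    def handle_write(self):
        sent = self.send(self.buffer)
        self.buffer = self.buffer[sent:]

    def writable(self):
        # Stop polling for write events once the buffer drains.
        return bool(self.buffer)

a, b = socket.socketpair()
Reader(a, map=channel_map)
Writer(b, map=channel_map)

# Run at most 10 passes over the poll loop instead of blocking until
# every channel is closed (count=None, the default).
asyncore.loop(timeout=0.1, map=channel_map, count=10)

for disp in list(channel_map.values()):
    disp.close()

assert b"".join(received) == b"hello"
```

Passing a private map keeps the sketch independent of any other channels registered in the process-wide default map.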
atexit — Exit handlers The atexit module defines functions to register and unregister cleanup functions. Functions thus registered are automatically executed upon normal interpreter termination. atexit runs these functions in the reverse order in which they were registered; if you register A, B, and C, at interpreter termination time they will be run in the order C, B, A. Note: The functions registered via this module are not called when the program is killed by a signal not handled by Python, when a Python fatal internal error is detected, or when os._exit() is called. Changed in version 3.7: When used with C-API subinterpreters, registered functions are local to the interpreter they were registered in.
atexit.register(func, *args, **kwargs)
Register func as a function to be executed at termination. Any optional arguments that are to be passed to func must be passed as arguments to register(). It is possible to register the same function and arguments more than once. At normal program termination (for instance, if sys.exit() is called or the main module’s execution completes), all functions registered are called in last in, first out order. The assumption is that lower level modules will normally be imported before higher level modules and thus must be cleaned up later. If an exception is raised during execution of the exit handlers, a traceback is printed (unless SystemExit is raised) and the exception information is saved. After all exit handlers have had a chance to run, the last exception to be raised is re-raised. This function returns func, which makes it possible to use it as a decorator.
atexit.unregister(func)
Remove func from the list of functions to be run at interpreter shutdown. After calling unregister(), func is guaranteed not to be called when the interpreter shuts down, even if it was registered more than once. unregister() silently does nothing if func was not previously registered.
See also
Module readline
Useful example of atexit to read and write readline history files.
atexit Example
The following simple example demonstrates how a module can initialize a counter from a file when it is imported and save the counter’s updated value automatically when the program terminates, without relying on the application making an explicit call into this module at termination. try:
with open("counterfile") as infile:
_count = int(infile.read())
except FileNotFoundError:
_count = 0
def incrcounter(n):
global _count
_count = _count + n
def savecounter():
with open("counterfile", "w") as outfile:
outfile.write("%d" % _count)
import atexit
atexit.register(savecounter)
Positional and keyword arguments may also be passed to register() to be passed along to the registered function when it is called: def goodbye(name, adjective):
print('Goodbye, %s, it was %s to meet you.' % (name, adjective))
import atexit
atexit.register(goodbye, 'Donny', 'nice')
# or:
atexit.register(goodbye, adjective='nice', name='Donny')
Usage as a decorator: import atexit
@atexit.register
def goodbye():
print("You are now leaving the Python sector.")
This only works with functions that can be called without arguments. | python.library.atexit |
atexit.register(func, *args, **kwargs)
Register func as a function to be executed at termination. Any optional arguments that are to be passed to func must be passed as arguments to register(). It is possible to register the same function and arguments more than once. At normal program termination (for instance, if sys.exit() is called or the main module’s execution completes), all functions registered are called in last in, first out order. The assumption is that lower level modules will normally be imported before higher level modules and thus must be cleaned up later. If an exception is raised during execution of the exit handlers, a traceback is printed (unless SystemExit is raised) and the exception information is saved. After all exit handlers have had a chance to run, the last exception to be raised is re-raised. This function returns func, which makes it possible to use it as a decorator. | python.library.atexit#atexit.register
atexit.unregister(func)
Remove func from the list of functions to be run at interpreter shutdown. After calling unregister(), func is guaranteed not to be called when the interpreter shuts down, even if it was registered more than once. unregister() silently does nothing if func was not previously registered. | python.library.atexit#atexit.unregister |
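A short sketch of the register()/unregister() contract described above; the function and list names are invented for illustration:

```python
import atexit

log = []

def flush_logs():
    # Hypothetical cleanup handler; would run at interpreter shutdown.
    log.append("flushed")

# register() returns func, which is what makes decorator usage possible.
handler = atexit.register(flush_logs)
assert handler is flush_logs

# Registering twice is allowed; one unregister() call removes every
# registration of the function.
atexit.register(flush_logs)
atexit.unregister(flush_logs)

# Unregistering a function that was never registered is a silent no-op.
atexit.unregister(print)

# Nothing runs until shutdown, so the handler has not fired yet.
assert log == []
```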
exception AttributeError
Raised when an attribute reference (see Attribute references) or assignment fails. (When an object does not support attribute references or attribute assignments at all, TypeError is raised.) | python.library.exceptions#AttributeError |
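The distinction can be seen directly: referencing a missing attribute raises AttributeError, while a malformed attribute access (here, a non-string name passed to getattr()) raises TypeError instead. The Point class is invented for illustration:

```python
class Point:
    """Tiny class used only to demonstrate attribute errors."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)

# A missing attribute on an object that supports attribute access.
try:
    p.z
except AttributeError as exc:
    print(exc)

# getattr() requires the attribute name to be a string; anything
# else fails before attribute lookup even starts.
try:
    getattr(p, 42)
except TypeError as exc:
    print(exc)
```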
audioop — Manipulate raw audio data The audioop module contains some useful operations on sound fragments. It operates on sound fragments consisting of signed integer samples 8, 16, 24 or 32 bits wide, stored in bytes-like objects. All scalar items are integers, unless specified otherwise. Changed in version 3.4: Support for 24-bit samples was added. All functions now accept any bytes-like object. String input now results in an immediate error. This module provides support for a-LAW, u-LAW and Intel/DVI ADPCM encodings. A few of the more complicated operations only take 16-bit samples, otherwise the sample size (in bytes) is always a parameter of the operation. The module defines the following variables and functions:
exception audioop.error
This exception is raised on all errors, such as unknown number of bytes per sample, etc.
audioop.add(fragment1, fragment2, width)
Return a fragment which is the addition of the two samples passed as parameters. width is the sample width in bytes, either 1, 2, 3 or 4. Both fragments should have the same length. Samples are truncated in case of overflow.
audioop.adpcm2lin(adpcmfragment, width, state)
Decode an Intel/DVI ADPCM coded fragment to a linear fragment. See the description of lin2adpcm() for details on ADPCM coding. Return a tuple (sample, newstate) where the sample has the width specified in width.
audioop.alaw2lin(fragment, width)
Convert sound fragments in a-LAW encoding to linearly encoded sound fragments. a-LAW encoding always uses 8-bit samples, so width refers only to the sample width of the output fragment here.
audioop.avg(fragment, width)
Return the average over all samples in the fragment.
audioop.avgpp(fragment, width)
Return the average peak-peak value over all samples in the fragment. No filtering is done, so the usefulness of this routine is questionable.
audioop.bias(fragment, width, bias)
Return a fragment that is the original fragment with a bias added to each sample. Samples wrap around in case of overflow.
audioop.byteswap(fragment, width)
“Byteswap” all samples in a fragment and return the modified fragment. Convert big-endian samples to little-endian and vice versa. New in version 3.4.
audioop.cross(fragment, width)
Return the number of zero crossings in the fragment passed as an argument.
audioop.findfactor(fragment, reference)
Return a factor F such that rms(add(fragment, mul(reference, -F))) is minimal, i.e., return the factor with which you should multiply reference to make it match as well as possible to fragment. The fragments should both contain 2-byte samples. The time taken by this routine is proportional to len(fragment).
audioop.findfit(fragment, reference)
Try to match reference as well as possible to a portion of fragment (which should be the longer fragment). This is (conceptually) done by taking slices out of fragment, using findfactor() to compute the best match, and minimizing the result. The fragments should both contain 2-byte samples. Return a tuple (offset, factor) where offset is the (integer) offset into fragment where the optimal match started and factor is the (floating-point) factor as per findfactor().
audioop.findmax(fragment, length)
Search fragment for a slice of length length samples (not bytes!) with maximum energy, i.e., return i for which rms(fragment[i*2:(i+length)*2]) is maximal. The fragment should contain 2-byte samples. The routine takes time proportional to len(fragment).
audioop.getsample(fragment, width, index)
Return the value of sample index from the fragment.
audioop.lin2adpcm(fragment, width, state)
Convert samples to 4-bit Intel/DVI ADPCM encoding. ADPCM coding is an adaptive coding scheme, whereby each 4-bit number is the difference between one sample and the next, divided by a (varying) step. The Intel/DVI ADPCM algorithm has been selected for use by the IMA, so it may well become a standard. state is a tuple containing the state of the coder. The coder returns a tuple (adpcmfrag, newstate), and the newstate should be passed to the next call of lin2adpcm(). In the initial call, None can be passed as the state. adpcmfrag is the ADPCM coded fragment packed two 4-bit values per byte.
audioop.lin2alaw(fragment, width)
Convert samples in the audio fragment to a-LAW encoding and return this as a bytes object. a-LAW is an audio encoding format whereby you get a dynamic range of about 13 bits using only 8-bit samples. It is used by the Sun audio hardware, among others.
audioop.lin2lin(fragment, width, newwidth)
Convert samples between 1-, 2-, 3- and 4-byte formats. Note In some audio formats, such as .WAV files, 16-, 24- and 32-bit samples are signed, but 8-bit samples are unsigned. So when converting to 8-bit-wide samples for these formats, you need to also add 128 to the result: new_frames = audioop.lin2lin(frames, old_width, 1)
new_frames = audioop.bias(new_frames, 1, 128)
The same, in reverse, has to be applied when converting from 8-bit to 16-, 24- or 32-bit samples.
audioop.lin2ulaw(fragment, width)
Convert samples in the audio fragment to u-LAW encoding and return this as a bytes object. u-LAW is an audio encoding format whereby you get a dynamic range of about 14 bits using only 8-bit samples. It is used by the Sun audio hardware, among others.
audioop.max(fragment, width)
Return the maximum of the absolute value of all samples in a fragment.
audioop.maxpp(fragment, width)
Return the maximum peak-peak value in the sound fragment.
audioop.minmax(fragment, width)
Return a tuple consisting of the minimum and maximum values of all samples in the sound fragment.
audioop.mul(fragment, width, factor)
Return a fragment that has all samples in the original fragment multiplied by the floating-point value factor. Samples are truncated in case of overflow.
audioop.ratecv(fragment, width, nchannels, inrate, outrate, state[, weightA[, weightB]])
Convert the frame rate of the input fragment. state is a tuple containing the state of the converter. The converter returns a tuple (newfragment, newstate), and newstate should be passed to the next call of ratecv(). The initial call should pass None as the state. The weightA and weightB arguments are parameters for a simple digital filter and default to 1 and 0 respectively.
audioop.reverse(fragment, width)
Reverse the samples in a fragment and return the modified fragment.
audioop.rms(fragment, width)
Return the root-mean-square of the fragment, i.e. sqrt(sum(S_i^2)/n). This is a measure of the power in an audio signal.
audioop.tomono(fragment, width, lfactor, rfactor)
Convert a stereo fragment to a mono fragment. The left channel is multiplied by lfactor and the right channel by rfactor before adding the two channels to give a mono signal.
audioop.tostereo(fragment, width, lfactor, rfactor)
Generate a stereo fragment from a mono fragment. Each pair of samples in the stereo fragment is computed from the mono sample, whereby left channel samples are multiplied by lfactor and right channel samples by rfactor.
audioop.ulaw2lin(fragment, width)
Convert sound fragments in u-LAW encoding to linearly encoded sound fragments. u-LAW encoding always uses 8-bit samples, so width refers only to the sample width of the output fragment here.
Note that operations such as mul() or max() make no distinction between mono and stereo fragments, i.e. all samples are treated equally. If this is a problem, the stereo fragment should be split into two mono fragments first and recombined later. Here is an example of how to do that: def mul_stereo(sample, width, lfactor, rfactor):
lsample = audioop.tomono(sample, width, 1, 0)
rsample = audioop.tomono(sample, width, 0, 1)
lsample = audioop.mul(lsample, width, lfactor)
rsample = audioop.mul(rsample, width, rfactor)
lsample = audioop.tostereo(lsample, width, 1, 0)
rsample = audioop.tostereo(rsample, width, 0, 1)
return audioop.add(lsample, rsample, width)
If you use the ADPCM coder to build network packets and you want your protocol to be stateless (i.e. to be able to tolerate packet loss) you should not only transmit the data but also the state. Note that you should send the initial state (the one you passed to lin2adpcm()) along to the decoder, not the final state (as returned by the coder). If you want to use struct.Struct to store the state in binary you can code the first element (the predicted value) in 16 bits and the second (the delta index) in 8. The ADPCM coders have never been tried against other ADPCM coders, only against themselves. It could well be that I misinterpreted the standards in which case they will not be interoperable with the respective standards. The find*() routines might look a bit funny at first sight. They are primarily meant to do echo cancellation. A reasonably fast way to do this is to pick the most energetic piece of the output sample, locate that in the input sample and subtract the whole output sample from the input sample: def echocancel(outputdata, inputdata):
pos = audioop.findmax(outputdata, 800) # one tenth second
out_test = outputdata[pos*2:]
in_test = inputdata[pos*2:]
ipos, factor = audioop.findfit(in_test, out_test)
# Optional (for better cancellation):
# factor = audioop.findfactor(in_test[ipos*2:ipos*2+len(out_test)],
# out_test)
    prefill = b'\0'*(pos+ipos)*2
    postfill = b'\0'*(len(inputdata)-len(prefill)-len(outputdata))
outputdata = prefill + audioop.mul(outputdata, 2, -factor) + postfill
return audioop.add(inputdata, outputdata, 2) | python.library.audioop |
audioop.add(fragment1, fragment2, width)
Return a fragment which is the addition of the two samples passed as parameters. width is the sample width in bytes, either 1, 2, 3 or 4. Both fragments should have the same length. Samples are truncated in case of overflow. | python.library.audioop#audioop.add |
audioop.adpcm2lin(adpcmfragment, width, state)
Decode an Intel/DVI ADPCM coded fragment to a linear fragment. See the description of lin2adpcm() for details on ADPCM coding. Return a tuple (sample, newstate) where the sample has the width specified in width. | python.library.audioop#audioop.adpcm2lin |
audioop.alaw2lin(fragment, width)
Convert sound fragments in a-LAW encoding to linearly encoded sound fragments. a-LAW encoding always uses 8-bit samples, so width refers only to the sample width of the output fragment here. | python.library.audioop#audioop.alaw2lin
audioop.avg(fragment, width)
Return the average over all samples in the fragment. | python.library.audioop#audioop.avg |
audioop.avgpp(fragment, width)
Return the average peak-peak value over all samples in the fragment. No filtering is done, so the usefulness of this routine is questionable. | python.library.audioop#audioop.avgpp |
audioop.bias(fragment, width, bias)
Return a fragment that is the original fragment with a bias added to each sample. Samples wrap around in case of overflow. | python.library.audioop#audioop.bias |
audioop.byteswap(fragment, width)
“Byteswap” all samples in a fragment and return the modified fragment. Convert big-endian samples to little-endian and vice versa. New in version 3.4. | python.library.audioop#audioop.byteswap
audioop.cross(fragment, width)
Return the number of zero crossings in the fragment passed as an argument. | python.library.audioop#audioop.cross |
exception audioop.error
This exception is raised on all errors, such as unknown number of bytes per sample, etc. | python.library.audioop#audioop.error |
audioop.findfactor(fragment, reference)
Return a factor F such that rms(add(fragment, mul(reference, -F))) is minimal, i.e., return the factor with which you should multiply reference to make it match as well as possible to fragment. The fragments should both contain 2-byte samples. The time taken by this routine is proportional to len(fragment). | python.library.audioop#audioop.findfactor |
audioop.findfit(fragment, reference)
Try to match reference as well as possible to a portion of fragment (which should be the longer fragment). This is (conceptually) done by taking slices out of fragment, using findfactor() to compute the best match, and minimizing the result. The fragments should both contain 2-byte samples. Return a tuple (offset, factor) where offset is the (integer) offset into fragment where the optimal match started and factor is the (floating-point) factor as per findfactor(). | python.library.audioop#audioop.findfit |
audioop.findmax(fragment, length)
Search fragment for a slice of length length samples (not bytes!) with maximum energy, i.e., return i for which rms(fragment[i*2:(i+length)*2]) is maximal. The fragment should contain 2-byte samples. The routine takes time proportional to len(fragment). | python.library.audioop#audioop.findmax
audioop.getsample(fragment, width, index)
Return the value of sample index from the fragment. | python.library.audioop#audioop.getsample |
audioop.lin2adpcm(fragment, width, state)
Convert samples to 4-bit Intel/DVI ADPCM encoding. ADPCM coding is an adaptive coding scheme, whereby each 4-bit number is the difference between one sample and the next, divided by a (varying) step. The Intel/DVI ADPCM algorithm has been selected for use by the IMA, so it may well become a standard. state is a tuple containing the state of the coder. The coder returns a tuple (adpcmfrag, newstate), and the newstate should be passed to the next call of lin2adpcm(). In the initial call, None can be passed as the state. adpcmfrag is the ADPCM coded fragment packed two 4-bit values per byte. | python.library.audioop#audioop.lin2adpcm
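A sketch of threading the (adpcmfrag, newstate) tuple through successive calls, as when encoding a stream chunk by chunk. The PCM bytes are dummy data, the chunk size is arbitrary, and since audioop was removed in Python 3.13 this assumes an older interpreter:

```python
import audioop

pcm = b"\x00\x01\x02\x03" * 50   # 100 dummy 16-bit samples (200 bytes)

# Encode in 40-byte chunks, passing None as the state on the first
# call only and threading newstate into each subsequent call.
state = None
encoded = []
for i in range(0, len(pcm), 40):
    frag, state = audioop.lin2adpcm(pcm[i:i + 40], 2, state)
    encoded.append(frag)

# adpcm2lin() threads its state the same way when decoding.
state = None
decoded = b""
for frag in encoded:
    samples, state = audioop.adpcm2lin(frag, 2, state)
    decoded += samples

# ADPCM is lossy, so decoded only approximates pcm, but each input
# sample produces one output sample, so the lengths match.
assert len(decoded) == len(pcm)
```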
audioop.lin2alaw(fragment, width)
Convert samples in the audio fragment to a-LAW encoding and return this as a bytes object. a-LAW is an audio encoding format whereby you get a dynamic range of about 13 bits using only 8-bit samples. It is used by the Sun audio hardware, among others. | python.library.audioop#audioop.lin2alaw
audioop.lin2lin(fragment, width, newwidth)
Convert samples between 1-, 2-, 3- and 4-byte formats. Note In some audio formats, such as .WAV files, 16-, 24- and 32-bit samples are signed, but 8-bit samples are unsigned. So when converting to 8-bit-wide samples for these formats, you need to also add 128 to the result: new_frames = audioop.lin2lin(frames, old_width, 1)
new_frames = audioop.bias(new_frames, 1, 128)
The same, in reverse, has to be applied when converting from 8-bit to 16-, 24- or 32-bit samples. | python.library.audioop#audioop.lin2lin
audioop.lin2ulaw(fragment, width)
Convert samples in the audio fragment to u-LAW encoding and return this as a bytes object. u-LAW is an audio encoding format whereby you get a dynamic range of about 14 bits using only 8-bit samples. It is used by the Sun audio hardware, among others. | python.library.audioop#audioop.lin2ulaw
audioop.max(fragment, width)
Return the maximum of the absolute value of all samples in a fragment. | python.library.audioop#audioop.max |
audioop.maxpp(fragment, width)
Return the maximum peak-peak value in the sound fragment. | python.library.audioop#audioop.maxpp |
audioop.minmax(fragment, width)
Return a tuple consisting of the minimum and maximum values of all samples in the sound fragment. | python.library.audioop#audioop.minmax |
audioop.mul(fragment, width, factor)
Return a fragment that has all samples in the original fragment multiplied by the floating-point value factor. Samples are truncated in case of overflow. | python.library.audioop#audioop.mul |
audioop.ratecv(fragment, width, nchannels, inrate, outrate, state[, weightA[, weightB]])
Convert the frame rate of the input fragment. state is a tuple containing the state of the converter. The converter returns a tuple (newfragment, newstate), and newstate should be passed to the next call of ratecv(). The initial call should pass None as the state. The weightA and weightB arguments are parameters for a simple digital filter and default to 1 and 0 respectively. | python.library.audioop#audioop.ratecv |
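A sketch of chunked rate conversion, again threading the converter state between calls. The sample values, rates, and chunk sizes are arbitrary, and audioop's removal in Python 3.13 means this assumes an older interpreter:

```python
import audioop

# Downsample dummy mono 16-bit audio from 44100 Hz to 22050 Hz.
chunk1 = b"\x00\x10" * 441
chunk2 = b"\x00\xf0" * 441

state = None   # initial call must pass None
out = b""
for chunk in (chunk1, chunk2):
    converted, state = audioop.ratecv(chunk, 2, 1, 44100, 22050, state)
    out += converted

# Halving the rate roughly halves the number of samples; the exact
# split between chunks depends on the converter's internal state.
assert 0 < len(out) < len(chunk1) + len(chunk2)
```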
audioop.reverse(fragment, width)
Reverse the samples in a fragment and return the modified fragment. | python.library.audioop#audioop.reverse
audioop.rms(fragment, width)
Return the root-mean-square of the fragment, i.e. sqrt(sum(S_i^2)/n). This is a measure of the power in an audio signal. | python.library.audioop#audioop.rms |
audioop.tomono(fragment, width, lfactor, rfactor)
Convert a stereo fragment to a mono fragment. The left channel is multiplied by lfactor and the right channel by rfactor before adding the two channels to give a mono signal. | python.library.audioop#audioop.tomono |
audioop.tostereo(fragment, width, lfactor, rfactor)
Generate a stereo fragment from a mono fragment. Each pair of samples in the stereo fragment is computed from the mono sample, whereby left channel samples are multiplied by lfactor and right channel samples by rfactor. | python.library.audioop#audioop.tostereo
audioop.ulaw2lin(fragment, width)
Convert sound fragments in u-LAW encoding to linearly encoded sound fragments. u-LAW encoding always uses 8-bit samples, so width refers only to the sample width of the output fragment here. | python.library.audioop#audioop.ulaw2lin
base64 — Base16, Base32, Base64, Base85 Data Encodings Source code: Lib/base64.py This module provides functions for encoding binary data to printable ASCII characters and decoding such encodings back to binary data. It provides encoding and decoding functions for the encodings specified in RFC 3548, which defines the Base16, Base32, and Base64 algorithms, and for the de facto standard Ascii85 and Base85 encodings. The RFC 3548 encodings are suitable for encoding binary data so that it can safely be sent by email, used as parts of URLs, or included as part of an HTTP POST request. The encoding algorithm is not the same as the uuencode program. There are two interfaces provided by this module. The modern interface supports encoding bytes-like objects to ASCII bytes, and decoding bytes-like objects or strings containing ASCII to bytes. Both base-64 alphabets defined in RFC 3548 (normal, and URL- and filesystem-safe) are supported. The legacy interface does not support decoding from strings, but it does provide functions for encoding and decoding to and from file objects. It only supports the Base64 standard alphabet, and it adds newlines every 76 characters as per RFC 2045. Note that if you are looking for RFC 2045 support you probably want to be looking at the email package instead. Changed in version 3.3: ASCII-only Unicode strings are now accepted by the decoding functions of the modern interface. Changed in version 3.4: Any bytes-like objects are now accepted by all encoding and decoding functions in this module. Ascii85/Base85 support added. The modern interface provides:
base64.b64encode(s, altchars=None)
Encode the bytes-like object s using Base64 and return the encoded bytes. Optional altchars must be a bytes-like object of at least length 2 (additional characters are ignored) which specifies an alternative alphabet for the + and / characters. This allows an application to e.g. generate URL or filesystem safe Base64 strings. The default is None, for which the standard Base64 alphabet is used.
base64.b64decode(s, altchars=None, validate=False)
Decode the Base64 encoded bytes-like object or ASCII string s and return the decoded bytes. Optional altchars must be a bytes-like object or ASCII string of at least length 2 (additional characters are ignored) which specifies the alternative alphabet used instead of the + and / characters. A binascii.Error exception is raised if s is incorrectly padded. If validate is False (the default), characters that are neither in the normal base-64 alphabet nor the alternative alphabet are discarded prior to the padding check. If validate is True, these non-alphabet characters in the input result in a binascii.Error.
base64.standard_b64encode(s)
Encode bytes-like object s using the standard Base64 alphabet and return the encoded bytes.
base64.standard_b64decode(s)
Decode bytes-like object or ASCII string s using the standard Base64 alphabet and return the decoded bytes.
base64.urlsafe_b64encode(s)
Encode bytes-like object s using the URL- and filesystem-safe alphabet, which substitutes - instead of + and _ instead of / in the standard Base64 alphabet, and return the encoded bytes. The result can still contain =.
base64.urlsafe_b64decode(s)
Decode bytes-like object or ASCII string s using the URL- and filesystem-safe alphabet, which substitutes - instead of + and _ instead of / in the standard Base64 alphabet, and return the decoded bytes.
base64.b32encode(s)
Encode the bytes-like object s using Base32 and return the encoded bytes.
base64.b32decode(s, casefold=False, map01=None)
Decode the Base32 encoded bytes-like object or ASCII string s and return the decoded bytes. Optional casefold is a flag specifying whether a lowercase alphabet is acceptable as input. For security purposes, the default is False. RFC 3548 allows for optional mapping of the digit 0 (zero) to the letter O (oh), and for optional mapping of the digit 1 (one) to either the letter I (eye) or letter L (el). The optional argument map01 when not None, specifies which letter the digit 1 should be mapped to (when map01 is not None, the digit 0 is always mapped to the letter O). For security purposes the default is None, so that 0 and 1 are not allowed in the input. A binascii.Error is raised if s is incorrectly padded or if there are non-alphabet characters present in the input.
base64.b16encode(s)
Encode the bytes-like object s using Base16 and return the encoded bytes.
base64.b16decode(s, casefold=False)
Decode the Base16 encoded bytes-like object or ASCII string s and return the decoded bytes. Optional casefold is a flag specifying whether a lowercase alphabet is acceptable as input. For security purposes, the default is False. A binascii.Error is raised if s is incorrectly padded or if there are non-alphabet characters present in the input.
base64.a85encode(b, *, foldspaces=False, wrapcol=0, pad=False, adobe=False)
Encode the bytes-like object b using Ascii85 and return the encoded bytes. foldspaces is an optional flag that uses the special short sequence ‘y’ instead of 4 consecutive spaces (ASCII 0x20) as supported by ‘btoa’. This feature is not supported by the “standard” Ascii85 encoding. wrapcol controls whether the output should have newline (b'\n') characters added to it. If this is non-zero, each output line will be at most this many characters long. pad controls whether the input is padded to a multiple of 4 before encoding. Note that the btoa implementation always pads. adobe controls whether the encoded byte sequence is framed with <~ and ~>, which is used by the Adobe implementation. New in version 3.4.
base64.a85decode(b, *, foldspaces=False, adobe=False, ignorechars=b' \t\n\r\v')
Decode the Ascii85 encoded bytes-like object or ASCII string b and return the decoded bytes. foldspaces is a flag that specifies whether the ‘y’ short sequence should be accepted as shorthand for 4 consecutive spaces (ASCII 0x20). This feature is not supported by the “standard” Ascii85 encoding. adobe controls whether the input sequence is in Adobe Ascii85 format (i.e. is framed with <~ and ~>). ignorechars should be a bytes-like object or ASCII string containing characters to ignore from the input. This should only contain whitespace characters, and by default contains all whitespace characters in ASCII. New in version 3.4.
base64.b85encode(b, pad=False)
Encode the bytes-like object b using base85 (as used in e.g. git-style binary diffs) and return the encoded bytes. If pad is true, the input is padded with b'\0' so its length is a multiple of 4 bytes before encoding. New in version 3.4.
base64.b85decode(b)
Decode the base85-encoded bytes-like object or ASCII string b and return the decoded bytes. Padding is implicitly removed, if necessary. New in version 3.4.
The legacy interface:
base64.decode(input, output)
Decode the contents of the binary input file and write the resulting binary data to the output file. input and output must be file objects. input will be read until input.readline() returns an empty bytes object.
base64.decodebytes(s)
Decode the bytes-like object s, which must contain one or more lines of base64 encoded data, and return the decoded bytes. New in version 3.1.
base64.encode(input, output)
Encode the contents of the binary input file and write the resulting base64 encoded data to the output file. input and output must be file objects. input will be read until input.read() returns an empty bytes object. encode() inserts a newline character (b'\n') after every 76 bytes of the output, as well as ensuring that the output always ends with a newline, as per RFC 2045 (MIME).
base64.encodebytes(s)
Encode the bytes-like object s, which can contain arbitrary binary data, and return bytes containing the base64-encoded data, with newlines (b'\n') inserted after every 76 bytes of output, and ensuring that there is a trailing newline, as per RFC 2045 (MIME). New in version 3.1.
An example usage of the module: >>> import base64
>>> encoded = base64.b64encode(b'data to be encoded')
>>> encoded
b'ZGF0YSB0byBiZSBlbmNvZGVk'
>>> data = base64.b64decode(encoded)
>>> data
b'data to be encoded'
See also
Module binascii
Support module containing ASCII-to-binary and binary-to-ASCII conversions.
RFC 1521 - MIME (Multipurpose Internet Mail Extensions) Part One: Mechanisms for Specifying and Describing the Format of Internet Message Bodies
Section 5.2, “Base64 Content-Transfer-Encoding,” provides the definition of the base64 encoding. | python.library.base64 |
base64.a85decode(b, *, foldspaces=False, adobe=False, ignorechars=b' \t\n\r\v')
Decode the Ascii85 encoded bytes-like object or ASCII string b and return the decoded bytes. foldspaces is a flag that specifies whether the ‘y’ short sequence should be accepted as shorthand for 4 consecutive spaces (ASCII 0x20). This feature is not supported by the “standard” Ascii85 encoding. adobe controls whether the input sequence is in Adobe Ascii85 format (i.e. is framed with <~ and ~>). ignorechars should be a bytes-like object or ASCII string containing characters to ignore from the input. This should only contain whitespace characters, and by default contains all whitespace characters in ASCII. New in version 3.4. | python.library.base64#base64.a85decode |
base64.a85encode(b, *, foldspaces=False, wrapcol=0, pad=False, adobe=False)
Encode the bytes-like object b using Ascii85 and return the encoded bytes. foldspaces is an optional flag that uses the special short sequence ‘y’ instead of 4 consecutive spaces (ASCII 0x20) as supported by ‘btoa’. This feature is not supported by the “standard” Ascii85 encoding. wrapcol controls whether the output should have newline (b'\n') characters added to it. If this is non-zero, each output line will be at most this many characters long. pad controls whether the input is padded to a multiple of 4 before encoding. Note that the btoa implementation always pads. adobe controls whether the encoded byte sequence is framed with <~ and ~>, which is used by the Adobe implementation. New in version 3.4. | python.library.base64#base64.a85encode |
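The pairing of a85encode and a85decode can be sketched as follows; the payload bytes are arbitrary illustrative values:

```python
import base64

payload = b"four spaces:     end"  # contains a run of 0x20 bytes

# Plain round trip: a85decode inverts a85encode.
encoded = base64.a85encode(payload)
assert base64.a85decode(encoded) == payload

# foldspaces shrinks an aligned group of 4 spaces to 'y' on encode;
# the decoder must be told to accept the shorthand.
folded = base64.a85encode(payload, foldspaces=True)
assert base64.a85decode(folded, foldspaces=True) == payload

# adobe=True frames the output with <~ and ~>.
framed = base64.a85encode(payload, adobe=True)
assert framed.startswith(b"<~") and framed.endswith(b"~>")
assert base64.a85decode(framed, adobe=True) == payload
```

Note that the same flags must be used on both sides: output produced with foldspaces=True or adobe=True is not valid input for a decoder called with the defaults.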
base64.b16decode(s, casefold=False)
Decode the Base16 encoded bytes-like object or ASCII string s and return the decoded bytes. Optional casefold is a flag specifying whether a lowercase alphabet is acceptable as input. For security purposes, the default is False. A binascii.Error is raised if s is incorrectly padded or if there are non-alphabet characters present in the input. | python.library.base64#base64.b16decode |
base64.b16encode(s)
Encode the bytes-like object s using Base16 and return the encoded bytes. | python.library.base64#base64.b16encode |
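A minimal Base16 round trip (the input bytes are arbitrary):

```python
import base64
import binascii

raw = b"\xde\xad\xbe\xef"

encoded = base64.b16encode(raw)
assert encoded == b"DEADBEEF"  # Base16 is uppercase hexadecimal

# Lowercase input is rejected unless casefold=True.
assert base64.b16decode(b"deadbeef", casefold=True) == raw
try:
    base64.b16decode(b"deadbeef")  # default casefold=False
except binascii.Error:
    pass
else:
    raise AssertionError("lowercase input should have been rejected")
```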
base64.b32decode(s, casefold=False, map01=None)
Decode the Base32 encoded bytes-like object or ASCII string s and return the decoded bytes. Optional casefold is a flag specifying whether a lowercase alphabet is acceptable as input. For security purposes, the default is False. RFC 3548 allows for optional mapping of the digit 0 (zero) to the letter O (oh), and for optional mapping of the digit 1 (one) to either the letter I (eye) or letter L (el). The optional argument map01 when not None, specifies which letter the digit 1 should be mapped to (when map01 is not None, the digit 0 is always mapped to the letter O). For security purposes the default is None, so that 0 and 1 are not allowed in the input. A binascii.Error is raised if s is incorrectly padded or if there are non-alphabet characters present in the input. | python.library.base64#base64.b32decode |
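A brief sketch of the map01 behaviour; the single input byte 0x5d is chosen only because its Base32 encoding happens to contain the letter L:

```python
import base64

raw = b"\x5d"
encoded = base64.b32encode(raw)
assert encoded == b"LU======"

# A transcriber wrote the letter L as the digit 1.  map01=b"L" tells
# the decoder to map 1 back to L (and 0 is then always mapped to O).
assert base64.b32decode(b"1U======", map01=b"L") == raw
```

With the default map01=None, the digit 1 in the input would raise binascii.Error instead.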
base64.b32encode(s)
Encode the bytes-like object s using Base32 and return the encoded bytes. | python.library.base64#base64.b32encode |
base64.b64decode(s, altchars=None, validate=False)
Decode the Base64 encoded bytes-like object or ASCII string s and return the decoded bytes. Optional altchars must be a bytes-like object or ASCII string of at least length 2 (additional characters are ignored) which specifies the alternative alphabet used instead of the + and / characters. A binascii.Error exception is raised if s is incorrectly padded. If validate is False (the default), characters that are neither in the normal base-64 alphabet nor the alternative alphabet are discarded prior to the padding check. If validate is True, these non-alphabet characters in the input result in a binascii.Error. | python.library.base64#base64.b64decode |
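The effect of validate can be sketched as follows; the embedded newline stands in for any non-alphabet byte:

```python
import base64
import binascii

noisy = b"ZGF0\nYQ=="  # b64encode(b"data") with a newline inserted

# Default: non-alphabet bytes are silently discarded before decoding.
assert base64.b64decode(noisy) == b"data"

# validate=True turns the same input into an error.
try:
    base64.b64decode(noisy, validate=True)
except binascii.Error:
    pass
else:
    raise AssertionError("validate=True should reject non-alphabet bytes")
```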
base64.b64encode(s, altchars=None)
Encode the bytes-like object s using Base64 and return the encoded bytes. Optional altchars must be a bytes-like object of at least length 2 (additional characters are ignored) which specifies an alternative alphabet for the + and / characters. This allows an application to e.g. generate URL or filesystem safe Base64 strings. The default is None, for which the standard Base64 alphabet is used. | python.library.base64#base64.b64encode |
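A short sketch of altchars; the input bytes are chosen so the standard encoding contains both + and /:

```python
import base64

raw = b"\xfb\xff"
assert base64.b64encode(raw) == b"+/8="  # awkward in URLs or file names

# altchars swaps in replacements for '+' and '/'.
assert base64.b64encode(raw, altchars=b"-_") == b"-_8="
assert base64.b64decode(b"-_8=", altchars=b"-_") == raw
```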
base64.b85decode(b)
Decode the base85-encoded bytes-like object or ASCII string b and return the decoded bytes. Padding is implicitly removed, if necessary. New in version 3.4. | python.library.base64#base64.b85decode |
base64.b85encode(b, pad=False)
Encode the bytes-like object b using base85 (as used in e.g. git-style binary diffs) and return the encoded bytes. If pad is true, the input is padded with b'\0' so its length is a multiple of 4 bytes before encoding. New in version 3.4. | python.library.base64#base64.b85encode |
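A minimal sketch of the base85 pair; note that the NUL padding added by pad=True becomes part of the decoded result:

```python
import base64

raw = b"binary diff payload"
encoded = base64.b85encode(raw)
assert base64.b85decode(encoded) == raw

# pad=True pads the *input* to a multiple of 4 with b'\0' before
# encoding, so the decoder hands those NUL bytes back.
padded = base64.b85encode(b"abc", pad=True)
assert base64.b85decode(padded) == b"abc\x00"
```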
base64.decode(input, output)
Decode the contents of the binary input file and write the resulting binary data to the output file. input and output must be file objects. input will be read until input.readline() returns an empty bytes object. | python.library.base64#base64.decode |
base64.decodebytes(s)
Decode the bytes-like object s, which must contain one or more lines of base64 encoded data, and return the decoded bytes. New in version 3.1. | python.library.base64#base64.decodebytes |
base64.encode(input, output)
Encode the contents of the binary input file and write the resulting base64 encoded data to the output file. input and output must be file objects. input will be read until input.read() returns an empty bytes object. encode() inserts a newline character (b'\n') after every 76 bytes of the output, as well as ensuring that the output always ends with a newline, as per RFC 2045 (MIME). | python.library.base64#base64.encode |
base64.encodebytes(s)
Encode the bytes-like object s, which can contain arbitrary binary data, and return bytes containing the base64-encoded data, with newlines (b'\n') inserted after every 76 bytes of output, and ensuring that there is a trailing newline, as per RFC 2045 (MIME). New in version 3.1. | python.library.base64#base64.encodebytes |
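The wrapping behaviour can be sketched as follows; 100 zero bytes are used only to make the line lengths predictable:

```python
import base64

data = bytes(100)
encoded = base64.encodebytes(data)

# 100 input bytes give 136 base64 characters, wrapped into a full
# 76-character line and a shorter second line, each newline-terminated.
lines = encoded.split(b"\n")
assert len(lines[0]) == 76
assert encoded.endswith(b"\n")
assert base64.decodebytes(encoded) == data
```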
base64.standard_b64decode(s)
Decode bytes-like object or ASCII string s using the standard Base64 alphabet and return the decoded bytes. | python.library.base64#base64.standard_b64decode |
base64.standard_b64encode(s)
Encode bytes-like object s using the standard Base64 alphabet and return the encoded bytes. | python.library.base64#base64.standard_b64encode |
base64.urlsafe_b64decode(s)
Decode bytes-like object or ASCII string s using the URL- and filesystem-safe alphabet, which substitutes - instead of + and _ instead of / in the standard Base64 alphabet, and return the decoded bytes. | python.library.base64#base64.urlsafe_b64decode |
base64.urlsafe_b64encode(s)
Encode bytes-like object s using the URL- and filesystem-safe alphabet, which substitutes - instead of + and _ instead of / in the standard Base64 alphabet, and return the encoded bytes. The result can still contain =. | python.library.base64#base64.urlsafe_b64encode |
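A short sketch contrasting the standard and URL-safe alphabets; the input bytes are chosen so every output character needs substitution:

```python
import base64

raw = b"\xfb\xff\xbf"
standard = base64.b64encode(raw)
urlsafe = base64.urlsafe_b64encode(raw)

# '+' and '/' become '-' and '_'; any '=' padding would remain.
assert standard == b"+/+/"
assert urlsafe == b"-_-_"
assert base64.urlsafe_b64decode(urlsafe) == raw
```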
exception BaseException
The base class for all built-in exceptions. It is not meant to be directly inherited by user-defined classes (for that, use Exception). If str() is called on an instance of this class, the representation of the argument(s) to the instance is returned, or the empty string when there were no arguments.
args
The tuple of arguments given to the exception constructor. Some built-in exceptions (like OSError) expect a certain number of arguments and assign a special meaning to the elements of this tuple, while others are usually called only with a single string giving an error message.
with_traceback(tb)
This method sets tb as the new traceback for the exception and returns the exception object. It is usually used in exception handling code like this: try:
    ...
except SomeException:
    tb = sys.exc_info()[2]
    raise OtherException(...).with_traceback(tb) | python.library.exceptions#BaseException
args
The tuple of arguments given to the exception constructor. Some built-in exceptions (like OSError) expect a certain number of arguments and assign a special meaning to the elements of this tuple, while others are usually called only with a single string giving an error message. | python.library.exceptions#BaseException.args |
with_traceback(tb)
This method sets tb as the new traceback for the exception and returns the exception object. It is usually used in exception handling code like this: try:
    ...
except SomeException:
    tb = sys.exc_info()[2]
    raise OtherException(...).with_traceback(tb) | python.library.exceptions#BaseException.with_traceback
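The snippet above can be fleshed out into a runnable sketch; SomeException and OtherException are placeholder names, as in the documentation:

```python
import sys

class SomeException(Exception):
    pass

class OtherException(Exception):
    pass

caught = None
try:
    try:
        raise SomeException("original failure")
    except SomeException:
        tb = sys.exc_info()[2]
        # Re-raise as a different type while keeping the original traceback.
        raise OtherException("translated failure").with_traceback(tb)
except OtherException as exc:
    caught = exc

# The traceback still points at the line that raised SomeException.
assert isinstance(caught, OtherException)
assert caught.__traceback__ is not None
```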
bdb — Debugger framework Source code: Lib/bdb.py The bdb module handles basic debugger functions, like setting breakpoints or managing execution via the debugger. The following exception is defined:
exception bdb.BdbQuit
Exception raised by the Bdb class for quitting the debugger.
The bdb module also defines two classes:
class bdb.Breakpoint(file, line, temporary=0, cond=None, funcname=None)
This class implements temporary breakpoints, ignore counts, disabling and (re-)enabling, and conditionals. Breakpoints are indexed by number through a list called bpbynumber and by (file, line) pairs through bplist. The former points to a single instance of class Breakpoint. The latter points to a list of such instances since there may be more than one breakpoint per line. When creating a breakpoint, its associated filename should be in canonical form. If a funcname is defined, a breakpoint hit will be counted when the first line of that function is executed. A conditional breakpoint always counts a hit. Breakpoint instances have the following methods:
deleteMe()
Delete the breakpoint from the list associated to a file/line. If it is the last breakpoint in that position, it also deletes the entry for the file/line.
enable()
Mark the breakpoint as enabled.
disable()
Mark the breakpoint as disabled.
bpformat()
Return a string with all the information about the breakpoint, nicely formatted:
The breakpoint number.
Whether or not it is temporary.
Its file, line position.
The condition that causes a break.
Whether it must be ignored the next N times.
The breakpoint hit count. New in version 3.2.
bpprint(out=None)
Print the output of bpformat() to the file out, or if it is None, to standard output.
class bdb.Bdb(skip=None)
The Bdb class acts as a generic Python debugger base class. This class takes care of the details of the trace facility; a derived class should implement user interaction. The standard debugger class (pdb.Pdb) is an example. The skip argument, if given, must be an iterable of glob-style module name patterns. The debugger will not step into frames that originate in a module that matches one of these patterns. Whether a frame is considered to originate in a certain module is determined by the __name__ in the frame globals. New in version 3.1: The skip argument. The following methods of Bdb normally don’t need to be overridden.
canonic(filename)
Auxiliary method for getting a filename in a canonical form, that is, as a case-normalized (on case-insensitive filesystems) absolute path, stripped of surrounding angle brackets.
reset()
Set the botframe, stopframe, returnframe and quitting attributes with values ready to start debugging.
trace_dispatch(frame, event, arg)
This function is installed as the trace function of debugged frames. Its return value is the new trace function (in most cases, that is, itself). The default implementation decides how to dispatch a frame, depending on the type of event (passed as a string) that is about to be executed. event can be one of the following:
"line": A new line of code is going to be executed.
"call": A function is about to be called, or another code block entered.
"return": A function or other code block is about to return.
"exception": An exception has occurred.
"c_call": A C function is about to be called.
"c_return": A C function has returned.
"c_exception": A C function has raised an exception. For the Python events, specialized functions (see below) are called. For the C events, no action is taken. The arg parameter depends on the previous event. See the documentation for sys.settrace() for more information on the trace function. For more information on code and frame objects, refer to The standard type hierarchy.
dispatch_line(frame)
If the debugger should stop on the current line, invoke the user_line() method (which should be overridden in subclasses). Raise a BdbQuit exception if the Bdb.quitting flag is set (which can be set from user_line()). Return a reference to the trace_dispatch() method for further tracing in that scope.
dispatch_call(frame, arg)
If the debugger should stop on this function call, invoke the user_call() method (which should be overridden in subclasses). Raise a BdbQuit exception if the Bdb.quitting flag is set (which can be set from user_call()). Return a reference to the trace_dispatch() method for further tracing in that scope.
dispatch_return(frame, arg)
If the debugger should stop on this function return, invoke the user_return() method (which should be overridden in subclasses). Raise a BdbQuit exception if the Bdb.quitting flag is set (which can be set from user_return()). Return a reference to the trace_dispatch() method for further tracing in that scope.
dispatch_exception(frame, arg)
If the debugger should stop at this exception, invoke the user_exception() method (which should be overridden in subclasses). Raise a BdbQuit exception if the Bdb.quitting flag is set (which can be set from user_exception()). Return a reference to the trace_dispatch() method for further tracing in that scope.
Normally derived classes don’t override the following methods, but they may if they want to redefine the definition of stopping and breakpoints.
stop_here(frame)
This method checks if the frame is somewhere below botframe in the call stack. botframe is the frame in which debugging started.
break_here(frame)
This method checks if there is a breakpoint in the filename and line belonging to frame or, at least, in the current function. If the breakpoint is a temporary one, this method deletes it.
break_anywhere(frame)
This method checks if there is a breakpoint in the filename of the current frame.
Derived classes should override these methods to gain control over debugger operation.
user_call(frame, argument_list)
This method is called from dispatch_call() when there is the possibility that a break might be necessary anywhere inside the called function.
user_line(frame)
This method is called from dispatch_line() when either stop_here() or break_here() yields True.
user_return(frame, return_value)
This method is called from dispatch_return() when stop_here() yields True.
user_exception(frame, exc_info)
This method is called from dispatch_exception() when stop_here() yields True.
do_clear(arg)
Handle how a breakpoint must be removed when it is a temporary one. This method must be implemented by derived classes.
Derived classes and clients can call the following methods to affect the stepping state.
set_step()
Stop after one line of code.
set_next(frame)
Stop on the next line in or below the given frame.
set_return(frame)
Stop when returning from the given frame.
set_until(frame)
Stop when a line with a line number greater than the current one is reached, or when returning from the current frame.
set_trace([frame])
Start debugging from frame. If frame is not specified, debugging starts from caller’s frame.
set_continue()
Stop only at breakpoints or when finished. If there are no breakpoints, set the system trace function to None.
set_quit()
Set the quitting attribute to True. This raises BdbQuit in the next call to one of the dispatch_*() methods.
Derived classes and clients can call the following methods to manipulate breakpoints. These methods return a string containing an error message if something went wrong, or None if all is well.
set_break(filename, lineno, temporary=0, cond=None, funcname=None)
Set a new breakpoint. If the lineno line doesn’t exist for the filename passed as argument, return an error message. The filename should be in canonical form, as described in the canonic() method.
clear_break(filename, lineno)
Delete the breakpoints in filename and lineno. If none were set, an error message is returned.
clear_bpbynumber(arg)
Delete the breakpoint which has the index arg in the Breakpoint.bpbynumber. If arg is not numeric or out of range, return an error message.
clear_all_file_breaks(filename)
Delete all breakpoints in filename. If none were set, an error message is returned.
clear_all_breaks()
Delete all existing breakpoints.
get_bpbynumber(arg)
Return a breakpoint specified by the given number. If arg is a string, it will be converted to a number. A ValueError is raised if arg is a non-numeric string, or if no breakpoint with the given number exists or it has been deleted. New in version 3.2.
get_break(filename, lineno)
Check if there is a breakpoint for lineno of filename.
get_breaks(filename, lineno)
Return all breakpoints for lineno in filename, or an empty list if none are set.
get_file_breaks(filename)
Return all breakpoints in filename, or an empty list if none are set.
get_all_breaks()
Return all breakpoints that are set.
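The breakpoint-management methods above can be sketched as follows; using bdb's own source file as the target is only a convenient way to get a readable Python file:

```python
import bdb

dbg = bdb.Bdb()
target = bdb.__file__  # any readable Python source file works

# set_break returns an error-message string on failure, None on success.
err = dbg.set_break(target, 1)
assert err is None
assert dbg.get_break(target, 1)
assert dbg.get_file_breaks(target) == [1]

# clear_all_breaks deletes every breakpoint that was set.
dbg.clear_all_breaks()
assert not dbg.get_break(target, 1)
```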
Derived classes and clients can call the following methods to get a data structure representing a stack trace.
get_stack(f, t)
Get a list of records for a frame and all higher (calling) and lower frames, and the size of the higher part.
format_stack_entry(frame_lineno, lprefix=': ')
Return a string with information about a stack entry, identified by a (frame, lineno) tuple:
The canonical form of the filename which contains the frame.
The function name, or "<lambda>".
The input arguments.
The return value.
The line of code (if it exists).
The following two methods can be called by clients to use a debugger to debug a statement, given as a string.
run(cmd, globals=None, locals=None)
Debug a statement executed via the exec() function. globals defaults to __main__.__dict__, locals defaults to globals.
runeval(expr, globals=None, locals=None)
Debug an expression executed via the eval() function. globals and locals have the same meaning as in run().
runctx(cmd, globals, locals)
For backwards compatibility. Calls the run() method.
runcall(func, /, *args, **kwds)
Debug a single function call, and return its result.
Finally, the module defines the following functions:
bdb.checkfuncname(b, frame)
Check whether we should break here, depending on the way the breakpoint b was set. If it was set via line number, it checks if b.line is the same as the one in the frame also passed as argument. If the breakpoint was set via function name, we have to check we are in the right frame (the right function) and if we are in its first executable line.
bdb.effective(file, line, frame)
Determine if there is an effective (active) breakpoint at this line of code. Return a tuple of the breakpoint and a boolean that indicates if it is ok to delete a temporary breakpoint. Return (None, None) if there is no matching breakpoint.
bdb.set_trace()
Start debugging with a Bdb instance from caller’s frame. | python.library.bdb |
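As a minimal sketch of subclassing Bdb, the debugger below overrides user_line() to record every executed line and keeps stepping with set_step(); the LineTracer class and demo function are illustrative names, not part of the module:

```python
import bdb

class LineTracer(bdb.Bdb):
    """Minimal debugger that records each line executed."""

    def __init__(self):
        super().__init__()
        self.lines = []

    def user_line(self, frame):
        # Called from dispatch_line() whenever stop_here() or
        # break_here() says to stop; record the position, keep stepping.
        self.lines.append((frame.f_code.co_name, frame.f_lineno))
        self.set_step()

def demo():
    x = 1
    y = x + 1
    return y

tracer = LineTracer()
result = tracer.runcall(demo)
assert result == 2
assert any(name == "demo" for name, _ in tracer.lines)
```

Because no breakpoints are set and set_step() is re-armed on every stop, this traces every line of demo(); a real debugger subclass would also override user_call(), user_return(), and user_exception().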