class ast.NodeTransformer
A NodeVisitor subclass that walks the abstract syntax tree and allows modification of nodes. The NodeTransformer will walk the AST and use the return value of the visitor methods to replace or remove the old node. If the return value of the visitor method is None, the node will be removed from its location, otherwise it is replaced with the return value. The return value may be the original node in which case no replacement takes place. Here is an example transformer that rewrites all occurrences of name lookups (foo) to data['foo']: class RewriteName(NodeTransformer):
def visit_Name(self, node):
return Subscript(
value=Name(id='data', ctx=Load()),
slice=Constant(value=node.id),
ctx=node.ctx
)
Keep in mind that if the node you’re operating on has child nodes you must either transform the child nodes yourself or call the generic_visit() method for the node first. For nodes that were part of a collection of statements (that applies to all statement nodes), the visitor may also return a list of nodes rather than just a single node. If NodeTransformer introduces new nodes (that weren’t part of the original tree) without giving them location information (such as lineno), fix_missing_locations() should be called with the new sub-tree to recalculate the location information: tree = ast.parse('foo', mode='eval')
new_tree = fix_missing_locations(RewriteName().visit(tree))
Usually you use the transformer like this: node = YourTransformer().visit(node) | python.library.ast#ast.NodeTransformer |
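As a hedged end-to-end sketch of the RewriteName transformer above, the rewritten tree can be compiled and evaluated; the data mapping here is illustrative:

```python
import ast

class RewriteName(ast.NodeTransformer):
    def visit_Name(self, node):
        # Replace every name lookup `foo` with `data['foo']`.
        return ast.Subscript(
            value=ast.Name(id='data', ctx=ast.Load()),
            slice=ast.Constant(value=node.id),
            ctx=node.ctx,
        )

tree = ast.parse('foo', mode='eval')
# New nodes have no lineno/col_offset, so fix them up before compiling.
new_tree = ast.fix_missing_locations(RewriteName().visit(tree))
code = compile(new_tree, '<ast>', 'eval')
print(eval(code, {'data': {'foo': 42}}))  # prints 42
```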
class ast.NodeVisitor
A node visitor base class that walks the abstract syntax tree and calls a visitor function for every node found. This function may return a value which is forwarded by the visit() method. This class is meant to be subclassed, with the subclass adding visitor methods.
visit(node)
Visit a node. The default implementation calls the method called self.visit_classname where classname is the name of the node class, or generic_visit() if that method doesn’t exist.
generic_visit(node)
This visitor calls visit() on all children of the node. Note that child nodes of nodes that have a custom visitor method won’t be visited unless the visitor calls generic_visit() or visits them itself.
Don’t use the NodeVisitor if you want to apply changes to nodes during traversal. For this a special visitor exists (NodeTransformer) that allows modifications. Deprecated since version 3.8: Methods visit_Num(), visit_Str(), visit_Bytes(), visit_NameConstant() and visit_Ellipsis() are deprecated now and will not be called in future Python versions. Add the visit_Constant() method to handle all constant nodes. | python.library.ast#ast.NodeVisitor |
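A minimal read-only visitor sketch (the class name and source string are made up for the example); note the generic_visit() call that keeps the traversal descending into child nodes:

```python
import ast

class NameCollector(ast.NodeVisitor):
    """Collect the id of every Name node, in traversal order."""
    def __init__(self):
        self.names = []

    def visit_Name(self, node):
        self.names.append(node.id)
        # Keep visiting children; with a custom visitor method defined,
        # they would otherwise be skipped.
        self.generic_visit(node)

collector = NameCollector()
collector.visit(ast.parse('x = y + z'))
print(collector.names)  # prints ['x', 'y', 'z']
```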
generic_visit(node)
This visitor calls visit() on all children of the node. Note that child nodes of nodes that have a custom visitor method won’t be visited unless the visitor calls generic_visit() or visits them itself. | python.library.ast#ast.NodeVisitor.generic_visit |
visit(node)
Visit a node. The default implementation calls the method called self.visit_classname where classname is the name of the node class, or generic_visit() if that method doesn’t exist. | python.library.ast#ast.NodeVisitor.visit |
class ast.Global(names)
class ast.Nonlocal(names)
global and nonlocal statements. names is a list of raw strings. >>> print(ast.dump(ast.parse('global x,y,z'), indent=4))
Module(
body=[
Global(
names=[
'x',
'y',
'z'])],
type_ignores=[])
>>> print(ast.dump(ast.parse('nonlocal x,y,z'), indent=4))
Module(
body=[
Nonlocal(
names=[
'x',
'y',
'z'])],
type_ignores=[]) | python.library.ast#ast.Nonlocal |
class ast.UAdd
class ast.USub
class ast.Not
class ast.Invert
Unary operator tokens. Not is the not keyword, Invert is the ~ operator. >>> print(ast.dump(ast.parse('not x', mode='eval'), indent=4))
Expression(
body=UnaryOp(
op=Not(),
operand=Name(id='x', ctx=Load()))) | python.library.ast#ast.Not |
class ast.Eq
class ast.NotEq
class ast.Lt
class ast.LtE
class ast.Gt
class ast.GtE
class ast.Is
class ast.IsNot
class ast.In
class ast.NotIn
Comparison operator tokens. | python.library.ast#ast.NotEq |
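As a small sketch, a chained comparison shows where these tokens end up: a single Compare node carries one operator per comparison in its ops list.

```python
import ast

# A chained comparison produces one Compare node, not a nested tree;
# ops holds the operator tokens, comparators the right-hand operands.
tree = ast.parse('1 < x <= 10', mode='eval')
cmp_node = tree.body
print([type(op).__name__ for op in cmp_node.ops])  # prints ['Lt', 'LtE']
print(len(cmp_node.comparators))                   # prints 2
```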
class ast.Eq
class ast.NotEq
class ast.Lt
class ast.LtE
class ast.Gt
class ast.GtE
class ast.Is
class ast.IsNot
class ast.In
class ast.NotIn
Comparison operator tokens. | python.library.ast#ast.NotIn |
class ast.And
class ast.Or
Boolean operator tokens. | python.library.ast#ast.Or |
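A quick sketch of where these tokens live: And/Or sit in the op field of a BoolOp node, and a chained `or` collapses into one BoolOp with several values.

```python
import ast

# Chained 'or' yields a single BoolOp whose values list holds all
# three operands, rather than nested BoolOp nodes.
tree = ast.parse('a or b or c', mode='eval')
node = tree.body
print(type(node.op).__name__)        # prints Or
print([v.id for v in node.values])   # prints ['a', 'b', 'c']
```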
ast.parse(source, filename='<unknown>', mode='exec', *, type_comments=False, feature_version=None)
Parse the source into an AST node. Equivalent to compile(source,
filename, mode, ast.PyCF_ONLY_AST). If type_comments=True is given, the parser is modified to check and return type comments as specified by PEP 484 and PEP 526. This is equivalent to adding ast.PyCF_TYPE_COMMENTS to the flags passed to compile(). This will report syntax errors for misplaced type comments. Without this flag, type comments will be ignored, and the type_comment field on selected AST nodes will always be None. In addition, the locations of # type:
ignore comments will be returned as the type_ignores attribute of Module (otherwise it is always an empty list). In addition, if mode is 'func_type', the input syntax is modified to correspond to PEP 484 “signature type comments”, e.g. (str, int) -> List[str]. Also, setting feature_version to a tuple (major, minor) will attempt to parse using that Python version’s grammar. Currently major must equal to 3. For example, setting feature_version=(3, 4) will allow the use of async and await as variable names. The lowest supported version is (3, 4); the highest is sys.version_info[0:2]. Warning It is possible to crash the Python interpreter with a sufficiently large/complex string due to stack depth limitations in Python’s AST compiler. Changed in version 3.8: Added type_comments, mode='func_type' and feature_version. | python.library.ast#ast.parse |
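For instance, a sketch of the type_comments behaviour (the source string is illustrative):

```python
import ast

# With type_comments=True the parser records "# type:" comments on
# selected nodes; without the flag, type_comment is None.
src = "x = []  # type: list\n"
tree = ast.parse(src, type_comments=True)
print(tree.body[0].type_comment)   # prints list

plain = ast.parse(src)
print(plain.body[0].type_comment)  # prints None
```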
class ast.Pass
A pass statement. >>> print(ast.dump(ast.parse('pass'), indent=4))
Module(
body=[
Pass()],
type_ignores=[]) | python.library.ast#ast.Pass |
class ast.Add
class ast.Sub
class ast.Mult
class ast.Div
class ast.FloorDiv
class ast.Mod
class ast.Pow
class ast.LShift
class ast.RShift
class ast.BitOr
class ast.BitXor
class ast.BitAnd
class ast.MatMult
Binary operator tokens. | python.library.ast#ast.Pow |
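These tokens appear as the op field of a BinOp node; a quick sketch:

```python
import ast

# The operator token sits in BinOp.op; 2 ** 8 uses Pow.
tree = ast.parse('2 ** 8', mode='eval')
node = tree.body
print(type(node.op).__name__)              # prints Pow
print(node.left.value, node.right.value)   # prints 2 8
```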
ast.PyCF_ALLOW_TOP_LEVEL_AWAIT
Enables support for top-level await, async for, async with and async comprehensions. New in version 3.8. | python.library.ast#ast.PyCF_ALLOW_TOP_LEVEL_AWAIT |
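A hedged sketch of passing the flag to compile() so that await is legal at the top level (the awaited expression is illustrative):

```python
import ast
import asyncio
import types

# Without PyCF_ALLOW_TOP_LEVEL_AWAIT, compiling this 'eval' source
# would raise SyntaxError ('await' outside an async function).
code = compile('await asyncio.sleep(0)', '<input>', 'eval',
               flags=ast.PyCF_ALLOW_TOP_LEVEL_AWAIT)

# Evaluating the code object yields a coroutine that still has to be
# driven by an event loop.
coro = eval(code, {'asyncio': asyncio})
print(isinstance(coro, types.CoroutineType))  # prints True
asyncio.run(coro)
```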
ast.PyCF_ONLY_AST
Generates and returns an abstract syntax tree instead of returning a compiled code object. | python.library.ast#ast.PyCF_ONLY_AST |
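A small sketch of the flag in action, equivalent to calling ast.parse():

```python
import ast

# compile() with PyCF_ONLY_AST returns the AST itself; for mode='eval'
# that is an ast.Expression rather than a code object.
tree = compile('x + 1', '<input>', 'eval', flags=ast.PyCF_ONLY_AST)
print(type(tree).__name__)   # prints Expression
```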
ast.PyCF_TYPE_COMMENTS
Enables support for PEP 484 and PEP 526 style type comments (# type: <type>, # type: ignore <stuff>). New in version 3.8. | python.library.ast#ast.PyCF_TYPE_COMMENTS |
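A sketch combining this flag with PyCF_ONLY_AST, mirroring ast.parse(..., type_comments=True) (the source string is illustrative):

```python
import ast

# PyCF_TYPE_COMMENTS makes the parser record "# type:" comments;
# PyCF_ONLY_AST is OR-ed in so an AST comes back instead of bytecode.
tree = compile("x = 0  # type: int\n", '<input>', 'exec',
               flags=ast.PyCF_ONLY_AST | ast.PyCF_TYPE_COMMENTS)
print(tree.body[0].type_comment)   # prints int
```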
class ast.Raise(exc, cause)
A raise statement. exc is the exception object to be raised, normally a Call or Name, or None for a standalone raise. cause is the optional part for y in raise x from y. >>> print(ast.dump(ast.parse('raise x from y'), indent=4))
Module(
body=[
Raise(
exc=Name(id='x', ctx=Load()),
cause=Name(id='y', ctx=Load()))],
type_ignores=[]) | python.library.ast#ast.Raise |
class ast.Return(value)
A return statement. >>> print(ast.dump(ast.parse('return 4'), indent=4))
Module(
body=[
Return(
value=Constant(value=4))],
type_ignores=[]) | python.library.ast#ast.Return |
class ast.Add
class ast.Sub
class ast.Mult
class ast.Div
class ast.FloorDiv
class ast.Mod
class ast.Pow
class ast.LShift
class ast.RShift
class ast.BitOr
class ast.BitXor
class ast.BitAnd
class ast.MatMult
Binary operator tokens. | python.library.ast#ast.RShift |
class ast.Set(elts)
A set. elts holds a list of nodes representing the set’s elements. >>> print(ast.dump(ast.parse('{1, 2, 3}', mode='eval'), indent=4))
Expression(
body=Set(
elts=[
Constant(value=1),
Constant(value=2),
Constant(value=3)])) | python.library.ast#ast.Set |
class ast.ListComp(elt, generators)
class ast.SetComp(elt, generators)
class ast.GeneratorExp(elt, generators)
class ast.DictComp(key, value, generators)
List and set comprehensions, generator expressions, and dictionary comprehensions. elt (or key and value) is a single node representing the part that will be evaluated for each item. generators is a list of comprehension nodes. >>> print(ast.dump(ast.parse('[x for x in numbers]', mode='eval'), indent=4))
Expression(
body=ListComp(
elt=Name(id='x', ctx=Load()),
generators=[
comprehension(
target=Name(id='x', ctx=Store()),
iter=Name(id='numbers', ctx=Load()),
ifs=[],
is_async=0)]))
>>> print(ast.dump(ast.parse('{x: x**2 for x in numbers}', mode='eval'), indent=4))
Expression(
body=DictComp(
key=Name(id='x', ctx=Load()),
value=BinOp(
left=Name(id='x', ctx=Load()),
op=Pow(),
right=Constant(value=2)),
generators=[
comprehension(
target=Name(id='x', ctx=Store()),
iter=Name(id='numbers', ctx=Load()),
ifs=[],
is_async=0)]))
>>> print(ast.dump(ast.parse('{x for x in numbers}', mode='eval'), indent=4))
Expression(
body=SetComp(
elt=Name(id='x', ctx=Load()),
generators=[
comprehension(
target=Name(id='x', ctx=Store()),
iter=Name(id='numbers', ctx=Load()),
ifs=[],
is_async=0)])) | python.library.ast#ast.SetComp |
class ast.Slice(lower, upper, step)
Regular slicing (of the form lower:upper or lower:upper:step). Can occur only inside the slice field of Subscript, either directly or as an element of Tuple. >>> print(ast.dump(ast.parse('l[1:2]', mode='eval'), indent=4))
Expression(
body=Subscript(
value=Name(id='l', ctx=Load()),
slice=Slice(
lower=Constant(value=1),
upper=Constant(value=2)),
ctx=Load())) | python.library.ast#ast.Slice |
class ast.Starred(value, ctx)
A *var variable reference. value holds the variable, typically a Name node. This type must be used when building a Call node with *args. >>> print(ast.dump(ast.parse('a, *b = it'), indent=4))
Module(
body=[
Assign(
targets=[
Tuple(
elts=[
Name(id='a', ctx=Store()),
Starred(
value=Name(id='b', ctx=Store()),
ctx=Store())],
ctx=Store())],
value=Name(id='it', ctx=Load()))],
type_ignores=[]) | python.library.ast#ast.Starred |
class ast.Load
class ast.Store
class ast.Del
Variable references can be used to load the value of a variable, to assign a new value to it, or to delete it. Variable references are given a context to distinguish these cases. >>> print(ast.dump(ast.parse('a'), indent=4))
Module(
body=[
Expr(
value=Name(id='a', ctx=Load()))],
type_ignores=[])
>>> print(ast.dump(ast.parse('a = 1'), indent=4))
Module(
body=[
Assign(
targets=[
Name(id='a', ctx=Store())],
value=Constant(value=1))],
type_ignores=[])
>>> print(ast.dump(ast.parse('del a'), indent=4))
Module(
body=[
Delete(
targets=[
Name(id='a', ctx=Del())])],
type_ignores=[]) | python.library.ast#ast.Store |
class ast.Add
class ast.Sub
class ast.Mult
class ast.Div
class ast.FloorDiv
class ast.Mod
class ast.Pow
class ast.LShift
class ast.RShift
class ast.BitOr
class ast.BitXor
class ast.BitAnd
class ast.MatMult
Binary operator tokens. | python.library.ast#ast.Sub |
class ast.Subscript(value, slice, ctx)
A subscript, such as l[1]. value is the subscripted object (usually sequence or mapping). slice is an index, slice or key. It can be a Tuple and contain a Slice. ctx is Load, Store or Del according to the action performed with the subscript. >>> print(ast.dump(ast.parse('l[1:2, 3]', mode='eval'), indent=4))
Expression(
body=Subscript(
value=Name(id='l', ctx=Load()),
slice=Tuple(
elts=[
Slice(
lower=Constant(value=1),
upper=Constant(value=2)),
Constant(value=3)],
ctx=Load()),
ctx=Load())) | python.library.ast#ast.Subscript |
class ast.Try(body, handlers, orelse, finalbody)
try blocks. All attributes are lists of nodes to execute, except for handlers, which is a list of ExceptHandler nodes. >>> print(ast.dump(ast.parse("""
... try:
... ...
... except Exception:
... ...
... except OtherException as e:
... ...
... else:
... ...
... finally:
... ...
... """), indent=4))
Module(
body=[
Try(
body=[
Expr(
value=Constant(value=Ellipsis))],
handlers=[
ExceptHandler(
type=Name(id='Exception', ctx=Load()),
body=[
Expr(
value=Constant(value=Ellipsis))]),
ExceptHandler(
type=Name(id='OtherException', ctx=Load()),
name='e',
body=[
Expr(
value=Constant(value=Ellipsis))])],
orelse=[
Expr(
value=Constant(value=Ellipsis))],
finalbody=[
Expr(
value=Constant(value=Ellipsis))])],
type_ignores=[]) | python.library.ast#ast.Try |
class ast.List(elts, ctx)
class ast.Tuple(elts, ctx)
A list or tuple. elts holds a list of nodes representing the elements. ctx is Store if the container is an assignment target (i.e. (x,y)=something), and Load otherwise. >>> print(ast.dump(ast.parse('[1, 2, 3]', mode='eval'), indent=4))
Expression(
body=List(
elts=[
Constant(value=1),
Constant(value=2),
Constant(value=3)],
ctx=Load()))
>>> print(ast.dump(ast.parse('(1, 2, 3)', mode='eval'), indent=4))
Expression(
body=Tuple(
elts=[
Constant(value=1),
Constant(value=2),
Constant(value=3)],
ctx=Load())) | python.library.ast#ast.Tuple |
class ast.UAdd
class ast.USub
class ast.Not
class ast.Invert
Unary operator tokens. Not is the not keyword, Invert is the ~ operator. >>> print(ast.dump(ast.parse('not x', mode='eval'), indent=4))
Expression(
body=UnaryOp(
op=Not(),
operand=Name(id='x', ctx=Load()))) | python.library.ast#ast.UAdd |
class ast.UnaryOp(op, operand)
A unary operation. op is the operator, and operand any expression node. | python.library.ast#ast.UnaryOp |
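A quick sketch of the tree for a unary minus:

```python
import ast

# USub is the unary minus token; the operand is any expression node.
tree = ast.parse('-x', mode='eval')
node = tree.body
print(type(node.op).__name__)   # prints USub
print(node.operand.id)          # prints x
```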
ast.unparse(ast_obj)
Unparse an ast.AST object and generate a string with code that would produce an equivalent ast.AST object if parsed back with ast.parse(). Warning The produced code string will not necessarily be equal to the original code that generated the ast.AST object (without any compiler optimizations, such as constant tuples/frozensets). Warning Trying to unparse a highly complex expression can result in a RecursionError. New in version 3.9. | python.library.ast#ast.unparse |
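A round-trip sketch: the regenerated source may differ textually from the input (whitespace here is deliberately irregular), but it parses back to an equivalent tree.

```python
import ast

src = 'x = (1    +  2) * y'
regenerated = ast.unparse(ast.parse(src))
# Compare trees, not strings: unparse normalizes the layout.
print(ast.dump(ast.parse(regenerated)) == ast.dump(ast.parse(src)))  # prints True
```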
class ast.UAdd
class ast.USub
class ast.Not
class ast.Invert
Unary operator tokens. Not is the not keyword, Invert is the ~ operator. >>> print(ast.dump(ast.parse('not x', mode='eval'), indent=4))
Expression(
body=UnaryOp(
op=Not(),
operand=Name(id='x', ctx=Load()))) | python.library.ast#ast.USub |
ast.walk(node)
Recursively yield all descendant nodes in the tree starting at node (including node itself), in no specified order. This is useful if you only want to modify nodes in place and don’t care about the context. | python.library.ast#ast.walk |
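A typical use sketch: scan a whole tree for nodes of one type, ignoring where they sit.

```python
import ast

# walk() yields descendants in no guaranteed order, so sort the result.
# The parameter 'a' is an arg node, but its use in the body is a Name
# node and is picked up here, as is the free variable 'b'.
tree = ast.parse('def f(a): return a + b')
names = sorted({n.id for n in ast.walk(tree) if isinstance(n, ast.Name)})
print(names)   # prints ['a', 'b']
```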
class ast.While(test, body, orelse)
A while loop. test holds the condition, such as a Compare node. >>> print(ast.dump(ast.parse("""
... while x:
... ...
... else:
... ...
... """), indent=4))
Module(
body=[
While(
test=Name(id='x', ctx=Load()),
body=[
Expr(
value=Constant(value=Ellipsis))],
orelse=[
Expr(
value=Constant(value=Ellipsis))])],
type_ignores=[]) | python.library.ast#ast.While |
class ast.With(items, body, type_comment)
A with block. items is a list of withitem nodes representing the context managers, and body is the indented block inside the context.
type_comment
type_comment is an optional string with the type annotation as a comment. | python.library.ast#ast.With |
type_comment
type_comment is an optional string with the type annotation as a comment. | python.library.ast#ast.With.type_comment |
class ast.withitem(context_expr, optional_vars)
A single context manager in a with block. context_expr is the context manager, often a Call node. optional_vars is a Name, Tuple or List for the as foo part, or None if that isn’t used. >>> print(ast.dump(ast.parse("""\
... with a as b, c as d:
... something(b, d)
... """), indent=4))
Module(
body=[
With(
items=[
withitem(
context_expr=Name(id='a', ctx=Load()),
optional_vars=Name(id='b', ctx=Store())),
withitem(
context_expr=Name(id='c', ctx=Load()),
optional_vars=Name(id='d', ctx=Store()))],
body=[
Expr(
value=Call(
func=Name(id='something', ctx=Load()),
args=[
Name(id='b', ctx=Load()),
Name(id='d', ctx=Load())],
keywords=[]))])],
type_ignores=[]) | python.library.ast#ast.withitem |
class ast.Yield(value)
class ast.YieldFrom(value)
A yield or yield from expression. Because these are expressions, they must be wrapped in an Expr node if the value sent back is not used. >>> print(ast.dump(ast.parse('yield x'), indent=4))
Module(
body=[
Expr(
value=Yield(
value=Name(id='x', ctx=Load())))],
type_ignores=[])
>>> print(ast.dump(ast.parse('yield from x'), indent=4))
Module(
body=[
Expr(
value=YieldFrom(
value=Name(id='x', ctx=Load())))],
type_ignores=[]) | python.library.ast#ast.Yield |
class ast.Yield(value)
class ast.YieldFrom(value)
A yield or yield from expression. Because these are expressions, they must be wrapped in an Expr node if the value sent back is not used. >>> print(ast.dump(ast.parse('yield x'), indent=4))
Module(
body=[
Expr(
value=Yield(
value=Name(id='x', ctx=Load())))],
type_ignores=[])
>>> print(ast.dump(ast.parse('yield from x'), indent=4))
Module(
body=[
Expr(
value=YieldFrom(
value=Name(id='x', ctx=Load())))],
type_ignores=[]) | python.library.ast#ast.YieldFrom |
asynchat — Asynchronous socket command/response handler Source code: Lib/asynchat.py Deprecated since version 3.6: Please use asyncio instead. Note This module exists for backwards compatibility only. For new code we recommend using asyncio. This module builds on the asyncore infrastructure, simplifying asynchronous clients and servers and making it easier to handle protocols whose elements are terminated by arbitrary strings, or are of variable length. asynchat defines the abstract class async_chat that you subclass, providing implementations of the collect_incoming_data() and found_terminator() methods. It uses the same asynchronous loop as asyncore, and the two types of channel, asyncore.dispatcher and asynchat.async_chat, can freely be mixed in the channel map. Typically an asyncore.dispatcher server channel generates new asynchat.async_chat channel objects as it receives incoming connection requests.
class asynchat.async_chat
This class is an abstract subclass of asyncore.dispatcher. To make practical use of the code you must subclass async_chat, providing meaningful collect_incoming_data() and found_terminator() methods. The asyncore.dispatcher methods can be used, although not all make sense in a message/response context. Like asyncore.dispatcher, async_chat defines a set of events that are generated by an analysis of socket conditions after a select() call. Once the polling loop has been started the async_chat object’s methods are called by the event-processing framework with no action on the part of the programmer. Two class attributes can be modified, to improve performance, or possibly even to conserve memory.
ac_in_buffer_size
The asynchronous input buffer size (default 4096).
ac_out_buffer_size
The asynchronous output buffer size (default 4096).
Unlike asyncore.dispatcher, async_chat allows you to define a FIFO queue of producers. A producer need have only one method, more(), which should return data to be transmitted on the channel. The producer indicates exhaustion (i.e. that it contains no more data) by having its more() method return the empty bytes object. At this point the async_chat object removes the producer from the queue and starts using the next producer, if any. When the producer queue is empty the handle_write() method does nothing. You use the channel object’s set_terminator() method to describe how to recognize the end of, or an important breakpoint in, an incoming transmission from the remote endpoint. To build a functioning async_chat subclass your input methods collect_incoming_data() and found_terminator() must handle the data that the channel receives asynchronously. The methods are described below.
async_chat.close_when_done()
Pushes a None on to the producer queue. When this producer is popped off the queue it causes the channel to be closed.
async_chat.collect_incoming_data(data)
Called with data holding an arbitrary amount of received data. The default method, which must be overridden, raises a NotImplementedError exception.
async_chat.discard_buffers()
In emergencies this method will discard any data held in the input and/or output buffers and the producer queue.
async_chat.found_terminator()
Called when the incoming data stream matches the termination condition set by set_terminator(). The default method, which must be overridden, raises a NotImplementedError exception. The buffered input data should be available via an instance attribute.
async_chat.get_terminator()
Returns the current terminator for the channel.
async_chat.push(data)
Pushes data on to the channel’s queue to ensure its transmission. This is all you need to do to have the channel write the data out to the network, although it is possible to use your own producers in more complex schemes to implement encryption and chunking, for example.
async_chat.push_with_producer(producer)
Takes a producer object and adds it to the producer queue associated with the channel. When all currently-pushed producers have been exhausted the channel will consume this producer’s data by calling its more() method and send the data to the remote endpoint.
async_chat.set_terminator(term)
Sets the terminating condition to be recognized on the channel. term may be any of three types of value, corresponding to three different ways to handle incoming protocol data.
string: Will call found_terminator() when the string is found in the input stream.
integer: Will call found_terminator() when the indicated number of characters have been received.
None: The channel continues to collect data forever.
Note that any data following the terminator will be available for reading by the channel after found_terminator() is called.
asynchat Example The following partial example shows how HTTP requests can be read with async_chat. A web server might create an http_request_handler object for each incoming client connection. Notice that initially the channel terminator is set to match the blank line at the end of the HTTP headers, and a flag indicates that the headers are being read. Once the headers have been read, if the request is of type POST (indicating that further data are present in the input stream) then the Content-Length: header is used to set a numeric terminator to read the right amount of data from the channel. The handle_request() method is called once all relevant input has been marshalled, after setting the channel terminator to None to ensure that any extraneous data sent by the web client are ignored. import asynchat
class http_request_handler(asynchat.async_chat):
def __init__(self, sock, addr, sessions, log):
asynchat.async_chat.__init__(self, sock=sock)
self.addr = addr
self.sessions = sessions
self.ibuffer = []
self.obuffer = b""
self.set_terminator(b"\r\n\r\n")
self.reading_headers = True
self.handling = False
self.cgi_data = None
self.log = log
def collect_incoming_data(self, data):
"""Buffer the data"""
self.ibuffer.append(data)
def found_terminator(self):
if self.reading_headers:
self.reading_headers = False
self.parse_headers(b"".join(self.ibuffer))
self.ibuffer = []
if self.op.upper() == b"POST":
clen = self.headers.getheader("content-length")
self.set_terminator(int(clen))
else:
self.handling = True
self.set_terminator(None)
self.handle_request()
elif not self.handling:
self.set_terminator(None) # browsers sometimes over-send
self.cgi_data = parse(self.headers, b"".join(self.ibuffer))
self.handling = True
self.ibuffer = []
self.handle_request() | python.library.asynchat |
class asynchat.async_chat
This class is an abstract subclass of asyncore.dispatcher. To make practical use of the code you must subclass async_chat, providing meaningful collect_incoming_data() and found_terminator() methods. The asyncore.dispatcher methods can be used, although not all make sense in a message/response context. Like asyncore.dispatcher, async_chat defines a set of events that are generated by an analysis of socket conditions after a select() call. Once the polling loop has been started the async_chat object’s methods are called by the event-processing framework with no action on the part of the programmer. Two class attributes can be modified, to improve performance, or possibly even to conserve memory.
ac_in_buffer_size
The asynchronous input buffer size (default 4096).
ac_out_buffer_size
The asynchronous output buffer size (default 4096).
Unlike asyncore.dispatcher, async_chat allows you to define a FIFO queue of producers. A producer need have only one method, more(), which should return data to be transmitted on the channel. The producer indicates exhaustion (i.e. that it contains no more data) by having its more() method return the empty bytes object. At this point the async_chat object removes the producer from the queue and starts using the next producer, if any. When the producer queue is empty the handle_write() method does nothing. You use the channel object’s set_terminator() method to describe how to recognize the end of, or an important breakpoint in, an incoming transmission from the remote endpoint. To build a functioning async_chat subclass your input methods collect_incoming_data() and found_terminator() must handle the data that the channel receives asynchronously. The methods are described below. | python.library.asynchat#asynchat.async_chat |
ac_in_buffer_size
The asynchronous input buffer size (default 4096). | python.library.asynchat#asynchat.async_chat.ac_in_buffer_size |
ac_out_buffer_size
The asynchronous output buffer size (default 4096). | python.library.asynchat#asynchat.async_chat.ac_out_buffer_size |
async_chat.close_when_done()
Pushes a None on to the producer queue. When this producer is popped off the queue it causes the channel to be closed. | python.library.asynchat#asynchat.async_chat.close_when_done |
async_chat.collect_incoming_data(data)
Called with data holding an arbitrary amount of received data. The default method, which must be overridden, raises a NotImplementedError exception. | python.library.asynchat#asynchat.async_chat.collect_incoming_data |
async_chat.discard_buffers()
In emergencies this method will discard any data held in the input and/or output buffers and the producer queue. | python.library.asynchat#asynchat.async_chat.discard_buffers |
async_chat.found_terminator()
Called when the incoming data stream matches the termination condition set by set_terminator(). The default method, which must be overridden, raises a NotImplementedError exception. The buffered input data should be available via an instance attribute. | python.library.asynchat#asynchat.async_chat.found_terminator |
async_chat.get_terminator()
Returns the current terminator for the channel. | python.library.asynchat#asynchat.async_chat.get_terminator |
async_chat.push(data)
Pushes data on to the channel’s queue to ensure its transmission. This is all you need to do to have the channel write the data out to the network, although it is possible to use your own producers in more complex schemes to implement encryption and chunking, for example. | python.library.asynchat#asynchat.async_chat.push |
async_chat.push_with_producer(producer)
Takes a producer object and adds it to the producer queue associated with the channel. When all currently-pushed producers have been exhausted the channel will consume this producer’s data by calling its more() method and send the data to the remote endpoint. | python.library.asynchat#asynchat.async_chat.push_with_producer |
async_chat.set_terminator(term)
Sets the terminating condition to be recognized on the channel. term may be any of three types of value, corresponding to three different ways to handle incoming protocol data.
string: Will call found_terminator() when the string is found in the input stream.
integer: Will call found_terminator() when the indicated number of characters have been received.
None: The channel continues to collect data forever.
Note that any data following the terminator will be available for reading by the channel after found_terminator() is called. | python.library.asynchat#asynchat.async_chat.set_terminator |
asyncio — Asynchronous I/O Hello World! import asyncio
async def main():
print('Hello ...')
await asyncio.sleep(1)
print('... World!')
# Python 3.7+
asyncio.run(main())
asyncio is a library to write concurrent code using the async/await syntax. asyncio is used as a foundation for multiple Python asynchronous frameworks that provide high-performance network and web servers, database connection libraries, distributed task queues, etc. asyncio is often a perfect fit for IO-bound and high-level structured network code. asyncio provides a set of high-level APIs to: run Python coroutines concurrently and have full control over their execution; perform network IO and IPC; control subprocesses; distribute tasks via queues; synchronize concurrent code. Additionally, there are low-level APIs for library and framework developers to: create and manage event loops, which provide asynchronous APIs for networking, running subprocesses, handling OS signals, etc.; implement efficient protocols using transports; bridge callback-based libraries and code with async/await syntax. Reference High-level APIs Coroutines and Tasks Streams Synchronization Primitives Subprocesses Queues Exceptions Low-level APIs Event Loop Futures Transports and Protocols Policies Platform Support Guides and Tutorials High-level API Index Low-level API Index Developing with asyncio Note The source code for asyncio can be found in Lib/asyncio/. | python.library.asyncio |
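A small sketch of the high-level API described above: two coroutines sleeping concurrently under asyncio.run() (the names and delays are illustrative).

```python
import asyncio

async def fetch(tag, delay):
    await asyncio.sleep(delay)
    return tag

async def main():
    # gather() runs both coroutines concurrently and preserves argument
    # order, so the whole call takes roughly the longer delay, not the sum.
    results = await asyncio.gather(fetch('a', 0.01), fetch('b', 0.02))
    print(results)  # prints ['a', 'b']

asyncio.run(main())
```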
class asyncio.AbstractChildWatcher
add_child_handler(pid, callback, *args)
Register a new child handler. Arrange for callback(pid, returncode, *args) to be called when a process with PID equal to pid terminates. Specifying another callback for the same process replaces the previous handler. The callback callable must be thread-safe.
remove_child_handler(pid)
Removes the handler for process with PID equal to pid. The function returns True if the handler was successfully removed, False if there was nothing to remove.
attach_loop(loop)
Attach the watcher to an event loop. If the watcher was previously attached to an event loop, then it is first detached before attaching to the new loop. Note: loop may be None.
is_active()
Return True if the watcher is ready to use. Spawning a subprocess with an inactive current child watcher raises RuntimeError. New in version 3.8.
close()
Close the watcher. This method has to be called to ensure that underlying resources are cleaned-up. | python.library.asyncio-policy#asyncio.AbstractChildWatcher |
add_child_handler(pid, callback, *args)
Register a new child handler. Arrange for callback(pid, returncode, *args) to be called when a process with PID equal to pid terminates. Specifying another callback for the same process replaces the previous handler. The callback callable must be thread-safe. | python.library.asyncio-policy#asyncio.AbstractChildWatcher.add_child_handler |
attach_loop(loop)
Attach the watcher to an event loop. If the watcher was previously attached to an event loop, then it is first detached before attaching to the new loop. Note: loop may be None. | python.library.asyncio-policy#asyncio.AbstractChildWatcher.attach_loop |
close()
Close the watcher. This method has to be called to ensure that underlying resources are cleaned-up. | python.library.asyncio-policy#asyncio.AbstractChildWatcher.close |
is_active()
Return True if the watcher is ready to use. Spawning a subprocess with an inactive current child watcher raises RuntimeError. New in version 3.8. | python.library.asyncio-policy#asyncio.AbstractChildWatcher.is_active |
remove_child_handler(pid)
Removes the handler for process with PID equal to pid. The function returns True if the handler was successfully removed, False if there was nothing to remove. | python.library.asyncio-policy#asyncio.AbstractChildWatcher.remove_child_handler |
class asyncio.AbstractEventLoop
Abstract base class for asyncio-compliant event loops. The Event Loop Methods section lists all methods that an alternative implementation of AbstractEventLoop should have defined. | python.library.asyncio-eventloop#asyncio.AbstractEventLoop |
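Concrete loops such as the one returned by asyncio.new_event_loop() implement the AbstractEventLoop interface, which can be checked with isinstance(). A minimal sketch (the coroutine name answer is illustrative):

```python
import asyncio

# Any concrete loop implements the AbstractEventLoop interface.
loop = asyncio.new_event_loop()
assert isinstance(loop, asyncio.AbstractEventLoop)

async def answer():
    return 42

# Drive a coroutine to completion on the loop, then release its resources.
result = loop.run_until_complete(answer())
loop.close()
print(result)  # 42
```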
class asyncio.AbstractEventLoopPolicy
An abstract base class for asyncio policies.
get_event_loop()
Get the event loop for the current context. Return an event loop object implementing the AbstractEventLoop interface. This method should never return None. Changed in version 3.6.
set_event_loop(loop)
Set the event loop for the current context to loop.
new_event_loop()
Create and return a new event loop object. This method should never return None.
get_child_watcher()
Get a child process watcher object. Return a watcher object implementing the AbstractChildWatcher interface. This function is Unix specific.
set_child_watcher(watcher)
Set the current child process watcher to watcher. This function is Unix specific. | python.library.asyncio-policy#asyncio.AbstractEventLoopPolicy |
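The policy methods above can be exercised through the currently installed policy. A short sketch using the default policy:

```python
import asyncio

# The installed policy implements AbstractEventLoopPolicy.
policy = asyncio.get_event_loop_policy()
assert isinstance(policy, asyncio.AbstractEventLoopPolicy)

# new_event_loop() must never return None.
loop = policy.new_event_loop()
assert loop is not None
loop.close()
```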
get_child_watcher()
Get a child process watcher object. Return a watcher object implementing the AbstractChildWatcher interface. This function is Unix specific. | python.library.asyncio-policy#asyncio.AbstractEventLoopPolicy.get_child_watcher |
get_event_loop()
Get the event loop for the current context. Return an event loop object implementing the AbstractEventLoop interface. This method should never return None. Changed in version 3.6. | python.library.asyncio-policy#asyncio.AbstractEventLoopPolicy.get_event_loop |
new_event_loop()
Create and return a new event loop object. This method should never return None. | python.library.asyncio-policy#asyncio.AbstractEventLoopPolicy.new_event_loop |
set_child_watcher(watcher)
Set the current child process watcher to watcher. This function is Unix specific. | python.library.asyncio-policy#asyncio.AbstractEventLoopPolicy.set_child_watcher |
set_event_loop(loop)
Set the event loop for the current context to loop. | python.library.asyncio-policy#asyncio.AbstractEventLoopPolicy.set_event_loop |
asyncio.all_tasks(loop=None)
Return a set of not yet finished Task objects run by the loop. If loop is None, get_running_loop() is used for getting current loop. New in version 3.7. | python.library.asyncio-task#asyncio.all_tasks |
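The returned set includes every task that has not finished, including the task running the calling coroutine itself. A small sketch (worker is an illustrative name):

```python
import asyncio

async def worker():
    await asyncio.sleep(0.05)

async def main():
    tasks = [asyncio.create_task(worker()) for _ in range(3)]
    # all_tasks() sees the three workers plus the task running main() itself.
    count = len(asyncio.all_tasks())
    await asyncio.gather(*tasks)
    return count

count = asyncio.run(main())
print(count)  # 4
```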
asyncio.subprocess.DEVNULL
Special value that can be used as the stdin, stdout or stderr argument to process creation functions. It indicates that the special file os.devnull will be used for the corresponding subprocess stream. | python.library.asyncio-subprocess#asyncio.asyncio.subprocess.DEVNULL |
asyncio.subprocess.PIPE
Can be passed to the stdin, stdout or stderr parameters. If PIPE is passed to stdin argument, the Process.stdin attribute will point to a StreamWriter instance. If PIPE is passed to stdout or stderr arguments, the Process.stdout and Process.stderr attributes will point to StreamReader instances. | python.library.asyncio-subprocess#asyncio.asyncio.subprocess.PIPE |
class asyncio.subprocess.Process
An object that wraps OS processes created by the create_subprocess_exec() and create_subprocess_shell() functions. This class is designed to have a similar API to the subprocess.Popen class, but there are some notable differences: unlike Popen, Process instances do not have an equivalent to the poll() method; the communicate() and wait() methods don’t have a timeout parameter: use the wait_for() function; the Process.wait() method is asynchronous, whereas subprocess.Popen.wait() method is implemented as a blocking busy loop; the universal_newlines parameter is not supported. This class is not thread safe. See also the Subprocess and Threads section.
coroutine wait()
Wait for the child process to terminate. Set and return the returncode attribute. Note This method can deadlock when using stdout=PIPE or stderr=PIPE and the child process generates so much output that it blocks waiting for the OS pipe buffer to accept more data. Use the communicate() method when using pipes to avoid this condition.
coroutine communicate(input=None)
Interact with process: send data to stdin (if input is not None); read data from stdout and stderr, until EOF is reached; wait for process to terminate. The optional input argument is the data (bytes object) that will be sent to the child process. Return a tuple (stdout_data, stderr_data). If either BrokenPipeError or ConnectionResetError exception is raised when writing input into stdin, the exception is ignored. This condition occurs when the process exits before all data are written into stdin. If it is desired to send data to the process’ stdin, the process needs to be created with stdin=PIPE. Similarly, to get anything other than None in the result tuple, the process has to be created with stdout=PIPE and/or stderr=PIPE arguments. Note that the data read is buffered in memory, so do not use this method if the data size is large or unlimited.
send_signal(signal)
Sends the signal signal to the child process. Note On Windows, SIGTERM is an alias for terminate(). CTRL_C_EVENT and CTRL_BREAK_EVENT can be sent to processes started with a creationflags parameter which includes CREATE_NEW_PROCESS_GROUP.
terminate()
Stop the child process. On POSIX systems this method sends signal.SIGTERM to the child process. On Windows the Win32 API function TerminateProcess() is called to stop the child process.
kill()
Kill the child process. On POSIX systems this method sends SIGKILL to the child process. On Windows this method is an alias for terminate().
stdin
Standard input stream (StreamWriter) or None if the process was created with stdin=None.
stdout
Standard output stream (StreamReader) or None if the process was created with stdout=None.
stderr
Standard error stream (StreamReader) or None if the process was created with stderr=None.
Warning Use the communicate() method rather than process.stdin.write(), await process.stdout.read() or await process.stderr.read(). This avoids deadlocks due to streams pausing reading or writing and blocking the child process.
pid
Process identification number (PID). Note that for processes created by the create_subprocess_shell() function, this attribute is the PID of the spawned shell.
returncode
Return code of the process when it exits. A None value indicates that the process has not terminated yet. A negative value -N indicates that the child was terminated by signal N (POSIX only). | python.library.asyncio-subprocess#asyncio.asyncio.subprocess.Process |
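The Process API above can be sketched end to end by spawning the current interpreter (portable across platforms) and collecting its output with communicate():

```python
import asyncio
import sys

async def main():
    # Spawn the current Python interpreter so the example is portable.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "print('hello')",
        stdout=asyncio.subprocess.PIPE,
    )
    # communicate() reads stdout to EOF and waits for the process.
    stdout, stderr = await proc.communicate()
    return proc.returncode, stdout.decode().strip()

rc, out = asyncio.run(main())
print(rc, out)  # 0 hello
```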
coroutine communicate(input=None)
Interact with process: send data to stdin (if input is not None); read data from stdout and stderr, until EOF is reached; wait for process to terminate. The optional input argument is the data (bytes object) that will be sent to the child process. Return a tuple (stdout_data, stderr_data). If either BrokenPipeError or ConnectionResetError exception is raised when writing input into stdin, the exception is ignored. This condition occurs when the process exits before all data are written into stdin. If it is desired to send data to the process’ stdin, the process needs to be created with stdin=PIPE. Similarly, to get anything other than None in the result tuple, the process has to be created with stdout=PIPE and/or stderr=PIPE arguments. Note that the data read is buffered in memory, so do not use this method if the data size is large or unlimited. | python.library.asyncio-subprocess#asyncio.asyncio.subprocess.Process.communicate
kill()
Kill the child process. On POSIX systems this method sends SIGKILL to the child process. On Windows this method is an alias for terminate(). | python.library.asyncio-subprocess#asyncio.asyncio.subprocess.Process.kill |
pid
Process identification number (PID). Note that for processes created by the create_subprocess_shell() function, this attribute is the PID of the spawned shell. | python.library.asyncio-subprocess#asyncio.asyncio.subprocess.Process.pid |
returncode
Return code of the process when it exits. A None value indicates that the process has not terminated yet. A negative value -N indicates that the child was terminated by signal N (POSIX only). | python.library.asyncio-subprocess#asyncio.asyncio.subprocess.Process.returncode |
send_signal(signal)
Sends the signal signal to the child process. Note On Windows, SIGTERM is an alias for terminate(). CTRL_C_EVENT and CTRL_BREAK_EVENT can be sent to processes started with a creationflags parameter which includes CREATE_NEW_PROCESS_GROUP. | python.library.asyncio-subprocess#asyncio.asyncio.subprocess.Process.send_signal |
stderr
Standard error stream (StreamReader) or None if the process was created with stderr=None. | python.library.asyncio-subprocess#asyncio.asyncio.subprocess.Process.stderr |
stdin
Standard input stream (StreamWriter) or None if the process was created with stdin=None. | python.library.asyncio-subprocess#asyncio.asyncio.subprocess.Process.stdin |
stdout
Standard output stream (StreamReader) or None if the process was created with stdout=None. | python.library.asyncio-subprocess#asyncio.asyncio.subprocess.Process.stdout |
terminate()
Stop the child process. On POSIX systems this method sends signal.SIGTERM to the child process. On Windows the Win32 API function TerminateProcess() is called to stop the child process. | python.library.asyncio-subprocess#asyncio.asyncio.subprocess.Process.terminate |
coroutine wait()
Wait for the child process to terminate. Set and return the returncode attribute. Note This method can deadlock when using stdout=PIPE or stderr=PIPE and the child process generates so much output that it blocks waiting for the OS pipe buffer to accept more data. Use the communicate() method when using pipes to avoid this condition. | python.library.asyncio-subprocess#asyncio.asyncio.subprocess.Process.wait |
asyncio.subprocess.STDOUT
Special value that can be used as the stderr argument and indicates that standard error should be redirected into standard output. | python.library.asyncio-subprocess#asyncio.asyncio.subprocess.STDOUT |
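Passing STDOUT as the stderr argument merges the child's error output into its stdout pipe, so a single communicate() read captures both streams:

```python
import asyncio
import sys

async def main():
    # stderr=STDOUT redirects the child's stderr into its stdout pipe.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c",
        "import sys; print('out'); print('err', file=sys.stderr)",
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT,
    )
    merged, _ = await proc.communicate()
    return merged.decode()

merged = asyncio.run(main())
```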
asyncio.as_completed(aws, *, loop=None, timeout=None)
Run awaitable objects in the aws iterable concurrently. Return an iterator of coroutines. Each coroutine returned can be awaited to get the earliest next result from the iterable of the remaining awaitables. Raises asyncio.TimeoutError if the timeout occurs before all Futures are done. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. Example: for coro in as_completed(aws):
earliest_result = await coro
# ... | python.library.asyncio-task#asyncio.as_completed |
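The fragment above can be expanded into a complete, runnable sketch; the names delayed, fast, and slow are illustrative:

```python
import asyncio

async def delayed(value, delay):
    await asyncio.sleep(delay)
    return value

async def main():
    aws = [delayed("slow", 0.1), delayed("fast", 0.01)]
    results = []
    # Each yielded coroutine resolves to the earliest remaining result.
    for coro in asyncio.as_completed(aws):
        results.append(await coro)
    return results

results = asyncio.run(main())
print(results)  # ['fast', 'slow']
```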
class asyncio.BaseProtocol
Base protocol with methods that all protocols share. | python.library.asyncio-protocol#asyncio.BaseProtocol |
BaseProtocol.connection_lost(exc)
Called when the connection is lost or closed. The argument is either an exception object or None. The latter means a regular EOF is received, or the connection was aborted or closed by this side of the connection. | python.library.asyncio-protocol#asyncio.BaseProtocol.connection_lost |
BaseProtocol.connection_made(transport)
Called when a connection is made. The transport argument is the transport representing the connection. The protocol is responsible for storing the reference to its transport. | python.library.asyncio-protocol#asyncio.BaseProtocol.connection_made |
BaseProtocol.pause_writing()
Called when the transport’s buffer goes over the high watermark. | python.library.asyncio-protocol#asyncio.BaseProtocol.pause_writing |
BaseProtocol.resume_writing()
Called when the transport’s buffer drains below the low watermark. | python.library.asyncio-protocol#asyncio.BaseProtocol.resume_writing |
class asyncio.BaseTransport
Base class for all transports. Contains methods that all asyncio transports share. | python.library.asyncio-protocol#asyncio.BaseTransport |
BaseTransport.close()
Close the transport. If the transport has a buffer for outgoing data, buffered data will be flushed asynchronously. No more data will be received. After all buffered data is flushed, the protocol’s protocol.connection_lost() method will be called with None as its argument. | python.library.asyncio-protocol#asyncio.BaseTransport.close |
BaseTransport.get_extra_info(name, default=None)
Return information about the transport or underlying resources it uses. name is a string representing the piece of transport-specific information to get. default is the value to return if the information is not available, or if the transport does not support querying it with the given third-party event loop implementation or on the current platform. For example, the following code attempts to get the underlying socket object of the transport: sock = transport.get_extra_info('socket')
if sock is not None:
print(sock.getsockopt(...))
Categories of information that can be queried on some transports:
socket:
'peername': the remote address to which the socket is connected, result of socket.socket.getpeername() (None on error)
'socket': socket.socket instance
'sockname': the socket’s own address, result of socket.socket.getsockname()
SSL socket:
'compression': the compression algorithm being used as a string, or None if the connection isn’t compressed; result of ssl.SSLSocket.compression()
'cipher': a three-value tuple containing the name of the cipher being used, the version of the SSL protocol that defines its use, and the number of secret bits being used; result of ssl.SSLSocket.cipher()
'peercert': peer certificate; result of ssl.SSLSocket.getpeercert()
'sslcontext': ssl.SSLContext instance
'ssl_object': ssl.SSLObject or ssl.SSLSocket instance
pipe:
'pipe': pipe object
subprocess:
'subprocess': subprocess.Popen instance | python.library.asyncio-protocol#asyncio.BaseTransport.get_extra_info |
BaseTransport.get_protocol()
Return the current protocol. | python.library.asyncio-protocol#asyncio.BaseTransport.get_protocol |
BaseTransport.is_closing()
Return True if the transport is closing or is closed. | python.library.asyncio-protocol#asyncio.BaseTransport.is_closing |
BaseTransport.set_protocol(protocol)
Set a new protocol. Switching protocol should only be done when both protocols are documented to support the switch. | python.library.asyncio-protocol#asyncio.BaseTransport.set_protocol |
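The BaseTransport methods above can be observed on a real connection. A sketch that starts a throwaway echo server on a random local port and queries get_extra_info() on the client transport (the Echo protocol is illustrative):

```python
import asyncio

class Echo(asyncio.Protocol):
    def connection_made(self, transport):
        # The protocol is responsible for storing its transport reference.
        self.transport = transport

    def data_received(self, data):
        self.transport.write(data)
        self.transport.close()  # triggers EOF on the client side

async def main():
    loop = asyncio.get_running_loop()
    server = await loop.create_server(Echo, "127.0.0.1", 0)
    host, port = server.sockets[0].getsockname()

    reader, writer = await asyncio.open_connection(host, port)
    # 'peername' belongs to the socket category of get_extra_info().
    peer = writer.transport.get_extra_info("peername")
    writer.write(b"ping")
    data = await reader.read()  # read until EOF
    writer.close()
    server.close()
    await server.wait_closed()
    return data, peer

data, peer = asyncio.run(main())
```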
class asyncio.BoundedSemaphore(value=1, *, loop=None)
A bounded semaphore object. Not thread-safe. Bounded Semaphore is a version of Semaphore that raises a ValueError in release() if it increases the internal counter above the initial value. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. | python.library.asyncio-sync#asyncio.BoundedSemaphore |
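The extra release() check is what distinguishes BoundedSemaphore from Semaphore; a small sketch of the ValueError it raises:

```python
import asyncio

async def main():
    sem = asyncio.BoundedSemaphore(1)
    async with sem:
        pass  # counter is back at its initial value (1) on exit
    try:
        sem.release()  # would push the counter above the initial value
    except ValueError:
        return "bounded"
    return "unbounded"

result = asyncio.run(main())
print(result)  # bounded
```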
class asyncio.BufferedProtocol(BaseProtocol)
A base class for implementing streaming protocols with manual control of the receive buffer. | python.library.asyncio-protocol#asyncio.BufferedProtocol |
BufferedProtocol.buffer_updated(nbytes)
Called when the buffer was updated with the received data. nbytes is the total number of bytes that were written to the buffer. | python.library.asyncio-protocol#asyncio.BufferedProtocol.buffer_updated |
BufferedProtocol.eof_received()
See the documentation of the protocol.eof_received() method. | python.library.asyncio-protocol#asyncio.BufferedProtocol.eof_received |
BufferedProtocol.get_buffer(sizehint)
Called to allocate a new receive buffer. sizehint is the recommended minimum size for the returned buffer. It is acceptable to return smaller or larger buffers than what sizehint suggests. When set to -1, the buffer size can be arbitrary. It is an error to return a buffer with a zero size. get_buffer() must return an object implementing the buffer protocol. | python.library.asyncio-protocol#asyncio.BufferedProtocol.get_buffer |
exception asyncio.CancelledError
The operation has been cancelled. This exception can be caught to perform custom operations when asyncio Tasks are cancelled. In almost all situations the exception must be re-raised. Changed in version 3.8: CancelledError is now a subclass of BaseException. | python.library.asyncio-exceptions#asyncio.CancelledError |
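Catching CancelledError allows cleanup, but the exception should be re-raised so the task is actually marked as cancelled. A sketch (worker and log are illustrative names):

```python
import asyncio

async def worker(log):
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        log.append("cleanup")
        raise  # re-raise so the task ends up in the cancelled state

async def main():
    log = []
    task = asyncio.create_task(worker(log))
    await asyncio.sleep(0.01)
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return log, task.cancelled()

log, cancelled = asyncio.run(main())
print(log, cancelled)  # ['cleanup'] True
```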
class asyncio.Condition(lock=None, *, loop=None)
A Condition object. Not thread-safe. An asyncio condition primitive can be used by a task to wait for some event to happen and then get exclusive access to a shared resource. In essence, a Condition object combines the functionality of an Event and a Lock. It is possible to have multiple Condition objects share one Lock, which allows coordinating exclusive access to a shared resource between different tasks interested in particular states of that shared resource. The optional lock argument must be a Lock object or None. In the latter case a new Lock object is created automatically. Deprecated since version 3.8, will be removed in version 3.10: The loop parameter. The preferred way to use a Condition is an async with statement: cond = asyncio.Condition()
# ... later
async with cond:
await cond.wait()
which is equivalent to: cond = asyncio.Condition()
# ... later
await cond.acquire()
try:
await cond.wait()
finally:
cond.release()
coroutine acquire()
Acquire the underlying lock. This method waits until the underlying lock is unlocked, sets it to locked and returns True.
notify(n=1)
Wake up at most n tasks (1 by default) waiting on this condition. The method is a no-op if no tasks are waiting. The lock must be acquired before this method is called and released shortly after. If called with an unlocked lock a RuntimeError error is raised.
locked()
Return True if the underlying lock is acquired.
notify_all()
Wake up all tasks waiting on this condition. This method acts like notify(), but wakes up all waiting tasks. The lock must be acquired before this method is called and released shortly after. If called with an unlocked lock a RuntimeError error is raised.
release()
Release the underlying lock. When invoked on an unlocked lock, a RuntimeError is raised.
coroutine wait()
Wait until notified. If the calling task has not acquired the lock when this method is called, a RuntimeError is raised. This method releases the underlying lock, and then blocks until it is awakened by a notify() or notify_all() call. Once awakened, the Condition re-acquires its lock and this method returns True.
coroutine wait_for(predicate)
Wait until a predicate becomes true. The predicate must be a callable whose result will be interpreted as a boolean value. The final value is the return value. | python.library.asyncio-sync#asyncio.Condition
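The Condition methods above combine into the usual producer/consumer pattern; a runnable sketch where consumer, producer, and the state dict are illustrative:

```python
import asyncio

async def consumer(cond, state):
    async with cond:
        # wait_for() releases the lock while waiting and re-checks the
        # predicate each time the condition is notified.
        await cond.wait_for(lambda: state["ready"])
        return state["value"]

async def producer(cond, state):
    await asyncio.sleep(0.01)
    async with cond:
        state["ready"] = True
        state["value"] = 99
        cond.notify_all()

async def main():
    cond = asyncio.Condition()
    state = {"ready": False, "value": None}
    got, _ = await asyncio.gather(consumer(cond, state),
                                  producer(cond, state))
    return got

got = asyncio.run(main())
print(got)  # 99
```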
coroutine acquire()
Acquire the underlying lock. This method waits until the underlying lock is unlocked, sets it to locked and returns True. | python.library.asyncio-sync#asyncio.Condition.acquire |
locked()
Return True if the underlying lock is acquired. | python.library.asyncio-sync#asyncio.Condition.locked |