writelines(list) Writes the concatenated list of strings to the stream (possibly by reusing the write() method). The standard bytes-to-bytes codecs do not support this method.
python.library.codecs#codecs.StreamWriter.writelines
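A minimal sketch of writelines() in action, assuming a UTF-8 StreamWriter obtained from codecs.getwriter() and an in-memory byte stream:

import codecs
import io

buf = io.BytesIO()
writer = codecs.getwriter('utf-8')(buf)  # StreamWriter wrapping a byte stream

# writelines() concatenates the strings and writes the encoded result.
writer.writelines(['alpha: \u03b1\n', 'beta: \u03b2\n'])
print(buf.getvalue())  # b'alpha: \xce\xb1\nbeta: \xce\xb2\n'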
codecs.strict_errors(exception) Implements the 'strict' error handling: each encoding or decoding error raises a UnicodeError.
python.library.codecs#codecs.strict_errors
codecs.xmlcharrefreplace_errors(exception) Implements the 'xmlcharrefreplace' error handling (for encoding with text encodings only): the unencodable character is replaced by an appropriate XML character reference.
python.library.codecs#codecs.xmlcharrefreplace_errors
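A short illustration of both handlers via str.encode(), which accepts the same registered error-handler names:

# 'strict' is the default: an unencodable character raises UnicodeEncodeError.
try:
    'caf\u00e9'.encode('ascii')
except UnicodeEncodeError as exc:
    print('strict:', exc.reason)

# 'xmlcharrefreplace' substitutes an XML character reference instead.
print('caf\u00e9'.encode('ascii', errors='xmlcharrefreplace'))  # b'caf&#233;'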
codeop — Compile Python code Source code: Lib/codeop.py The codeop module provides utilities upon which the Python read-eval-print loop can be emulated, as is done in the code module. As a result, you probably don’t want to use the module directly; if you want to include such a loop in your program you probably want to use the code module instead. There are two parts to this job: Being able to tell if a line of input completes a Python statement: in short, telling whether to print ‘>>>’ or ‘...’ next. Remembering which future statements the user has entered, so subsequent input can be compiled with these in effect. The codeop module provides a way of doing each of these things, and a way of doing them both. To do just the former: codeop.compile_command(source, filename="<input>", symbol="single") Tries to compile source, which should be a string of Python code and return a code object if source is valid Python code. In that case, the filename attribute of the code object will be filename, which defaults to '<input>'. Returns None if source is not valid Python code, but is a prefix of valid Python code. If there is a problem with source, an exception will be raised. SyntaxError is raised if there is invalid Python syntax, and OverflowError or ValueError if there is an invalid literal. The symbol argument determines whether source is compiled as a statement ('single', the default), as a sequence of statements ('exec') or as an expression ('eval'). Any other value will cause ValueError to be raised. Note It is possible (but not likely) that the parser stops parsing with a successful outcome before reaching the end of the source; in this case, trailing symbols may be ignored instead of causing an error. For example, a backslash followed by two newlines may be followed by arbitrary garbage. This will be fixed once the API for the parser is better. class codeop.Compile Instances of this class have __call__() methods identical in signature to the built-in function compile(), but with the difference that if the instance compiles program text containing a __future__ statement, the instance ‘remembers’ and compiles all subsequent program texts with the statement in force. class codeop.CommandCompiler Instances of this class have __call__() methods identical in signature to compile_command(); the difference is that if the instance compiles program text containing a __future__ statement, the instance ‘remembers’ and compiles all subsequent program texts with the statement in force.
python.library.codeop
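A small sketch of the three outcomes compile_command() distinguishes (complete input, incomplete input, and invalid input):

import codeop

# Complete statement: returns a code object.
print(codeop.compile_command('x = 1') is not None)  # True

# Valid prefix of a statement: returns None ("print '...' and keep reading").
print(codeop.compile_command('if x:'))  # None

# Invalid input: raises SyntaxError.
try:
    codeop.compile_command('x ===')
except SyntaxError:
    print('not valid Python')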
class codeop.CommandCompiler Instances of this class have __call__() methods identical in signature to compile_command(); the difference is that if the instance compiles program text containing a __future__ statement, the instance ‘remembers’ and compiles all subsequent program texts with the statement in force.
python.library.codeop#codeop.CommandCompiler
class codeop.Compile Instances of this class have __call__() methods identical in signature to the built-in function compile(), but with the difference that if the instance compiles program text containing a __future__ statement, the instance ‘remembers’ and compiles all subsequent program texts with the statement in force.
python.library.codeop#codeop.Compile
codeop.compile_command(source, filename="<input>", symbol="single") Tries to compile source, which should be a string of Python code and return a code object if source is valid Python code. In that case, the filename attribute of the code object will be filename, which defaults to '<input>'. Returns None if source is not valid Python code, but is a prefix of valid Python code. If there is a problem with source, an exception will be raised. SyntaxError is raised if there is invalid Python syntax, and OverflowError or ValueError if there is an invalid literal. The symbol argument determines whether source is compiled as a statement ('single', the default), as a sequence of statements ('exec') or as an expression ('eval'). Any other value will cause ValueError to be raised. Note It is possible (but not likely) that the parser stops parsing with a successful outcome before reaching the end of the source; in this case, trailing symbols may be ignored instead of causing an error. For example, a backslash followed by two newlines may be followed by arbitrary garbage. This will be fixed once the API for the parser is better.
python.library.codeop#codeop.compile_command
collections — Container datatypes Source code: Lib/collections/__init__.py This module implements specialized container datatypes providing alternatives to Python’s general purpose built-in containers, dict, list, set, and tuple.

namedtuple(): factory function for creating tuple subclasses with named fields
deque: list-like container with fast appends and pops on either end
ChainMap: dict-like class for creating a single view of multiple mappings
Counter: dict subclass for counting hashable objects
OrderedDict: dict subclass that remembers the order entries were added
defaultdict: dict subclass that calls a factory function to supply missing values
UserDict: wrapper around dictionary objects for easier dict subclassing
UserList: wrapper around list objects for easier list subclassing
UserString: wrapper around string objects for easier string subclassing

Deprecated since version 3.3, will be removed in version 3.10: Moved Collections Abstract Base Classes to the collections.abc module. For backwards compatibility, they continue to be visible in this module through Python 3.9. ChainMap objects New in version 3.3. A ChainMap class is provided for quickly linking a number of mappings so they can be treated as a single unit. It is often much faster than creating a new dictionary and running multiple update() calls. The class can be used to simulate nested scopes and is useful in templating. class collections.ChainMap(*maps) A ChainMap groups multiple dicts or other mappings together to create a single, updateable view. If no maps are specified, a single empty dictionary is provided so that a new chain always has at least one mapping. The underlying mappings are stored in a list. That list is public and can be accessed or updated using the maps attribute. There is no other state. Lookups search the underlying mappings successively until a key is found. In contrast, writes, updates, and deletions only operate on the first mapping. A ChainMap incorporates the underlying mappings by reference. So, if one of the underlying mappings gets updated, those changes will be reflected in ChainMap. All of the usual dictionary methods are supported. In addition, there is a maps attribute, a method for creating new subcontexts, and a property for accessing all but the first mapping: maps A user updateable list of mappings. The list is ordered from first-searched to last-searched. It is the only stored state and can be modified to change which mappings are searched. The list should always contain at least one mapping. new_child(m=None) Returns a new ChainMap containing a new map followed by all of the maps in the current instance. If m is specified, it becomes the new map at the front of the list of mappings; if not specified, an empty dict is used, so that a call to d.new_child() is equivalent to: ChainMap({}, *d.maps). This method is used for creating subcontexts that can be updated without altering values in any of the parent mappings. Changed in version 3.4: The optional m parameter was added. parents Property returning a new ChainMap containing all of the maps in the current instance except the first one. This is useful for skipping the first map in the search. Use cases are similar to those for the nonlocal keyword used in nested scopes. The use cases also parallel those for the built-in super() function. A reference to d.parents is equivalent to: ChainMap(*d.maps[1:]).
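A brief sketch of the lookup-versus-write asymmetry described above (the dictionaries here are illustrative):

from collections import ChainMap

defaults = {'theme': 'light', 'lang': 'en'}
overrides = {}
settings = ChainMap(overrides, defaults)

print(settings['theme'])    # 'light': lookup falls through to defaults
settings['theme'] = 'dark'  # writes affect only the first mapping
print(overrides)            # {'theme': 'dark'}
print(defaults['theme'])    # 'light': the parent mapping is untouched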
Note, the iteration order of a ChainMap() is determined by scanning the mappings last to first: >>> baseline = {'music': 'bach', 'art': 'rembrandt'} >>> adjustments = {'art': 'van gogh', 'opera': 'carmen'} >>> list(ChainMap(adjustments, baseline)) ['music', 'art', 'opera'] This gives the same ordering as a series of dict.update() calls starting with the last mapping: >>> combined = baseline.copy() >>> combined.update(adjustments) >>> list(combined) ['music', 'art', 'opera'] Changed in version 3.9: Added support for | and |= operators, specified in PEP 584. See also The MultiContext class in the Enthought CodeTools package has options to support writing to any mapping in the chain. Django’s Context class for templating is a read-only chain of mappings. It also features pushing and popping of contexts similar to the new_child() method and the parents property. The Nested Contexts recipe has options to control whether writes and other mutations apply only to the first mapping or to any mapping in the chain. A greatly simplified read-only version of ChainMap. ChainMap Examples and Recipes This section shows various approaches to working with chained maps. Example of simulating Python’s internal lookup chain: import builtins pylookup = ChainMap(locals(), globals(), vars(builtins)) Example of letting user-specified command-line arguments take precedence over environment variables which in turn take precedence over default values: import os, argparse defaults = {'color': 'red', 'user': 'guest'} parser = argparse.ArgumentParser() parser.add_argument('-u', '--user') parser.add_argument('-c', '--color') namespace = parser.parse_args() command_line_args = {k: v for k, v in vars(namespace).items() if v is not None} combined = ChainMap(command_line_args, os.environ, defaults) print(combined['color']) print(combined['user']) Example patterns for using the ChainMap class to simulate nested contexts: c = ChainMap() # Create root context d = c.new_child() # Create nested child context e = c.new_child() # Child of c, independent from d e.maps[0] # Current context dictionary -- like Python's locals() e.maps[-1] # Root context -- like Python's globals() e.parents # Enclosing context chain -- like Python's nonlocals d['x'] = 1 # Set value in current context d['x'] # Get first key in the chain of contexts del d['x'] # Delete from current context list(d) # All nested values k in d # Check all nested values len(d) # Number of nested values d.items() # All nested items dict(d) # Flatten into a regular dictionary The ChainMap class only makes updates (writes and deletions) to the first mapping in the chain while lookups will search the full chain.
However, if deep writes and deletions are desired, it is easy to make a subclass that updates keys found deeper in the chain: class DeepChainMap(ChainMap): 'Variant of ChainMap that allows direct updates to inner scopes' def __setitem__(self, key, value): for mapping in self.maps: if key in mapping: mapping[key] = value return self.maps[0][key] = value def __delitem__(self, key): for mapping in self.maps: if key in mapping: del mapping[key] return raise KeyError(key) >>> d = DeepChainMap({'zebra': 'black'}, {'elephant': 'blue'}, {'lion': 'yellow'}) >>> d['lion'] = 'orange' # update an existing key two levels down >>> d['snake'] = 'red' # new keys get added to the topmost dict >>> del d['elephant'] # remove an existing key one level down >>> d # display result DeepChainMap({'zebra': 'black', 'snake': 'red'}, {}, {'lion': 'orange'}) Counter objects A counter tool is provided to support convenient and rapid tallies. For example: >>> # Tally occurrences of words in a list >>> cnt = Counter() >>> for word in ['red', 'blue', 'red', 'green', 'blue', 'blue']: ... cnt[word] += 1 >>> cnt Counter({'blue': 3, 'red': 2, 'green': 1}) >>> # Find the ten most common words in Hamlet >>> import re >>> words = re.findall(r'\w+', open('hamlet.txt').read().lower()) >>> Counter(words).most_common(10) [('the', 1143), ('and', 966), ('to', 762), ('of', 669), ('i', 631), ('you', 554), ('a', 546), ('my', 514), ('hamlet', 471), ('in', 451)] class collections.Counter([iterable-or-mapping]) A Counter is a dict subclass for counting hashable objects. It is a collection where elements are stored as dictionary keys and their counts are stored as dictionary values. Counts are allowed to be any integer value including zero or negative counts. The Counter class is similar to bags or multisets in other languages. Elements are counted from an iterable or initialized from another mapping (or counter): >>> c = Counter() # a new, empty counter >>> c = Counter('gallahad') # a new counter from an iterable >>> c = Counter({'red': 4, 'blue': 2}) # a new counter from a mapping >>> c = Counter(cats=4, dogs=8) # a new counter from keyword args Counter objects have a dictionary interface except that they return a zero count for missing items instead of raising a KeyError: >>> c = Counter(['eggs', 'ham']) >>> c['bacon'] # count of a missing element is zero 0 Setting a count to zero does not remove an element from a counter. Use del to remove it entirely: >>> c['sausage'] = 0 # counter entry with a zero count >>> del c['sausage'] # del actually removes the entry New in version 3.1. Changed in version 3.7: As a dict subclass, Counter inherited the capability to remember insertion order. Math operations on Counter objects also preserve order. Results are ordered according to when an element is first encountered in the left operand and then by the order encountered in the right operand. Counter objects support three methods beyond those available for all dictionaries: elements() Return an iterator over elements repeating each as many times as its count. Elements are returned in the order first encountered. If an element’s count is less than one, elements() will ignore it. >>> c = Counter(a=4, b=2, c=0, d=-2) >>> sorted(c.elements()) ['a', 'a', 'a', 'a', 'b', 'b'] most_common([n]) Return a list of the n most common elements and their counts from the most common to the least. If n is omitted or None, most_common() returns all elements in the counter.
Elements with equal counts are ordered in the order first encountered: >>> Counter('abracadabra').most_common(3) [('a', 5), ('b', 2), ('r', 2)] subtract([iterable-or-mapping]) Elements are subtracted from an iterable or from another mapping (or counter). Like dict.update() but subtracts counts instead of replacing them. Both inputs and outputs may be zero or negative. >>> c = Counter(a=4, b=2, c=0, d=-2) >>> d = Counter(a=1, b=2, c=3, d=4) >>> c.subtract(d) >>> c Counter({'a': 3, 'b': 0, 'c': -3, 'd': -6}) New in version 3.2. The usual dictionary methods are available for Counter objects except for two which work differently for counters. fromkeys(iterable) This class method is not implemented for Counter objects. update([iterable-or-mapping]) Elements are counted from an iterable or added-in from another mapping (or counter). Like dict.update() but adds counts instead of replacing them. Also, the iterable is expected to be a sequence of elements, not a sequence of (key, value) pairs. Common patterns for working with Counter objects: sum(c.values()) # total of all counts c.clear() # reset all counts list(c) # list unique elements set(c) # convert to a set dict(c) # convert to a regular dictionary c.items() # convert to a list of (elem, cnt) pairs Counter(dict(list_of_pairs)) # convert from a list of (elem, cnt) pairs c.most_common()[:-n-1:-1] # n least common elements +c # remove zero and negative counts Several mathematical operations are provided for combining Counter objects to produce multisets (counters that have counts greater than zero). Addition and subtraction combine counters by adding or subtracting the counts of corresponding elements. Intersection and union return the minimum and maximum of corresponding counts. Each operation can accept inputs with signed counts, but the output will exclude results with counts of zero or less. >>> c = Counter(a=3, b=1) >>> d = Counter(a=1, b=2) >>> c + d # add two counters together: c[x] + d[x] Counter({'a': 4, 'b': 3}) >>> c - d # subtract (keeping only positive counts) Counter({'a': 2}) >>> c & d # intersection: min(c[x], d[x]) Counter({'a': 1, 'b': 1}) >>> c | d # union: max(c[x], d[x]) Counter({'a': 3, 'b': 2}) Unary addition and subtraction are shortcuts for adding an empty counter or subtracting from an empty counter. >>> c = Counter(a=2, b=-4) >>> +c Counter({'a': 2}) >>> -c Counter({'b': 4}) New in version 3.3: Added support for unary plus, unary minus, and in-place multiset operations. Note Counters were primarily designed to work with positive integers to represent running counts; however, care was taken to not unnecessarily preclude use cases needing other types or negative values. To help with those use cases, this section documents the minimum range and type restrictions. The Counter class itself is a dictionary subclass with no restrictions on its keys and values. The values are intended to be numbers representing counts, but you could store anything in the value field. The most_common() method requires only that the values be orderable. For in-place operations such as c[key] += 1, the value type need only support addition and subtraction. So fractions, floats, and decimals would work and negative values are supported. The same is also true for update() and subtract() which allow negative and zero values for both inputs and outputs. The multiset methods are designed only for use cases with positive values. The inputs may be negative or zero, but only outputs with positive values are created. 
There are no type restrictions, but the value type needs to support addition, subtraction, and comparison. The elements() method requires integer counts. It ignores zero and negative counts. See also Bag class in Smalltalk. Wikipedia entry for Multisets. C++ multisets tutorial with examples. For mathematical operations on multisets and their use cases, see Knuth, Donald. The Art of Computer Programming Volume II, Section 4.6.3, Exercise 19. To enumerate all distinct multisets of a given size over a given set of elements, see itertools.combinations_with_replacement(): map(Counter, combinations_with_replacement('ABC', 2)) # --> AA AB AC BB BC CC deque objects class collections.deque([iterable[, maxlen]]) Returns a new deque object initialized left-to-right (using append()) with data from iterable. If iterable is not specified, the new deque is empty. Deques are a generalization of stacks and queues (the name is pronounced “deck” and is short for “double-ended queue”). Deques support thread-safe, memory efficient appends and pops from either side of the deque with approximately the same O(1) performance in either direction. Though list objects support similar operations, they are optimized for fast fixed-length operations and incur O(n) memory movement costs for pop(0) and insert(0, v) operations which change both the size and position of the underlying data representation. If maxlen is not specified or is None, deques may grow to an arbitrary length. Otherwise, the deque is bounded to the specified maximum length. Once a bounded length deque is full, when new items are added, a corresponding number of items are discarded from the opposite end. Bounded length deques provide functionality similar to the tail filter in Unix. They are also useful for tracking transactions and other pools of data where only the most recent activity is of interest. Deque objects support the following methods: append(x) Add x to the right side of the deque. appendleft(x) Add x to the left side of the deque. clear() Remove all elements from the deque leaving it with length 0. copy() Create a shallow copy of the deque. New in version 3.5. count(x) Count the number of deque elements equal to x. New in version 3.2. extend(iterable) Extend the right side of the deque by appending elements from the iterable argument. extendleft(iterable) Extend the left side of the deque by appending elements from iterable. Note, the series of left appends results in reversing the order of elements in the iterable argument. index(x[, start[, stop]]) Return the position of x in the deque (at or after index start and before index stop). Returns the first match or raises ValueError if not found. New in version 3.5. insert(i, x) Insert x into the deque at position i. If the insertion would cause a bounded deque to grow beyond maxlen, an IndexError is raised. New in version 3.5. pop() Remove and return an element from the right side of the deque. If no elements are present, raises an IndexError. popleft() Remove and return an element from the left side of the deque. If no elements are present, raises an IndexError. remove(value) Remove the first occurrence of value. If not found, raises a ValueError. reverse() Reverse the elements of the deque in-place and then return None. New in version 3.2. rotate(n=1) Rotate the deque n steps to the right. If n is negative, rotate to the left. 
When the deque is not empty, rotating one step to the right is equivalent to d.appendleft(d.pop()), and rotating one step to the left is equivalent to d.append(d.popleft()). Deque objects also provide one read-only attribute: maxlen Maximum size of a deque or None if unbounded. New in version 3.1. In addition to the above, deques support iteration, pickling, len(d), reversed(d), copy.copy(d), copy.deepcopy(d), membership testing with the in operator, and subscript references such as d[0] to access the first element. Indexed access is O(1) at both ends but slows to O(n) in the middle. For fast random access, use lists instead. Starting in version 3.5, deques support __add__(), __mul__(), and __imul__(). Example: >>> from collections import deque >>> d = deque('ghi') # make a new deque with three items >>> for elem in d: # iterate over the deque's elements ... print(elem.upper()) G H I >>> d.append('j') # add a new entry to the right side >>> d.appendleft('f') # add a new entry to the left side >>> d # show the representation of the deque deque(['f', 'g', 'h', 'i', 'j']) >>> d.pop() # return and remove the rightmost item 'j' >>> d.popleft() # return and remove the leftmost item 'f' >>> list(d) # list the contents of the deque ['g', 'h', 'i'] >>> d[0] # peek at leftmost item 'g' >>> d[-1] # peek at rightmost item 'i' >>> list(reversed(d)) # list the contents of a deque in reverse ['i', 'h', 'g'] >>> 'h' in d # search the deque True >>> d.extend('jkl') # add multiple elements at once >>> d deque(['g', 'h', 'i', 'j', 'k', 'l']) >>> d.rotate(1) # right rotation >>> d deque(['l', 'g', 'h', 'i', 'j', 'k']) >>> d.rotate(-1) # left rotation >>> d deque(['g', 'h', 'i', 'j', 'k', 'l']) >>> deque(reversed(d)) # make a new deque in reverse order deque(['l', 'k', 'j', 'i', 'h', 'g']) >>> d.clear() # empty the deque >>> d.pop() # cannot pop from an empty deque Traceback (most recent call last): File "<pyshell#6>", line 1, in -toplevel- d.pop() IndexError: pop from an empty deque >>> d.extendleft('abc') # extendleft() reverses the input order >>> d deque(['c', 'b', 'a']) deque Recipes This section shows various approaches to working with deques. Bounded length deques provide functionality similar to the tail filter in Unix: def tail(filename, n=10): 'Return the last n lines of a file' with open(filename) as f: return deque(f, n) Another approach to using deques is to maintain a sequence of recently added elements by appending to the right and popping to the left: def moving_average(iterable, n=3): # moving_average([40, 30, 50, 46, 39, 44]) --> 40.0 42.0 45.0 43.0 # http://en.wikipedia.org/wiki/Moving_average it = iter(iterable) d = deque(itertools.islice(it, n-1)) d.appendleft(0) s = sum(d) for elem in it: s += elem - d.popleft() d.append(elem) yield s / n A round-robin scheduler can be implemented with input iterators stored in a deque. Values are yielded from the active iterator in position zero. If that iterator is exhausted, it can be removed with popleft(); otherwise, it can be cycled back to the end with the rotate() method: def roundrobin(*iterables): "roundrobin('ABC', 'D', 'EF') --> A D E B F C" iterators = deque(map(iter, iterables)) while iterators: try: while True: yield next(iterators[0]) iterators.rotate(-1) except StopIteration: # Remove an exhausted iterator. iterators.popleft() The rotate() method provides a way to implement deque slicing and deletion. 
For example, a pure Python implementation of del d[n] relies on the rotate() method to position elements to be popped: def delete_nth(d, n): d.rotate(-n) d.popleft() d.rotate(n) To implement deque slicing, use a similar approach applying rotate() to bring a target element to the left side of the deque. Remove old entries with popleft(), add new entries with extend(), and then reverse the rotation. With minor variations on that approach, it is easy to implement Forth style stack manipulations such as dup, drop, swap, over, pick, rot, and roll. defaultdict objects class collections.defaultdict([default_factory[, ...]]) Returns a new dictionary-like object. defaultdict is a subclass of the built-in dict class. It overrides one method and adds one writable instance variable. The remaining functionality is the same as for the dict class and is not documented here. The first argument provides the initial value for the default_factory attribute; it defaults to None. All remaining arguments are treated the same as if they were passed to the dict constructor, including keyword arguments. defaultdict objects support the following method in addition to the standard dict operations: __missing__(key) If the default_factory attribute is None, this raises a KeyError exception with the key as argument. If default_factory is not None, it is called without arguments to provide a default value for the given key, this value is inserted in the dictionary for the key, and returned. If calling default_factory raises an exception this exception is propagated unchanged. This method is called by the __getitem__() method of the dict class when the requested key is not found; whatever it returns or raises is then returned or raised by __getitem__(). Note that __missing__() is not called for any operations besides __getitem__(). This means that get() will, like normal dictionaries, return None as a default rather than using default_factory. defaultdict objects support the following instance variable: default_factory This attribute is used by the __missing__() method; it is initialized from the first argument to the constructor, if present, or to None, if absent. Changed in version 3.9: Added merge (|) and update (|=) operators, specified in PEP 584. defaultdict Examples Using list as the default_factory, it is easy to group a sequence of key-value pairs into a dictionary of lists: >>> s = [('yellow', 1), ('blue', 2), ('yellow', 3), ('blue', 4), ('red', 1)] >>> d = defaultdict(list) >>> for k, v in s: ... d[k].append(v) ... >>> sorted(d.items()) [('blue', [2, 4]), ('red', [1]), ('yellow', [1, 3])] When each key is encountered for the first time, it is not already in the mapping; so an entry is automatically created using the default_factory function which returns an empty list. The list.append() operation then attaches the value to the new list. When keys are encountered again, the look-up proceeds normally (returning the list for that key) and the list.append() operation adds another value to the list. This technique is simpler and faster than an equivalent technique using dict.setdefault(): >>> d = {} >>> for k, v in s: ... d.setdefault(k, []).append(v) ... >>> sorted(d.items()) [('blue', [2, 4]), ('red', [1]), ('yellow', [1, 3])] Setting the default_factory to int makes the defaultdict useful for counting (like a bag or multiset in other languages): >>> s = 'mississippi' >>> d = defaultdict(int) >>> for k in s: ... d[k] += 1 ... 
>>> sorted(d.items()) [('i', 4), ('m', 1), ('p', 2), ('s', 4)] When a letter is first encountered, it is missing from the mapping, so the default_factory function calls int() to supply a default count of zero. The increment operation then builds up the count for each letter. The function int() which always returns zero is just a special case of constant functions. A faster and more flexible way to create constant functions is to use a lambda function which can supply any constant value (not just zero): >>> def constant_factory(value): ... return lambda: value >>> d = defaultdict(constant_factory('<missing>')) >>> d.update(name='John', action='ran') >>> '%(name)s %(action)s to %(object)s' % d 'John ran to <missing>' Setting the default_factory to set makes the defaultdict useful for building a dictionary of sets: >>> s = [('red', 1), ('blue', 2), ('red', 3), ('blue', 4), ('red', 1), ('blue', 4)] >>> d = defaultdict(set) >>> for k, v in s: ... d[k].add(v) ... >>> sorted(d.items()) [('blue', {2, 4}), ('red', {1, 3})] namedtuple() Factory Function for Tuples with Named Fields Named tuples assign meaning to each position in a tuple and allow for more readable, self-documenting code. They can be used wherever regular tuples are used, and they add the ability to access fields by name instead of position index. collections.namedtuple(typename, field_names, *, rename=False, defaults=None, module=None) Returns a new tuple subclass named typename. The new subclass is used to create tuple-like objects that have fields accessible by attribute lookup as well as being indexable and iterable. Instances of the subclass also have a helpful docstring (with typename and field_names) and a helpful __repr__() method which lists the tuple contents in a name=value format. The field_names are a sequence of strings such as ['x', 'y']. Alternatively, field_names can be a single string with each fieldname separated by whitespace and/or commas, for example 'x y' or 'x, y'. Any valid Python identifier may be used for a fieldname except for names starting with an underscore. Valid identifiers consist of letters, digits, and underscores but do not start with a digit or underscore and cannot be a keyword such as class, for, return, global, pass, or raise. If rename is true, invalid fieldnames are automatically replaced with positional names. For example, ['abc', 'def', 'ghi', 'abc'] is converted to ['abc', '_1', 'ghi', '_3'], eliminating the keyword def and the duplicate fieldname abc. defaults can be None or an iterable of default values. Since fields with a default value must come after any fields without a default, the defaults are applied to the rightmost parameters. For example, if the fieldnames are ['x', 'y', 'z'] and the defaults are (1, 2), then x will be a required argument, y will default to 1, and z will default to 2. If module is defined, the __module__ attribute of the named tuple is set to that value. Named tuple instances do not have per-instance dictionaries, so they are lightweight and require no more memory than regular tuples. To support pickling, the named tuple class should be assigned to a variable that matches typename. Changed in version 3.1: Added support for rename. Changed in version 3.6: The verbose and rename parameters became keyword-only arguments. Changed in version 3.6: Added the module parameter. Changed in version 3.7: Removed the verbose parameter and the _source attribute. Changed in version 3.7: Added the defaults parameter and the _field_defaults attribute. 
>>> # Basic example >>> Point = namedtuple('Point', ['x', 'y']) >>> p = Point(11, y=22) # instantiate with positional or keyword arguments >>> p[0] + p[1] # indexable like the plain tuple (11, 22) 33 >>> x, y = p # unpack like a regular tuple >>> x, y (11, 22) >>> p.x + p.y # fields also accessible by name 33 >>> p # readable __repr__ with a name=value style Point(x=11, y=22) Named tuples are especially useful for assigning field names to result tuples returned by the csv or sqlite3 modules: EmployeeRecord = namedtuple('EmployeeRecord', 'name, age, title, department, paygrade') import csv for emp in map(EmployeeRecord._make, csv.reader(open("employees.csv", newline=''))): print(emp.name, emp.title) import sqlite3 conn = sqlite3.connect('/companydata') cursor = conn.cursor() cursor.execute('SELECT name, age, title, department, paygrade FROM employees') for emp in map(EmployeeRecord._make, cursor.fetchall()): print(emp.name, emp.title) In addition to the methods inherited from tuples, named tuples support three additional methods and two attributes. To prevent conflicts with field names, the method and attribute names start with an underscore. classmethod somenamedtuple._make(iterable) Class method that makes a new instance from an existing sequence or iterable. >>> t = [11, 22] >>> Point._make(t) Point(x=11, y=22) somenamedtuple._asdict() Return a new dict which maps field names to their corresponding values: >>> p = Point(x=11, y=22) >>> p._asdict() {'x': 11, 'y': 22} Changed in version 3.1: Returns an OrderedDict instead of a regular dict. Changed in version 3.8: Returns a regular dict instead of an OrderedDict. As of Python 3.7, regular dicts are guaranteed to be ordered. If the extra features of OrderedDict are required, the suggested remediation is to cast the result to the desired type: OrderedDict(nt._asdict()). somenamedtuple._replace(**kwargs) Return a new instance of the named tuple replacing specified fields with new values: >>> p = Point(x=11, y=22) >>> p._replace(x=33) Point(x=33, y=22) >>> for partnum, record in inventory.items(): ... inventory[partnum] = record._replace(price=newprices[partnum], timestamp=time.now()) somenamedtuple._fields Tuple of strings listing the field names. Useful for introspection and for creating new named tuple types from existing named tuples. >>> p._fields # view the field names ('x', 'y') >>> Color = namedtuple('Color', 'red green blue') >>> Pixel = namedtuple('Pixel', Point._fields + Color._fields) >>> Pixel(11, 22, 128, 255, 0) Pixel(x=11, y=22, red=128, green=255, blue=0) somenamedtuple._field_defaults Dictionary mapping field names to default values. >>> Account = namedtuple('Account', ['type', 'balance'], defaults=[0]) >>> Account._field_defaults {'balance': 0} >>> Account('premium') Account(type='premium', balance=0) To retrieve a field whose name is stored in a string, use the getattr() function: >>> getattr(p, 'x') 11 To convert a dictionary to a named tuple, use the double-star-operator (as described in Unpacking Argument Lists): >>> d = {'x': 11, 'y': 22} >>> Point(**d) Point(x=11, y=22) Since a named tuple is a regular Python class, it is easy to add or change functionality with a subclass. Here is how to add a calculated field and a fixed-width print format: >>> class Point(namedtuple('Point', ['x', 'y'])): ... __slots__ = () ... @property ... def hypot(self): ... return (self.x ** 2 + self.y ** 2) ** 0.5 ... def __str__(self): ...
return 'Point: x=%6.3f y=%6.3f hypot=%6.3f' % (self.x, self.y, self.hypot) >>> for p in Point(3, 4), Point(14, 5/7): ... print(p) Point: x= 3.000 y= 4.000 hypot= 5.000 Point: x=14.000 y= 0.714 hypot=14.018 The subclass shown above sets __slots__ to an empty tuple. This helps keep memory requirements low by preventing the creation of instance dictionaries. Subclassing is not useful for adding new, stored fields. Instead, simply create a new named tuple type from the _fields attribute: >>> Point3D = namedtuple('Point3D', Point._fields + ('z',)) Docstrings can be customized by making direct assignments to the __doc__ fields: >>> Book = namedtuple('Book', ['id', 'title', 'authors']) >>> Book.__doc__ += ': Hardcover book in active collection' >>> Book.id.__doc__ = '13-digit ISBN' >>> Book.title.__doc__ = 'Title of first printing' >>> Book.authors.__doc__ = 'List of authors sorted by last name' Changed in version 3.5: Property docstrings became writeable. See also See typing.NamedTuple for a way to add type hints for named tuples. It also provides an elegant notation using the class keyword: class Component(NamedTuple): part_number: int weight: float description: Optional[str] = None See types.SimpleNamespace() for a mutable namespace based on an underlying dictionary instead of a tuple. The dataclasses module provides a decorator and functions for automatically adding generated special methods to user-defined classes. OrderedDict objects Ordered dictionaries are just like regular dictionaries but have some extra capabilities relating to ordering operations. They have become less important now that the built-in dict class gained the ability to remember insertion order (this new behavior became guaranteed in Python 3.7). Some differences from dict still remain: The regular dict was designed to be very good at mapping operations. Tracking insertion order was secondary. The OrderedDict was designed to be good at reordering operations. Space efficiency, iteration speed, and the performance of update operations were secondary. Algorithmically, OrderedDict can handle frequent reordering operations better than dict. This makes it suitable for tracking recent accesses (for example in an LRU cache). The equality operation for OrderedDict checks for matching order. The popitem() method of OrderedDict has a different signature. It accepts an optional argument to specify which item is popped. OrderedDict has a move_to_end() method to efficiently reposition an element to an endpoint. Until Python 3.8, dict lacked a __reversed__() method. class collections.OrderedDict([items]) Return an instance of a dict subclass that has methods specialized for rearranging dictionary order. New in version 3.1. popitem(last=True) The popitem() method for ordered dictionaries returns and removes a (key, value) pair. The pairs are returned in LIFO order if last is true or FIFO order if false. move_to_end(key, last=True) Move an existing key to either end of an ordered dictionary. The item is moved to the right end if last is true (the default) or to the beginning if last is false. Raises KeyError if the key does not exist: >>> d = OrderedDict.fromkeys('abcde') >>> d.move_to_end('b') >>> ''.join(d.keys()) 'acdeb' >>> d.move_to_end('b', last=False) >>> ''.join(d.keys()) 'bacde' New in version 3.2. In addition to the usual mapping methods, ordered dictionaries also support reverse iteration using reversed(). Equality tests between OrderedDict objects are order-sensitive and are implemented as list(od1.items())==list(od2.items()). 
Equality tests between OrderedDict objects and other Mapping objects are order-insensitive like regular dictionaries. This allows OrderedDict objects to be substituted anywhere a regular dictionary is used. Changed in version 3.5: The items, keys, and values views of OrderedDict now support reverse iteration using reversed(). Changed in version 3.6: With the acceptance of PEP 468, order is retained for keyword arguments passed to the OrderedDict constructor and its update() method. Changed in version 3.9: Added merge (|) and update (|=) operators, specified in PEP 584. OrderedDict Examples and Recipes It is straightforward to create an ordered dictionary variant that remembers the order the keys were last inserted. If a new entry overwrites an existing entry, the original insertion position is changed and moved to the end: class LastUpdatedOrderedDict(OrderedDict): 'Store items in the order the keys were last added' def __setitem__(self, key, value): super().__setitem__(key, value) self.move_to_end(key) An OrderedDict would also be useful for implementing variants of functools.lru_cache(): class LRU(OrderedDict): 'Limit size, evicting the least recently looked-up key when full' def __init__(self, maxsize=128, /, *args, **kwds): self.maxsize = maxsize super().__init__(*args, **kwds) def __getitem__(self, key): value = super().__getitem__(key) self.move_to_end(key) return value def __setitem__(self, key, value): if key in self: self.move_to_end(key) super().__setitem__(key, value) if len(self) > self.maxsize: oldest = next(iter(self)) del self[oldest] UserDict objects The UserDict class acts as a wrapper around dictionary objects. The need for this class has been partially supplanted by the ability to subclass directly from dict; however, this class can be easier to work with because the underlying dictionary is accessible as an attribute. class collections.UserDict([initialdata]) Class that simulates a dictionary. The instance’s contents are kept in a regular dictionary, which is accessible via the data attribute of UserDict instances. If initialdata is provided, data is initialized with its contents; note that a reference to initialdata will not be kept, allowing it to be used for other purposes. In addition to supporting the methods and operations of mappings, UserDict instances provide the following attribute: data A real dictionary used to store the contents of the UserDict class. UserList objects This class acts as a wrapper around list objects. It is a useful base class for your own list-like classes which can inherit from it and override existing methods or add new ones. In this way, one can add new behaviors to lists. The need for this class has been partially supplanted by the ability to subclass directly from list; however, this class can be easier to work with because the underlying list is accessible as an attribute. class collections.UserList([list]) Class that simulates a list. The instance’s contents are kept in a regular list, which is accessible via the data attribute of UserList instances. The instance’s contents are initially set to a copy of list, defaulting to the empty list []. list can be any iterable, for example a real Python list or a UserList object. In addition to supporting the methods and operations of mutable sequences, UserList instances provide the following attribute: data A real list object used to store the contents of the UserList class.
Subclassing requirements: Subclasses of UserList are expected to offer a constructor which can be called with either no arguments or one argument. List operations which return a new sequence attempt to create an instance of the actual implementation class. To do so, it assumes that the constructor can be called with a single parameter, which is a sequence object used as a data source. If a derived class does not wish to comply with this requirement, all of the special methods supported by this class will need to be overridden; please consult the sources for information about the methods which need to be provided in that case. UserString objects The UserString class acts as a wrapper around string objects. The need for this class has been partially supplanted by the ability to subclass directly from str; however, this class can be easier to work with because the underlying string is accessible as an attribute. class collections.UserString(seq) Class that simulates a string object. The instance’s content is kept in a regular string object, which is accessible via the data attribute of UserString instances. The instance’s contents are initially set to a copy of seq. The seq argument can be any object which can be converted into a string using the built-in str() function. In addition to supporting the methods and operations of strings, UserString instances provide the following attribute: data A real str object used to store the contents of the UserString class. Changed in version 3.5: New methods __getnewargs__, __rmod__, casefold, format_map, isprintable, and maketrans.
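To make the UserList subclassing requirements concrete, a minimal sketch (SortedList is an illustrative name); it keeps the zero-or-one-argument constructor that the sequence machinery expects and works through the underlying data attribute:

from collections import UserList

class SortedList(UserList):
    'List that keeps its contents sorted (illustrative sketch).'
    def __init__(self, initlist=None):
        # Callable with no arguments or one iterable, as required above.
        super().__init__(initlist)
        self.data.sort()  # the underlying list is the data attribute
    def append(self, item):
        self.data.append(item)
        self.data.sort()

s = SortedList([3, 1, 2])
s.append(0)
print(s.data)  # [0, 1, 2, 3]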
python.library.collections
collections.abc — Abstract Base Classes for Containers New in version 3.3: Formerly, this module was part of the collections module. Source code: Lib/_collections_abc.py This module provides abstract base classes that can be used to test whether a class provides a particular interface; for example, whether it is hashable or whether it is a mapping. Collections Abstract Base Classes The collections module offers the following ABCs:

Container: abstract __contains__
Hashable: abstract __hash__
Iterable: abstract __iter__
Iterator (inherits Iterable): abstract __next__; mixin __iter__
Reversible (inherits Iterable): abstract __reversed__
Generator (inherits Iterator): abstract send, throw; mixin close, __iter__, __next__
Sized: abstract __len__
Callable: abstract __call__
Collection (inherits Sized, Iterable, Container): abstract __contains__, __iter__, __len__
Sequence (inherits Reversible, Collection): abstract __getitem__, __len__; mixin __contains__, __iter__, __reversed__, index, and count
MutableSequence (inherits Sequence): abstract __getitem__, __setitem__, __delitem__, __len__, insert; mixin inherited Sequence methods and append, reverse, extend, pop, remove, and __iadd__
ByteString (inherits Sequence): abstract __getitem__, __len__; mixin inherited Sequence methods
Set (inherits Collection): abstract __contains__, __iter__, __len__; mixin __le__, __lt__, __eq__, __ne__, __gt__, __ge__, __and__, __or__, __sub__, __xor__, and isdisjoint
MutableSet (inherits Set): abstract __contains__, __iter__, __len__, add, discard; mixin inherited Set methods and clear, pop, remove, __ior__, __iand__, __ixor__, and __isub__
Mapping (inherits Collection): abstract __getitem__, __iter__, __len__; mixin __contains__, keys, items, values, get, __eq__, and __ne__
MutableMapping (inherits Mapping): abstract __getitem__, __setitem__, __delitem__, __iter__, __len__; mixin inherited Mapping methods and pop, popitem, clear, update, and setdefault
MappingView (inherits Sized): mixin __len__
ItemsView (inherits MappingView, Set): mixin __contains__, __iter__
KeysView (inherits MappingView, Set): mixin __contains__, __iter__
ValuesView (inherits MappingView, Collection): mixin __contains__, __iter__
Awaitable: abstract __await__
Coroutine (inherits Awaitable): abstract send, throw; mixin close
AsyncIterable: abstract __aiter__
AsyncIterator (inherits AsyncIterable): abstract __anext__; mixin __aiter__
AsyncGenerator (inherits AsyncIterator): abstract asend, athrow; mixin aclose, __aiter__, __anext__

class collections.abc.Container ABC for classes that provide the __contains__() method. class collections.abc.Hashable ABC for classes that provide the __hash__() method. class collections.abc.Sized ABC for classes that provide the __len__() method. class collections.abc.Callable ABC for classes that provide the __call__() method. class collections.abc.Iterable ABC for classes that provide the __iter__() method. Checking isinstance(obj, Iterable) detects classes that are registered as Iterable or that have an __iter__() method, but it does not detect classes that iterate with the __getitem__() method. The only reliable way to determine whether an object is iterable is to call iter(obj). class collections.abc.Collection ABC for sized iterable container classes. New in version 3.6. class collections.abc.Iterator ABC for classes that provide the __iter__() and __next__() methods. See also the definition of iterator. class collections.abc.Reversible ABC for iterable classes that also provide the __reversed__() method. New in version 3.6. class collections.abc.Generator ABC for generator classes that implement the protocol defined in PEP 342 that extends iterators with the send(), throw() and close() methods. See also the definition of generator. New in version 3.5. class collections.abc.Sequence class collections.abc.MutableSequence class collections.abc.ByteString ABCs for read-only and mutable sequences.
Implementation note: Some of the mixin methods, such as __iter__(), __reversed__() and index(), make repeated calls to the underlying __getitem__() method. Consequently, if __getitem__() is implemented with constant access speed, the mixin methods will have linear performance; however, if the underlying method is linear (as it would be with a linked list), the mixins will have quadratic performance and will likely need to be overridden. Changed in version 3.5: The index() method added support for stop and start arguments. class collections.abc.Set class collections.abc.MutableSet ABCs for read-only and mutable sets. class collections.abc.Mapping class collections.abc.MutableMapping ABCs for read-only and mutable mappings. class collections.abc.MappingView class collections.abc.ItemsView class collections.abc.KeysView class collections.abc.ValuesView ABCs for mapping, items, keys, and values views. class collections.abc.Awaitable ABC for awaitable objects, which can be used in await expressions. Custom implementations must provide the __await__() method. Coroutine objects and instances of the Coroutine ABC are all instances of this ABC. Note In CPython, generator-based coroutines (generators decorated with types.coroutine() or asyncio.coroutine()) are awaitables, even though they do not have an __await__() method. Using isinstance(gencoro, Awaitable) for them will return False. Use inspect.isawaitable() to detect them. New in version 3.5. class collections.abc.Coroutine ABC for coroutine compatible classes. These implement the following methods, defined in Coroutine Objects: send(), throw(), and close(). Custom implementations must also implement __await__(). All Coroutine instances are also instances of Awaitable. See also the definition of coroutine. Note In CPython, generator-based coroutines (generators decorated with types.coroutine() or asyncio.coroutine()) are awaitables, even though they do not have an __await__() method. Using isinstance(gencoro, Coroutine) for them will return False. Use inspect.isawaitable() to detect them. New in version 3.5. class collections.abc.AsyncIterable ABC for classes that provide the __aiter__() method. See also the definition of asynchronous iterable. New in version 3.5. class collections.abc.AsyncIterator ABC for classes that provide the __aiter__() and __anext__() methods. See also the definition of asynchronous iterator. New in version 3.5. class collections.abc.AsyncGenerator ABC for asynchronous generator classes that implement the protocol defined in PEP 525 and PEP 492. New in version 3.6. These ABCs allow us to ask classes or instances if they provide particular functionality, for example: size = None if isinstance(myvar, collections.abc.Sized): size = len(myvar) Several of the ABCs are also useful as mixins that make it easier to develop classes supporting container APIs. For example, to write a class supporting the full Set API, it is only necessary to supply the three underlying abstract methods: __contains__(), __iter__(), and __len__(). The ABC supplies the remaining methods such as __and__() and isdisjoint(): class ListBasedSet(collections.abc.Set): ''' Alternate set implementation favoring space over speed and not requiring the set elements to be hashable.
''' def __init__(self, iterable): self.elements = lst = [] for value in iterable: if value not in lst: lst.append(value) def __iter__(self): return iter(self.elements) def __contains__(self, value): return value in self.elements def __len__(self): return len(self.elements) s1 = ListBasedSet('abcdef') s2 = ListBasedSet('defghi') overlap = s1 & s2 # The __and__() method is supported automatically Notes on using Set and MutableSet as a mixin: Since some set operations create new sets, the default mixin methods need a way to create new instances from an iterable. The class constructor is assumed to have a signature in the form ClassName(iterable). That assumption is factored-out to an internal classmethod called _from_iterable() which calls cls(iterable) to produce a new set. If the Set mixin is being used in a class with a different constructor signature, you will need to override _from_iterable() with a classmethod or regular method that can construct new instances from an iterable argument. To override the comparisons (presumably for speed, as the semantics are fixed), redefine __le__() and __ge__(), then the other operations will automatically follow suit. The Set mixin provides a _hash() method to compute a hash value for the set; however, __hash__() is not defined because not all sets are hashable or immutable. To add set hashability using mixins, inherit from both Set() and Hashable(), then define __hash__ = Set._hash. See also OrderedSet recipe for an example built on MutableSet. For more about ABCs, see the abc module and PEP 3119.
python.library.collections.abc
class collections.abc.AsyncGenerator ABC for asynchronous generator classes that implement the protocol defined in PEP 525 and PEP 492. New in version 3.6.
python.library.collections.abc#collections.abc.AsyncGenerator
class collections.abc.AsyncIterable ABC for classes that provide the __aiter__() method. See also the definition of asynchronous iterable. New in version 3.5.
python.library.collections.abc#collections.abc.AsyncIterable
class collections.abc.AsyncIterator ABC for classes that provide the __aiter__() and __anext__() methods. See also the definition of asynchronous iterator. New in version 3.5.
python.library.collections.abc#collections.abc.AsyncIterator
class collections.abc.Awaitable ABC for awaitable objects, which can be used in await expressions. Custom implementations must provide the __await__() method. Coroutine objects and instances of the Coroutine ABC are all instances of this ABC. Note In CPython, generator-based coroutines (generators decorated with types.coroutine() or asyncio.coroutine()) are awaitables, even though they do not have an __await__() method. Using isinstance(gencoro, Awaitable) for them will return False. Use inspect.isawaitable() to detect them. New in version 3.5.
python.library.collections.abc#collections.abc.Awaitable
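A tiny sketch of a structurally awaitable class (Ready is a hypothetical name); __await__ is written as a generator so its return value becomes the result of the await:

import asyncio
from collections.abc import Awaitable

class Ready:
    'Awaitable that resolves immediately to a stored value (sketch).'
    def __init__(self, value):
        self.value = value
    def __await__(self):
        # __await__ must return an iterator; a generator qualifies.
        if False:
            yield  # never runs; only makes this function a generator
        return self.value  # becomes the result of the await

async def main():
    print(isinstance(Ready(42), Awaitable))  # True: __await__ is enough
    print(await Ready(42))                   # 42

asyncio.run(main())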
class collections.abc.Sequence class collections.abc.MutableSequence class collections.abc.ByteString ABCs for read-only and mutable sequences. Implementation note: Some of the mixin methods, such as __iter__(), __reversed__() and index(), make repeated calls to the underlying __getitem__() method. Consequently, if __getitem__() is implemented with constant access speed, the mixin methods will have linear performance; however, if the underlying method is linear (as it would be with a linked list), the mixins will have quadratic performance and will likely need to be overridden. Changed in version 3.5: The index() method added support for stop and start arguments.
python.library.collections.abc#collections.abc.ByteString
class collections.abc.Callable ABC for classes that provide the __call__() method.
python.library.collections.abc#collections.abc.Callable
class collections.abc.Collection ABC for sized iterable container classes. New in version 3.6.
python.library.collections.abc#collections.abc.Collection
class collections.abc.Container ABC for classes that provide the __contains__() method.
python.library.collections.abc#collections.abc.Container
class collections.abc.Coroutine ABC for coroutine compatible classes. These implement the following methods, defined in Coroutine Objects: send(), throw(), and close(). Custom implementations must also implement __await__(). All Coroutine instances are also instances of Awaitable. See also the definition of coroutine. Note In CPython, generator-based coroutines (generators decorated with types.coroutine() or asyncio.coroutine()) are awaitables, even though they do not have an __await__() method. Using isinstance(gencoro, Coroutine) for them will return False. Use inspect.isawaitable() to detect them. New in version 3.5.
python.library.collections.abc#collections.abc.Coroutine
class collections.abc.Generator ABC for generator classes that implement the protocol defined in PEP 342 that extends iterators with the send(), throw() and close() methods. See also the definition of generator. New in version 3.5.
python.library.collections.abc#collections.abc.Generator
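Any ordinary generator already satisfies this interface, so a quick check makes the protocol concrete:

from collections.abc import Generator

def countdown(n):
    while n > 0:
        yield n
        n -= 1

g = countdown(3)
print(isinstance(g, Generator))  # True: send(), throw(), close() all exist
print(next(g))                   # 3
print(g.send(None))              # 2: send(None) behaves like next()
g.close()                        # finishes the generator early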
class collections.abc.Hashable ABC for classes that provide the __hash__() method.
python.library.collections.abc#collections.abc.Hashable
class collections.abc.MappingView class collections.abc.ItemsView class collections.abc.KeysView class collections.abc.ValuesView ABCs for mapping, items, keys, and values views.
python.library.collections.abc#collections.abc.ItemsView
class collections.abc.Iterable ABC for classes that provide the __iter__() method. Checking isinstance(obj, Iterable) detects classes that are registered as Iterable or that have an __iter__() method, but it does not detect classes that iterate with the __getitem__() method. The only reliable way to determine whether an object is iterable is to call iter(obj).
python.library.collections.abc#collections.abc.Iterable
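A short demonstration of the caveat above, using a hypothetical class that iterates only through the legacy __getitem__ protocol:

from collections.abc import Iterable

class Squares:
    'Iterable via __getitem__ only; no __iter__ defined.'
    def __getitem__(self, index):
        if index >= 5:
            raise IndexError(index)
        return index * index

print(isinstance(Squares(), Iterable))  # False: no __iter__() method
print(list(iter(Squares())))            # [0, 1, 4, 9, 16]: iter() still works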
class collections.abc.Iterator ABC for classes that provide the __iter__() and __next__() methods. See also the definition of iterator.
python.library.collections.abc#collections.abc.Iterator
class collections.abc.MappingView class collections.abc.ItemsView class collections.abc.KeysView class collections.abc.ValuesView ABCs for mapping, items, keys, and values views.
python.library.collections.abc#collections.abc.KeysView
class collections.abc.Mapping class collections.abc.MutableMapping ABCs for read-only and mutable mappings.
python.library.collections.abc#collections.abc.Mapping
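As a sketch of how the Mapping ABC fills in behavior from a few required methods, here is a hypothetical read-only mapping that defines only __getitem__, __iter__ and __len__ and inherits the rest (get(), items(), __contains__ and so on) as mixin methods:

from collections.abc import Mapping

class LowerDict(Mapping):
    # A hypothetical read-only mapping that case-folds string keys.
    def __init__(self, data):
        self._data = {key.lower(): value for key, value in data.items()}

    def __getitem__(self, key):
        return self._data[key.lower()]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

d = LowerDict({'Content-Type': 'text/plain'})
print(d['CONTENT-TYPE'])    # 'text/plain'
print('content-type' in d)  # True: __contains__ is an inherited mixin method
print(list(d.items()))      # items() is inherited from Mapping as well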
class collections.abc.MappingView class collections.abc.ItemsView class collections.abc.KeysView class collections.abc.ValuesView ABCs for mapping, items, keys, and values views.
python.library.collections.abc#collections.abc.MappingView
class collections.abc.Mapping class collections.abc.MutableMapping ABCs for read-only and mutable mappings.
python.library.collections.abc#collections.abc.MutableMapping
class collections.abc.Sequence class collections.abc.MutableSequence class collections.abc.ByteString ABCs for read-only and mutable sequences. Implementation note: Some of the mixin methods, such as __iter__(), __reversed__() and index(), make repeated calls to the underlying __getitem__() method. Consequently, if __getitem__() is implemented with constant access speed, the mixin methods will have linear performance; however, if the underlying method is linear (as it would be with a linked list), the mixins will have quadratic performance and will likely need to be overridden. Changed in version 3.5: The index() method added support for stop and start arguments.
python.library.collections.abc#collections.abc.MutableSequence
class collections.abc.Set class collections.abc.MutableSet ABCs for read-only and mutable sets.
python.library.collections.abc#collections.abc.MutableSet
class collections.abc.Reversible ABC for iterable classes that also provide the __reversed__() method. New in version 3.6.
python.library.collections.abc#collections.abc.Reversible
class collections.abc.Sequence class collections.abc.MutableSequence class collections.abc.ByteString ABCs for read-only and mutable sequences. Implementation note: Some of the mixin methods, such as __iter__(), __reversed__() and index(), make repeated calls to the underlying __getitem__() method. Consequently, if __getitem__() is implemented with constant access speed, the mixin methods will have linear performance; however, if the underlying method is linear (as it would be with a linked list), the mixins will have quadratic performance and will likely need to be overridden. Changed in version 3.5: The index() method added support for stop and start arguments.
python.library.collections.abc#collections.abc.Sequence
class collections.abc.Set class collections.abc.MutableSet ABCs for read-only and mutable sets.
python.library.collections.abc#collections.abc.Set
class collections.abc.Sized ABC for classes that provide the __len__() method.
python.library.collections.abc#collections.abc.Sized
class collections.abc.MappingView class collections.abc.ItemsView class collections.abc.KeysView class collections.abc.ValuesView ABCs for mapping, items, keys, and values views.
python.library.collections.abc#collections.abc.ValuesView
class collections.ChainMap(*maps) A ChainMap groups multiple dicts or other mappings together to create a single, updateable view. If no maps are specified, a single empty dictionary is provided so that a new chain always has at least one mapping. The underlying mappings are stored in a list. That list is public and can be accessed or updated using the maps attribute. There is no other state. Lookups search the underlying mappings successively until a key is found. In contrast, writes, updates, and deletions only operate on the first mapping. A ChainMap incorporates the underlying mappings by reference. So, if one of the underlying mappings gets updated, those changes will be reflected in ChainMap. All of the usual dictionary methods are supported. In addition, there is a maps attribute, a method for creating new subcontexts, and a property for accessing all but the first mapping: maps A user updateable list of mappings. The list is ordered from first-searched to last-searched. It is the only stored state and can be modified to change which mappings are searched. The list should always contain at least one mapping. new_child(m=None) Returns a new ChainMap containing a new map followed by all of the maps in the current instance. If m is specified, it becomes the new map at the front of the list of mappings; if not specified, an empty dict is used, so that a call to d.new_child() is equivalent to: ChainMap({}, *d.maps). This method is used for creating subcontexts that can be updated without altering values in any of the parent mappings. Changed in version 3.4: The optional m parameter was added. parents Property returning a new ChainMap containing all of the maps in the current instance except the first one. This is useful for skipping the first map in the search. Use cases are similar to those for the nonlocal keyword used in nested scopes. The use cases also parallel those for the built-in super() function. A reference to d.parents is equivalent to: ChainMap(*d.maps[1:]). Note, the iteration order of a ChainMap() is determined by scanning the mappings last to first: >>> baseline = {'music': 'bach', 'art': 'rembrandt'} >>> adjustments = {'art': 'van gogh', 'opera': 'carmen'} >>> list(ChainMap(adjustments, baseline)) ['music', 'art', 'opera'] This gives the same ordering as a series of dict.update() calls starting with the last mapping: >>> combined = baseline.copy() >>> combined.update(adjustments) >>> list(combined) ['music', 'art', 'opera'] Changed in version 3.9: Added support for | and |= operators, specified in PEP 584.
python.library.collections#collections.ChainMap
maps A user updateable list of mappings. The list is ordered from first-searched to last-searched. It is the only stored state and can be modified to change which mappings are searched. The list should always contain at least one mapping.
python.library.collections#collections.ChainMap.maps
new_child(m=None) Returns a new ChainMap containing a new map followed by all of the maps in the current instance. If m is specified, it becomes the new map at the front of the list of mappings; if not specified, an empty dict is used, so that a call to d.new_child() is equivalent to: ChainMap({}, *d.maps). This method is used for creating subcontexts that can be updated without altering values in any of the parent mappings. Changed in version 3.4: The optional m parameter was added.
python.library.collections#collections.ChainMap.new_child
parents Property returning a new ChainMap containing all of the maps in the current instance except the first one. This is useful for skipping the first map in the search. Use cases are similar to those for the nonlocal keyword used in nested scopes. The use cases also parallel those for the built-in super() function. A reference to d.parents is equivalent to: ChainMap(*d.maps[1:]).
python.library.collections#collections.ChainMap.parents
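A brief sketch tying new_child() and parents together; the mapping contents are illustrative only:

from collections import ChainMap

defaults = {'theme': 'light', 'lang': 'en'}
session = ChainMap({}, defaults)

child = session.new_child({'theme': 'dark'})  # new front map shadows 'theme'
print(child['theme'])          # 'dark'  (found in the first mapping)
print(child['lang'])           # 'en'    (search falls through to defaults)
print(child.parents['theme'])  # 'light' (parents skips the first mapping)

child['lang'] = 'fr'           # writes only touch the first mapping
print(defaults['lang'])        # 'en'    (parent mappings are unchanged)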
class collections.Counter([iterable-or-mapping]) A Counter is a dict subclass for counting hashable objects. It is a collection where elements are stored as dictionary keys and their counts are stored as dictionary values. Counts are allowed to be any integer value including zero or negative counts. The Counter class is similar to bags or multisets in other languages. Elements are counted from an iterable or initialized from another mapping (or counter): >>> c = Counter() # a new, empty counter >>> c = Counter('gallahad') # a new counter from an iterable >>> c = Counter({'red': 4, 'blue': 2}) # a new counter from a mapping >>> c = Counter(cats=4, dogs=8) # a new counter from keyword args Counter objects have a dictionary interface except that they return a zero count for missing items instead of raising a KeyError: >>> c = Counter(['eggs', 'ham']) >>> c['bacon'] # count of a missing element is zero 0 Setting a count to zero does not remove an element from a counter. Use del to remove it entirely: >>> c['sausage'] = 0 # counter entry with a zero count >>> del c['sausage'] # del actually removes the entry New in version 3.1. Changed in version 3.7: As a dict subclass, Counter inherited the capability to remember insertion order. Math operations on Counter objects also preserve order. Results are ordered according to when an element is first encountered in the left operand and then by the order encountered in the right operand. Counter objects support three methods beyond those available for all dictionaries: elements() Return an iterator over elements repeating each as many times as its count. Elements are returned in the order first encountered. If an element’s count is less than one, elements() will ignore it. >>> c = Counter(a=4, b=2, c=0, d=-2) >>> sorted(c.elements()) ['a', 'a', 'a', 'a', 'b', 'b'] most_common([n]) Return a list of the n most common elements and their counts from the most common to the least. If n is omitted or None, most_common() returns all elements in the counter. Elements with equal counts are ordered in the order first encountered: >>> Counter('abracadabra').most_common(3) [('a', 5), ('b', 2), ('r', 2)] subtract([iterable-or-mapping]) Elements are subtracted from an iterable or from another mapping (or counter). Like dict.update() but subtracts counts instead of replacing them. Both inputs and outputs may be zero or negative. >>> c = Counter(a=4, b=2, c=0, d=-2) >>> d = Counter(a=1, b=2, c=3, d=4) >>> c.subtract(d) >>> c Counter({'a': 3, 'b': 0, 'c': -3, 'd': -6}) New in version 3.2. The usual dictionary methods are available for Counter objects except for two which work differently for counters. fromkeys(iterable) This class method is not implemented for Counter objects. update([iterable-or-mapping]) Elements are counted from an iterable or added-in from another mapping (or counter). Like dict.update() but adds counts instead of replacing them. Also, the iterable is expected to be a sequence of elements, not a sequence of (key, value) pairs.
python.library.collections#collections.Counter
elements() Return an iterator over elements repeating each as many times as its count. Elements are returned in the order first encountered. If an element’s count is less than one, elements() will ignore it. >>> c = Counter(a=4, b=2, c=0, d=-2) >>> sorted(c.elements()) ['a', 'a', 'a', 'a', 'b', 'b']
python.library.collections#collections.Counter.elements
fromkeys(iterable) This class method is not implemented for Counter objects.
python.library.collections#collections.Counter.fromkeys
most_common([n]) Return a list of the n most common elements and their counts from the most common to the least. If n is omitted or None, most_common() returns all elements in the counter. Elements with equal counts are ordered in the order first encountered: >>> Counter('abracadabra').most_common(3) [('a', 5), ('b', 2), ('r', 2)]
python.library.collections#collections.Counter.most_common
subtract([iterable-or-mapping]) Elements are subtracted from an iterable or from another mapping (or counter). Like dict.update() but subtracts counts instead of replacing them. Both inputs and outputs may be zero or negative. >>> c = Counter(a=4, b=2, c=0, d=-2) >>> d = Counter(a=1, b=2, c=3, d=4) >>> c.subtract(d) >>> c Counter({'a': 3, 'b': 0, 'c': -3, 'd': -6}) New in version 3.2.
python.library.collections#collections.Counter.subtract
update([iterable-or-mapping]) Elements are counted from an iterable or added-in from another mapping (or counter). Like dict.update() but adds counts instead of replacing them. Also, the iterable is expected to be a sequence of elements, not a sequence of (key, value) pairs.
python.library.collections#collections.Counter.update
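A short illustration of the additive behaviour of update(); the words counted here are arbitrary:

from collections import Counter

words = Counter('red green red'.split())
words.update(['blue', 'red'])  # counts add rather than replace
words.update(blue=2)           # keyword arguments add counts too
print(words)                   # Counter({'red': 3, 'blue': 3, 'green': 1})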
class collections.defaultdict([default_factory[, ...]]) Returns a new dictionary-like object. defaultdict is a subclass of the built-in dict class. It overrides one method and adds one writable instance variable. The remaining functionality is the same as for the dict class and is not documented here. The first argument provides the initial value for the default_factory attribute; it defaults to None. All remaining arguments are treated the same as if they were passed to the dict constructor, including keyword arguments. defaultdict objects support the following method in addition to the standard dict operations: __missing__(key) If the default_factory attribute is None, this raises a KeyError exception with the key as argument. If default_factory is not None, it is called without arguments to provide a default value for the given key, this value is inserted in the dictionary for the key, and returned. If calling default_factory raises an exception this exception is propagated unchanged. This method is called by the __getitem__() method of the dict class when the requested key is not found; whatever it returns or raises is then returned or raised by __getitem__(). Note that __missing__() is not called for any operations besides __getitem__(). This means that get() will, like normal dictionaries, return None as a default rather than using default_factory. defaultdict objects support the following instance variable: default_factory This attribute is used by the __missing__() method; it is initialized from the first argument to the constructor, if present, or to None, if absent. Changed in version 3.9: Added merge (|) and update (|=) operators, specified in PEP 584.
python.library.collections#collections.defaultdict
default_factory This attribute is used by the __missing__() method; it is initialized from the first argument to the constructor, if present, or to None, if absent.
python.library.collections#collections.defaultdict.default_factory
__missing__(key) If the default_factory attribute is None, this raises a KeyError exception with the key as argument. If default_factory is not None, it is called without arguments to provide a default value for the given key, this value is inserted in the dictionary for the key, and returned. If calling default_factory raises an exception this exception is propagated unchanged. This method is called by the __getitem__() method of the dict class when the requested key is not found; whatever it returns or raises is then returned or raised by __getitem__(). Note that __missing__() is not called for any operations besides __getitem__(). This means that get() will, like normal dictionaries, return None as a default rather than using default_factory.
python.library.collections#collections.defaultdict.__missing__
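A small sketch of the grouping idiom this enables, and of how get() bypasses __missing__:

from collections import defaultdict

grouped = defaultdict(list)  # list() supplies the default for missing keys
for name in ['ada', 'alan', 'grace', 'guido']:
    grouped[name[0]].append(name)  # __getitem__ triggers __missing__ once per key

print(dict(grouped))     # {'a': ['ada', 'alan'], 'g': ['grace', 'guido']}
print(grouped.get('z'))  # None: get() never calls __missing__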
class collections.deque([iterable[, maxlen]]) Returns a new deque object initialized left-to-right (using append()) with data from iterable. If iterable is not specified, the new deque is empty. Deques are a generalization of stacks and queues (the name is pronounced “deck” and is short for “double-ended queue”). Deques support thread-safe, memory efficient appends and pops from either side of the deque with approximately the same O(1) performance in either direction. Though list objects support similar operations, they are optimized for fast fixed-length operations and incur O(n) memory movement costs for pop(0) and insert(0, v) operations which change both the size and position of the underlying data representation. If maxlen is not specified or is None, deques may grow to an arbitrary length. Otherwise, the deque is bounded to the specified maximum length. Once a bounded length deque is full, when new items are added, a corresponding number of items are discarded from the opposite end. Bounded length deques provide functionality similar to the tail filter in Unix. They are also useful for tracking transactions and other pools of data where only the most recent activity is of interest. Deque objects support the following methods: append(x) Add x to the right side of the deque. appendleft(x) Add x to the left side of the deque. clear() Remove all elements from the deque leaving it with length 0. copy() Create a shallow copy of the deque. New in version 3.5. count(x) Count the number of deque elements equal to x. New in version 3.2. extend(iterable) Extend the right side of the deque by appending elements from the iterable argument. extendleft(iterable) Extend the left side of the deque by appending elements from iterable. Note, the series of left appends results in reversing the order of elements in the iterable argument. index(x[, start[, stop]]) Return the position of x in the deque (at or after index start and before index stop). Returns the first match or raises ValueError if not found. New in version 3.5. insert(i, x) Insert x into the deque at position i. If the insertion would cause a bounded deque to grow beyond maxlen, an IndexError is raised. New in version 3.5. pop() Remove and return an element from the right side of the deque. If no elements are present, raises an IndexError. popleft() Remove and return an element from the left side of the deque. If no elements are present, raises an IndexError. remove(value) Remove the first occurrence of value. If not found, raises a ValueError. reverse() Reverse the elements of the deque in-place and then return None. New in version 3.2. rotate(n=1) Rotate the deque n steps to the right. If n is negative, rotate to the left. When the deque is not empty, rotating one step to the right is equivalent to d.appendleft(d.pop()), and rotating one step to the left is equivalent to d.append(d.popleft()). Deque objects also provide one read-only attribute: maxlen Maximum size of a deque or None if unbounded. New in version 3.1.
python.library.collections#collections.deque
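As a sketch of the bounded-deque behaviour described above (the tail() helper is hypothetical, mirroring the Unix tail use case mentioned in the entry):

from collections import deque

def tail(filename, n=10):
    # Return the last n lines of a file; older lines fall off the left end.
    with open(filename) as f:
        return deque(f, maxlen=n)

d = deque(maxlen=3)
for i in range(5):
    d.append(i)  # once full, items are discarded from the opposite end
print(d)         # deque([2, 3, 4], maxlen=3)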
append(x) Add x to the right side of the deque.
python.library.collections#collections.deque.append
appendleft(x) Add x to the left side of the deque.
python.library.collections#collections.deque.appendleft
clear() Remove all elements from the deque leaving it with length 0.
python.library.collections#collections.deque.clear
copy() Create a shallow copy of the deque. New in version 3.5.
python.library.collections#collections.deque.copy
count(x) Count the number of deque elements equal to x. New in version 3.2.
python.library.collections#collections.deque.count
extend(iterable) Extend the right side of the deque by appending elements from the iterable argument.
python.library.collections#collections.deque.extend
extendleft(iterable) Extend the left side of the deque by appending elements from iterable. Note, the series of left appends results in reversing the order of elements in the iterable argument.
python.library.collections#collections.deque.extendleft
index(x[, start[, stop]]) Return the position of x in the deque (at or after index start and before index stop). Returns the first match or raises ValueError if not found. New in version 3.5.
python.library.collections#collections.deque.index
insert(i, x) Insert x into the deque at position i. If the insertion would cause a bounded deque to grow beyond maxlen, an IndexError is raised. New in version 3.5.
python.library.collections#collections.deque.insert
maxlen Maximum size of a deque or None if unbounded. New in version 3.1.
python.library.collections#collections.deque.maxlen
pop() Remove and return an element from the right side of the deque. If no elements are present, raises an IndexError.
python.library.collections#collections.deque.pop
popleft() Remove and return an element from the left side of the deque. If no elements are present, raises an IndexError.
python.library.collections#collections.deque.popleft
remove(value) Remove the first occurrence of value. If not found, raises a ValueError.
python.library.collections#collections.deque.remove
reverse() Reverse the elements of the deque in-place and then return None. New in version 3.2.
python.library.collections#collections.deque.reverse
rotate(n=1) Rotate the deque n steps to the right. If n is negative, rotate to the left. When the deque is not empty, rotating one step to the right is equivalent to d.appendleft(d.pop()), and rotating one step to the left is equivalent to d.append(d.popleft()).
python.library.collections#collections.deque.rotate
collections.namedtuple(typename, field_names, *, rename=False, defaults=None, module=None) Returns a new tuple subclass named typename. The new subclass is used to create tuple-like objects that have fields accessible by attribute lookup as well as being indexable and iterable. Instances of the subclass also have a helpful docstring (with typename and field_names) and a helpful __repr__() method which lists the tuple contents in a name=value format. The field_names are a sequence of strings such as ['x', 'y']. Alternatively, field_names can be a single string with each fieldname separated by whitespace and/or commas, for example 'x y' or 'x, y'. Any valid Python identifier may be used for a fieldname except for names starting with an underscore. Valid identifiers consist of letters, digits, and underscores but do not start with a digit or underscore and cannot be a keyword such as class, for, return, global, pass, or raise. If rename is true, invalid fieldnames are automatically replaced with positional names. For example, ['abc', 'def', 'ghi', 'abc'] is converted to ['abc', '_1', 'ghi', '_3'], eliminating the keyword def and the duplicate fieldname abc. defaults can be None or an iterable of default values. Since fields with a default value must come after any fields without a default, the defaults are applied to the rightmost parameters. For example, if the fieldnames are ['x', 'y', 'z'] and the defaults are (1, 2), then x will be a required argument, y will default to 1, and z will default to 2. If module is defined, the __module__ attribute of the named tuple is set to that value. Named tuple instances do not have per-instance dictionaries, so they are lightweight and require no more memory than regular tuples. To support pickling, the named tuple class should be assigned to a variable that matches typename. Changed in version 3.1: Added support for rename. Changed in version 3.6: The verbose and rename parameters became keyword-only arguments. Changed in version 3.6: Added the module parameter. Changed in version 3.7: Removed the verbose parameter and the _source attribute. Changed in version 3.7: Added the defaults parameter and the _field_defaults attribute.
python.library.collections#collections.namedtuple
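A brief sketch of the factory in use; the type names Point and Row are hypothetical:

from collections import namedtuple

Point = namedtuple('Point', ['x', 'y'], defaults=[0])  # y defaults to 0
p = Point(11, y=22)
print(p.x + p.y)    # 33: fields are accessible as attributes...
print(p[0] + p[1])  # ...and remain indexable like any tuple
print(p)            # Point(x=11, y=22), from the generated __repr__

Row = namedtuple('Row', ['id', 'class'], rename=True)  # 'class' is a keyword
print(Row._fields)  # ('id', '_1'): the invalid name was replaced positionally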
class collections.OrderedDict([items]) Return an instance of a dict subclass that has methods specialized for rearranging dictionary order. New in version 3.1. popitem(last=True) The popitem() method for ordered dictionaries returns and removes a (key, value) pair. The pairs are returned in LIFO order if last is true or FIFO order if false. move_to_end(key, last=True) Move an existing key to either end of an ordered dictionary. The item is moved to the right end if last is true (the default) or to the beginning if last is false. Raises KeyError if the key does not exist: >>> d = OrderedDict.fromkeys('abcde') >>> d.move_to_end('b') >>> ''.join(d.keys()) 'acdeb' >>> d.move_to_end('b', last=False) >>> ''.join(d.keys()) 'bacde' New in version 3.2.
python.library.collections#collections.OrderedDict
move_to_end(key, last=True) Move an existing key to either end of an ordered dictionary. The item is moved to the right end if last is true (the default) or to the beginning if last is false. Raises KeyError if the key does not exist: >>> d = OrderedDict.fromkeys('abcde') >>> d.move_to_end('b') >>> ''.join(d.keys()) 'acdeb' >>> d.move_to_end('b', last=False) >>> ''.join(d.keys()) 'bacde' New in version 3.2.
python.library.collections#collections.OrderedDict.move_to_end
popitem(last=True) The popitem() method for ordered dictionaries returns and removes a (key, value) pair. The pairs are returned in LIFO order if last is true or FIFO order if false.
python.library.collections#collections.OrderedDict.popitem
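popitem() and move_to_end() combine naturally into a least-recently-used cache; the following is a rough sketch of that pattern, not a drop-in implementation:

from collections import OrderedDict

class LRU(OrderedDict):
    # A hypothetical size-bounded cache: reads refresh a key's position,
    # and popitem(last=False) evicts the least recently used entry.
    def __init__(self, maxsize=4):
        super().__init__()
        self.maxsize = maxsize

    def __getitem__(self, key):
        value = super().__getitem__(key)
        self.move_to_end(key)            # mark as most recently used
        return value

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.move_to_end(key)
        if len(self) > self.maxsize:
            self.popitem(last=False)     # evict the oldest entry

cache = LRU(maxsize=2)
cache['a'] = 1
cache['b'] = 2
cache['a']          # touch 'a', so 'b' is now the oldest entry
cache['c'] = 3      # evicts 'b'
print(list(cache))  # ['a', 'c']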
somenamedtuple._asdict() Return a new dict which maps field names to their corresponding values: >>> p = Point(x=11, y=22) >>> p._asdict() {'x': 11, 'y': 22} Changed in version 3.1: Returns an OrderedDict instead of a regular dict. Changed in version 3.8: Returns a regular dict instead of an OrderedDict. As of Python 3.7, regular dicts are guaranteed to be ordered. If the extra features of OrderedDict are required, the suggested remediation is to cast the result to the desired type: OrderedDict(nt._asdict()).
python.library.collections#collections.somenamedtuple._asdict
somenamedtuple._fields Tuple of strings listing the field names. Useful for introspection and for creating new named tuple types from existing named tuples. >>> p._fields # view the field names ('x', 'y') >>> Color = namedtuple('Color', 'red green blue') >>> Pixel = namedtuple('Pixel', Point._fields + Color._fields) >>> Pixel(11, 22, 128, 255, 0) Pixel(x=11, y=22, red=128, green=255, blue=0)
python.library.collections#collections.somenamedtuple._fields
somenamedtuple._field_defaults Dictionary mapping field names to default values. >>> Account = namedtuple('Account', ['type', 'balance'], defaults=[0]) >>> Account._field_defaults {'balance': 0} >>> Account('premium') Account(type='premium', balance=0)
python.library.collections#collections.somenamedtuple._field_defaults
classmethod somenamedtuple._make(iterable) Class method that makes a new instance from an existing sequence or iterable. >>> t = [11, 22] >>> Point._make(t) Point(x=11, y=22)
python.library.collections#collections.somenamedtuple._make
somenamedtuple._replace(**kwargs) Return a new instance of the named tuple replacing specified fields with new values: >>> p = Point(x=11, y=22) >>> p._replace(x=33) Point(x=33, y=22) >>> for partnum, record in inventory.items(): ... inventory[partnum] = record._replace(price=newprices[partnum], timestamp=time.now())
python.library.collections#collections.somenamedtuple._replace
class collections.UserDict([initialdata]) Class that simulates a dictionary. The instance’s contents are kept in a regular dictionary, which is accessible via the data attribute of UserDict instances. If initialdata is provided, data is initialized with its contents; note that a reference to initialdata will not be kept, allowing it to be used for other purposes. In addition to supporting the methods and operations of mappings, UserDict instances provide the following attribute: data A real dictionary used to store the contents of the UserDict class.
python.library.collections#collections.UserDict
data A real dictionary used to store the contents of the UserDict class.
python.library.collections#collections.UserDict.data
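A sketch of why UserDict can be easier to subclass than dict: overriding __getitem__ on a hypothetical CountingDict is enough for mixin methods such as get() to see the change:

from collections import UserDict

class CountingDict(UserDict):
    # A hypothetical subclass that counts how often keys are read.
    def __init__(self, *args, **kwargs):
        self.reads = 0
        super().__init__(*args, **kwargs)

    def __getitem__(self, key):
        self.reads += 1
        return super().__getitem__(key)

d = CountingDict(a=1)
d['a']
d.get('a')       # get() is routed through __getitem__ as well
print(d.reads)   # 2
print(d.data)    # {'a': 1}: the real backing dictionary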
class collections.UserList([list]) Class that simulates a list. The instance’s contents are kept in a regular list, which is accessible via the data attribute of UserList instances. The instance’s contents are initially set to a copy of list, defaulting to the empty list []. list can be any iterable, for example a real Python list or a UserList object. In addition to supporting the methods and operations of mutable sequences, UserList instances provide the following attribute: data A real list object used to store the contents of the UserList class.
python.library.collections#collections.UserList
data A real list object used to store the contents of the UserList class.
python.library.collections#collections.UserList.data
class collections.UserString(seq) Class that simulates a string object. The instance’s content is kept in a regular string object, which is accessible via the data attribute of UserString instances. The instance’s contents are initially set to a copy of seq. The seq argument can be any object which can be converted into a string using the built-in str() function. In addition to supporting the methods and operations of strings, UserString instances provide the following attribute: data A real str object used to store the contents of the UserString class. Changed in version 3.5: New methods __getnewargs__, __rmod__, casefold, format_map, isprintable, and maketrans.
python.library.collections#collections.UserString
data A real str object used to store the contents of the UserString class.
python.library.collections#collections.UserString.data
colorsys — Conversions between color systems Source code: Lib/colorsys.py The colorsys module defines bidirectional conversions of color values between colors expressed in the RGB (Red Green Blue) color space used in computer monitors and three other coordinate systems: YIQ, HLS (Hue Lightness Saturation) and HSV (Hue Saturation Value). Coordinates in all of these color spaces are floating point values. In the YIQ space, the Y coordinate is between 0 and 1, but the I and Q coordinates can be positive or negative. In all other spaces, the coordinates are all between 0 and 1. See also More information about color spaces can be found at https://poynton.ca/ColorFAQ.html and https://www.cambridgeincolour.com/tutorials/color-spaces.htm. The colorsys module defines the following functions: colorsys.rgb_to_yiq(r, g, b) Convert the color from RGB coordinates to YIQ coordinates. colorsys.yiq_to_rgb(y, i, q) Convert the color from YIQ coordinates to RGB coordinates. colorsys.rgb_to_hls(r, g, b) Convert the color from RGB coordinates to HLS coordinates. colorsys.hls_to_rgb(h, l, s) Convert the color from HLS coordinates to RGB coordinates. colorsys.rgb_to_hsv(r, g, b) Convert the color from RGB coordinates to HSV coordinates. colorsys.hsv_to_rgb(h, s, v) Convert the color from HSV coordinates to RGB coordinates. Example: >>> import colorsys >>> colorsys.rgb_to_hsv(0.2, 0.4, 0.4) (0.5, 0.5, 0.4) >>> colorsys.hsv_to_rgb(0.5, 0.5, 0.4) (0.2, 0.4, 0.4)
python.library.colorsys
colorsys.hls_to_rgb(h, l, s) Convert the color from HLS coordinates to RGB coordinates.
python.library.colorsys#colorsys.hls_to_rgb
colorsys.hsv_to_rgb(h, s, v) Convert the color from HSV coordinates to RGB coordinates.
python.library.colorsys#colorsys.hsv_to_rgb
colorsys.rgb_to_hls(r, g, b) Convert the color from RGB coordinates to HLS coordinates.
python.library.colorsys#colorsys.rgb_to_hls
colorsys.rgb_to_hsv(r, g, b) Convert the color from RGB coordinates to HSV coordinates.
python.library.colorsys#colorsys.rgb_to_hsv
colorsys.rgb_to_yiq(r, g, b) Convert the color from RGB coordinates to YIQ coordinates.
python.library.colorsys#colorsys.rgb_to_yiq
colorsys.yiq_to_rgb(y, i, q) Convert the color from YIQ coordinates to RGB coordinates.
python.library.colorsys#colorsys.yiq_to_rgb
compile(source, filename, mode, flags=0, dont_inherit=False, optimize=-1) Compile the source into a code or AST object. Code objects can be executed by exec() or eval(). source can either be a normal string, a byte string, or an AST object. Refer to the ast module documentation for information on how to work with AST objects. The filename argument should give the file from which the code was read; pass some recognizable value if it wasn’t read from a file ('<string>' is commonly used). The mode argument specifies what kind of code must be compiled; it can be 'exec' if source consists of a sequence of statements, 'eval' if it consists of a single expression, or 'single' if it consists of a single interactive statement (in the latter case, expression statements that evaluate to something other than None will be printed). The optional arguments flags and dont_inherit control which compiler options should be activated and which future features should be allowed. If neither is present (or both are zero) the code is compiled with the same flags that affect the code that is calling compile(). If the flags argument is given and dont_inherit is not (or is zero) then the compiler options and the future statements specified by the flags argument are used in addition to those that would be used anyway. If dont_inherit is a non-zero integer, only the flags argument is used; the flags (future features and compiler options) in the surrounding code are ignored. Compiler options and future statements are specified by bits which can be bitwise ORed together to specify multiple options. The bitfield required to specify a given future feature can be found as the compiler_flag attribute on the _Feature instance in the __future__ module. Compiler flags can be found in the ast module, with the PyCF_ prefix. The argument optimize specifies the optimization level of the compiler; the default value of -1 selects the optimization level of the interpreter as given by -O options. Explicit levels are 0 (no optimization; __debug__ is true), 1 (asserts are removed, __debug__ is false) or 2 (docstrings are removed too). This function raises SyntaxError if the compiled source is invalid, and ValueError if the source contains null bytes. If you want to parse Python code into its AST representation, see ast.parse(). Raises an auditing event compile with arguments source and filename. This event may also be raised by implicit compilation. Note When compiling a string with multi-line code in 'single' or 'eval' mode, input must be terminated by at least one newline character. This is to facilitate detection of incomplete and complete statements in the code module. Warning It is possible to crash the Python interpreter with a sufficiently large/complex string when compiling to an AST object due to stack depth limitations in Python’s AST compiler. Changed in version 3.2: Allowed use of Windows and Mac newlines. Also input in 'exec' mode does not have to end in a newline anymore. Added the optimize parameter. Changed in version 3.5: Previously, TypeError was raised when null bytes were encountered in source. New in version 3.8: ast.PyCF_ALLOW_TOP_LEVEL_AWAIT can now be passed in flags to enable support for top-level await, async for, and async with.
python.library.functions#compile
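A minimal sketch of the three modes described above; the source strings are arbitrary:

code = compile('x = 40 + 2', '<demo>', 'exec')  # a sequence of statements
exec(code)
print(x)  # 42

expr = compile('x * 2', '<demo>', 'eval')       # a single expression
print(eval(expr))  # 84

# 'single' prints non-None expression results, as the REPL does; multi-line
# input in this mode must end with a newline.
exec(compile('x + 1\n', '<demo>', 'single'))    # prints 43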
compileall — Byte-compile Python libraries Source code: Lib/compileall.py This module provides some utility functions to support installing Python libraries. These functions compile Python source files in a directory tree. This module can be used to create the cached byte-code files at library installation time, which makes them available for use even by users who don’t have write permission to the library directories.
Command-line use
This module can work as a script (using python -m compileall) to compile Python sources.
directory ... file ... Positional arguments are files to compile or directories that contain source files, traversed recursively. If no argument is given, behave as if the command line was -l <directories from sys.path>.
-l Do not recurse into subdirectories, only compile source code files directly contained in the named or implied directories.
-f Force rebuild even if timestamps are up-to-date.
-q Do not print the list of files compiled. If passed once, error messages will still be printed. If passed twice (-qq), all output is suppressed.
-d destdir Directory prepended to the path to each file being compiled. This will appear in compilation time tracebacks, and is also compiled into the byte-code file, where it will be used in tracebacks and other messages in cases where the source file does not exist at the time the byte-code file is executed.
-s strip_prefix -p prepend_prefix Remove (-s) or prepend (-p) the given prefix of paths recorded in the .pyc files. Cannot be combined with -d.
-x regex regex is used to search the full path to each file considered for compilation, and if the regex produces a match, the file is skipped.
-i list Read the file list and add each line that it contains to the list of files and directories to compile. If list is -, read lines from stdin.
-b Write the byte-code files to their legacy locations and names, which may overwrite byte-code files created by another version of Python. The default is to write files to their PEP 3147 locations and names, which allows byte-code files from multiple versions of Python to coexist.
-r Control the maximum recursion level for subdirectories. If this is given, then the -l option will not be taken into account. python -m compileall <directory> -r 0 is equivalent to python -m compileall <directory> -l.
-j N Use N workers to compile the files within the given directory. If 0 is used, then the result of os.cpu_count() will be used.
--invalidation-mode [timestamp|checked-hash|unchecked-hash] Control how the generated byte-code files are invalidated at runtime. The timestamp value means that .pyc files with the source timestamp and size embedded will be generated. The checked-hash and unchecked-hash values cause hash-based pycs to be generated. Hash-based pycs embed a hash of the source file contents rather than a timestamp. See Cached bytecode invalidation for more information on how Python validates bytecode cache files at runtime. The default is timestamp if the SOURCE_DATE_EPOCH environment variable is not set, and checked-hash if the SOURCE_DATE_EPOCH environment variable is set.
-o level Compile with the given optimization level. May be used multiple times to compile for multiple levels at a time (for example, compileall -o 1 -o 2).
-e dir Ignore symlinks pointing outside the given directory.
--hardlink-dupes If two .pyc files with different optimization level have the same content, use hard links to consolidate duplicate files.
Changed in version 3.2: Added the -i, -b and -h options.
Changed in version 3.5: Added the -j, -r, and -qq options. -q option was changed to a multilevel value. -b will always produce a byte-code file ending in .pyc, never .pyo. Changed in version 3.7: Added the --invalidation-mode option. Changed in version 3.9: Added the -s, -p, -e and --hardlink-dupes options. Raised the default recursion limit from 10 to sys.getrecursionlimit(). Added the possibility to specify the -o option multiple times. There is no command-line option to control the optimization level used by the compile() function, because the Python interpreter itself already provides the option: python -O -m compileall. Similarly, the compile() function respects the sys.pycache_prefix setting. The generated bytecode cache will only be useful if compile() is run with the same sys.pycache_prefix (if any) that will be used at runtime. Public functions compileall.compile_dir(dir, maxlevels=sys.getrecursionlimit(), ddir=None, force=False, rx=None, quiet=0, legacy=False, optimize=-1, workers=1, invalidation_mode=None, *, stripdir=None, prependdir=None, limit_sl_dest=None, hardlink_dupes=False) Recursively descend the directory tree named by dir, compiling all .py files along the way. Return a true value if all the files compiled successfully, and a false value otherwise. The maxlevels parameter is used to limit the depth of the recursion; it defaults to sys.getrecursionlimit(). If ddir is given, it is prepended to the path to each file being compiled for use in compilation time tracebacks, and is also compiled in to the byte-code file, where it will be used in tracebacks and other messages in cases where the source file does not exist at the time the byte-code file is executed. If force is true, modules are re-compiled even if the timestamps are up to date. If rx is given, its search method is called on the complete path to each file considered for compilation, and if it returns a true value, the file is skipped. If quiet is False or 0 (the default), the filenames and other information are printed to standard out. Set to 1, only errors are printed. Set to 2, all output is suppressed. If legacy is true, byte-code files are written to their legacy locations and names, which may overwrite byte-code files created by another version of Python. The default is to write files to their PEP 3147 locations and names, which allows byte-code files from multiple versions of Python to coexist. optimize specifies the optimization level for the compiler. It is passed to the built-in compile() function. Accepts also a sequence of optimization levels which lead to multiple compilations of one .py file in one call. The argument workers specifies how many workers are used to compile files in parallel. The default is to not use multiple workers. If the platform can’t use multiple workers and workers argument is given, then sequential compilation will be used as a fallback. If workers is 0, the number of cores in the system is used. If workers is lower than 0, a ValueError will be raised. invalidation_mode should be a member of the py_compile.PycInvalidationMode enum and controls how the generated pycs are invalidated at runtime. The stripdir, prependdir and limit_sl_dest arguments correspond to the -s, -p and -e options described above. They may be specified as str, bytes or os.PathLike. If hardlink_dupes is true and two .pyc files with different optimization level have the same content, use hard links to consolidate duplicate files. Changed in version 3.2: Added the legacy and optimize parameter. 
Changed in version 3.5: Added the workers parameter. Changed in version 3.5: quiet parameter was changed to a multilevel value. Changed in version 3.5: The legacy parameter only writes out .pyc files, not .pyo files no matter what the value of optimize is. Changed in version 3.6: Accepts a path-like object. Changed in version 3.7: The invalidation_mode parameter was added. Changed in version 3.7.2: The invalidation_mode parameter’s default value is updated to None. Changed in version 3.8: Setting workers to 0 now chooses the optimal number of cores. Changed in version 3.9: Added stripdir, prependdir, limit_sl_dest and hardlink_dupes arguments. Default value of maxlevels was changed from 10 to sys.getrecursionlimit() compileall.compile_file(fullname, ddir=None, force=False, rx=None, quiet=0, legacy=False, optimize=-1, invalidation_mode=None, *, stripdir=None, prependdir=None, limit_sl_dest=None, hardlink_dupes=False) Compile the file with path fullname. Return a true value if the file compiled successfully, and a false value otherwise. If ddir is given, it is prepended to the path to the file being compiled for use in compilation time tracebacks, and is also compiled in to the byte-code file, where it will be used in tracebacks and other messages in cases where the source file does not exist at the time the byte-code file is executed. If rx is given, its search method is passed the full path name to the file being compiled, and if it returns a true value, the file is not compiled and True is returned. If quiet is False or 0 (the default), the filenames and other information are printed to standard out. Set to 1, only errors are printed. Set to 2, all output is suppressed. If legacy is true, byte-code files are written to their legacy locations and names, which may overwrite byte-code files created by another version of Python. The default is to write files to their PEP 3147 locations and names, which allows byte-code files from multiple versions of Python to coexist. optimize specifies the optimization level for the compiler. It is passed to the built-in compile() function. Accepts also a sequence of optimization levels which lead to multiple compilations of one .py file in one call. invalidation_mode should be a member of the py_compile.PycInvalidationMode enum and controls how the generated pycs are invalidated at runtime. The stripdir, prependdir and limit_sl_dest arguments correspond to the -s, -p and -e options described above. They may be specified as str, bytes or os.PathLike. If hardlink_dupes is true and two .pyc files with different optimization level have the same content, use hard links to consolidate duplicate files. New in version 3.2. Changed in version 3.5: quiet parameter was changed to a multilevel value. Changed in version 3.5: The legacy parameter only writes out .pyc files, not .pyo files no matter what the value of optimize is. Changed in version 3.7: The invalidation_mode parameter was added. Changed in version 3.7.2: The invalidation_mode parameter’s default value is updated to None. Changed in version 3.9: Added stripdir, prependdir, limit_sl_dest and hardlink_dupes arguments. compileall.compile_path(skip_curdir=True, maxlevels=0, force=False, quiet=0, legacy=False, optimize=-1, invalidation_mode=None) Byte-compile all the .py files found along sys.path. Return a true value if all the files compiled successfully, and a false value otherwise. If skip_curdir is true (the default), the current directory is not included in the search. 
All other parameters are passed to the compile_dir() function. Note that unlike the other compile functions, maxlevels defaults to 0.
Changed in version 3.2: Added the legacy and optimize parameters. Changed in version 3.5: quiet parameter was changed to a multilevel value. Changed in version 3.5: The legacy parameter only writes out .pyc files, not .pyo files no matter what the value of optimize is. Changed in version 3.7: The invalidation_mode parameter was added. Changed in version 3.7.2: The invalidation_mode parameter’s default value is updated to None.
To force a recompile of all the .py files in the Lib/ subdirectory and all its subdirectories:

import compileall

compileall.compile_dir('Lib/', force=True)

# Perform same compilation, excluding files in .svn directories.
import re
compileall.compile_dir('Lib/', rx=re.compile(r'[/\\][.]svn'), force=True)

# pathlib.Path objects can also be used.
import pathlib
compileall.compile_dir(pathlib.Path('Lib/'), force=True)

See also Module py_compile Byte-compile a single source file.
python.library.compileall
compileall.compile_dir(dir, maxlevels=sys.getrecursionlimit(), ddir=None, force=False, rx=None, quiet=0, legacy=False, optimize=-1, workers=1, invalidation_mode=None, *, stripdir=None, prependdir=None, limit_sl_dest=None, hardlink_dupes=False) Recursively descend the directory tree named by dir, compiling all .py files along the way. Return a true value if all the files compiled successfully, and a false value otherwise. The maxlevels parameter is used to limit the depth of the recursion; it defaults to sys.getrecursionlimit(). If ddir is given, it is prepended to the path to each file being compiled for use in compilation time tracebacks, and is also compiled in to the byte-code file, where it will be used in tracebacks and other messages in cases where the source file does not exist at the time the byte-code file is executed. If force is true, modules are re-compiled even if the timestamps are up to date. If rx is given, its search method is called on the complete path to each file considered for compilation, and if it returns a true value, the file is skipped. If quiet is False or 0 (the default), the filenames and other information are printed to standard out. Set to 1, only errors are printed. Set to 2, all output is suppressed. If legacy is true, byte-code files are written to their legacy locations and names, which may overwrite byte-code files created by another version of Python. The default is to write files to their PEP 3147 locations and names, which allows byte-code files from multiple versions of Python to coexist. optimize specifies the optimization level for the compiler. It is passed to the built-in compile() function. Accepts also a sequence of optimization levels which lead to multiple compilations of one .py file in one call. The argument workers specifies how many workers are used to compile files in parallel. The default is to not use multiple workers. If the platform can’t use multiple workers and workers argument is given, then sequential compilation will be used as a fallback. If workers is 0, the number of cores in the system is used. If workers is lower than 0, a ValueError will be raised. invalidation_mode should be a member of the py_compile.PycInvalidationMode enum and controls how the generated pycs are invalidated at runtime. The stripdir, prependdir and limit_sl_dest arguments correspond to the -s, -p and -e options described above. They may be specified as str, bytes or os.PathLike. If hardlink_dupes is true and two .pyc files with different optimization level have the same content, use hard links to consolidate duplicate files. Changed in version 3.2: Added the legacy and optimize parameter. Changed in version 3.5: Added the workers parameter. Changed in version 3.5: quiet parameter was changed to a multilevel value. Changed in version 3.5: The legacy parameter only writes out .pyc files, not .pyo files no matter what the value of optimize is. Changed in version 3.6: Accepts a path-like object. Changed in version 3.7: The invalidation_mode parameter was added. Changed in version 3.7.2: The invalidation_mode parameter’s default value is updated to None. Changed in version 3.8: Setting workers to 0 now chooses the optimal number of cores. Changed in version 3.9: Added stripdir, prependdir, limit_sl_dest and hardlink_dupes arguments. Default value of maxlevels was changed from 10 to sys.getrecursionlimit()
python.library.compileall#compileall.compile_dir
compileall.compile_file(fullname, ddir=None, force=False, rx=None, quiet=0, legacy=False, optimize=-1, invalidation_mode=None, *, stripdir=None, prependdir=None, limit_sl_dest=None, hardlink_dupes=False) Compile the file with path fullname. Return a true value if the file compiled successfully, and a false value otherwise. If ddir is given, it is prepended to the path to the file being compiled for use in compilation time tracebacks, and is also compiled in to the byte-code file, where it will be used in tracebacks and other messages in cases where the source file does not exist at the time the byte-code file is executed. If rx is given, its search method is passed the full path name to the file being compiled, and if it returns a true value, the file is not compiled and True is returned. If quiet is False or 0 (the default), the filenames and other information are printed to standard out. Set to 1, only errors are printed. Set to 2, all output is suppressed. If legacy is true, byte-code files are written to their legacy locations and names, which may overwrite byte-code files created by another version of Python. The default is to write files to their PEP 3147 locations and names, which allows byte-code files from multiple versions of Python to coexist. optimize specifies the optimization level for the compiler. It is passed to the built-in compile() function. Accepts also a sequence of optimization levels which lead to multiple compilations of one .py file in one call. invalidation_mode should be a member of the py_compile.PycInvalidationMode enum and controls how the generated pycs are invalidated at runtime. The stripdir, prependdir and limit_sl_dest arguments correspond to the -s, -p and -e options described above. They may be specified as str, bytes or os.PathLike. If hardlink_dupes is true and two .pyc files with different optimization level have the same content, use hard links to consolidate duplicate files. New in version 3.2. Changed in version 3.5: quiet parameter was changed to a multilevel value. Changed in version 3.5: The legacy parameter only writes out .pyc files, not .pyo files no matter what the value of optimize is. Changed in version 3.7: The invalidation_mode parameter was added. Changed in version 3.7.2: The invalidation_mode parameter’s default value is updated to None. Changed in version 3.9: Added stripdir, prependdir, limit_sl_dest and hardlink_dupes arguments.
python.library.compileall#compileall.compile_file
compileall.compile_path(skip_curdir=True, maxlevels=0, force=False, quiet=0, legacy=False, optimize=-1, invalidation_mode=None) Byte-compile all the .py files found along sys.path. Return a true value if all the files compiled successfully, and a false value otherwise. If skip_curdir is true (the default), the current directory is not included in the search. All other parameters are passed to the compile_dir() function. Note that unlike the other compile functions, maxlevels defaults to 0. Changed in version 3.2: Added the legacy and optimize parameter. Changed in version 3.5: quiet parameter was changed to a multilevel value. Changed in version 3.5: The legacy parameter only writes out .pyc files, not .pyo files no matter what the value of optimize is. Changed in version 3.7: The invalidation_mode parameter was added. Changed in version 3.7.2: The invalidation_mode parameter’s default value is updated to None.
python.library.compileall#compileall.compile_path
class complex([real[, imag]]) Return a complex number with the value real + imag*1j or convert a string or number to a complex number. If the first parameter is a string, it will be interpreted as a complex number and the function must be called without a second parameter. The second parameter can never be a string. Each argument may be any numeric type (including complex). If imag is omitted, it defaults to zero and the constructor serves as a numeric conversion like int and float. If both arguments are omitted, returns 0j. For a general Python object x, complex(x) delegates to x.__complex__(). If __complex__() is not defined then it falls back to __float__(). If __float__() is not defined then it falls back to __index__(). Note When converting from a string, the string must not contain whitespace around the central + or - operator. For example, complex('1+2j') is fine, but complex('1 + 2j') raises ValueError. The complex type is described in Numeric Types — int, float, complex. Changed in version 3.6: Grouping digits with underscores as in code literals is allowed. Changed in version 3.8: Falls back to __index__() if __complex__() and __float__() are not defined.
python.library.functions#complex
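A few quick conversions matching the rules above:

print(complex(2, -3))       # (2-3j)
print(complex('1+2j'))      # parsed from a string; no embedded spaces allowed
print(complex(1.5))         # (1.5+0j): imag defaults to zero
print(complex())            # 0j
print(complex('1_000+2j'))  # (1000+2j): underscore grouping as in literals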
concurrent.futures — Launching parallel tasks New in version 3.2. Source code: Lib/concurrent/futures/thread.py and Lib/concurrent/futures/process.py The concurrent.futures module provides a high-level interface for asynchronously executing callables. The asynchronous execution can be performed with threads, using ThreadPoolExecutor, or separate processes, using ProcessPoolExecutor. Both implement the same interface, which is defined by the abstract Executor class.
Executor Objects
class concurrent.futures.Executor An abstract class that provides methods to execute calls asynchronously. It should not be used directly, but through its concrete subclasses.
submit(fn, /, *args, **kwargs) Schedules the callable, fn, to be executed as fn(*args, **kwargs) and returns a Future object representing the execution of the callable.

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(pow, 323, 1235)
    print(future.result())

map(func, *iterables, timeout=None, chunksize=1) Similar to map(func, *iterables) except: the iterables are collected immediately rather than lazily; func is executed asynchronously and several calls to func may be made concurrently. The returned iterator raises a concurrent.futures.TimeoutError if __next__() is called and the result isn’t available after timeout seconds from the original call to Executor.map(). timeout can be an int or a float. If timeout is not specified or None, there is no limit to the wait time. If a func call raises an exception, then that exception will be raised when its value is retrieved from the iterator. When using ProcessPoolExecutor, this method chops iterables into a number of chunks which it submits to the pool as separate tasks. The (approximate) size of these chunks can be specified by setting chunksize to a positive integer. For very long iterables, using a large value for chunksize can significantly improve performance compared to the default size of 1. With ThreadPoolExecutor, chunksize has no effect. Changed in version 3.5: Added the chunksize argument.
shutdown(wait=True, *, cancel_futures=False) Signal the executor that it should free any resources that it is using when the currently pending futures are done executing. Calls to Executor.submit() and Executor.map() made after shutdown will raise RuntimeError. If wait is True then this method will not return until all the pending futures are done executing and the resources associated with the executor have been freed. If wait is False then this method will return immediately and the resources associated with the executor will be freed when all pending futures are done executing. Regardless of the value of wait, the entire Python program will not exit until all pending futures are done executing. If cancel_futures is True, this method will cancel all pending futures that the executor has not started running. Any futures that are completed or running won’t be cancelled, regardless of the value of cancel_futures. If both cancel_futures and wait are True, all futures that the executor has started running will be completed prior to this method returning. The remaining futures are cancelled.
You can avoid having to call this method explicitly if you use the with statement, which will shutdown the Executor (waiting as if Executor.shutdown() were called with wait set to True):

import shutil

with ThreadPoolExecutor(max_workers=4) as e:
    e.submit(shutil.copy, 'src1.txt', 'dest1.txt')
    e.submit(shutil.copy, 'src2.txt', 'dest2.txt')
    e.submit(shutil.copy, 'src3.txt', 'dest3.txt')
    e.submit(shutil.copy, 'src4.txt', 'dest4.txt')

Changed in version 3.9: Added cancel_futures.
ThreadPoolExecutor
ThreadPoolExecutor is an Executor subclass that uses a pool of threads to execute calls asynchronously. Deadlocks can occur when the callable associated with a Future waits on the results of another Future. For example:

import time

def wait_on_b():
    time.sleep(5)
    print(b.result())  # b will never complete because it is waiting on a.
    return 5

def wait_on_a():
    time.sleep(5)
    print(a.result())  # a will never complete because it is waiting on b.
    return 6

executor = ThreadPoolExecutor(max_workers=2)
a = executor.submit(wait_on_b)
b = executor.submit(wait_on_a)

And:

def wait_on_future():
    f = executor.submit(pow, 5, 2)
    # This will never complete because there is only one worker thread and
    # it is executing this function.
    print(f.result())

executor = ThreadPoolExecutor(max_workers=1)
executor.submit(wait_on_future)

class concurrent.futures.ThreadPoolExecutor(max_workers=None, thread_name_prefix='', initializer=None, initargs=()) An Executor subclass that uses a pool of at most max_workers threads to execute calls asynchronously. initializer is an optional callable that is called at the start of each worker thread; initargs is a tuple of arguments passed to the initializer. Should initializer raise an exception, all currently pending jobs will raise a BrokenThreadPool, as well as any attempt to submit more jobs to the pool. Changed in version 3.5: If max_workers is None or not given, it will default to the number of processors on the machine, multiplied by 5, assuming that ThreadPoolExecutor is often used to overlap I/O instead of CPU work and the number of workers should be higher than the number of workers for ProcessPoolExecutor. New in version 3.6: The thread_name_prefix argument was added to allow users to control the threading.Thread names for worker threads created by the pool for easier debugging. Changed in version 3.7: Added the initializer and initargs arguments. Changed in version 3.8: Default value of max_workers is changed to min(32, os.cpu_count() + 4). This default value preserves at least 5 workers for I/O bound tasks. It utilizes at most 32 CPU cores for CPU bound tasks which release the GIL. And it avoids using very large resources implicitly on many-core machines. ThreadPoolExecutor now reuses idle worker threads before starting max_workers worker threads too.
ThreadPoolExecutor Example

    import concurrent.futures
    import urllib.request

    URLS = ['http://www.foxnews.com/',
            'http://www.cnn.com/',
            'http://europe.wsj.com/',
            'http://www.bbc.co.uk/',
            'http://some-made-up-domain.com/']

    # Retrieve a single page and report the URL and contents
    def load_url(url, timeout):
        with urllib.request.urlopen(url, timeout=timeout) as conn:
            return conn.read()

    # We can use a with statement to ensure threads are cleaned up promptly
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        # Start the load operations and mark each future with its URL
        future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
        for future in concurrent.futures.as_completed(future_to_url):
            url = future_to_url[future]
            try:
                data = future.result()
            except Exception as exc:
                print('%r generated an exception: %s' % (url, exc))
            else:
                print('%r page is %d bytes' % (url, len(data)))

ProcessPoolExecutor

The ProcessPoolExecutor class is an Executor subclass that uses a pool of processes to execute calls asynchronously. ProcessPoolExecutor uses the multiprocessing module, which allows it to side-step the Global Interpreter Lock but also means that only picklable objects can be executed and returned.
The __main__ module must be importable by worker subprocesses. This means that ProcessPoolExecutor will not work in the interactive interpreter.
Calling Executor or Future methods from a callable submitted to a ProcessPoolExecutor will result in deadlock.

class concurrent.futures.ProcessPoolExecutor(max_workers=None, mp_context=None, initializer=None, initargs=())
An Executor subclass that executes calls asynchronously using a pool of at most max_workers processes. If max_workers is None or not given, it will default to the number of processors on the machine. If max_workers is less than or equal to 0, then a ValueError will be raised. On Windows, max_workers must be less than or equal to 61; if it is not, then ValueError will be raised. If max_workers is None, then the default chosen will be at most 61, even if more processors are available.
mp_context can be a multiprocessing context or None. It will be used to launch the workers. If mp_context is None or not given, the default multiprocessing context is used.
initializer is an optional callable that is called at the start of each worker process; initargs is a tuple of arguments passed to the initializer. Should initializer raise an exception, all currently pending jobs will raise a BrokenProcessPool, as will any attempt to submit more jobs to the pool.
Changed in version 3.3: When one of the worker processes terminates abruptly, a BrokenProcessPool error is now raised. Previously, behaviour was undefined but operations on the executor or its futures would often freeze or deadlock.
Changed in version 3.7: The mp_context argument was added to allow users to control the start_method for worker processes created by the pool. Added the initializer and initargs arguments.
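A minimal sketch of the mp_context parameter, forcing the 'spawn' start method regardless of the platform default; the square function is invented for the example:

    import concurrent.futures
    import multiprocessing

    def square(x):
        return x * x

    if __name__ == '__main__':
        ctx = multiprocessing.get_context('spawn')
        with concurrent.futures.ProcessPoolExecutor(max_workers=2,
                                                    mp_context=ctx) as executor:
            print(list(executor.map(square, range(5))))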
ProcessPoolExecutor Example

    import concurrent.futures
    import math

    PRIMES = [
        112272535095293,
        112582705942171,
        112272535095293,
        115280095190773,
        115797848077099,
        1099726899285419]

    def is_prime(n):
        if n < 2:
            return False
        if n == 2:
            return True
        if n % 2 == 0:
            return False

        sqrt_n = int(math.floor(math.sqrt(n)))
        for i in range(3, sqrt_n + 1, 2):
            if n % i == 0:
                return False
        return True

    def main():
        with concurrent.futures.ProcessPoolExecutor() as executor:
            for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
                print('%d is prime: %s' % (number, prime))

    if __name__ == '__main__':
        main()

Future Objects

The Future class encapsulates the asynchronous execution of a callable. Future instances are created by Executor.submit().

class concurrent.futures.Future
Encapsulates the asynchronous execution of a callable. Future instances are created by Executor.submit() and should not be created directly except for testing.

cancel()
Attempt to cancel the call. If the call is currently being executed or has finished running and cannot be cancelled, the method will return False; otherwise the call will be cancelled and the method will return True.

cancelled()
Return True if the call was successfully cancelled.

running()
Return True if the call is currently being executed and cannot be cancelled.

done()
Return True if the call was successfully cancelled or finished running.

result(timeout=None)
Return the value returned by the call. If the call hasn't yet completed then this method will wait up to timeout seconds. If the call hasn't completed in timeout seconds, then a concurrent.futures.TimeoutError will be raised. timeout can be an int or float. If timeout is not specified or None, there is no limit to the wait time.
If the future is cancelled before completing then CancelledError will be raised.
If the call raised an exception, this method will raise the same exception.

exception(timeout=None)
Return the exception raised by the call. If the call hasn't yet completed then this method will wait up to timeout seconds. If the call hasn't completed in timeout seconds, then a concurrent.futures.TimeoutError will be raised. timeout can be an int or float. If timeout is not specified or None, there is no limit to the wait time.
If the future is cancelled before completing then CancelledError will be raised.
If the call completed without raising, None is returned.

add_done_callback(fn)
Attaches the callable fn to the future. fn will be called, with the future as its only argument, when the future is cancelled or finishes running.
Added callables are called in the order that they were added and are always called in a thread belonging to the process that added them. If the callable raises an Exception subclass, it will be logged and ignored. If the callable raises a BaseException subclass, the behavior is undefined.
If the future has already completed or been cancelled, fn will be called immediately.
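A minimal sketch of add_done_callback(); the work and report functions are invented for the example:

    import concurrent.futures

    def work(x):
        return x + 1

    def report(future):
        # Called with the finished future as its only argument.
        if future.cancelled():
            print('cancelled')
        elif future.exception() is not None:
            print('failed: %r' % future.exception())
        else:
            print('result: %d' % future.result())

    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
        future = executor.submit(work, 41)
        future.add_done_callback(report)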
The following Future methods are meant for use in unit tests and Executor implementations.

set_running_or_notify_cancel()
This method should only be called by Executor implementations before executing the work associated with the Future and by unit tests.
If the method returns False then the Future was cancelled, i.e. Future.cancel() was called and returned True. Any threads waiting on the Future completing (i.e. through as_completed() or wait()) will be woken up.
If the method returns True then the Future was not cancelled and has been put in the running state, i.e. calls to Future.running() will return True.
This method can only be called once and cannot be called after Future.set_result() or Future.set_exception() have been called.

set_result(result)
Sets the result of the work associated with the Future to result.
This method should only be used by Executor implementations and unit tests.
Changed in version 3.8: This method raises concurrent.futures.InvalidStateError if the Future is already done.

set_exception(exception)
Sets the result of the work associated with the Future to the Exception exception.
This method should only be used by Executor implementations and unit tests.
Changed in version 3.8: This method raises concurrent.futures.InvalidStateError if the Future is already done.

Module Functions

concurrent.futures.wait(fs, timeout=None, return_when=ALL_COMPLETED)
Wait for the Future instances (possibly created by different Executor instances) given by fs to complete. Returns a named 2-tuple of sets. The first set, named done, contains the futures that completed (finished or cancelled futures) before the wait completed. The second set, named not_done, contains the futures that did not complete (pending or running futures).
timeout can be used to control the maximum number of seconds to wait before returning. timeout can be an int or float. If timeout is not specified or None, there is no limit to the wait time.
return_when indicates when this function should return. It must be one of the following constants:

    FIRST_COMPLETED   The function will return when any future finishes or is cancelled.
    FIRST_EXCEPTION   The function will return when any future finishes by raising an exception. If no future raises an exception then it is equivalent to ALL_COMPLETED.
    ALL_COMPLETED     The function will return when all futures finish or are cancelled.
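A minimal sketch of wait() with FIRST_COMPLETED; the sleepy helper and its timings are invented for the example:

    import time
    import concurrent.futures

    def sleepy(seconds):
        time.sleep(seconds)
        return seconds

    with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
        futures = [executor.submit(sleepy, s) for s in (0.1, 1, 2)]
        done, not_done = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        print('%d done; %d not done' % (len(done), len(not_done)))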
concurrent.futures.as_completed(fs, timeout=None)
Returns an iterator over the Future instances (possibly created by different Executor instances) given by fs that yields futures as they complete (finished or cancelled futures). Any futures given by fs that are duplicated will be returned once. Any futures that completed before as_completed() is called will be yielded first. The returned iterator raises a concurrent.futures.TimeoutError if __next__() is called and the result isn't available after timeout seconds from the original call to as_completed(). timeout can be an int or float. If timeout is not specified or None, there is no limit to the wait time.

See also
PEP 3148 – futures - execute computations asynchronously
The proposal which described this feature for inclusion in the Python standard library.

Exception classes

exception concurrent.futures.CancelledError
Raised when a future is cancelled.

exception concurrent.futures.TimeoutError
Raised when a future operation exceeds the given timeout.

exception concurrent.futures.BrokenExecutor
Derived from RuntimeError, this exception class is raised when an executor is broken for some reason, and cannot be used to submit or execute new tasks.
New in version 3.7.

exception concurrent.futures.InvalidStateError
Raised when an operation is performed on a future that is not allowed in the current state.
New in version 3.8.

exception concurrent.futures.thread.BrokenThreadPool
Derived from BrokenExecutor, this exception class is raised when one of the workers of a ThreadPoolExecutor has failed initializing.
New in version 3.7.

exception concurrent.futures.process.BrokenProcessPool
Derived from BrokenExecutor (formerly RuntimeError), this exception class is raised when one of the workers of a ProcessPoolExecutor has terminated in a non-clean fashion (for example, if it was killed from the outside).
New in version 3.3.
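A hedged sketch of observing a broken pool; the crash function is invented for the example and deliberately kills its worker with os._exit() to simulate abrupt termination:

    import os
    import concurrent.futures
    from concurrent.futures.process import BrokenProcessPool

    def crash():
        os._exit(1)  # terminate the worker process without cleanup

    if __name__ == '__main__':
        with concurrent.futures.ProcessPoolExecutor(max_workers=1) as executor:
            future = executor.submit(crash)
            try:
                future.result()
            except BrokenProcessPool:
                print('pool is broken; it cannot run further tasks')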
python.library.concurrent.futures
concurrent.futures.as_completed(fs, timeout=None) Returns an iterator over the Future instances (possibly created by different Executor instances) given by fs that yields futures as they complete (finished or cancelled futures). Any futures given by fs that are duplicated will be returned once. Any futures that completed before as_completed() is called will be yielded first. The returned iterator raises a concurrent.futures.TimeoutError if __next__() is called and the result isn’t available after timeout seconds from the original call to as_completed(). timeout can be an int or float. If timeout is not specified or None, there is no limit to the wait time.
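A minimal sketch of as_completed() with a timeout; the sleepy helper and its timings are invented for the example:

    import time
    import concurrent.futures

    def sleepy(seconds):
        time.sleep(seconds)
        return seconds

    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        futures = [executor.submit(sleepy, s) for s in (0.1, 5)]
        try:
            for future in concurrent.futures.as_completed(futures, timeout=1):
                print('finished: %s' % future.result())
        except concurrent.futures.TimeoutError:
            print('not every future completed within 1 second')
        # Leaving the with block still waits for the slow future to finish.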
python.library.concurrent.futures#concurrent.futures.as_completed
exception concurrent.futures.BrokenExecutor Derived from RuntimeError, this exception class is raised when an executor is broken for some reason, and cannot be used to submit or execute new tasks. New in version 3.7.
python.library.concurrent.futures#concurrent.futures.BrokenExecutor
exception concurrent.futures.CancelledError Raised when a future is cancelled.
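A minimal sketch of cancelling a pending future and observing CancelledError; the slow helper is invented for the example:

    import time
    import concurrent.futures

    def slow():
        time.sleep(1)

    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
        executor.submit(slow)           # occupies the only worker thread
        future = executor.submit(slow)  # still pending, so it can be cancelled
        if future.cancel():
            try:
                future.result()
            except concurrent.futures.CancelledError:
                print('the future was cancelled')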
python.library.concurrent.futures#concurrent.futures.CancelledError
class concurrent.futures.Executor
An abstract class that provides methods to execute calls asynchronously. It should not be used directly, but through its concrete subclasses.

submit(fn, /, *args, **kwargs)
Schedules the callable, fn, to be executed as fn(*args, **kwargs) and returns a Future object representing the execution of the callable.

    with ThreadPoolExecutor(max_workers=1) as executor:
        future = executor.submit(pow, 323, 1235)
        print(future.result())

map(func, *iterables, timeout=None, chunksize=1)
Similar to map(func, *iterables) except:
- the iterables are collected immediately rather than lazily;
- func is executed asynchronously and several calls to func may be made concurrently.
The returned iterator raises a concurrent.futures.TimeoutError if __next__() is called and the result isn't available after timeout seconds from the original call to Executor.map(). timeout can be an int or a float. If timeout is not specified or None, there is no limit to the wait time.
If a func call raises an exception, then that exception will be raised when its value is retrieved from the iterator.
When using ProcessPoolExecutor, this method chops iterables into a number of chunks which it submits to the pool as separate tasks. The (approximate) size of these chunks can be specified by setting chunksize to a positive integer. For very long iterables, using a large value for chunksize can significantly improve performance compared to the default size of 1. With ThreadPoolExecutor, chunksize has no effect.
Changed in version 3.5: Added the chunksize argument.

shutdown(wait=True, *, cancel_futures=False)
Signal the executor that it should free any resources that it is using when the currently pending futures are done executing. Calls to Executor.submit() and Executor.map() made after shutdown will raise RuntimeError.
If wait is True then this method will not return until all the pending futures are done executing and the resources associated with the executor have been freed. If wait is False then this method will return immediately and the resources associated with the executor will be freed when all pending futures are done executing. Regardless of the value of wait, the entire Python program will not exit until all pending futures are done executing.
If cancel_futures is True, this method will cancel all pending futures that the executor has not started running. Any futures that are completed or running won't be cancelled, regardless of the value of cancel_futures.
If both cancel_futures and wait are True, all futures that the executor has started running will be completed prior to this method returning. The remaining futures are cancelled.
You can avoid having to call this method explicitly if you use the with statement, which will shut down the Executor (waiting as if Executor.shutdown() were called with wait set to True):

    import shutil
    with ThreadPoolExecutor(max_workers=4) as e:
        e.submit(shutil.copy, 'src1.txt', 'dest1.txt')
        e.submit(shutil.copy, 'src2.txt', 'dest2.txt')
        e.submit(shutil.copy, 'src3.txt', 'dest3.txt')
        e.submit(shutil.copy, 'src4.txt', 'dest4.txt')

Changed in version 3.9: Added cancel_futures.
python.library.concurrent.futures#concurrent.futures.Executor
map(func, *iterables, timeout=None, chunksize=1)
Similar to map(func, *iterables) except:
- the iterables are collected immediately rather than lazily;
- func is executed asynchronously and several calls to func may be made concurrently.
The returned iterator raises a concurrent.futures.TimeoutError if __next__() is called and the result isn't available after timeout seconds from the original call to Executor.map(). timeout can be an int or a float. If timeout is not specified or None, there is no limit to the wait time.
If a func call raises an exception, then that exception will be raised when its value is retrieved from the iterator.
When using ProcessPoolExecutor, this method chops iterables into a number of chunks which it submits to the pool as separate tasks. The (approximate) size of these chunks can be specified by setting chunksize to a positive integer. For very long iterables, using a large value for chunksize can significantly improve performance compared to the default size of 1. With ThreadPoolExecutor, chunksize has no effect.
Changed in version 3.5: Added the chunksize argument.
python.library.concurrent.futures#concurrent.futures.Executor.map
shutdown(wait=True, *, cancel_futures=False)
Signal the executor that it should free any resources that it is using when the currently pending futures are done executing. Calls to Executor.submit() and Executor.map() made after shutdown will raise RuntimeError.
If wait is True then this method will not return until all the pending futures are done executing and the resources associated with the executor have been freed. If wait is False then this method will return immediately and the resources associated with the executor will be freed when all pending futures are done executing. Regardless of the value of wait, the entire Python program will not exit until all pending futures are done executing.
If cancel_futures is True, this method will cancel all pending futures that the executor has not started running. Any futures that are completed or running won't be cancelled, regardless of the value of cancel_futures.
If both cancel_futures and wait are True, all futures that the executor has started running will be completed prior to this method returning. The remaining futures are cancelled.
You can avoid having to call this method explicitly if you use the with statement, which will shut down the Executor (waiting as if Executor.shutdown() were called with wait set to True):

    import shutil
    with ThreadPoolExecutor(max_workers=4) as e:
        e.submit(shutil.copy, 'src1.txt', 'dest1.txt')
        e.submit(shutil.copy, 'src2.txt', 'dest2.txt')
        e.submit(shutil.copy, 'src3.txt', 'dest3.txt')
        e.submit(shutil.copy, 'src4.txt', 'dest4.txt')

Changed in version 3.9: Added cancel_futures.
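A minimal sketch of cancel_futures (requires Python 3.9+); the slow helper and the task count are invented for the example:

    import time
    import concurrent.futures

    def slow(x):
        time.sleep(1)
        return x

    executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    futures = [executor.submit(slow, i) for i in range(10)]
    # Stop accepting new work and cancel everything that has not started yet.
    executor.shutdown(wait=False, cancel_futures=True)
    print('%d futures were cancelled' % sum(1 for f in futures if f.cancelled()))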
python.library.concurrent.futures#concurrent.futures.Executor.shutdown
submit(fn, /, *args, **kwargs)
Schedules the callable, fn, to be executed as fn(*args, **kwargs) and returns a Future object representing the execution of the callable.

    with ThreadPoolExecutor(max_workers=1) as executor:
        future = executor.submit(pow, 323, 1235)
        print(future.result())
python.library.concurrent.futures#concurrent.futures.Executor.submit
class concurrent.futures.Future
Encapsulates the asynchronous execution of a callable. Future instances are created by Executor.submit() and should not be created directly except for testing.

cancel()
Attempt to cancel the call. If the call is currently being executed or has finished running and cannot be cancelled, the method will return False; otherwise the call will be cancelled and the method will return True.

cancelled()
Return True if the call was successfully cancelled.

running()
Return True if the call is currently being executed and cannot be cancelled.

done()
Return True if the call was successfully cancelled or finished running.

result(timeout=None)
Return the value returned by the call. If the call hasn't yet completed then this method will wait up to timeout seconds. If the call hasn't completed in timeout seconds, then a concurrent.futures.TimeoutError will be raised. timeout can be an int or float. If timeout is not specified or None, there is no limit to the wait time.
If the future is cancelled before completing then CancelledError will be raised.
If the call raised an exception, this method will raise the same exception.

exception(timeout=None)
Return the exception raised by the call. If the call hasn't yet completed then this method will wait up to timeout seconds. If the call hasn't completed in timeout seconds, then a concurrent.futures.TimeoutError will be raised. timeout can be an int or float. If timeout is not specified or None, there is no limit to the wait time.
If the future is cancelled before completing then CancelledError will be raised.
If the call completed without raising, None is returned.

add_done_callback(fn)
Attaches the callable fn to the future. fn will be called, with the future as its only argument, when the future is cancelled or finishes running.
Added callables are called in the order that they were added and are always called in a thread belonging to the process that added them. If the callable raises an Exception subclass, it will be logged and ignored. If the callable raises a BaseException subclass, the behavior is undefined.
If the future has already completed or been cancelled, fn will be called immediately.

The following Future methods are meant for use in unit tests and Executor implementations.

set_running_or_notify_cancel()
This method should only be called by Executor implementations before executing the work associated with the Future and by unit tests.
If the method returns False then the Future was cancelled, i.e. Future.cancel() was called and returned True. Any threads waiting on the Future completing (i.e. through as_completed() or wait()) will be woken up.
If the method returns True then the Future was not cancelled and has been put in the running state, i.e. calls to Future.running() will return True.
This method can only be called once and cannot be called after Future.set_result() or Future.set_exception() have been called.

set_result(result)
Sets the result of the work associated with the Future to result.
This method should only be used by Executor implementations and unit tests.
Changed in version 3.8: This method raises concurrent.futures.InvalidStateError if the Future is already done.

set_exception(exception)
Sets the result of the work associated with the Future to the Exception exception.
This method should only be used by Executor implementations and unit tests.
Changed in version 3.8: This method raises concurrent.futures.InvalidStateError if the Future is already done.
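Since Future instances may be created directly for testing, here is a hedged sketch of a hand-built future used as a test double; summarize() is a hypothetical unit under test, not part of the documented API:

    import concurrent.futures

    def summarize(future):
        return 'got %s' % future.result()

    future = concurrent.futures.Future()
    future.set_running_or_notify_cancel()  # move the future to the running state
    future.set_result(42)
    assert summarize(future) == 'got 42'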
python.library.concurrent.futures#concurrent.futures.Future