lngettext(singular, plural, n) Equivalent to gettext() and ngettext(), but the translation is returned as a byte string encoded in the preferred system encoding if no encoding was explicitly set with set_output_charset(). Warning These methods should be avoided in Python 3. See the warning for the lgettext() function. Deprecated since version 3.8, will be removed in version 3.10.
python.library.gettext#gettext.GNUTranslations.lngettext
ngettext(singular, plural, n) Do a plural-forms lookup of a message id. singular is used as the message id for purposes of lookup in the catalog, while n is used to determine which plural form to use. The returned message string is a Unicode string. If the message id is not found in the catalog, and a fallback is specified, the request is forwarded to the fallback’s ngettext() method. Otherwise, when n is 1 singular is returned, and plural is returned in all other cases. Here is an example: n = len(os.listdir('.')) cat = GNUTranslations(somefile) message = cat.ngettext( 'There is %(num)d file in this directory', 'There are %(num)d files in this directory', n) % {'num': n}
python.library.gettext#gettext.GNUTranslations.ngettext
npgettext(context, singular, plural, n) Do a plural-forms lookup of a message id. singular is used as the message id for purposes of lookup in the catalog, while n is used to determine which plural form to use. If the message id for context is not found in the catalog, and a fallback is specified, the request is forwarded to the fallback’s npgettext() method. Otherwise, when n is 1 singular is returned, and plural is returned in all other cases. New in version 3.8.
python.library.gettext#gettext.GNUTranslations.npgettext
pgettext(context, message) Look up the context and message id in the catalog and return the corresponding message string, as a Unicode string. If there is no entry in the catalog for the message id and context, and a fallback has been set, the look up is forwarded to the fallback’s pgettext() method. Otherwise, the message id is returned. New in version 3.8.
python.library.gettext#gettext.GNUTranslations.pgettext
gettext.install(domain, localedir=None, codeset=None, names=None) This installs the function _() in Python’s builtins namespace, based on domain, localedir, and codeset which are passed to the function translation(). For the names parameter, please see the description of the translation object’s install() method. As seen below, you usually mark the strings in your application that are candidates for translation, by wrapping them in a call to the _() function, like this: print(_('This string will be translated.')) For convenience, you want the _() function to be installed in Python’s builtins namespace, so it is easily accessible in all modules of your application. Deprecated since version 3.8, will be removed in version 3.10: The codeset parameter.
python.library.gettext#gettext.install
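A minimal sketch of the workflow described above, assuming a hypothetical domain 'myapplication' with catalogs under /usr/share/locale; after install() runs, _() is available in every module without an import:

    import gettext

    # Hypothetical domain and locale directory; translation() is consulted
    # under the hood, and _() is bound in the builtins namespace.
    gettext.install('myapplication', localedir='/usr/share/locale')

    print(_('This string will be translated.'))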
gettext.ldgettext(domain, message)
python.library.gettext#gettext.ldgettext
gettext.ldngettext(domain, singular, plural, n) Equivalent to the corresponding functions without the l prefix (gettext(), dgettext(), ngettext() and dngettext()), but the translation is returned as a byte string encoded in the preferred system encoding if no other encoding was explicitly set with bind_textdomain_codeset(). Warning These functions should be avoided in Python 3, because they return encoded bytes. It’s much better to use alternatives which return Unicode strings instead, since most Python applications will want to manipulate human readable text as strings instead of bytes. Further, it’s possible that you may get unexpected Unicode-related exceptions if there are encoding problems with the translated strings. Deprecated since version 3.8, will be removed in version 3.10.
python.library.gettext#gettext.ldngettext
gettext.lgettext(message)
python.library.gettext#gettext.lgettext
gettext.lngettext(singular, plural, n)
python.library.gettext#gettext.lngettext
gettext.ngettext(singular, plural, n) Like gettext(), but consider plural forms. If a translation is found, apply the plural formula to n, and return the resulting message (some languages have more than two plural forms). If no translation is found, return singular if n is 1; return plural otherwise. The Plural formula is taken from the catalog header. It is a C or Python expression that has a free variable n; the expression evaluates to the index of the plural in the catalog. See the GNU gettext documentation for the precise syntax to be used in .po files and the formulas for a variety of languages.
python.library.gettext#gettext.ngettext
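For illustration, a small sketch of the module-level plural-forms lookup; the English strings below act as fallbacks when no catalog provides a translation:

    import gettext

    n = 3
    # Falls back to the English plural rule (singular only when n == 1)
    # if the current catalog has no translation for this message id.
    message = gettext.ngettext(
        'There is %(num)d file in this directory',
        'There are %(num)d files in this directory',
        n) % {'num': n}
    print(message)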
gettext.npgettext(context, singular, plural, n)
python.library.gettext#gettext.npgettext
class gettext.NullTranslations(fp=None) Takes an optional file object fp, which is ignored by the base class. Initializes “protected” instance variables _info and _charset which are set by derived classes, as well as _fallback, which is set through add_fallback(). It then calls self._parse(fp) if fp is not None. _parse(fp) No-op in the base class, this method takes file object fp, and reads the data from the file, initializing its message catalog. If you have an unsupported message catalog file format, you should override this method to parse your format. add_fallback(fallback) Add fallback as the fallback object for the current translation object. A translation object should consult the fallback if it cannot provide a translation for a given message. gettext(message) If a fallback has been set, forward gettext() to the fallback. Otherwise, return message. Overridden in derived classes. ngettext(singular, plural, n) If a fallback has been set, forward ngettext() to the fallback. Otherwise, return singular if n is 1; return plural otherwise. Overridden in derived classes. pgettext(context, message) If a fallback has been set, forward pgettext() to the fallback. Otherwise, return the translated message. Overridden in derived classes. New in version 3.8. npgettext(context, singular, plural, n) If a fallback has been set, forward npgettext() to the fallback. Otherwise, return the translated message. Overridden in derived classes. New in version 3.8. lgettext(message) lngettext(singular, plural, n) Equivalent to gettext() and ngettext(), but the translation is returned as a byte string encoded in the preferred system encoding if no encoding was explicitly set with set_output_charset(). Overridden in derived classes. Warning These methods should be avoided in Python 3. See the warning for the lgettext() function. Deprecated since version 3.8, will be removed in version 3.10. info() Return the “protected” _info variable, a dictionary containing the metadata found in the message catalog file. charset() Return the encoding of the message catalog file. output_charset() Return the encoding used to return translated messages in lgettext() and lngettext(). Deprecated since version 3.8, will be removed in version 3.10. set_output_charset(charset) Change the encoding used to return translated messages. Deprecated since version 3.8, will be removed in version 3.10. install(names=None) This method installs gettext() into the built-in namespace, binding it to _. If the names parameter is given, it must be a sequence containing the names of functions you want to install in the builtins namespace in addition to _(). Supported names are 'gettext', 'ngettext', 'pgettext', 'npgettext', 'lgettext', and 'lngettext'. Note that this is only one way, albeit the most convenient way, to make the _() function available to your application. Because it affects the entire application globally, and specifically the built-in namespace, localized modules should never install _(). Instead, they should use this code to make _() available to their module: import gettext t = gettext.translation('mymodule', ...) _ = t.gettext This puts _() only in the module’s global namespace and so only affects calls within this module. Changed in version 3.8: Added 'pgettext' and 'npgettext'.
python.library.gettext#gettext.NullTranslations
add_fallback(fallback) Add fallback as the fallback object for the current translation object. A translation object should consult the fallback if it cannot provide a translation for a given message.
python.library.gettext#gettext.NullTranslations.add_fallback
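A hedged sketch of chaining fallbacks: the domain name, locale directory, and languages below are hypothetical, and fallback=True keeps the example from raising OSError when a catalog is missing:

    import gettext

    de = gettext.translation('myapp', localedir='locale',
                             languages=['de'], fallback=True)
    en = gettext.translation('myapp', localedir='locale',
                             languages=['en'], fallback=True)
    # Messages the German catalog cannot translate are forwarded to the
    # English catalog; anything still missing comes back untranslated.
    de.add_fallback(en)
    _ = de.gettext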
charset() Return the encoding of the message catalog file.
python.library.gettext#gettext.NullTranslations.charset
gettext(message) If a fallback has been set, forward gettext() to the fallback. Otherwise, return message. Overridden in derived classes.
python.library.gettext#gettext.NullTranslations.gettext
info() Return the “protected” _info variable, a dictionary containing the metadata found in the message catalog file.
python.library.gettext#gettext.NullTranslations.info
install(names=None) This method installs gettext() into the built-in namespace, binding it to _. If the names parameter is given, it must be a sequence containing the names of functions you want to install in the builtins namespace in addition to _(). Supported names are 'gettext', 'ngettext', 'pgettext', 'npgettext', 'lgettext', and 'lngettext'. Note that this is only one way, albeit the most convenient way, to make the _() function available to your application. Because it affects the entire application globally, and specifically the built-in namespace, localized modules should never install _(). Instead, they should use this code to make _() available to their module: import gettext t = gettext.translation('mymodule', ...) _ = t.gettext This puts _() only in the module’s global namespace and so only affects calls within this module. Changed in version 3.8: Added 'pgettext' and 'npgettext'.
python.library.gettext#gettext.NullTranslations.install
lgettext(message)
python.library.gettext#gettext.NullTranslations.lgettext
lngettext(singular, plural, n) Equivalent to gettext() and ngettext(), but the translation is returned as a byte string encoded in the preferred system encoding if no encoding was explicitly set with set_output_charset(). Overridden in derived classes. Warning These methods should be avoided in Python 3. See the warning for the lgettext() function. Deprecated since version 3.8, will be removed in version 3.10.
python.library.gettext#gettext.NullTranslations.lngettext
ngettext(singular, plural, n) If a fallback has been set, forward ngettext() to the fallback. Otherwise, return singular if n is 1; return plural otherwise. Overridden in derived classes.
python.library.gettext#gettext.NullTranslations.ngettext
npgettext(context, singular, plural, n) If a fallback has been set, forward npgettext() to the fallback. Otherwise, return the translated message. Overridden in derived classes. New in version 3.8.
python.library.gettext#gettext.NullTranslations.npgettext
output_charset() Return the encoding used to return translated messages in lgettext() and lngettext(). Deprecated since version 3.8, will be removed in version 3.10.
python.library.gettext#gettext.NullTranslations.output_charset
pgettext(context, message) If a fallback has been set, forward pgettext() to the fallback. Otherwise, return the translated message. Overridden in derived classes. New in version 3.8.
python.library.gettext#gettext.NullTranslations.pgettext
set_output_charset(charset) Change the encoding used to return translated messages. Deprecated since version 3.8, will be removed in version 3.10.
python.library.gettext#gettext.NullTranslations.set_output_charset
_parse(fp) No-op in the base class, this method takes file object fp, and reads the data from the file, initializing its message catalog. If you have an unsupported message catalog file format, you should override this method to parse your format.
python.library.gettext#gettext.NullTranslations._parse
gettext.pgettext(context, message)
python.library.gettext#gettext.pgettext
gettext.textdomain(domain=None) Change or query the current global domain. If domain is None, then the current global domain is returned, otherwise the global domain is set to domain, which is returned.
python.library.gettext#gettext.textdomain
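A short sketch of setting and then querying the global domain (the domain name and directory are placeholders):

    import gettext

    gettext.bindtextdomain('myapplication', '/usr/share/locale')
    gettext.textdomain('myapplication')   # set the global domain
    current = gettext.textdomain(None)    # query it: 'myapplication'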
gettext.translation(domain, localedir=None, languages=None, class_=None, fallback=False, codeset=None) Return a *Translations instance based on the domain, localedir, and languages, which are first passed to find() to get a list of the associated .mo file paths. Instances with identical .mo file names are cached. The actual class instantiated is class_ if provided, otherwise GNUTranslations. The class’s constructor must take a single file object argument. If provided, codeset will change the charset used to encode translated strings in the lgettext() and lngettext() methods. If multiple files are found, later files are used as fallbacks for earlier ones. To allow setting the fallback, copy.copy() is used to clone each translation object from the cache; the actual instance data is still shared with the cache. If no .mo file is found, this function raises OSError if fallback is false (which is the default), and returns a NullTranslations instance if fallback is true. Changed in version 3.3: IOError used to be raised instead of OSError. Deprecated since version 3.8, will be removed in version 3.10: The codeset parameter.
python.library.gettext#gettext.translation
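A minimal class-based sketch, assuming a hypothetical 'myapp' domain with catalogs under ./locale; fallback=True returns a NullTranslations instance instead of raising OSError when no .mo file is found:

    import gettext

    t = gettext.translation('myapp', localedir='locale',
                            languages=['es'], fallback=True)
    _ = t.gettext
    print(_('Hello, world'))   # translated if the Spanish catalog has it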
glob — Unix style pathname pattern expansion Source code: Lib/glob.py The glob module finds all the pathnames matching a specified pattern according to the rules used by the Unix shell, although results are returned in arbitrary order. No tilde expansion is done, but *, ?, and character ranges expressed with [] will be correctly matched. This is done by using the os.scandir() and fnmatch.fnmatch() functions in concert, and not by actually invoking a subshell. Note that unlike fnmatch.fnmatch(), glob treats filenames beginning with a dot (.) as special cases. (For tilde and shell variable expansion, use os.path.expanduser() and os.path.expandvars().) For a literal match, wrap the meta-characters in brackets. For example, '[?]' matches the character '?'. See also The pathlib module offers high-level path objects. glob.glob(pathname, *, recursive=False) Return a possibly-empty list of path names that match pathname, which must be a string containing a path specification. pathname can be either absolute (like /usr/src/Python-1.5/Makefile) or relative (like ../../Tools/*/*.gif), and can contain shell-style wildcards. Broken symlinks are included in the results (as in the shell). Whether or not the results are sorted depends on the file system. If a file that satisfies conditions is removed or added during the call of this function, whether a path name for that file be included is unspecified. If recursive is true, the pattern “**” will match any files and zero or more directories, subdirectories and symbolic links to directories. If the pattern is followed by an os.sep or os.altsep then files will not match. Raises an auditing event glob.glob with arguments pathname, recursive. Note Using the “**” pattern in large directory trees may consume an inordinate amount of time. Changed in version 3.5: Support for recursive globs using “**”. glob.iglob(pathname, *, recursive=False) Return an iterator which yields the same values as glob() without actually storing them all simultaneously. Raises an auditing event glob.glob with arguments pathname, recursive. glob.escape(pathname) Escape all special characters ('?', '*' and '['). This is useful if you want to match an arbitrary literal string that may have special characters in it. Special characters in drive/UNC sharepoints are not escaped, e.g. on Windows escape('//?/c:/Quo vadis?.txt') returns '//?/c:/Quo vadis[?].txt'. New in version 3.4. For example, consider a directory containing the following files: 1.gif, 2.txt, card.gif and a subdirectory sub which contains only the file 3.txt. glob() will produce the following results. Notice how any leading components of the path are preserved. >>> import glob >>> glob.glob('./[0-9].*') ['./1.gif', './2.txt'] >>> glob.glob('*.gif') ['1.gif', 'card.gif'] >>> glob.glob('?.gif') ['1.gif'] >>> glob.glob('**/*.txt', recursive=True) ['2.txt', 'sub/3.txt'] >>> glob.glob('./**/', recursive=True) ['./', './sub/'] If the directory contains files starting with . they won’t be matched by default. For example, consider a directory containing card.gif and .card.gif: >>> import glob >>> glob.glob('*.gif') ['card.gif'] >>> glob.glob('.c*') ['.card.gif'] See also Module fnmatch Shell-style filename (not path) expansion
python.library.glob
glob.escape(pathname) Escape all special characters ('?', '*' and '['). This is useful if you want to match an arbitrary literal string that may have special characters in it. Special characters in drive/UNC sharepoints are not escaped, e.g. on Windows escape('//?/c:/Quo vadis?.txt') returns '//?/c:/Quo vadis[?].txt'. New in version 3.4.
python.library.glob#glob.escape
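For example, to match a literal filename that itself contains wildcard characters, escape the pattern first (the filename here is hypothetical):

    import glob

    literal = 'draft [v2]?.txt'
    pattern = glob.escape(literal)   # 'draft [[]v2][?].txt'
    matches = glob.glob(pattern)     # matches only the literal name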
glob.glob(pathname, *, recursive=False) Return a possibly-empty list of path names that match pathname, which must be a string containing a path specification. pathname can be either absolute (like /usr/src/Python-1.5/Makefile) or relative (like ../../Tools/*/*.gif), and can contain shell-style wildcards. Broken symlinks are included in the results (as in the shell). Whether or not the results are sorted depends on the file system. If a file that satisfies the conditions is removed or added during the call of this function, whether a path name for that file will be included is unspecified. If recursive is true, the pattern “**” will match any files and zero or more directories, subdirectories and symbolic links to directories. If the pattern is followed by an os.sep or os.altsep then files will not match. Raises an auditing event glob.glob with arguments pathname, recursive. Note Using the “**” pattern in large directory trees may consume an inordinate amount of time. Changed in version 3.5: Support for recursive globs using “**”.
python.library.glob#glob.glob
glob.iglob(pathname, *, recursive=False) Return an iterator which yields the same values as glob() without actually storing them all simultaneously. Raises an auditing event glob.glob with arguments pathname, recursive.
python.library.glob#glob.iglob
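Because iglob() yields results lazily, it suits large trees where building the full list up front would be wasteful; a small sketch:

    import glob

    # Walk the tree one match at a time instead of materializing a list.
    for path in glob.iglob('**/*.py', recursive=True):
        print(path)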
globals() Return a dictionary representing the current global symbol table. This is always the dictionary of the current module (inside a function or method, this is the module where it is defined, not the module from which it is called).
python.library.functions#globals
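A small illustration: because globals() is the module's own dictionary, mutating it rebinds module-level names.

    counter = 0

    def bump():
        # globals() returns this module's namespace, so the assignment
        # below changes the module-level 'counter'.
        globals()['counter'] += 1

    bump()
    print(counter)   # 1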
graphlib — Functionality to operate with graph-like structures Source code: Lib/graphlib.py class graphlib.TopologicalSorter(graph=None) Provides functionality to topologically sort a graph of hashable nodes. A topological order is a linear ordering of the vertices in a graph such that for every directed edge u -> v from vertex u to vertex v, vertex u comes before vertex v in the ordering. For instance, the vertices of the graph may represent tasks to be performed, and the edges may represent constraints that one task must be performed before another; in this example, a topological ordering is just a valid sequence for the tasks. A complete topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph. If the optional graph argument is provided it must be a dictionary representing a directed acyclic graph where the keys are nodes and the values are iterables of all predecessors of that node in the graph (the nodes that have edges that point to the value in the key). Additional nodes can be added to the graph using the add() method. In the general case, the steps required to perform the sorting of a given graph are as follows: Create an instance of the TopologicalSorter with an optional initial graph. Add additional nodes to the graph. Call prepare() on the graph. While is_active() is True, iterate over the nodes returned by get_ready() and process them. Call done() on each node as it finishes processing. In case just an immediate sorting of the nodes in the graph is required and no parallelism is involved, the convenience method TopologicalSorter.static_order() can be used directly: >>> graph = {"D": {"B", "C"}, "C": {"A"}, "B": {"A"}} >>> ts = TopologicalSorter(graph) >>> tuple(ts.static_order()) ('A', 'C', 'B', 'D') The class is designed to easily support parallel processing of the nodes as they become ready. For instance: topological_sorter = TopologicalSorter() # Add nodes to 'topological_sorter'... topological_sorter.prepare() while topological_sorter.is_active(): for node in topological_sorter.get_ready(): # Worker threads or processes take nodes to work on off the # 'task_queue' queue. task_queue.put(node) # When the work for a node is done, workers put the node in # 'finalized_tasks_queue' so we can get more nodes to work on. # The definition of 'is_active()' guarantees that, at this point, at # least one node has been placed on 'task_queue' that hasn't yet # been passed to 'done()', so this blocking 'get()' must (eventually) # succeed. After calling 'done()', we loop back to call 'get_ready()' # again, so put newly freed nodes on 'task_queue' as soon as # logically possible. node = finalized_tasks_queue.get() topological_sorter.done(node) add(node, *predecessors) Add a new node and its predecessors to the graph. Both the node and all elements in predecessors must be hashable. If called multiple times with the same node argument, the set of dependencies will be the union of all dependencies passed in. It is possible to add a node with no dependencies (predecessors is not provided) or to provide a dependency twice. If a node that has not been provided before is included among predecessors it will be automatically added to the graph with no predecessors of its own. Raises ValueError if called after prepare(). prepare() Mark the graph as finished and check for cycles in the graph. 
If any cycle is detected, CycleError will be raised, but get_ready() can still be used to obtain as many nodes as possible until cycles block more progress. After a call to this function, the graph cannot be modified, and therefore no more nodes can be added using add(). is_active() Returns True if more progress can be made and False otherwise. Progress can be made if cycles do not block the resolution and either there are still nodes ready that haven’t yet been returned by TopologicalSorter.get_ready() or the number of nodes marked TopologicalSorter.done() is less than the number that have been returned by TopologicalSorter.get_ready(). The __bool__() method of this class defers to this function, so instead of: if ts.is_active(): ... it is possible to simply do: if ts: ... Raises ValueError if called without calling prepare() previously. done(*nodes) Marks a set of nodes returned by TopologicalSorter.get_ready() as processed, unblocking any successor of each node in nodes for being returned in the future by a call to TopologicalSorter.get_ready(). Raises ValueError if any node in nodes has already been marked as processed by a previous call to this method or if a node was not added to the graph by using TopologicalSorter.add(), if called without calling prepare() or if node has not yet been returned by get_ready(). get_ready() Returns a tuple with all the nodes that are ready. Initially it returns all nodes with no predecessors, and once those are marked as processed by calling TopologicalSorter.done(), further calls will return all new nodes that have all their predecessors already processed. Once no more progress can be made, empty tuples are returned. Raises ValueError if called without calling prepare() previously. static_order() Returns an iterable of nodes in a topological order. Using this method does not require to call TopologicalSorter.prepare() or TopologicalSorter.done(). This method is equivalent to: def static_order(self): self.prepare() while self.is_active(): node_group = self.get_ready() yield from node_group self.done(*node_group) The particular order that is returned may depend on the specific order in which the items were inserted in the graph. For example: >>> ts = TopologicalSorter() >>> ts.add(3, 2, 1) >>> ts.add(1, 0) >>> print([*ts.static_order()]) [2, 0, 1, 3] >>> ts2 = TopologicalSorter() >>> ts2.add(1, 0) >>> ts2.add(3, 2, 1) >>> print([*ts2.static_order()]) [0, 2, 1, 3] This is due to the fact that “0” and “2” are in the same level in the graph (they would have been returned in the same call to get_ready()) and the order between them is determined by the order of insertion. If any cycle is detected, CycleError will be raised. New in version 3.9. Exceptions The graphlib module defines the following exception classes: exception graphlib.CycleError Subclass of ValueError raised by TopologicalSorter.prepare() if cycles exist in the working graph. If multiple cycles exist, only one undefined choice among them will be reported and included in the exception. The detected cycle can be accessed via the second element in the args attribute of the exception instance and consists in a list of nodes, such that each node is, in the graph, an immediate predecessor of the next node in the list. In the reported list, the first and the last node will be the same, to make it clear that it is cyclic.
python.library.graphlib
exception graphlib.CycleError Subclass of ValueError raised by TopologicalSorter.prepare() if cycles exist in the working graph. If multiple cycles exist, only one undefined choice among them will be reported and included in the exception. The detected cycle can be accessed via the second element in the args attribute of the exception instance and consists in a list of nodes, such that each node is, in the graph, an immediate predecessor of the next node in the list. In the reported list, the first and the last node will be the same, to make it clear that it is cyclic.
python.library.graphlib#graphlib.CycleError
class graphlib.TopologicalSorter(graph=None) Provides functionality to topologically sort a graph of hashable nodes. A topological order is a linear ordering of the vertices in a graph such that for every directed edge u -> v from vertex u to vertex v, vertex u comes before vertex v in the ordering. For instance, the vertices of the graph may represent tasks to be performed, and the edges may represent constraints that one task must be performed before another; in this example, a topological ordering is just a valid sequence for the tasks. A complete topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph. If the optional graph argument is provided it must be a dictionary representing a directed acyclic graph where the keys are nodes and the values are iterables of all predecessors of that node in the graph (the nodes that have edges that point to the value in the key). Additional nodes can be added to the graph using the add() method. In the general case, the steps required to perform the sorting of a given graph are as follows: Create an instance of the TopologicalSorter with an optional initial graph. Add additional nodes to the graph. Call prepare() on the graph. While is_active() is True, iterate over the nodes returned by get_ready() and process them. Call done() on each node as it finishes processing. In case just an immediate sorting of the nodes in the graph is required and no parallelism is involved, the convenience method TopologicalSorter.static_order() can be used directly: >>> graph = {"D": {"B", "C"}, "C": {"A"}, "B": {"A"}} >>> ts = TopologicalSorter(graph) >>> tuple(ts.static_order()) ('A', 'C', 'B', 'D') The class is designed to easily support parallel processing of the nodes as they become ready. For instance: topological_sorter = TopologicalSorter() # Add nodes to 'topological_sorter'... topological_sorter.prepare() while topological_sorter.is_active(): for node in topological_sorter.get_ready(): # Worker threads or processes take nodes to work on off the # 'task_queue' queue. task_queue.put(node) # When the work for a node is done, workers put the node in # 'finalized_tasks_queue' so we can get more nodes to work on. # The definition of 'is_active()' guarantees that, at this point, at # least one node has been placed on 'task_queue' that hasn't yet # been passed to 'done()', so this blocking 'get()' must (eventually) # succeed. After calling 'done()', we loop back to call 'get_ready()' # again, so put newly freed nodes on 'task_queue' as soon as # logically possible. node = finalized_tasks_queue.get() topological_sorter.done(node) add(node, *predecessors) Add a new node and its predecessors to the graph. Both the node and all elements in predecessors must be hashable. If called multiple times with the same node argument, the set of dependencies will be the union of all dependencies passed in. It is possible to add a node with no dependencies (predecessors is not provided) or to provide a dependency twice. If a node that has not been provided before is included among predecessors it will be automatically added to the graph with no predecessors of its own. Raises ValueError if called after prepare(). prepare() Mark the graph as finished and check for cycles in the graph. If any cycle is detected, CycleError will be raised, but get_ready() can still be used to obtain as many nodes as possible until cycles block more progress. 
After a call to this function, the graph cannot be modified, and therefore no more nodes can be added using add(). is_active() Returns True if more progress can be made and False otherwise. Progress can be made if cycles do not block the resolution and either there are still nodes ready that haven’t yet been returned by TopologicalSorter.get_ready() or the number of nodes marked TopologicalSorter.done() is less than the number that have been returned by TopologicalSorter.get_ready(). The __bool__() method of this class defers to this function, so instead of: if ts.is_active(): ... it is possible to simply do: if ts: ... Raises ValueError if called without calling prepare() previously. done(*nodes) Marks a set of nodes returned by TopologicalSorter.get_ready() as processed, unblocking any successor of each node in nodes for being returned in the future by a call to TopologicalSorter.get_ready(). Raises ValueError if any node in nodes has already been marked as processed by a previous call to this method or if a node was not added to the graph by using TopologicalSorter.add(), if called without calling prepare() or if node has not yet been returned by get_ready(). get_ready() Returns a tuple with all the nodes that are ready. Initially it returns all nodes with no predecessors, and once those are marked as processed by calling TopologicalSorter.done(), further calls will return all new nodes that have all their predecessors already processed. Once no more progress can be made, empty tuples are returned. Raises ValueError if called without calling prepare() previously. static_order() Returns an iterable of nodes in a topological order. Using this method does not require to call TopologicalSorter.prepare() or TopologicalSorter.done(). This method is equivalent to: def static_order(self): self.prepare() while self.is_active(): node_group = self.get_ready() yield from node_group self.done(*node_group) The particular order that is returned may depend on the specific order in which the items were inserted in the graph. For example: >>> ts = TopologicalSorter() >>> ts.add(3, 2, 1) >>> ts.add(1, 0) >>> print([*ts.static_order()]) [2, 0, 1, 3] >>> ts2 = TopologicalSorter() >>> ts2.add(1, 0) >>> ts2.add(3, 2, 1) >>> print([*ts2.static_order()]) [0, 2, 1, 3] This is due to the fact that “0” and “2” are in the same level in the graph (they would have been returned in the same call to get_ready()) and the order between them is determined by the order of insertion. If any cycle is detected, CycleError will be raised. New in version 3.9.
python.library.graphlib#graphlib.TopologicalSorter
add(node, *predecessors) Add a new node and its predecessors to the graph. Both the node and all elements in predecessors must be hashable. If called multiple times with the same node argument, the set of dependencies will be the union of all dependencies passed in. It is possible to add a node with no dependencies (predecessors is not provided) or to provide a dependency twice. If a node that has not been provided before is included among predecessors it will be automatically added to the graph with no predecessors of its own. Raises ValueError if called after prepare().
python.library.graphlib#graphlib.TopologicalSorter.add
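A brief sketch of building a graph incrementally with add(); the task names are made up, and the exact order within a level depends on insertion order:

    from graphlib import TopologicalSorter

    ts = TopologicalSorter()
    ts.add('build', 'compile')       # 'build' depends on 'compile'
    ts.add('compile', 'configure')   # 'compile' depends on 'configure'
    ts.add('test')                   # a node with no predecessors
    print(list(ts.static_order()))   # e.g. ['configure', 'test', 'compile', 'build']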
done(*nodes) Marks a set of nodes returned by TopologicalSorter.get_ready() as processed, unblocking any successor of each node in nodes for being returned in the future by a call to TopologicalSorter.get_ready(). Raises ValueError if any node in nodes has already been marked as processed by a previous call to this method or if a node was not added to the graph by using TopologicalSorter.add(), if called without calling prepare() or if node has not yet been returned by get_ready().
python.library.graphlib#graphlib.TopologicalSorter.done
get_ready() Returns a tuple with all the nodes that are ready. Initially it returns all nodes with no predecessors, and once those are marked as processed by calling TopologicalSorter.done(), further calls will return all new nodes that have all their predecessors already processed. Once no more progress can be made, empty tuples are returned. Raises ValueError if called without calling prepare() previously.
python.library.graphlib#graphlib.TopologicalSorter.get_ready
is_active() Returns True if more progress can be made and False otherwise. Progress can be made if cycles do not block the resolution and either there are still nodes ready that haven’t yet been returned by TopologicalSorter.get_ready() or the number of nodes marked TopologicalSorter.done() is less than the number that have been returned by TopologicalSorter.get_ready(). The __bool__() method of this class defers to this function, so instead of: if ts.is_active(): ... it is possible to simply do: if ts: ... Raises ValueError if called without calling prepare() previously.
python.library.graphlib#graphlib.TopologicalSorter.is_active
prepare() Mark the graph as finished and check for cycles in the graph. If any cycle is detected, CycleError will be raised, but get_ready() can still be used to obtain as many nodes as possible until cycles block more progress. After a call to this function, the graph cannot be modified, and therefore no more nodes can be added using add().
python.library.graphlib#graphlib.TopologicalSorter.prepare
static_order() Returns an iterable of nodes in a topological order. Using this method does not require to call TopologicalSorter.prepare() or TopologicalSorter.done(). This method is equivalent to: def static_order(self): self.prepare() while self.is_active(): node_group = self.get_ready() yield from node_group self.done(*node_group) The particular order that is returned may depend on the specific order in which the items were inserted in the graph. For example: >>> ts = TopologicalSorter() >>> ts.add(3, 2, 1) >>> ts.add(1, 0) >>> print([*ts.static_order()]) [2, 0, 1, 3] >>> ts2 = TopologicalSorter() >>> ts2.add(1, 0) >>> ts2.add(3, 2, 1) >>> print([*ts2.static_order()]) [0, 2, 1, 3] This is due to the fact that “0” and “2” are in the same level in the graph (they would have been returned in the same call to get_ready()) and the order between them is determined by the order of insertion. If any cycle is detected, CycleError will be raised.
python.library.graphlib#graphlib.TopologicalSorter.static_order
grp — The group database This module provides access to the Unix group database. It is available on all Unix versions. Group database entries are reported as a tuple-like object, whose attributes correspond to the members of the group structure (Attribute field below, see <grp.h>):

Index  Attribute   Meaning
0      gr_name     the name of the group
1      gr_passwd   the (encrypted) group password; often empty
2      gr_gid      the numerical group ID
3      gr_mem      all the group member’s user names

The gid is an integer, name and password are strings, and the member list is a list of strings. (Note that most users are not explicitly listed as members of the group they are in according to the password database. Check both databases to get complete membership information. Also note that a gr_name that starts with a + or - is likely to be a YP/NIS reference and may not be accessible via getgrnam() or getgrgid().) It defines the following items: grp.getgrgid(gid) Return the group database entry for the given numeric group ID. KeyError is raised if the entry asked for cannot be found. Deprecated since version 3.6: Since Python 3.6 the support of non-integer arguments like floats or strings in getgrgid() is deprecated. grp.getgrnam(name) Return the group database entry for the given group name. KeyError is raised if the entry asked for cannot be found. grp.getgrall() Return a list of all available group entries, in arbitrary order. See also Module pwd An interface to the user database, similar to this. Module spwd An interface to the shadow password database, similar to this.
python.library.grp
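A short sketch of the lookups described above; the group name is hypothetical, and both lookup calls raise KeyError if the entry does not exist:

    import grp

    g = grp.getgrnam('staff')       # hypothetical group name
    print(g.gr_gid, g.gr_mem)       # numeric gid and member user names
    same = grp.getgrgid(g.gr_gid)   # round-trip by numeric id
    everyone = grp.getgrall()       # every entry, in arbitrary order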
grp.getgrall() Return a list of all available group entries, in arbitrary order.
python.library.grp#grp.getgrall
grp.getgrgid(gid) Return the group database entry for the given numeric group ID. KeyError is raised if the entry asked for cannot be found. Deprecated since version 3.6: Since Python 3.6 the support of non-integer arguments like floats or strings in getgrgid() is deprecated.
python.library.grp#grp.getgrgid
grp.getgrnam(name) Return the group database entry for the given group name. KeyError is raised if the entry asked for cannot be found.
python.library.grp#grp.getgrnam
gzip — Support for gzip files Source code: Lib/gzip.py This module provides a simple interface to compress and decompress files just like the GNU programs gzip and gunzip would. The data compression is provided by the zlib module. The gzip module provides the GzipFile class, as well as the open(), compress() and decompress() convenience functions. The GzipFile class reads and writes gzip-format files, automatically compressing or decompressing the data so that it looks like an ordinary file object. Note that additional file formats which can be decompressed by the gzip and gunzip programs, such as those produced by compress and pack, are not supported by this module. The module defines the following items: gzip.open(filename, mode='rb', compresslevel=9, encoding=None, errors=None, newline=None) Open a gzip-compressed file in binary or text mode, returning a file object. The filename argument can be an actual filename (a str or bytes object), or an existing file object to read from or write to. The mode argument can be any of 'r', 'rb', 'a', 'ab', 'w', 'wb', 'x' or 'xb' for binary mode, or 'rt', 'at', 'wt', or 'xt' for text mode. The default is 'rb'. The compresslevel argument is an integer from 0 to 9, as for the GzipFile constructor. For binary mode, this function is equivalent to the GzipFile constructor: GzipFile(filename, mode, compresslevel). In this case, the encoding, errors and newline arguments must not be provided. For text mode, a GzipFile object is created, and wrapped in an io.TextIOWrapper instance with the specified encoding, error handling behavior, and line ending(s). Changed in version 3.3: Added support for filename being a file object, support for text mode, and the encoding, errors and newline arguments. Changed in version 3.4: Added support for the 'x', 'xb' and 'xt' modes. Changed in version 3.6: Accepts a path-like object. exception gzip.BadGzipFile An exception raised for invalid gzip files. It inherits OSError. EOFError and zlib.error can also be raised for invalid gzip files. New in version 3.8. class gzip.GzipFile(filename=None, mode=None, compresslevel=9, fileobj=None, mtime=None) Constructor for the GzipFile class, which simulates most of the methods of a file object, with the exception of the truncate() method. At least one of fileobj and filename must be given a non-trivial value. The new class instance is based on fileobj, which can be a regular file, an io.BytesIO object, or any other object which simulates a file. It defaults to None, in which case filename is opened to provide a file object. When fileobj is not None, the filename argument is only used to be included in the gzip file header, which may include the original filename of the uncompressed file. It defaults to the filename of fileobj, if discernible; otherwise, it defaults to the empty string, and in this case the original filename is not included in the header. The mode argument can be any of 'r', 'rb', 'a', 'ab', 'w', 'wb', 'x', or 'xb', depending on whether the file will be read or written. The default is the mode of fileobj if discernible; otherwise, the default is 'rb'. In future Python releases the mode of fileobj will not be used. It is better to always specify mode for writing. Note that the file is always opened in binary mode. To open a compressed file in text mode, use open() (or wrap your GzipFile with an io.TextIOWrapper). 
The compresslevel argument is an integer from 0 to 9 controlling the level of compression; 1 is fastest and produces the least compression, and 9 is slowest and produces the most compression. 0 is no compression. The default is 9. The mtime argument is an optional numeric timestamp to be written to the last modification time field in the stream when compressing. It should only be provided in compression mode. If omitted or None, the current time is used. See the mtime attribute for more details. Calling a GzipFile object’s close() method does not close fileobj, since you might wish to append more material after the compressed data. This also allows you to pass an io.BytesIO object opened for writing as fileobj, and retrieve the resulting memory buffer using the io.BytesIO object’s getvalue() method. GzipFile supports the io.BufferedIOBase interface, including iteration and the with statement. Only the truncate() method isn’t implemented. GzipFile also provides the following method and attribute: peek(n) Read n uncompressed bytes without advancing the file position. At most one single read on the compressed stream is done to satisfy the call. The number of bytes returned may be more or less than requested. Note While calling peek() does not change the file position of the GzipFile, it may change the position of the underlying file object (e.g. if the GzipFile was constructed with the fileobj parameter). New in version 3.2. mtime When decompressing, the value of the last modification time field in the most recently read header may be read from this attribute, as an integer. The initial value before reading any headers is None. All gzip compressed streams are required to contain this timestamp field. Some programs, such as gunzip, make use of the timestamp. The format is the same as the return value of time.time() and the st_mtime attribute of the object returned by os.stat(). Changed in version 3.1: Support for the with statement was added, along with the mtime constructor argument and mtime attribute. Changed in version 3.2: Support for zero-padded and unseekable files was added. Changed in version 3.3: The io.BufferedIOBase.read1() method is now implemented. Changed in version 3.4: Added support for the 'x' and 'xb' modes. Changed in version 3.5: Added support for writing arbitrary bytes-like objects. The read() method now accepts an argument of None. Changed in version 3.6: Accepts a path-like object. Deprecated since version 3.9: Opening GzipFile for writing without specifying the mode argument is deprecated. gzip.compress(data, compresslevel=9, *, mtime=None) Compress the data, returning a bytes object containing the compressed data. compresslevel and mtime have the same meaning as in the GzipFile constructor above. New in version 3.2. Changed in version 3.8: Added the mtime parameter for reproducible output. gzip.decompress(data) Decompress the data, returning a bytes object containing the uncompressed data. New in version 3.2. 
Examples of usage Example of how to read a compressed file: import gzip with gzip.open('/home/joe/file.txt.gz', 'rb') as f: file_content = f.read() Example of how to create a compressed GZIP file: import gzip content = b"Lots of content here" with gzip.open('/home/joe/file.txt.gz', 'wb') as f: f.write(content) Example of how to GZIP compress an existing file: import gzip import shutil with open('/home/joe/file.txt', 'rb') as f_in: with gzip.open('/home/joe/file.txt.gz', 'wb') as f_out: shutil.copyfileobj(f_in, f_out) Example of how to GZIP compress a binary string: import gzip s_in = b"Lots of content here" s_out = gzip.compress(s_in) See also Module zlib The basic data compression module needed to support the gzip file format. Command Line Interface The gzip module provides a simple command line interface to compress or decompress files. Once executed, the gzip module keeps the input file(s). Changed in version 3.8: Added a new command line interface with a usage message. By default, the CLI uses compression level 6. Command line options file If file is not specified, read from sys.stdin. --fast Indicates the fastest compression method (less compression). --best Indicates the slowest compression method (best compression). -d, --decompress Decompress the given file. -h, --help Show the help message.
python.library.gzip
exception gzip.BadGzipFile An exception raised for invalid gzip files. It inherits OSError. EOFError and zlib.error can also be raised for invalid gzip files. New in version 3.8.
python.library.gzip#gzip.BadGzipFile
gzip.compress(data, compresslevel=9, *, mtime=None) Compress the data, returning a bytes object containing the compressed data. compresslevel and mtime have the same meaning as in the GzipFile constructor above. New in version 3.2. Changed in version 3.8: Added the mtime parameter for reproducible output.
python.library.gzip#gzip.compress
gzip.decompress(data) Decompress the data, returning a bytes object containing the uncompressed data. New in version 3.2.
python.library.gzip#gzip.decompress
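A quick in-memory round-trip with the two convenience functions, compress() and decompress():

    import gzip

    data = b'Lots of content here'
    blob = gzip.compress(data, compresslevel=9)
    assert gzip.decompress(blob) == data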
class gzip.GzipFile(filename=None, mode=None, compresslevel=9, fileobj=None, mtime=None) Constructor for the GzipFile class, which simulates most of the methods of a file object, with the exception of the truncate() method. At least one of fileobj and filename must be given a non-trivial value. The new class instance is based on fileobj, which can be a regular file, an io.BytesIO object, or any other object which simulates a file. It defaults to None, in which case filename is opened to provide a file object. When fileobj is not None, the filename argument is only used to be included in the gzip file header, which may include the original filename of the uncompressed file. It defaults to the filename of fileobj, if discernible; otherwise, it defaults to the empty string, and in this case the original filename is not included in the header. The mode argument can be any of 'r', 'rb', 'a', 'ab', 'w', 'wb', 'x', or 'xb', depending on whether the file will be read or written. The default is the mode of fileobj if discernible; otherwise, the default is 'rb'. In future Python releases the mode of fileobj will not be used. It is better to always specify mode for writing. Note that the file is always opened in binary mode. To open a compressed file in text mode, use open() (or wrap your GzipFile with an io.TextIOWrapper). The compresslevel argument is an integer from 0 to 9 controlling the level of compression; 1 is fastest and produces the least compression, and 9 is slowest and produces the most compression. 0 is no compression. The default is 9. The mtime argument is an optional numeric timestamp to be written to the last modification time field in the stream when compressing. It should only be provided in compression mode. If omitted or None, the current time is used. See the mtime attribute for more details. Calling a GzipFile object’s close() method does not close fileobj, since you might wish to append more material after the compressed data. This also allows you to pass an io.BytesIO object opened for writing as fileobj, and retrieve the resulting memory buffer using the io.BytesIO object’s getvalue() method. GzipFile supports the io.BufferedIOBase interface, including iteration and the with statement. Only the truncate() method isn’t implemented. GzipFile also provides the following method and attribute: peek(n) Read n uncompressed bytes without advancing the file position. At most one single read on the compressed stream is done to satisfy the call. The number of bytes returned may be more or less than requested. Note While calling peek() does not change the file position of the GzipFile, it may change the position of the underlying file object (e.g. if the GzipFile was constructed with the fileobj parameter). New in version 3.2. mtime When decompressing, the value of the last modification time field in the most recently read header may be read from this attribute, as an integer. The initial value before reading any headers is None. All gzip compressed streams are required to contain this timestamp field. Some programs, such as gunzip, make use of the timestamp. The format is the same as the return value of time.time() and the st_mtime attribute of the object returned by os.stat(). Changed in version 3.1: Support for the with statement was added, along with the mtime constructor argument and mtime attribute. Changed in version 3.2: Support for zero-padded and unseekable files was added. Changed in version 3.3: The io.BufferedIOBase.read1() method is now implemented. 
Changed in version 3.4: Added support for the 'x' and 'xb' modes. Changed in version 3.5: Added support for writing arbitrary bytes-like objects. The read() method now accepts an argument of None. Changed in version 3.6: Accepts a path-like object. Deprecated since version 3.9: Opening GzipFile for writing without specifying the mode argument is deprecated.
python.library.gzip#gzip.GzipFile
mtime When decompressing, the value of the last modification time field in the most recently read header may be read from this attribute, as an integer. The initial value before reading any headers is None. All gzip compressed streams are required to contain this timestamp field. Some programs, such as gunzip, make use of the timestamp. The format is the same as the return value of time.time() and the st_mtime attribute of the object returned by os.stat().
python.library.gzip#gzip.GzipFile.mtime
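A small sketch of reading the stored timestamp; the path is hypothetical, and the header (and therefore mtime) is only populated once some data has been read:

    import gzip

    with gzip.GzipFile('archive.txt.gz', 'rb') as f:   # hypothetical file
        f.read(1)        # force the header to be parsed
        print(f.mtime)   # POSIX timestamp from the gzip header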
peek(n) Read n uncompressed bytes without advancing the file position. At most one single read on the compressed stream is done to satisfy the call. The number of bytes returned may be more or less than requested. Note While calling peek() does not change the file position of the GzipFile, it may change the position of the underlying file object (e.g. if the GzipFile was constructed with the fileobj parameter). New in version 3.2.
python.library.gzip#gzip.GzipFile.peek
gzip.open(filename, mode='rb', compresslevel=9, encoding=None, errors=None, newline=None) Open a gzip-compressed file in binary or text mode, returning a file object. The filename argument can be an actual filename (a str or bytes object), or an existing file object to read from or write to. The mode argument can be any of 'r', 'rb', 'a', 'ab', 'w', 'wb', 'x' or 'xb' for binary mode, or 'rt', 'at', 'wt', or 'xt' for text mode. The default is 'rb'. The compresslevel argument is an integer from 0 to 9, as for the GzipFile constructor. For binary mode, this function is equivalent to the GzipFile constructor: GzipFile(filename, mode, compresslevel). In this case, the encoding, errors and newline arguments must not be provided. For text mode, a GzipFile object is created, and wrapped in an io.TextIOWrapper instance with the specified encoding, error handling behavior, and line ending(s). Changed in version 3.3: Added support for filename being a file object, support for text mode, and the encoding, errors and newline arguments. Changed in version 3.4: Added support for the 'x', 'xb' and 'xt' modes. Changed in version 3.6: Accepts a path-like object.
python.library.gzip#gzip.open
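Text mode wraps the stream in an io.TextIOWrapper, so iteration yields str lines; a sketch with a hypothetical path:

    import gzip

    with gzip.open('/home/joe/notes.txt.gz', 'rt', encoding='utf-8') as f:
        for line in f:
            print(line.rstrip())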
hasattr(object, name) The arguments are an object and a string. The result is True if the string is the name of one of the object’s attributes, False if not. (This is implemented by calling getattr(object, name) and seeing whether it raises an AttributeError or not.)
python.library.functions#hasattr
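For instance:

    class Point:
        def __init__(self):
            self.x = 0

    p = Point()
    hasattr(p, 'x')   # True
    hasattr(p, 'y')   # False: getattr(p, 'y') would raise AttributeError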
hash(object) Return the hash value of the object (if it has one). Hash values are integers. They are used to quickly compare dictionary keys during a dictionary lookup. Numeric values that compare equal have the same hash value (even if they are of different types, as is the case for 1 and 1.0). Note For objects with custom __hash__() methods, note that hash() truncates the return value based on the bit width of the host machine. See __hash__() for details.
python.library.functions#hash
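Two small consequences of the rules above:

    hash(1) == hash(1.0)    # True: equal numeric values hash alike
    d = {1: 'int key'}
    d[1.0]                  # 'int key': 1 and 1.0 refer to the same entry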
hashlib — Secure hashes and message digests Source code: Lib/hashlib.py This module implements a common interface to many different secure hash and message digest algorithms. Included are the FIPS secure hash algorithms SHA1, SHA224, SHA256, SHA384, and SHA512 (defined in FIPS 180-2) as well as RSA’s MD5 algorithm (defined in Internet RFC 1321). The terms “secure hash” and “message digest” are interchangeable. Older algorithms were called message digests. The modern term is secure hash. Note If you want the adler32 or crc32 hash functions, they are available in the zlib module. Warning Some algorithms have known hash collision weaknesses, refer to the “See also” section at the end. Hash algorithms There is one constructor method named for each type of hash. All return a hash object with the same simple interface. For example: use sha256() to create a SHA-256 hash object. You can now feed this object with bytes-like objects (normally bytes) using the update() method. At any point you can ask it for the digest of the concatenation of the data fed to it so far using the digest() or hexdigest() methods. Note For better multithreading performance, the Python GIL is released for data larger than 2047 bytes at object creation or on update. Note Feeding string objects into update() is not supported, as hashes work on bytes, not on characters. Constructors for hash algorithms that are always present in this module are sha1(), sha224(), sha256(), sha384(), sha512(), blake2b(), and blake2s(). md5() is normally available as well, though it may be missing or blocked if you are using a rare “FIPS compliant” build of Python. Additional algorithms may also be available depending upon the OpenSSL library that Python uses on your platform. On most platforms the sha3_224(), sha3_256(), sha3_384(), sha3_512(), shake_128(), shake_256() are also available. New in version 3.6: SHA3 (Keccak) and SHAKE constructors sha3_224(), sha3_256(), sha3_384(), sha3_512(), shake_128(), shake_256(). New in version 3.6: blake2b() and blake2s() were added. Changed in version 3.9: All hashlib constructors take a keyword-only argument usedforsecurity with default value True. A false value allows the use of insecure and blocked hashing algorithms in restricted environments. False indicates that the hashing algorithm is not used in a security context, e.g. as a non-cryptographic one-way compression function. Hashlib now uses SHA3 and SHAKE from OpenSSL 1.1.1 and newer. For example, to obtain the digest of the byte string b'Nobody inspects the spammish repetition': >>> import hashlib >>> m = hashlib.sha256() >>> m.update(b"Nobody inspects") >>> m.update(b" the spammish repetition") >>> m.digest() b'\x03\x1e\xdd}Ae\x15\x93\xc5\xfe\\\x00o\xa5u+7\xfd\xdf\xf7\xbcN\x84:\xa6\xaf\x0c\x95\x0fK\x94\x06' >>> m.digest_size 32 >>> m.block_size 64 More condensed: >>> hashlib.sha224(b"Nobody inspects the spammish repetition").hexdigest() 'a4337bc45a8fc544c03f52dc550cd6e1e87021bc896588bd79e901e2' hashlib.new(name, [data, ]*, usedforsecurity=True) Is a generic constructor that takes the string name of the desired algorithm as its first parameter. It also exists to allow access to the above listed hashes as well as any other algorithms that your OpenSSL library may offer. The named constructors are much faster than new() and should be preferred. 
Using new() with an algorithm provided by OpenSSL: >>> h = hashlib.new('sha512_256') >>> h.update(b"Nobody inspects the spammish repetition") >>> h.hexdigest() '19197dc4d03829df858011c6c87600f994a858103bbc19005f20987aa19a97e2' Hashlib provides the following constant attributes: hashlib.algorithms_guaranteed A set containing the names of the hash algorithms guaranteed to be supported by this module on all platforms. Note that ‘md5’ is in this list despite some upstream vendors offering an odd “FIPS compliant” Python build that excludes it. New in version 3.2. hashlib.algorithms_available A set containing the names of the hash algorithms that are available in the running Python interpreter. These names will be recognized when passed to new(). algorithms_guaranteed will always be a subset. The same algorithm may appear multiple times in this set under different names (thanks to OpenSSL). New in version 3.2. The following values are provided as constant attributes of the hash objects returned by the constructors: hash.digest_size The size of the resulting hash in bytes. hash.block_size The internal block size of the hash algorithm in bytes. A hash object has the following attributes: hash.name The canonical name of this hash, always lowercase and always suitable as a parameter to new() to create another hash of this type. Changed in version 3.4: The name attribute has been present in CPython since its inception, but until Python 3.4 was not formally specified, so may not exist on some platforms. A hash object has the following methods: hash.update(data) Update the hash object with the bytes-like object. Repeated calls are equivalent to a single call with the concatenation of all the arguments: m.update(a); m.update(b) is equivalent to m.update(a+b). Changed in version 3.1: The Python GIL is released to allow other threads to run while hash updates on data larger than 2047 bytes is taking place when using hash algorithms supplied by OpenSSL. hash.digest() Return the digest of the data passed to the update() method so far. This is a bytes object of size digest_size which may contain bytes in the whole range from 0 to 255. hash.hexdigest() Like digest() except the digest is returned as a string object of double length, containing only hexadecimal digits. This may be used to exchange the value safely in email or other non-binary environments. hash.copy() Return a copy (“clone”) of the hash object. This can be used to efficiently compute the digests of data sharing a common initial substring. SHAKE variable length digests The shake_128() and shake_256() algorithms provide variable length digests with length_in_bits//2 up to 128 or 256 bits of security. As such, their digest methods require a length. Maximum length is not limited by the SHAKE algorithm. shake.digest(length) Return the digest of the data passed to the update() method so far. This is a bytes object of size length which may contain bytes in the whole range from 0 to 255. shake.hexdigest(length) Like digest() except the digest is returned as a string object of double length, containing only hexadecimal digits. This may be used to exchange the value safely in email or other non-binary environments. Key derivation Key derivation and key stretching algorithms are designed for secure password hashing. Naive algorithms such as sha1(password) are not resistant against brute-force attacks. A good password hashing function must be tunable, slow, and include a salt. 
hashlib.pbkdf2_hmac(hash_name, password, salt, iterations, dklen=None) The function provides PKCS#5 password-based key derivation function 2. It uses HMAC as pseudorandom function. The string hash_name is the desired name of the hash digest algorithm for HMAC, e.g. ‘sha1’ or ‘sha256’. password and salt are interpreted as buffers of bytes. Applications and libraries should limit password to a sensible length (e.g. 1024). salt should be about 16 or more bytes from a proper source, e.g. os.urandom(). The number of iterations should be chosen based on the hash algorithm and computing power. As of 2013, at least 100,000 iterations of SHA-256 are suggested. dklen is the length of the derived key. If dklen is None then the digest size of the hash algorithm hash_name is used, e.g. 64 for SHA-512. >>> import hashlib >>> dk = hashlib.pbkdf2_hmac('sha256', b'password', b'salt', 100000) >>> dk.hex() '0394a2ede332c9a13eb82e9b24631604c31df978b4e2f0fbd2c549944f9d79a5' New in version 3.4. Note A fast implementation of pbkdf2_hmac is available with OpenSSL. The Python implementation uses an inline version of hmac. It is about three times slower and doesn’t release the GIL. hashlib.scrypt(password, *, salt, n, r, p, maxmem=0, dklen=64) The function provides scrypt password-based key derivation function as defined in RFC 7914. password and salt must be bytes-like objects. Applications and libraries should limit password to a sensible length (e.g. 1024). salt should be about 16 or more bytes from a proper source, e.g. os.urandom(). n is the CPU/Memory cost factor, r the block size, p parallelization factor and maxmem limits memory (OpenSSL 1.1.0 defaults to 32 MiB). dklen is the length of the derived key. Availability: OpenSSL 1.1+. New in version 3.6. BLAKE2 BLAKE2 is a cryptographic hash function defined in RFC 7693 that comes in two flavors: BLAKE2b, optimized for 64-bit platforms and produces digests of any size between 1 and 64 bytes, BLAKE2s, optimized for 8- to 32-bit platforms and produces digests of any size between 1 and 32 bytes. BLAKE2 supports keyed mode (a faster and simpler replacement for HMAC), salted hashing, personalization, and tree hashing. Hash objects from this module follow the API of standard library’s hashlib objects. Creating hash objects New hash objects are created by calling constructor functions: hashlib.blake2b(data=b'', *, digest_size=64, key=b'', salt=b'', person=b'', fanout=1, depth=1, leaf_size=0, node_offset=0, node_depth=0, inner_size=0, last_node=False, usedforsecurity=True) hashlib.blake2s(data=b'', *, digest_size=32, key=b'', salt=b'', person=b'', fanout=1, depth=1, leaf_size=0, node_offset=0, node_depth=0, inner_size=0, last_node=False, usedforsecurity=True) These functions return the corresponding hash objects for calculating BLAKE2b or BLAKE2s. They optionally take these general parameters: data: initial chunk of data to hash, which must be bytes-like object. It can be passed only as positional argument. digest_size: size of output digest in bytes. key: key for keyed hashing (up to 64 bytes for BLAKE2b, up to 32 bytes for BLAKE2s). salt: salt for randomized hashing (up to 16 bytes for BLAKE2b, up to 8 bytes for BLAKE2s). person: personalization string (up to 16 bytes for BLAKE2b, up to 8 bytes for BLAKE2s). 
The following table shows limits for general parameters (in bytes): Hash digest_size len(key) len(salt) len(person) BLAKE2b 64 64 16 16 BLAKE2s 32 32 8 8 Note BLAKE2 specification defines constant lengths for salt and personalization parameters, however, for convenience, this implementation accepts byte strings of any size up to the specified length. If the length of the parameter is less than specified, it is padded with zeros, thus, for example, b'salt' and b'salt\x00' is the same value. (This is not the case for key.) These sizes are available as module constants described below. Constructor functions also accept the following tree hashing parameters: fanout: fanout (0 to 255, 0 if unlimited, 1 in sequential mode). depth: maximal depth of tree (1 to 255, 255 if unlimited, 1 in sequential mode). leaf_size: maximal byte length of leaf (0 to 2**32-1, 0 if unlimited or in sequential mode). node_offset: node offset (0 to 2**64-1 for BLAKE2b, 0 to 2**48-1 for BLAKE2s, 0 for the first, leftmost, leaf, or in sequential mode). node_depth: node depth (0 to 255, 0 for leaves, or in sequential mode). inner_size: inner digest size (0 to 64 for BLAKE2b, 0 to 32 for BLAKE2s, 0 in sequential mode). last_node: boolean indicating whether the processed node is the last one (False for sequential mode). See section 2.10 in BLAKE2 specification for comprehensive review of tree hashing. Constants blake2b.SALT_SIZE blake2s.SALT_SIZE Salt length (maximum length accepted by constructors). blake2b.PERSON_SIZE blake2s.PERSON_SIZE Personalization string length (maximum length accepted by constructors). blake2b.MAX_KEY_SIZE blake2s.MAX_KEY_SIZE Maximum key size. blake2b.MAX_DIGEST_SIZE blake2s.MAX_DIGEST_SIZE Maximum digest size that the hash function can output. Examples Simple hashing To calculate hash of some data, you should first construct a hash object by calling the appropriate constructor function (blake2b() or blake2s()), then update it with the data by calling update() on the object, and, finally, get the digest out of the object by calling digest() (or hexdigest() for hex-encoded string). >>> from hashlib import blake2b >>> h = blake2b() >>> h.update(b'Hello world') >>> h.hexdigest() '6ff843ba685842aa82031d3f53c48b66326df7639a63d128974c5c14f31a0f33343a8c65551134ed1ae0f2b0dd2bb495dc81039e3eeb0aa1bb0388bbeac29183' As a shortcut, you can pass the first chunk of data to update directly to the constructor as the positional argument: >>> from hashlib import blake2b >>> blake2b(b'Hello world').hexdigest() '6ff843ba685842aa82031d3f53c48b66326df7639a63d128974c5c14f31a0f33343a8c65551134ed1ae0f2b0dd2bb495dc81039e3eeb0aa1bb0388bbeac29183' You can call hash.update() as many times as you need to iteratively update the hash: >>> from hashlib import blake2b >>> items = [b'Hello', b' ', b'world'] >>> h = blake2b() >>> for item in items: ... h.update(item) >>> h.hexdigest() '6ff843ba685842aa82031d3f53c48b66326df7639a63d128974c5c14f31a0f33343a8c65551134ed1ae0f2b0dd2bb495dc81039e3eeb0aa1bb0388bbeac29183' Using different digest sizes BLAKE2 has configurable size of digests up to 64 bytes for BLAKE2b and up to 32 bytes for BLAKE2s. 
For example, to replace SHA-1 with BLAKE2b without changing the size of output, we can tell BLAKE2b to produce 20-byte digests: >>> from hashlib import blake2b >>> h = blake2b(digest_size=20) >>> h.update(b'Replacing SHA1 with the more secure function') >>> h.hexdigest() 'd24f26cf8de66472d58d4e1b1774b4c9158b1f4c' >>> h.digest_size 20 >>> len(h.digest()) 20 Hash objects with different digest sizes have completely different outputs (shorter hashes are not prefixes of longer hashes); BLAKE2b and BLAKE2s produce different outputs even if the output length is the same: >>> from hashlib import blake2b, blake2s >>> blake2b(digest_size=10).hexdigest() '6fa1d8fcfd719046d762' >>> blake2b(digest_size=11).hexdigest() 'eb6ec15daf9546254f0809' >>> blake2s(digest_size=10).hexdigest() '1bf21a98c78a1c376ae9' >>> blake2s(digest_size=11).hexdigest() '567004bf96e4a25773ebf4' Keyed hashing Keyed hashing can be used for authentication as a faster and simpler replacement for Hash-based message authentication code (HMAC). BLAKE2 can be securely used in prefix-MAC mode thanks to the indifferentiability property inherited from BLAKE. This example shows how to get a (hex-encoded) 128-bit authentication code for message b'message data' with key b'pseudorandom key': >>> from hashlib import blake2b >>> h = blake2b(key=b'pseudorandom key', digest_size=16) >>> h.update(b'message data') >>> h.hexdigest() '3d363ff7401e02026f4a4687d4863ced' As a practical example, a web application can symmetrically sign cookies sent to users and later verify them to make sure they weren’t tampered with: >>> from hashlib import blake2b >>> from hmac import compare_digest >>> >>> SECRET_KEY = b'pseudorandomly generated server secret key' >>> AUTH_SIZE = 16 >>> >>> def sign(cookie): ... h = blake2b(digest_size=AUTH_SIZE, key=SECRET_KEY) ... h.update(cookie) ... return h.hexdigest().encode('utf-8') >>> >>> def verify(cookie, sig): ... good_sig = sign(cookie) ... return compare_digest(good_sig, sig) >>> >>> cookie = b'user-alice' >>> sig = sign(cookie) >>> print("{0},{1}".format(cookie.decode('utf-8'), sig)) user-alice,b'43b3c982cf697e0c5ab22172d1ca7421' >>> verify(cookie, sig) True >>> verify(b'user-bob', sig) False >>> verify(cookie, b'0102030405060708090a0b0c0d0e0f00') False Even though there’s a native keyed hashing mode, BLAKE2 can, of course, be used in HMAC construction with hmac module: >>> import hmac, hashlib >>> m = hmac.new(b'secret key', digestmod=hashlib.blake2s) >>> m.update(b'message') >>> m.hexdigest() 'e3c8102868d28b5ff85fc35dda07329970d1a01e273c37481326fe0c861c8142' Randomized hashing By setting salt parameter users can introduce randomization to the hash function. Randomized hashing is useful for protecting against collision attacks on the hash function used in digital signatures. Randomized hashing is designed for situations where one party, the message preparer, generates all or part of a message to be signed by a second party, the message signer. If the message preparer is able to find cryptographic hash function collisions (i.e., two messages producing the same hash value), then they might prepare meaningful versions of the message that would produce the same hash value and digital signature, but with different results (e.g., transferring $1,000,000 to an account, rather than $10). 
Cryptographic hash functions have been designed with collision resistance as a major goal, but the current concentration on attacking cryptographic hash functions may result in a given cryptographic hash function providing less collision resistance than expected. Randomized hashing offers the signer additional protection by reducing the likelihood that a preparer can generate two or more messages that ultimately yield the same hash value during the digital signature generation process — even if it is practical to find collisions for the hash function. However, the use of randomized hashing may reduce the amount of security provided by a digital signature when all portions of the message are prepared by the signer. (NIST SP-800-106 “Randomized Hashing for Digital Signatures”) In BLAKE2 the salt is processed as a one-time input to the hash function during initialization, rather than as an input to each compression function. Warning Salted hashing (or just hashing) with BLAKE2 or any other general-purpose cryptographic hash function, such as SHA-256, is not suitable for hashing passwords. See BLAKE2 FAQ for more information. >>> import os >>> from hashlib import blake2b >>> msg = b'some message' >>> # Calculate the first hash with a random salt. >>> salt1 = os.urandom(blake2b.SALT_SIZE) >>> h1 = blake2b(salt=salt1) >>> h1.update(msg) >>> # Calculate the second hash with a different random salt. >>> salt2 = os.urandom(blake2b.SALT_SIZE) >>> h2 = blake2b(salt=salt2) >>> h2.update(msg) >>> # The digests are different. >>> h1.digest() != h2.digest() True Personalization Sometimes it is useful to force hash function to produce different digests for the same input for different purposes. Quoting the authors of the Skein hash function: We recommend that all application designers seriously consider doing this; we have seen many protocols where a hash that is computed in one part of the protocol can be used in an entirely different part because two hash computations were done on similar or related data, and the attacker can force the application to make the hash inputs the same. Personalizing each hash function used in the protocol summarily stops this type of attack. (The Skein Hash Function Family, p. 21) BLAKE2 can be personalized by passing bytes to the person argument: >>> from hashlib import blake2b >>> FILES_HASH_PERSON = b'MyApp Files Hash' >>> BLOCK_HASH_PERSON = b'MyApp Block Hash' >>> h = blake2b(digest_size=32, person=FILES_HASH_PERSON) >>> h.update(b'the same content') >>> h.hexdigest() '20d9cd024d4fb086aae819a1432dd2466de12947831b75c5a30cf2676095d3b4' >>> h = blake2b(digest_size=32, person=BLOCK_HASH_PERSON) >>> h.update(b'the same content') >>> h.hexdigest() 'cf68fb5761b9c44e7878bfb2c4c9aea52264a80b75005e65619778de59f383a3' Personalization together with the keyed mode can also be used to derive different keys from a single one. 
>>> from hashlib import blake2s >>> from base64 import b64decode, b64encode >>> orig_key = b64decode(b'Rm5EPJai72qcK3RGBpW3vPNfZy5OZothY+kHY6h21KM=') >>> enc_key = blake2s(key=orig_key, person=b'kEncrypt').digest() >>> mac_key = blake2s(key=orig_key, person=b'kMAC').digest() >>> print(b64encode(enc_key).decode('utf-8')) rbPb15S/Z9t+agffno5wuhB77VbRi6F9Iv2qIxU7WHw= >>> print(b64encode(mac_key).decode('utf-8')) G9GtHFE1YluXY1zWPlYk1e/nWfu0WSEb0KRcjhDeP/o= Tree mode Here’s an example of hashing a minimal tree with two leaf nodes: 10 / \ 00 01 This example uses 64-byte internal digests, and returns the 32-byte final digest: >>> from hashlib import blake2b >>> >>> FANOUT = 2 >>> DEPTH = 2 >>> LEAF_SIZE = 4096 >>> INNER_SIZE = 64 >>> >>> buf = bytearray(6000) >>> >>> # Left leaf ... h00 = blake2b(buf[0:LEAF_SIZE], fanout=FANOUT, depth=DEPTH, ... leaf_size=LEAF_SIZE, inner_size=INNER_SIZE, ... node_offset=0, node_depth=0, last_node=False) >>> # Right leaf ... h01 = blake2b(buf[LEAF_SIZE:], fanout=FANOUT, depth=DEPTH, ... leaf_size=LEAF_SIZE, inner_size=INNER_SIZE, ... node_offset=1, node_depth=0, last_node=True) >>> # Root node ... h10 = blake2b(digest_size=32, fanout=FANOUT, depth=DEPTH, ... leaf_size=LEAF_SIZE, inner_size=INNER_SIZE, ... node_offset=0, node_depth=1, last_node=True) >>> h10.update(h00.digest()) >>> h10.update(h01.digest()) >>> h10.hexdigest() '3ad2a9b37c6070e374c7a8c508fe20ca86b6ed54e286e93a0318e95e881db5aa' Credits BLAKE2 was designed by Jean-Philippe Aumasson, Samuel Neves, Zooko Wilcox-O’Hearn, and Christian Winnerlein based on SHA-3 finalist BLAKE created by Jean-Philippe Aumasson, Luca Henzen, Willi Meier, and Raphael C.-W. Phan. It uses core algorithm from ChaCha cipher designed by Daniel J. Bernstein. The stdlib implementation is based on pyblake2 module. It was written by Dmitry Chestnykh based on C implementation written by Samuel Neves. The documentation was copied from pyblake2 and written by Dmitry Chestnykh. The C code was partly rewritten for Python by Christian Heimes. The following public domain dedication applies for both C hash function implementation, extension code, and this documentation: To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty. You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see https://creativecommons.org/publicdomain/zero/1.0/. The following people have helped with development or contributed their changes to the project and the public domain according to the Creative Commons Public Domain Dedication 1.0 Universal: Alexandr Sokolovskiy See also Module hmac A module to generate message authentication codes using hashes. Module base64 Another way to encode binary hashes for non-binary environments. https://blake2.net Official BLAKE2 website. https://csrc.nist.gov/csrc/media/publications/fips/180/2/archive/2002-08-01/documents/fips180-2.pdf The FIPS 180-2 publication on Secure Hash Algorithms. https://en.wikipedia.org/wiki/Cryptographic_hash_function#Cryptographic_hash_algorithms Wikipedia article with information on which algorithms have known issues and what that means regarding their use. https://www.ietf.org/rfc/rfc2898.txt PKCS #5: Password-Based Cryptography Specification Version 2.0
python.library.hashlib
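As an illustration of the incremental update() interface described in the hashlib entry above, here is a minimal sketch of hashing a file in chunks so the whole file never has to sit in memory; the file name, helper name, and chunk size are illustrative assumptions, not part of the hashlib API.

import hashlib

def file_sha256(path, chunk_size=65536):
    # Feed the file to the hash object piece by piece; hexdigest() is
    # called only once every chunk has been fed in.
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

print(file_sha256('some_file.bin'))  # hypothetical file name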
hashlib.algorithms_available A set containing the names of the hash algorithms that are available in the running Python interpreter. These names will be recognized when passed to new(). algorithms_guaranteed will always be a subset. The same algorithm may appear multiple times in this set under different names (thanks to OpenSSL). New in version 3.2.
python.library.hashlib#hashlib.algorithms_available
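A small sketch, assuming nothing beyond what the entry above documents, of checking algorithms_available before asking new() for an algorithm that only some OpenSSL builds provide; 'ripemd160' is used purely as an example of such an algorithm.

import hashlib

name = 'ripemd160'  # availability depends on the local OpenSSL build
if name in hashlib.algorithms_available:
    h = hashlib.new(name, b'Nobody inspects the spammish repetition')
    print(h.hexdigest())
else:
    print(name, 'is not provided by this build')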
hashlib.algorithms_guaranteed A set containing the names of the hash algorithms guaranteed to be supported by this module on all platforms. Note that ‘md5’ is in this list despite some upstream vendors offering an odd “FIPS compliant” Python build that excludes it. New in version 3.2.
python.library.hashlib#hashlib.algorithms_guaranteed
hashlib.blake2b(data=b'', *, digest_size=64, key=b'', salt=b'', person=b'', fanout=1, depth=1, leaf_size=0, node_offset=0, node_depth=0, inner_size=0, last_node=False, usedforsecurity=True)
python.library.hashlib#hashlib.blake2b
blake2b.MAX_DIGEST_SIZE
python.library.hashlib#hashlib.blake2b.MAX_DIGEST_SIZE
blake2b.MAX_KEY_SIZE
python.library.hashlib#hashlib.blake2b.MAX_KEY_SIZE
blake2b.PERSON_SIZE
python.library.hashlib#hashlib.blake2b.PERSON_SIZE
blake2b.SALT_SIZE
python.library.hashlib#hashlib.blake2b.SALT_SIZE
hashlib.blake2s(data=b'', *, digest_size=32, key=b'', salt=b'', person=b'', fanout=1, depth=1, leaf_size=0, node_offset=0, node_depth=0, inner_size=0, last_node=False, usedforsecurity=True)
python.library.hashlib#hashlib.blake2s
blake2s.MAX_DIGEST_SIZE
python.library.hashlib#hashlib.blake2s.MAX_DIGEST_SIZE
blake2s.MAX_KEY_SIZE
python.library.hashlib#hashlib.blake2s.MAX_KEY_SIZE
blake2s.PERSON_SIZE
python.library.hashlib#hashlib.blake2s.PERSON_SIZE
blake2s.SALT_SIZE
python.library.hashlib#hashlib.blake2s.SALT_SIZE
hash.block_size The internal block size of the hash algorithm in bytes.
python.library.hashlib#hashlib.hash.block_size
hash.copy() Return a copy (“clone”) of the hash object. This can be used to efficiently compute the digests of data sharing a common initial substring.
python.library.hashlib#hashlib.hash.copy
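A minimal sketch of the common-prefix use case mentioned in the copy() entry above; the messages are made-up examples.

import hashlib

# Hash the shared prefix once, then clone the partial state per message.
prefix = hashlib.sha256(b'common header: ')
h1 = prefix.copy()
h1.update(b'record one')
h2 = prefix.copy()
h2.update(b'record two')
# Each clone now holds the digest of the prefix plus its own suffix,
# without rehashing the prefix from scratch.
print(h1.hexdigest())
print(h2.hexdigest())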
hash.digest() Return the digest of the data passed to the update() method so far. This is a bytes object of size digest_size which may contain bytes in the whole range from 0 to 255.
python.library.hashlib#hashlib.hash.digest
hash.digest_size The size of the resulting hash in bytes.
python.library.hashlib#hashlib.hash.digest_size
hash.hexdigest() Like digest() except the digest is returned as a string object of double length, containing only hexadecimal digits. This may be used to exchange the value safely in email or other non-binary environments.
python.library.hashlib#hashlib.hash.hexdigest
hash.name The canonical name of this hash, always lowercase and always suitable as a parameter to new() to create another hash of this type. Changed in version 3.4: The name attribute has been present in CPython since its inception, but until Python 3.4 was not formally specified, so may not exist on some platforms.
python.library.hashlib#hashlib.hash.name
hash.update(data) Update the hash object with the bytes-like object. Repeated calls are equivalent to a single call with the concatenation of all the arguments: m.update(a); m.update(b) is equivalent to m.update(a+b). Changed in version 3.1: The Python GIL is released to allow other threads to run while hash updates on data larger than 2047 bytes are taking place when using hash algorithms supplied by OpenSSL.
python.library.hashlib#hashlib.hash.update
hashlib.new(name, [data, ]*, usedforsecurity=True) Is a generic constructor that takes the string name of the desired algorithm as its first parameter. It also exists to allow access to the above listed hashes as well as any other algorithms that your OpenSSL library may offer. The named constructors are much faster than new() and should be preferred.
python.library.hashlib#hashlib.new
hashlib.pbkdf2_hmac(hash_name, password, salt, iterations, dklen=None) The function provides PKCS#5 password-based key derivation function 2. It uses HMAC as pseudorandom function. The string hash_name is the desired name of the hash digest algorithm for HMAC, e.g. ‘sha1’ or ‘sha256’. password and salt are interpreted as buffers of bytes. Applications and libraries should limit password to a sensible length (e.g. 1024). salt should be about 16 or more bytes from a proper source, e.g. os.urandom(). The number of iterations should be chosen based on the hash algorithm and computing power. As of 2013, at least 100,000 iterations of SHA-256 are suggested. dklen is the length of the derived key. If dklen is None then the digest size of the hash algorithm hash_name is used, e.g. 64 for SHA-512. >>> import hashlib >>> dk = hashlib.pbkdf2_hmac('sha256', b'password', b'salt', 100000) >>> dk.hex() '0394a2ede332c9a13eb82e9b24631604c31df978b4e2f0fbd2c549944f9d79a5' New in version 3.4. Note A fast implementation of pbkdf2_hmac is available with OpenSSL. The Python implementation uses an inline version of hmac. It is about three times slower and doesn’t release the GIL.
python.library.hashlib#hashlib.pbkdf2_hmac
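Building on the pbkdf2_hmac() entry above, a sketch of storing and later verifying a password hash; the helper names, the 16-byte salt, and the iteration count are illustrative assumptions, and hmac.compare_digest() is used for the comparison rather than ==.

import os
import hmac
import hashlib

def hash_password(password, iterations=100_000):
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, iterations)
    return salt, dk

def check_password(password, salt, expected, iterations=100_000):
    dk = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, iterations)
    # Constant-time comparison to avoid leaking timing information.
    return hmac.compare_digest(dk, expected)

salt, stored = hash_password('hunter2')
print(check_password('hunter2', salt, stored))      # True
print(check_password('wrong guess', salt, stored))  # False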
hashlib.scrypt(password, *, salt, n, r, p, maxmem=0, dklen=64) The function provides scrypt password-based key derivation function as defined in RFC 7914. password and salt must be bytes-like objects. Applications and libraries should limit password to a sensible length (e.g. 1024). salt should be about 16 or more bytes from a proper source, e.g. os.urandom(). n is the CPU/Memory cost factor, r the block size, p parallelization factor and maxmem limits memory (OpenSSL 1.1.0 defaults to 32 MiB). dklen is the length of the derived key. Availability: OpenSSL 1.1+. New in version 3.6.
python.library.hashlib#hashlib.scrypt
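A brief sketch of calling scrypt() as documented above; the cost parameters n=2**14, r=8, p=1 are illustrative values only (small enough to stay under the default 32 MiB memory limit), not a recommendation from this documentation.

import os
import hashlib

salt = os.urandom(16)
key = hashlib.scrypt(b'correct horse battery staple', salt=salt,
                     n=2**14, r=8, p=1, dklen=32)
print(len(key))  # 32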
shake.digest(length) Return the digest of the data passed to the update() method so far. This is a bytes object of size length which may contain bytes in the whole range from 0 to 255.
python.library.hashlib#hashlib.shake.digest
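A short sketch of the variable-length digest described above. Because SHAKE is an extendable-output function, a shorter digest is a prefix of a longer one computed from the same data.

import hashlib

s = hashlib.shake_128(b'Nobody inspects the spammish repetition')
d16 = s.digest(16)  # the caller chooses the digest length
d32 = s.digest(32)
print(len(d16), len(d32))  # 16 32
print(d32[:16] == d16)     # True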
shake.hexdigest(length) Like digest() except the digest is returned as a string object of double length, containing only hexadecimal digits. This may be used to exchange the value safely in email or other non-binary environments.
python.library.hashlib#hashlib.shake.hexdigest
heapq — Heap queue algorithm Source code: Lib/heapq.py This module provides an implementation of the heap queue algorithm, also known as the priority queue algorithm. Heaps are binary trees for which every parent node has a value less than or equal to any of its children. This implementation uses arrays for which heap[k] <= heap[2*k+1] and heap[k] <= heap[2*k+2] for all k, counting elements from zero. For the sake of comparison, non-existing elements are considered to be infinite. The interesting property of a heap is that its smallest element is always the root, heap[0]. The API below differs from textbook heap algorithms in two aspects: (a) We use zero-based indexing. This makes the relationship between the index for a node and the indexes for its children slightly less obvious, but is more suitable since Python uses zero-based indexing. (b) Our pop method returns the smallest item, not the largest (called a “min heap” in textbooks; a “max heap” is more common in texts because of its suitability for in-place sorting). These two make it possible to view the heap as a regular Python list without surprises: heap[0] is the smallest item, and heap.sort() maintains the heap invariant! To create a heap, use a list initialized to [], or you can transform a populated list into a heap via function heapify(). The following functions are provided: heapq.heappush(heap, item) Push the value item onto the heap, maintaining the heap invariant. heapq.heappop(heap) Pop and return the smallest item from the heap, maintaining the heap invariant. If the heap is empty, IndexError is raised. To access the smallest item without popping it, use heap[0]. heapq.heappushpop(heap, item) Push item on the heap, then pop and return the smallest item from the heap. The combined action runs more efficiently than heappush() followed by a separate call to heappop(). heapq.heapify(x) Transform list x into a heap, in-place, in linear time. heapq.heapreplace(heap, item) Pop and return the smallest item from the heap, and also push the new item. The heap size doesn’t change. If the heap is empty, IndexError is raised. This one step operation is more efficient than a heappop() followed by heappush() and can be more appropriate when using a fixed-size heap. The pop/push combination always returns an element from the heap and replaces it with item. The value returned may be larger than the item added. If that isn’t desired, consider using heappushpop() instead. Its push/pop combination returns the smaller of the two values, leaving the larger value on the heap. The module also offers three general purpose functions based on heaps. heapq.merge(*iterables, key=None, reverse=False) Merge multiple sorted inputs into a single sorted output (for example, merge timestamped entries from multiple log files). Returns an iterator over the sorted values. Similar to sorted(itertools.chain(*iterables)) but returns an iterable, does not pull the data into memory all at once, and assumes that each of the input streams is already sorted (smallest to largest). Has two optional arguments which must be specified as keyword arguments. key specifies a key function of one argument that is used to extract a comparison key from each input element. The default value is None (compare the elements directly). reverse is a boolean value. If set to True, then the input elements are merged as if each comparison were reversed. To achieve behavior similar to sorted(itertools.chain(*iterables), reverse=True), all iterables must be sorted from largest to smallest. 
Changed in version 3.5: Added the optional key and reverse parameters. heapq.nlargest(n, iterable, key=None) Return a list with the n largest elements from the dataset defined by iterable. key, if provided, specifies a function of one argument that is used to extract a comparison key from each element in iterable (for example, key=str.lower). Equivalent to: sorted(iterable, key=key, reverse=True)[:n]. heapq.nsmallest(n, iterable, key=None) Return a list with the n smallest elements from the dataset defined by iterable. key, if provided, specifies a function of one argument that is used to extract a comparison key from each element in iterable (for example, key=str.lower). Equivalent to: sorted(iterable, key=key)[:n]. The latter two functions perform best for smaller values of n. For larger values, it is more efficient to use the sorted() function. Also, when n==1, it is more efficient to use the built-in min() and max() functions. If repeated usage of these functions is required, consider turning the iterable into an actual heap. Basic Examples A heapsort can be implemented by pushing all values onto a heap and then popping off the smallest values one at a time: >>> def heapsort(iterable): ... h = [] ... for value in iterable: ... heappush(h, value) ... return [heappop(h) for i in range(len(h))] ... >>> heapsort([1, 3, 5, 7, 9, 2, 4, 6, 8, 0]) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] This is similar to sorted(iterable), but unlike sorted(), this implementation is not stable. Heap elements can be tuples. This is useful for assigning comparison values (such as task priorities) alongside the main record being tracked: >>> h = [] >>> heappush(h, (5, 'write code')) >>> heappush(h, (7, 'release product')) >>> heappush(h, (1, 'write spec')) >>> heappush(h, (3, 'create tests')) >>> heappop(h) (1, 'write spec') Priority Queue Implementation Notes A priority queue is a common use for a heap, and it presents several implementation challenges: Sort stability: how do you get two tasks with equal priorities to be returned in the order they were originally added? Tuple comparison breaks for (priority, task) pairs if the priorities are equal and the tasks do not have a default comparison order. If the priority of a task changes, how do you move it to a new position in the heap? Or if a pending task needs to be deleted, how do you find it and remove it from the queue? A solution to the first two challenges is to store entries as a 3-element list including the priority, an entry count, and the task. The entry count serves as a tie-breaker so that two tasks with the same priority are returned in the order they were added. And since no two entry counts are the same, the tuple comparison will never attempt to directly compare two tasks. Another solution to the problem of non-comparable tasks is to create a wrapper class that ignores the task item and only compares the priority field: from dataclasses import dataclass, field from typing import Any @dataclass(order=True) class PrioritizedItem: priority: int item: Any=field(compare=False) The remaining challenges revolve around finding a pending task and making changes to its priority or removing it entirely. Finding a task can be done with a dictionary pointing to an entry in the queue. Removing the entry or changing its priority is more difficult because it would break the heap structure invariants.
So, a possible solution is to mark the entry as removed and add a new entry with the revised priority: pq = [] # list of entries arranged in a heap entry_finder = {} # mapping of tasks to entries REMOVED = '<removed-task>' # placeholder for a removed task counter = itertools.count() # unique sequence count def add_task(task, priority=0): 'Add a new task or update the priority of an existing task' if task in entry_finder: remove_task(task) count = next(counter) entry = [priority, count, task] entry_finder[task] = entry heappush(pq, entry) def remove_task(task): 'Mark an existing task as REMOVED. Raise KeyError if not found.' entry = entry_finder.pop(task) entry[-1] = REMOVED def pop_task(): 'Remove and return the lowest priority task. Raise KeyError if empty.' while pq: priority, count, task = heappop(pq) if task is not REMOVED: del entry_finder[task] return task raise KeyError('pop from an empty priority queue') Theory Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for all k, counting elements from 0. For the sake of comparison, non-existing elements are considered to be infinite. The interesting property of a heap is that a[0] is always its smallest element. The strange invariant above is meant to be an efficient memory representation for a tournament. The numbers below are k, not a[k]: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 In the tree above, each cell k is topping 2*k+1 and 2*k+2. In a usual binary tournament we see in sports, each cell is the winner over the two cells it tops, and we can trace the winner down the tree to see all opponents s/he had. However, in many computer applications of such tournaments, we do not need to trace the history of a winner. To be more memory efficient, when a winner is promoted, we try to replace it by something else at a lower level, and the rule becomes that a cell and the two cells it tops contain three different items, but the top cell “wins” over the two topped cells. If this heap invariant is protected at all time, index 0 is clearly the overall winner. The simplest algorithmic way to remove it and find the “next” winner is to move some loser (let’s say cell 30 in the diagram above) into the 0 position, and then percolate this new 0 down the tree, exchanging values, until the invariant is re-established. This is clearly logarithmic on the total number of items in the tree. By iterating over all items, you get an O(n log n) sort. A nice feature of this sort is that you can efficiently insert new items while the sort is going on, provided that the inserted items are not “better” than the last 0’th element you extracted. This is especially useful in simulation contexts, where the tree holds all incoming events, and the “win” condition means the smallest scheduled time. When an event schedules other events for execution, they are scheduled into the future, so they can easily go into the heap. So, a heap is a good structure for implementing schedulers (this is what I used for my MIDI sequencer :-). Various structures for implementing schedulers have been extensively studied, and heaps are good for this, as they are reasonably speedy, the speed is almost constant, and the worst case is not much different than the average case. However, there are other representations which are more efficient overall, yet the worst cases might be terrible. Heaps are also very useful in big disk sorts. 
You most probably all know that a big sort implies producing “runs” (which are pre-sorted sequences, whose size is usually related to the amount of CPU memory), followed by merging passes for these runs, and this merging is often very cleverly organised 1. It is very important that the initial sort produces the longest runs possible. Tournaments are a good way to achieve that. If, using all the memory available to hold a tournament, you replace and percolate items that happen to fit the current run, you’ll produce runs which are twice the size of the memory for random input, and much better for fuzzily ordered input. Moreover, if you output the 0’th item on disk and get an input which may not fit in the current tournament (because the value “wins” over the last output value), it cannot fit in the heap, so the size of the heap decreases. The freed memory could be cleverly reused immediately for progressively building a second heap, which grows at exactly the same rate the first heap is melting. When the first heap completely vanishes, you switch heaps and start a new run. Clever and quite effective! In a word, heaps are useful memory structures to know. I use them in a few applications, and I think it is good to keep a ‘heap’ module around. :-) Footnotes 1 The disk balancing algorithms which are current, nowadays, are more annoying than clever, and this is a consequence of the seeking capabilities of the disks. On devices which cannot seek, like big tape drives, the story was quite different, and one had to be very clever to ensure (far in advance) that each tape movement will be the most effective possible (that is, will best participate at “progressing” the merge). Some tapes were even able to read backwards, and this was also used to avoid the rewinding time. Believe me, real good tape sorts were quite spectacular to watch! From all times, sorting has always been a Great Art! :-)
python.library.heapq
heapq.heapify(x) Transform list x into a heap, in-place, in linear time.
python.library.heapq#heapq.heapify
heapq.heappop(heap) Pop and return the smallest item from the heap, maintaining the heap invariant. If the heap is empty, IndexError is raised. To access the smallest item without popping it, use heap[0].
python.library.heapq#heapq.heappop
heapq.heappush(heap, item) Push the value item onto the heap, maintaining the heap invariant.
python.library.heapq#heapq.heappush
heapq.heappushpop(heap, item) Push item on the heap, then pop and return the smallest item from the heap. The combined action runs more efficiently than heappush() followed by a separate call to heappop().
python.library.heapq#heapq.heappushpop
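A small sketch of one common use of heappushpop(): maintaining the k largest values seen so far, where the heap root is the smallest of the current top k. The sample values are arbitrary.

from heapq import heapify, heappushpop

top3 = [5, 9, 7]
heapify(top3)                    # heap of the 3 largest values seen so far
rejected = heappushpop(top3, 8)  # 8 > top3[0], so the old minimum is evicted
print(rejected)                  # 5
print(sorted(top3))              # [7, 8, 9]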
heapq.heapreplace(heap, item) Pop and return the smallest item from the heap, and also push the new item. The heap size doesn’t change. If the heap is empty, IndexError is raised. This one step operation is more efficient than a heappop() followed by heappush() and can be more appropriate when using a fixed-size heap. The pop/push combination always returns an element from the heap and replaces it with item. The value returned may be larger than the item added. If that isn’t desired, consider using heappushpop() instead. Its push/pop combination returns the smaller of the two values, leaving the larger value on the heap.
python.library.heapq#heapq.heapreplace
heapq.merge(*iterables, key=None, reverse=False) Merge multiple sorted inputs into a single sorted output (for example, merge timestamped entries from multiple log files). Returns an iterator over the sorted values. Similar to sorted(itertools.chain(*iterables)) but returns an iterable, does not pull the data into memory all at once, and assumes that each of the input streams is already sorted (smallest to largest). Has two optional arguments which must be specified as keyword arguments. key specifies a key function of one argument that is used to extract a comparison key from each input element. The default value is None (compare the elements directly). reverse is a boolean value. If set to True, then the input elements are merged as if each comparison were reversed. To achieve behavior similar to sorted(itertools.chain(*iterables), reverse=True), all iterables must be sorted from largest to smallest. Changed in version 3.5: Added the optional key and reverse parameters.
python.library.heapq#heapq.merge
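A minimal sketch of merge() with already-sorted inputs, including the key argument described above; the input runs are made-up examples.

from heapq import merge

runs = [[1, 4, 9], [2, 3, 11], [5, 10]]
print(list(merge(*runs)))  # [1, 2, 3, 4, 5, 9, 10, 11]

# With key, every input must already be sorted by that key.
print(list(merge(['ab', 'abcd'], ['abc'], key=len)))  # ['ab', 'abc', 'abcd']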
heapq.nlargest(n, iterable, key=None) Return a list with the n largest elements from the dataset defined by iterable. key, if provided, specifies a function of one argument that is used to extract a comparison key from each element in iterable (for example, key=str.lower). Equivalent to: sorted(iterable, key=key, reverse=True)[:n].
python.library.heapq#heapq.nlargest
heapq.nsmallest(n, iterable, key=None) Return a list with the n smallest elements from the dataset defined by iterable. key, if provided, specifies a function of one argument that is used to extract a comparison key from each element in iterable (for example, key=str.lower). Equivalent to: sorted(iterable, key=key)[:n].
python.library.heapq#heapq.nsmallest
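A short sketch of nsmallest() and nlargest() with a key function, following the two entries above; the records and prices are invented for illustration.

from heapq import nlargest, nsmallest

portfolio = [
    {'name': 'IBM', 'price': 91.1},
    {'name': 'AAPL', 'price': 543.22},
    {'name': 'FB', 'price': 21.09},
]
cheapest = nsmallest(2, portfolio, key=lambda s: s['price'])
priciest = nlargest(1, portfolio, key=lambda s: s['price'])
print([s['name'] for s in cheapest])  # ['FB', 'IBM']
print([s['name'] for s in priciest])  # ['AAPL']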
help([object]) Invoke the built-in help system. (This function is intended for interactive use.) If no argument is given, the interactive help system starts on the interpreter console. If the argument is a string, then the string is looked up as the name of a module, function, class, method, keyword, or documentation topic, and a help page is printed on the console. If the argument is any other kind of object, a help page on the object is generated. Note that if a slash (/) appears in the parameter list of a function, when invoking help(), it means that the parameters prior to the slash are positional-only. For more info, see the FAQ entry on positional-only parameters. This function is added to the built-in namespace by the site module. Changed in version 3.4: Changes to pydoc and inspect mean that the reported signatures for callables are now more comprehensive and consistent.
python.library.functions#help
hex(x) Convert an integer number to a lowercase hexadecimal string prefixed with “0x”. If x is not a Python int object, it has to define an __index__() method that returns an integer. Some examples: >>> hex(255) '0xff' >>> hex(-42) '-0x2a' If you want to convert an integer number to an uppercase or lowercase hexadecimal string, with or without the “0x” prefix, you can use any of the following ways: >>> '%#x' % 255, '%x' % 255, '%X' % 255 ('0xff', 'ff', 'FF') >>> format(255, '#x'), format(255, 'x'), format(255, 'X') ('0xff', 'ff', 'FF') >>> f'{255:#x}', f'{255:x}', f'{255:X}' ('0xff', 'ff', 'FF') See also format() for more information. See also int() for converting a hexadecimal string to an integer using a base of 16. Note To obtain a hexadecimal string representation for a float, use the float.hex() method.
python.library.functions#hex
hmac — Keyed-Hashing for Message Authentication Source code: Lib/hmac.py This module implements the HMAC algorithm as described by RFC 2104. hmac.new(key, msg=None, digestmod='') Return a new hmac object. key is a bytes or bytearray object giving the secret key. If msg is present, the method call update(msg) is made. digestmod is the digest name, digest constructor or module for the HMAC object to use. It may be any name suitable to hashlib.new(). Despite its argument position, it is required. Changed in version 3.4: Parameter key can be a bytes or bytearray object. Parameter msg can be of any type supported by hashlib. Parameter digestmod can be the name of a hash algorithm. Deprecated since version 3.4, removed in version 3.8: MD5 as implicit default digest for digestmod is deprecated. The digestmod parameter is now required. Pass it as a keyword argument to avoid awkwardness when you do not have an initial msg. hmac.digest(key, msg, digest) Return digest of msg for given secret key and digest. The function is equivalent to HMAC(key, msg, digest).digest(), but uses an optimized C or inline implementation, which is faster for messages that fit into memory. The parameters key, msg, and digest have the same meaning as in new(). CPython implementation detail, the optimized C implementation is only used when digest is a string and name of a digest algorithm, which is supported by OpenSSL. New in version 3.7. An HMAC object has the following methods: HMAC.update(msg) Update the hmac object with msg. Repeated calls are equivalent to a single call with the concatenation of all the arguments: m.update(a); m.update(b) is equivalent to m.update(a + b). Changed in version 3.4: Parameter msg can be of any type supported by hashlib. HMAC.digest() Return the digest of the bytes passed to the update() method so far. This bytes object will be the same length as the digest_size of the digest given to the constructor. It may contain non-ASCII bytes, including NUL bytes. Warning When comparing the output of digest() to an externally-supplied digest during a verification routine, it is recommended to use the compare_digest() function instead of the == operator to reduce the vulnerability to timing attacks. HMAC.hexdigest() Like digest() except the digest is returned as a string twice the length containing only hexadecimal digits. This may be used to exchange the value safely in email or other non-binary environments. Warning When comparing the output of hexdigest() to an externally-supplied digest during a verification routine, it is recommended to use the compare_digest() function instead of the == operator to reduce the vulnerability to timing attacks. HMAC.copy() Return a copy (“clone”) of the hmac object. This can be used to efficiently compute the digests of strings that share a common initial substring. A hash object has the following attributes: HMAC.digest_size The size of the resulting HMAC digest in bytes. HMAC.block_size The internal block size of the hash algorithm in bytes. New in version 3.4. HMAC.name The canonical name of this HMAC, always lowercase, e.g. hmac-md5. New in version 3.4. Deprecated since version 3.9: The undocumented attributes HMAC.digest_cons, HMAC.inner, and HMAC.outer are internal implementation details and will be removed in Python 3.10. This module also provides the following helper function: hmac.compare_digest(a, b) Return a == b. 
This function uses an approach designed to prevent timing analysis by avoiding content-based short circuiting behaviour, making it appropriate for cryptography. a and b must both be of the same type: either str (ASCII only, as e.g. returned by HMAC.hexdigest()), or a bytes-like object. Note If a and b are of different lengths, or if an error occurs, a timing attack could theoretically reveal information about the types and lengths of a and b—but not their values. New in version 3.3. Changed in version 3.9: The function uses OpenSSL’s CRYPTO_memcmp() internally when available. See also Module hashlib The Python module providing secure hash functions.
python.library.hmac
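To tie the pieces of the hmac entry above together, a sketch of signing and verifying a message; the secret key is a placeholder value, and compare_digest() is used instead of == as the entry recommends.

import hmac
import hashlib

SECRET_KEY = b'example secret key'  # placeholder; use a real secret in practice

def sign(message):
    return hmac.new(SECRET_KEY, message, digestmod=hashlib.sha256).hexdigest()

def verify(message, signature):
    # Constant-time comparison of the expected and supplied signatures.
    return hmac.compare_digest(sign(message), signature)

tag = sign(b'important message')
print(verify(b'important message', tag))  # True
print(verify(b'tampered message', tag))   # False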
hmac.compare_digest(a, b) Return a == b. This function uses an approach designed to prevent timing analysis by avoiding content-based short circuiting behaviour, making it appropriate for cryptography. a and b must both be of the same type: either str (ASCII only, as e.g. returned by HMAC.hexdigest()), or a bytes-like object. Note If a and b are of different lengths, or if an error occurs, a timing attack could theoretically reveal information about the types and lengths of a and b—but not their values. New in version 3.3. Changed in version 3.9: The function uses OpenSSL’s CRYPTO_memcmp() internally when available.
python.library.hmac#hmac.compare_digest
hmac.digest(key, msg, digest) Return digest of msg for given secret key and digest. The function is equivalent to HMAC(key, msg, digest).digest(), but uses an optimized C or inline implementation, which is faster for messages that fit into memory. The parameters key, msg, and digest have the same meaning as in new(). CPython implementation detail: the optimized C implementation is only used when digest is a string naming a digest algorithm that is supported by OpenSSL. New in version 3.7.
python.library.hmac#hmac.digest
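A minimal sketch of the one-shot helper described above; the key and message are illustrative values.

import hmac

tag = hmac.digest(b'example key', b'example message', 'sha256')
print(len(tag))  # 32, the digest size of SHA-256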
HMAC.block_size The internal block size of the hash algorithm in bytes. New in version 3.4.
python.library.hmac#hmac.HMAC.block_size
HMAC.copy() Return a copy (“clone”) of the hmac object. This can be used to efficiently compute the digests of strings that share a common initial substring.
python.library.hmac#hmac.HMAC.copy
HMAC.digest() Return the digest of the bytes passed to the update() method so far. This bytes object will be the same length as the digest_size of the digest given to the constructor. It may contain non-ASCII bytes, including NUL bytes. Warning When comparing the output of digest() to an externally-supplied digest during a verification routine, it is recommended to use the compare_digest() function instead of the == operator to reduce the vulnerability to timing attacks.
python.library.hmac#hmac.HMAC.digest
HMAC.digest_size The size of the resulting HMAC digest in bytes.
python.library.hmac#hmac.HMAC.digest_size
HMAC.hexdigest() Like digest() except the digest is returned as a string twice the length containing only hexadecimal digits. This may be used to exchange the value safely in email or other non-binary environments. Warning When comparing the output of hexdigest() to an externally-supplied digest during a verification routine, it is recommended to use the compare_digest() function instead of the == operator to reduce the vulnerability to timing attacks.
python.library.hmac#hmac.HMAC.hexdigest