ZipInfo.flag_bits ZIP flag bits.
python.library.zipfile#zipfile.ZipInfo.flag_bits
classmethod ZipInfo.from_file(filename, arcname=None, *, strict_timestamps=True) Construct a ZipInfo instance for a file on the filesystem, in preparation for adding it to a zip file. filename should be the path to a file or directory on the filesystem. If arcname is specified, it is used as the name within the archive. If arcname is not specified, the name will be the same as filename, but with any drive letter and leading path separators removed. The strict_timestamps argument, when set to False, allows zipping files older than 1980-01-01 at the cost of setting the timestamp to 1980-01-01. A similar behavior occurs with files newer than 2107-12-31: the timestamp is likewise set to the limit. New in version 3.6. Changed in version 3.6.2: The filename parameter accepts a path-like object. New in version 3.8: The strict_timestamps keyword-only argument.
python.library.zipfile#zipfile.ZipInfo.from_file
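A minimal sketch of the workflow described above: build a ZipInfo with from_file(), override arcname and the compression method, then write the entry. The file and archive names here ("notes.txt", "backup.zip") are illustrative, not from the documentation.

```python
import zipfile

# Create a small sample file to archive.
with open("notes.txt", "w") as f:
    f.write("hello\n")

# from_file() fills in size and timestamp metadata from the filesystem;
# arcname replaces the on-disk name inside the archive.
info = zipfile.ZipInfo.from_file("notes.txt", arcname="docs/notes.txt")
info.compress_type = zipfile.ZIP_DEFLATED  # default would be ZIP_STORED

with open("notes.txt", "rb") as src, zipfile.ZipFile("backup.zip", "w") as zf:
    zf.writestr(info, src.read())

with zipfile.ZipFile("backup.zip") as zf:
    print(zf.namelist())  # ['docs/notes.txt']
```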
ZipInfo.header_offset Byte offset to the file header.
python.library.zipfile#zipfile.ZipInfo.header_offset
ZipInfo.internal_attr Internal attributes.
python.library.zipfile#zipfile.ZipInfo.internal_attr
ZipInfo.is_dir() Return True if this archive member is a directory. This uses the entry’s name: directories should always end with /. New in version 3.6.
python.library.zipfile#zipfile.ZipInfo.is_dir
ZipInfo.reserved Must be zero.
python.library.zipfile#zipfile.ZipInfo.reserved
ZipInfo.volume Volume number of file header.
python.library.zipfile#zipfile.ZipInfo.volume
zipfile.ZIP_BZIP2 The numeric constant for the BZIP2 compression method. This requires the bz2 module. New in version 3.3.
python.library.zipfile#zipfile.ZIP_BZIP2
zipfile.ZIP_DEFLATED The numeric constant for the usual ZIP compression method. This requires the zlib module.
python.library.zipfile#zipfile.ZIP_DEFLATED
zipfile.ZIP_LZMA The numeric constant for the LZMA compression method. This requires the lzma module. New in version 3.3. Note The ZIP file format specification has included support for bzip2 compression since 2001, and for LZMA compression since 2006. However, some tools (including older Python releases) do not support these compression methods, and may either refuse to process the ZIP file altogether, or fail to extract individual files.
python.library.zipfile#zipfile.ZIP_LZMA
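The compression constants above are passed per member via compress_type. A small sketch comparing ZIP_STORED and ZIP_DEFLATED on the same payload (ZIP_BZIP2 and ZIP_LZMA work the same way when bz2/lzma are available); the archive name "methods.zip" is illustrative.

```python
import zipfile

data = b"abc" * 1000  # highly repetitive, so DEFLATE shrinks it well

with zipfile.ZipFile("methods.zip", "w") as zf:
    zf.writestr("stored.bin", data, compress_type=zipfile.ZIP_STORED)
    zf.writestr("deflated.bin", data, compress_type=zipfile.ZIP_DEFLATED)

with zipfile.ZipFile("methods.zip") as zf:
    for info in zf.infolist():
        # compress_size shows the effect of each method on this member.
        print(info.filename, info.compress_size)
```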
zipfile.ZIP_STORED The numeric constant for an uncompressed archive member.
python.library.zipfile#zipfile.ZIP_STORED
zipimport — Import modules from Zip archives Source code: Lib/zipimport.py This module adds the ability to import Python modules (*.py, *.pyc) and packages from ZIP-format archives. It is usually not needed to use the zipimport module explicitly; it is automatically used by the built-in import mechanism for sys.path items that are paths to ZIP archives. Typically, sys.path is a list of directory names as strings. This module also allows an item of sys.path to be a string naming a ZIP file archive. The ZIP archive can contain a subdirectory structure to support package imports, and a path within the archive can be specified to only import from a subdirectory. For example, the path example.zip/lib/ would only import from the lib/ subdirectory within the archive. Any files may be present in the ZIP archive, but only files .py and .pyc are available for import. ZIP import of dynamic modules (.pyd, .so) is disallowed. Note that if an archive only contains .py files, Python will not attempt to modify the archive by adding the corresponding .pyc file, meaning that if a ZIP archive doesn’t contain .pyc files, importing may be rather slow. Changed in version 3.8: Previously, ZIP archives with an archive comment were not supported. See also PKZIP Application Note Documentation on the ZIP file format by Phil Katz, the creator of the format and algorithms used. PEP 273 - Import Modules from Zip Archives Written by James C. Ahlstrom, who also provided an implementation. Python 2.3 follows the specification in PEP 273, but uses an implementation written by Just van Rossum that uses the import hooks described in PEP 302. PEP 302 - New Import Hooks The PEP to add the import hooks that help this module work. This module defines an exception: exception zipimport.ZipImportError Exception raised by zipimporter objects. It’s a subclass of ImportError, so it can be caught as ImportError, too. zipimporter Objects zipimporter is the class for importing ZIP files. 
class zipimport.zipimporter(archivepath) Create a new zipimporter instance. archivepath must be a path to a ZIP file, or to a specific path within a ZIP file. For example, an archivepath of foo/bar.zip/lib will look for modules in the lib directory inside the ZIP file foo/bar.zip (provided that it exists). ZipImportError is raised if archivepath doesn’t point to a valid ZIP archive. find_module(fullname[, path]) Search for a module specified by fullname. fullname must be the fully qualified (dotted) module name. It returns the zipimporter instance itself if the module was found, or None if it wasn’t. The optional path argument is ignored—it’s there for compatibility with the importer protocol. get_code(fullname) Return the code object for the specified module. Raise ZipImportError if the module couldn’t be found. get_data(pathname) Return the data associated with pathname. Raise OSError if the file wasn’t found. Changed in version 3.3: IOError used to be raised instead of OSError. get_filename(fullname) Return the value __file__ would be set to if the specified module was imported. Raise ZipImportError if the module couldn’t be found. New in version 3.1. get_source(fullname) Return the source code for the specified module. Raise ZipImportError if the module couldn’t be found, return None if the archive does contain the module, but has no source for it. is_package(fullname) Return True if the module specified by fullname is a package. Raise ZipImportError if the module couldn’t be found. load_module(fullname) Load the module specified by fullname. fullname must be the fully qualified (dotted) module name. It returns the imported module, or raises ZipImportError if it wasn’t found. archive The file name of the importer’s associated ZIP file, without a possible subpath. prefix The subpath within the ZIP file where modules are searched. This is the empty string for zipimporter objects which point to the root of the ZIP file. 
The archive and prefix attributes, when combined with a slash, equal the original archivepath argument given to the zipimporter constructor. Examples Here is an example that imports a module from a ZIP archive - note that the zipimport module is not explicitly used. $ unzip -l example.zip Archive: example.zip Length Date Time Name -------- ---- ---- ---- 8467 11-26-02 22:30 jwzthreading.py -------- ------- 8467 1 file $ ./python Python 2.3 (#1, Aug 1 2003, 19:54:32) >>> import sys >>> sys.path.insert(0, 'example.zip') # Add .zip file to front of path >>> import jwzthreading >>> jwzthreading.__file__ 'example.zip/jwzthreading.py'
python.library.zipimport
class zipimport.zipimporter(archivepath) Create a new zipimporter instance. archivepath must be a path to a ZIP file, or to a specific path within a ZIP file. For example, an archivepath of foo/bar.zip/lib will look for modules in the lib directory inside the ZIP file foo/bar.zip (provided that it exists). ZipImportError is raised if archivepath doesn’t point to a valid ZIP archive. find_module(fullname[, path]) Search for a module specified by fullname. fullname must be the fully qualified (dotted) module name. It returns the zipimporter instance itself if the module was found, or None if it wasn’t. The optional path argument is ignored—it’s there for compatibility with the importer protocol. get_code(fullname) Return the code object for the specified module. Raise ZipImportError if the module couldn’t be found. get_data(pathname) Return the data associated with pathname. Raise OSError if the file wasn’t found. Changed in version 3.3: IOError used to be raised instead of OSError. get_filename(fullname) Return the value __file__ would be set to if the specified module was imported. Raise ZipImportError if the module couldn’t be found. New in version 3.1. get_source(fullname) Return the source code for the specified module. Raise ZipImportError if the module couldn’t be found, return None if the archive does contain the module, but has no source for it. is_package(fullname) Return True if the module specified by fullname is a package. Raise ZipImportError if the module couldn’t be found. load_module(fullname) Load the module specified by fullname. fullname must be the fully qualified (dotted) module name. It returns the imported module, or raises ZipImportError if it wasn’t found. archive The file name of the importer’s associated ZIP file, without a possible subpath. prefix The subpath within the ZIP file where modules are searched. This is the empty string for zipimporter objects which point to the root of the ZIP file. 
The archive and prefix attributes, when combined with a slash, equal the original archivepath argument given to the zipimporter constructor.
python.library.zipimport#zipimport.zipimporter
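A sketch of the inspection methods described above, using a freshly created archive rather than the import system; the names "demo.zip" and "hello" are illustrative. It shows archive/prefix for an archivepath pointing at the root, plus is_package(), get_source(), and get_data().

```python
import zipfile
import zipimport

# Build a tiny archive containing one module.
with zipfile.ZipFile("demo.zip", "w") as zf:
    zf.writestr("hello.py", "GREETING = 'hi'\n")

imp = zipimport.zipimporter("demo.zip")
print(imp.archive)              # path of the ZIP file itself
print(repr(imp.prefix))         # '' — archivepath points at the root
print(imp.is_package("hello"))  # False: plain module, not a package
print(imp.get_source("hello"))  # the module's source text
print(imp.get_data("hello.py")) # raw bytes of a file inside the archive
```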
archive The file name of the importer’s associated ZIP file, without a possible subpath.
python.library.zipimport#zipimport.zipimporter.archive
find_module(fullname[, path]) Search for a module specified by fullname. fullname must be the fully qualified (dotted) module name. It returns the zipimporter instance itself if the module was found, or None if it wasn’t. The optional path argument is ignored—it’s there for compatibility with the importer protocol.
python.library.zipimport#zipimport.zipimporter.find_module
get_code(fullname) Return the code object for the specified module. Raise ZipImportError if the module couldn’t be found.
python.library.zipimport#zipimport.zipimporter.get_code
get_data(pathname) Return the data associated with pathname. Raise OSError if the file wasn’t found. Changed in version 3.3: IOError used to be raised instead of OSError.
python.library.zipimport#zipimport.zipimporter.get_data
get_filename(fullname) Return the value __file__ would be set to if the specified module was imported. Raise ZipImportError if the module couldn’t be found. New in version 3.1.
python.library.zipimport#zipimport.zipimporter.get_filename
get_source(fullname) Return the source code for the specified module. Raise ZipImportError if the module couldn’t be found; return None if the archive does contain the module but has no source for it.
python.library.zipimport#zipimport.zipimporter.get_source
is_package(fullname) Return True if the module specified by fullname is a package. Raise ZipImportError if the module couldn’t be found.
python.library.zipimport#zipimport.zipimporter.is_package
load_module(fullname) Load the module specified by fullname. fullname must be the fully qualified (dotted) module name. It returns the imported module, or raises ZipImportError if it wasn’t found.
python.library.zipimport#zipimport.zipimporter.load_module
prefix The subpath within the ZIP file where modules are searched. This is the empty string for zipimporter objects which point to the root of the ZIP file.
python.library.zipimport#zipimport.zipimporter.prefix
exception zipimport.ZipImportError Exception raised by zipimporter objects. It’s a subclass of ImportError, so it can be caught as ImportError, too.
python.library.zipimport#zipimport.ZipImportError
zlib — Compression compatible with gzip For applications that require data compression, the functions in this module allow compression and decompression, using the zlib library. The zlib library has its own home page at https://www.zlib.net. There are known incompatibilities between the Python module and versions of the zlib library earlier than 1.1.3; 1.1.3 has a security vulnerability, so we recommend using 1.1.4 or later. zlib’s functions have many options and often need to be used in a particular order. This documentation doesn’t attempt to cover all of the permutations; consult the zlib manual at http://www.zlib.net/manual.html for authoritative information. For reading and writing .gz files see the gzip module. The available exception and functions in this module are: exception zlib.error Exception raised on compression and decompression errors. zlib.adler32(data[, value]) Computes an Adler-32 checksum of data. (An Adler-32 checksum is almost as reliable as a CRC32 but can be computed much more quickly.) The result is an unsigned 32-bit integer. If value is present, it is used as the starting value of the checksum; otherwise, a default value of 1 is used. Passing in value allows computing a running checksum over the concatenation of several inputs. The algorithm is not cryptographically strong, and should not be used for authentication or digital signatures. Since the algorithm is designed for use as a checksum algorithm, it is not suitable for use as a general hash algorithm. Changed in version 3.0: Always returns an unsigned value. To generate the same numeric value across all Python versions and platforms, use adler32(data) & 0xffffffff. zlib.compress(data, /, level=-1) Compresses the bytes in data, returning a bytes object containing compressed data. level is an integer from 0 to 9 or -1 controlling the level of compression; 1 (Z_BEST_SPEED) is fastest and produces the least compression, 9 (Z_BEST_COMPRESSION) is slowest and produces the most. 
0 (Z_NO_COMPRESSION) is no compression. The default value is -1 (Z_DEFAULT_COMPRESSION). Z_DEFAULT_COMPRESSION represents a default compromise between speed and compression (currently equivalent to level 6). Raises the error exception if any error occurs. Changed in version 3.6: level can now be used as a keyword parameter. zlib.compressobj(level=-1, method=DEFLATED, wbits=MAX_WBITS, memLevel=DEF_MEM_LEVEL, strategy=Z_DEFAULT_STRATEGY[, zdict]) Returns a compression object, to be used for compressing data streams that won’t fit into memory at once. level is the compression level – an integer from 0 to 9 or -1. A value of 1 (Z_BEST_SPEED) is fastest and produces the least compression, while a value of 9 (Z_BEST_COMPRESSION) is slowest and produces the most. 0 (Z_NO_COMPRESSION) is no compression. The default value is -1 (Z_DEFAULT_COMPRESSION). Z_DEFAULT_COMPRESSION represents a default compromise between speed and compression (currently equivalent to level 6). method is the compression algorithm. Currently, the only supported value is DEFLATED. The wbits argument controls the size of the history buffer (or the “window size”) used when compressing data, and whether a header and trailer is included in the output. It can take several ranges of values, defaulting to 15 (MAX_WBITS): +9 to +15: The base-two logarithm of the window size, which therefore ranges between 512 and 32768. Larger values produce better compression at the expense of greater memory usage. The resulting output will include a zlib-specific header and trailer. −9 to −15: Uses the absolute value of wbits as the window size logarithm, while producing a raw output stream with no header or trailing checksum. +25 to +31 = 16 + (9 to 15): Uses the low 4 bits of the value as the window size logarithm, while including a basic gzip header and trailing checksum in the output. The memLevel argument controls the amount of memory used for the internal compression state. Valid values range from 1 to 9. 
Higher values use more memory, but are faster and produce smaller output. strategy is used to tune the compression algorithm. Possible values are Z_DEFAULT_STRATEGY, Z_FILTERED, Z_HUFFMAN_ONLY, Z_RLE (zlib 1.2.0.1) and Z_FIXED (zlib 1.2.2.2). zdict is a predefined compression dictionary. This is a sequence of bytes (such as a bytes object) containing subsequences that are expected to occur frequently in the data that is to be compressed. Those subsequences that are expected to be most common should come at the end of the dictionary. Changed in version 3.3: Added the zdict parameter and keyword argument support. zlib.crc32(data[, value]) Computes a CRC (Cyclic Redundancy Check) checksum of data. The result is an unsigned 32-bit integer. If value is present, it is used as the starting value of the checksum; otherwise, a default value of 0 is used. Passing in value allows computing a running checksum over the concatenation of several inputs. The algorithm is not cryptographically strong, and should not be used for authentication or digital signatures. Since the algorithm is designed for use as a checksum algorithm, it is not suitable for use as a general hash algorithm. Changed in version 3.0: Always returns an unsigned value. To generate the same numeric value across all Python versions and platforms, use crc32(data) & 0xffffffff. zlib.decompress(data, /, wbits=MAX_WBITS, bufsize=DEF_BUF_SIZE) Decompresses the bytes in data, returning a bytes object containing the uncompressed data. The wbits parameter depends on the format of data, and is discussed further below. If bufsize is given, it is used as the initial size of the output buffer. Raises the error exception if any error occurs. The wbits parameter controls the size of the history buffer (or “window size”), and what header and trailer format is expected. It is similar to the parameter for compressobj(), but accepts more ranges of values: +8 to +15: The base-two logarithm of the window size. 
The input must include a zlib header and trailer. 0: Automatically determine the window size from the zlib header. Only supported since zlib 1.2.3.5. −8 to −15: Uses the absolute value of wbits as the window size logarithm. The input must be a raw stream with no header or trailer. +24 to +31 = 16 + (8 to 15): Uses the low 4 bits of the value as the window size logarithm. The input must include a gzip header and trailer. +40 to +47 = 32 + (8 to 15): Uses the low 4 bits of the value as the window size logarithm, and automatically accepts either the zlib or gzip format. When decompressing a stream, the window size must not be smaller than the size originally used to compress the stream; using a too-small value may result in an error exception. The default wbits value corresponds to the largest window size and requires a zlib header and trailer to be included. bufsize is the initial size of the buffer used to hold decompressed data. If more space is required, the buffer size will be increased as needed, so you don’t have to get this value exactly right; tuning it will only save a few calls to malloc(). Changed in version 3.6: wbits and bufsize can be used as keyword arguments. zlib.decompressobj(wbits=MAX_WBITS[, zdict]) Returns a decompression object, to be used for decompressing data streams that won’t fit into memory at once. The wbits parameter controls the size of the history buffer (or the “window size”), and what header and trailer format is expected. It has the same meaning as described for decompress(). The zdict parameter specifies a predefined compression dictionary. If provided, this must be the same dictionary as was used by the compressor that produced the data that is to be decompressed. Note If zdict is a mutable object (such as a bytearray), you must not modify its contents between the call to decompressobj() and the first call to the decompressor’s decompress() method. Changed in version 3.3: Added the zdict parameter. 
Compression objects support the following methods: Compress.compress(data) Compress data, returning a bytes object containing compressed data for at least part of the data in data. This data should be concatenated to the output produced by any preceding calls to the compress() method. Some input may be kept in internal buffers for later processing. Compress.flush([mode]) All pending input is processed, and a bytes object containing the remaining compressed output is returned. mode can be selected from the constants Z_NO_FLUSH, Z_PARTIAL_FLUSH, Z_SYNC_FLUSH, Z_FULL_FLUSH, Z_BLOCK (zlib 1.2.3.4), or Z_FINISH, defaulting to Z_FINISH. All constants except Z_FINISH allow compressing further bytestrings of data; Z_FINISH finishes the compressed stream and prevents compressing any more data. After calling flush() with mode set to Z_FINISH, the compress() method cannot be called again; the only realistic action is to delete the object. Compress.copy() Returns a copy of the compression object. This can be used to efficiently compress a set of data that share a common initial prefix. Changed in version 3.8: Added copy.copy() and copy.deepcopy() support to compression objects. Decompression objects support the following methods and attributes: Decompress.unused_data A bytes object which contains any bytes past the end of the compressed data. That is, this remains b"" until the last byte that contains compression data is available. If the whole bytestring turned out to contain compressed data, this is b"", an empty bytes object. Decompress.unconsumed_tail A bytes object that contains any data that was not consumed by the last decompress() call because it exceeded the limit for the uncompressed data buffer. This data has not yet been seen by the zlib machinery, so you must feed it (possibly with further data concatenated to it) back to a subsequent decompress() method call in order to get correct output. 
Decompress.eof A boolean indicating whether the end of the compressed data stream has been reached. This makes it possible to distinguish between a properly-formed compressed stream, and an incomplete or truncated one. New in version 3.3. Decompress.decompress(data, max_length=0) Decompress data, returning a bytes object containing the uncompressed data corresponding to at least part of the data in data. This data should be concatenated to the output produced by any preceding calls to the decompress() method. Some of the input data may be preserved in internal buffers for later processing. If the optional parameter max_length is non-zero then the return value will be no longer than max_length. This may mean that not all of the compressed input can be processed, and unconsumed data will be stored in the attribute unconsumed_tail. This bytestring must be passed to a subsequent call to decompress() if decompression is to continue. If max_length is zero then the whole input is decompressed, and unconsumed_tail is empty. Changed in version 3.6: max_length can be used as a keyword argument. Decompress.flush([length]) All pending input is processed, and a bytes object containing the remaining uncompressed output is returned. After calling flush(), the decompress() method cannot be called again; the only realistic action is to delete the object. The optional parameter length sets the initial size of the output buffer. Decompress.copy() Returns a copy of the decompression object. This can be used to save the state of the decompressor midway through the data stream in order to speed up random seeks into the stream at a future point. Changed in version 3.8: Added copy.copy() and copy.deepcopy() support to decompression objects. Information about the version of the zlib library in use is available through the following constants: zlib.ZLIB_VERSION The version string of the zlib library that was used for building the module. 
This may be different from the zlib library actually used at runtime, which is available as ZLIB_RUNTIME_VERSION. zlib.ZLIB_RUNTIME_VERSION The version string of the zlib library actually loaded by the interpreter. New in version 3.3. See also Module gzip Reading and writing gzip-format files. http://www.zlib.net The zlib library home page. http://www.zlib.net/manual.html The zlib manual explains the semantics and usage of the library’s many functions.
python.library.zlib
zlib.adler32(data[, value]) Computes an Adler-32 checksum of data. (An Adler-32 checksum is almost as reliable as a CRC32 but can be computed much more quickly.) The result is an unsigned 32-bit integer. If value is present, it is used as the starting value of the checksum; otherwise, a default value of 1 is used. Passing in value allows computing a running checksum over the concatenation of several inputs. The algorithm is not cryptographically strong, and should not be used for authentication or digital signatures. Since the algorithm is designed for use as a checksum algorithm, it is not suitable for use as a general hash algorithm. Changed in version 3.0: Always returns an unsigned value. To generate the same numeric value across all Python versions and platforms, use adler32(data) & 0xffffffff.
python.library.zlib#zlib.adler32
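A sketch of the running-checksum pattern described above: chaining the checksum of one chunk into the next call gives the same result as checksumming the concatenation.

```python
import zlib

# Checksum of the whole message in one call.
whole = zlib.adler32(b"spam and eggs")

# Same message, checksummed in two chunks by passing the running value.
running = zlib.adler32(b"spam ")
running = zlib.adler32(b"and eggs", running)

print(whole == running)  # True
```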
zlib.compress(data, /, level=-1) Compresses the bytes in data, returning a bytes object containing compressed data. level is an integer from 0 to 9 or -1 controlling the level of compression; 1 (Z_BEST_SPEED) is fastest and produces the least compression, 9 (Z_BEST_COMPRESSION) is slowest and produces the most. 0 (Z_NO_COMPRESSION) is no compression. The default value is -1 (Z_DEFAULT_COMPRESSION). Z_DEFAULT_COMPRESSION represents a default compromise between speed and compression (currently equivalent to level 6). Raises the error exception if any error occurs. Changed in version 3.6: level can now be used as a keyword parameter.
python.library.zlib#zlib.compress
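A minimal round-trip sketch of the level trade-off described above: both levels decompress back to the original bytes; on repetitive input, level 9 output is no larger than level 1 output.

```python
import zlib

data = b"witch which has which witches wrist watch" * 10

fast = zlib.compress(data, level=1)   # Z_BEST_SPEED
small = zlib.compress(data, level=9)  # Z_BEST_COMPRESSION

# Both compressed forms round-trip to the original bytes.
print(zlib.decompress(fast) == data)   # True
print(zlib.decompress(small) == data)  # True
print(len(small) <= len(fast))
```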
Compress.compress(data) Compress data, returning a bytes object containing compressed data for at least part of the data in data. This data should be concatenated to the output produced by any preceding calls to the compress() method. Some input may be kept in internal buffers for later processing.
python.library.zlib#zlib.Compress.compress
Compress.copy() Returns a copy of the compression object. This can be used to efficiently compress a set of data that share a common initial prefix.
python.library.zlib#zlib.Compress.copy
Compress.flush([mode]) All pending input is processed, and a bytes object containing the remaining compressed output is returned. mode can be selected from the constants Z_NO_FLUSH, Z_PARTIAL_FLUSH, Z_SYNC_FLUSH, Z_FULL_FLUSH, Z_BLOCK (zlib 1.2.3.4), or Z_FINISH, defaulting to Z_FINISH. All constants except Z_FINISH allow compressing further bytestrings of data; Z_FINISH finishes the compressed stream and prevents compressing any more data. After calling flush() with mode set to Z_FINISH, the compress() method cannot be called again; the only realistic action is to delete the object.
python.library.zlib#zlib.Compress.flush
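A streaming sketch of the compress()/flush() protocol described above: feed chunks through a compression object, then terminate the stream with flush() (default mode Z_FINISH).

```python
import zlib

chunks = [b"first part, ", b"second part, ", b"third part"]

comp = zlib.compressobj(level=6)
# Concatenate the partial outputs of compress(), then finish the stream.
out = b"".join(comp.compress(c) for c in chunks) + comp.flush()

print(zlib.decompress(out))  # b'first part, second part, third part'
```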
zlib.compressobj(level=-1, method=DEFLATED, wbits=MAX_WBITS, memLevel=DEF_MEM_LEVEL, strategy=Z_DEFAULT_STRATEGY[, zdict]) Returns a compression object, to be used for compressing data streams that won’t fit into memory at once. level is the compression level – an integer from 0 to 9 or -1. A value of 1 (Z_BEST_SPEED) is fastest and produces the least compression, while a value of 9 (Z_BEST_COMPRESSION) is slowest and produces the most. 0 (Z_NO_COMPRESSION) is no compression. The default value is -1 (Z_DEFAULT_COMPRESSION). Z_DEFAULT_COMPRESSION represents a default compromise between speed and compression (currently equivalent to level 6). method is the compression algorithm. Currently, the only supported value is DEFLATED. The wbits argument controls the size of the history buffer (or the “window size”) used when compressing data, and whether a header and trailer is included in the output. It can take several ranges of values, defaulting to 15 (MAX_WBITS): +9 to +15: The base-two logarithm of the window size, which therefore ranges between 512 and 32768. Larger values produce better compression at the expense of greater memory usage. The resulting output will include a zlib-specific header and trailer. −9 to −15: Uses the absolute value of wbits as the window size logarithm, while producing a raw output stream with no header or trailing checksum. +25 to +31 = 16 + (9 to 15): Uses the low 4 bits of the value as the window size logarithm, while including a basic gzip header and trailing checksum in the output. The memLevel argument controls the amount of memory used for the internal compression state. Valid values range from 1 to 9. Higher values use more memory, but are faster and produce smaller output. strategy is used to tune the compression algorithm. Possible values are Z_DEFAULT_STRATEGY, Z_FILTERED, Z_HUFFMAN_ONLY, Z_RLE (zlib 1.2.0.1) and Z_FIXED (zlib 1.2.2.2). zdict is a predefined compression dictionary. 
This is a sequence of bytes (such as a bytes object) containing subsequences that are expected to occur frequently in the data that is to be compressed. Those subsequences that are expected to be most common should come at the end of the dictionary. Changed in version 3.3: Added the zdict parameter and keyword argument support.
python.library.zlib#zlib.compressobj
zlib.crc32(data[, value]) Computes a CRC (Cyclic Redundancy Check) checksum of data. The result is an unsigned 32-bit integer. If value is present, it is used as the starting value of the checksum; otherwise, a default value of 0 is used. Passing in value allows computing a running checksum over the concatenation of several inputs. The algorithm is not cryptographically strong, and should not be used for authentication or digital signatures. Since the algorithm is designed for use as a checksum algorithm, it is not suitable for use as a general hash algorithm. Changed in version 3.0: Always returns an unsigned value. To generate the same numeric value across all Python versions and platforms, use crc32(data) & 0xffffffff.
python.library.zlib#zlib.crc32
zlib.decompress(data, /, wbits=MAX_WBITS, bufsize=DEF_BUF_SIZE) Decompresses the bytes in data, returning a bytes object containing the uncompressed data. The wbits parameter depends on the format of data, and is discussed further below. If bufsize is given, it is used as the initial size of the output buffer. Raises the error exception if any error occurs. The wbits parameter controls the size of the history buffer (or “window size”), and what header and trailer format is expected. It is similar to the parameter for compressobj(), but accepts more ranges of values: +8 to +15: The base-two logarithm of the window size. The input must include a zlib header and trailer. 0: Automatically determine the window size from the zlib header. Only supported since zlib 1.2.3.5. −8 to −15: Uses the absolute value of wbits as the window size logarithm. The input must be a raw stream with no header or trailer. +24 to +31 = 16 + (8 to 15): Uses the low 4 bits of the value as the window size logarithm. The input must include a gzip header and trailer. +40 to +47 = 32 + (8 to 15): Uses the low 4 bits of the value as the window size logarithm, and automatically accepts either the zlib or gzip format. When decompressing a stream, the window size must not be smaller than the size originally used to compress the stream; using a too-small value may result in an error exception. The default wbits value corresponds to the largest window size and requires a zlib header and trailer to be included. bufsize is the initial size of the buffer used to hold decompressed data. If more space is required, the buffer size will be increased as needed, so you don’t have to get this value exactly right; tuning it will only save a few calls to malloc(). Changed in version 3.6: wbits and bufsize can be used as keyword arguments.
python.library.zlib#zlib.decompress
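A sketch of two of the wbits ranges described above: a negative value produces and consumes a raw stream with no header or checksum, while 32 + 15 = 47 auto-detects zlib or gzip framing.

```python
import zlib

data = b"raw deflate round trip" * 20

# Negative wbits: raw deflate stream, no header and no trailing checksum.
comp = zlib.compressobj(wbits=-15)
raw = comp.compress(data) + comp.flush()
print(zlib.decompress(raw, wbits=-15) == data)  # True

# wbits=47 (32 + 15) auto-detects the zlib or gzip wrapper.
wrapped = zlib.compress(data)  # default zlib-wrapped format
print(zlib.decompress(wrapped, wbits=47) == data)  # True
```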
Decompress.copy() Returns a copy of the decompression object. This can be used to save the state of the decompressor midway through the data stream in order to speed up random seeks into the stream at a future point.
python.library.zlib#zlib.Decompress.copy
Decompress.decompress(data, max_length=0) Decompress data, returning a bytes object containing the uncompressed data corresponding to at least part of the data in data. This data should be concatenated to the output produced by any preceding calls to the decompress() method. Some of the input data may be preserved in internal buffers for later processing. If the optional parameter max_length is non-zero then the return value will be no longer than max_length. This may mean that not all of the compressed input can be processed, and unconsumed data will be stored in the attribute unconsumed_tail. This bytestring must be passed to a subsequent call to decompress() if decompression is to continue. If max_length is zero then the whole input is decompressed, and unconsumed_tail is empty. Changed in version 3.6: max_length can be used as a keyword argument.
python.library.zlib#zlib.Decompress.decompress
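A minimal sketch of the max_length / unconsumed_tail protocol described above, decompressing in bounded chunks:

```python
import zlib

data = b"x" * 10_000
comp = zlib.compress(data)

d = zlib.decompressobj()
out = d.decompress(comp, 1024)            # cap each returned chunk at 1024 bytes
assert len(out) <= 1024
while d.unconsumed_tail:                  # feed unconsumed input back in
    out += d.decompress(d.unconsumed_tail, 1024)
out += d.flush()                          # collect any remaining output
assert out == data
```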
Decompress.eof A boolean indicating whether the end of the compressed data stream has been reached. This makes it possible to distinguish between a properly-formed compressed stream, and an incomplete or truncated one. New in version 3.3.
python.library.zlib#zlib.Decompress.eof
Decompress.flush([length]) All pending input is processed, and a bytes object containing the remaining uncompressed output is returned. After calling flush(), the decompress() method cannot be called again; the only realistic action is to delete the object. The optional parameter length sets the initial size of the output buffer.
python.library.zlib#zlib.Decompress.flush
Decompress.unconsumed_tail A bytes object that contains any data that was not consumed by the last decompress() call because it exceeded the limit for the uncompressed data buffer. This data has not yet been seen by the zlib machinery, so you must feed it (possibly with further data concatenated to it) back to a subsequent decompress() method call in order to get correct output.
python.library.zlib#zlib.Decompress.unconsumed_tail
Decompress.unused_data A bytes object which contains any bytes past the end of the compressed data. That is, this remains b"" until the last byte that contains compression data is available. If the whole bytestring turned out to contain compressed data, this is b"", an empty bytes object.
python.library.zlib#zlib.Decompress.unused_data
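For example, any bytes following a complete compressed stream end up in unused_data, while eof reports that the stream itself finished cleanly:

```python
import zlib

comp = zlib.compress(b"payload")
d = zlib.decompressobj()
result = d.decompress(comp + b"trailing junk")   # extra bytes after the stream

assert result == b"payload"
assert d.eof                                     # the stream ended properly
assert d.unused_data == b"trailing junk"         # bytes past the end
```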
zlib.decompressobj(wbits=MAX_WBITS[, zdict]) Returns a decompression object, to be used for decompressing data streams that won’t fit into memory at once. The wbits parameter controls the size of the history buffer (or the “window size”), and what header and trailer format is expected. It has the same meaning as described for decompress(). The zdict parameter specifies a predefined compression dictionary. If provided, this must be the same dictionary as was used by the compressor that produced the data that is to be decompressed. Note If zdict is a mutable object (such as a bytearray), you must not modify its contents between the call to decompressobj() and the first call to the decompressor’s decompress() method. Changed in version 3.3: Added the zdict parameter.
python.library.zlib#zlib.decompressobj
exception zlib.error Exception raised on compression and decompression errors.
python.library.zlib#zlib.error
zlib.ZLIB_RUNTIME_VERSION The version string of the zlib library actually loaded by the interpreter. New in version 3.3.
python.library.zlib#zlib.ZLIB_RUNTIME_VERSION
zlib.ZLIB_VERSION The version string of the zlib library that was used for building the module. This may be different from the zlib library actually used at runtime, which is available as ZLIB_RUNTIME_VERSION.
python.library.zlib#zlib.ZLIB_VERSION
zoneinfo — IANA time zone support New in version 3.9. The zoneinfo module provides a concrete time zone implementation to support the IANA time zone database as originally specified in PEP 615. By default, zoneinfo uses the system’s time zone data if available; if no system time zone data is available, the library will fall back to using the first-party tzdata package available on PyPI. See also Module: datetime Provides the time and datetime types with which the ZoneInfo class is designed to be used. Package tzdata First-party package maintained by the CPython core developers to supply time zone data via PyPI. Using ZoneInfo ZoneInfo is a concrete implementation of the datetime.tzinfo abstract base class, and is intended to be attached to tzinfo, either via the constructor, the datetime.replace method or datetime.astimezone: >>> from zoneinfo import ZoneInfo >>> from datetime import datetime, timedelta >>> dt = datetime(2020, 10, 31, 12, tzinfo=ZoneInfo("America/Los_Angeles")) >>> print(dt) 2020-10-31 12:00:00-07:00 >>> dt.tzname() 'PDT' Datetimes constructed in this way are compatible with datetime arithmetic and handle daylight saving time transitions with no further intervention: >>> dt_add = dt + timedelta(days=1) >>> print(dt_add) 2020-11-01 12:00:00-08:00 >>> dt_add.tzname() 'PST' These time zones also support the fold attribute introduced in PEP 495. 
During offset transitions which induce ambiguous times (such as a daylight saving time to standard time transition), the offset from before the transition is used when fold=0, and the offset after the transition is used when fold=1, for example: >>> dt = datetime(2020, 11, 1, 1, tzinfo=ZoneInfo("America/Los_Angeles")) >>> print(dt) 2020-11-01 01:00:00-07:00 >>> print(dt.replace(fold=1)) 2020-11-01 01:00:00-08:00 When converting from another time zone, the fold will be set to the correct value: >>> from datetime import timezone >>> LOS_ANGELES = ZoneInfo("America/Los_Angeles") >>> dt_utc = datetime(2020, 11, 1, 8, tzinfo=timezone.utc) >>> # Before the PDT -> PST transition >>> print(dt_utc.astimezone(LOS_ANGELES)) 2020-11-01 01:00:00-07:00 >>> # After the PDT -> PST transition >>> print((dt_utc + timedelta(hours=1)).astimezone(LOS_ANGELES)) 2020-11-01 01:00:00-08:00 Data sources The zoneinfo module does not directly provide time zone data, and instead pulls time zone information from the system time zone database or the first-party PyPI package tzdata, if available. Some systems, including notably Windows systems, do not have an IANA database available, and so for projects targeting cross-platform compatibility that require time zone data, it is recommended to declare a dependency on tzdata. If neither system data nor tzdata are available, all calls to ZoneInfo will raise ZoneInfoNotFoundError. Configuring the data sources When ZoneInfo(key) is called, the constructor first searches the directories specified in TZPATH for a file matching key, and on failure looks for a match in the tzdata package. This behavior can be configured in three ways: The default TZPATH when not otherwise specified can be configured at compile time. TZPATH can be configured using an environment variable. At runtime, the search path can be manipulated using the reset_tzpath() function. 
Compile-time configuration The default TZPATH includes several common deployment locations for the time zone database (except on Windows, where there are no “well-known” locations for time zone data). On POSIX systems, downstream distributors and those building Python from source who know where their system time zone data is deployed may change the default time zone path by specifying the compile-time option TZPATH (or, more likely, the configure flag --with-tzpath), which should be a string delimited by os.pathsep. On all platforms, the configured value is available as the TZPATH key in sysconfig.get_config_var(). Environment configuration When initializing TZPATH (either at import time or whenever reset_tzpath() is called with no arguments), the zoneinfo module will use the environment variable PYTHONTZPATH, if it exists, to set the search path. PYTHONTZPATH This is an os.pathsep-separated string containing the time zone search path to use. It must consist of only absolute rather than relative paths. Relative components specified in PYTHONTZPATH will not be used, but otherwise the behavior when a relative path is specified is implementation-defined; CPython will raise InvalidTZPathWarning, but other implementations are free to silently ignore the erroneous component or raise an exception. To set the system to ignore the system data and use the tzdata package instead, set PYTHONTZPATH="". Runtime configuration The TZ search path can also be configured at runtime using the reset_tzpath() function. This is generally not an advisable operation, though it is reasonable to use it in test functions that require the use of a specific time zone path (or require disabling access to the system time zones). The ZoneInfo class class zoneinfo.ZoneInfo(key) A concrete datetime.tzinfo subclass that represents an IANA time zone specified by the string key. 
Calls to the primary constructor will always return objects that compare identically; put another way, barring cache invalidation via ZoneInfo.clear_cache(), for all values of key, the following assertion will always be true: a = ZoneInfo(key) b = ZoneInfo(key) assert a is b key must be in the form of a relative, normalized POSIX path, with no up-level references. The constructor will raise ValueError if a non-conforming key is passed. If no file matching key is found, the constructor will raise ZoneInfoNotFoundError. The ZoneInfo class has two alternate constructors: classmethod ZoneInfo.from_file(fobj, /, key=None) Constructs a ZoneInfo object from a file-like object returning bytes (e.g. a file opened in binary mode or an io.BytesIO object). Unlike the primary constructor, this always constructs a new object. The key parameter sets the name of the zone for the purposes of __str__() and __repr__(). Objects created via this constructor cannot be pickled (see pickling). classmethod ZoneInfo.no_cache(key) An alternate constructor that bypasses the constructor’s cache. It is identical to the primary constructor, but returns a new object on each call. This is most likely to be useful for testing or demonstration purposes, but it can also be used to create a system with a different cache invalidation strategy. Objects created via this constructor will also bypass the cache of a deserializing process when unpickled. Caution Using this constructor may change the semantics of your datetimes in surprising ways; only use it if you know that you need to. The following class methods are also available: classmethod ZoneInfo.clear_cache(*, only_keys=None) A method for invalidating the cache on the ZoneInfo class. If no arguments are passed, all caches are invalidated and the next call to the primary constructor for each key will return a new instance. If an iterable of key names is passed to the only_keys parameter, only the specified keys will be removed from the cache. 
Keys passed to only_keys but not found in the cache are ignored. Warning Invoking this function may change the semantics of datetimes using ZoneInfo in surprising ways; this modifies process-wide global state and thus may have wide-ranging effects. Only use it if you know that you need to. The class has one attribute: ZoneInfo.key This is a read-only attribute that returns the value of key passed to the constructor, which should be a lookup key in the IANA time zone database (e.g. America/New_York, Europe/Paris or Asia/Tokyo). For zones constructed from file without specifying a key parameter, this will be set to None. Note Although it is a somewhat common practice to expose these to end users, these values are designed to be primary keys for representing the relevant zones and not necessarily user-facing elements. Projects like CLDR (the Unicode Common Locale Data Repository) can be used to get more user-friendly strings from these keys. String representations The string representation returned when calling str on a ZoneInfo object defaults to using the ZoneInfo.key attribute (see the note on usage in the attribute documentation): >>> zone = ZoneInfo("Pacific/Kwajalein") >>> str(zone) 'Pacific/Kwajalein' >>> dt = datetime(2020, 4, 1, 3, 15, tzinfo=zone) >>> f"{dt.isoformat()} [{dt.tzinfo}]" '2020-04-01T03:15:00+12:00 [Pacific/Kwajalein]' For objects constructed from a file without specifying a key parameter, str falls back to calling repr(). ZoneInfo’s repr is implementation-defined and not necessarily stable between versions, but it is guaranteed not to be a valid ZoneInfo key. Pickle serialization Rather than serializing all transition data, ZoneInfo objects are serialized by key, and ZoneInfo objects constructed from files (even those with a value for key specified) cannot be pickled. 
The behavior of a ZoneInfo object depends on how it was constructed: ZoneInfo(key): When constructed with the primary constructor, a ZoneInfo object is serialized by key, and when deserialized, the deserializing process uses the primary constructor, and thus these are expected to be the same object as other references to the same time zone. For example, if europe_berlin_pkl is a string containing a pickle constructed from ZoneInfo("Europe/Berlin"), one would expect the following behavior: >>> a = ZoneInfo("Europe/Berlin") >>> b = pickle.loads(europe_berlin_pkl) >>> a is b True ZoneInfo.no_cache(key): When constructed from the cache-bypassing constructor, the ZoneInfo object is also serialized by key, but when deserialized, the deserializing process uses the cache-bypassing constructor. If europe_berlin_pkl_nc is a string containing a pickle constructed from ZoneInfo.no_cache("Europe/Berlin"), one would expect the following behavior: >>> a = ZoneInfo("Europe/Berlin") >>> b = pickle.loads(europe_berlin_pkl_nc) >>> a is b False ZoneInfo.from_file(fobj, /, key=None): When constructed from a file, the ZoneInfo object raises an exception on pickling. If an end user wants to pickle a ZoneInfo constructed from a file, it is recommended that they use a wrapper type or a custom serialization function: either serializing by key or storing the contents of the file object and serializing that. This method of serialization requires that the time zone data for the required key be available on both the serializing and deserializing side, similar to the way that references to classes and functions are expected to exist in both the serializing and deserializing environments. It also means that no guarantees are made about the consistency of results when unpickling a ZoneInfo pickled in an environment with a different version of the time zone data. 
Functions zoneinfo.available_timezones() Get a set containing all the valid keys for IANA time zones available anywhere on the time zone path. This is recalculated on every call to the function. This function only includes canonical zone names and does not include “special” zones such as those under the posix/ and right/ directories, or the posixrules zone. Caution This function may open a large number of files, as the best way to determine if a file on the time zone path is a valid time zone is to read the “magic string” at the beginning. Note These values are not designed to be exposed to end-users; for user facing elements, applications should use something like CLDR (the Unicode Common Locale Data Repository) to get more user-friendly strings. See also the cautionary note on ZoneInfo.key. zoneinfo.reset_tzpath(to=None) Sets or resets the time zone search path (TZPATH) for the module. When called with no arguments, TZPATH is set to the default value. Calling reset_tzpath will not invalidate the ZoneInfo cache, and so calls to the primary ZoneInfo constructor will only use the new TZPATH in the case of a cache miss. The to parameter must be a sequence of strings or os.PathLike and not a string, all of which must be absolute paths. ValueError will be raised if something other than an absolute path is passed. Globals zoneinfo.TZPATH A read-only sequence representing the time zone search path – when constructing a ZoneInfo from a key, the key is joined to each entry in the TZPATH, and the first file found is used. TZPATH may contain only absolute paths, never relative paths, regardless of how it is configured. The object that zoneinfo.TZPATH points to may change in response to a call to reset_tzpath(), so it is recommended to use zoneinfo.TZPATH rather than importing TZPATH from zoneinfo or assigning a long-lived variable to zoneinfo.TZPATH. For more information on configuring the time zone search path, see Configuring the data sources. 
Exceptions and warnings exception zoneinfo.ZoneInfoNotFoundError Raised when construction of a ZoneInfo object fails because the specified key could not be found on the system. This is a subclass of KeyError. exception zoneinfo.InvalidTZPathWarning Raised when PYTHONTZPATH contains an invalid component that will be filtered out, such as a relative path.
python.library.zoneinfo
zoneinfo.available_timezones() Get a set containing all the valid keys for IANA time zones available anywhere on the time zone path. This is recalculated on every call to the function. This function only includes canonical zone names and does not include “special” zones such as those under the posix/ and right/ directories, or the posixrules zone. Caution This function may open a large number of files, as the best way to determine if a file on the time zone path is a valid time zone is to read the “magic string” at the beginning. Note These values are not designed to be exposed to end-users; for user facing elements, applications should use something like CLDR (the Unicode Common Locale Data Repository) to get more user-friendly strings. See also the cautionary note on ZoneInfo.key.
python.library.zoneinfo#zoneinfo.available_timezones
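A quick sketch of the function's output, assuming time zone data is available (from the system or the tzdata package):

```python
import zoneinfo

zones = zoneinfo.available_timezones()   # recomputed on every call
assert isinstance(zones, set)
assert "UTC" in zones                    # canonical keys only...
assert not any(key.startswith("posix/") for key in zones)   # ...no "special" zones
```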
exception zoneinfo.InvalidTZPathWarning Raised when PYTHONTZPATH contains an invalid component that will be filtered out, such as a relative path.
python.library.zoneinfo#zoneinfo.InvalidTZPathWarning
zoneinfo.reset_tzpath(to=None) Sets or resets the time zone search path (TZPATH) for the module. When called with no arguments, TZPATH is set to the default value. Calling reset_tzpath will not invalidate the ZoneInfo cache, and so calls to the primary ZoneInfo constructor will only use the new TZPATH in the case of a cache miss. The to parameter must be a sequence of strings or os.PathLike and not a string, all of which must be absolute paths. ValueError will be raised if something other than an absolute path is passed.
python.library.zoneinfo#zoneinfo.reset_tzpath
zoneinfo.TZPATH A read-only sequence representing the time zone search path – when constructing a ZoneInfo from a key, the key is joined to each entry in the TZPATH, and the first file found is used. TZPATH may contain only absolute paths, never relative paths, regardless of how it is configured. The object that zoneinfo.TZPATH points to may change in response to a call to reset_tzpath(), so it is recommended to use zoneinfo.TZPATH rather than importing TZPATH from zoneinfo or assigning a long-lived variable to zoneinfo.TZPATH. For more information on configuring the time zone search path, see Configuring the data sources.
python.library.zoneinfo#zoneinfo.TZPATH
class zoneinfo.ZoneInfo(key) A concrete datetime.tzinfo subclass that represents an IANA time zone specified by the string key. Calls to the primary constructor will always return objects that compare identically; put another way, barring cache invalidation via ZoneInfo.clear_cache(), for all values of key, the following assertion will always be true: a = ZoneInfo(key) b = ZoneInfo(key) assert a is b key must be in the form of a relative, normalized POSIX path, with no up-level references. The constructor will raise ValueError if a non-conforming key is passed. If no file matching key is found, the constructor will raise ZoneInfoNotFoundError.
python.library.zoneinfo#zoneinfo.ZoneInfo
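A sketch of the constructor's caching and key validation, assuming time zone data for "UTC" is available:

```python
from zoneinfo import ZoneInfo

# Primary-constructor results are cached: same key, same object.
a = ZoneInfo("UTC")
b = ZoneInfo("UTC")
assert a is b

# A key with an up-level reference is non-conforming and raises ValueError.
rejected = False
try:
    ZoneInfo("../UTC")
except ValueError:
    rejected = True
assert rejected
```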
classmethod ZoneInfo.clear_cache(*, only_keys=None) A method for invalidating the cache on the ZoneInfo class. If no arguments are passed, all caches are invalidated and the next call to the primary constructor for each key will return a new instance. If an iterable of key names is passed to the only_keys parameter, only the specified keys will be removed from the cache. Keys passed to only_keys but not found in the cache are ignored. Warning Invoking this function may change the semantics of datetimes using ZoneInfo in surprising ways; this modifies process-wide global state and thus may have wide-ranging effects. Only use it if you know that you need to.
python.library.zoneinfo#zoneinfo.ZoneInfo.clear_cache
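For example, evicting a single key with only_keys forces the next primary-constructor call to build a fresh instance (this assumes time zone data for "UTC" is available):

```python
from zoneinfo import ZoneInfo

a = ZoneInfo("UTC")
ZoneInfo.clear_cache(only_keys=["UTC"])   # evict just this key
b = ZoneInfo("UTC")
assert a is not b                          # a fresh instance after eviction
assert b is ZoneInfo("UTC")                # ...which is now the cached one
```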
classmethod ZoneInfo.from_file(fobj, /, key=None) Constructs a ZoneInfo object from a file-like object returning bytes (e.g. a file opened in binary mode or an io.BytesIO object). Unlike the primary constructor, this always constructs a new object. The key parameter sets the name of the zone for the purposes of __str__() and __repr__(). Objects created via this constructor cannot be pickled (see pickling).
python.library.zoneinfo#zoneinfo.ZoneInfo.from_file
ZoneInfo.key This is a read-only attribute that returns the value of key passed to the constructor, which should be a lookup key in the IANA time zone database (e.g. America/New_York, Europe/Paris or Asia/Tokyo). For zones constructed from file without specifying a key parameter, this will be set to None. Note Although it is a somewhat common practice to expose these to end users, these values are designed to be primary keys for representing the relevant zones and not necessarily user-facing elements. Projects like CLDR (the Unicode Common Locale Data Repository) can be used to get more user-friendly strings from these keys.
python.library.zoneinfo#zoneinfo.ZoneInfo.key
classmethod ZoneInfo.no_cache(key) An alternate constructor that bypasses the constructor’s cache. It is identical to the primary constructor, but returns a new object on each call. This is most likely to be useful for testing or demonstration purposes, but it can also be used to create a system with a different cache invalidation strategy. Objects created via this constructor will also bypass the cache of a deserializing process when unpickled. Caution Using this constructor may change the semantics of your datetimes in surprising ways; only use it if you know that you need to.
python.library.zoneinfo#zoneinfo.ZoneInfo.no_cache
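A short sketch of the cache-bypassing behavior (again assuming "UTC" data is available): no_cache neither reads from nor writes to the cache:

```python
from zoneinfo import ZoneInfo

cached = ZoneInfo("UTC")
fresh = ZoneInfo.no_cache("UTC")
assert fresh is not cached            # bypasses the cache on construction...
assert ZoneInfo("UTC") is cached      # ...and does not replace the cached entry
```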
exception zoneinfo.ZoneInfoNotFoundError Raised when construction of a ZoneInfo object fails because the specified key could not be found on the system. This is a subclass of KeyError.
python.library.zoneinfo#zoneinfo.ZoneInfoNotFoundError
_thread — Low-level threading API This module provides low-level primitives for working with multiple threads (also called light-weight processes or tasks) — multiple threads of control sharing their global data space. For synchronization, simple locks (also called mutexes or binary semaphores) are provided. The threading module provides an easier to use and higher-level threading API built on top of this module. Changed in version 3.7: This module used to be optional, it is now always available. This module defines the following constants and functions: exception _thread.error Raised on thread-specific errors. Changed in version 3.3: This is now a synonym of the built-in RuntimeError. _thread.LockType This is the type of lock objects. _thread.start_new_thread(function, args[, kwargs]) Start a new thread and return its identifier. The thread executes the function function with the argument list args (which must be a tuple). The optional kwargs argument specifies a dictionary of keyword arguments. When the function returns, the thread silently exits. When the function terminates with an unhandled exception, sys.unraisablehook() is called to handle the exception. The object attribute of the hook argument is function. By default, a stack trace is printed and then the thread exits (but other threads continue to run). When the function raises a SystemExit exception, it is silently ignored. Changed in version 3.8: sys.unraisablehook() is now used to handle unhandled exceptions. _thread.interrupt_main() Simulate the effect of a signal.SIGINT signal arriving in the main thread. A thread can use this function to interrupt the main thread. If signal.SIGINT isn’t handled by Python (it was set to signal.SIG_DFL or signal.SIG_IGN), this function does nothing. _thread.exit() Raise the SystemExit exception. When not caught, this will cause the thread to exit silently. _thread.allocate_lock() Return a new lock object. Methods of locks are described below. 
The lock is initially unlocked. _thread.get_ident() Return the ‘thread identifier’ of the current thread. This is a nonzero integer. Its value has no direct meaning; it is intended as a magic cookie to be used e.g. to index a dictionary of thread-specific data. Thread identifiers may be recycled when a thread exits and another thread is created. _thread.get_native_id() Return the native integral Thread ID of the current thread assigned by the kernel. This is a non-negative integer. Its value may be used to uniquely identify this particular thread system-wide (until the thread terminates, after which the value may be recycled by the OS). Availability: Windows, FreeBSD, Linux, macOS, OpenBSD, NetBSD, AIX. New in version 3.8. _thread.stack_size([size]) Return the thread stack size used when creating new threads. The optional size argument specifies the stack size to be used for subsequently created threads, and must be 0 (use platform or configured default) or a positive integer value of at least 32,768 (32 KiB). If size is not specified, 0 is used. If changing the thread stack size is unsupported, a RuntimeError is raised. If the specified stack size is invalid, a ValueError is raised and the stack size is unmodified. 32 KiB is currently the minimum supported stack size value to guarantee sufficient stack space for the interpreter itself. Note that some platforms may have particular restrictions on values for the stack size, such as requiring a minimum stack size > 32 KiB or requiring allocation in multiples of the system memory page size - platform documentation should be referred to for more information (4 KiB pages are common; using multiples of 4096 for the stack size is the suggested approach in the absence of more specific information). Availability: Windows, systems with POSIX threads. _thread.TIMEOUT_MAX The maximum value allowed for the timeout parameter of Lock.acquire(). Specifying a timeout greater than this value will raise an OverflowError. 
New in version 3.2. Lock objects have the following methods: lock.acquire(waitflag=1, timeout=-1) Without any optional argument, this method acquires the lock unconditionally, if necessary waiting until it is released by another thread (only one thread at a time can acquire a lock — that’s their reason for existence). If the integer waitflag argument is present, the action depends on its value: if it is zero, the lock is only acquired if it can be acquired immediately without waiting, while if it is nonzero, the lock is acquired unconditionally as above. If the floating-point timeout argument is present and positive, it specifies the maximum wait time in seconds before returning. A negative timeout argument specifies an unbounded wait. You cannot specify a timeout if waitflag is zero. The return value is True if the lock is acquired successfully, False if not. Changed in version 3.2: The timeout parameter is new. Changed in version 3.2: Lock acquires can now be interrupted by signals on POSIX. lock.release() Releases the lock. The lock must have been acquired earlier, but not necessarily by the same thread. lock.locked() Return the status of the lock: True if it has been acquired by some thread, False if not. In addition to these methods, lock objects can also be used via the with statement, e.g.: import _thread a_lock = _thread.allocate_lock() with a_lock: print("a_lock is locked while this executes") Caveats: Threads interact strangely with interrupts: the KeyboardInterrupt exception will be received by an arbitrary thread. (When the signal module is available, interrupts always go to the main thread.) Calling sys.exit() or raising the SystemExit exception is equivalent to calling _thread.exit(). It is not possible to interrupt the acquire() method on a lock — the KeyboardInterrupt exception will happen after the lock has been acquired. When the main thread exits, it is system defined whether the other threads survive. 
On most systems, they are killed without executing try … finally clauses or executing object destructors. When the main thread exits, it does not do any of its usual cleanup (except that try … finally clauses are honored), and the standard I/O files are not flushed.
python.library._thread
_thread.allocate_lock() Return a new lock object. Methods of locks are described below. The lock is initially unlocked.
python.library._thread#_thread.allocate_lock
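A minimal sketch of the lock life cycle: freshly allocated locks start out unlocked:

```python
import _thread

lock = _thread.allocate_lock()
assert not lock.locked()      # initially unlocked
assert lock.acquire()         # unconditional acquire succeeds
assert lock.locked()
lock.release()
assert not lock.locked()
```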
exception _thread.error Raised on thread-specific errors. Changed in version 3.3: This is now a synonym of the built-in RuntimeError.
python.library._thread#_thread.error
_thread.exit() Raise the SystemExit exception. When not caught, this will cause the thread to exit silently.
python.library._thread#_thread.exit
_thread.get_ident() Return the ‘thread identifier’ of the current thread. This is a nonzero integer. Its value has no direct meaning; it is intended as a magic cookie to be used e.g. to index a dictionary of thread-specific data. Thread identifiers may be recycled when a thread exits and another thread is created.
python.library._thread#_thread.get_ident
_thread.get_native_id() Return the native integral Thread ID of the current thread assigned by the kernel. This is a non-negative integer. Its value may be used to uniquely identify this particular thread system-wide (until the thread terminates, after which the value may be recycled by the OS). Availability: Windows, FreeBSD, Linux, macOS, OpenBSD, NetBSD, AIX. New in version 3.8.
python.library._thread#_thread.get_native_id
_thread.interrupt_main() Simulate the effect of a signal.SIGINT signal arriving in the main thread. A thread can use this function to interrupt the main thread. If signal.SIGINT isn’t handled by Python (it was set to signal.SIG_DFL or signal.SIG_IGN), this function does nothing.
python.library._thread#_thread.interrupt_main
lock.acquire(waitflag=1, timeout=-1) Without any optional argument, this method acquires the lock unconditionally, if necessary waiting until it is released by another thread (only one thread at a time can acquire a lock — that’s their reason for existence). If the integer waitflag argument is present, the action depends on its value: if it is zero, the lock is only acquired if it can be acquired immediately without waiting, while if it is nonzero, the lock is acquired unconditionally as above. If the floating-point timeout argument is present and positive, it specifies the maximum wait time in seconds before returning. A negative timeout argument specifies an unbounded wait. You cannot specify a timeout if waitflag is zero. The return value is True if the lock is acquired successfully, False if not. Changed in version 3.2: The timeout parameter is new. Changed in version 3.2: Lock acquires can now be interrupted by signals on POSIX.
python.library._thread#_thread.lock.acquire
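A sketch of the waitflag and timeout behavior described above; note that _thread locks are not reentrant, so even the owning thread's non-blocking re-acquire fails rather than deadlocking:

```python
import _thread

lock = _thread.allocate_lock()
lock.acquire()                          # hold the lock

# Non-blocking attempt (waitflag=0) fails immediately on a held lock.
assert lock.acquire(0) is False
# A timed attempt gives up after ~50 ms and reports failure.
assert lock.acquire(1, 0.05) is False

lock.release()
assert lock.acquire(0) is True          # now it succeeds without waiting
lock.release()
```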
lock.locked() Return the status of the lock: True if it has been acquired by some thread, False if not.
python.library._thread#_thread.lock.locked
lock.release() Releases the lock. The lock must have been acquired earlier, but not necessarily by the same thread.
python.library._thread#_thread.lock.release
_thread.LockType This is the type of lock objects.
python.library._thread#_thread.LockType
_thread.stack_size([size]) Return the thread stack size used when creating new threads. The optional size argument specifies the stack size to be used for subsequently created threads, and must be 0 (use platform or configured default) or a positive integer value of at least 32,768 (32 KiB). If size is not specified, 0 is used. If changing the thread stack size is unsupported, a RuntimeError is raised. If the specified stack size is invalid, a ValueError is raised and the stack size is unmodified. 32 KiB is currently the minimum supported stack size value to guarantee sufficient stack space for the interpreter itself. Note that some platforms may have particular restrictions on values for the stack size, such as requiring a minimum stack size > 32 KiB or requiring allocation in multiples of the system memory page size - platform documentation should be referred to for more information (4 KiB pages are common; using multiples of 4096 for the stack size is the suggested approach in the absence of more specific information). Availability: Windows, systems with POSIX threads.
python.library._thread#_thread.stack_size
_thread.start_new_thread(function, args[, kwargs]) Start a new thread and return its identifier. The thread executes the function function with the argument list args (which must be a tuple). The optional kwargs argument specifies a dictionary of keyword arguments. When the function returns, the thread silently exits. When the function terminates with an unhandled exception, sys.unraisablehook() is called to handle the exception. The object attribute of the hook argument is function. By default, a stack trace is printed and then the thread exits (but other threads continue to run). When the function raises a SystemExit exception, it is silently ignored. Changed in version 3.8: sys.unraisablehook() is now used to handle unhandled exceptions.
python.library._thread#_thread.start_new_thread
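Because start_new_thread() returns no join() handle, a lock is the classic way to learn when the worker has finished; a minimal sketch:

```python
import _thread

# Use a lock to signal completion, since start_new_thread() has no join().
result = []
done = _thread.allocate_lock()
done.acquire()  # held until the worker releases it

def worker(x, y, scale=1):
    result.append((x + y) * scale)
    done.release()  # signal completion

# args must be a tuple; kwargs is an optional dict.
_thread.start_new_thread(worker, (2, 3), {'scale': 10})

# Block until the worker signals, with a timeout as a safety net.
done.acquire(1, timeout=5.0)
assert result == [(2 + 3) * 10]
```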
_thread.TIMEOUT_MAX The maximum value allowed for the timeout parameter of Lock.acquire(). Specifying a timeout greater than this value will raise an OverflowError. New in version 3.2.
python.library._thread#_thread.TIMEOUT_MAX
__debug__ This constant is true if Python was not started with an -O option. See also the assert statement.
python.library.constants#__debug__
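A minimal illustration of how __debug__ and assert interact under the default (non -O) interpreter:

```python
# Under the default interpreter __debug__ is True and assert statements
# run; both are disabled when Python is started with -O.
checks_run = []

def validate(n):
    if __debug__:  # this block is compiled away under python -O
        checks_run.append(n)
    assert n >= 0, "n must be non-negative"
    return n

assert validate(3) == 3
assert checks_run == [3]
```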
__future__ — Future statement definitions Source code: Lib/__future__.py __future__ is a real module, and serves three purposes: To avoid confusing existing tools that analyze import statements and expect to find the modules they’re importing. To ensure that future statements run under releases prior to 2.1 at least yield runtime exceptions (the import of __future__ will fail, because there was no module of that name prior to 2.1). To document when incompatible changes were introduced, and when they will be — or were — made mandatory. This is a form of executable documentation, and can be inspected programmatically via importing __future__ and examining its contents. Each statement in __future__.py is of the form: FeatureName = _Feature(OptionalRelease, MandatoryRelease, CompilerFlag) where, normally, OptionalRelease is less than MandatoryRelease, and both are 5-tuples of the same form as sys.version_info: (PY_MAJOR_VERSION, # the 2 in 2.1.0a3; an int PY_MINOR_VERSION, # the 1; an int PY_MICRO_VERSION, # the 0; an int PY_RELEASE_LEVEL, # "alpha", "beta", "candidate" or "final"; string PY_RELEASE_SERIAL # the 3; an int ) OptionalRelease records the first release in which the feature was accepted. In the case of a MandatoryRelease that has not yet occurred, MandatoryRelease predicts the release in which the feature will become part of the language. Else MandatoryRelease records when the feature became part of the language; in releases at or after that, modules no longer need a future statement to use the feature in question, but may continue to use such imports. MandatoryRelease may also be None, meaning that a planned feature got dropped. Instances of class _Feature have two corresponding methods, getOptionalRelease() and getMandatoryRelease(). CompilerFlag is the (bitfield) flag that should be passed in the fourth argument to the built-in function compile() to enable the feature in dynamically compiled code. 
This flag is stored in the compiler_flag attribute on _Feature instances. No feature description will ever be deleted from __future__. Since its introduction in Python 2.1 the following features have found their way into the language using this mechanism (feature, optional in, mandatory in, effect):
- nested_scopes: 2.1.0b1, mandatory in 2.2 (PEP 227: Statically Nested Scopes)
- generators: 2.2.0a1, mandatory in 2.3 (PEP 255: Simple Generators)
- division: 2.2.0a2, mandatory in 3.0 (PEP 238: Changing the Division Operator)
- absolute_import: 2.5.0a1, mandatory in 3.0 (PEP 328: Imports: Multi-Line and Absolute/Relative)
- with_statement: 2.5.0a1, mandatory in 2.6 (PEP 343: The “with” Statement)
- print_function: 2.6.0a2, mandatory in 3.0 (PEP 3105: Make print a function)
- unicode_literals: 2.6.0a2, mandatory in 3.0 (PEP 3112: Bytes literals in Python 3000)
- generator_stop: 3.5.0b1, mandatory in 3.7 (PEP 479: StopIteration handling inside generators)
- annotations: 3.7.0b1, mandatory in 3.10 (PEP 563: Postponed evaluation of annotations)
See also Future statements How the compiler treats future imports.
python.library.__future__
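The _Feature records described above can be inspected programmatically; a short sketch using the long-mandatory division feature:

```python
import __future__

# Inspecting a _Feature record, as described above; division has been
# mandatory since Python 3.0, so its release tuples are stable to check.
feat = __future__.division
assert feat.getOptionalRelease() == (2, 2, 0, 'alpha', 2)
assert feat.getMandatoryRelease() == (3, 0, 0, 'alpha', 0)

# compiler_flag can be passed as the fourth argument to compile() to
# enable the feature in dynamically compiled code (a no-op here, since
# true division is long mandatory).
code = compile('x = 7 / 2', '<string>', 'exec', feat.compiler_flag)
ns = {}
exec(code, ns)
assert ns['x'] == 3.5
```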
__import__(name, globals=None, locals=None, fromlist=(), level=0) Note This is an advanced function that is not needed in everyday Python programming, unlike importlib.import_module(). This function is invoked by the import statement. It can be replaced (by importing the builtins module and assigning to builtins.__import__) in order to change semantics of the import statement, but doing so is strongly discouraged as it is usually simpler to use import hooks (see PEP 302) to attain the same goals and does not cause issues with code which assumes the default import implementation is in use. Direct use of __import__() is also discouraged in favor of importlib.import_module(). The function imports the module name, potentially using the given globals and locals to determine how to interpret the name in a package context. The fromlist gives the names of objects or submodules that should be imported from the module given by name. The standard implementation does not use its locals argument at all, and uses its globals only to determine the package context of the import statement. level specifies whether to use absolute or relative imports. 0 (the default) means only perform absolute imports. Positive values for level indicate the number of parent directories to search relative to the directory of the module calling __import__() (see PEP 328 for the details). When the name variable is of the form package.module, normally, the top-level package (the name up till the first dot) is returned, not the module named by name. However, when a non-empty fromlist argument is given, the module named by name is returned. 
For example, the statement import spam results in bytecode resembling the following code: spam = __import__('spam', globals(), locals(), [], 0) The statement import spam.ham results in this call: spam = __import__('spam.ham', globals(), locals(), [], 0) Note how __import__() returns the toplevel module here because this is the object that is bound to a name by the import statement. On the other hand, the statement from spam.ham import eggs, sausage as saus results in _temp = __import__('spam.ham', globals(), locals(), ['eggs', 'sausage'], 0) eggs = _temp.eggs saus = _temp.sausage Here, the spam.ham module is returned from __import__(). From this object, the names to import are retrieved and assigned to their respective names. If you simply want to import a module (potentially within a package) by name, use importlib.import_module(). Changed in version 3.3: Negative values for level are no longer supported (which also changes the default value to 0). Changed in version 3.9: When the command line options -E or -I are being used, the environment variable PYTHONCASEOK is now ignored.
python.library.functions#__import__
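The return-value rules above can be checked directly; os.path is used here purely as a convenient stdlib package:

```python
import importlib

# Without a fromlist, the top-level package is returned...
os_mod = __import__('os.path')
assert os_mod.__name__ == 'os'

# ...while a non-empty fromlist returns the module named by name itself.
path_mod = __import__('os.path', fromlist=['join'])
assert path_mod.__name__ in ('posixpath', 'ntpath')  # os.path's real name

# importlib.import_module() is the recommended high-level equivalent.
assert importlib.import_module('os.path') is path_mod
```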
__main__ — Top-level script environment '__main__' is the name of the scope in which top-level code executes. A module’s __name__ is set equal to '__main__' when read from standard input, a script, or from an interactive prompt. A module can discover whether or not it is running in the main scope by checking its own __name__, which allows a common idiom for conditionally executing code in a module when it is run as a script or with python -m but not when it is imported: if __name__ == "__main__": # execute only if run as a script main() For a package, the same effect can be achieved by including a __main__.py module, the contents of which will be executed when the module is run with -m.
python.library.__main__
matplotlib._api Helper functions for managing the Matplotlib API. This documentation is only relevant for Matplotlib developers, not for users. Warning This module and its submodules are for internal use only. Do not use them in your own code. We may change the API at any time with no warning. matplotlib._api.caching_module_getattr(cls)[source] Helper decorator for implementing module-level __getattr__ as a class. This decorator must be used at the module toplevel as follows: @caching_module_getattr class __getattr__: # The class *must* be named ``__getattr__``. @property # Only properties are taken into account. def name(self): ... The __getattr__ class will be replaced by a __getattr__ function such that trying to access name on the module will resolve the corresponding property (which may be decorated e.g. with _api.deprecated for deprecating module globals). The properties are all implicitly cached. Moreover, a suitable AttributeError is generated and raised if no property with the given name exists. matplotlib._api.check_getitem(_mapping, **kwargs)[source] kwargs must consist of a single key, value pair. If value is in _mapping, return _mapping[value]; else, raise an appropriate ValueError. Examples >>> _api.check_getitem({"foo": "bar"}, arg=arg) matplotlib._api.check_in_list(_values, *, _print_supported_values=True, **kwargs)[source] For each key, value pair in kwargs, check that value is in _values. Parameters _values : iterable Sequence of values to check on. _print_supported_values : bool, default: True Whether to print _values when raising ValueError. **kwargs : dict key, value pairs as keyword arguments to find in _values. Raises ValueError If any value in kwargs is not found in _values. Examples >>> _api.check_in_list(["foo", "bar"], arg=arg, other_arg=other_arg) matplotlib._api.check_isinstance(_types, **kwargs)[source] For each key, value pair in kwargs, check that value is an instance of one of _types; if not, raise an appropriate TypeError.
As a special case, a None entry in _types is treated as NoneType. Examples >>> _api.check_isinstance((SomeClass, None), arg=arg) matplotlib._api.check_shape(_shape, **kwargs)[source] For each key, value pair in kwargs, check that value has the shape _shape; if not, raise an appropriate ValueError. None in the shape is treated as a "free" size that can have any length. e.g. (None, 2) -> (N, 2) The values checked must be numpy arrays. Examples To check for (N, 2) shaped arrays >>> _api.check_shape((None, 2), arg=arg, other_arg=other_arg) class matplotlib._api.classproperty(fget, fset=None, fdel=None, doc=None)[source] Bases: object Like property, but also triggers on access via the class, and it is the class that's passed as argument. Examples class C: @classproperty def foo(cls): return cls.__name__ assert C.foo == "C" property fget matplotlib._api.select_matching_signature(funcs, *args, **kwargs)[source] Select and call the function that accepts *args, **kwargs. funcs is a list of functions which should not raise any exception (other than TypeError if the arguments passed do not match their signature). select_matching_signature tries to call each of the functions in funcs with *args, **kwargs (in the order in which they are given). Calls that fail with a TypeError are silently skipped. As soon as a call succeeds, select_matching_signature returns its return value. If no function accepts *args, **kwargs, then the TypeError raised by the last failing call is re-raised. Callers should normally make sure that any *args, **kwargs can only bind a single func (to avoid any ambiguity), although this is not checked by select_matching_signature. Notes select_matching_signature is intended to help implementing signature-overloaded functions. In general, such functions should be avoided, except for back-compatibility concerns.
A typical use pattern is def my_func(*args, **kwargs): params = select_matching_signature( [lambda old1, old2: locals(), lambda new: locals()], *args, **kwargs) if "old1" in params: warn_deprecated(...) old1, old2 = params.values() # note that locals() is ordered. else: new, = params.values() # do things with params which allows my_func to be called either with two parameters (old1 and old2) or a single one (new). Note that the new signature is given last, so that callers get a TypeError corresponding to the new signature if the arguments they passed in do not match any signature. matplotlib._api.warn_external(message, category=None)[source] warnings.warn wrapper that sets stacklevel to "outside Matplotlib". The original emitter of the warning can be obtained by patching this function back to warnings.warn, i.e. _api.warn_external = warnings.warn (or functools.partial(warnings.warn, stacklevel=2), etc.). Helper functions for deprecating parts of the Matplotlib API. This documentation is only relevant for Matplotlib developers, not for users. Warning This module is for internal use only. Do not use it in your own code. We may change the API at any time with no warning. exception matplotlib._api.deprecation.MatplotlibDeprecationWarning[source] Bases: DeprecationWarning A class for issuing deprecation warnings for Matplotlib users. matplotlib._api.deprecation.delete_parameter(since, name, func=None, **kwargs)[source] Decorator indicating that parameter name of func is being deprecated. The actual implementation of func should keep the name parameter in its signature, or accept a **kwargs argument (through which name would be passed). Parameters that come after the deprecated parameter effectively become keyword-only (as they cannot be passed positionally without triggering the DeprecationWarning on the deprecated parameter), and should be marked as such after the deprecation period has passed and the deprecated parameter is removed.
Parameters other than since, name, and func are keyword-only and forwarded to warn_deprecated. Examples @_api.delete_parameter("3.1", "unused") def func(used_arg, other_arg, unused, more_args): ... matplotlib._api.deprecation.deprecate_method_override(method, obj, *, allow_empty=False, **kwargs)[source] Return obj.method with a deprecation if it was overridden, else None. Parameters method An unbound method, i.e. an expression of the form Class.method_name. Remember that within the body of a method, one can always use __class__ to refer to the class that is currently being defined. obj Either an object of the class where method is defined, or a subclass of that class. allow_empty : bool, default: False Whether to allow overrides by "empty" methods without emitting a warning. **kwargs Additional parameters passed to warn_deprecated to generate the deprecation warning; must at least include the "since" key. class matplotlib._api.deprecation.deprecate_privatize_attribute(*args, **kwargs)[source] Bases: object Helper to deprecate public access to an attribute (or method). This helper should only be used at class scope, as follows: class Foo: attr = _deprecate_privatize_attribute(*args, **kwargs) where all parameters are forwarded to deprecated. This form makes attr a property which forwards read and write access to self._attr (same name but with a leading underscore), with a deprecation warning. Note that the attribute name is derived from the name this helper is assigned to. This helper also works for deprecating methods. matplotlib._api.deprecation.deprecated(since, *, message='', name='', alternative='', pending=False, obj_type=None, addendum='', removal='')[source] Decorator to mark a function, a class, or a property as deprecated. When deprecating a classmethod, a staticmethod, or a property, the @deprecated decorator should go under @classmethod and @staticmethod (i.e., deprecated should directly decorate the underlying callable), but over @property.
When deprecating a class C intended to be used as a base class in a multiple inheritance hierarchy, C must define an __init__ method (if C instead inherited its __init__ from its own base class, then @deprecated would mess up __init__ inheritance when installing its own (deprecation-emitting) C.__init__). Parameters are the same as for warn_deprecated, except that obj_type defaults to 'class' if decorating a class, 'attribute' if decorating a property, and 'function' otherwise. Examples @deprecated('1.4.0') def the_function_to_deprecate(): pass matplotlib._api.deprecation.make_keyword_only(since, name, func=None)[source] Decorator indicating that passing parameter name (or any of the following ones) positionally to func is being deprecated. When used on a method that has a pyplot wrapper, this should be the outermost decorator, so that boilerplate.py can access the original signature. matplotlib._api.deprecation.mplDeprecation[source] alias of matplotlib._api.deprecation.MatplotlibDeprecationWarning matplotlib._api.deprecation.rename_parameter(since, old, new, func=None)[source] Decorator indicating that parameter old of func is renamed to new. The actual implementation of func should use new, not old. If old is passed to func, a DeprecationWarning is emitted, and its value is used, even if new is also passed by keyword (this is to simplify pyplot wrapper functions, which always pass new explicitly to the Axes method). If new is also passed but positionally, a TypeError will be raised by the underlying function during argument binding. Examples @_api.rename_parameter("3.1", "bad_name", "good_name") def func(good_name): ... matplotlib._api.deprecation.suppress_matplotlib_deprecation_warning() matplotlib._api.deprecation.warn_deprecated(since, *, message='', name='', alternative='', pending=False, obj_type='', addendum='', removal='')[source] Display a standardized deprecation. Parameters since : str The release at which this API became deprecated.
message : str, optional Override the default deprecation message. The %(since)s, %(name)s, %(alternative)s, %(obj_type)s, %(addendum)s, and %(removal)s format specifiers will be replaced by the values of the respective arguments passed to this function. name : str, optional The name of the deprecated object. alternative : str, optional An alternative API that the user may use in place of the deprecated API. The deprecation warning will tell the user about this alternative if provided. pending : bool, optional If True, uses a PendingDeprecationWarning instead of a DeprecationWarning. Cannot be used together with removal. obj_type : str, optional The object type being deprecated. addendum : str, optional Additional text appended directly to the final message. removal : str, optional The expected removal version. With the default (an empty string), a removal version is automatically computed from since. Set to other Falsy values to not schedule a removal date. Cannot be used together with pending. Examples # To warn of the deprecation of "matplotlib.name_of_module" warn_deprecated('1.4.0', name='matplotlib.name_of_module', obj_type='module')
matplotlib._api_api
matplotlib._api.caching_module_getattr(cls)[source] Helper decorator for implementing module-level __getattr__ as a class. This decorator must be used at the module toplevel as follows: @caching_module_getattr class __getattr__: # The class *must* be named ``__getattr__``. @property # Only properties are taken into account. def name(self): ... The __getattr__ class will be replaced by a __getattr__ function such that trying to access name on the module will resolve the corresponding property (which may be decorated e.g. with _api.deprecated for deprecating module globals). The properties are all implicitly cached. Moreover, a suitable AttributeError is generated and raised if no property with the given name exists.
matplotlib._api_api#matplotlib._api.caching_module_getattr
matplotlib._api.check_getitem(_mapping, **kwargs)[source] kwargs must consist of a single key, value pair. If value is in _mapping, return _mapping[value]; else, raise an appropriate ValueError. Examples >>> _api.check_getitem({"foo": "bar"}, arg=arg)
matplotlib._api_api#matplotlib._api.check_getitem
matplotlib._api.check_in_list(_values, *, _print_supported_values=True, **kwargs)[source] For each key, value pair in kwargs, check that value is in _values. Parameters _values : iterable Sequence of values to check on. _print_supported_values : bool, default: True Whether to print _values when raising ValueError. **kwargs : dict key, value pairs as keyword arguments to find in _values. Raises ValueError If any value in kwargs is not found in _values. Examples >>> _api.check_in_list(["foo", "bar"], arg=arg, other_arg=other_arg)
matplotlib._api_api#matplotlib._api.check_in_list
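Since matplotlib._api is private, the validator's behavior can be illustrated with a hypothetical pure-Python equivalent (a sketch that mirrors the documented behavior, not the actual implementation):

```python
# Hypothetical pure-Python equivalent of a check_in_list-style validator.
def check_in_list(_values, *, _print_supported_values=True, **kwargs):
    for key, value in kwargs.items():
        if value not in _values:
            msg = f"{value!r} is not a valid value for {key}"
            if _print_supported_values:
                supported = ', '.join(map(repr, _values))
                msg += f"; supported values are {supported}"
            raise ValueError(msg)

check_in_list(["foo", "bar"], arg="foo")  # valid: passes silently
try:
    check_in_list(["foo", "bar"], arg="baz")
except ValueError as exc:
    err = str(exc)
assert "'baz'" in err and "supported values" in err
```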
matplotlib._api.check_isinstance(_types, **kwargs)[source] For each key, value pair in kwargs, check that value is an instance of one of _types; if not, raise an appropriate TypeError. As a special case, a None entry in _types is treated as NoneType. Examples >>> _api.check_isinstance((SomeClass, None), arg=arg)
matplotlib._api_api#matplotlib._api.check_isinstance
matplotlib._api.check_shape(_shape, **kwargs)[source] For each key, value pair in kwargs, check that value has the shape _shape; if not, raise an appropriate ValueError. None in the shape is treated as a "free" size that can have any length. e.g. (None, 2) -> (N, 2) The values checked must be numpy arrays. Examples To check for (N, 2) shaped arrays >>> _api.check_shape((None, 2), arg=arg, other_arg=other_arg)
matplotlib._api_api#matplotlib._api.check_shape
class matplotlib._api.classproperty(fget, fset=None, fdel=None, doc=None)[source] Bases: object Like property, but also triggers on access via the class, and it is the class that's passed as argument. Examples class C: @classproperty def foo(cls): return cls.__name__ assert C.foo == "C" property fget
matplotlib._api_api#matplotlib._api.classproperty
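The descriptor protocol makes this easy to sketch in pure Python (a simplified stand-in, not matplotlib's actual class, which also accepts fset/fdel/doc):

```python
# A descriptor whose getter receives the class, even when accessed on
# the class itself (unlike a plain property).
class classproperty:
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, instance, owner):
        return self.fget(owner)

class C:
    @classproperty
    def foo(cls):
        return cls.__name__

assert C.foo == "C"     # access via the class
assert C().foo == "C"   # access via an instance
```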
matplotlib._api.deprecation.delete_parameter(since, name, func=None, **kwargs)[source] Decorator indicating that parameter name of func is being deprecated. The actual implementation of func should keep the name parameter in its signature, or accept a **kwargs argument (through which name would be passed). Parameters that come after the deprecated parameter effectively become keyword-only (as they cannot be passed positionally without triggering the DeprecationWarning on the deprecated parameter), and should be marked as such after the deprecation period has passed and the deprecated parameter is removed. Parameters other than since, name, and func are keyword-only and forwarded to warn_deprecated. Examples @_api.delete_parameter("3.1", "unused") def func(used_arg, other_arg, unused, more_args): ...
matplotlib._api_api#matplotlib._api.deprecation.delete_parameter
matplotlib._api.deprecation.deprecate_method_override(method, obj, *, allow_empty=False, **kwargs)[source] Return obj.method with a deprecation if it was overridden, else None. Parameters method An unbound method, i.e. an expression of the form Class.method_name. Remember that within the body of a method, one can always use __class__ to refer to the class that is currently being defined. obj Either an object of the class where method is defined, or a subclass of that class. allow_empty : bool, default: False Whether to allow overrides by "empty" methods without emitting a warning. **kwargs Additional parameters passed to warn_deprecated to generate the deprecation warning; must at least include the "since" key.
matplotlib._api_api#matplotlib._api.deprecation.deprecate_method_override
class matplotlib._api.deprecation.deprecate_privatize_attribute(*args, **kwargs)[source] Bases: object Helper to deprecate public access to an attribute (or method). This helper should only be used at class scope, as follows: class Foo: attr = _deprecate_privatize_attribute(*args, **kwargs) where all parameters are forwarded to deprecated. This form makes attr a property which forwards read and write access to self._attr (same name but with a leading underscore), with a deprecation warning. Note that the attribute name is derived from the name this helper is assigned to. This helper also works for deprecating methods.
matplotlib._api_api#matplotlib._api.deprecation.deprecate_privatize_attribute
matplotlib._api.deprecation.deprecated(since, *, message='', name='', alternative='', pending=False, obj_type=None, addendum='', removal='')[source] Decorator to mark a function, a class, or a property as deprecated. When deprecating a classmethod, a staticmethod, or a property, the @deprecated decorator should go under @classmethod and @staticmethod (i.e., deprecated should directly decorate the underlying callable), but over @property. When deprecating a class C intended to be used as a base class in a multiple inheritance hierarchy, C must define an __init__ method (if C instead inherited its __init__ from its own base class, then @deprecated would mess up __init__ inheritance when installing its own (deprecation-emitting) C.__init__). Parameters are the same as for warn_deprecated, except that obj_type defaults to 'class' if decorating a class, 'attribute' if decorating a property, and 'function' otherwise. Examples @deprecated('1.4.0') def the_function_to_deprecate(): pass
matplotlib._api_api#matplotlib._api.deprecation.deprecated
matplotlib._api.deprecation.make_keyword_only(since, name, func=None)[source] Decorator indicating that passing parameter name (or any of the following ones) positionally to func is being deprecated. When used on a method that has a pyplot wrapper, this should be the outermost decorator, so that boilerplate.py can access the original signature.
matplotlib._api_api#matplotlib._api.deprecation.make_keyword_only
exception matplotlib._api.deprecation.MatplotlibDeprecationWarning[source] Bases: DeprecationWarning A class for issuing deprecation warnings for Matplotlib users.
matplotlib._api_api#matplotlib._api.deprecation.MatplotlibDeprecationWarning
matplotlib._api.deprecation.mplDeprecation[source] alias of matplotlib._api.deprecation.MatplotlibDeprecationWarning
matplotlib._api_api#matplotlib._api.deprecation.mplDeprecation
matplotlib._api.deprecation.rename_parameter(since, old, new, func=None)[source] Decorator indicating that parameter old of func is renamed to new. The actual implementation of func should use new, not old. If old is passed to func, a DeprecationWarning is emitted, and its value is used, even if new is also passed by keyword (this is to simplify pyplot wrapper functions, which always pass new explicitly to the Axes method). If new is also passed but positionally, a TypeError will be raised by the underlying function during argument binding. Examples @_api.rename_parameter("3.1", "bad_name", "good_name") def func(good_name): ...
matplotlib._api_api#matplotlib._api.deprecation.rename_parameter
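The rename-and-warn mechanics can be sketched as a standalone decorator (a simplified illustration, not matplotlib's implementation, which has more edge handling around positional binding):

```python
import functools
import warnings

# Hypothetical sketch of a rename_parameter-style decorator: accept the
# old keyword, emit a DeprecationWarning, and forward the value under
# the new name.
def rename_parameter(since, old, new):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if old in kwargs:
                warnings.warn(
                    f"The {old!r} parameter was renamed to {new!r} in "
                    f"version {since}", DeprecationWarning, stacklevel=2)
                kwargs[new] = kwargs.pop(old)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@rename_parameter("3.1", "bad_name", "good_name")
def func(good_name):
    return good_name

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert func(bad_name=42) == 42  # old name still works, with a warning
assert issubclass(caught[0].category, DeprecationWarning)
```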
matplotlib._api.deprecation.suppress_matplotlib_deprecation_warning()
matplotlib._api_api#matplotlib._api.deprecation.suppress_matplotlib_deprecation_warning
matplotlib._api.deprecation.warn_deprecated(since, *, message='', name='', alternative='', pending=False, obj_type='', addendum='', removal='')[source] Display a standardized deprecation. Parameters since : str The release at which this API became deprecated. message : str, optional Override the default deprecation message. The %(since)s, %(name)s, %(alternative)s, %(obj_type)s, %(addendum)s, and %(removal)s format specifiers will be replaced by the values of the respective arguments passed to this function. name : str, optional The name of the deprecated object. alternative : str, optional An alternative API that the user may use in place of the deprecated API. The deprecation warning will tell the user about this alternative if provided. pending : bool, optional If True, uses a PendingDeprecationWarning instead of a DeprecationWarning. Cannot be used together with removal. obj_type : str, optional The object type being deprecated. addendum : str, optional Additional text appended directly to the final message. removal : str, optional The expected removal version. With the default (an empty string), a removal version is automatically computed from since. Set to other Falsy values to not schedule a removal date. Cannot be used together with pending. Examples # To warn of the deprecation of "matplotlib.name_of_module" warn_deprecated('1.4.0', name='matplotlib.name_of_module', obj_type='module')
matplotlib._api_api#matplotlib._api.deprecation.warn_deprecated
matplotlib._api.select_matching_signature(funcs, *args, **kwargs)[source] Select and call the function that accepts *args, **kwargs. funcs is a list of functions which should not raise any exception (other than TypeError if the arguments passed do not match their signature). select_matching_signature tries to call each of the functions in funcs with *args, **kwargs (in the order in which they are given). Calls that fail with a TypeError are silently skipped. As soon as a call succeeds, select_matching_signature returns its return value. If no function accepts *args, **kwargs, then the TypeError raised by the last failing call is re-raised. Callers should normally make sure that any *args, **kwargs can only bind a single func (to avoid any ambiguity), although this is not checked by select_matching_signature. Notes select_matching_signature is intended to help implementing signature-overloaded functions. In general, such functions should be avoided, except for back-compatibility concerns. A typical use pattern is def my_func(*args, **kwargs): params = select_matching_signature( [lambda old1, old2: locals(), lambda new: locals()], *args, **kwargs) if "old1" in params: warn_deprecated(...) old1, old2 = params.values() # note that locals() is ordered. else: new, = params.values() # do things with params which allows my_func to be called either with two parameters (old1 and old2) or a single one (new). Note that the new signature is given last, so that callers get a TypeError corresponding to the new signature if the arguments they passed in do not match any signature.
matplotlib._api_api#matplotlib._api.select_matching_signature
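The dispatch logic described above is small enough to sketch standalone (an illustration, not matplotlib's exact code):

```python
# Try each candidate in order, skip TypeErrors, and re-raise the last
# TypeError if every candidate fails.
def select_matching_signature(funcs, *args, **kwargs):
    for i, func in enumerate(funcs):
        try:
            return func(*args, **kwargs)
        except TypeError:
            if i == len(funcs) - 1:
                raise

candidates = [lambda old1, old2: locals(), lambda new: locals()]

# Two positional arguments bind the old signature...
assert select_matching_signature(candidates, 1, 2) == {'old1': 1, 'old2': 2}
# ...a single keyword binds the new one.
assert select_matching_signature(candidates, new=3) == {'new': 3}
```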
matplotlib._api.warn_external(message, category=None)[source] warnings.warn wrapper that sets stacklevel to "outside Matplotlib". The original emitter of the warning can be obtained by patching this function back to warnings.warn, i.e. _api.warn_external = warnings.warn (or functools.partial(warnings.warn, stacklevel=2), etc.).
matplotlib._api_api#matplotlib._api.warn_external
matplotlib._enums Enums representing sets of strings that Matplotlib uses as input parameters. Matplotlib often uses simple data types like strings or tuples to define a concept; e.g. the line capstyle can be specified as one of 'butt', 'round', or 'projecting'. The classes in this module are used internally and serve to document these concepts formally. As an end-user you will not use these classes directly, but only the values they define. class matplotlib._enums.JoinStyle(value)[source] Define how the connection between two line segments is drawn. For a visual impression of each JoinStyle, view these docs online, or run JoinStyle.demo. Lines in Matplotlib are typically defined by a 1D Path and a finite linewidth, where the underlying 1D Path represents the center of the stroked line. By default, GraphicsContextBase defines the boundaries of a stroked line to simply be every point within some radius, linewidth/2, away from any point of the center line. However, this results in corners appearing "rounded", which may not be the desired behavior if you are drawing, for example, a polygon or pointed star. Supported values: 'miter' the "arrow-tip" style. Each boundary of the filled-in area will extend in a straight line parallel to the tangent vector of the centerline at the point it meets the corner, until they meet in a sharp point. 'round' strokes every point within a radius of linewidth/2 of the center lines. 'bevel' the "squared-off" style. It can be thought of as a rounded corner where the "circular" part of the corner has been cut off. Note Very long miter tips are cut off (to form a bevel) after a backend-dependent limit called the "miter limit", which specifies the maximum allowed ratio of miter length to line width. For example, the PDF backend uses the default value of 10 specified by the PDF standard, while the SVG backend does not even specify the miter limit, resulting in a default value of 4 per the SVG specification.
Matplotlib does not currently allow the user to adjust this parameter. A more detailed description of the effect of a miter limit can be found in the Mozilla Developer Docs (Source code, png, pdf) static demo()[source] Demonstrate how each JoinStyle looks for various join angles. class matplotlib._enums.CapStyle(value)[source] Define how the two endpoints (caps) of an unclosed line are drawn. How to draw the start and end points of lines that represent a closed curve (i.e. that end in a CLOSEPOLY) is controlled by the line's JoinStyle. For all other lines, how the start and end points are drawn is controlled by the CapStyle. For a visual impression of each CapStyle, view these docs online or run CapStyle.demo. Supported values: 'butt' the line is squared off at its endpoint. 'projecting' the line is squared off as in butt, but the filled in area extends beyond the endpoint a distance of linewidth/2. 'round' like butt, but a semicircular cap is added to the end of the line, of radius linewidth/2. (Source code, png, pdf) static demo()[source] Demonstrate how each CapStyle looks for a thick line segment.
matplotlib._enums_api
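The documented string values correspond one-to-one with the member names of these enum classes. A minimal sketch, assuming matplotlib >= 3.4 (where the private matplotlib._enums module exists):

```python
# Sketch: the supported string values above are exactly the member names
# of the private enums. Assumes matplotlib >= 3.4.
from matplotlib._enums import CapStyle, JoinStyle

print(sorted(js.name for js in JoinStyle))  # ['bevel', 'miter', 'round']
print(sorted(cs.name for cs in CapStyle))   # ['butt', 'projecting', 'round']
```

As the module docstring says, end-user code passes the plain strings; the enums only document and validate them.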
class matplotlib._enums.CapStyle(value)[source] Define how the two endpoints (caps) of an unclosed line are drawn. How to draw the start and end points of lines that represent a closed curve (i.e. that end in a CLOSEPOLY) is controlled by the line's JoinStyle. For all other lines, how the start and end points are drawn is controlled by the CapStyle. For a visual impression of each CapStyle, view these docs online or run CapStyle.demo. Supported values: 'butt' the line is squared off at its endpoint. 'projecting' the line is squared off as in butt, but the filled-in area extends beyond the endpoint a distance of linewidth/2. 'round' like butt, but a semicircular cap is added to the end of the line, of radius linewidth/2. (Source code, png, pdf) static demo()[source] Demonstrate how each CapStyle looks for a thick line segment.
matplotlib._enums_api#matplotlib._enums.CapStyle
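In practice the cap style is passed as a plain string to an Artist property such as Line2D's solid_capstyle. A hedged sketch (the Agg backend is selected only so the snippet runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# A thick line whose endpoints get semicircular 'round' caps of radius linewidth/2.
(line,) = ax.plot([0, 1], [0, 1], linewidth=12, solid_capstyle="round")
print(line.get_solid_capstyle())
```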
static demo()[source] Demonstrate how each CapStyle looks for a thick line segment.
matplotlib._enums_api#matplotlib._enums.CapStyle.demo
class matplotlib._enums.JoinStyle(value)[source] Define how the connection between two line segments is drawn. For a visual impression of each JoinStyle, view these docs online, or run JoinStyle.demo. Lines in Matplotlib are typically defined by a 1D Path and a finite linewidth, where the underlying 1D Path represents the center of the stroked line. By default, GraphicsContextBase defines the boundaries of a stroked line to simply be every point within some radius, linewidth/2, away from any point of the center line. However, this results in corners appearing "rounded", which may not be the desired behavior if you are drawing, for example, a polygon or pointed star. Supported values: 'miter' the "arrow-tip" style. Each boundary of the filled-in area will extend in a straight line parallel to the tangent vector of the centerline at the point it meets the corner, until they meet in a sharp point. 'round' strokes every point within a radius of linewidth/2 of the center line. 'bevel' the "squared-off" style. It can be thought of as a rounded corner where the "circular" part of the corner has been cut off. Note Very long miter tips are cut off (to form a bevel) after a backend-dependent limit called the "miter limit", which specifies the maximum allowed ratio of miter length to line width. For example, the PDF backend uses the default value of 10 specified by the PDF standard, while the SVG backend does not even specify the miter limit, resulting in a default value of 4 per the SVG specification. Matplotlib does not currently allow the user to adjust this parameter. A more detailed description of the effect of a miter limit can be found in the Mozilla Developer Docs (Source code, png, pdf) static demo()[source] Demonstrate how each JoinStyle looks for various join angles.
matplotlib._enums_api#matplotlib._enums.JoinStyle
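Join styles only matter where a line has corners, so they are set the same way as cap styles, via a string-valued property such as Line2D's solid_joinstyle. A hedged sketch drawing the same bent polyline with each of the three documented values (Agg backend used only so no display is required):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for y_off, join in enumerate(["miter", "round", "bevel"]):
    # Each polyline has one sharp corner; only the join style differs.
    ax.plot([0, 1, 2], [y_off, y_off + 1, y_off],
            linewidth=10, solid_joinstyle=join)
styles = [ln.get_solid_joinstyle() for ln in ax.lines]
print(styles)
```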
static demo()[source] Demonstrate how each JoinStyle looks for various join angles.
matplotlib._enums_api#matplotlib._enums.JoinStyle.demo
matplotlib.afm A Python interface to Adobe Font Metrics Files. Although a number of other Python implementations exist, and may be more complete than this, it was decided not to go with them because they were either copyrighted or used a non-BSD-compatible license, had too many dependencies when a free-standing library was needed, or did more than needed, so it was easier to write afresh than to figure out how to extract just what was needed. It is pretty easy to use, and has no external dependencies: >>> import matplotlib as mpl >>> from pathlib import Path >>> afm_path = Path(mpl.get_data_path(), 'fonts', 'afm', 'ptmr8a.afm') >>> >>> from matplotlib.afm import AFM >>> with afm_path.open('rb') as fh: ... afm = AFM(fh) >>> afm.string_width_height('What the heck?') (6220.0, 694) >>> afm.get_fontname() 'Times-Roman' >>> afm.get_kern_dist('A', 'f') 0 >>> afm.get_kern_dist('A', 'y') -92.0 >>> afm.get_bbox_char('!') [130, -9, 238, 676] As in the Adobe Font Metrics File Format Specification, all dimensions are given in units of 1/1000 of the scale factor (point size) of the font being used. class matplotlib.afm.AFM(fh)[source] Bases: object Parse the AFM file in file object fh. property family_name The font family name, e.g., 'Times'. get_angle()[source] Return the font angle as float. get_bbox_char(c, isord=False)[source] get_capheight()[source] Return the cap height as float. get_familyname()[source] Return the font family name, e.g., 'Times'. get_fontname()[source] Return the font name, e.g., 'Times-Roman'. get_fullname()[source] Return the font full name, e.g., 'Times-Roman'. get_height_char(c, isord=False)[source] Get the bounding box (ink) height of character c (space is 0). get_horizontal_stem_width()[source] Return the standard horizontal stem width as float, or None if not specified in AFM file. get_kern_dist(c1, c2)[source] Return the kerning pair distance (possibly 0) for chars c1 and c2. 
get_kern_dist_from_name(name1, name2)[source] Return the kerning pair distance (possibly 0) for chars name1 and name2. get_name_char(c, isord=False)[source] Get the name of the character, i.e., ';' is 'semicolon'. get_str_bbox(s)[source] Return the string bounding box. get_str_bbox_and_descent(s)[source] Return the string bounding box and the maximal descent. get_underline_thickness()[source] Return the underline thickness as float. get_vertical_stem_width()[source] Return the standard vertical stem width as float, or None if not specified in AFM file. get_weight()[source] Return the font weight, e.g., 'Bold' or 'Roman'. get_width_char(c, isord=False)[source] Get the width of the character from the character metric WX field. get_width_from_char_name(name)[source] Get the width of the character from a type1 character name. get_xheight()[source] Return the xheight as float. property postscript_name string_width_height(s)[source] Return the string width (including kerning) and string height as a (w, h) tuple. class matplotlib.afm.CharMetrics(width, name, bbox)[source] Bases: tuple Represents the character metrics of a single character. Notes The fields currently describe only a subset of the character metrics information defined in the AFM standard. Create new instance of CharMetrics(width, name, bbox) bbox The bbox of the character (B) as a tuple (llx, lly, urx, ury). name The character name (N). width The character width (WX). class matplotlib.afm.CompositePart(name, dx, dy)[source] Bases: tuple Represents the information on a composite element of a composite char. Create new instance of CompositePart(name, dx, dy) dx x-displacement of the part from the origin. dy y-displacement of the part from the origin. name Name of the part, e.g. 'acute'.
matplotlib.afm_api
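Since all AFM dimensions are in units of 1/1000 of the point size, converting a string_width_height result into points is a single scaling step. A worked sketch using the width 6220.0 from the doctest above (the 12 pt font size is an assumption chosen purely for illustration):

```python
# AFM metrics are expressed in 1/1000 of the font's point size.
width_units = 6220.0   # width of 'What the heck?' from the example above
fontsize_pt = 12       # assumed point size, for illustration only

width_pt = width_units / 1000 * fontsize_pt  # ~74.64 pt at 12 pt
print(width_pt)
```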
class matplotlib.afm.AFM(fh)[source] Bases: object Parse the AFM file in file object fh. property family_name The font family name, e.g., 'Times'. get_angle()[source] Return the font angle as float. get_bbox_char(c, isord=False)[source] get_capheight()[source] Return the cap height as float. get_familyname()[source] Return the font family name, e.g., 'Times'. get_fontname()[source] Return the font name, e.g., 'Times-Roman'. get_fullname()[source] Return the font full name, e.g., 'Times-Roman'. get_height_char(c, isord=False)[source] Get the bounding box (ink) height of character c (space is 0). get_horizontal_stem_width()[source] Return the standard horizontal stem width as float, or None if not specified in AFM file. get_kern_dist(c1, c2)[source] Return the kerning pair distance (possibly 0) for chars c1 and c2. get_kern_dist_from_name(name1, name2)[source] Return the kerning pair distance (possibly 0) for chars name1 and name2. get_name_char(c, isord=False)[source] Get the name of the character, i.e., ';' is 'semicolon'. get_str_bbox(s)[source] Return the string bounding box. get_str_bbox_and_descent(s)[source] Return the string bounding box and the maximal descent. get_underline_thickness()[source] Return the underline thickness as float. get_vertical_stem_width()[source] Return the standard vertical stem width as float, or None if not specified in AFM file. get_weight()[source] Return the font weight, e.g., 'Bold' or 'Roman'. get_width_char(c, isord=False)[source] Get the width of the character from the character metric WX field. get_width_from_char_name(name)[source] Get the width of the character from a type1 character name. get_xheight()[source] Return the xheight as float. property postscript_name string_width_height(s)[source] Return the string width (including kerning) and string height as a (w, h) tuple.
matplotlib.afm_api#matplotlib.afm.AFM
get_angle()[source] Return the font angle as float.
matplotlib.afm_api#matplotlib.afm.AFM.get_angle
get_bbox_char(c, isord=False)[source]
matplotlib.afm_api#matplotlib.afm.AFM.get_bbox_char
get_capheight()[source] Return the cap height as float.
matplotlib.afm_api#matplotlib.afm.AFM.get_capheight