tarfile — Read and write tar archive files

Source code: Lib/tarfile.py

The tarfile module makes it possible to read and write tar archives, including those using gzip, bz2 and lzma compression. Use the zipfile module to read or write .zip files, or the higher-level functions in shutil.

Some facts and figures:

reads and writes gzip, bz2 and lzma compressed archives if the respective modules are available.
read/write support for the POSIX.1-1988 (ustar) format.
read/write support for the GNU tar format including longname and longlink extensions, read-only support for all variants of the sparse extension including restoration of sparse files.
read/write support for the POSIX.1-2001 (pax) format.
handles directories, regular files, hardlinks, symbolic links, fifos, character devices and block devices and is able to acquire and restore file information like timestamp, access permissions and owner.

Changed in version 3.3: Added support for lzma compression.
tarfile.open(name=None, mode='r', fileobj=None, bufsize=10240, **kwargs)
Return a TarFile object for the pathname name. For detailed information on TarFile objects and the keyword arguments that are allowed, see TarFile Objects. mode has to be a string of the form 'filemode[:compression]', it defaults to 'r'. Here is a full list of mode combinations:
mode action
'r' or 'r:*' Open for reading with transparent compression (recommended).
'r:' Open for reading exclusively without compression.
'r:gz' Open for reading with gzip compression.
'r:bz2' Open for reading with bzip2 compression.
'r:xz' Open for reading with lzma compression.
'x' or 'x:' Create a tarfile exclusively without compression. Raise a FileExistsError exception if it already exists.
'x:gz' Create a tarfile with gzip compression. Raise a FileExistsError exception if it already exists.
'x:bz2' Create a tarfile with bzip2 compression. Raise a FileExistsError exception if it already exists.
'x:xz' Create a tarfile with lzma compression. Raise a FileExistsError exception if it already exists.
'a' or 'a:' Open for appending with no compression. The file is created if it does not exist.
'w' or 'w:' Open for uncompressed writing.
'w:gz' Open for gzip compressed writing.
'w:bz2' Open for bzip2 compressed writing.
'w:xz' Open for lzma compressed writing.

Note that 'a:gz', 'a:bz2' or 'a:xz' is not possible. If mode is not suitable to open a certain (compressed) file for reading, ReadError is raised. Use mode 'r' to avoid this. If a compression method is not supported, CompressionError is raised.

If fileobj is specified, it is used as an alternative to a file object opened in binary mode for name. It is supposed to be at position 0.

For modes 'w:gz', 'r:gz', 'w:bz2', 'r:bz2', 'x:gz', 'x:bz2', tarfile.open() accepts the keyword argument compresslevel (default 9) to specify the compression level of the file.

For special purposes, there is a second format for mode: 'filemode|[compression]'. tarfile.open() will return a TarFile object that processes its data as a stream of blocks. No random seeking will be done on the file. If given, fileobj may be any object that has a read() or write() method (depending on the mode). bufsize specifies the blocksize and defaults to 20 * 512 bytes. Use this variant in combination with e.g. sys.stdin, a socket file object or a tape device. However, such a TarFile object is limited in that it does not allow random access, see Examples. The currently possible modes:
Mode Action
'r|*' Open a stream of tar blocks for reading with transparent compression.
'r|' Open a stream of uncompressed tar blocks for reading.
'r|gz' Open a gzip compressed stream for reading.
'r|bz2' Open a bzip2 compressed stream for reading.
'r|xz' Open an lzma compressed stream for reading.
'w|' Open an uncompressed stream for writing.
'w|gz' Open a gzip compressed stream for writing.
'w|bz2' Open a bzip2 compressed stream for writing.
'w|xz' Open an lzma compressed stream for writing.

Changed in version 3.5: The 'x' (exclusive creation) mode was added.
Changed in version 3.6: The name parameter accepts a path-like object.
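The difference between the two mode families can be sketched with an in-memory buffer. This is an illustrative example, not from the original docs: it writes a member through the streaming 'w|gz' mode (forward-only, no random seeking) and reads it back with 'r|*', which detects the compression transparently.

```python
import io
import tarfile

# Write a gzip-compressed archive to an in-memory buffer with the
# streaming 'w|gz' mode: data is processed as a stream of blocks and
# no random seeking is done on the underlying file object.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w|gz") as tar:
    payload = b"hello"
    info = tarfile.TarInfo(name="hello.txt")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# Read it back with 'r|*': transparent compression on a forward-only stream.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r|*") as tar:
    names = [member.name for member in tar]
print(names)  # ['hello.txt']
```

The same buffer could also be opened with the seekable 'r:*' mode; the '|' variants matter when the underlying object, such as a socket or pipe, cannot seek at all.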
class tarfile.TarFile
Class for reading and writing tar archives. Do not use this class directly: use tarfile.open() instead. See TarFile Objects.
tarfile.is_tarfile(name)
Return True if name is a tar archive file that the tarfile module can read. name may be a str, file, or file-like object. Changed in version 3.9: Support for file and file-like objects.
The tarfile module defines the following exceptions:
exception tarfile.TarError
Base class for all tarfile exceptions.
exception tarfile.ReadError
Is raised when a tar archive is opened that either cannot be handled by the tarfile module or is somehow invalid.
exception tarfile.CompressionError
Is raised when a compression method is not supported or when the data cannot be decoded properly.
exception tarfile.StreamError
Is raised for the limitations that are typical for stream-like TarFile objects.
exception tarfile.ExtractError
Is raised for non-fatal errors when using TarFile.extract(), but only if TarFile.errorlevel == 2.
exception tarfile.HeaderError
Is raised by TarInfo.frombuf() if the buffer it gets is invalid.
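Since all of these derive from TarError, code can catch the specific failures it cares about or fall back to the base class. A minimal sketch, using a deliberately malformed temporary file to provoke ReadError:

```python
import tarfile
import tempfile

# Write a few bytes that are certainly not a tar header, then try to
# open the file: tarfile raises ReadError, a subclass of TarError.
with tempfile.NamedTemporaryFile(suffix=".tar", delete=False) as f:
    f.write(b"this is not a tar archive")
    bogus = f.name

caught = None
try:
    tarfile.open(bogus)
except tarfile.ReadError as exc:
    caught = exc
    print("ReadError:", exc)
```

Catching `tarfile.TarError` instead would also cover CompressionError and the other subclasses in one handler.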
The following constants are available at the module level:
tarfile.ENCODING
The default character encoding: 'utf-8' on Windows, the value returned by sys.getfilesystemencoding() otherwise.
Each of the following constants defines a tar archive format that the tarfile module is able to create. See section Supported tar formats for details.
tarfile.USTAR_FORMAT
POSIX.1-1988 (ustar) format.
tarfile.GNU_FORMAT
GNU tar format.
tarfile.PAX_FORMAT
POSIX.1-2001 (pax) format.
tarfile.DEFAULT_FORMAT
The default format for creating archives. This is currently PAX_FORMAT. Changed in version 3.8: The default format for new archives was changed to PAX_FORMAT from GNU_FORMAT.
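The format constants are passed to tarfile.open() (or the TarFile constructor) when writing. The following sketch, not taken from the original docs, writes the same single member once per creatable format; every resulting archive is padded to a multiple of the 512-byte block size:

```python
import io
import tarfile

# Write one empty member per creatable format and record the raw sizes.
sizes = {}
for fmt in (tarfile.USTAR_FORMAT, tarfile.GNU_FORMAT, tarfile.PAX_FORMAT):
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w", format=fmt) as tar:
        tar.addfile(tarfile.TarInfo(name="a.txt"))
    sizes[fmt] = len(buf.getvalue())
print(sizes)
```

For plain ASCII names the formats may produce identically sized output; the differences appear with long names, large files, or non-ASCII metadata, as described under Supported tar formats below.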
See also
Module zipfile
Documentation of the zipfile standard module. Archiving operations
Documentation of the higher-level archiving facilities provided by the standard shutil module. GNU tar manual, Basic Tar Format
Documentation for tar archive files, including GNU tar extensions.

TarFile Objects

The TarFile object provides an interface to a tar archive. A tar archive is a sequence of blocks. An archive member (a stored file) is made up of a header block followed by data blocks. It is possible to store a file in a tar archive several times. Each archive member is represented by a TarInfo object, see TarInfo Objects for details. A TarFile object can be used as a context manager in a with statement. It will automatically be closed when the block is completed. Please note that in the event of an exception an archive opened for writing will not be finalized; only the internally used file object will be closed. See the Examples section for a use case. New in version 3.2: Added support for the context management protocol.
class tarfile.TarFile(name=None, mode='r', fileobj=None, format=DEFAULT_FORMAT, tarinfo=TarInfo, dereference=False, ignore_zeros=False, encoding=ENCODING, errors='surrogateescape', pax_headers=None, debug=0, errorlevel=0)
All following arguments are optional and can be accessed as instance attributes as well.

name is the pathname of the archive. name may be a path-like object. It can be omitted if fileobj is given. In this case, the file object’s name attribute is used if it exists.

mode is either 'r' to read from an existing archive, 'a' to append data to an existing file, 'w' to create a new file overwriting an existing one, or 'x' to create a new file only if it does not already exist.

If fileobj is given, it is used for reading or writing data. If it can be determined, mode is overridden by fileobj’s mode. fileobj will be used from position 0. Note: fileobj is not closed when the TarFile is closed.

format controls the archive format for writing. It must be one of the constants USTAR_FORMAT, GNU_FORMAT or PAX_FORMAT that are defined at module level. When reading, format will be automatically detected, even if different formats are present in a single archive.

The tarinfo argument can be used to replace the default TarInfo class with a different one.

If dereference is False, add symbolic and hard links to the archive. If it is True, add the content of the target files to the archive. This has no effect on systems that do not support symbolic links.

If ignore_zeros is False, treat an empty block as the end of the archive. If it is True, skip empty (and invalid) blocks and try to get as many members as possible. This is only useful for reading concatenated or damaged archives.

debug can be set from 0 (no debug messages) up to 3 (all debug messages). The messages are written to sys.stderr.

If errorlevel is 0, all errors are ignored when using TarFile.extract(). Nevertheless, they appear as error messages in the debug output when debugging is enabled. If 1, all fatal errors are raised as OSError exceptions. If 2, all non-fatal errors are raised as TarError exceptions as well.
The encoding and errors arguments define the character encoding to be used for reading or writing the archive and how conversion errors are going to be handled. The default settings will work for most users. See section Unicode issues for in-depth information. The pax_headers argument is an optional dictionary of strings which will be added as a pax global header if format is PAX_FORMAT. Changed in version 3.2: Use 'surrogateescape' as the default for the errors argument. Changed in version 3.5: The 'x' (exclusive creation) mode was added. Changed in version 3.6: The name parameter accepts a path-like object.
classmethod TarFile.open(...)
Alternative constructor. The tarfile.open() function is actually a shortcut to this classmethod.
TarFile.getmember(name)
Return a TarInfo object for member name. If name cannot be found in the archive, KeyError is raised. Note If a member occurs more than once in the archive, its last occurrence is assumed to be the most up-to-date version.
TarFile.getmembers()
Return the members of the archive as a list of TarInfo objects. The list has the same order as the members in the archive.
TarFile.getnames()
Return the members as a list of their names. It has the same order as the list returned by getmembers().
TarFile.list(verbose=True, *, members=None)
Print a table of contents to sys.stdout. If verbose is False, only the names of the members are printed. If it is True, output similar to that of ls -l is produced. If optional members is given, it must be a subset of the list returned by getmembers(). Changed in version 3.5: Added the members parameter.
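The three inspection methods above can be combined to examine an archive without extracting anything. A small self-contained sketch (the member names are made up for the demonstration):

```python
import io
import tarfile

# Build a tiny in-memory archive, then inspect it without extracting.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name in ("a.txt", "b.txt"):
        tar.addfile(tarfile.TarInfo(name=name))

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    names = tar.getnames()        # same order as the members in the archive
    members = tar.getmembers()    # TarInfo objects, same order again
print(names)  # ['a.txt', 'b.txt']
```

tar.list() would print a comparable table of contents directly to sys.stdout instead of returning it.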
TarFile.next()
Return the next member of the archive as a TarInfo object, when TarFile is opened for reading. Return None if no more members are available.
TarFile.extractall(path=".", members=None, *, numeric_owner=False)
Extract all members from the archive to the current working directory or directory path. If optional members is given, it must be a subset of the list returned by getmembers(). Directory information like owner, modification time and permissions are set after all members have been extracted. This is done to work around two problems: A directory’s modification time is reset each time a file is created in it. And, if a directory’s permissions do not allow writing, extracting files to it will fail. If numeric_owner is True, the uid and gid numbers from the tarfile are used to set the owner/group for the extracted files. Otherwise, the named values from the tarfile are used. Warning Never extract archives from untrusted sources without prior inspection. It is possible that files are created outside of path, e.g. members that have absolute filenames starting with "/" or filenames with two dots "..". Changed in version 3.5: Added the numeric_owner parameter. Changed in version 3.6: The path parameter accepts a path-like object.
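One way to act on that warning is to inspect the member list before passing it to extractall(). The helper below is a sketch under stated assumptions, not a complete defense: it only rejects members whose resolved path would escape the destination, and it does not account for symlink members, for example. (Python 3.12 later added a dedicated `filter` argument for extraction; on the versions this document describes, a manual pre-check like this is one option.)

```python
import io
import os
import tarfile

def safe_members(tar, dest):
    """Yield only members whose resolved target stays inside dest (sketch)."""
    dest = os.path.realpath(dest)
    for member in tar.getmembers():
        target = os.path.realpath(os.path.join(dest, member.name))
        if os.path.commonpath([dest, target]) == dest:
            yield member

# Demonstration on an in-memory archive containing one unsafe member:
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name in ("good.txt", "../evil.txt"):
        tar.addfile(tarfile.TarInfo(name=name))
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    kept = [m.name for m in safe_members(tar, "dest")]
print(kept)  # ['good.txt']

# Usage sketch with a real archive:
# with tarfile.open("sample.tar.gz") as tar:
#     tar.extractall(path="dest", members=safe_members(tar, "dest"))
```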
TarFile.extract(member, path="", set_attrs=True, *, numeric_owner=False)
Extract a member from the archive to the current working directory, using its full name. Its file information is extracted as accurately as possible. member may be a filename or a TarInfo object. You can specify a different directory using path. path may be a path-like object. File attributes (owner, mtime, mode) are set unless set_attrs is false. If numeric_owner is True, the uid and gid numbers from the tarfile are used to set the owner/group for the extracted files. Otherwise, the named values from the tarfile are used. Note The extract() method does not take care of several extraction issues. In most cases you should consider using the extractall() method. Warning See the warning for extractall(). Changed in version 3.2: Added the set_attrs parameter. Changed in version 3.5: Added the numeric_owner parameter. Changed in version 3.6: The path parameter accepts a path-like object.
TarFile.extractfile(member)
Extract a member from the archive as a file object. member may be a filename or a TarInfo object. If member is a regular file or a link, an io.BufferedReader object is returned. For all other existing members, None is returned. If member does not appear in the archive, KeyError is raised. Changed in version 3.3: Return an io.BufferedReader object.
TarFile.add(name, arcname=None, recursive=True, *, filter=None)
Add the file name to the archive. name may be any type of file (directory, fifo, symbolic link, etc.). If given, arcname specifies an alternative name for the file in the archive. Directories are added recursively by default. This can be avoided by setting recursive to False. Recursion adds entries in sorted order. If filter is given, it should be a function that takes a TarInfo object argument and returns the changed TarInfo object. If it instead returns None the TarInfo object will be excluded from the archive. See Examples for an example. Changed in version 3.2: Added the filter parameter. Changed in version 3.7: Recursion adds entries in sorted order.
TarFile.addfile(tarinfo, fileobj=None)
Add the TarInfo object tarinfo to the archive. If fileobj is given, it should be a binary file, and tarinfo.size bytes are read from it and added to the archive. You can create TarInfo objects directly, or by using gettarinfo().
TarFile.gettarinfo(name=None, arcname=None, fileobj=None)
Create a TarInfo object from the result of os.stat() or equivalent on an existing file. The file is either named by name, or specified as a file object fileobj with a file descriptor. name may be a path-like object. If given, arcname specifies an alternative name for the file in the archive, otherwise, the name is taken from fileobj’s name attribute, or the name argument. The name should be a text string. You can modify some of the TarInfo’s attributes before you add it using addfile(). If the file object is not an ordinary file object positioned at the beginning of the file, attributes such as size may need modifying. This is the case for objects such as GzipFile. The name may also be modified, in which case arcname could be a dummy string. Changed in version 3.6: The name parameter accepts a path-like object.
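The gettarinfo()/addfile() pair is the low-level route for adding a file while rewriting its metadata. A self-contained sketch (the file contents and archive name are made up for the demonstration):

```python
import io
import os
import tarfile
import tempfile

# Create a real file on disk to read the stat information from.
with tempfile.NamedTemporaryFile(delete=False) as src:
    src.write(b"payload")
    path = src.name

# Build a TarInfo from the open file with gettarinfo(), adjust some of
# its attributes, and store the data under a different name via addfile().
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    with open(path, "rb") as f:
        info = tar.gettarinfo(fileobj=f, arcname="renamed.txt")
        info.uid = info.gid = 0          # reset ownership metadata
        tar.addfile(info, f)             # reads info.size bytes from f
os.unlink(path)

# Read the member back to confirm the round trip.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    data = tar.extractfile("renamed.txt").read()
print(data)  # b'payload'
```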
TarFile.close()
Close the TarFile. In write mode, two finishing zero blocks are appended to the archive.
TarFile.pax_headers
A dictionary containing key-value pairs of pax global headers.
TarInfo Objects

A TarInfo object represents one member in a TarFile. Aside from storing all required attributes of a file (like file type, size, time, permissions, owner etc.), it provides some useful methods to determine its type. It does not contain the file’s data itself. TarInfo objects are returned by TarFile’s methods getmember(), getmembers() and gettarinfo().
class tarfile.TarInfo(name="")
Create a TarInfo object.
classmethod TarInfo.frombuf(buf, encoding, errors)
Create and return a TarInfo object from string buffer buf. Raises HeaderError if the buffer is invalid.
classmethod TarInfo.fromtarfile(tarfile)
Read the next member from the TarFile object tarfile and return it as a TarInfo object.
TarInfo.tobuf(format=DEFAULT_FORMAT, encoding=ENCODING, errors='surrogateescape')
Create a string buffer from a TarInfo object. For information on the arguments see the constructor of the TarFile class. Changed in version 3.2: Use 'surrogateescape' as the default for the errors argument.
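tobuf() and frombuf() are inverses, which makes the header layout easy to poke at. A short sketch (the member name and size are arbitrary):

```python
import tarfile

# Serialize a TarInfo to its raw header block with tobuf(), then parse
# the block back into a new TarInfo with the frombuf() classmethod.
info = tarfile.TarInfo(name="roundtrip.txt")
info.size = 42
block = info.tobuf(tarfile.USTAR_FORMAT, "utf-8", "surrogateescape")
print(len(block))   # a ustar header for a short name is one 512-byte block

parsed = tarfile.TarInfo.frombuf(block, "utf-8", "surrogateescape")
print(parsed.name, parsed.size)
```

frombuf() validates the header checksum and raises HeaderError if the buffer is not a well-formed header block.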
A TarInfo object has the following public data attributes:
TarInfo.name
Name of the archive member.
TarInfo.size
Size in bytes.
TarInfo.mtime
Time of last modification.
TarInfo.mode
Permission bits.
TarInfo.type
File type. type is usually one of these constants: REGTYPE, AREGTYPE, LNKTYPE, SYMTYPE, DIRTYPE, FIFOTYPE, CONTTYPE, CHRTYPE, BLKTYPE, GNUTYPE_SPARSE. To determine the type of a TarInfo object more conveniently, use the is*() methods below.
TarInfo.linkname
Name of the target file, which is only present in TarInfo objects of type LNKTYPE and SYMTYPE.
TarInfo.uid
User ID of the user who originally stored this member.
TarInfo.gid
Group ID of the user who originally stored this member.
TarInfo.uname
User name.
TarInfo.gname
Group name.
TarInfo.pax_headers
A dictionary containing key-value pairs of an associated pax extended header.
A TarInfo object also provides some convenient query methods:
TarInfo.isfile()
Return True if the TarInfo object is a regular file.
TarInfo.isreg()
Same as isfile().
TarInfo.isdir()
Return True if it is a directory.
TarInfo.issym()
Return True if it is a symbolic link.
TarInfo.islnk()
Return True if it is a hard link.
TarInfo.ischr()
Return True if it is a character device.
TarInfo.isblk()
Return True if it is a block device.
TarInfo.isfifo()
Return True if it is a FIFO.
TarInfo.isdev()
Return True if it is one of character device, block device or FIFO.
Command-Line Interface

New in version 3.4.

The tarfile module provides a simple command-line interface to interact with tar archives.

If you want to create a new tar archive, specify its name after the -c option and then list the filename(s) that should be included:

$ python -m tarfile -c monty.tar spam.txt eggs.txt

Passing a directory is also acceptable:

$ python -m tarfile -c monty.tar life-of-brian_1979/

If you want to extract a tar archive into the current directory, use the -e option:

$ python -m tarfile -e monty.tar

You can also extract a tar archive into a different directory by passing the directory’s name:

$ python -m tarfile -e monty.tar other-dir/

For a list of the files in a tar archive, use the -l option:

$ python -m tarfile -l monty.tar
Command-line options
-l <tarfile>
--list <tarfile>
List files in a tarfile.
-c <tarfile> <source1> ... <sourceN>
--create <tarfile> <source1> ... <sourceN>
Create tarfile from source files.
-e <tarfile> [<output_dir>]
--extract <tarfile> [<output_dir>]
Extract tarfile into the current directory if output_dir is not specified.
-t <tarfile>
--test <tarfile>
Test whether the tarfile is valid or not.
-v, --verbose
Verbose output.
Examples

How to extract an entire tar archive to the current working directory:

import tarfile
tar = tarfile.open("sample.tar.gz")
tar.extractall()
tar.close()

How to extract a subset of a tar archive with TarFile.extractall() using a generator function instead of a list:

import os
import tarfile

def py_files(members):
    for tarinfo in members:
        if os.path.splitext(tarinfo.name)[1] == ".py":
            yield tarinfo

tar = tarfile.open("sample.tar.gz")
tar.extractall(members=py_files(tar))
tar.close()
How to create an uncompressed tar archive from a list of filenames:

import tarfile
tar = tarfile.open("sample.tar", "w")
for name in ["foo", "bar", "quux"]:
    tar.add(name)
tar.close()

The same example using the with statement:

import tarfile
with tarfile.open("sample.tar", "w") as tar:
    for name in ["foo", "bar", "quux"]:
        tar.add(name)
How to read a gzip compressed tar archive and display some member information:

import tarfile
tar = tarfile.open("sample.tar.gz", "r:gz")
for tarinfo in tar:
    print(tarinfo.name, "is", tarinfo.size, "bytes in size and is ", end="")
    if tarinfo.isreg():
        print("a regular file.")
    elif tarinfo.isdir():
        print("a directory.")
    else:
        print("something else.")
tar.close()
How to create an archive and reset the user information using the filter parameter in TarFile.add():

import tarfile

def reset(tarinfo):
    tarinfo.uid = tarinfo.gid = 0
    tarinfo.uname = tarinfo.gname = "root"
    return tarinfo

tar = tarfile.open("sample.tar.gz", "w:gz")
tar.add("foo", filter=reset)
tar.close()
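A filter may also return None to exclude a member entirely. The following self-contained sketch (the file names are made up for the demonstration) skips editor backup files while adding a directory tree:

```python
import io
import os
import tarfile
import tempfile

# Returning None from the filter drops the member from the archive.
def skip_backups(tarinfo):
    return None if tarinfo.name.endswith("~") else tarinfo

# Create a small directory tree with one file we want to exclude.
tree = tempfile.mkdtemp()
for fname in ("keep.txt", "notes.txt~"):
    with open(os.path.join(tree, fname), "w") as f:
        f.write("x")

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    tar.add(tree, arcname="tree", filter=skip_backups)

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    names = tar.getnames()
print(names)  # 'tree/notes.txt~' is excluded
```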
Supported tar formats

There are three tar formats that can be created with the tarfile module:

The POSIX.1-1988 ustar format (USTAR_FORMAT). It supports filenames up to a length of at best 256 characters and linknames up to 100 characters. The maximum file size is 8 GiB. This is an old and limited but widely supported format.

The GNU tar format (GNU_FORMAT). It supports long filenames and linknames, files bigger than 8 GiB and sparse files. It is the de facto standard on GNU/Linux systems. tarfile fully supports the GNU tar extensions for long names; sparse file support is read-only.

The POSIX.1-2001 pax format (PAX_FORMAT). It is the most flexible format with virtually no limits. It supports long filenames and linknames, large files and stores pathnames in a portable way. Modern tar implementations, including GNU tar, bsdtar/libarchive and star, fully support extended pax features; some old or unmaintained libraries may not, but should treat pax archives as if they were in the universally supported ustar format. It is the current default format for new archives. It extends the existing ustar format with extra headers for information that cannot be stored otherwise. There are two flavours of pax headers: Extended headers only affect the subsequent file header, global headers are valid for the complete archive and affect all following files. All the data in a pax header is encoded in UTF-8 for portability reasons.

There are some more variants of the tar format which can be read, but not created:

The ancient V7 format. This is the first tar format from Unix Seventh Edition, storing only regular files and directories. Names must not be longer than 100 characters, there is no user/group name information. Some archives have miscalculated header checksums in case of fields with non-ASCII characters.

The SunOS tar extended format. This format is a variant of the POSIX.1-2001 pax format, but is not compatible.

Unicode issues

The tar format was originally conceived to make backups on tape drives with the main focus on preserving file system information. Nowadays tar archives are commonly used for file distribution and exchanging archives over networks. One problem of the original format (which is the basis of all other formats) is that there is no concept of supporting different character encodings. For example, an ordinary tar archive created on a UTF-8 system cannot be read correctly on a Latin-1 system if it contains non-ASCII characters. Textual metadata (like filenames, linknames, user/group names) will appear damaged. Unfortunately, there is no way to autodetect the encoding of an archive.

The pax format was designed to solve this problem. It stores non-ASCII metadata using the universal character encoding UTF-8.

The details of character conversion in tarfile are controlled by the encoding and errors keyword arguments of the TarFile class. encoding defines the character encoding to use for the metadata in the archive. The default value is sys.getfilesystemencoding() or 'ascii' as a fallback. Depending on whether the archive is read or written, the metadata must be either decoded or encoded. If encoding is not set appropriately, this conversion may fail.

The errors argument defines how characters are treated that cannot be converted. Possible values are listed in section Error Handlers. The default scheme is 'surrogateescape' which Python also uses for its file system calls, see File Names, Command Line Arguments, and Environment Variables.

For PAX_FORMAT archives (the default), encoding is generally not needed because all the metadata is stored using UTF-8. encoding is only used in the rare cases when binary pax headers are decoded or when strings with surrogate characters are stored.
exception tarfile.CompressionError
Is raised when a compression method is not supported or when the data cannot be decoded properly. | python.library.tarfile#tarfile.CompressionError |
tarfile.DEFAULT_FORMAT
The default format for creating archives. This is currently PAX_FORMAT. Changed in version 3.8: The default format for new archives was changed to PAX_FORMAT from GNU_FORMAT. | python.library.tarfile#tarfile.DEFAULT_FORMAT |
tarfile.ENCODING
The default character encoding: 'utf-8' on Windows, the value returned by sys.getfilesystemencoding() otherwise. | python.library.tarfile#tarfile.ENCODING |
exception tarfile.ExtractError
Is raised for non-fatal errors when using TarFile.extract(), but only if TarFile.errorlevel== 2. | python.library.tarfile#tarfile.ExtractError |
tarfile.GNU_FORMAT
GNU tar format. | python.library.tarfile#tarfile.GNU_FORMAT |
exception tarfile.HeaderError
Is raised by TarInfo.frombuf() if the buffer it gets is invalid. | python.library.tarfile#tarfile.HeaderError |
tarfile.is_tarfile(name)
Return True if name is a tar archive file, that the tarfile module can read. name may be a str, file, or file-like object. Changed in version 3.9: Support for file and file-like objects. | python.library.tarfile#tarfile.is_tarfile |
tarfile.open(name=None, mode='r', fileobj=None, bufsize=10240, **kwargs)
Return a TarFile object for the pathname name. For detailed information on TarFile objects and the keyword arguments that are allowed, see TarFile Objects. mode has to be a string of the form 'filemode[:compression]', it defaults to 'r'. Here is a full list of mode combinations:
mode action
'r' or 'r:*' Open for reading with transparent compression (recommended).
'r:' Open for reading exclusively without compression.
'r:gz' Open for reading with gzip compression.
'r:bz2' Open for reading with bzip2 compression.
'r:xz' Open for reading with lzma compression.
'x' or 'x:' Create a tarfile exclusively without compression. Raise an FileExistsError exception if it already exists.
'x:gz' Create a tarfile with gzip compression. Raise an FileExistsError exception if it already exists.
'x:bz2' Create a tarfile with bzip2 compression. Raise an FileExistsError exception if it already exists.
'x:xz' Create a tarfile with lzma compression. Raise an FileExistsError exception if it already exists.
'a' or 'a:' Open for appending with no compression. The file is created if it does not exist.
'w' or 'w:' Open for uncompressed writing.
'w:gz' Open for gzip compressed writing.
'w:bz2' Open for bzip2 compressed writing.
'w:xz' Open for lzma compressed writing. Note that 'a:gz', 'a:bz2' or 'a:xz' is not possible. If mode is not suitable to open a certain (compressed) file for reading, ReadError is raised. Use mode 'r' to avoid this. If a compression method is not supported, CompressionError is raised. If fileobj is specified, it is used as an alternative to a file object opened in binary mode for name. It is supposed to be at position 0. For modes 'w:gz', 'r:gz', 'w:bz2', 'r:bz2', 'x:gz', 'x:bz2', tarfile.open() accepts the keyword argument compresslevel (default 9) to specify the compression level of the file. For special purposes, there is a second format for mode: 'filemode|[compression]'. tarfile.open() will return a TarFile object that processes its data as a stream of blocks. No random seeking will be done on the file. If given, fileobj may be any object that has a read() or write() method (depending on the mode). bufsize specifies the blocksize and defaults to 20 * 512 bytes. Use this variant in combination with e.g. sys.stdin, a socket file object or a tape device. However, such a TarFile object is limited in that it does not allow random access, see Examples. The currently possible modes:
Mode Action
'r|*' Open a stream of tar blocks for reading with transparent compression.
'r|' Open a stream of uncompressed tar blocks for reading.
'r|gz' Open a gzip compressed stream for reading.
'r|bz2' Open a bzip2 compressed stream for reading.
'r|xz' Open an lzma compressed stream for reading.
'w|' Open an uncompressed stream for writing.
'w|gz' Open a gzip compressed stream for writing.
'w|bz2' Open a bzip2 compressed stream for writing.
'w|xz' Open an lzma compressed stream for writing. Changed in version 3.5: The 'x' (exclusive creation) mode was added. Changed in version 3.6: The name parameter accepts a path-like object. | python.library.tarfile#tarfile.open |
tarfile.PAX_FORMAT
POSIX.1-2001 (pax) format. | python.library.tarfile#tarfile.PAX_FORMAT |
exception tarfile.ReadError
Is raised when a tar archive is opened, that either cannot be handled by the tarfile module or is somehow invalid. | python.library.tarfile#tarfile.ReadError |
exception tarfile.StreamError
Is raised for the limitations that are typical for stream-like TarFile objects. | python.library.tarfile#tarfile.StreamError |
exception tarfile.TarError
Base class for all tarfile exceptions. | python.library.tarfile#tarfile.TarError |
class tarfile.TarFile(name=None, mode='r', fileobj=None, format=DEFAULT_FORMAT, tarinfo=TarInfo, dereference=False, ignore_zeros=False, encoding=ENCODING, errors='surrogateescape', pax_headers=None, debug=0, errorlevel=0)
All following arguments are optional and can be accessed as instance attributes as well. name is the pathname of the archive. name may be a path-like object. It can be omitted if fileobj is given. In this case, the file object’s name attribute is used if it exists. mode is either 'r' to read from an existing archive, 'a' to append data to an existing file, 'w' to create a new file overwriting an existing one, or 'x' to create a new file only if it does not already exist. If fileobj is given, it is used for reading or writing data. If it can be determined, mode is overridden by fileobj’s mode. fileobj will be used from position 0. Note fileobj is not closed, when TarFile is closed. format controls the archive format for writing. It must be one of the constants USTAR_FORMAT, GNU_FORMAT or PAX_FORMAT that are defined at module level. When reading, format will be automatically detected, even if different formats are present in a single archive. The tarinfo argument can be used to replace the default TarInfo class with a different one. If dereference is False, add symbolic and hard links to the archive. If it is True, add the content of the target files to the archive. This has no effect on systems that do not support symbolic links. If ignore_zeros is False, treat an empty block as the end of the archive. If it is True, skip empty (and invalid) blocks and try to get as many members as possible. This is only useful for reading concatenated or damaged archives. debug can be set from 0 (no debug messages) up to 3 (all debug messages). The messages are written to sys.stderr. If errorlevel is 0, all errors are ignored when using TarFile.extract(). Nevertheless, they appear as error messages in the debug output, when debugging is enabled. If 1, all fatal errors are raised as OSError exceptions. If 2, all non-fatal errors are raised as TarError exceptions as well. 
The encoding and errors arguments define the character encoding to be used for reading or writing the archive and how conversion errors are going to be handled. The default settings will work for most users. See section Unicode issues for in-depth information. The pax_headers argument is an optional dictionary of strings which will be added as a pax global header if format is PAX_FORMAT. Changed in version 3.2: Use 'surrogateescape' as the default for the errors argument. Changed in version 3.5: The 'x' (exclusive creation) mode was added. Changed in version 3.6: The name parameter accepts a path-like object. | python.library.tarfile#tarfile.TarFile |
TarFile.add(name, arcname=None, recursive=True, *, filter=None)
Add the file name to the archive. name may be any type of file (directory, fifo, symbolic link, etc.). If given, arcname specifies an alternative name for the file in the archive. Directories are added recursively by default. This can be avoided by setting recursive to False. Recursion adds entries in sorted order. If filter is given, it should be a function that takes a TarInfo object argument and returns the changed TarInfo object. If it instead returns None, the TarInfo object will be excluded from the archive. See Examples for an example. Changed in version 3.2: Added the filter parameter. Changed in version 3.7: Recursion adds entries in sorted order. | python.library.tarfile#tarfile.TarFile.add |
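The filter argument described above can be sketched like this; the directory layout and the "exclude editor backups" rule are illustrative assumptions, not part of the API.

```python
import os
import tarfile
import tempfile

# Illustrative source tree with one file we want and one we do not.
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "src")
os.mkdir(src)
for fname in ("keep.txt", "skip.txt~"):
    with open(os.path.join(src, fname), "w") as f:
        f.write("data")

def exclude_backups(tarinfo):
    if tarinfo.name.endswith("~"):
        return None                              # None excludes the member
    tarinfo.uname = tarinfo.gname = "nobody"     # mutate and return to keep it
    return tarinfo

archive = os.path.join(tmpdir, "out.tar")
with tarfile.open(archive, "w") as tar:
    tar.add(src, arcname="project", filter=exclude_backups)

with tarfile.open(archive) as tar:
    names = tar.getnames()
print(names)
```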
TarFile.addfile(tarinfo, fileobj=None)
Add the TarInfo object tarinfo to the archive. If fileobj is given, it should be a binary file, and tarinfo.size bytes are read from it and added to the archive. You can create TarInfo objects directly, or by using gettarinfo(). | python.library.tarfile#tarfile.TarFile.addfile |
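A minimal sketch of addfile() with a hand-made TarInfo: the member is built entirely in memory, and tarinfo.size must match the number of bytes the file object supplies. The member name here is illustrative.

```python
import io
import os
import tarfile
import tempfile

payload = b"generated on the fly"
info = tarfile.TarInfo(name="generated.txt")
info.size = len(payload)                 # size must match the payload

archive = os.path.join(tempfile.mkdtemp(), "mem.tar")
with tarfile.open(archive, "w") as tar:
    tar.addfile(info, io.BytesIO(payload))

with tarfile.open(archive) as tar:
    size = tar.getmember("generated.txt").size
print(size)
```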
TarFile.close()
Close the TarFile. In write mode, two finishing zero blocks are appended to the archive. | python.library.tarfile#tarfile.TarFile.close |
TarFile.extract(member, path="", set_attrs=True, *, numeric_owner=False)
Extract a member from the archive to the current working directory, using its full name. Its file information is extracted as accurately as possible. member may be a filename or a TarInfo object. You can specify a different directory using path. path may be a path-like object. File attributes (owner, mtime, mode) are set unless set_attrs is false. If numeric_owner is True, the uid and gid numbers from the tarfile are used to set the owner/group for the extracted files. Otherwise, the named values from the tarfile are used. Note The extract() method does not take care of several extraction issues. In most cases you should consider using the extractall() method. Warning See the warning for extractall(). Changed in version 3.2: Added the set_attrs parameter. Changed in version 3.5: Added the numeric_owner parameter. Changed in version 3.6: The path parameter accepts a path-like object. | python.library.tarfile#tarfile.TarFile.extract |
TarFile.extractall(path=".", members=None, *, numeric_owner=False)
Extract all members from the archive to the current working directory or directory path. If optional members is given, it must be a subset of the list returned by getmembers(). Directory information like owner, modification time and permissions are set after all members have been extracted. This is done to work around two problems: A directory’s modification time is reset each time a file is created in it. And, if a directory’s permissions do not allow writing, extracting files to it will fail. If numeric_owner is True, the uid and gid numbers from the tarfile are used to set the owner/group for the extracted files. Otherwise, the named values from the tarfile are used. Warning Never extract archives from untrusted sources without prior inspection. It is possible that files are created outside of path, e.g. members that have absolute filenames starting with "/" or filenames with two dots "..". Changed in version 3.5: Added the numeric_owner parameter. Changed in version 3.6: The path parameter accepts a path-like object. | python.library.tarfile#tarfile.TarFile.extractall |
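The inspect-before-extract advice in the warning above can be sketched with a filtering generator passed as members; the rejection rule shown (absolute names or ".." components) is a simple illustration, not an exhaustive safety check.

```python
import os
import tarfile
import tempfile

def safe_members(tar):
    # Skip members whose names could escape the extraction directory.
    for m in tar.getmembers():
        if m.name.startswith("/") or ".." in m.name.split("/"):
            continue
        yield m

# Build a small archive to extract (names are illustrative).
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "f.txt")
with open(src, "w") as f:
    f.write("x")
archive = os.path.join(tmpdir, "a.tar")
with tarfile.open(archive, "w") as tar:
    tar.add(src, arcname="f.txt")

dest = os.path.join(tmpdir, "out")
with tarfile.open(archive) as tar:
    tar.extractall(path=dest, members=safe_members(tar))
exists = os.path.exists(os.path.join(dest, "f.txt"))
print(exists)
```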
TarFile.extractfile(member)
Extract a member from the archive as a file object. member may be a filename or a TarInfo object. If member is a regular file or a link, an io.BufferedReader object is returned. For all other existing members, None is returned. If member does not appear in the archive, KeyError is raised. Changed in version 3.3: Return an io.BufferedReader object. | python.library.tarfile#tarfile.TarFile.extractfile |
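A short sketch of extractfile(): the member's contents are read through the returned file object without writing anything to disk. The member name and payload are illustrative.

```python
import io
import os
import tarfile
import tempfile

data = b"in-archive text"
info = tarfile.TarInfo("doc.txt")
info.size = len(data)

archive = os.path.join(tempfile.mkdtemp(), "r.tar")
with tarfile.open(archive, "w") as tar:
    tar.addfile(info, io.BytesIO(data))

with tarfile.open(archive) as tar:
    f = tar.extractfile("doc.txt")   # io.BufferedReader for a regular file
    content = f.read()
print(content)
```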
TarFile.getmember(name)
Return a TarInfo object for member name. If name can not be found in the archive, KeyError is raised. Note If a member occurs more than once in the archive, its last occurrence is assumed to be the most up-to-date version. | python.library.tarfile#tarfile.TarFile.getmember |
TarFile.getmembers()
Return the members of the archive as a list of TarInfo objects. The list has the same order as the members in the archive. | python.library.tarfile#tarfile.TarFile.getmembers |
TarFile.getnames()
Return the members as a list of their names. It has the same order as the list returned by getmembers(). | python.library.tarfile#tarfile.TarFile.getnames |
TarFile.gettarinfo(name=None, arcname=None, fileobj=None)
Create a TarInfo object from the result of os.stat() or equivalent on an existing file. The file is either named by name, or specified as a file object fileobj with a file descriptor. name may be a path-like object. If given, arcname specifies an alternative name for the file in the archive, otherwise, the name is taken from fileobj’s name attribute, or the name argument. The name should be a text string. You can modify some of the TarInfo’s attributes before you add it using addfile(). If the file object is not an ordinary file object positioned at the beginning of the file, attributes such as size may need modifying. This is the case for objects such as GzipFile. The name may also be modified, in which case arcname could be a dummy string. Changed in version 3.6: The name parameter accepts a path-like object. | python.library.tarfile#tarfile.TarFile.gettarinfo |
TarFile.list(verbose=True, *, members=None)
Print a table of contents to sys.stdout. If verbose is False, only the names of the members are printed. If it is True, output similar to that of ls -l is produced. If optional members is given, it must be a subset of the list returned by getmembers(). Changed in version 3.5: Added the members parameter. | python.library.tarfile#tarfile.TarFile.list |
TarFile.next()
Return the next member of the archive as a TarInfo object when the TarFile is opened for reading. Return None if no more members are available. | python.library.tarfile#tarfile.TarFile.next |
classmethod TarFile.open(...)
Alternative constructor. The tarfile.open() function is actually a shortcut to this classmethod. | python.library.tarfile#tarfile.TarFile.open |
TarFile.pax_headers
A dictionary containing key-value pairs of pax global headers. | python.library.tarfile#tarfile.TarFile.pax_headers |
class tarfile.TarInfo(name="")
Create a TarInfo object. | python.library.tarfile#tarfile.TarInfo |
classmethod TarInfo.frombuf(buf, encoding, errors)
Create and return a TarInfo object from string buffer buf. Raises HeaderError if the buffer is invalid. | python.library.tarfile#tarfile.TarInfo.frombuf |
classmethod TarInfo.fromtarfile(tarfile)
Read the next member from the TarFile object tarfile and return it as a TarInfo object. | python.library.tarfile#tarfile.TarInfo.fromtarfile |
TarInfo.gid
Group ID of the user who originally stored this member. | python.library.tarfile#tarfile.TarInfo.gid |
TarInfo.gname
Group name. | python.library.tarfile#tarfile.TarInfo.gname |
TarInfo.isblk()
Return True if it is a block device. | python.library.tarfile#tarfile.TarInfo.isblk |
TarInfo.ischr()
Return True if it is a character device. | python.library.tarfile#tarfile.TarInfo.ischr |
TarInfo.isdev()
Return True if it is a character device, block device or FIFO. | python.library.tarfile#tarfile.TarInfo.isdev |
TarInfo.isdir()
Return True if it is a directory. | python.library.tarfile#tarfile.TarInfo.isdir |
TarInfo.isfifo()
Return True if it is a FIFO. | python.library.tarfile#tarfile.TarInfo.isfifo |
TarInfo.isfile()
Return True if the TarInfo object is a regular file. | python.library.tarfile#tarfile.TarInfo.isfile |
TarInfo.islnk()
Return True if it is a hard link. | python.library.tarfile#tarfile.TarInfo.islnk |
TarInfo.isreg()
Same as isfile(). | python.library.tarfile#tarfile.TarInfo.isreg |
TarInfo.issym()
Return True if it is a symbolic link. | python.library.tarfile#tarfile.TarInfo.issym |
TarInfo.linkname
Name of the target file, which is only present in TarInfo objects of type LNKTYPE and SYMTYPE. | python.library.tarfile#tarfile.TarInfo.linkname |
TarInfo.mode
Permission bits. | python.library.tarfile#tarfile.TarInfo.mode |
TarInfo.mtime
Time of last modification. | python.library.tarfile#tarfile.TarInfo.mtime |
TarInfo.name
Name of the archive member. | python.library.tarfile#tarfile.TarInfo.name |
TarInfo.pax_headers
A dictionary containing key-value pairs of an associated pax extended header. | python.library.tarfile#tarfile.TarInfo.pax_headers |
TarInfo.size
Size in bytes. | python.library.tarfile#tarfile.TarInfo.size |
TarInfo.tobuf(format=DEFAULT_FORMAT, encoding=ENCODING, errors='surrogateescape')
Create a string buffer from a TarInfo object. For information on the arguments see the constructor of the TarFile class. Changed in version 3.2: Use 'surrogateescape' as the default for the errors argument. | python.library.tarfile#tarfile.TarInfo.tobuf |
TarInfo.type
File type. type is usually one of these constants: REGTYPE, AREGTYPE, LNKTYPE, SYMTYPE, DIRTYPE, FIFOTYPE, CONTTYPE, CHRTYPE, BLKTYPE, GNUTYPE_SPARSE. To determine the type of a TarInfo object more conveniently, use the is*() methods below. | python.library.tarfile#tarfile.TarInfo.type |
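The is*() methods mentioned above can be sketched as follows; the archive contains one directory and one regular file (names are illustrative), and each member is classified without touching TarInfo.type directly.

```python
import os
import tarfile
import tempfile

# Illustrative tree: a directory containing one regular file.
tmpdir = tempfile.mkdtemp()
os.mkdir(os.path.join(tmpdir, "d"))
with open(os.path.join(tmpdir, "d", "f.txt"), "w") as f:
    f.write("x")

archive = os.path.join(tmpdir, "t.tar")
with tarfile.open(archive, "w") as tar:
    tar.add(os.path.join(tmpdir, "d"), arcname="d")

with tarfile.open(archive) as tar:
    kinds = {m.name: ("dir" if m.isdir()
                      else "file" if m.isfile()
                      else "other")
             for m in tar.getmembers()}
print(kinds)
```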
TarInfo.uid
User ID of the user who originally stored this member. | python.library.tarfile#tarfile.TarInfo.uid |
TarInfo.uname
User name. | python.library.tarfile#tarfile.TarInfo.uname |
tarfile.USTAR_FORMAT
POSIX.1-1988 (ustar) format. | python.library.tarfile#tarfile.USTAR_FORMAT |
telnetlib — Telnet client Source code: Lib/telnetlib.py The telnetlib module provides a Telnet class that implements the Telnet protocol. See RFC 854 for details about the protocol. In addition, it provides symbolic constants for the protocol characters (see below), and for the telnet options. The symbolic names of the telnet options follow the definitions in arpa/telnet.h, with the leading TELOPT_ removed. For symbolic names of options which are traditionally not included in arpa/telnet.h, see the module source itself. The symbolic constants for the telnet commands are: IAC, DONT, DO, WONT, WILL, SE (Subnegotiation End), NOP (No Operation), DM (Data Mark), BRK (Break), IP (Interrupt process), AO (Abort output), AYT (Are You There), EC (Erase Character), EL (Erase Line), GA (Go Ahead), SB (Subnegotiation Begin).
class telnetlib.Telnet(host=None, port=0[, timeout])
Telnet represents a connection to a Telnet server. The instance is not initially connected; the open() method must be used to establish a connection. Alternatively, the host name and optional port number can also be passed to the constructor, in which case the connection to the server will be established before the constructor returns. The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). Do not reopen an already connected instance. This class has many read_*() methods. Note that some of them raise EOFError when the end of the connection is read, because they can return an empty string for other reasons. See the individual descriptions below. A Telnet object is a context manager and can be used in a with statement. When the with block ends, the close() method is called: >>> from telnetlib import Telnet
>>> with Telnet('localhost', 23) as tn:
... tn.interact()
...
Changed in version 3.6: Context manager support added
See also
RFC 854 - Telnet Protocol Specification
Definition of the Telnet protocol. Telnet Objects: Telnet instances have the following methods:
Telnet.read_until(expected, timeout=None)
Read until a given byte string, expected, is encountered or until timeout seconds have passed. When no match is found, return whatever is available instead, possibly empty bytes. Raise EOFError if the connection is closed and no cooked data is available.
Telnet.read_all()
Read all data until EOF as bytes; block until connection closed.
Telnet.read_some()
Read at least one byte of cooked data unless EOF is hit. Return b'' if EOF is hit. Block if no data is immediately available.
Telnet.read_very_eager()
Read everything that can be read without blocking (eager). Raise EOFError if the connection is closed and no cooked data is available. Otherwise, return b'' if no cooked data is available. Do not block unless in the midst of an IAC sequence.
Telnet.read_eager()
Read readily available data. Raise EOFError if the connection is closed and no cooked data is available. Otherwise, return b'' if no cooked data is available. Do not block unless in the midst of an IAC sequence.
Telnet.read_lazy()
Process and return data already in the queues (lazy). Raise EOFError if the connection is closed and no data is available. Otherwise, return b'' if no cooked data is available. Do not block unless in the midst of an IAC sequence.
Telnet.read_very_lazy()
Return any data available in the cooked queue (very lazy). Raise EOFError if the connection is closed and no data is available. Otherwise, return b'' if no cooked data is available. This method never blocks.
Telnet.read_sb_data()
Return the data collected between a SB/SE pair (suboption begin/end). The callback should access this data when it is invoked with a SE command. This method never blocks.
Telnet.open(host, port=0[, timeout])
Connect to a host. The optional second argument is the port number, which defaults to the standard Telnet port (23). The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). Do not try to reopen an already connected instance. Raises an auditing event telnetlib.Telnet.open with arguments self, host, port.
Telnet.msg(msg, *args)
Print a debug message when the debug level is > 0. If extra arguments are present, they are substituted in the message using the standard string formatting operator.
Telnet.set_debuglevel(debuglevel)
Set the debug level. The higher the value of debuglevel, the more debug output you get (on sys.stdout).
Telnet.close()
Close the connection.
Telnet.get_socket()
Return the socket object used internally.
Telnet.fileno()
Return the file descriptor of the socket object used internally.
Telnet.write(buffer)
Write a byte string to the socket, doubling any IAC characters. This can block if the connection is blocked. May raise OSError if the connection is closed. Raises an auditing event telnetlib.Telnet.write with arguments self, buffer. Changed in version 3.3: This method used to raise socket.error, which is now an alias of OSError.
Telnet.interact()
Interaction function, emulates a very dumb Telnet client.
Telnet.mt_interact()
Multithreaded version of interact().
Telnet.expect(list, timeout=None)
Read until one of a list of regular expressions matches. The first argument is a list of regular expressions, either compiled (regex objects) or uncompiled (byte strings). The optional second argument is a timeout, in seconds; the default is to block indefinitely. Return a tuple of three items: the index in the list of the first regular expression that matches; the match object returned; and the bytes read up till and including the match. If end of file is found and no bytes were read, raise EOFError. Otherwise, when nothing matches, return (-1, None, data) where data is the bytes received so far (may be empty bytes if a timeout happened). If a regular expression ends with a greedy match (such as .*) or if more than one expression can match the same input, the results are non-deterministic, and may depend on the I/O timing.
Telnet.set_option_negotiation_callback(callback)
Each time a telnet option is read on the input flow, this callback (if set) is called with the following parameters: callback(telnet socket, command (DO/DONT/WILL/WONT), option). No other action is done afterwards by telnetlib.
Telnet Example A simple example illustrating typical use: import getpass
import telnetlib
HOST = "localhost"
user = input("Enter your remote account: ")
password = getpass.getpass()
tn = telnetlib.Telnet(HOST)
tn.read_until(b"login: ")
tn.write(user.encode('ascii') + b"\n")
if password:
tn.read_until(b"Password: ")
tn.write(password.encode('ascii') + b"\n")
tn.write(b"ls\n")
tn.write(b"exit\n")
print(tn.read_all().decode('ascii')) | python.library.telnetlib |
class telnetlib.Telnet(host=None, port=0[, timeout])
Telnet represents a connection to a Telnet server. The instance is not initially connected; the open() method must be used to establish a connection. Alternatively, the host name and optional port number can also be passed to the constructor, in which case the connection to the server will be established before the constructor returns. The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). Do not reopen an already connected instance. This class has many read_*() methods. Note that some of them raise EOFError when the end of the connection is read, because they can return an empty string for other reasons. See the individual descriptions below. A Telnet object is a context manager and can be used in a with statement. When the with block ends, the close() method is called: >>> from telnetlib import Telnet
>>> with Telnet('localhost', 23) as tn:
... tn.interact()
...
Changed in version 3.6: Context manager support added | python.library.telnetlib#telnetlib.Telnet |
Telnet.close()
Close the connection. | python.library.telnetlib#telnetlib.Telnet.close |
Telnet.expect(list, timeout=None)
Read until one of a list of regular expressions matches. The first argument is a list of regular expressions, either compiled (regex objects) or uncompiled (byte strings). The optional second argument is a timeout, in seconds; the default is to block indefinitely. Return a tuple of three items: the index in the list of the first regular expression that matches; the match object returned; and the bytes read up till and including the match. If end of file is found and no bytes were read, raise EOFError. Otherwise, when nothing matches, return (-1, None, data) where data is the bytes received so far (may be empty bytes if a timeout happened). If a regular expression ends with a greedy match (such as .*) or if more than one expression can match the same input, the results are non-deterministic, and may depend on the I/O timing. | python.library.telnetlib#telnetlib.Telnet.expect |
Telnet.fileno()
Return the file descriptor of the socket object used internally. | python.library.telnetlib#telnetlib.Telnet.fileno |
Telnet.get_socket()
Return the socket object used internally. | python.library.telnetlib#telnetlib.Telnet.get_socket |
Telnet.interact()
Interaction function, emulates a very dumb Telnet client. | python.library.telnetlib#telnetlib.Telnet.interact |
Telnet.msg(msg, *args)
Print a debug message when the debug level is > 0. If extra arguments are present, they are substituted in the message using the standard string formatting operator. | python.library.telnetlib#telnetlib.Telnet.msg |
Telnet.mt_interact()
Multithreaded version of interact(). | python.library.telnetlib#telnetlib.Telnet.mt_interact |
Telnet.open(host, port=0[, timeout])
Connect to a host. The optional second argument is the port number, which defaults to the standard Telnet port (23). The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). Do not try to reopen an already connected instance. Raises an auditing event telnetlib.Telnet.open with arguments self, host, port. | python.library.telnetlib#telnetlib.Telnet.open |
Telnet.read_all()
Read all data until EOF as bytes; block until connection closed. | python.library.telnetlib#telnetlib.Telnet.read_all |
Telnet.read_eager()
Read readily available data. Raise EOFError if the connection is closed and no cooked data is available. Otherwise, return b'' if no cooked data is available. Do not block unless in the midst of an IAC sequence. | python.library.telnetlib#telnetlib.Telnet.read_eager |
Telnet.read_lazy()
Process and return data already in the queues (lazy). Raise EOFError if the connection is closed and no data is available. Otherwise, return b'' if no cooked data is available. Do not block unless in the midst of an IAC sequence. | python.library.telnetlib#telnetlib.Telnet.read_lazy |
Telnet.read_sb_data()
Return the data collected between a SB/SE pair (suboption begin/end). The callback should access this data when it is invoked with a SE command. This method never blocks. | python.library.telnetlib#telnetlib.Telnet.read_sb_data |
Telnet.read_some()
Read at least one byte of cooked data unless EOF is hit. Return b'' if EOF is hit. Block if no data is immediately available. | python.library.telnetlib#telnetlib.Telnet.read_some |
Telnet.read_until(expected, timeout=None)
Read until a given byte string, expected, is encountered or until timeout seconds have passed. When no match is found, return whatever is available instead, possibly empty bytes. Raise EOFError if the connection is closed and no cooked data is available. | python.library.telnetlib#telnetlib.Telnet.read_until |
Telnet.read_very_eager()
Read everything that can be read without blocking (eager). Raise EOFError if the connection is closed and no cooked data is available. Otherwise, return b'' if no cooked data is available. Do not block unless in the midst of an IAC sequence. | python.library.telnetlib#telnetlib.Telnet.read_very_eager |
Telnet.read_very_lazy()
Return any data available in the cooked queue (very lazy). Raise EOFError if the connection is closed and no data is available. Otherwise, return b'' if no cooked data is available. This method never blocks. | python.library.telnetlib#telnetlib.Telnet.read_very_lazy |
Telnet.set_debuglevel(debuglevel)
Set the debug level. The higher the value of debuglevel, the more debug output you get (on sys.stdout). | python.library.telnetlib#telnetlib.Telnet.set_debuglevel |
Telnet.set_option_negotiation_callback(callback)
Each time a telnet option is read on the input flow, this callback (if set) is called with the following parameters: callback(telnet socket, command (DO/DONT/WILL/WONT), option). No other action is done afterwards by telnetlib. | python.library.telnetlib#telnetlib.Telnet.set_option_negotiation_callback |
Telnet.write(buffer)
Write a byte string to the socket, doubling any IAC characters. This can block if the connection is blocked. May raise OSError if the connection is closed. Raises an auditing event telnetlib.Telnet.write with arguments self, buffer. Changed in version 3.3: This method used to raise socket.error, which is now an alias of OSError. | python.library.telnetlib#telnetlib.Telnet.write |
tempfile — Generate temporary files and directories Source code: Lib/tempfile.py This module creates temporary files and directories. It works on all supported platforms. TemporaryFile, NamedTemporaryFile, TemporaryDirectory, and SpooledTemporaryFile are high-level interfaces which provide automatic cleanup and can be used as context managers. mkstemp() and mkdtemp() are lower-level functions which require manual cleanup. All the user-callable functions and constructors take additional arguments which allow direct control over the location and name of temporary files and directories. File names used by this module include a string of random characters which allows those files to be securely created in shared temporary directories. To maintain backward compatibility, the argument order is somewhat odd; it is recommended to use keyword arguments for clarity. The module defines the following user-callable items:
tempfile.TemporaryFile(mode='w+b', buffering=-1, encoding=None, newline=None, suffix=None, prefix=None, dir=None, *, errors=None)
Return a file-like object that can be used as a temporary storage area. The file is created securely, using the same rules as mkstemp(). It will be destroyed as soon as it is closed (including an implicit close when the object is garbage collected). Under Unix, the directory entry for the file is either not created at all or is removed immediately after the file is created. Other platforms do not support this; your code should not rely on a temporary file created using this function having or not having a visible name in the file system. The resulting object can be used as a context manager (see Examples). On completion of the context or destruction of the file object the temporary file will be removed from the filesystem. The mode parameter defaults to 'w+b' so that the file created can be read and written without being closed. Binary mode is used so that it behaves consistently on all platforms without regard for the data that is stored. buffering, encoding, errors and newline are interpreted as for open(). The dir, prefix and suffix parameters have the same meaning and defaults as with mkstemp(). The returned object is a true file object on POSIX platforms. On other platforms, it is a file-like object whose file attribute is the underlying true file object. The os.O_TMPFILE flag is used if it is available and works (Linux-specific, requires Linux kernel 3.11 or later). Raises an auditing event tempfile.mkstemp with argument fullpath. Changed in version 3.5: The os.O_TMPFILE flag is now used if available. Changed in version 3.8: Added errors parameter.
tempfile.NamedTemporaryFile(mode='w+b', buffering=-1, encoding=None, newline=None, suffix=None, prefix=None, dir=None, delete=True, *, errors=None)
This function operates exactly as TemporaryFile() does, except that the file is guaranteed to have a visible name in the file system (on Unix, the directory entry is not unlinked). That name can be retrieved from the name attribute of the returned file-like object. Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later). If delete is true (the default), the file is deleted as soon as it is closed. The returned object is always a file-like object whose file attribute is the underlying true file object. This file-like object can be used in a with statement, just like a normal file. Raises an auditing event tempfile.mkstemp with argument fullpath. Changed in version 3.8: Added errors parameter.
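The delete parameter described above can be sketched like this: with delete=False the file survives close() and must be removed manually, and its path is available via the name attribute. The suffix is an illustrative choice.

```python
import os
import tempfile

with tempfile.NamedTemporaryFile(mode="w+", suffix=".txt", delete=False) as f:
    f.write("scratch")
    path = f.name            # visible name in the file system

still_there = os.path.exists(path)   # survives close() because delete=False
os.unlink(path)                      # manual cleanup is our responsibility
print(still_there, os.path.exists(path))
```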
tempfile.SpooledTemporaryFile(max_size=0, mode='w+b', buffering=-1, encoding=None, newline=None, suffix=None, prefix=None, dir=None, *, errors=None)
This function operates exactly as TemporaryFile() does, except that data is spooled in memory until the file size exceeds max_size, or until the file’s fileno() method is called, at which point the contents are written to disk and operation proceeds as with TemporaryFile(). The resulting file has one additional method, rollover(), which causes the file to roll over to an on-disk file regardless of its size. The returned object is a file-like object whose _file attribute is either an io.BytesIO or io.TextIOWrapper object (depending on whether binary or text mode was specified) or a true file object, depending on whether rollover() has been called. This file-like object can be used in a with statement, just like a normal file. Changed in version 3.3: the truncate method now accepts a size argument. Changed in version 3.8: Added errors parameter.
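A sketch of the spooling behaviour described above, inspecting the _file attribute that the entry mentions: data is held in an io.BytesIO until rollover() (or the size threshold) moves it to a real on-disk file. Relying on _file outside of illustration is an assumption about the implementation detail documented here.

```python
import io
import tempfile

with tempfile.SpooledTemporaryFile(max_size=1024) as f:
    f.write(b"small payload")
    before = isinstance(f._file, io.BytesIO)   # still spooled in memory
    f.rollover()                               # force the on-disk file
    after = isinstance(f._file, io.BytesIO)    # now a true file object
    f.seek(0)
    data = f.read()
print(before, after, data)
```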
tempfile.TemporaryDirectory(suffix=None, prefix=None, dir=None)
This function securely creates a temporary directory using the same rules as mkdtemp(). The resulting object can be used as a context manager (see Examples). On completion of the context or destruction of the temporary directory object the newly created temporary directory and all its contents are removed from the filesystem. The directory name can be retrieved from the name attribute of the returned object. When the returned object is used as a context manager, the name will be assigned to the target of the as clause in the with statement, if there is one. The directory can be explicitly cleaned up by calling the cleanup() method. Raises an auditing event tempfile.mkdtemp with argument fullpath. New in version 3.2.
tempfile.mkstemp(suffix=None, prefix=None, dir=None, text=False)
Creates a temporary file in the most secure manner possible. There are no race conditions in the file’s creation, assuming that the platform properly implements the os.O_EXCL flag for os.open(). The file is readable and writable only by the creating user ID. If the platform uses permission bits to indicate whether a file is executable, the file is executable by no one. The file descriptor is not inherited by child processes. Unlike TemporaryFile(), the user of mkstemp() is responsible for deleting the temporary file when done with it. If suffix is not None, the file name will end with that suffix, otherwise there will be no suffix. mkstemp() does not put a dot between the file name and the suffix; if you need one, put it at the beginning of suffix. If prefix is not None, the file name will begin with that prefix; otherwise, a default prefix is used. The default is the return value of gettempprefix() or gettempprefixb(), as appropriate. If dir is not None, the file will be created in that directory; otherwise, a default directory is used. The default directory is chosen from a platform-dependent list, but the user of the application can control the directory location by setting the TMPDIR, TEMP or TMP environment variables. There is thus no guarantee that the generated filename will have any nice properties, such as not requiring quoting when passed to external commands via os.popen(). If any of suffix, prefix, and dir are not None, they must be the same type. If they are bytes, the returned name will be bytes instead of str. If you want to force a bytes return value with otherwise default behavior, pass suffix=b''. If text is specified and true, the file is opened in text mode. Otherwise (the default), the file is opened in binary mode. mkstemp() returns a tuple containing an OS-level handle to an open file (as would be returned by os.open()) and the absolute pathname of that file, in that order. Raises an auditing event tempfile.mkstemp with argument fullpath. 
Changed in version 3.5: suffix, prefix, and dir may now be supplied in bytes in order to obtain a bytes return value. Prior to this, only str was allowed. suffix and prefix now accept and default to None to cause an appropriate default value to be used. Changed in version 3.6: The dir parameter now accepts a path-like object.
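The manual-cleanup responsibility above can be sketched as follows: mkstemp() returns an OS-level file descriptor plus the absolute path, and the caller must close the descriptor and delete the file. The suffix and prefix values are illustrative.

```python
import os
import tempfile

fd, path = tempfile.mkstemp(suffix=".log", prefix="demo_")
try:
    os.write(fd, b"one line\n")      # write through the raw descriptor
finally:
    os.close(fd)                     # the caller closes the descriptor

has_prefix = os.path.basename(path).startswith("demo_")
has_suffix = path.endswith(".log")
os.unlink(path)                      # the caller deletes the file when done
print(has_prefix, has_suffix)
```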
tempfile.mkdtemp(suffix=None, prefix=None, dir=None)
Creates a temporary directory in the most secure manner possible. There are no race conditions in the directory’s creation. The directory is readable, writable, and searchable only by the creating user ID. The user of mkdtemp() is responsible for deleting the temporary directory and its contents when done with it. The prefix, suffix, and dir arguments are the same as for mkstemp(). mkdtemp() returns the absolute pathname of the new directory. Raises an auditing event tempfile.mkdtemp with argument fullpath. Changed in version 3.5: suffix, prefix, and dir may now be supplied in bytes in order to obtain a bytes return value. Prior to this, only str was allowed. suffix and prefix now accept and default to None to cause an appropriate default value to be used. Changed in version 3.6: The dir parameter now accepts a path-like object.
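A companion sketch for mkdtemp(): unlike TemporaryDirectory, deletion of the returned directory and its contents is left to the caller. The prefix and file name are illustrative.

```python
import os
import shutil
import tempfile

d = tempfile.mkdtemp(prefix="build_")    # absolute path of the new directory
with open(os.path.join(d, "artifact.txt"), "w") as f:
    f.write("x")

created = os.path.isdir(d)
shutil.rmtree(d)                         # manual cleanup of the whole tree
print(created, os.path.isdir(d))
```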
tempfile.gettempdir()
Return the name of the directory used for temporary files. This defines the default value for the dir argument to all functions in this module. Python searches a standard list of directories to find one which the calling user can create files in. The list is: The directory named by the TMPDIR environment variable. The directory named by the TEMP environment variable. The directory named by the TMP environment variable.
A platform-specific location: On Windows, the directories C:\TEMP, C:\TMP, \TEMP, and \TMP, in that order. On all other platforms, the directories /tmp, /var/tmp, and /usr/tmp, in that order. As a last resort, the current working directory. The result of this search is cached, see the description of tempdir below.
tempfile.gettempdirb()
Same as gettempdir() but the return value is in bytes. New in version 3.5.
tempfile.gettempprefix()
Return the filename prefix used to create temporary files. This does not contain the directory component.
tempfile.gettempprefixb()
Same as gettempprefix() but the return value is in bytes. New in version 3.5.
The module uses a global variable to store the name of the directory used for temporary files returned by gettempdir(). It can be set directly to override the selection process, but this is discouraged. All functions in this module take a dir argument which can be used to specify the directory and this is the recommended approach.
tempfile.tempdir
When set to a value other than None, this variable defines the default value for the dir argument to the functions defined in this module. If tempdir is None (the default) at any call to any of the above functions except gettempprefix() it is initialized following the algorithm described in gettempdir().
Examples Here are some examples of typical usage of the tempfile module: >>> import tempfile
# create a temporary file and write some data to it
>>> fp = tempfile.TemporaryFile()
>>> fp.write(b'Hello world!')
# read data from file
>>> fp.seek(0)
>>> fp.read()
b'Hello world!'
# close the file, it will be removed
>>> fp.close()
# create a temporary file using a context manager
>>> with tempfile.TemporaryFile() as fp:
...     fp.write(b'Hello world!')
...     fp.seek(0)
...     fp.read()
b'Hello world!'
>>>
# file is now closed and removed
# create a temporary directory using the context manager
>>> with tempfile.TemporaryDirectory() as tmpdirname:
...     print('created temporary directory', tmpdirname)
>>>
# directory and contents have been removed
Deprecated functions and variables A historical way to create temporary files was to first generate a file name with the mktemp() function and then create a file using this name. Unfortunately this is not secure, because a different process may create a file with this name in the time between the call to mktemp() and the subsequent attempt to create the file by the first process. The solution is to combine the two steps and create the file immediately. This approach is used by mkstemp() and the other functions described above.
tempfile.mktemp(suffix='', prefix='tmp', dir=None)
Deprecated since version 2.3: Use mkstemp() instead. Return an absolute pathname of a file that did not exist at the time the call is made. The prefix, suffix, and dir arguments are similar to those of mkstemp(), except that bytes file names, suffix=None and prefix=None are not supported. Warning Use of this function may introduce a security hole in your program. By the time you get around to doing anything with the file name it returns, someone else may have beaten you to the punch. mktemp() usage can be replaced easily with NamedTemporaryFile(), passing it the delete=False parameter: >>> import os
>>> from tempfile import NamedTemporaryFile
>>> f = NamedTemporaryFile(delete=False)
>>> f.name
'/tmp/tmptjujjt'
>>> f.write(b"Hello World!\n")
13
>>> f.close()
>>> os.unlink(f.name)
>>> os.path.exists(f.name)
False | python.library.tempfile |
tempfile.gettempdir()
Return the name of the directory used for temporary files. This defines the default value for the dir argument to all functions in this module. Python searches a standard list of directories to find one which the calling user can create files in. The list is: The directory named by the TMPDIR environment variable. The directory named by the TEMP environment variable. The directory named by the TMP environment variable.
A platform-specific location: On Windows, the directories C:\TEMP, C:\TMP, \TEMP, and \TMP, in that order. On all other platforms, the directories /tmp, /var/tmp, and /usr/tmp, in that order. As a last resort, the current working directory. The result of this search is cached, see the description of tempdir below. | python.library.tempfile#tempfile.gettempdir |
tempfile.gettempdirb()
Same as gettempdir() but the return value is in bytes. New in version 3.5. | python.library.tempfile#tempfile.gettempdirb |
tempfile.gettempprefix()
Return the filename prefix used to create temporary files. This does not contain the directory component. | python.library.tempfile#tempfile.gettempprefix |
tempfile.gettempprefixb()
Same as gettempprefix() but the return value is in bytes. New in version 3.5. | python.library.tempfile#tempfile.gettempprefixb |
tempfile.mkdtemp(suffix=None, prefix=None, dir=None)
Creates a temporary directory in the most secure manner possible. There are no race conditions in the directory’s creation. The directory is readable, writable, and searchable only by the creating user ID. The user of mkdtemp() is responsible for deleting the temporary directory and its contents when done with it. The prefix, suffix, and dir arguments are the same as for mkstemp(). mkdtemp() returns the absolute pathname of the new directory. Raises an auditing event tempfile.mkdtemp with argument fullpath. Changed in version 3.5: suffix, prefix, and dir may now be supplied in bytes in order to obtain a bytes return value. Prior to this, only str was allowed. suffix and prefix now accept and default to None to cause an appropriate default value to be used. Changed in version 3.6: The dir parameter now accepts a path-like object. | python.library.tempfile#tempfile.mkdtemp |
tempfile.mkstemp(suffix=None, prefix=None, dir=None, text=False)
Creates a temporary file in the most secure manner possible. There are no race conditions in the file’s creation, assuming that the platform properly implements the os.O_EXCL flag for os.open(). The file is readable and writable only by the creating user ID. If the platform uses permission bits to indicate whether a file is executable, the file is executable by no one. The file descriptor is not inherited by child processes. Unlike TemporaryFile(), the user of mkstemp() is responsible for deleting the temporary file when done with it. If suffix is not None, the file name will end with that suffix, otherwise there will be no suffix. mkstemp() does not put a dot between the file name and the suffix; if you need one, put it at the beginning of suffix. If prefix is not None, the file name will begin with that prefix; otherwise, a default prefix is used. The default is the return value of gettempprefix() or gettempprefixb(), as appropriate. If dir is not None, the file will be created in that directory; otherwise, a default directory is used. The default directory is chosen from a platform-dependent list, but the user of the application can control the directory location by setting the TMPDIR, TEMP or TMP environment variables. There is thus no guarantee that the generated filename will have any nice properties, such as not requiring quoting when passed to external commands via os.popen(). If any of suffix, prefix, and dir are not None, they must be the same type. If they are bytes, the returned name will be bytes instead of str. If you want to force a bytes return value with otherwise default behavior, pass suffix=b''. If text is specified and true, the file is opened in text mode. Otherwise (the default), the file is opened in binary mode. mkstemp() returns a tuple containing an OS-level handle to an open file (as would be returned by os.open()) and the absolute pathname of that file, in that order. Raises an auditing event tempfile.mkstemp with argument fullpath.
Changed in version 3.5: suffix, prefix, and dir may now be supplied in bytes in order to obtain a bytes return value. Prior to this, only str was allowed. suffix and prefix now accept and default to None to cause an appropriate default value to be used. Changed in version 3.6: The dir parameter now accepts a path-like object. | python.library.tempfile#tempfile.mkstemp |
tempfile.mktemp(suffix='', prefix='tmp', dir=None)
Deprecated since version 2.3: Use mkstemp() instead. Return an absolute pathname of a file that did not exist at the time the call is made. The prefix, suffix, and dir arguments are similar to those of mkstemp(), except that bytes file names, suffix=None and prefix=None are not supported. Warning Use of this function may introduce a security hole in your program. By the time you get around to doing anything with the file name it returns, someone else may have beaten you to the punch. mktemp() usage can be replaced easily with NamedTemporaryFile(), passing it the delete=False parameter: >>> import os
>>> from tempfile import NamedTemporaryFile
>>> f = NamedTemporaryFile(delete=False)
>>> f.name
'/tmp/tmptjujjt'
>>> f.write(b"Hello World!\n")
13
>>> f.close()
>>> os.unlink(f.name)
>>> os.path.exists(f.name)
False | python.library.tempfile#tempfile.mktemp |
tempfile.NamedTemporaryFile(mode='w+b', buffering=-1, encoding=None, newline=None, suffix=None, prefix=None, dir=None, delete=True, *, errors=None)
This function operates exactly as TemporaryFile() does, except that the file is guaranteed to have a visible name in the file system (on Unix, the directory entry is not unlinked). That name can be retrieved from the name attribute of the returned file-like object. Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later). If delete is true (the default), the file is deleted as soon as it is closed. The returned object is always a file-like object whose file attribute is the underlying true file object. This file-like object can be used in a with statement, just like a normal file. Raises an auditing event tempfile.mkstemp with argument fullpath. Changed in version 3.8: Added errors parameter. | python.library.tempfile#tempfile.NamedTemporaryFile |
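A short sketch of the behavior described above: while the file is open its name refers to a real directory entry, and with the default delete=True the entry disappears on close (the suffix is illustrative):

```python
import os
import tempfile

# The file has a visible name on the filesystem while it is open;
# with delete=True (the default) it is removed as soon as it is closed.
with tempfile.NamedTemporaryFile(suffix=".log") as tmp:
    tmp.write(b"line 1\n")
    tmp.flush()
    name = tmp.name                  # a usable path while the file is open
    existed_while_open = os.path.exists(name)
exists_after_close = os.path.exists(name)
```

Whether a second open() of `name` succeeds while the file is still open is platform-dependent, as noted above, so the sketch only checks for the directory entry.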
tempfile.SpooledTemporaryFile(max_size=0, mode='w+b', buffering=-1, encoding=None, newline=None, suffix=None, prefix=None, dir=None, *, errors=None)
This function operates exactly as TemporaryFile() does, except that data is spooled in memory until the file size exceeds max_size, or until the file’s fileno() method is called, at which point the contents are written to disk and operation proceeds as with TemporaryFile(). The resulting file has one additional method, rollover(), which causes the file to roll over to an on-disk file regardless of its size. The returned object is a file-like object whose _file attribute is either an io.BytesIO or io.TextIOWrapper object (depending on whether binary or text mode was specified) or a true file object, depending on whether rollover() has been called. This file-like object can be used in a with statement, just like a normal file. Changed in version 3.3: the truncate method now accepts a size argument. Changed in version 3.8: Added errors parameter. | python.library.tempfile#tempfile.SpooledTemporaryFile |
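The spooling behavior can be sketched with the public rollover() method (max_size and the payload are illustrative):

```python
import tempfile

# Data is buffered in memory until max_size is exceeded, fileno() is
# called, or rollover() is invoked; afterwards it behaves like a
# regular on-disk TemporaryFile.
with tempfile.SpooledTemporaryFile(max_size=1024) as spool:
    spool.write(b"x" * 100)   # well under max_size: still held in memory
    spool.rollover()          # force the contents onto a real on-disk file
    spool.seek(0)
    data = spool.read()       # the data survives the rollover intact
```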
tempfile.tempdir
When set to a value other than None, this variable defines the default value for the dir argument to the functions defined in this module. If tempdir is None (the default) at any call to any of the above functions except gettempprefix() it is initialized following the algorithm described in gettempdir(). | python.library.tempfile#tempfile.tempdir |
tempfile.TemporaryDirectory(suffix=None, prefix=None, dir=None)
This function securely creates a temporary directory using the same rules as mkdtemp(). The resulting object can be used as a context manager (see Examples). On completion of the context or destruction of the temporary directory object the newly created temporary directory and all its contents are removed from the filesystem. The directory name can be retrieved from the name attribute of the returned object. When the returned object is used as a context manager, the name will be assigned to the target of the as clause in the with statement, if there is one. The directory can be explicitly cleaned up by calling the cleanup() method. Raises an auditing event tempfile.mkdtemp with argument fullpath. New in version 3.2. | python.library.tempfile#tempfile.TemporaryDirectory |
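Besides the with-statement form shown in the Examples, the cleanup() method mentioned above allows explicit lifetime management; a sketch (the prefix and file name are illustrative):

```python
import os
import tempfile

# Manage the directory's lifetime explicitly instead of via a context
# manager; cleanup() removes the directory and everything inside it.
td = tempfile.TemporaryDirectory(prefix="work_")
path = td.name
with open(os.path.join(path, "notes.txt"), "w") as f:
    f.write("scratch")
exists_before = os.path.isdir(path)
td.cleanup()                         # directory and contents are removed
exists_after = os.path.isdir(path)
```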
tempfile.TemporaryFile(mode='w+b', buffering=-1, encoding=None, newline=None, suffix=None, prefix=None, dir=None, *, errors=None)
Return a file-like object that can be used as a temporary storage area. The file is created securely, using the same rules as mkstemp(). It will be destroyed as soon as it is closed (including an implicit close when the object is garbage collected). Under Unix, the directory entry for the file is either not created at all or is removed immediately after the file is created. Other platforms do not support this; your code should not rely on a temporary file created using this function having or not having a visible name in the file system. The resulting object can be used as a context manager (see Examples). On completion of the context or destruction of the file object the temporary file will be removed from the filesystem. The mode parameter defaults to 'w+b' so that the file created can be read and written without being closed. Binary mode is used so that it behaves consistently on all platforms without regard for the data that is stored. buffering, encoding, errors and newline are interpreted as for open(). The dir, prefix and suffix parameters have the same meaning and defaults as with mkstemp(). The returned object is a true file object on POSIX platforms. On other platforms, it is a file-like object whose file attribute is the underlying true file object. The os.O_TMPFILE flag is used if it is available and works (Linux-specific, requires Linux kernel 3.11 or later). Raises an auditing event tempfile.mkstemp with argument fullpath. Changed in version 3.5: The os.O_TMPFILE flag is now used if available. Changed in version 3.8: Added errors parameter. | python.library.tempfile#tempfile.TemporaryFile |
termios — POSIX style tty control This module provides an interface to the POSIX calls for tty I/O control. For a complete description of these calls, see termios(3) Unix manual page. It is only available for those Unix versions that support POSIX termios style tty I/O control configured during installation. All functions in this module take a file descriptor fd as their first argument. This can be an integer file descriptor, such as returned by sys.stdin.fileno(), or a file object, such as sys.stdin itself. This module also defines all the constants needed to work with the functions provided here; these have the same name as their counterparts in C. Please refer to your system documentation for more information on using these terminal control interfaces. The module defines the following functions:
termios.tcgetattr(fd)
Return a list containing the tty attributes for file descriptor fd, as follows: [iflag, oflag, cflag, lflag, ispeed, ospeed, cc] where cc is a list of the tty special characters (each a string of length 1, except the items with indices VMIN and VTIME, which are integers when these fields are defined). The interpretation of the flags and the speeds as well as the indexing in the cc array must be done using the symbolic constants defined in the termios module.
termios.tcsetattr(fd, when, attributes)
Set the tty attributes for file descriptor fd from the attributes, which is a list like the one returned by tcgetattr(). The when argument determines when the attributes are changed: TCSANOW to change immediately, TCSADRAIN to change after transmitting all queued output, or TCSAFLUSH to change after transmitting all queued output and discarding all queued input.
termios.tcsendbreak(fd, duration)
Send a break on file descriptor fd. A zero duration sends a break for 0.25–0.5 seconds; a nonzero duration has a system dependent meaning.
termios.tcdrain(fd)
Wait until all output written to file descriptor fd has been transmitted.
termios.tcflush(fd, queue)
Discard queued data on file descriptor fd. The queue selector specifies which queue: TCIFLUSH for the input queue, TCOFLUSH for the output queue, or TCIOFLUSH for both queues.
termios.tcflow(fd, action)
Suspend or resume input or output on file descriptor fd. The action argument can be TCOOFF to suspend output, TCOON to restart output, TCIOFF to suspend input, or TCION to restart input.
See also
Module tty
Convenience functions for common terminal control operations. Example Here’s a function that prompts for a password with echoing turned off. Note the technique using a separate tcgetattr() call and a try … finally statement to ensure that the old tty attributes are restored exactly no matter what happens: def getpass(prompt="Password: "):
    import termios, sys
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    new = termios.tcgetattr(fd)
    new[3] = new[3] & ~termios.ECHO  # lflags
    try:
        termios.tcsetattr(fd, termios.TCSADRAIN, new)
        passwd = input(prompt)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)
    return passwd | python.library.termios
termios.tcdrain(fd)
Wait until all output written to file descriptor fd has been transmitted. | python.library.termios#termios.tcdrain |
termios.tcflow(fd, action)
Suspend or resume input or output on file descriptor fd. The action argument can be TCOOFF to suspend output, TCOON to restart output, TCIOFF to suspend input, or TCION to restart input. | python.library.termios#termios.tcflow |
termios.tcflush(fd, queue)
Discard queued data on file descriptor fd. The queue selector specifies which queue: TCIFLUSH for the input queue, TCOFLUSH for the output queue, or TCIOFLUSH for both queues. | python.library.termios#termios.tcflush |
termios.tcgetattr(fd)
Return a list containing the tty attributes for file descriptor fd, as follows: [iflag, oflag, cflag, lflag, ispeed, ospeed, cc] where cc is a list of the tty special characters (each a string of length 1, except the items with indices VMIN and VTIME, which are integers when these fields are defined). The interpretation of the flags and the speeds as well as the indexing in the cc array must be done using the symbolic constants defined in the termios module. | python.library.termios#termios.tcgetattr |
termios.tcsendbreak(fd, duration)
Send a break on file descriptor fd. A zero duration sends a break for 0.25–0.5 seconds; a nonzero duration has a system dependent meaning. | python.library.termios#termios.tcsendbreak |
termios.tcsetattr(fd, when, attributes)
Set the tty attributes for file descriptor fd from the attributes, which is a list like the one returned by tcgetattr(). The when argument determines when the attributes are changed: TCSANOW to change immediately, TCSADRAIN to change after transmitting all queued output, or TCSAFLUSH to change after transmitting all queued output and discarding all queued input. | python.library.termios#termios.tcsetattr |
test — Regression tests package for Python Note The test package is meant for internal use by Python only. It is documented for the benefit of the core developers of Python. Any use of this package outside of Python’s standard library is discouraged as code mentioned here can change or be removed without notice between releases of Python. The test package contains all regression tests for Python as well as the modules test.support and test.regrtest. test.support is used to enhance your tests while test.regrtest drives the testing suite. Each module in the test package whose name starts with test_ is a testing suite for a specific module or feature. All new tests should be written using the unittest or doctest module. Some older tests are written using a “traditional” testing style that compares output printed to sys.stdout; this style of test is considered deprecated. See also
Module unittest
Writing PyUnit regression tests.
Module doctest
Tests embedded in documentation strings. Writing Unit Tests for the test package It is preferred that tests that use the unittest module follow a few guidelines. One is to name the test module by starting it with test_ and end it with the name of the module being tested. The test methods in the test module should start with test_ and end with a description of what the method is testing. This is needed so that the methods are recognized by the test driver as test methods. Also, no documentation string for the method should be included. A comment (such as # Tests function returns only True or False) should be used to provide documentation for test methods. This is done because documentation strings get printed out if they exist and thus what test is being run is not stated. A basic boilerplate is often used: import unittest
from test import support

class MyTestCase1(unittest.TestCase):

    # Only use setUp() and tearDown() if necessary

    def setUp(self):
        ... code to execute in preparation for tests ...

    def tearDown(self):
        ... code to execute to clean up after tests ...

    def test_feature_one(self):
        # Test feature one.
        ... testing code ...

    def test_feature_two(self):
        # Test feature two.
        ... testing code ...

    ... more test methods ...

class MyTestCase2(unittest.TestCase):
    ... same structure as MyTestCase1 ...

... more test classes ...

if __name__ == '__main__':
    unittest.main()
This code pattern allows the testing suite to be run by test.regrtest, on its own as a script that supports the unittest CLI, or via the python -m unittest CLI. The goal for regression testing is to try to break code. This leads to a few guidelines to be followed: The testing suite should exercise all classes, functions, and constants. This includes not just the external API that is to be presented to the outside world but also “private” code. Whitebox testing (examining the code being tested when the tests are being written) is preferred. Blackbox testing (testing only the published user interface) is not complete enough to make sure all boundary and edge cases are tested. Make sure all possible values are tested including invalid ones. This makes sure that not only all valid values are acceptable but also that improper values are handled correctly. Exhaust as many code paths as possible. Test where branching occurs and thus tailor input to make sure as many different paths through the code are taken. Add an explicit test for any bugs discovered for the tested code. This will make sure that the error does not crop up again if the code is changed in the future. Make sure to clean up after your tests (such as close and remove all temporary files). If a test is dependent on a specific condition of the operating system then verify the condition already exists before attempting the test. Import as few modules as possible and do it as soon as possible. This minimizes external dependencies of tests and also minimizes possible anomalous behavior from side-effects of importing a module.
Try to maximize code reuse. On occasion, tests will vary by something as small as what type of input is used. Minimize code duplication by subclassing a basic test class with a class that specifies the input: class TestFuncAcceptsSequencesMixin:
    func = mySuperWhammyFunction

    def test_func(self):
        self.func(self.arg)

class AcceptLists(TestFuncAcceptsSequencesMixin, unittest.TestCase):
    arg = [1, 2, 3]

class AcceptStrings(TestFuncAcceptsSequencesMixin, unittest.TestCase):
    arg = 'abc'

class AcceptTuples(TestFuncAcceptsSequencesMixin, unittest.TestCase):
    arg = (1, 2, 3)
When using this pattern, remember that all classes that inherit from unittest.TestCase are run as tests. The Mixin class in the example above does not have any data and so can’t be run by itself, thus it does not inherit from unittest.TestCase. See also Test Driven Development
A book by Kent Beck on writing tests before code. Running tests using the command-line interface The test package can be run as a script to drive Python’s regression test suite, thanks to the -m option: python -m test. Under the hood, it uses test.regrtest; the call python -m test.regrtest used in previous Python versions still works. Running the script by itself automatically starts running all regression tests in the test package. It does this by finding all modules in the package whose name starts with test_, importing them, and executing the function test_main() if present or loading the tests via unittest.TestLoader.loadTestsFromModule if test_main does not exist. The names of tests to execute may also be passed to the script. Specifying a single regression test (python -m test test_spam) will minimize output and only print whether the test passed or failed. Running test directly allows what resources are available for tests to use to be set. You do this by using the -u command-line option. Specifying all as the value for the -u option enables all possible resources: python -m test -uall. If all but one resource is desired (a more common case), a comma-separated list of resources that are not desired may be listed after all. The command python -m test -uall,-audio,-largefile will run test with all resources except the audio and largefile resources. For a list of all resources and more command-line options, run python -m test -h. Some other ways to execute the regression tests depend on what platform the tests are being executed on. On Unix, you can run make test at the top-level directory where Python was built. On Windows, executing rt.bat from your PCbuild directory will run all regression tests. test.support — Utilities for the Python test suite The test.support module provides support for Python’s regression test suite. Note test.support is not a public module. It is documented here to help Python developers write tests. 
The API of this module is subject to change without backwards compatibility concerns between releases. This module defines the following exceptions:
exception test.support.TestFailed
Exception to be raised when a test fails. This is deprecated in favor of unittest-based tests and unittest.TestCase’s assertion methods.
exception test.support.ResourceDenied
Subclass of unittest.SkipTest. Raised when a resource (such as a network connection) is not available. Raised by the requires() function.
The test.support module defines the following constants:
test.support.verbose
True when verbose output is enabled. Should be checked when more detailed information is desired about a running test. verbose is set by test.regrtest.
test.support.is_jython
True if the running interpreter is Jython.
test.support.is_android
True if the system is Android.
test.support.unix_shell
Path for shell if not on Windows; otherwise None.
test.support.FS_NONASCII
A non-ASCII character encodable by os.fsencode().
test.support.TESTFN
Set to a name that is safe to use as the name of a temporary file. Any temporary file that is created should be closed and unlinked (removed).
test.support.TESTFN_UNICODE
Set to a non-ASCII name for a temporary file.
test.support.TESTFN_ENCODING
Set to sys.getfilesystemencoding().
test.support.TESTFN_UNENCODABLE
Set to a filename (str type) that should not be able to be encoded by file system encoding in strict mode. It may be None if it’s not possible to generate such a filename.
test.support.TESTFN_UNDECODABLE
Set to a filename (bytes type) that should not be able to be decoded by file system encoding in strict mode. It may be None if it’s not possible to generate such a filename.
test.support.TESTFN_NONASCII
Set to a filename containing the FS_NONASCII character.
test.support.LOOPBACK_TIMEOUT
Timeout in seconds for tests using a network server listening on the network local loopback interface like 127.0.0.1. The timeout is long enough to prevent test failure: it takes into account that the client and the server can run in different threads or even different processes. The timeout should be long enough for connect(), recv() and send() methods of socket.socket. Its default value is 5 seconds. See also INTERNET_TIMEOUT.
test.support.INTERNET_TIMEOUT
Timeout in seconds for network requests going to the Internet. The timeout is short enough to prevent a test from waiting too long if the Internet request is blocked for whatever reason. Usually, a timeout using INTERNET_TIMEOUT should not mark a test as failed, but skip the test instead: see transient_internet(). Its default value is 1 minute. See also LOOPBACK_TIMEOUT.
test.support.SHORT_TIMEOUT
Timeout in seconds to mark a test as failed if the test takes “too long”. The timeout value depends on the regrtest --timeout command line option. If a test using SHORT_TIMEOUT starts to fail randomly on slow buildbots, use LONG_TIMEOUT instead. Its default value is 30 seconds.
test.support.LONG_TIMEOUT
Timeout in seconds to detect when a test hangs. It is long enough to reduce the risk of test failure on the slowest Python buildbots. It should not be used to mark a test as failed if the test takes “too long”. The timeout value depends on the regrtest --timeout command line option. Its default value is 5 minutes. See also LOOPBACK_TIMEOUT, INTERNET_TIMEOUT and SHORT_TIMEOUT.
test.support.SAVEDCWD
Set to os.getcwd().
test.support.PGO
Set when tests can be skipped when they are not useful for PGO.
test.support.PIPE_MAX_SIZE
A constant that is likely larger than the underlying OS pipe buffer size, to make writes blocking.
test.support.SOCK_MAX_SIZE
A constant that is likely larger than the underlying OS socket buffer size, to make writes blocking.
test.support.TEST_SUPPORT_DIR
Set to the top level directory that contains test.support.
test.support.TEST_HOME_DIR
Set to the top level directory for the test package.
test.support.TEST_DATA_DIR
Set to the data directory within the test package.
test.support.MAX_Py_ssize_t
Set to sys.maxsize for big memory tests.
test.support.max_memuse
Set by set_memlimit() as the memory limit for big memory tests. Limited by MAX_Py_ssize_t.
test.support.real_max_memuse
Set by set_memlimit() as the memory limit for big memory tests. Not limited by MAX_Py_ssize_t.
test.support.MISSING_C_DOCSTRINGS
Return True if running on CPython, not on Windows, and configuration not set with WITH_DOC_STRINGS.
test.support.HAVE_DOCSTRINGS
Check for presence of docstrings.
test.support.TEST_HTTP_URL
Define the URL of a dedicated HTTP server for the network tests.
test.support.ALWAYS_EQ
Object that is equal to anything. Used to test mixed type comparison.
test.support.NEVER_EQ
Object that is not equal to anything (even to ALWAYS_EQ). Used to test mixed type comparison.
test.support.LARGEST
Object that is greater than anything (except itself). Used to test mixed type comparison.
test.support.SMALLEST
Object that is less than anything (except itself). Used to test mixed type comparison.
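The comparison sentinels above can be approximated with small classes like the following. This is a sketch for illustration, not the actual test.support implementation:

```python
class _AlwaysEq:
    """Compares equal to every object (approximates ALWAYS_EQ)."""
    def __eq__(self, other):
        return True
    def __ne__(self, other):
        return False

class _NeverEq:
    """Compares unequal to every object, even _AlwaysEq instances
    (approximates NEVER_EQ)."""
    def __eq__(self, other):
        return False
    def __ne__(self, other):
        return True

ALWAYS_EQ = _AlwaysEq()
NEVER_EQ = _NeverEq()
```

Because Python falls back to the reflected `__eq__` when the left operand returns NotImplemented, `"spam" == ALWAYS_EQ` is also true, which is what makes such objects useful for exercising mixed-type comparisons.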
The test.support module defines the following functions:
test.support.forget(module_name)
Remove the module named module_name from sys.modules and delete any byte-compiled files of the module.
test.support.unload(name)
Delete name from sys.modules.
test.support.unlink(filename)
Call os.unlink() on filename. On Windows platforms, this is wrapped with a wait loop that checks for the existence of the file.
test.support.rmdir(filename)
Call os.rmdir() on filename. On Windows platforms, this is wrapped with a wait loop that checks for the existence of the file.
test.support.rmtree(path)
Call shutil.rmtree() on path or call os.lstat() and os.rmdir() to remove a path and its contents. On Windows platforms, this is wrapped with a wait loop that checks for the existence of the files.
test.support.make_legacy_pyc(source)
Move a PEP 3147/PEP 488 pyc file to its legacy pyc location and return the file system path to the legacy pyc file. The source value is the file system path to the source file. It does not need to exist, however the PEP 3147/488 pyc file must exist.
test.support.is_resource_enabled(resource)
Return True if resource is enabled and available. The list of available resources is only set when test.regrtest is executing the tests.
test.support.python_is_optimized()
Return True if Python was not built with -O0 or -Og.
test.support.with_pymalloc()
Return _testcapi.WITH_PYMALLOC.
test.support.requires(resource, msg=None)
Raise ResourceDenied if resource is not available. msg is the argument to ResourceDenied if it is raised. Always returns True if called by a function whose __name__ is '__main__'. Used when tests are executed by test.regrtest.
test.support.system_must_validate_cert(f)
Raise unittest.SkipTest on TLS certification validation failures.
test.support.sortdict(dict)
Return a repr of dict with keys sorted.
test.support.findfile(filename, subdir=None)
Return the path to the file named filename. If no match is found, filename is returned unchanged; this is not a failure, since the value could itself be a valid path. Setting subdir indicates a relative path to use to find the file rather than looking directly in the path directories.
test.support.create_empty_file(filename)
Create an empty file with filename. If it already exists, truncate it.
test.support.fd_count()
Count the number of open file descriptors.
test.support.match_test(test)
Match test to patterns set in set_match_tests().
test.support.set_match_tests(patterns)
Define match test with regular expression patterns.
test.support.run_unittest(*classes)
Execute unittest.TestCase subclasses passed to the function. The function scans the classes for methods starting with the prefix test_ and executes the tests individually. It is also legal to pass strings as parameters; these should be keys in sys.modules. Each associated module will be scanned by unittest.TestLoader.loadTestsFromModule(). This is usually seen in the following test_main() function:

def test_main():
    support.run_unittest(__name__)
This will run all tests defined in the named module.
test.support.run_doctest(module, verbosity=None, optionflags=0)
Run doctest.testmod() on the given module. Return (failure_count, test_count). If verbosity is None, doctest.testmod() is run with verbosity set to verbose. Otherwise, it is run with verbosity set to None. optionflags is passed as optionflags to doctest.testmod().
test.support.setswitchinterval(interval)
Set the sys.setswitchinterval() to the given interval. Defines a minimum interval for Android systems to prevent the system from hanging.
test.support.check_impl_detail(**guards)
Use this check to guard CPython’s implementation-specific tests or to run them only on the implementations guarded by the arguments:

check_impl_detail()               # Only on CPython (default).
check_impl_detail(jython=True)    # Only on Jython.
check_impl_detail(cpython=False)  # Everywhere except CPython.
test.support.check_warnings(*filters, quiet=True)
A convenience wrapper for warnings.catch_warnings() that makes it easier to test that a warning was correctly raised. It is approximately equivalent to calling warnings.catch_warnings(record=True) with warnings.simplefilter() set to always and with the option to automatically validate the results that are recorded. check_warnings accepts 2-tuples of the form ("message regexp", WarningCategory) as positional arguments. If one or more filters are provided, or if the optional keyword argument quiet is False, it checks to make sure the warnings are as expected: each specified filter must match at least one of the warnings raised by the enclosed code or the test fails, and if any warnings are raised that do not match any of the specified filters the test fails. To disable the first of these checks, set quiet to True. If no arguments are specified, it defaults to:

check_warnings(("", Warning), quiet=True)
In this case all warnings are caught and no errors are raised. On entry to the context manager, a WarningRecorder instance is returned. The underlying warnings list from catch_warnings() is available via the recorder object’s warnings attribute. As a convenience, the attributes of the object representing the most recent warning can also be accessed directly through the recorder object (see example below). If no warning has been raised, then any of the attributes that would otherwise be expected on an object representing a warning will return None. The recorder object also has a reset() method, which clears the warnings list. The context manager is designed to be used like this:

with check_warnings(("assertion is always true", SyntaxWarning),
                    ("", UserWarning)):
    exec('assert(False, "Hey!")')
    warnings.warn(UserWarning("Hide me!"))
In this case if either warning was not raised, or some other warning was raised, check_warnings() would raise an error. When a test needs to look more deeply into the warnings, rather than just checking whether or not they occurred, code like this can be used:

with check_warnings(quiet=True) as w:
    warnings.warn("foo")
    assert str(w.args[0]) == "foo"
    warnings.warn("bar")
    assert str(w.args[0]) == "bar"
    assert str(w.warnings[0].args[0]) == "foo"
    assert str(w.warnings[1].args[0]) == "bar"
    w.reset()
    assert len(w.warnings) == 0
Here all warnings will be caught, and the test code tests the captured warnings directly. Changed in version 3.2: New optional arguments filters and quiet.
test.support.check_no_resource_warning(testcase)
Context manager to check that no ResourceWarning was raised. You must remove the object which may emit ResourceWarning before the end of the context manager.
test.support.set_memlimit(limit)
Set the values for max_memuse and real_max_memuse for big memory tests.
test.support.record_original_stdout(stdout)
Store the value from stdout. It is meant to hold the stdout at the time the regrtest began.
test.support.get_original_stdout()
Return the original stdout set by record_original_stdout() or sys.stdout if it’s not set.
test.support.args_from_interpreter_flags()
Return a list of command line arguments reproducing the current settings in sys.flags and sys.warnoptions.
test.support.optim_args_from_interpreter_flags()
Return a list of command line arguments reproducing the current optimization settings in sys.flags.
test.support.captured_stdin()
test.support.captured_stdout()
test.support.captured_stderr()
Context managers that temporarily replace the named stream with an io.StringIO object. Example use with output streams:

with captured_stdout() as stdout, captured_stderr() as stderr:
    print("hello")
    print("error", file=sys.stderr)
assert stdout.getvalue() == "hello\n"
assert stderr.getvalue() == "error\n"
Example use with input stream:

with captured_stdin() as stdin:
    stdin.write('hello\n')
    stdin.seek(0)
    # call test code that consumes from sys.stdin
    captured = input()
self.assertEqual(captured, "hello")
test.support.temp_dir(path=None, quiet=False)
A context manager that creates a temporary directory at path and yields the directory. If path is None, the temporary directory is created using tempfile.mkdtemp(). If quiet is False, the context manager raises an exception on error. Otherwise, if path is specified and cannot be created, only a warning is issued.
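The core of this behavior (the path=None branch, with quiet handling omitted) can be sketched with the stdlib tempfile module; this is an illustrative simplification, not the real implementation:

```python
import contextlib
import os
import shutil
import tempfile

@contextlib.contextmanager
def temp_dir():
    # Simplified sketch: create a fresh directory, remove it on exit.
    path = tempfile.mkdtemp()
    try:
        yield path
    finally:
        shutil.rmtree(path, ignore_errors=True)

with temp_dir() as d:
    assert os.path.isdir(d)
assert not os.path.exists(d)
```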
test.support.change_cwd(path, quiet=False)
A context manager that temporarily changes the current working directory to path and yields the directory. If quiet is False, the context manager raises an exception on error. Otherwise, it issues only a warning and keeps the current working directory the same.
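A minimal sketch of what change_cwd does, ignoring the quiet parameter; the real helper adds error handling on top of this chdir/restore pattern:

```python
import contextlib
import os
import tempfile

@contextlib.contextmanager
def change_cwd(path):
    # Sketch: chdir into path and restore the previous directory on exit.
    saved = os.getcwd()
    os.chdir(path)
    try:
        yield path
    finally:
        os.chdir(saved)

with tempfile.TemporaryDirectory() as d:
    before = os.getcwd()
    with change_cwd(d):
        # realpath() avoids surprises where the temp dir is a symlink.
        assert os.path.realpath(os.getcwd()) == os.path.realpath(d)
    assert os.getcwd() == before
```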
test.support.temp_cwd(name='tempcwd', quiet=False)
A context manager that temporarily creates a new directory and changes the current working directory (CWD). The context manager creates a temporary directory in the current directory with name name before temporarily changing the current working directory. If name is None, the temporary directory is created using tempfile.mkdtemp(). If quiet is False and it is not possible to create or change the CWD, an error is raised. Otherwise, only a warning is raised and the original CWD is used.
test.support.temp_umask(umask)
A context manager that temporarily sets the process umask.
test.support.disable_faulthandler()
A context manager that replaces sys.stderr with sys.__stderr__.
test.support.gc_collect()
Force as many objects as possible to be collected. This is needed because timely deallocation is not guaranteed by the garbage collector. This means that __del__ methods may be called later than expected and weakrefs may remain alive for longer than expected.
test.support.disable_gc()
A context manager that disables the garbage collector upon entry and reenables it upon exit.
test.support.swap_attr(obj, attr, new_val)
Context manager to swap out an attribute with a new object. Usage:

with swap_attr(obj, "attr", 5):
    ...
This will set obj.attr to 5 for the duration of the with block, restoring the old value at the end of the block. If attr doesn’t exist on obj, it will be created and then deleted at the end of the block. The old value (or None if it doesn’t exist) will be assigned to the target of the “as” clause, if there is one.
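The save/set/restore dance described above can be sketched with contextlib; this is a rough re-implementation for illustration, not the actual test.support code:

```python
import contextlib

_sentinel = object()

@contextlib.contextmanager
def swap_attr(obj, attr, new_val):
    # Sketch: save the old value (if any), set the new one, restore on exit.
    old = getattr(obj, attr, _sentinel)
    setattr(obj, attr, new_val)
    try:
        yield None if old is _sentinel else old
    finally:
        if old is _sentinel:
            delattr(obj, attr)
        else:
            setattr(obj, attr, old)

class Thing:
    pass

t = Thing()
t.attr = 1
with swap_attr(t, "attr", 5) as old:
    assert t.attr == 5 and old == 1
assert t.attr == 1

# A previously missing attribute is created, then deleted again.
with swap_attr(t, "other", 7):
    assert t.other == 7
assert not hasattr(t, "other")
```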
test.support.swap_item(obj, attr, new_val)
Context manager to swap out an item with a new object. Usage:

with swap_item(obj, "item", 5):
    ...
This will set obj["item"] to 5 for the duration of the with block, restoring the old value at the end of the block. If item doesn’t exist on obj, it will be created and then deleted at the end of the block. The old value (or None if it doesn’t exist) will be assigned to the target of the “as” clause, if there is one.
test.support.print_warning(msg)
Print a warning into sys.__stderr__. Format the message as: f"Warning -- {msg}". If msg is made of multiple lines, add "Warning -- " prefix to each line. New in version 3.9.
test.support.wait_process(pid, *, exitcode, timeout=None)
Wait until process pid completes and check that the process exit code is exitcode. Raise an AssertionError if the process exit code is not equal to exitcode. If the process runs longer than timeout seconds (SHORT_TIMEOUT by default), kill the process and raise an AssertionError. The timeout feature is not available on Windows. New in version 3.9.
test.support.wait_threads_exit(timeout=60.0)
Context manager to wait until all threads created in the with statement exit.
test.support.start_threads(threads, unlock=None)
Context manager to start threads. It attempts to join the threads upon exit.
test.support.calcobjsize(fmt)
Return struct.calcsize() for nP{fmt}0n or, if gettotalrefcount exists, 2PnP{fmt}0P.
test.support.calcvobjsize(fmt)
Return struct.calcsize() for nPn{fmt}0n or, if gettotalrefcount exists, 2PnPn{fmt}0P.
test.support.checksizeof(test, o, size)
For testcase test, assert that the sys.getsizeof for o plus the GC header size equals size.
test.support.can_symlink()
Return True if the OS supports symbolic links, False otherwise.
test.support.can_xattr()
Return True if the OS supports xattr, False otherwise.
@test.support.skip_unless_symlink
A decorator for running tests that require support for symbolic links.
@test.support.skip_unless_xattr
A decorator for running tests that require support for xattr.
@test.support.anticipate_failure(condition)
A decorator to conditionally mark tests with unittest.expectedFailure(). Any use of this decorator should have an associated comment identifying the relevant tracker issue.
@test.support.run_with_locale(catstr, *locales)
A decorator for running a function in a different locale, correctly resetting it after it has finished. catstr is the locale category as a string (for example "LC_ALL"). The locales passed will be tried sequentially, and the first valid locale will be used.
@test.support.run_with_tz(tz)
A decorator for running a function in a specific timezone, correctly resetting it after it has finished.
@test.support.requires_freebsd_version(*min_version)
Decorator for the minimum version when running test on FreeBSD. If the FreeBSD version is less than the minimum, raise unittest.SkipTest.
@test.support.requires_linux_version(*min_version)
Decorator for the minimum version when running test on Linux. If the Linux version is less than the minimum, raise unittest.SkipTest.
@test.support.requires_mac_version(*min_version)
Decorator for the minimum version when running test on macOS. If the macOS version is less than the minimum, raise unittest.SkipTest.
@test.support.requires_IEEE_754
Decorator for skipping tests on non-IEEE 754 platforms.
@test.support.requires_zlib
Decorator for skipping tests if zlib doesn’t exist.
@test.support.requires_gzip
Decorator for skipping tests if gzip doesn’t exist.
@test.support.requires_bz2
Decorator for skipping tests if bz2 doesn’t exist.
@test.support.requires_lzma
Decorator for skipping tests if lzma doesn’t exist.
@test.support.requires_resource(resource)
Decorator for skipping tests if resource is not available.
@test.support.requires_docstrings
Decorator for only running the test if HAVE_DOCSTRINGS.
@test.support.cpython_only(test)
Decorator for tests only applicable to CPython.
@test.support.impl_detail(msg=None, **guards)
Decorator for invoking check_impl_detail() on guards. If that returns False, then uses msg as the reason for skipping the test.
@test.support.no_tracing(func)
Decorator to temporarily turn off tracing for the duration of the test.
@test.support.refcount_test(test)
Decorator for tests which involve reference counting. The decorator does not run the test if it is not run by CPython. Any trace function is unset for the duration of the test to prevent unexpected refcounts caused by the trace function.
@test.support.reap_threads(func)
Decorator to ensure the threads are cleaned up even if the test fails.
@test.support.bigmemtest(size, memuse, dry_run=True)
Decorator for bigmem tests. size is a requested size for the test (in arbitrary, test-interpreted units.) memuse is the number of bytes per unit for the test, or a good estimate of it. For example, a test that needs two byte buffers, of 4 GiB each, could be decorated with @bigmemtest(size=_4G, memuse=2). The size argument is normally passed to the decorated test method as an extra argument. If dry_run is True, the value passed to the test method may be less than the requested value. If dry_run is False, it means the test doesn’t support dummy runs when -M is not specified.
@test.support.bigaddrspacetest(f)
Decorator for tests that fill the address space. f is the function to wrap.
test.support.make_bad_fd()
Create an invalid file descriptor by opening and closing a temporary file, and returning its descriptor.
test.support.check_syntax_error(testcase, statement, errtext='', *, lineno=None, offset=None)
Test for syntax errors in statement by attempting to compile statement. testcase is the unittest instance for the test. errtext is the regular expression which should match the string representation of the raised SyntaxError. If lineno is not None, compares to the line of the exception. If offset is not None, compares to the offset of the exception.
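A simplified sketch of what this helper does (the real one also checks the error message against compile in different modes); Demo is a hypothetical test case for illustration:

```python
import io
import unittest

def check_syntax_error(testcase, statement, errtext='', *, lineno=None, offset=None):
    # Simplified sketch: compile the statement and inspect the SyntaxError.
    with testcase.assertRaisesRegex(SyntaxError, errtext) as cm:
        compile(statement, '<test string>', 'exec')
    if lineno is not None:
        testcase.assertEqual(cm.exception.lineno, lineno)
    if offset is not None:
        testcase.assertEqual(cm.exception.offset, offset)

class Demo(unittest.TestCase):
    def test_bad_def(self):
        check_syntax_error(self, "def f(:", lineno=1)

result = unittest.TextTestRunner(stream=io.StringIO()).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(Demo))
assert result.wasSuccessful()
```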
test.support.check_syntax_warning(testcase, statement, errtext='', *, lineno=1, offset=None)
Test for syntax warning in statement by attempting to compile statement. Test also that the SyntaxWarning is emitted only once, and that it will be converted to a SyntaxError when turned into error. testcase is the unittest instance for the test. errtext is the regular expression which should match the string representation of the emitted SyntaxWarning and raised SyntaxError. If lineno is not None, compares to the line of the warning and exception. If offset is not None, compares to the offset of the exception. New in version 3.8.
test.support.open_urlresource(url, *args, **kw)
Open url. If open fails, raises TestFailed.
test.support.import_module(name, deprecated=False, *, required_on=())
This function imports and returns the named module. Unlike a normal import, this function raises unittest.SkipTest if the module cannot be imported. Module and package deprecation messages are suppressed during this import if deprecated is True. If a module is required on a platform but optional for others, set required_on to an iterable of platform prefixes which will be compared against sys.platform. New in version 3.1.
test.support.import_fresh_module(name, fresh=(), blocked=(), deprecated=False)
This function imports and returns a fresh copy of the named Python module by removing the named module from sys.modules before doing the import. Note that unlike reload(), the original module is not affected by this operation. fresh is an iterable of additional module names that are also removed from the sys.modules cache before doing the import. blocked is an iterable of module names that are replaced with None in the module cache during the import to ensure that attempts to import them raise ImportError. The named module and any modules named in the fresh and blocked parameters are saved before starting the import and then reinserted into sys.modules when the fresh import is complete. Module and package deprecation messages are suppressed during this import if deprecated is True. This function will raise ImportError if the named module cannot be imported. Example use: # Get copies of the warnings module for testing without affecting the
# version being used by the rest of the test suite. One copy uses the
# C implementation, the other is forced to use the pure Python fallback
# implementation
py_warnings = import_fresh_module('warnings', blocked=['_warnings'])
c_warnings = import_fresh_module('warnings', fresh=['_warnings'])
New in version 3.1.
test.support.modules_setup()
Return a copy of sys.modules.
test.support.modules_cleanup(oldmodules)
Remove modules except for oldmodules and encodings in order to preserve internal cache.
test.support.threading_setup()
Return current thread count and copy of dangling threads.
test.support.threading_cleanup(*original_values)
Clean up threads not specified in original_values. Designed to emit a warning if a test leaves running threads in the background.
test.support.join_thread(thread, timeout=30.0)
Join a thread within timeout. Raise an AssertionError if thread is still alive after timeout seconds.
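A minimal sketch of this join-with-deadline pattern, using a plain assert in place of the real helper's error reporting:

```python
import threading

def join_thread(thread, timeout=30.0):
    # Sketch: join the thread and fail if it is still alive afterwards.
    thread.join(timeout)
    assert not thread.is_alive(), f"timed out joining {thread!r}"

t = threading.Thread(target=lambda: None)
t.start()
join_thread(t, timeout=5.0)
```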
test.support.reap_children()
Use this at the end of test_main whenever sub-processes are started. This will help ensure that no extra children (zombies) stick around to hog resources and create problems when looking for refleaks.
test.support.get_attribute(obj, name)
Get an attribute, raising unittest.SkipTest if AttributeError is raised.
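A sketch of the idea, converting AttributeError into a test skip; the real helper is essentially this pattern:

```python
import unittest

def get_attribute(obj, name):
    # Sketch: turn a missing attribute into a skipped test, not a failure.
    try:
        return getattr(obj, name)
    except AttributeError:
        raise unittest.SkipTest(f"object has no attribute {name!r}")

append = get_attribute([], "append")
assert callable(append)

skipped = False
try:
    get_attribute(object(), "no_such_attr")
except unittest.SkipTest:
    skipped = True
assert skipped
```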
test.support.catch_threading_exception()
Context manager catching threading.Thread exceptions using threading.excepthook(). Attributes set when an exception is caught: exc_type, exc_value, exc_traceback, thread. See the threading.excepthook() documentation. These attributes are deleted at the context manager exit. Usage:

with support.catch_threading_exception() as cm:
    # code spawning a thread which raises an exception
    ...

    # check the thread exception, use cm attributes:
    # exc_type, exc_value, exc_traceback, thread
    ...

# exc_type, exc_value, exc_traceback, thread attributes of cm no longer
# exist at this point
# (to avoid reference cycles)
New in version 3.8.
test.support.catch_unraisable_exception()
Context manager catching unraisable exceptions using sys.unraisablehook(). Storing the exception value (cm.unraisable.exc_value) creates a reference cycle. The reference cycle is broken explicitly when the context manager exits. Storing the object (cm.unraisable.object) can resurrect it if it is set to an object which is being finalized. Exiting the context manager clears the stored object. Usage:

with support.catch_unraisable_exception() as cm:
    # code creating an "unraisable exception"
    ...

    # check the unraisable exception: use cm.unraisable
    ...

# cm.unraisable attribute no longer exists at this point
# (to break a reference cycle)
New in version 3.8.
test.support.load_package_tests(pkg_dir, loader, standard_tests, pattern)
Generic implementation of the unittest load_tests protocol for use in test packages. pkg_dir is the root directory of the package; loader, standard_tests, and pattern are the arguments expected by load_tests. In simple cases, the test package’s __init__.py can be the following:

import os
from test.support import load_package_tests

def load_tests(*args):
    return load_package_tests(os.path.dirname(__file__), *args)
test.support.fs_is_case_insensitive(directory)
Return True if the file system for directory is case-insensitive.
test.support.detect_api_mismatch(ref_api, other_api, *, ignore=())
Returns the set of attributes, functions or methods of ref_api not found on other_api, except for a defined list of items to be ignored in this check specified in ignore. By default this skips private attributes beginning with ‘_’ but includes all magic methods, i.e. those starting and ending in ‘__’. New in version 3.5.
test.support.patch(test_instance, object_to_patch, attr_name, new_value)
Override object_to_patch.attr_name with new_value. Also add cleanup procedure to test_instance to restore object_to_patch for attr_name. The attr_name should be a valid attribute for object_to_patch.
test.support.run_in_subinterp(code)
Run code in subinterpreter. Raise unittest.SkipTest if tracemalloc is enabled.
test.support.check_free_after_iterating(test, iter, cls, args=())
Assert that iter is deallocated after iterating.
test.support.missing_compiler_executable(cmd_names=[])
Check for the existence of the compiler executables whose names are listed in cmd_names (or of all compiler executables when cmd_names is empty). Return the name of the first missing executable, or None if none is missing.
test.support.check__all__(test_case, module, name_of_module=None, extra=(), blacklist=())
Assert that the __all__ variable of module contains all public names. The module’s public names (its API) are detected automatically based on whether they match the public name convention and were defined in module. The name_of_module argument can specify (as a string or tuple thereof) what module(s) an API could be defined in, in order to be detected as a public API. One case for this is when module imports part of its public API from other modules, possibly a C backend (like csv and its _csv). The extra argument can be a set of names that wouldn’t otherwise be automatically detected as “public”, like objects without a proper __module__ attribute. If provided, it will be added to the automatically detected ones. The blacklist argument can be a set of names that must not be treated as part of the public API even though their names indicate otherwise. Example use:

import bar
import foo
import unittest
from test import support

class MiscTestCase(unittest.TestCase):
    def test__all__(self):
        support.check__all__(self, foo)

class OtherTestCase(unittest.TestCase):
    def test__all__(self):
        extra = {'BAR_CONST', 'FOO_CONST'}
        blacklist = {'baz'}  # Undocumented name.
        # bar imports part of its API from _bar.
        support.check__all__(self, bar, ('bar', '_bar'),
                             extra=extra, blacklist=blacklist)
New in version 3.6.
The test.support module defines the following classes:
class test.support.TransientResource(exc, **kwargs)
Instances are a context manager that raises ResourceDenied if the specified exception type is raised. Any keyword arguments are treated as attribute/value pairs to be compared against any exception raised within the with statement. Only if all pairs match properly against attributes on the exception is ResourceDenied raised.
class test.support.EnvironmentVarGuard
Class used to temporarily set or unset environment variables. Instances can be used as a context manager and have a complete dictionary interface for querying/modifying the underlying os.environ. After exit from the context manager all changes to environment variables done through this instance will be rolled back. Changed in version 3.1: Added dictionary interface.
EnvironmentVarGuard.set(envvar, value)
Temporarily set the environment variable envvar to the value of value.
EnvironmentVarGuard.unset(envvar)
Temporarily unset the environment variable envvar.
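The rollback behavior can be sketched with a snapshot of os.environ; environment_var_guard below is a hypothetical simplification of the class, without its dictionary interface:

```python
import contextlib
import os

@contextlib.contextmanager
def environment_var_guard():
    # Sketch: snapshot os.environ on entry, roll every change back on exit.
    saved = dict(os.environ)
    try:
        yield os.environ
    finally:
        os.environ.clear()
        os.environ.update(saved)

with environment_var_guard() as env:
    env["DEMO_VAR"] = "1"
    assert os.environ["DEMO_VAR"] == "1"
assert "DEMO_VAR" not in os.environ
```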
class test.support.SuppressCrashReport
A context manager used to try to prevent crash dialog popups on tests that are expected to crash a subprocess. On Windows, it disables Windows Error Reporting dialogs using SetErrorMode. On UNIX, resource.setrlimit() is used to set resource.RLIMIT_CORE’s soft limit to 0 to prevent coredump file creation. On both platforms, the old value is restored by __exit__().
class test.support.CleanImport(*module_names)
A context manager to force import to return a new module reference. This is useful for testing module-level behaviors, such as the emission of a DeprecationWarning on import. Example usage:

with CleanImport('foo'):
    importlib.import_module('foo')  # New reference.
class test.support.DirsOnSysPath(*paths)
A context manager to temporarily add directories to sys.path. This makes a copy of sys.path, appends any directories given as positional arguments, then reverts sys.path to the copied settings when the context ends. Note that all sys.path modifications in the body of the context manager, including replacement of the object, will be reverted at the end of the block.
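The copy-extend-restore behavior described above can be sketched as follows; this is an illustrative re-implementation, not the actual class:

```python
import sys

class DirsOnSysPath:
    # Sketch: snapshot sys.path, extend it, restore the snapshot on exit.
    def __init__(self, *paths):
        self._paths = paths

    def __enter__(self):
        self._saved = sys.path[:]
        sys.path.extend(self._paths)
        return self

    def __exit__(self, *exc):
        # Rebinding sys.path also undoes any replacement of the list object.
        sys.path = self._saved

with DirsOnSysPath("/nonexistent-demo-dir"):
    assert "/nonexistent-demo-dir" in sys.path
assert "/nonexistent-demo-dir" not in sys.path
```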
class test.support.SaveSignals
Class to save and restore signal handlers registered by the Python signal handler.
class test.support.Matcher
matches(self, d, **kwargs)
Try to match a single dict with the supplied arguments.
match_value(self, k, dv, v)
Try to match a single stored value (dv) with a supplied value (v).
class test.support.WarningsRecorder
Class used to record warnings for unit tests. See documentation of check_warnings() above for more details.
class test.support.BasicTestRunner
run(test)
Run test and return the result.
class test.support.FakePath(path)
Simple path-like object. It implements the __fspath__() method which just returns the path argument. If path is an exception, it will be raised in __fspath__().
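A sketch of such a path-like test double (handling only exception instances, for illustration):

```python
import os

class FakePath:
    # Sketch of the path-like test double described above.
    def __init__(self, path):
        self.path = path

    def __fspath__(self):
        if isinstance(self.path, BaseException):
            raise self.path
        return self.path

assert os.fspath(FakePath("/tmp/demo")) == "/tmp/demo"

raised = False
try:
    os.fspath(FakePath(ZeroDivisionError("boom")))
except ZeroDivisionError:
    raised = True
assert raised
```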
test.support.socket_helper — Utilities for socket tests The test.support.socket_helper module provides support for socket tests. New in version 3.9.
test.support.socket_helper.IPV6_ENABLED
Set to True if IPv6 is enabled on this host, False otherwise.
test.support.socket_helper.find_unused_port(family=socket.AF_INET, socktype=socket.SOCK_STREAM)
Returns an unused port that should be suitable for binding. This is achieved by creating a temporary socket with the same family and type as the sock parameter (default is AF_INET, SOCK_STREAM), and binding it to the specified host address (defaults to 0.0.0.0) with the port set to 0, eliciting an unused ephemeral port from the OS. The temporary socket is then closed and deleted, and the ephemeral port is returned. Either this method or bind_port() should be used for any tests where a server socket needs to be bound to a particular port for the duration of the test. Which one to use depends on whether the calling code is creating a Python socket, or if an unused port needs to be provided in a constructor or passed to an external program (i.e. the -accept argument to openssl’s s_server mode). Always prefer bind_port() over find_unused_port() where possible. Using a hard coded port is discouraged since it can make multiple instances of the test impossible to run simultaneously, which is a problem for buildbots.
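The bind-to-port-0 technique this relies on can be sketched directly with the socket module; a minimal illustration, not the actual helper (which also handles SO_REUSEADDR and related options):

```python
import socket

def find_unused_port(family=socket.AF_INET, socktype=socket.SOCK_STREAM):
    # Sketch: bind a throwaway socket to port 0 so the OS picks a free
    # ephemeral port, then close the socket and report that port.
    with socket.socket(family, socktype) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = find_unused_port()
assert 0 < port < 65536
```

Note the race inherent in this approach: the port can be taken between the close and the later bind, which is why the docs recommend bind_port() where possible.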
test.support.socket_helper.bind_port(sock, host=HOST)
Bind the socket to a free port and return the port number. Relies on ephemeral ports in order to ensure we are using an unbound port. This is important as many tests may be running simultaneously, especially in a buildbot environment. This method raises an exception if the sock.family is AF_INET and sock.type is SOCK_STREAM, and the socket has SO_REUSEADDR or SO_REUSEPORT set on it. Tests should never set these socket options for TCP/IP sockets. The only case for setting these options is testing multicasting via multiple UDP sockets. Additionally, if the SO_EXCLUSIVEADDRUSE socket option is available (i.e. on Windows), it will be set on the socket. This will prevent anyone else from binding to our host/port for the duration of the test.
test.support.socket_helper.bind_unix_socket(sock, addr)
Bind a unix socket, raising unittest.SkipTest if PermissionError is raised.
@test.support.socket_helper.skip_unless_bind_unix_socket
A decorator for running tests that require a functional bind() for Unix sockets.
test.support.socket_helper.transient_internet(resource_name, *, timeout=30.0, errnos=())
A context manager that raises ResourceDenied when various issues with the internet connection manifest themselves as exceptions.
test.support.script_helper — Utilities for the Python execution tests The test.support.script_helper module provides support for Python’s script execution tests.
test.support.script_helper.interpreter_requires_environment()
Return True if sys.executable interpreter requires environment variables in order to be able to run at all. This is designed to be used with @unittest.skipIf() to annotate tests that need to use an assert_python*() function to launch an isolated mode (-I) or no environment mode (-E) sub-interpreter process. A normal build & test does not run into this situation but it can happen when trying to run the standard library test suite from an interpreter that doesn’t have an obvious home with Python’s current home finding logic. Setting PYTHONHOME is one way to get most of the testsuite to run in that situation. PYTHONPATH or PYTHONUSERSITE are other common environment variables that might impact whether or not the interpreter can start.
test.support.script_helper.run_python_until_end(*args, **env_vars)
Set up the environment based on env_vars for running the interpreter in a subprocess. The values can include __isolated, __cleanenv, __cwd, and TERM. Changed in version 3.9: The function no longer strips whitespaces from stderr.
test.support.script_helper.assert_python_ok(*args, **env_vars)
Assert that running the interpreter with args and optional environment variables env_vars succeeds (rc == 0) and return a (return code, stdout, stderr) tuple. If the __cleanenv keyword is set, env_vars is used as a fresh environment. Python is started in isolated mode (command line option -I), except if the __isolated keyword is set to False. Changed in version 3.9: The function no longer strips whitespaces from stderr.
test.support.script_helper.assert_python_failure(*args, **env_vars)
Assert that running the interpreter with args and optional environment variables env_vars fails (rc != 0) and return a (return code, stdout, stderr) tuple. See assert_python_ok() for more options. Changed in version 3.9: The function no longer strips whitespaces from stderr.
test.support.script_helper.spawn_python(*args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, **kw)
Run a Python subprocess with the given arguments. kw is extra keyword args to pass to subprocess.Popen(). Returns a subprocess.Popen object.
test.support.script_helper.kill_python(p)
Run the given subprocess.Popen process until completion and return stdout.
test.support.script_helper.make_script(script_dir, script_basename, source, omit_suffix=False)
Create script containing source in path script_dir and script_basename. If omit_suffix is False, append .py to the name. Return the full script path.
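A sketch of what this helper does, using a hypothetical simplified implementation:

```python
import os
import tempfile

def make_script(script_dir, script_basename, source, omit_suffix=False):
    # Sketch: write source to script_dir/script_basename(.py), return the path.
    name = script_basename if omit_suffix else script_basename + ".py"
    path = os.path.join(script_dir, name)
    with open(path, "w", encoding="utf-8") as f:
        f.write(source)
    return path

with tempfile.TemporaryDirectory() as d:
    p = make_script(d, "demo", "print('hi')")
    assert os.path.exists(p)
assert p.endswith("demo.py")
```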
test.support.script_helper.make_zip_script(zip_dir, zip_basename, script_name, name_in_zip=None)
Create zip file at zip_dir and zip_basename with extension zip which contains the files in script_name. name_in_zip is the archive name. Return a tuple containing (full path, full path of archive name).
test.support.script_helper.make_pkg(pkg_dir, init_source='')
Create a directory named pkg_dir containing an __init__ file with init_source as its contents.
test.support.script_helper.make_zip_pkg(zip_dir, zip_basename, pkg_name, script_basename, source, depth=1, compiled=False)
Create a zip package directory with a path of zip_dir and zip_basename containing an empty __init__ file and a file script_basename containing the source. If compiled is True, both source files will be compiled and added to the zip package. Return a tuple of the full zip path and the archive name for the zip file.
test.support.bytecode_helper — Support tools for testing correct bytecode generation The test.support.bytecode_helper module provides support for testing and inspecting bytecode generation. New in version 3.9. The module defines the following class:
class test.support.bytecode_helper.BytecodeTestCase(unittest.TestCase)
This class has custom assertion methods for inspecting bytecode.
BytecodeTestCase.get_disassembly_as_string(co)
Return the disassembly of co as string.
BytecodeTestCase.assertInBytecode(x, opname, argval=_UNSPECIFIED)
Return instr if opname is found; otherwise raise AssertionError.
BytecodeTestCase.assertNotInBytecode(x, opname, argval=_UNSPECIFIED)
Raise AssertionError if opname is found.