PurePath.suffixes A list of the path’s file extensions: >>> PurePosixPath('my/library.tar.gar').suffixes ['.tar', '.gar'] >>> PurePosixPath('my/library.tar.gz').suffixes ['.tar', '.gz'] >>> PurePosixPath('my/library').suffixes []
python.library.pathlib#pathlib.PurePath.suffixes
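A minimal sketch building on suffixes (and with_suffix(), documented below); the helper name strip_suffixes is hypothetical and not part of pathlib:

from pathlib import PurePosixPath

def strip_suffixes(path):
    # with_suffix('') removes only the last suffix, so repeat once per
    # entry reported by path.suffixes ('.tar.gz' needs two passes).
    for _ in path.suffixes:
        path = path.with_suffix('')
    return path

print(strip_suffixes(PurePosixPath('my/library.tar.gz')))   # my/library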
PurePath.with_name(name) Return a new path with the name changed. If the original path doesn’t have a name, ValueError is raised: >>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz') >>> p.with_name('setup.py') PureWindowsPath('c:/Downloads/setup.py') >>> p = PureWindowsPath('c:/') >>> p.with_name('setup.py') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/antoine/cpython/default/Lib/pathlib.py", line 751, in with_name raise ValueError("%r has an empty name" % (self,)) ValueError: PureWindowsPath('c:/') has an empty name
python.library.pathlib#pathlib.PurePath.with_name
PurePath.with_stem(stem) Return a new path with the stem changed. If the original path doesn’t have a name, ValueError is raised: >>> p = PureWindowsPath('c:/Downloads/draft.txt') >>> p.with_stem('final') PureWindowsPath('c:/Downloads/final.txt') >>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz') >>> p.with_stem('lib') PureWindowsPath('c:/Downloads/lib.gz') >>> p = PureWindowsPath('c:/') >>> p.with_stem('') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/antoine/cpython/default/Lib/pathlib.py", line 861, in with_stem return self.with_name(stem + self.suffix) File "/home/antoine/cpython/default/Lib/pathlib.py", line 851, in with_name raise ValueError("%r has an empty name" % (self,)) ValueError: PureWindowsPath('c:/') has an empty name New in version 3.9.
python.library.pathlib#pathlib.PurePath.with_stem
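A small sketch of with_stem() in use (requires Python 3.9+); the versioned() helper and the example path are made up for illustration:

from pathlib import PurePosixPath

def versioned(path, tag):
    # Keep the directory and the last suffix, change only the stem.
    return path.with_stem(path.stem + '-' + tag)

print(versioned(PurePosixPath('reports/summary.txt'), 'v2'))   # reports/summary-v2.txt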
PurePath.with_suffix(suffix) Return a new path with the suffix changed. If the original path doesn’t have a suffix, the new suffix is appended instead. If the suffix is an empty string, the original suffix is removed: >>> p = PureWindowsPath('c:/Downloads/pathlib.tar.gz') >>> p.with_suffix('.bz2') PureWindowsPath('c:/Downloads/pathlib.tar.bz2') >>> p = PureWindowsPath('README') >>> p.with_suffix('.txt') PureWindowsPath('README.txt') >>> p = PureWindowsPath('README.txt') >>> p.with_suffix('') PureWindowsPath('README')
python.library.pathlib#pathlib.PurePath.with_suffix
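A short sketch applying with_suffix() across several paths; the file names are invented for the example:

from pathlib import PurePosixPath

sources = [PurePosixPath('docs/index.md'), PurePosixPath('docs/api.md')]
targets = [p.with_suffix('.html') for p in sources]
print(targets)   # [PurePosixPath('docs/index.html'), PurePosixPath('docs/api.html')]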
class pathlib.PurePosixPath(*pathsegments) A subclass of PurePath, this path flavour represents non-Windows filesystem paths: >>> PurePosixPath('/etc') PurePosixPath('/etc') pathsegments is specified similarly to PurePath.
python.library.pathlib#pathlib.PurePosixPath
class pathlib.PureWindowsPath(*pathsegments) A subclass of PurePath, this path flavour represents Windows filesystem paths: >>> PureWindowsPath('c:/Program Files/') PureWindowsPath('c:/Program Files') pathsegments is specified similarly to PurePath.
python.library.pathlib#pathlib.PureWindowsPath
class pathlib.WindowsPath(*pathsegments) A subclass of Path and PureWindowsPath, this class represents concrete Windows filesystem paths: >>> WindowsPath('c:/Program Files/') WindowsPath('c:/Program Files') pathsegments is specified similarly to PurePath.
python.library.pathlib#pathlib.WindowsPath
pdb — The Python Debugger Source code: Lib/pdb.py The module pdb defines an interactive source code debugger for Python programs. It supports setting (conditional) breakpoints and single stepping at the source line level, inspection of stack frames, source code listing, and evaluation of arbitrary Python code in the context of any stack frame. It also supports post-mortem debugging and can be called under program control. The debugger is extensible – it is actually defined as the class Pdb. This is currently undocumented but easily understood by reading the source. The extension interface uses the modules bdb and cmd. The debugger’s prompt is (Pdb). Typical usage to run a program under control of the debugger is: >>> import pdb >>> import mymodule >>> pdb.run('mymodule.test()') > <string>(0)?() (Pdb) continue > <string>(1)?() (Pdb) continue NameError: 'spam' > <string>(1)?() (Pdb) Changed in version 3.3: Tab-completion via the readline module is available for commands and command arguments, e.g. the current global and local names are offered as arguments of the p command. pdb.py can also be invoked as a script to debug other scripts. For example: python3 -m pdb myscript.py When invoked as a script, pdb will automatically enter post-mortem debugging if the program being debugged exits abnormally. After post-mortem debugging (or after normal exit of the program), pdb will restart the program. Automatic restarting preserves pdb’s state (such as breakpoints) and in most cases is more useful than quitting the debugger upon program’s exit. New in version 3.2: pdb.py now accepts a -c option that executes commands as if given in a .pdbrc file, see Debugger Commands. New in version 3.7: pdb.py now accepts a -m option that execute modules similar to the way python3 -m does. As with a script, the debugger will pause execution just before the first line of the module. The typical usage to break into the debugger from a running program is to insert import pdb; pdb.set_trace() at the location you want to break into the debugger. You can then step through the code following this statement, and continue running without the debugger using the continue command. New in version 3.7: The built-in breakpoint(), when called with defaults, can be used instead of import pdb; pdb.set_trace(). The typical usage to inspect a crashed program is: >>> import pdb >>> import mymodule >>> mymodule.test() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "./mymodule.py", line 4, in test test2() File "./mymodule.py", line 3, in test2 print(spam) NameError: spam >>> pdb.pm() > ./mymodule.py(3)test2() -> print(spam) (Pdb) The module defines the following functions; each enters the debugger in a slightly different way: pdb.run(statement, globals=None, locals=None) Execute the statement (given as a string or a code object) under debugger control. The debugger prompt appears before any code is executed; you can set breakpoints and type continue, or you can step through the statement using step or next (all these commands are explained below). The optional globals and locals arguments specify the environment in which the code is executed; by default the dictionary of the module __main__ is used. (See the explanation of the built-in exec() or eval() functions.) pdb.runeval(expression, globals=None, locals=None) Evaluate the expression (given as a string or a code object) under debugger control. When runeval() returns, it returns the value of the expression. Otherwise this function is similar to run(). 
pdb.runcall(function, *args, **kwds) Call the function (a function or method object, not a string) with the given arguments. When runcall() returns, it returns whatever the function call returned. The debugger prompt appears as soon as the function is entered. pdb.set_trace(*, header=None) Enter the debugger at the calling stack frame. This is useful to hard-code a breakpoint at a given point in a program, even if the code is not otherwise being debugged (e.g. when an assertion fails). If given, header is printed to the console just before debugging begins. Changed in version 3.7: The keyword-only argument header. pdb.post_mortem(traceback=None) Enter post-mortem debugging of the given traceback object. If no traceback is given, it uses the one of the exception that is currently being handled (an exception must be being handled if the default is to be used). pdb.pm() Enter post-mortem debugging of the traceback found in sys.last_traceback. The run* functions and set_trace() are aliases for instantiating the Pdb class and calling the method of the same name. If you want to access further features, you have to do this yourself: class pdb.Pdb(completekey='tab', stdin=None, stdout=None, skip=None, nosigint=False, readrc=True) Pdb is the debugger class. The completekey, stdin and stdout arguments are passed to the underlying cmd.Cmd class; see the description there. The skip argument, if given, must be an iterable of glob-style module name patterns. The debugger will not step into frames that originate in a module that matches one of these patterns. 1 By default, Pdb sets a handler for the SIGINT signal (which is sent when the user presses Ctrl-C on the console) when you give a continue command. This allows you to break into the debugger again by pressing Ctrl-C. If you want Pdb not to touch the SIGINT handler, set nosigint to true. The readrc argument defaults to true and controls whether Pdb will load .pdbrc files from the filesystem. Example call to enable tracing with skip: import pdb; pdb.Pdb(skip=['django.*']).set_trace() Raises an auditing event pdb.Pdb with no arguments. New in version 3.1: The skip argument. New in version 3.2: The nosigint argument. Previously, a SIGINT handler was never set by Pdb. Changed in version 3.6: The readrc argument. run(statement, globals=None, locals=None) runeval(expression, globals=None, locals=None) runcall(function, *args, **kwds) set_trace() See the documentation for the functions explained above. Debugger Commands The commands recognized by the debugger are listed below. Most commands can be abbreviated to one or two letters as indicated; e.g. h(elp) means that either h or help can be used to enter the help command (but not he or hel, nor H or Help or HELP). Arguments to commands must be separated by whitespace (spaces or tabs). Optional arguments are enclosed in square brackets ([]) in the command syntax; the square brackets must not be typed. Alternatives in the command syntax are separated by a vertical bar (|). Entering a blank line repeats the last command entered. Exception: if the last command was a list command, the next 11 lines are listed. Commands that the debugger doesn’t recognize are assumed to be Python statements and are executed in the context of the program being debugged. Python statements can also be prefixed with an exclamation point (!). This is a powerful way to inspect the program being debugged; it is even possible to change a variable or call a function. 
When an exception occurs in such a statement, the exception name is printed but the debugger’s state is not changed. The debugger supports aliases. Aliases can have parameters which allows one a certain level of adaptability to the context under examination. Multiple commands may be entered on a single line, separated by ;;. (A single ; is not used as it is the separator for multiple commands in a line that is passed to the Python parser.) No intelligence is applied to separating the commands; the input is split at the first ;; pair, even if it is in the middle of a quoted string. If a file .pdbrc exists in the user’s home directory or in the current directory, it is read in and executed as if it had been typed at the debugger prompt. This is particularly useful for aliases. If both files exist, the one in the home directory is read first and aliases defined there can be overridden by the local file. Changed in version 3.2: .pdbrc can now contain commands that continue debugging, such as continue or next. Previously, these commands had no effect. h(elp) [command] Without argument, print the list of available commands. With a command as argument, print help about that command. help pdb displays the full documentation (the docstring of the pdb module). Since the command argument must be an identifier, help exec must be entered to get help on the ! command. w(here) Print a stack trace, with the most recent frame at the bottom. An arrow indicates the current frame, which determines the context of most commands. d(own) [count] Move the current frame count (default one) levels down in the stack trace (to a newer frame). u(p) [count] Move the current frame count (default one) levels up in the stack trace (to an older frame). b(reak) [([filename:]lineno | function) [, condition]] With a lineno argument, set a break there in the current file. With a function argument, set a break at the first executable statement within that function. The line number may be prefixed with a filename and a colon, to specify a breakpoint in another file (probably one that hasn’t been loaded yet). The file is searched on sys.path. Note that each breakpoint is assigned a number to which all the other breakpoint commands refer. If a second argument is present, it is an expression which must evaluate to true before the breakpoint is honored. Without argument, list all breaks, including for each breakpoint, the number of times that breakpoint has been hit, the current ignore count, and the associated condition if any. tbreak [([filename:]lineno | function) [, condition]] Temporary breakpoint, which is removed automatically when it is first hit. The arguments are the same as for break. cl(ear) [filename:lineno | bpnumber ...] With a filename:lineno argument, clear all the breakpoints at this line. With a space separated list of breakpoint numbers, clear those breakpoints. Without argument, clear all breaks (but first ask confirmation). disable [bpnumber ...] Disable the breakpoints given as a space separated list of breakpoint numbers. Disabling a breakpoint means it cannot cause the program to stop execution, but unlike clearing a breakpoint, it remains in the list of breakpoints and can be (re-)enabled. enable [bpnumber ...] Enable the breakpoints specified. ignore bpnumber [count] Set the ignore count for the given breakpoint number. If count is omitted, the ignore count is set to 0. A breakpoint becomes active when the ignore count is zero. 
When non-zero, the count is decremented each time the breakpoint is reached and the breakpoint is not disabled and any associated condition evaluates to true. condition bpnumber [condition] Set a new condition for the breakpoint, an expression which must evaluate to true before the breakpoint is honored. If condition is absent, any existing condition is removed; i.e., the breakpoint is made unconditional. commands [bpnumber] Specify a list of commands for breakpoint number bpnumber. The commands themselves appear on the following lines. Type a line containing just end to terminate the commands. An example: (Pdb) commands 1 (com) p some_variable (com) end (Pdb) To remove all commands from a breakpoint, type commands and follow it immediately with end; that is, give no commands. With no bpnumber argument, commands refers to the last breakpoint set. You can use breakpoint commands to start your program up again. Simply use the continue command, or step, or any other command that resumes execution. Specifying any command resuming execution (currently continue, step, next, return, jump, quit and their abbreviations) terminates the command list (as if that command was immediately followed by end). This is because any time you resume execution (even with a simple next or step), you may encounter another breakpoint—which could have its own command list, leading to ambiguities about which list to execute. If you use the ‘silent’ command in the command list, the usual message about stopping at a breakpoint is not printed. This may be desirable for breakpoints that are to print a specific message and then continue. If none of the other commands print anything, you see no sign that the breakpoint was reached. s(tep) Execute the current line, stop at the first possible occasion (either in a function that is called or on the next line in the current function). n(ext) Continue execution until the next line in the current function is reached or it returns. (The difference between next and step is that step stops inside a called function, while next executes called functions at (nearly) full speed, only stopping at the next line in the current function.) unt(il) [lineno] Without argument, continue execution until the line with a number greater than the current one is reached. With a line number, continue execution until a line with a number greater or equal to that is reached. In both cases, also stop when the current frame returns. Changed in version 3.2: Allow giving an explicit line number. r(eturn) Continue execution until the current function returns. c(ont(inue)) Continue execution, only stop when a breakpoint is encountered. j(ump) lineno Set the next line that will be executed. Only available in the bottom-most frame. This lets you jump back and execute code again, or jump forward to skip code that you don’t want to run. It should be noted that not all jumps are allowed – for instance it is not possible to jump into the middle of a for loop or out of a finally clause. l(ist) [first[, last]] List source code for the current file. Without arguments, list 11 lines around the current line or continue the previous listing. With . as argument, list 11 lines around the current line. With one argument, list 11 lines around at that line. With two arguments, list the given range; if the second argument is less than the first, it is interpreted as a count. The current line in the current frame is indicated by ->. 
If an exception is being debugged, the line where the exception was originally raised or propagated is indicated by >>, if it differs from the current line. New in version 3.2: The >> marker. ll | longlist List all source code for the current function or frame. Interesting lines are marked as for list. New in version 3.2. a(rgs) Print the argument list of the current function. p expression Evaluate the expression in the current context and print its value. Note print() can also be used, but is not a debugger command — this executes the Python print() function. pp expression Like the p command, except the value of the expression is pretty-printed using the pprint module. whatis expression Print the type of the expression. source expression Try to get source code for the given object and display it. New in version 3.2. display [expression] Display the value of the expression if it changed, each time execution stops in the current frame. Without expression, list all display expressions for the current frame. New in version 3.2. undisplay [expression] Do not display the expression any more in the current frame. Without expression, clear all display expressions for the current frame. New in version 3.2. interact Start an interactive interpreter (using the code module) whose global namespace contains all the (global and local) names found in the current scope. New in version 3.2. alias [name [command]] Create an alias called name that executes command. The command must not be enclosed in quotes. Replaceable parameters can be indicated by %1, %2, and so on, while %* is replaced by all the parameters. If no command is given, the current alias for name is shown. If no arguments are given, all aliases are listed. Aliases may be nested and can contain anything that can be legally typed at the pdb prompt. Note that internal pdb commands can be overridden by aliases. Such a command is then hidden until the alias is removed. Aliasing is recursively applied to the first word of the command line; all other words in the line are left alone. As an example, here are two useful aliases (especially when placed in the .pdbrc file): # Print instance variables (usage "pi classInst") alias pi for k in %1.__dict__.keys(): print("%1.",k,"=",%1.__dict__[k]) # Print instance variables in self alias ps pi self unalias name Delete the specified alias. ! statement Execute the (one-line) statement in the context of the current stack frame. The exclamation point can be omitted unless the first word of the statement resembles a debugger command. To set a global variable, you can prefix the assignment command with a global statement on the same line, e.g.: (Pdb) global list_options; list_options = ['-l'] (Pdb) run [args ...] restart [args ...] Restart the debugged Python program. If an argument is supplied, it is split with shlex and the result is used as the new sys.argv. History, breakpoints, actions and debugger options are preserved. restart is an alias for run. q(uit) Quit from the debugger. The program being executed is aborted. debug code Enter a recursive debugger that steps through the code argument (which is an arbitrary expression or statement to be executed in the current environment). retval Print the return value for the last return of a function. Footnotes 1 Whether a frame is considered to originate in a certain module is determined by the __name__ in the frame globals.
python.library.pdb
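A minimal, hypothetical script (debug_demo.py, with made-up functions) showing the break-into-the-debugger workflow described above:

# debug_demo.py
import pdb

def buggy_div(a, b):
    return a / b            # raises ZeroDivisionError when b == 0

def main():
    # Hard-coded breakpoint: execution pauses here with a (Pdb) prompt.
    # 'n' steps to the next line, 'p a' prints a, 'c' continues.
    pdb.set_trace()         # on Python 3.7+, breakpoint() does the same by default
    print(buggy_div(6, 2))

if __name__ == '__main__':
    main()

# Alternatively, run the whole script under the debugger from the shell:
#   python3 -m pdb debug_demo.py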
class pdb.Pdb(completekey='tab', stdin=None, stdout=None, skip=None, nosigint=False, readrc=True) Pdb is the debugger class. The completekey, stdin and stdout arguments are passed to the underlying cmd.Cmd class; see the description there. The skip argument, if given, must be an iterable of glob-style module name patterns. The debugger will not step into frames that originate in a module that matches one of these patterns. 1 By default, Pdb sets a handler for the SIGINT signal (which is sent when the user presses Ctrl-C on the console) when you give a continue command. This allows you to break into the debugger again by pressing Ctrl-C. If you want Pdb not to touch the SIGINT handler, set nosigint to true. The readrc argument defaults to true and controls whether Pdb will load .pdbrc files from the filesystem. Example call to enable tracing with skip: import pdb; pdb.Pdb(skip=['django.*']).set_trace() Raises an auditing event pdb.Pdb with no arguments. New in version 3.1: The skip argument. New in version 3.2: The nosigint argument. Previously, a SIGINT handler was never set by Pdb. Changed in version 3.6: The readrc argument. run(statement, globals=None, locals=None) runeval(expression, globals=None, locals=None) runcall(function, *args, **kwds) set_trace() See the documentation for the functions explained above.
python.library.pdb#pdb.Pdb
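A sketch of instantiating Pdb directly, as suggested above for accessing features beyond the module-level aliases; the compute() function and the skip pattern are illustrative only:

import pdb

def compute(x, y):
    return x * y + 1

# Do not step into importlib frames, and leave the SIGINT handler alone.
debugger = pdb.Pdb(skip=['importlib*'], nosigint=True)

# The prompt appears as soon as compute() is entered; its return value is
# handed back once you continue or return.
result = debugger.runcall(compute, 6, 7)
print(result)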
run(statement, globals=None, locals=None) runeval(expression, globals=None, locals=None) runcall(function, *args, **kwds) set_trace() See the documentation for the functions explained above.
python.library.pdb#pdb.Pdb.run
run(statement, globals=None, locals=None) runeval(expression, globals=None, locals=None) runcall(function, *args, **kwds) set_trace() See the documentation for the functions explained above.
python.library.pdb#pdb.Pdb.runcall
run(statement, globals=None, locals=None) runeval(expression, globals=None, locals=None) runcall(function, *args, **kwds) set_trace() See the documentation for the functions explained above.
python.library.pdb#pdb.Pdb.runeval
run(statement, globals=None, locals=None) runeval(expression, globals=None, locals=None) runcall(function, *args, **kwds) set_trace() See the documentation for the functions explained above.
python.library.pdb#pdb.Pdb.set_trace
pdb.pm() Enter post-mortem debugging of the traceback found in sys.last_traceback.
python.library.pdb#pdb.pm
pdb.post_mortem(traceback=None) Enter post-mortem debugging of the given traceback object. If no traceback is given, it uses the one of the exception that is currently being handled (an exception must be being handled if the default is to be used).
python.library.pdb#pdb.post_mortem
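A small sketch of post-mortem debugging from an except block; fail() is a made-up function that raises:

import pdb
import sys

def fail():
    return {}['missing']    # raises KeyError

try:
    fail()
except Exception:
    # Pass the traceback of the exception currently being handled; calling
    # pdb.post_mortem() with no argument here would pick it up implicitly.
    pdb.post_mortem(sys.exc_info()[2])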
pdb.run(statement, globals=None, locals=None) Execute the statement (given as a string or a code object) under debugger control. The debugger prompt appears before any code is executed; you can set breakpoints and type continue, or you can step through the statement using step or next (all these commands are explained below). The optional globals and locals arguments specify the environment in which the code is executed; by default the dictionary of the module __main__ is used. (See the explanation of the built-in exec() or eval() functions.)
python.library.pdb#pdb.run
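A sketch of pdb.run() with explicit namespaces; factorial() is defined only for this example:

import pdb

def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)

# The statement is a string executed under debugger control in the given
# namespaces; type 'step' at the (Pdb) prompt to descend into factorial(),
# or 'continue' to let it run to completion.
pdb.run('result = factorial(5)', globals(), locals())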
pdb.runcall(function, *args, **kwds) Call the function (a function or method object, not a string) with the given arguments. When runcall() returns, it returns whatever the function call returned. The debugger prompt appears as soon as the function is entered.
python.library.pdb#pdb.runcall
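A sketch of pdb.runcall(); greet() is a made-up function:

import pdb

def greet(name, punctuation='!'):
    return 'Hello, ' + name + punctuation

# The debugger stops on the first line of greet(); whatever greet() returns
# is passed back to the caller after you continue.
message = pdb.runcall(greet, 'world', punctuation='?')
print(message)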
pdb.runeval(expression, globals=None, locals=None) Evaluate the expression (given as a string or a code object) under debugger control. When runeval() returns, it returns the value of the expression. Otherwise this function is similar to run().
python.library.pdb#pdb.runeval
pdb.set_trace(*, header=None) Enter the debugger at the calling stack frame. This is useful to hard-code a breakpoint at a given point in a program, even if the code is not otherwise being debugged (e.g. when an assertion fails). If given, header is printed to the console just before debugging begins. Changed in version 3.7: The keyword-only argument header.
python.library.pdb#pdb.set_trace
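A sketch of a conditional hard-coded breakpoint using the header argument (Python 3.7+); process() and the sample data are invented:

import pdb

def process(items):
    total = 0
    for item in items:
        if item < 0:
            # Only break for the suspicious case; the header explains why
            # the prompt appeared.
            pdb.set_trace(header='negative item encountered')
        total += item
    return total

print(process([3, 1, -2, 5]))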
exception PendingDeprecationWarning Base class for warnings about features which are obsolete and expected to be deprecated in the future, but are not deprecated at the moment. This class is rarely used as emitting a warning about a possible upcoming deprecation is unusual, and DeprecationWarning is preferred for already active deprecations. Ignored by the default warning filters. Enabling the Python Development Mode shows this warning.
python.library.exceptions#PendingDeprecationWarning
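A short sketch of emitting this warning from a hypothetical old_helper() function; note that it stays hidden unless a filter enables it:

import warnings

def old_helper():
    warnings.warn('old_helper() may be deprecated in a future release',
                  PendingDeprecationWarning, stacklevel=2)
    return 42

# The default filters ignore PendingDeprecationWarning, so opt in explicitly.
warnings.simplefilter('always', PendingDeprecationWarning)
old_helper()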
exception PermissionError Raised when trying to run an operation without the adequate access rights - for example filesystem permissions. Corresponds to errno EACCES and EPERM.
python.library.exceptions#PermissionError
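A small sketch of handling this exception and checking its errno; the path is only an example, and on systems where it does not exist a FileNotFoundError would be raised instead:

import errno

try:
    with open('/etc/shadow') as f:   # usually not readable by ordinary users
        f.read()
except PermissionError as exc:
    # errno is typically EACCES (or EPERM) for this exception.
    print('no access:', exc.errno in (errno.EACCES, errno.EPERM))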
pickle — Python object serialization Source code: Lib/pickle.py The pickle module implements binary protocols for serializing and de-serializing a Python object structure. “Pickling” is the process whereby a Python object hierarchy is converted into a byte stream, and “unpickling” is the inverse operation, whereby a byte stream (from a binary file or bytes-like object) is converted back into an object hierarchy. Pickling (and unpickling) is alternatively known as “serialization”, “marshalling,” 1 or “flattening”; however, to avoid confusion, the terms used here are “pickling” and “unpickling”. Warning The pickle module is not secure. Only unpickle data you trust. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Never unpickle data that could have come from an untrusted source, or that could have been tampered with. Consider signing data with hmac if you need to ensure that it has not been tampered with. Safer serialization formats such as json may be more appropriate if you are processing untrusted data. See Comparison with json. Relationship to other Python modules Comparison with marshal Python has a more primitive serialization module called marshal, but in general pickle should always be the preferred way to serialize Python objects. marshal exists primarily to support Python’s .pyc files. The pickle module differs from marshal in several significant ways: The pickle module keeps track of the objects it has already serialized, so that later references to the same object won’t be serialized again. marshal doesn’t do this. This has implications both for recursive objects and object sharing. Recursive objects are objects that contain references to themselves. These are not handled by marshal, and in fact, attempting to marshal recursive objects will crash your Python interpreter. Object sharing happens when there are multiple references to the same object in different places in the object hierarchy being serialized. pickle stores such objects only once, and ensures that all other references point to the master copy. Shared objects remain shared, which can be very important for mutable objects. marshal cannot be used to serialize user-defined classes and their instances. pickle can save and restore class instances transparently, however the class definition must be importable and live in the same module as when the object was stored. The marshal serialization format is not guaranteed to be portable across Python versions. Because its primary job in life is to support .pyc files, the Python implementers reserve the right to change the serialization format in non-backwards compatible ways should the need arise. The pickle serialization format is guaranteed to be backwards compatible across Python releases provided a compatible pickle protocol is chosen and pickling and unpickling code deals with Python 2 to Python 3 type differences if your data is crossing that unique breaking change language boundary. 
Comparison with json There are fundamental differences between the pickle protocols and JSON (JavaScript Object Notation): JSON is a text serialization format (it outputs unicode text, although most of the time it is then encoded to utf-8), while pickle is a binary serialization format; JSON is human-readable, while pickle is not; JSON is interoperable and widely used outside of the Python ecosystem, while pickle is Python-specific; JSON, by default, can only represent a subset of the Python built-in types, and no custom classes; pickle can represent an extremely large number of Python types (many of them automatically, by clever usage of Python’s introspection facilities; complex cases can be tackled by implementing specific object APIs); Unlike pickle, deserializing untrusted JSON does not in itself create an arbitrary code execution vulnerability. See also The json module: a standard library module allowing JSON serialization and deserialization. Data stream format The data format used by pickle is Python-specific. This has the advantage that there are no restrictions imposed by external standards such as JSON or XDR (which can’t represent pointer sharing); however it means that non-Python programs may not be able to reconstruct pickled Python objects. By default, the pickle data format uses a relatively compact binary representation. If you need optimal size characteristics, you can efficiently compress pickled data. The module pickletools contains tools for analyzing data streams generated by pickle. pickletools source code has extensive comments about opcodes used by pickle protocols. There are currently 6 different protocols which can be used for pickling. The higher the protocol used, the more recent the version of Python needed to read the pickle produced. Protocol version 0 is the original “human-readable” protocol and is backwards compatible with earlier versions of Python. Protocol version 1 is an old binary format which is also compatible with earlier versions of Python. Protocol version 2 was introduced in Python 2.3. It provides much more efficient pickling of new-style classes. Refer to PEP 307 for information about improvements brought by protocol 2. Protocol version 3 was added in Python 3.0. It has explicit support for bytes objects and cannot be unpickled by Python 2.x. This was the default protocol in Python 3.0–3.7. Protocol version 4 was added in Python 3.4. It adds support for very large objects, pickling more kinds of objects, and some data format optimizations. It is the default protocol starting with Python 3.8. Refer to PEP 3154 for information about improvements brought by protocol 4. Protocol version 5 was added in Python 3.8. It adds support for out-of-band data and speedup for in-band data. Refer to PEP 574 for information about improvements brought by protocol 5. Note Serialization is a more primitive notion than persistence; although pickle reads and writes file objects, it does not handle the issue of naming persistent objects, nor the (even more complicated) issue of concurrent access to persistent objects. The pickle module can transform a complex object into a byte stream and it can transform the byte stream into an object with the same internal structure. Perhaps the most obvious thing to do with these byte streams is to write them onto a file, but it is also conceivable to send them across a network or store them in a database. The shelve module provides a simple interface to pickle and unpickle objects on DBM-style database files. 
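A quick sketch of choosing a protocol explicitly, using the constants described in the module interface that follows; the sample data is arbitrary:

import pickle

data = {'answer': 42, 'items': [1, 2, 3]}

# Newest protocol this interpreter supports; readers on older Python
# versions may need a lower protocol number.
blob = pickle.dumps(data, protocol=pickle.HIGHEST_PROTOCOL)
print(pickle.DEFAULT_PROTOCOL, pickle.HIGHEST_PROTOCOL, len(blob))
print(pickle.loads(blob) == data)   # True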
Module Interface To serialize an object hierarchy, you simply call the dumps() function. Similarly, to de-serialize a data stream, you call the loads() function. However, if you want more control over serialization and de-serialization, you can create a Pickler or an Unpickler object, respectively. The pickle module provides the following constants: pickle.HIGHEST_PROTOCOL An integer, the highest protocol version available. This value can be passed as a protocol value to functions dump() and dumps() as well as the Pickler constructor. pickle.DEFAULT_PROTOCOL An integer, the default protocol version used for pickling. May be less than HIGHEST_PROTOCOL. Currently the default protocol is 4, first introduced in Python 3.4 and incompatible with previous versions. Changed in version 3.0: The default protocol is 3. Changed in version 3.8: The default protocol is 4. The pickle module provides the following functions to make the pickling process more convenient: pickle.dump(obj, file, protocol=None, *, fix_imports=True, buffer_callback=None) Write the pickled representation of the object obj to the open file object file. This is equivalent to Pickler(file, protocol).dump(obj). Arguments file, protocol, fix_imports and buffer_callback have the same meaning as in the Pickler constructor. Changed in version 3.8: The buffer_callback argument was added. pickle.dumps(obj, protocol=None, *, fix_imports=True, buffer_callback=None) Return the pickled representation of the object obj as a bytes object, instead of writing it to a file. Arguments protocol, fix_imports and buffer_callback have the same meaning as in the Pickler constructor. Changed in version 3.8: The buffer_callback argument was added. pickle.load(file, *, fix_imports=True, encoding="ASCII", errors="strict", buffers=None) Read the pickled representation of an object from the open file object file and return the reconstituted object hierarchy specified therein. This is equivalent to Unpickler(file).load(). The protocol version of the pickle is detected automatically, so no protocol argument is needed. Bytes past the pickled representation of the object are ignored. Arguments file, fix_imports, encoding, errors and buffers have the same meaning as in the Unpickler constructor. Changed in version 3.8: The buffers argument was added. pickle.loads(data, /, *, fix_imports=True, encoding="ASCII", errors="strict", buffers=None) Return the reconstituted object hierarchy of the pickled representation data of an object. data must be a bytes-like object. The protocol version of the pickle is detected automatically, so no protocol argument is needed. Bytes past the pickled representation of the object are ignored. Arguments fix_imports, encoding, errors and buffers have the same meaning as in the Unpickler constructor. Changed in version 3.8: The buffers argument was added. The pickle module defines three exceptions: exception pickle.PickleError Common base class for the other pickling exceptions. It inherits Exception. exception pickle.PicklingError Error raised when an unpicklable object is encountered by Pickler. It inherits PickleError. Refer to What can be pickled and unpickled? to learn what kinds of objects can be pickled. exception pickle.UnpicklingError Error raised when there is a problem unpickling an object, such as a data corruption or a security violation. It inherits PickleError. 
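A minimal round-trip sketch with the four convenience functions above; the file name record.pickle is arbitrary:

import pickle

record = {'name': 'example', 'values': (1, 2.5, None)}

# In-memory round trip.
assert pickle.loads(pickle.dumps(record)) == record

# File-based round trip; the file must be opened in binary mode because
# pickle is a binary format.
with open('record.pickle', 'wb') as f:
    pickle.dump(record, f)
with open('record.pickle', 'rb') as f:
    restored = pickle.load(f)
print(restored == record)   # True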
Note that other exceptions may also be raised during unpickling, including (but not necessarily limited to) AttributeError, EOFError, ImportError, and IndexError. The pickle module exports three classes, Pickler, Unpickler and PickleBuffer: class pickle.Pickler(file, protocol=None, *, fix_imports=True, buffer_callback=None) This takes a binary file for writing a pickle data stream. The optional protocol argument, an integer, tells the pickler to use the given protocol; supported protocols are 0 to HIGHEST_PROTOCOL. If not specified, the default is DEFAULT_PROTOCOL. If a negative number is specified, HIGHEST_PROTOCOL is selected. The file argument must have a write() method that accepts a single bytes argument. It can thus be an on-disk file opened for binary writing, an io.BytesIO instance, or any other custom object that meets this interface. If fix_imports is true and protocol is less than 3, pickle will try to map the new Python 3 names to the old module names used in Python 2, so that the pickle data stream is readable with Python 2. If buffer_callback is None (the default), buffer views are serialized into file as part of the pickle stream. If buffer_callback is not None, then it can be called any number of times with a buffer view. If the callback returns a false value (such as None), the given buffer is out-of-band; otherwise the buffer is serialized in-band, i.e. inside the pickle stream. It is an error if buffer_callback is not None and protocol is None or smaller than 5. Changed in version 3.8: The buffer_callback argument was added. dump(obj) Write the pickled representation of obj to the open file object given in the constructor. persistent_id(obj) Do nothing by default. This exists so a subclass can override it. If persistent_id() returns None, obj is pickled as usual. Any other value causes Pickler to emit the returned value as a persistent ID for obj. The meaning of this persistent ID should be defined by Unpickler.persistent_load(). Note that the value returned by persistent_id() cannot itself have a persistent ID. See Persistence of External Objects for details and examples of uses. dispatch_table A pickler object’s dispatch table is a registry of reduction functions of the kind which can be declared using copyreg.pickle(). It is a mapping whose keys are classes and whose values are reduction functions. A reduction function takes a single argument of the associated class and should conform to the same interface as a __reduce__() method. By default, a pickler object will not have a dispatch_table attribute, and it will instead use the global dispatch table managed by the copyreg module. However, to customize the pickling for a specific pickler object one can set the dispatch_table attribute to a dict-like object. Alternatively, if a subclass of Pickler has a dispatch_table attribute then this will be used as the default dispatch table for instances of that class. See Dispatch Tables for usage examples. New in version 3.3. reducer_override(self, obj) Special reducer that can be defined in Pickler subclasses. This method has priority over any reducer in the dispatch_table. It should conform to the same interface as a __reduce__() method, and can optionally return NotImplemented to fallback on dispatch_table-registered reducers to pickle obj. For a detailed example, see Custom Reduction for Types, Functions, and Other Objects. New in version 3.8. fast Deprecated. Enable fast mode if set to a true value. 
The fast mode disables the usage of memo, therefore speeding the pickling process by not generating superfluous PUT opcodes. It should not be used with self-referential objects, doing otherwise will cause Pickler to recurse infinitely. Use pickletools.optimize() if you need more compact pickles. class pickle.Unpickler(file, *, fix_imports=True, encoding="ASCII", errors="strict", buffers=None) This takes a binary file for reading a pickle data stream. The protocol version of the pickle is detected automatically, so no protocol argument is needed. The argument file must have three methods, a read() method that takes an integer argument, a readinto() method that takes a buffer argument and a readline() method that requires no arguments, as in the io.BufferedIOBase interface. Thus file can be an on-disk file opened for binary reading, an io.BytesIO object, or any other custom object that meets this interface. The optional arguments fix_imports, encoding and errors are used to control compatibility support for pickle stream generated by Python 2. If fix_imports is true, pickle will try to map the old Python 2 names to the new names used in Python 3. The encoding and errors tell pickle how to decode 8-bit string instances pickled by Python 2; these default to ‘ASCII’ and ‘strict’, respectively. The encoding can be ‘bytes’ to read these 8-bit string instances as bytes objects. Using encoding='latin1' is required for unpickling NumPy arrays and instances of datetime, date and time pickled by Python 2. If buffers is None (the default), then all data necessary for deserialization must be contained in the pickle stream. This means that the buffer_callback argument was None when a Pickler was instantiated (or when dump() or dumps() was called). If buffers is not None, it should be an iterable of buffer-enabled objects that is consumed each time the pickle stream references an out-of-band buffer view. Such buffers have been given in order to the buffer_callback of a Pickler object. Changed in version 3.8: The buffers argument was added. load() Read the pickled representation of an object from the open file object given in the constructor, and return the reconstituted object hierarchy specified therein. Bytes past the pickled representation of the object are ignored. persistent_load(pid) Raise an UnpicklingError by default. If defined, persistent_load() should return the object specified by the persistent ID pid. If an invalid persistent ID is encountered, an UnpicklingError should be raised. See Persistence of External Objects for details and examples of uses. find_class(module, name) Import module if necessary and return the object called name from it, where the module and name arguments are str objects. Note, unlike its name suggests, find_class() is also used for finding functions. Subclasses may override this to gain control over what type of objects and how they can be loaded, potentially reducing security risks. Refer to Restricting Globals for details. Raises an auditing event pickle.find_class with arguments module, name. class pickle.PickleBuffer(buffer) A wrapper for a buffer representing picklable data. buffer must be a buffer-providing object, such as a bytes-like object or a N-dimensional array. PickleBuffer is itself a buffer provider, therefore it is possible to pass it to other APIs expecting a buffer-providing object, such as memoryview. PickleBuffer objects can only be serialized using pickle protocol 5 or higher. They are eligible for out-of-band serialization. New in version 3.8. 
raw() Return a memoryview of the memory area underlying this buffer. The returned object is a one-dimensional, C-contiguous memoryview with format B (unsigned bytes). BufferError is raised if the buffer is neither C- nor Fortran-contiguous. release() Release the underlying buffer exposed by the PickleBuffer object. What can be pickled and unpickled? The following types can be pickled: None, True, and False integers, floating point numbers, complex numbers strings, bytes, bytearrays tuples, lists, sets, and dictionaries containing only picklable objects functions defined at the top level of a module (using def, not lambda) built-in functions defined at the top level of a module classes that are defined at the top level of a module instances of such classes whose __dict__ or the result of calling __getstate__() is picklable (see section Pickling Class Instances for details). Attempts to pickle unpicklable objects will raise the PicklingError exception; when this happens, an unspecified number of bytes may have already been written to the underlying file. Trying to pickle a highly recursive data structure may exceed the maximum recursion depth, a RecursionError will be raised in this case. You can carefully raise this limit with sys.setrecursionlimit(). Note that functions (built-in and user-defined) are pickled by “fully qualified” name reference, not by value. 2 This means that only the function name is pickled, along with the name of the module the function is defined in. Neither the function’s code, nor any of its function attributes are pickled. Thus the defining module must be importable in the unpickling environment, and the module must contain the named object, otherwise an exception will be raised. 3 Similarly, classes are pickled by named reference, so the same restrictions in the unpickling environment apply. Note that none of the class’s code or data is pickled, so in the following example the class attribute attr is not restored in the unpickling environment: class Foo: attr = 'A class attribute' picklestring = pickle.dumps(Foo) These restrictions are why picklable functions and classes must be defined in the top level of a module. Similarly, when class instances are pickled, their class’s code and data are not pickled along with them. Only the instance data are pickled. This is done on purpose, so you can fix bugs in a class or add methods to the class and still load objects that were created with an earlier version of the class. If you plan to have long-lived objects that will see many versions of a class, it may be worthwhile to put a version number in the objects so that suitable conversions can be made by the class’s __setstate__() method. Pickling Class Instances In this section, we describe the general mechanisms available to you to define, customize, and control how class instances are pickled and unpickled. In most cases, no additional code is needed to make instances picklable. By default, pickle will retrieve the class and the attributes of an instance via introspection. When a class instance is unpickled, its __init__() method is usually not invoked. The default behaviour first creates an uninitialized instance and then restores the saved attributes. 
The following code shows an implementation of this behaviour: def save(obj): return (obj.__class__, obj.__dict__) def load(cls, attributes): obj = cls.__new__(cls) obj.__dict__.update(attributes) return obj Classes can alter the default behaviour by providing one or several special methods: object.__getnewargs_ex__() In protocols 2 and newer, classes that implements the __getnewargs_ex__() method can dictate the values passed to the __new__() method upon unpickling. The method must return a pair (args, kwargs) where args is a tuple of positional arguments and kwargs a dictionary of named arguments for constructing the object. Those will be passed to the __new__() method upon unpickling. You should implement this method if the __new__() method of your class requires keyword-only arguments. Otherwise, it is recommended for compatibility to implement __getnewargs__(). Changed in version 3.6: __getnewargs_ex__() is now used in protocols 2 and 3. object.__getnewargs__() This method serves a similar purpose as __getnewargs_ex__(), but supports only positional arguments. It must return a tuple of arguments args which will be passed to the __new__() method upon unpickling. __getnewargs__() will not be called if __getnewargs_ex__() is defined. Changed in version 3.6: Before Python 3.6, __getnewargs__() was called instead of __getnewargs_ex__() in protocols 2 and 3. object.__getstate__() Classes can further influence how their instances are pickled; if the class defines the method __getstate__(), it is called and the returned object is pickled as the contents for the instance, instead of the contents of the instance’s dictionary. If the __getstate__() method is absent, the instance’s __dict__ is pickled as usual. object.__setstate__(state) Upon unpickling, if the class defines __setstate__(), it is called with the unpickled state. In that case, there is no requirement for the state object to be a dictionary. Otherwise, the pickled state must be a dictionary and its items are assigned to the new instance’s dictionary. Note If __getstate__() returns a false value, the __setstate__() method will not be called upon unpickling. Refer to the section Handling Stateful Objects for more information about how to use the methods __getstate__() and __setstate__(). Note At unpickling time, some methods like __getattr__(), __getattribute__(), or __setattr__() may be called upon the instance. In case those methods rely on some internal invariant being true, the type should implement __new__() to establish such an invariant, as __init__() is not called when unpickling an instance. As we shall see, pickle does not use directly the methods described above. In fact, these methods are part of the copy protocol which implements the __reduce__() special method. The copy protocol provides a unified interface for retrieving the data necessary for pickling and copying objects. 4 Although powerful, implementing __reduce__() directly in your classes is error prone. For this reason, class designers should use the high-level interface (i.e., __getnewargs_ex__(), __getstate__() and __setstate__()) whenever possible. We will show, however, cases where using __reduce__() is the only option or leads to more efficient pickling or both. object.__reduce__() The interface is currently defined as follows. The __reduce__() method takes no argument and shall return either a string or preferably a tuple (the returned object is often referred to as the “reduce value”). 
If a string is returned, the string should be interpreted as the name of a global variable. It should be the object’s local name relative to its module; the pickle module searches the module namespace to determine the object’s module. This behaviour is typically useful for singletons. When a tuple is returned, it must be between two and six items long. Optional items can either be omitted, or None can be provided as their value. The semantics of each item are in order: A callable object that will be called to create the initial version of the object. A tuple of arguments for the callable object. An empty tuple must be given if the callable does not accept any argument. Optionally, the object’s state, which will be passed to the object’s __setstate__() method as previously described. If the object has no such method then, the value must be a dictionary and it will be added to the object’s __dict__ attribute. Optionally, an iterator (and not a sequence) yielding successive items. These items will be appended to the object either using obj.append(item) or, in batch, using obj.extend(list_of_items). This is primarily used for list subclasses, but may be used by other classes as long as they have append() and extend() methods with the appropriate signature. (Whether append() or extend() is used depends on which pickle protocol version is used as well as the number of items to append, so both must be supported.) Optionally, an iterator (not a sequence) yielding successive key-value pairs. These items will be stored to the object using obj[key] = value. This is primarily used for dictionary subclasses, but may be used by other classes as long as they implement __setitem__(). Optionally, a callable with a (obj, state) signature. This callable allows the user to programmatically control the state-updating behavior of a specific object, instead of using obj’s static __setstate__() method. If not None, this callable will have priority over obj’s __setstate__(). New in version 3.8: The optional sixth tuple item, (obj, state), was added. object.__reduce_ex__(protocol) Alternatively, a __reduce_ex__() method may be defined. The only difference is this method should take a single integer argument, the protocol version. When defined, pickle will prefer it over the __reduce__() method. In addition, __reduce__() automatically becomes a synonym for the extended version. The main use for this method is to provide backwards-compatible reduce values for older Python releases. Persistence of External Objects For the benefit of object persistence, the pickle module supports the notion of a reference to an object outside the pickled data stream. Such objects are referenced by a persistent ID, which should be either a string of alphanumeric characters (for protocol 0) 5 or just an arbitrary object (for any newer protocol). The resolution of such persistent IDs is not defined by the pickle module; it will delegate this resolution to the user-defined methods on the pickler and unpickler, persistent_id() and persistent_load() respectively. To pickle objects that have an external persistent ID, the pickler must have a custom persistent_id() method that takes an object as an argument and returns either None or the persistent ID for that object. When None is returned, the pickler simply pickles the object as normal. When a persistent ID string is returned, the pickler will pickle that object, along with a marker so that the unpickler will recognize it as a persistent ID. 
To unpickle external objects, the unpickler must have a custom persistent_load() method that takes a persistent ID object and returns the referenced object. Here is a comprehensive example presenting how persistent ID can be used to pickle external objects by reference. # Simple example presenting how persistent ID can be used to pickle # external objects by reference. import pickle import sqlite3 from collections import namedtuple # Simple class representing a record in our database. MemoRecord = namedtuple("MemoRecord", "key, task") class DBPickler(pickle.Pickler): def persistent_id(self, obj): # Instead of pickling MemoRecord as a regular class instance, we emit a # persistent ID. if isinstance(obj, MemoRecord): # Here, our persistent ID is simply a tuple, containing a tag and a # key, which refers to a specific record in the database. return ("MemoRecord", obj.key) else: # If obj does not have a persistent ID, return None. This means obj # needs to be pickled as usual. return None class DBUnpickler(pickle.Unpickler): def __init__(self, file, connection): super().__init__(file) self.connection = connection def persistent_load(self, pid): # This method is invoked whenever a persistent ID is encountered. # Here, pid is the tuple returned by DBPickler. cursor = self.connection.cursor() type_tag, key_id = pid if type_tag == "MemoRecord": # Fetch the referenced record from the database and return it. cursor.execute("SELECT * FROM memos WHERE key=?", (str(key_id),)) key, task = cursor.fetchone() return MemoRecord(key, task) else: # Always raises an error if you cannot return the correct object. # Otherwise, the unpickler will think None is the object referenced # by the persistent ID. raise pickle.UnpicklingError("unsupported persistent object") def main(): import io import pprint # Initialize and populate our database. conn = sqlite3.connect(":memory:") cursor = conn.cursor() cursor.execute("CREATE TABLE memos(key INTEGER PRIMARY KEY, task TEXT)") tasks = ( 'give food to fish', 'prepare group meeting', 'fight with a zebra', ) for task in tasks: cursor.execute("INSERT INTO memos VALUES(NULL, ?)", (task,)) # Fetch the records to be pickled. cursor.execute("SELECT * FROM memos") memos = [MemoRecord(key, task) for key, task in cursor] # Save the records using our custom DBPickler. file = io.BytesIO() DBPickler(file).dump(memos) print("Pickled records:") pprint.pprint(memos) # Update a record, just for good measure. cursor.execute("UPDATE memos SET task='learn italian' WHERE key=1") # Load the records from the pickle data stream. file.seek(0) memos = DBUnpickler(file, conn).load() print("Unpickled records:") pprint.pprint(memos) if __name__ == '__main__': main() Dispatch Tables If one wants to customize pickling of some classes without disturbing any other code which depends on pickling, then one can create a pickler with a private dispatch table. The global dispatch table managed by the copyreg module is available as copyreg.dispatch_table. Therefore, one may choose to use a modified copy of copyreg.dispatch_table as a private dispatch table. For example f = io.BytesIO() p = pickle.Pickler(f) p.dispatch_table = copyreg.dispatch_table.copy() p.dispatch_table[SomeClass] = reduce_SomeClass creates an instance of pickle.Pickler with a private dispatch table which handles the SomeClass class specially. 
Alternatively, the code class MyPickler(pickle.Pickler): dispatch_table = copyreg.dispatch_table.copy() dispatch_table[SomeClass] = reduce_SomeClass f = io.BytesIO() p = MyPickler(f) does the same, but all instances of MyPickler will by default share the same dispatch table. The equivalent code using the copyreg module is copyreg.pickle(SomeClass, reduce_SomeClass) f = io.BytesIO() p = pickle.Pickler(f) Handling Stateful Objects Here’s an example that shows how to modify pickling behavior for a class. The TextReader class opens a text file, and returns the line number and line contents each time its readline() method is called. If a TextReader instance is pickled, all attributes except the file object member are saved. When the instance is unpickled, the file is reopened, and reading resumes from the last location. The __setstate__() and __getstate__() methods are used to implement this behavior. class TextReader: """Print and number lines in a text file.""" def __init__(self, filename): self.filename = filename self.file = open(filename) self.lineno = 0 def readline(self): self.lineno += 1 line = self.file.readline() if not line: return None if line.endswith('\n'): line = line[:-1] return "%i: %s" % (self.lineno, line) def __getstate__(self): # Copy the object's state from self.__dict__ which contains # all our instance attributes. Always use the dict.copy() # method to avoid modifying the original state. state = self.__dict__.copy() # Remove the unpicklable entries. del state['file'] return state def __setstate__(self, state): # Restore instance attributes (i.e., filename and lineno). self.__dict__.update(state) # Restore the previously opened file's state. To do so, we need to # reopen it and read from it until the line count is restored. file = open(self.filename) for _ in range(self.lineno): file.readline() # Finally, save the file. self.file = file A sample usage might be something like this: >>> reader = TextReader("hello.txt") >>> reader.readline() '1: Hello world!' >>> reader.readline() '2: I am line number two.' >>> new_reader = pickle.loads(pickle.dumps(reader)) >>> new_reader.readline() '3: Goodbye!' Custom Reduction for Types, Functions, and Other Objects New in version 3.8. Sometimes, dispatch_table may not be flexible enough. In particular we may want to customize pickling based on another criterion than the object’s type, or we may want to customize the pickling of functions and classes. For those cases, it is possible to subclass from the Pickler class and implement a reducer_override() method. This method can return an arbitrary reduction tuple (see __reduce__()). It can alternatively return NotImplemented to fallback to the traditional behavior. If both the dispatch_table and reducer_override() are defined, then reducer_override() method takes priority. Note For performance reasons, reducer_override() may not be called for the following objects: None, True, False, and exact instances of int, float, bytes, str, dict, set, frozenset, list and tuple. 
Here is a simple example where we allow pickling and reconstructing a given class: import io import pickle class MyClass: my_attribute = 1 class MyPickler(pickle.Pickler): def reducer_override(self, obj): """Custom reducer for MyClass.""" if getattr(obj, "__name__", None) == "MyClass": return type, (obj.__name__, obj.__bases__, {'my_attribute': obj.my_attribute}) else: # For any other object, fallback to usual reduction return NotImplemented f = io.BytesIO() p = MyPickler(f) p.dump(MyClass) del MyClass unpickled_class = pickle.loads(f.getvalue()) assert isinstance(unpickled_class, type) assert unpickled_class.__name__ == "MyClass" assert unpickled_class.my_attribute == 1 Out-of-band Buffers New in version 3.8. In some contexts, the pickle module is used to transfer massive amounts of data. Therefore, it can be important to minimize the number of memory copies, to preserve performance and resource consumption. However, normal operation of the pickle module, as it transforms a graph-like structure of objects into a sequential stream of bytes, intrinsically involves copying data to and from the pickle stream. This constraint can be eschewed if both the provider (the implementation of the object types to be transferred) and the consumer (the implementation of the communications system) support the out-of-band transfer facilities provided by pickle protocol 5 and higher. Provider API The large data objects to be pickled must implement a __reduce_ex__() method specialized for protocol 5 and higher, which returns a PickleBuffer instance (instead of e.g. a bytes object) for any large data. A PickleBuffer object signals that the underlying buffer is eligible for out-of-band data transfer. Those objects remain compatible with normal usage of the pickle module. However, consumers can also opt-in to tell pickle that they will handle those buffers by themselves. Consumer API A communications system can enable custom handling of the PickleBuffer objects generated when serializing an object graph. On the sending side, it needs to pass a buffer_callback argument to Pickler (or to the dump() or dumps() function), which will be called with each PickleBuffer generated while pickling the object graph. Buffers accumulated by the buffer_callback will not see their data copied into the pickle stream, only a cheap marker will be inserted. On the receiving side, it needs to pass a buffers argument to Unpickler (or to the load() or loads() function), which is an iterable of the buffers which were passed to buffer_callback. That iterable should produce buffers in the same order as they were passed to buffer_callback. Those buffers will provide the data expected by the reconstructors of the objects whose pickling produced the original PickleBuffer objects. Between the sending side and the receiving side, the communications system is free to implement its own transfer mechanism for out-of-band buffers. Potential optimizations include the use of shared memory or datatype-dependent compression. Example Here is a trivial example where we implement a bytearray subclass able to participate in out-of-band buffer pickling: class ZeroCopyByteArray(bytearray): def __reduce_ex__(self, protocol): if protocol >= 5: return type(self)._reconstruct, (PickleBuffer(self),), None else: # PickleBuffer is forbidden with pickle protocols <= 4. 
return type(self)._reconstruct, (bytearray(self),) @classmethod def _reconstruct(cls, obj): with memoryview(obj) as m: # Get a handle over the original buffer object obj = m.obj if type(obj) is cls: # Original buffer object is a ZeroCopyByteArray, return it # as-is. return obj else: return cls(obj) The reconstructor (the _reconstruct class method) returns the buffer’s providing object if it has the right type. This is an easy way to simulate zero-copy behaviour on this toy example. On the consumer side, we can pickle those objects the usual way, which when unserialized will give us a copy of the original object: b = ZeroCopyByteArray(b"abc") data = pickle.dumps(b, protocol=5) new_b = pickle.loads(data) print(b == new_b) # True print(b is new_b) # False: a copy was made But if we pass a buffer_callback and then give back the accumulated buffers when unserializing, we are able to get back the original object: b = ZeroCopyByteArray(b"abc") buffers = [] data = pickle.dumps(b, protocol=5, buffer_callback=buffers.append) new_b = pickle.loads(data, buffers=buffers) print(b == new_b) # True print(b is new_b) # True: no copy was made This example is limited by the fact that bytearray allocates its own memory: you cannot create a bytearray instance that is backed by another object’s memory. However, third-party datatypes such as NumPy arrays do not have this limitation, and allow use of zero-copy pickling (or making as few copies as possible) when transferring between distinct processes or systems. See also PEP 574 – Pickle protocol 5 with out-of-band data Restricting Globals By default, unpickling will import any class or function that it finds in the pickle data. For many applications, this behaviour is unacceptable as it permits the unpickler to import and invoke arbitrary code. Just consider what this hand-crafted pickle data stream does when loaded: >>> import pickle >>> pickle.loads(b"cos\nsystem\n(S'echo hello world'\ntR.") hello world 0 In this example, the unpickler imports the os.system() function and then applies the string argument “echo hello world”. Although this example is inoffensive, it is not difficult to imagine one that could damage your system. For this reason, you may want to control what gets unpickled by customizing Unpickler.find_class(). Despite what its name suggests, Unpickler.find_class() is called whenever a global (i.e., a class or a function) is requested. Thus it is possible to either completely forbid globals or restrict them to a safe subset. Here is an example of an unpickler allowing only a few safe classes from the builtins module to be loaded: import builtins import io import pickle safe_builtins = { 'range', 'complex', 'set', 'frozenset', 'slice', } class RestrictedUnpickler(pickle.Unpickler): def find_class(self, module, name): # Only allow safe classes from builtins. if module == "builtins" and name in safe_builtins: return getattr(builtins, name) # Forbid everything else. raise pickle.UnpicklingError("global '%s.%s' is forbidden" % (module, name)) def restricted_loads(s): """Helper function analogous to pickle.loads().""" return RestrictedUnpickler(io.BytesIO(s)).load() A sample usage of our unpickler working as intended: >>> restricted_loads(pickle.dumps([1, 2, range(15)])) [1, 2, range(0, 15)] >>> restricted_loads(b"cos\nsystem\n(S'echo hello world'\ntR.") Traceback (most recent call last): ... pickle.UnpicklingError: global 'os.system' is forbidden >>> restricted_loads(b'cbuiltins\neval\n' ... b'(S\'getattr(__import__("os"), "system")' ... 
b'("echo hello world")\'\ntR.') Traceback (most recent call last): ... pickle.UnpicklingError: global 'builtins.eval' is forbidden As our examples shows, you have to be careful with what you allow to be unpickled. Therefore if security is a concern, you may want to consider alternatives such as the marshalling API in xmlrpc.client or third-party solutions. Performance Recent versions of the pickle protocol (from protocol 2 and upwards) feature efficient binary encodings for several common features and built-in types. Also, the pickle module has a transparent optimizer written in C. Examples For the simplest code, use the dump() and load() functions. import pickle # An arbitrary collection of objects supported by pickle. data = { 'a': [1, 2.0, 3, 4+6j], 'b': ("character string", b"byte string"), 'c': {None, True, False} } with open('data.pickle', 'wb') as f: # Pickle the 'data' dictionary using the highest protocol available. pickle.dump(data, f, pickle.HIGHEST_PROTOCOL) The following example reads the resulting pickled data. import pickle with open('data.pickle', 'rb') as f: # The protocol version used is detected automatically, so we do not # have to specify it. data = pickle.load(f) See also Module copyreg Pickle interface constructor registration for extension types. Module pickletools Tools for working with and analyzing pickled data. Module shelve Indexed databases of objects; uses pickle. Module copy Shallow and deep object copying. Module marshal High-performance serialization of built-in types. Footnotes 1 Don’t confuse this with the marshal module 2 This is why lambda functions cannot be pickled: all lambda functions share the same name: <lambda>. 3 The exception raised will likely be an ImportError or an AttributeError but it could be something else. 4 The copy module uses this protocol for shallow and deep copying operations. 5 The limitation on alphanumeric characters is due to the fact the persistent IDs, in protocol 0, are delimited by the newline character. Therefore if any kind of newline characters occurs in persistent IDs, the resulting pickle will become unreadable.
python.library.pickle
pickle.DEFAULT_PROTOCOL An integer, the default protocol version used for pickling. May be less than HIGHEST_PROTOCOL. Currently the default protocol is 4, first introduced in Python 3.4 and incompatible with previous versions. Changed in version 3.0: The default protocol is 3. Changed in version 3.8: The default protocol is 4.
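A quick way to inspect both protocol constants on the running interpreter; the exact numbers depend on the Python version (this sketch assumes Python 3.8 or 3.9):

import pickle

print(pickle.DEFAULT_PROTOCOL)   # 4 on Python 3.8/3.9
print(pickle.HIGHEST_PROTOCOL)   # 5 on Python 3.8/3.9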
python.library.pickle#pickle.DEFAULT_PROTOCOL
pickle.dump(obj, file, protocol=None, *, fix_imports=True, buffer_callback=None) Write the pickled representation of the object obj to the open file object file. This is equivalent to Pickler(file, protocol).dump(obj). Arguments file, protocol, fix_imports and buffer_callback have the same meaning as in the Pickler constructor. Changed in version 3.8: The buffer_callback argument was added.
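A minimal sketch of dump() writing to an on-disk file; the file name record.pickle is arbitrary, and the file must be opened for binary writing:

import pickle

record = {"id": 7, "tags": ["a", "b"]}
with open("record.pickle", "wb") as f:
    # protocol=None (the default) means DEFAULT_PROTOCOL is used
    pickle.dump(record, f)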
python.library.pickle#pickle.dump
pickle.dumps(obj, protocol=None, *, fix_imports=True, buffer_callback=None) Return the pickled representation of the object obj as a bytes object, instead of writing it to a file. Arguments protocol, fix_imports and buffer_callback have the same meaning as in the Pickler constructor. Changed in version 3.8: The buffer_callback argument was added.
python.library.pickle#pickle.dumps
pickle.HIGHEST_PROTOCOL An integer, the highest protocol version available. This value can be passed as a protocol value to functions dump() and dumps() as well as the Pickler constructor.
python.library.pickle#pickle.HIGHEST_PROTOCOL
pickle.load(file, *, fix_imports=True, encoding="ASCII", errors="strict", buffers=None) Read the pickled representation of an object from the open file object file and return the reconstituted object hierarchy specified therein. This is equivalent to Unpickler(file).load(). The protocol version of the pickle is detected automatically, so no protocol argument is needed. Bytes past the pickled representation of the object are ignored. Arguments file, fix_imports, encoding, errors and buffers have the same meaning as in the Unpickler constructor. Changed in version 3.8: The buffers argument was added.
python.library.pickle#pickle.load
pickle.loads(data, /, *, fix_imports=True, encoding="ASCII", errors="strict", buffers=None) Return the reconstituted object hierarchy of the pickled representation data of an object. data must be a bytes-like object. The protocol version of the pickle is detected automatically, so no protocol argument is needed. Bytes past the pickled representation of the object are ignored. Arguments fix_imports, encoding, errors and buffers have the same meaning as in the Unpickler constructor. Changed in version 3.8: The buffers argument was added.
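A minimal in-memory round trip using dumps() together with loads(); no file is involved:

import pickle

payload = pickle.dumps([1, "two", 3.0])   # bytes object
restored = pickle.loads(payload)
assert restored == [1, "two", 3.0]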
python.library.pickle#pickle.loads
class pickle.PickleBuffer(buffer) A wrapper for a buffer representing picklable data. buffer must be a buffer-providing object, such as a bytes-like object or an N-dimensional array. PickleBuffer is itself a buffer provider, therefore it is possible to pass it to other APIs expecting a buffer-providing object, such as memoryview. PickleBuffer objects can only be serialized using pickle protocol 5 or higher. They are eligible for out-of-band serialization. New in version 3.8. raw() Return a memoryview of the memory area underlying this buffer. The returned object is a one-dimensional, C-contiguous memoryview with format B (unsigned bytes). BufferError is raised if the buffer is neither C- nor Fortran-contiguous. release() Release the underlying buffer exposed by the PickleBuffer object.
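A small sketch of PickleBuffer together with its raw() and release() methods, assuming Python 3.8 or later:

import pickle

buf = pickle.PickleBuffer(b"binary payload")
with buf.raw() as view:              # one-dimensional, C-contiguous memoryview, format 'B'
    print(view.nbytes, view.format)  # 14 B
buf.release()                        # explicitly release the underlying buffer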
python.library.pickle#pickle.PickleBuffer
raw() Return a memoryview of the memory area underlying this buffer. The returned object is a one-dimensional, C-contiguous memoryview with format B (unsigned bytes). BufferError is raised if the buffer is neither C- nor Fortran-contiguous.
python.library.pickle#pickle.PickleBuffer.raw
release() Release the underlying buffer exposed by the PickleBuffer object.
python.library.pickle#pickle.PickleBuffer.release
exception pickle.PickleError Common base class for the other pickling exceptions. It inherits Exception.
python.library.pickle#pickle.PickleError
class pickle.Pickler(file, protocol=None, *, fix_imports=True, buffer_callback=None) This takes a binary file for writing a pickle data stream. The optional protocol argument, an integer, tells the pickler to use the given protocol; supported protocols are 0 to HIGHEST_PROTOCOL. If not specified, the default is DEFAULT_PROTOCOL. If a negative number is specified, HIGHEST_PROTOCOL is selected. The file argument must have a write() method that accepts a single bytes argument. It can thus be an on-disk file opened for binary writing, an io.BytesIO instance, or any other custom object that meets this interface. If fix_imports is true and protocol is less than 3, pickle will try to map the new Python 3 names to the old module names used in Python 2, so that the pickle data stream is readable with Python 2. If buffer_callback is None (the default), buffer views are serialized into file as part of the pickle stream. If buffer_callback is not None, then it can be called any number of times with a buffer view. If the callback returns a false value (such as None), the given buffer is out-of-band; otherwise the buffer is serialized in-band, i.e. inside the pickle stream. It is an error if buffer_callback is not None and protocol is None or smaller than 5. Changed in version 3.8: The buffer_callback argument was added. dump(obj) Write the pickled representation of obj to the open file object given in the constructor. persistent_id(obj) Do nothing by default. This exists so a subclass can override it. If persistent_id() returns None, obj is pickled as usual. Any other value causes Pickler to emit the returned value as a persistent ID for obj. The meaning of this persistent ID should be defined by Unpickler.persistent_load(). Note that the value returned by persistent_id() cannot itself have a persistent ID. See Persistence of External Objects for details and examples of uses. dispatch_table A pickler object’s dispatch table is a registry of reduction functions of the kind which can be declared using copyreg.pickle(). It is a mapping whose keys are classes and whose values are reduction functions. A reduction function takes a single argument of the associated class and should conform to the same interface as a __reduce__() method. By default, a pickler object will not have a dispatch_table attribute, and it will instead use the global dispatch table managed by the copyreg module. However, to customize the pickling for a specific pickler object one can set the dispatch_table attribute to a dict-like object. Alternatively, if a subclass of Pickler has a dispatch_table attribute then this will be used as the default dispatch table for instances of that class. See Dispatch Tables for usage examples. New in version 3.3. reducer_override(self, obj) Special reducer that can be defined in Pickler subclasses. This method has priority over any reducer in the dispatch_table. It should conform to the same interface as a __reduce__() method, and can optionally return NotImplemented to fallback on dispatch_table-registered reducers to pickle obj. For a detailed example, see Custom Reduction for Types, Functions, and Other Objects. New in version 3.8. fast Deprecated. Enable fast mode if set to a true value. The fast mode disables the usage of memo, therefore speeding the pickling process by not generating superfluous PUT opcodes. It should not be used with self-referential objects, doing otherwise will cause Pickler to recurse infinitely. Use pickletools.optimize() if you need more compact pickles.
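A minimal sketch that drives the Pickler and Unpickler classes directly over an in-memory stream; for this simple case it is equivalent to dumps() followed by loads():

import io
import pickle

stream = io.BytesIO()
pickle.Pickler(stream, protocol=pickle.HIGHEST_PROTOCOL).dump({"answer": 42})
stream.seek(0)
print(pickle.Unpickler(stream).load())   # {'answer': 42}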
python.library.pickle#pickle.Pickler
dispatch_table A pickler object’s dispatch table is a registry of reduction functions of the kind which can be declared using copyreg.pickle(). It is a mapping whose keys are classes and whose values are reduction functions. A reduction function takes a single argument of the associated class and should conform to the same interface as a __reduce__() method. By default, a pickler object will not have a dispatch_table attribute, and it will instead use the global dispatch table managed by the copyreg module. However, to customize the pickling for a specific pickler object one can set the dispatch_table attribute to a dict-like object. Alternatively, if a subclass of Pickler has a dispatch_table attribute then this will be used as the default dispatch table for instances of that class. See Dispatch Tables for usage examples. New in version 3.3.
python.library.pickle#pickle.Pickler.dispatch_table
dump(obj) Write the pickled representation of obj to the open file object given in the constructor.
python.library.pickle#pickle.Pickler.dump
fast Deprecated. Enable fast mode if set to a true value. The fast mode disables the usage of memo, therefore speeding the pickling process by not generating superfluous PUT opcodes. It should not be used with self-referential objects, doing otherwise will cause Pickler to recurse infinitely. Use pickletools.optimize() if you need more compact pickles.
python.library.pickle#pickle.Pickler.fast
persistent_id(obj) Do nothing by default. This exists so a subclass can override it. If persistent_id() returns None, obj is pickled as usual. Any other value causes Pickler to emit the returned value as a persistent ID for obj. The meaning of this persistent ID should be defined by Unpickler.persistent_load(). Note that the value returned by persistent_id() cannot itself have a persistent ID. See Persistence of External Objects for details and examples of uses.
python.library.pickle#pickle.Pickler.persistent_id
reducer_override(self, obj) Special reducer that can be defined in Pickler subclasses. This method has priority over any reducer in the dispatch_table. It should conform to the same interface as a __reduce__() method, and can optionally return NotImplemented to fallback on dispatch_table-registered reducers to pickle obj. For a detailed example, see Custom Reduction for Types, Functions, and Other Objects. New in version 3.8.
python.library.pickle#pickle.Pickler.reducer_override
exception pickle.PicklingError Error raised when an unpicklable object is encountered by Pickler. It inherits PickleError. Refer to What can be pickled and unpickled? to learn what kinds of objects can be pickled.
python.library.pickle#pickle.PicklingError
class pickle.Unpickler(file, *, fix_imports=True, encoding="ASCII", errors="strict", buffers=None) This takes a binary file for reading a pickle data stream. The protocol version of the pickle is detected automatically, so no protocol argument is needed. The argument file must have three methods, a read() method that takes an integer argument, a readinto() method that takes a buffer argument and a readline() method that requires no arguments, as in the io.BufferedIOBase interface. Thus file can be an on-disk file opened for binary reading, an io.BytesIO object, or any other custom object that meets this interface. The optional arguments fix_imports, encoding and errors are used to control compatibility support for pickle stream generated by Python 2. If fix_imports is true, pickle will try to map the old Python 2 names to the new names used in Python 3. The encoding and errors tell pickle how to decode 8-bit string instances pickled by Python 2; these default to ‘ASCII’ and ‘strict’, respectively. The encoding can be ‘bytes’ to read these 8-bit string instances as bytes objects. Using encoding='latin1' is required for unpickling NumPy arrays and instances of datetime, date and time pickled by Python 2. If buffers is None (the default), then all data necessary for deserialization must be contained in the pickle stream. This means that the buffer_callback argument was None when a Pickler was instantiated (or when dump() or dumps() was called). If buffers is not None, it should be an iterable of buffer-enabled objects that is consumed each time the pickle stream references an out-of-band buffer view. Such buffers have been given in order to the buffer_callback of a Pickler object. Changed in version 3.8: The buffers argument was added. load() Read the pickled representation of an object from the open file object given in the constructor, and return the reconstituted object hierarchy specified therein. Bytes past the pickled representation of the object are ignored. persistent_load(pid) Raise an UnpicklingError by default. If defined, persistent_load() should return the object specified by the persistent ID pid. If an invalid persistent ID is encountered, an UnpicklingError should be raised. See Persistence of External Objects for details and examples of uses. find_class(module, name) Import module if necessary and return the object called name from it, where the module and name arguments are str objects. Note, unlike its name suggests, find_class() is also used for finding functions. Subclasses may override this to gain control over what type of objects and how they can be loaded, potentially reducing security risks. Refer to Restricting Globals for details. Raises an auditing event pickle.find_class with arguments module, name.
python.library.pickle#pickle.Unpickler
find_class(module, name) Import module if necessary and return the object called name from it, where the module and name arguments are str objects. Note, unlike its name suggests, find_class() is also used for finding functions. Subclasses may override this to gain control over what type of objects and how they can be loaded, potentially reducing security risks. Refer to Restricting Globals for details. Raises an auditing event pickle.find_class with arguments module, name.
python.library.pickle#pickle.Unpickler.find_class
load() Read the pickled representation of an object from the open file object given in the constructor, and return the reconstituted object hierarchy specified therein. Bytes past the pickled representation of the object are ignored.
python.library.pickle#pickle.Unpickler.load
persistent_load(pid) Raise an UnpicklingError by default. If defined, persistent_load() should return the object specified by the persistent ID pid. If an invalid persistent ID is encountered, an UnpicklingError should be raised. See Persistence of External Objects for details and examples of uses.
python.library.pickle#pickle.Unpickler.persistent_load
exception pickle.UnpicklingError Error raised when there is a problem unpickling an object, such as a data corruption or a security violation. It inherits PickleError. Note that other exceptions may also be raised during unpickling, including (but not necessarily limited to) AttributeError, EOFError, ImportError, and IndexError.
python.library.pickle#pickle.UnpicklingError
pickletools — Tools for pickle developers Source code: Lib/pickletools.py This module contains various constants relating to the intimate details of the pickle module, some lengthy comments about the implementation, and a few useful functions for analyzing pickled data. The contents of this module are useful for Python core developers who are working on the pickle module; ordinary users of the pickle module probably won’t find the pickletools module relevant. Command line usage New in version 3.2. When invoked from the command line, python -m pickletools will disassemble the contents of one or more pickle files. Note that if you want to see the Python object stored in the pickle rather than the details of the pickle format, you may want to use -m pickle instead. However, when the pickle file that you want to examine comes from an untrusted source, -m pickletools is a safer option because it does not execute pickle bytecode. For example, with a tuple (1, 2) pickled in file x.pickle: $ python -m pickle x.pickle (1, 2) $ python -m pickletools x.pickle 0: \x80 PROTO 3 2: K BININT1 1 4: K BININT1 2 6: \x86 TUPLE2 7: q BINPUT 0 9: . STOP highest protocol among opcodes = 2 Command line options -a, --annotate Annotate each line with a short opcode description. -o, --output=<file> Name of a file where the output should be written. -l, --indentlevel=<num> The number of blanks by which to indent a new MARK level. -m, --memo When multiple objects are disassembled, preserve memo between disassemblies. -p, --preamble=<preamble> When more than one pickle file is specified, print the given preamble before each disassembly. Programmatic Interface pickletools.dis(pickle, out=None, memo=None, indentlevel=4, annotate=0) Outputs a symbolic disassembly of the pickle to the file-like object out, defaulting to sys.stdout. pickle can be a string or a file-like object. memo can be a Python dictionary that will be used as the pickle’s memo; it can be used to perform disassemblies across multiple pickles created by the same pickler. Successive levels, indicated by MARK opcodes in the stream, are indented by indentlevel spaces. If a nonzero value is given to annotate, each opcode in the output is annotated with a short description. The value of annotate is used as a hint for the column where annotation should start. New in version 3.2: The annotate argument. pickletools.genops(pickle) Provides an iterator over all of the opcodes in a pickle, returning a sequence of (opcode, arg, pos) triples. opcode is an instance of an OpcodeInfo class; arg is the decoded value, as a Python object, of the opcode’s argument; pos is the position at which this opcode is located. pickle can be a string or a file-like object. pickletools.optimize(picklestring) Returns a new equivalent pickle string after eliminating unused PUT opcodes. The optimized pickle is shorter, takes less transmission time, requires less storage space, and unpickles more efficiently.
python.library.pickletools
pickletools.dis(pickle, out=None, memo=None, indentlevel=4, annotate=0) Outputs a symbolic disassembly of the pickle to the file-like object out, defaulting to sys.stdout. pickle can be a string or a file-like object. memo can be a Python dictionary that will be used as the pickle’s memo; it can be used to perform disassemblies across multiple pickles created by the same pickler. Successive levels, indicated by MARK opcodes in the stream, are indented by indentlevel spaces. If a nonzero value is given to annotate, each opcode in the output is annotated with a short description. The value of annotate is used as a hint for the column where annotation should start. New in version 3.2: The annotate argument.
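A minimal sketch of calling dis() programmatically on an in-memory pickle; the output has the same form as the command line example above:

import pickle
import pickletools

data = pickle.dumps((1, 2))
pickletools.dis(data, annotate=1)   # symbolic disassembly written to sys.stdout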
python.library.pickletools#pickletools.dis
pickletools.genops(pickle) Provides an iterator over all of the opcodes in a pickle, returning a sequence of (opcode, arg, pos) triples. opcode is an instance of an OpcodeInfo class; arg is the decoded value, as a Python object, of the opcode’s argument; pos is the position at which this opcode is located. pickle can be a string or a file-like object.
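A small sketch that iterates over the opcodes of a freshly created pickle:

import pickle
import pickletools

for opcode, arg, pos in pickletools.genops(pickle.dumps([1, 2, 3])):
    print(pos, opcode.name, arg)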
python.library.pickletools#pickletools.genops
pickletools.optimize(picklestring) Returns a new equivalent pickle string after eliminating unused PUT opcodes. The optimized pickle is shorter, takes less transmission time, requires less storage space, and unpickles more efficiently.
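A small sketch comparing a pickle before and after optimize(); the optimized stream decodes to the same object graph and is never longer than the original:

import pickle
import pickletools

data = pickle.dumps([["shared"], ["shared"]])
optimized = pickletools.optimize(data)
print(len(optimized) <= len(data))                     # True: unused PUT opcodes removed
print(pickle.loads(optimized) == pickle.loads(data))   # True: same object graph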
python.library.pickletools#pickletools.optimize
pipes — Interface to shell pipelines Source code: Lib/pipes.py The pipes module defines a class to abstract the concept of a pipeline — a sequence of converters from one file to another. Because the module uses /bin/sh command lines, a POSIX or compatible shell for os.system() and os.popen() is required. The pipes module defines the following class: class pipes.Template An abstraction of a pipeline. Example: >>> import pipes >>> t = pipes.Template() >>> t.append('tr a-z A-Z', '--') >>> f = t.open('pipefile', 'w') >>> f.write('hello world') >>> f.close() >>> open('pipefile').read() 'HELLO WORLD' Template Objects Template objects provide the following methods: Template.reset() Restore a pipeline template to its initial state. Template.clone() Return a new, equivalent, pipeline template. Template.debug(flag) If flag is true, turn debugging on. Otherwise, turn debugging off. When debugging is on, commands to be executed are printed, and the shell is given the set -x command to be more verbose. Template.append(cmd, kind) Append a new action at the end. The cmd variable must be a valid Bourne shell command. The kind variable consists of two letters. The first letter can be either of '-' (which means the command reads its standard input), 'f' (which means the command reads a given file on the command line) or '.' (which means the command reads no input, and hence must be first.) Similarly, the second letter can be either of '-' (which means the command writes to standard output), 'f' (which means the command writes a file on the command line) or '.' (which means the command does not write anything, and hence must be last.) Template.prepend(cmd, kind) Add a new action at the beginning. See append() for explanations of the arguments. Template.open(file, mode) Return a file-like object, open to file, but read from or written to by the pipeline. Note that only one of 'r', 'w' may be given. Template.copy(infile, outfile) Copy infile to outfile through the pipe.
python.library.pipes
class pipes.Template An abstraction of a pipeline.
python.library.pipes#pipes.Template
Template.append(cmd, kind) Append a new action at the end. The cmd variable must be a valid Bourne shell command. The kind variable consists of two letters. The first letter can be either of '-' (which means the command reads its standard input), 'f' (which means the command reads a given file on the command line) or '.' (which means the command reads no input, and hence must be first.) Similarly, the second letter can be either of '-' (which means the command writes to standard output), 'f' (which means the command writes a file on the command line) or '.' (which means the command does not write anything, and hence must be last.)
python.library.pipes#pipes.Template.append
Template.clone() Return a new, equivalent, pipeline template.
python.library.pipes#pipes.Template.clone
Template.copy(infile, outfile) Copy infile to outfile through the pipe.
python.library.pipes#pipes.Template.copy
Template.debug(flag) If flag is true, turn debugging on. Otherwise, turn debugging off. When debugging is on, commands to be executed are printed, and the shell is given the set -x command to be more verbose.
python.library.pipes#pipes.Template.debug
Template.open(file, mode) Return a file-like object, open to file, but read from or written to by the pipeline. Note that only one of 'r', 'w' may be given.
python.library.pipes#pipes.Template.open
Template.prepend(cmd, kind) Add a new action at the beginning. See append() for explanations of the arguments.
python.library.pipes#pipes.Template.prepend
Template.reset() Restore a pipeline template to its initial state.
python.library.pipes#pipes.Template.reset
pkgutil — Package extension utility Source code: Lib/pkgutil.py This module provides utilities for the import system, in particular package support. class pkgutil.ModuleInfo(module_finder, name, ispkg) A namedtuple that holds a brief summary of a module’s info. New in version 3.6. pkgutil.extend_path(path, name) Extend the search path for the modules which comprise a package. Intended use is to place the following code in a package’s __init__.py: from pkgutil import extend_path __path__ = extend_path(__path__, __name__) This will add to the package’s __path__ all subdirectories of directories on sys.path named after the package. This is useful if one wants to distribute different parts of a single logical package as multiple directories. It also looks for *.pkg files beginning where * matches the name argument. This feature is similar to *.pth files (see the site module for more information), except that it doesn’t special-case lines starting with import. A *.pkg file is trusted at face value: apart from checking for duplicates, all entries found in a *.pkg file are added to the path, regardless of whether they exist on the filesystem. (This is a feature.) If the input path is not a list (as is the case for frozen packages) it is returned unchanged. The input path is not modified; an extended copy is returned. Items are only appended to the copy at the end. It is assumed that sys.path is a sequence. Items of sys.path that are not strings referring to existing directories are ignored. Unicode items on sys.path that cause errors when used as filenames may cause this function to raise an exception (in line with os.path.isdir() behavior). class pkgutil.ImpImporter(dirname=None) PEP 302 Finder that wraps Python’s “classic” import algorithm. If dirname is a string, a PEP 302 finder is created that searches that directory. If dirname is None, a PEP 302 finder is created that searches the current sys.path, plus any modules that are frozen or built-in. Note that ImpImporter does not currently support being used by placement on sys.meta_path. Deprecated since version 3.3: This emulation is no longer needed, as the standard import mechanism is now fully PEP 302 compliant and available in importlib. class pkgutil.ImpLoader(fullname, file, filename, etc) Loader that wraps Python’s “classic” import algorithm. Deprecated since version 3.3: This emulation is no longer needed, as the standard import mechanism is now fully PEP 302 compliant and available in importlib. pkgutil.find_loader(fullname) Retrieve a module loader for the given fullname. This is a backwards compatibility wrapper around importlib.util.find_spec() that converts most failures to ImportError and only returns the loader rather than the full ModuleSpec. Changed in version 3.3: Updated to be based directly on importlib rather than relying on the package internal PEP 302 import emulation. Changed in version 3.4: Updated to be based on PEP 451 pkgutil.get_importer(path_item) Retrieve a finder for the given path_item. The returned finder is cached in sys.path_importer_cache if it was newly created by a path hook. The cache (or part of it) can be cleared manually if a rescan of sys.path_hooks is necessary. Changed in version 3.3: Updated to be based directly on importlib rather than relying on the package internal PEP 302 import emulation. pkgutil.get_loader(module_or_name) Get a loader object for module_or_name. 
If the module or package is accessible via the normal import mechanism, a wrapper around the relevant part of that machinery is returned. Returns None if the module cannot be found or imported. If the named module is not already imported, its containing package (if any) is imported, in order to establish the package __path__. Changed in version 3.3: Updated to be based directly on importlib rather than relying on the package internal PEP 302 import emulation. Changed in version 3.4: Updated to be based on PEP 451 pkgutil.iter_importers(fullname='') Yield finder objects for the given module name. If fullname contains a ‘.’, the finders will be for the package containing fullname, otherwise they will be all registered top level finders (i.e. those on both sys.meta_path and sys.path_hooks). If the named module is in a package, that package is imported as a side effect of invoking this function. If no module name is specified, all top level finders are produced. Changed in version 3.3: Updated to be based directly on importlib rather than relying on the package internal PEP 302 import emulation. pkgutil.iter_modules(path=None, prefix='') Yields ModuleInfo for all submodules on path, or, if path is None, all top-level modules on sys.path. path should be either None or a list of paths to look for modules in. prefix is a string to output on the front of every module name on output. Note Only works for a finder which defines an iter_modules() method. This interface is non-standard, so the module also provides implementations for importlib.machinery.FileFinder and zipimport.zipimporter. Changed in version 3.3: Updated to be based directly on importlib rather than relying on the package internal PEP 302 import emulation. pkgutil.walk_packages(path=None, prefix='', onerror=None) Yields ModuleInfo for all modules recursively on path, or, if path is None, all accessible modules. path should be either None or a list of paths to look for modules in. prefix is a string to output on the front of every module name on output. Note that this function must import all packages (not all modules!) on the given path, in order to access the __path__ attribute to find submodules. onerror is a function which gets called with one argument (the name of the package which was being imported) if any exception occurs while trying to import a package. If no onerror function is supplied, ImportErrors are caught and ignored, while all other exceptions are propagated, terminating the search. Examples: # list all modules python can access walk_packages() # list all submodules of ctypes walk_packages(ctypes.__path__, ctypes.__name__ + '.') Note Only works for a finder which defines an iter_modules() method. This interface is non-standard, so the module also provides implementations for importlib.machinery.FileFinder and zipimport.zipimporter. Changed in version 3.3: Updated to be based directly on importlib rather than relying on the package internal PEP 302 import emulation. pkgutil.get_data(package, resource) Get a resource from a package. This is a wrapper for the loader get_data API. The package argument should be the name of a package, in standard module format (foo.bar). The resource argument should be in the form of a relative filename, using / as the path separator. The parent directory name .. is not allowed, and nor is a rooted name (starting with a /). The function returns a binary string that is the contents of the specified resource. 
For packages located in the filesystem, which have already been imported, this is the rough equivalent of: d = os.path.dirname(sys.modules[package].__file__) data = open(os.path.join(d, resource), 'rb').read() If the package cannot be located or loaded, or it uses a loader which does not support get_data, then None is returned. In particular, the loader for namespace packages does not support get_data. pkgutil.resolve_name(name) Resolve a name to an object. This functionality is used in numerous places in the standard library (see bpo-12915) - and equivalent functionality is also in widely used third-party packages such as setuptools, Django and Pyramid. It is expected that name will be a string in one of the following formats, where W is shorthand for a valid Python identifier and dot stands for a literal period in these pseudo-regexes: W(.W)* W(.W)*:(W(.W)*)? The first form is intended for backward compatibility only. It assumes that some part of the dotted name is a package, and the rest is an object somewhere within that package, possibly nested inside other objects. Because the place where the package stops and the object hierarchy starts can’t be inferred by inspection, repeated attempts to import must be done with this form. In the second form, the caller makes the division point clear through the provision of a single colon: the dotted name to the left of the colon is a package to be imported, and the dotted name to the right is the object hierarchy within that package. Only one import is needed in this form. If it ends with the colon, then a module object is returned. The function will return an object (which might be a module), or raise one of the following exceptions: ValueError – if name isn’t in a recognised format. ImportError – if an import failed when it shouldn’t have. AttributeError – If a failure occurred when traversing the object hierarchy within the imported package to get to the desired object. New in version 3.9.
python.library.pkgutil
pkgutil.extend_path(path, name) Extend the search path for the modules which comprise a package. Intended use is to place the following code in a package’s __init__.py: from pkgutil import extend_path __path__ = extend_path(__path__, __name__) This will add to the package’s __path__ all subdirectories of directories on sys.path named after the package. This is useful if one wants to distribute different parts of a single logical package as multiple directories. It also looks for *.pkg files beginning where * matches the name argument. This feature is similar to *.pth files (see the site module for more information), except that it doesn’t special-case lines starting with import. A *.pkg file is trusted at face value: apart from checking for duplicates, all entries found in a *.pkg file are added to the path, regardless of whether they exist on the filesystem. (This is a feature.) If the input path is not a list (as is the case for frozen packages) it is returned unchanged. The input path is not modified; an extended copy is returned. Items are only appended to the copy at the end. It is assumed that sys.path is a sequence. Items of sys.path that are not strings referring to existing directories are ignored. Unicode items on sys.path that cause errors when used as filenames may cause this function to raise an exception (in line with os.path.isdir() behavior).
python.library.pkgutil#pkgutil.extend_path
pkgutil.find_loader(fullname) Retrieve a module loader for the given fullname. This is a backwards compatibility wrapper around importlib.util.find_spec() that converts most failures to ImportError and only returns the loader rather than the full ModuleSpec. Changed in version 3.3: Updated to be based directly on importlib rather than relying on the package internal PEP 302 import emulation. Changed in version 3.4: Updated to be based on PEP 451
python.library.pkgutil#pkgutil.find_loader
pkgutil.get_data(package, resource) Get a resource from a package. This is a wrapper for the loader get_data API. The package argument should be the name of a package, in standard module format (foo.bar). The resource argument should be in the form of a relative filename, using / as the path separator. The parent directory name .. is not allowed, and nor is a rooted name (starting with a /). The function returns a binary string that is the contents of the specified resource. For packages located in the filesystem, which have already been imported, this is the rough equivalent of: d = os.path.dirname(sys.modules[package].__file__) data = open(os.path.join(d, resource), 'rb').read() If the package cannot be located or loaded, or it uses a loader which does not support get_data, then None is returned. In particular, the loader for namespace packages does not support get_data.
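A minimal sketch of get_data(); the package name mypackage and the resource path data/greeting.txt are hypothetical placeholders:

import pkgutil

blob = pkgutil.get_data("mypackage", "data/greeting.txt")
if blob is not None:               # None if the package or resource cannot be found
    print(blob.decode("utf-8"))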
python.library.pkgutil#pkgutil.get_data
pkgutil.get_importer(path_item) Retrieve a finder for the given path_item. The returned finder is cached in sys.path_importer_cache if it was newly created by a path hook. The cache (or part of it) can be cleared manually if a rescan of sys.path_hooks is necessary. Changed in version 3.3: Updated to be based directly on importlib rather than relying on the package internal PEP 302 import emulation.
python.library.pkgutil#pkgutil.get_importer
pkgutil.get_loader(module_or_name) Get a loader object for module_or_name. If the module or package is accessible via the normal import mechanism, a wrapper around the relevant part of that machinery is returned. Returns None if the module cannot be found or imported. If the named module is not already imported, its containing package (if any) is imported, in order to establish the package __path__. Changed in version 3.3: Updated to be based directly on importlib rather than relying on the package internal PEP 302 import emulation. Changed in version 3.4: Updated to be based on PEP 451
python.library.pkgutil#pkgutil.get_loader
class pkgutil.ImpImporter(dirname=None) PEP 302 Finder that wraps Python’s “classic” import algorithm. If dirname is a string, a PEP 302 finder is created that searches that directory. If dirname is None, a PEP 302 finder is created that searches the current sys.path, plus any modules that are frozen or built-in. Note that ImpImporter does not currently support being used by placement on sys.meta_path. Deprecated since version 3.3: This emulation is no longer needed, as the standard import mechanism is now fully PEP 302 compliant and available in importlib.
python.library.pkgutil#pkgutil.ImpImporter
class pkgutil.ImpLoader(fullname, file, filename, etc) Loader that wraps Python’s “classic” import algorithm. Deprecated since version 3.3: This emulation is no longer needed, as the standard import mechanism is now fully PEP 302 compliant and available in importlib.
python.library.pkgutil#pkgutil.ImpLoader
pkgutil.iter_importers(fullname='') Yield finder objects for the given module name. If fullname contains a ‘.’, the finders will be for the package containing fullname, otherwise they will be all registered top level finders (i.e. those on both sys.meta_path and sys.path_hooks). If the named module is in a package, that package is imported as a side effect of invoking this function. If no module name is specified, all top level finders are produced. Changed in version 3.3: Updated to be based directly on importlib rather than relying on the package internal PEP 302 import emulation.
python.library.pkgutil#pkgutil.iter_importers
pkgutil.iter_modules(path=None, prefix='') Yields ModuleInfo for all submodules on path, or, if path is None, all top-level modules on sys.path. path should be either None or a list of paths to look for modules in. prefix is a string to output on the front of every module name on output. Note Only works for a finder which defines an iter_modules() method. This interface is non-standard, so the module also provides implementations for importlib.machinery.FileFinder and zipimport.zipimporter. Changed in version 3.3: Updated to be based directly on importlib rather than relying on the package internal PEP 302 import emulation.
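A small sketch that lists the top-level modules and packages importable from sys.path:

import pkgutil

for info in pkgutil.iter_modules():
    # info is a ModuleInfo namedtuple: (module_finder, name, ispkg)
    kind = "package" if info.ispkg else "module"
    print(kind, info.name)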
python.library.pkgutil#pkgutil.iter_modules
class pkgutil.ModuleInfo(module_finder, name, ispkg) A namedtuple that holds a brief summary of a module’s info. New in version 3.6.
python.library.pkgutil#pkgutil.ModuleInfo
pkgutil.resolve_name(name) Resolve a name to an object. This functionality is used in numerous places in the standard library (see bpo-12915) - and equivalent functionality is also in widely used third-party packages such as setuptools, Django and Pyramid. It is expected that name will be a string in one of the following formats, where W is shorthand for a valid Python identifier and dot stands for a literal period in these pseudo-regexes: W(.W)* W(.W)*:(W(.W)*)? The first form is intended for backward compatibility only. It assumes that some part of the dotted name is a package, and the rest is an object somewhere within that package, possibly nested inside other objects. Because the place where the package stops and the object hierarchy starts can’t be inferred by inspection, repeated attempts to import must be done with this form. In the second form, the caller makes the division point clear through the provision of a single colon: the dotted name to the left of the colon is a package to be imported, and the dotted name to the right is the object hierarchy within that package. Only one import is needed in this form. If it ends with the colon, then a module object is returned. The function will return an object (which might be a module), or raise one of the following exceptions: ValueError – if name isn’t in a recognised format. ImportError – if an import failed when it shouldn’t have. AttributeError – If a failure occurred when traversing the object hierarchy within the imported package to get to the desired object. New in version 3.9.
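Two small sketches of resolve_name(), one for each supported format:

import pkgutil

# Second form: "os.path" is imported, then the "join" attribute is looked up.
join = pkgutil.resolve_name("os.path:join")
print(join("a", "b"))

# First form: the package/attribute split is discovered by repeated import attempts.
sep = pkgutil.resolve_name("os.sep")
print(sep)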
python.library.pkgutil#pkgutil.resolve_name
pkgutil.walk_packages(path=None, prefix='', onerror=None) Yields ModuleInfo for all modules recursively on path, or, if path is None, all accessible modules. path should be either None or a list of paths to look for modules in. prefix is a string to output on the front of every module name on output. Note that this function must import all packages (not all modules!) on the given path, in order to access the __path__ attribute to find submodules. onerror is a function which gets called with one argument (the name of the package which was being imported) if any exception occurs while trying to import a package. If no onerror function is supplied, ImportErrors are caught and ignored, while all other exceptions are propagated, terminating the search. Examples: # list all modules python can access walk_packages() # list all submodules of ctypes walk_packages(ctypes.__path__, ctypes.__name__ + '.') Note Only works for a finder which defines an iter_modules() method. This interface is non-standard, so the module also provides implementations for importlib.machinery.FileFinder and zipimport.zipimporter. Changed in version 3.3: Updated to be based directly on importlib rather than relying on the package internal PEP 302 import emulation.
python.library.pkgutil#pkgutil.walk_packages
platform — Access to underlying platform’s identifying data Source code: Lib/platform.py Note Specific platforms listed alphabetically, with Linux included in the Unix section. Cross Platform platform.architecture(executable=sys.executable, bits='', linkage='') Queries the given executable (defaults to the Python interpreter binary) for various architecture information. Returns a tuple (bits, linkage) which contain information about the bit architecture and the linkage format used for the executable. Both values are returned as strings. Values that cannot be determined are returned as given by the parameter presets. If bits is given as '', the sizeof(pointer) (or sizeof(long) on Python version < 1.5.2) is used as indicator for the supported pointer size. The function relies on the system’s file command to do the actual work. This is available on most if not all Unix platforms and some non-Unix platforms and then only if the executable points to the Python interpreter. Reasonable defaults are used when the above needs are not met. Note On Mac OS X (and perhaps other platforms), executable files may be universal files containing multiple architectures. To get at the “64-bitness” of the current interpreter, it is more reliable to query the sys.maxsize attribute: is_64bits = sys.maxsize > 2**32 platform.machine() Returns the machine type, e.g. 'i386'. An empty string is returned if the value cannot be determined. platform.node() Returns the computer’s network name (may not be fully qualified!). An empty string is returned if the value cannot be determined. platform.platform(aliased=0, terse=0) Returns a single string identifying the underlying platform with as much useful information as possible. The output is intended to be human readable rather than machine parseable. It may look different on different platforms and this is intended. If aliased is true, the function will use aliases for various platforms that report system names which differ from their common names, for example SunOS will be reported as Solaris. The system_alias() function is used to implement this. Setting terse to true causes the function to return only the absolute minimum information needed to identify the platform. Changed in version 3.8: On macOS, the function now uses mac_ver(), if it returns a non-empty release string, to get the macOS version rather than the darwin version. platform.processor() Returns the (real) processor name, e.g. 'amdk6'. An empty string is returned if the value cannot be determined. Note that many platforms do not provide this information or simply return the same value as for machine(). NetBSD does this. platform.python_build() Returns a tuple (buildno, builddate) stating the Python build number and date as strings. platform.python_compiler() Returns a string identifying the compiler used for compiling Python. platform.python_branch() Returns a string identifying the Python implementation SCM branch. platform.python_implementation() Returns a string identifying the Python implementation. Possible return values are: ‘CPython’, ‘IronPython’, ‘Jython’, ‘PyPy’. platform.python_revision() Returns a string identifying the Python implementation SCM revision. platform.python_version() Returns the Python version as string 'major.minor.patchlevel'. Note that unlike the Python sys.version, the returned value will always include the patchlevel (it defaults to 0). platform.python_version_tuple() Returns the Python version as tuple (major, minor, patchlevel) of strings. 
Note that unlike the Python sys.version, the returned value will always include the patchlevel (it defaults to '0'). platform.release() Returns the system’s release, e.g. '2.2.0' or 'NT'. An empty string is returned if the value cannot be determined. platform.system() Returns the system/OS name, such as 'Linux', 'Darwin', 'Java', 'Windows'. An empty string is returned if the value cannot be determined. platform.system_alias(system, release, version) Returns (system, release, version) aliased to common marketing names used for some systems. It also does some reordering of the information in some cases where it would otherwise cause confusion. platform.version() Returns the system’s release version, e.g. '#3 on degas'. An empty string is returned if the value cannot be determined. platform.uname() Fairly portable uname interface. Returns a namedtuple() containing six attributes: system, node, release, version, machine, and processor. Note that this adds a sixth attribute (processor) not present in the os.uname() result. Also, the attribute names are different for the first two attributes; os.uname() names them sysname and nodename. Entries which cannot be determined are set to ''. Changed in version 3.3: Result changed from a tuple to a namedtuple. Java Platform platform.java_ver(release='', vendor='', vminfo=('', '', ''), osinfo=('', '', '')) Version interface for Jython. Returns a tuple (release, vendor, vminfo, osinfo) with vminfo being a tuple (vm_name, vm_release, vm_vendor) and osinfo being a tuple (os_name, os_version, os_arch). Values which cannot be determined are set to the defaults given as parameters (which all default to ''). Windows Platform platform.win32_ver(release='', version='', csd='', ptype='') Get additional version information from the Windows Registry and return a tuple (release, version, csd, ptype) referring to OS release, version number, CSD level (service pack) and OS type (multi/single processor). As a hint: ptype is 'Uniprocessor Free' on single processor NT machines and 'Multiprocessor Free' on multi processor machines. The ‘Free’ refers to the OS version being free of debugging code. It could also state ‘Checked’ which means the OS version uses debugging code, i.e. code that checks arguments, ranges, etc. platform.win32_edition() Returns a string representing the current Windows edition. Possible values include but are not limited to 'Enterprise', 'IoTUAP', 'ServerStandard', and 'nanoserver'. New in version 3.8. platform.win32_is_iot() Return True if the Windows edition returned by win32_edition() is recognized as an IoT edition. New in version 3.8. Mac OS Platform platform.mac_ver(release='', versioninfo=('', '', ''), machine='') Get Mac OS version information and return it as tuple (release, versioninfo, machine) with versioninfo being a tuple (version, dev_stage, non_release_version). Entries which cannot be determined are set to ''. All tuple entries are strings. Unix Platforms platform.libc_ver(executable=sys.executable, lib='', version='', chunksize=16384) Tries to determine the libc version against which the file executable (defaults to the Python interpreter) is linked. Returns a tuple of strings (lib, version) which default to the given parameters in case the lookup fails. Note that this function has intimate knowledge of how different libc versions add symbols to the executable and is probably only usable for executables compiled using gcc. The file is read and scanned in chunks of chunksize bytes.
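A short sketch combining several of the cross-platform queries above; every printed value varies from machine to machine:

import platform

u = platform.uname()
print(u.system, u.release, u.machine)   # e.g. Linux 5.15.0 x86_64
print(platform.python_version())        # e.g. 3.9.7
print(platform.node())                  # network name, possibly empty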
python.library.platform
platform.architecture(executable=sys.executable, bits='', linkage='') Queries the given executable (defaults to the Python interpreter binary) for various architecture information. Returns a tuple (bits, linkage) which contain information about the bit architecture and the linkage format used for the executable. Both values are returned as strings. Values that cannot be determined are returned as given by the parameter presets. If bits is given as '', the sizeof(pointer) (or sizeof(long) on Python version < 1.5.2) is used as indicator for the supported pointer size. The function relies on the system’s file command to do the actual work. This is available on most if not all Unix platforms and some non-Unix platforms and then only if the executable points to the Python interpreter. Reasonable defaults are used when the above needs are not met. Note On Mac OS X (and perhaps other platforms), executable files may be universal files containing multiple architectures. To get at the “64-bitness” of the current interpreter, it is more reliable to query the sys.maxsize attribute: is_64bits = sys.maxsize > 2**32
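For illustration, a minimal interactive session (the returned values depend on the interpreter binary; '64bit'/'ELF' is merely a typical result for a 64-bit Linux build):
>>> import platform, sys
>>> platform.architecture()
('64bit', 'ELF')
>>> sys.maxsize > 2**32   # the more reliable 64-bit check mentioned above
True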
python.library.platform#platform.architecture
platform.java_ver(release='', vendor='', vminfo=('', '', ''), osinfo=('', '', '')) Version interface for Jython. Returns a tuple (release, vendor, vminfo, osinfo) with vminfo being a tuple (vm_name, vm_release, vm_vendor) and osinfo being a tuple (os_name, os_version, os_arch). Values which cannot be determined are set to the defaults given as parameters (which all default to '').
python.library.platform#platform.java_ver
platform.libc_ver(executable=sys.executable, lib='', version='', chunksize=16384) Tries to determine the libc version against which the file executable (defaults to the Python interpreter) is linked. Returns a tuple of strings (lib, version) which default to the given parameters in case the lookup fails. Note that this function has intimate knowledge of how different libc versions add symbols to the executable and is probably only usable for executables compiled using gcc. The file is read and scanned in chunks of chunksize bytes.
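A minimal sketch, assuming a glibc-based Linux system (elsewhere the given defaults may be returned; the version shown is only an example):
>>> import platform
>>> platform.libc_ver()
('glibc', '2.31')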
python.library.platform#platform.libc_ver
platform.machine() Returns the machine type, e.g. 'i386'. An empty string is returned if the value cannot be determined.
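For example (the value is hardware-dependent; 'x86_64' is shown only as a common case):
>>> import platform
>>> platform.machine()
'x86_64'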
python.library.platform#platform.machine
platform.mac_ver(release='', versioninfo=('', '', ''), machine='') Get Mac OS version information and return it as tuple (release, versioninfo, machine) with versioninfo being a tuple (version, dev_stage, non_release_version). Entries which cannot be determined are set to ''. All tuple entries are strings.
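A minimal sketch, assuming a macOS machine (on other systems the entries fall back to the given defaults; the release shown is only illustrative):
>>> import platform
>>> platform.mac_ver()
('10.15.7', ('', '', ''), 'x86_64')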
python.library.platform#platform.mac_ver
platform.node() Returns the computer’s network name (may not be fully qualified!). An empty string is returned if the value cannot be determined.
python.library.platform#platform.node
platform.platform(aliased=0, terse=0) Returns a single string identifying the underlying platform with as much useful information as possible. The output is intended to be human readable rather than machine parseable. It may look different on different platforms and this is intended. If aliased is true, the function will use aliases for various platforms that report system names which differ from their common names, for example SunOS will be reported as Solaris. The system_alias() function is used to implement this. Setting terse to true causes the function to return only the absolute minimum information needed to identify the platform. Changed in version 3.8: On macOS, the function now uses mac_ver(), if it returns a non-empty release string, to get the macOS version rather than the darwin version.
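An illustrative call (the exact string varies by system and Python version; this one assumes a Linux host with glibc):
>>> import platform
>>> platform.platform()
'Linux-5.4.0-91-generic-x86_64-with-glibc2.31'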
python.library.platform#platform.platform
platform.processor() Returns the (real) processor name, e.g. 'amdk6'. An empty string is returned if the value cannot be determined. Note that many platforms do not provide this information or simply return the same value as for machine(). NetBSD does this.
python.library.platform#platform.processor
platform.python_branch() Returns a string identifying the Python implementation SCM branch.
python.library.platform#platform.python_branch
platform.python_build() Returns a tuple (buildno, builddate) stating the Python build number and date as strings.
python.library.platform#platform.python_build
platform.python_compiler() Returns a string identifying the compiler used for compiling Python.
python.library.platform#platform.python_compiler
platform.python_implementation() Returns a string identifying the Python implementation. Possible return values are: ‘CPython’, ‘IronPython’, ‘Jython’, ‘PyPy’.
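For example, under the reference interpreter:
>>> import platform
>>> platform.python_implementation()
'CPython'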
python.library.platform#platform.python_implementation
platform.python_revision() Returns a string identifying the Python implementation SCM revision.
python.library.platform#platform.python_revision
platform.python_version() Returns the Python version as string 'major.minor.patchlevel'. Note that unlike the Python sys.version, the returned value will always include the patchlevel (it defaults to 0).
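For example (the version shown is only illustrative of the interpreter running the code):
>>> import platform
>>> platform.python_version()
'3.9.7'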
python.library.platform#platform.python_version
platform.python_version_tuple() Returns the Python version as tuple (major, minor, patchlevel) of strings. Note that unlike the Python sys.version, the returned value will always include the patchlevel (it defaults to '0').
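For example (again, the version shown is only illustrative):
>>> import platform
>>> platform.python_version_tuple()
('3', '9', '7')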
python.library.platform#platform.python_version_tuple
platform.release() Returns the system’s release, e.g. '2.2.0' or 'NT'. An empty string is returned if the value cannot be determined.
python.library.platform#platform.release
platform.system() Returns the system/OS name, such as 'Linux', 'Darwin', 'Java', 'Windows'. An empty string is returned if the value cannot be determined.
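For example, on a Linux host:
>>> import platform
>>> platform.system()
'Linux'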
python.library.platform#platform.system
platform.system_alias(system, release, version) Returns (system, release, version) aliased to common marketing names used for some systems. It also does some reordering of the information in some cases where it would otherwise cause confusion.
python.library.platform#platform.system_alias
platform.uname() Fairly portable uname interface. Returns a namedtuple() containing six attributes: system, node, release, version, machine, and processor. Note that this adds a sixth attribute (processor) not present in the os.uname() result. Also, the attribute names are different for the first two attributes; os.uname() names them sysname and nodename. Entries which cannot be determined are set to ''. Changed in version 3.3: Result changed from a tuple to a namedtuple.
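A minimal sketch of attribute access (the values shown assume a Linux host; 'myhost' is just a placeholder network name):
>>> import platform
>>> u = platform.uname()
>>> u.system, u.release, u.machine
('Linux', '5.4.0-91-generic', 'x86_64')
>>> u.node      # same value as platform.node()
'myhost'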
python.library.platform#platform.uname
platform.version() Returns the system’s release version, e.g. '#3 on degas'. An empty string is returned if the value cannot be determined.
python.library.platform#platform.version
platform.win32_edition() Returns a string representing the current Windows edition. Possible values include but are not limited to 'Enterprise', 'IoTUAP', 'ServerStandard', and 'nanoserver'. New in version 3.8.
python.library.platform#platform.win32_edition
platform.win32_is_iot() Return True if the Windows edition returned by win32_edition() is recognized as an IoT edition. New in version 3.8.
python.library.platform#platform.win32_is_iot
platform.win32_ver(release='', version='', csd='', ptype='') Get additional version information from the Windows Registry and return a tuple (release, version, csd, ptype) referring to OS release, version number, CSD level (service pack) and OS type (multi/single processor). As a hint: ptype is 'Uniprocessor Free' on single processor NT machines and 'Multiprocessor Free' on multi processor machines. The ‘Free’ refers to the OS version being free of debugging code. It could also state ‘Checked’ which means the OS version uses debugging code, i.e. code that checks arguments, ranges, etc.
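A sketch assuming a Windows 10 machine (on non-Windows systems the given defaults are returned; the build number shown is only an example):
>>> import platform
>>> platform.win32_ver()
('10', '10.0.19041', 'SP0', 'Multiprocessor Free')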
python.library.platform#platform.win32_ver
plistlib — Generate and parse Apple .plist files Source code: Lib/plistlib.py This module provides an interface for reading and writing the “property list” files used by Apple, primarily on macOS and iOS. This module supports both binary and XML plist files. The property list (.plist) file format is a simple serialization supporting basic object types, like dictionaries, lists, numbers and strings. Usually the top level object is a dictionary. To write out and to parse a plist file, use the dump() and load() functions. To work with plist data in bytes objects, use dumps() and loads(). Values can be strings, integers, floats, booleans, tuples, lists, dictionaries (but only with string keys), bytes, bytearray or datetime.datetime objects. Changed in version 3.4: New API, old API deprecated. Support for binary format plists added. Changed in version 3.8: Support added for reading and writing UID tokens in binary plists as used by NSKeyedArchiver and NSKeyedUnarchiver. Changed in version 3.9: Old API removed. See also PList manual page Apple’s documentation of the file format. This module defines the following functions: plistlib.load(fp, *, fmt=None, dict_type=dict) Read a plist file. fp should be a readable and binary file object. Return the unpacked root object (which usually is a dictionary). The fmt is the format of the file and the following values are valid: None: Autodetect the file format FMT_XML: XML file format FMT_BINARY: Binary plist format The dict_type is the type used for dictionaries that are read from the plist file. XML data for the FMT_XML format is parsed using the Expat parser from xml.parsers.expat – see its documentation for possible exceptions on ill-formed XML. Unknown elements will simply be ignored by the plist parser. The parser for the binary format raises InvalidFileException when the file cannot be parsed. New in version 3.4. plistlib.loads(data, *, fmt=None, dict_type=dict) Load a plist from a bytes object. See load() for an explanation of the keyword arguments. New in version 3.4. plistlib.dump(value, fp, *, fmt=FMT_XML, sort_keys=True, skipkeys=False) Write value to a plist file. Fp should be a writable, binary file object. The fmt argument specifies the format of the plist file and can be one of the following values: FMT_XML: XML formatted plist file FMT_BINARY: Binary formatted plist file When sort_keys is true (the default) the keys for dictionaries will be written to the plist in sorted order, otherwise they will be written in the iteration order of the dictionary. When skipkeys is false (the default) the function raises TypeError when a key of a dictionary is not a string, otherwise such keys are skipped. A TypeError will be raised if the object is of an unsupported type or a container that contains objects of unsupported types. An OverflowError will be raised for integer values that cannot be represented in (binary) plist files. New in version 3.4. plistlib.dumps(value, *, fmt=FMT_XML, sort_keys=True, skipkeys=False) Return value as a plist-formatted bytes object. See the documentation for dump() for an explanation of the keyword arguments of this function. New in version 3.4. The following classes are available: class plistlib.UID(data) Wraps an int. This is used when reading or writing NSKeyedArchiver encoded data, which contains UID (see PList manual). It has one attribute, data, which can be used to retrieve the int value of the UID. data must be in the range 0 <= data < 2**64. New in version 3.8. 
The following constants are available: plistlib.FMT_XML The XML format for plist files. New in version 3.4. plistlib.FMT_BINARY The binary format for plist files. New in version 3.4. Examples Generating a plist:
import datetime
import time
from plistlib import dump, load

pl = dict(
    aString = "Doodah",
    aList = ["A", "B", 12, 32.1, [1, 2, 3]],
    aFloat = 0.1,
    anInt = 728,
    aDict = dict(
        anotherString = "<hello & hi there!>",
        aThirdString = "M\xe4ssig, Ma\xdf",
        aTrueValue = True,
        aFalseValue = False,
    ),
    someData = b"<binary gunk>",
    someMoreData = b"<lots of binary gunk>" * 10,
    aDate = datetime.datetime.fromtimestamp(time.mktime(time.gmtime())),
)
# fileName is the path of the plist file to write
with open(fileName, 'wb') as fp:
    dump(pl, fp)
Parsing a plist:
with open(fileName, 'rb') as fp:
    pl = load(fp)
print(pl["aKey"])
python.library.plistlib
plistlib.dump(value, fp, *, fmt=FMT_XML, sort_keys=True, skipkeys=False) Write value to a plist file. Fp should be a writable, binary file object. The fmt argument specifies the format of the plist file and can be one of the following values: FMT_XML: XML formatted plist file FMT_BINARY: Binary formatted plist file When sort_keys is true (the default) the keys for dictionaries will be written to the plist in sorted order, otherwise they will be written in the iteration order of the dictionary. When skipkeys is false (the default) the function raises TypeError when a key of a dictionary is not a string, otherwise such keys are skipped. A TypeError will be raised if the object is of an unsupported type or a container that contains objects of unsupported types. An OverflowError will be raised for integer values that cannot be represented in (binary) plist files. New in version 3.4.
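A minimal sketch writing to an in-memory buffer instead of a real file (binary plists start with the b'bplist00' magic, which the last line peeks at; the dictionary is arbitrary):
>>> import io, plistlib
>>> buf = io.BytesIO()
>>> plistlib.dump({"name": "example", "count": 3}, buf, fmt=plistlib.FMT_BINARY)
>>> buf.getvalue()[:8]
b'bplist00'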
python.library.plistlib#plistlib.dump
plistlib.dumps(value, *, fmt=FMT_XML, sort_keys=True, skipkeys=False) Return value as a plist-formatted bytes object. See the documentation for dump() for an explanation of the keyword arguments of this function. New in version 3.4.
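For example, a round trip through bytes (the dictionary shown is arbitrary):
>>> import plistlib
>>> payload = plistlib.dumps({"answer": 42})
>>> payload.startswith(b'<?xml')   # FMT_XML is the default
True
>>> plistlib.loads(payload)
{'answer': 42}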
python.library.plistlib#plistlib.dumps
plistlib.FMT_BINARY The binary format for plist files. New in version 3.4.
python.library.plistlib#plistlib.FMT_BINARY
plistlib.FMT_XML The XML format for plist files. New in version 3.4.
python.library.plistlib#plistlib.FMT_XML
plistlib.load(fp, *, fmt=None, dict_type=dict) Read a plist file. fp should be a readable and binary file object. Return the unpacked root object (which usually is a dictionary). The fmt is the format of the file and the following values are valid: None: Autodetect the file format FMT_XML: XML file format FMT_BINARY: Binary plist format The dict_type is the type used for dictionaries that are read from the plist file. XML data for the FMT_XML format is parsed using the Expat parser from xml.parsers.expat – see its documentation for possible exceptions on ill-formed XML. Unknown elements will simply be ignored by the plist parser. The parser for the binary format raises InvalidFileException when the file cannot be parsed. New in version 3.4.
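A minimal sketch using an in-memory file object so no file on disk is needed (fmt is left at None, so the format is auto-detected):
>>> import io, plistlib
>>> data = plistlib.dumps({"answer": 42}, fmt=plistlib.FMT_BINARY)
>>> plistlib.load(io.BytesIO(data))
{'answer': 42}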
python.library.plistlib#plistlib.load