diff --git "a/final_dataset.jsonl" "b/final_dataset.jsonl" new file mode 100644--- /dev/null +++ "b/final_dataset.jsonl" @@ -0,0 +1,461 @@ +{"url": "https://docs.python.org/3/copyright.html", "title": "Copyright", "content": "Copyright\u00b6\nPython and this documentation is:\nCopyright \u00a9 2001 Python Software Foundation. All rights reserved.\nCopyright \u00a9 2000 BeOpen.com. All rights reserved.\nCopyright \u00a9 1995-2000 Corporation for National Research Initiatives. All rights reserved.\nCopyright \u00a9 1991-1995 Stichting Mathematisch Centrum. All rights reserved.\nSee History and License for complete license and permissions information.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 100} +{"url": "https://docs.python.org/3/tutorial/appendix.html", "title": "Appendix", "content": "16. Appendix\u00b6\n16.1. Interactive Mode\u00b6\nThere are two variants of the interactive REPL. The classic basic interpreter is supported on all platforms with minimal line control capabilities.\nSince Python 3.13, a new interactive shell is used by default.\nThis one supports color, multiline editing, history browsing, and\npaste mode. To disable color, see Controlling color for\ndetails. Function keys provide some additional functionality.\nF1 enters the interactive help browser pydoc\n.\nF2 allows for browsing command-line history with neither output nor the\n>>> and \u2026 prompts. F3 enters \u201cpaste mode\u201d, which\nmakes pasting larger blocks of code easier. Press F3 to return to\nthe regular prompt.\nWhen using the new interactive shell, exit the shell by typing exit or quit. Adding call parentheses after those commands is not required.\nIf the new interactive shell is not desired, it can be disabled via\nthe PYTHON_BASIC_REPL\nenvironment variable.\n16.1.1. 
Error Handling\u00b6\nWhen an error occurs, the interpreter prints an error message and a stack trace.\nIn interactive mode, it then returns to the primary prompt; when input came from\na file, it exits with a nonzero exit status after printing the stack trace.\n(Exceptions handled by an except\nclause in a try\nstatement\nare not errors in this context.) Some errors are unconditionally fatal and\ncause an exit with a nonzero exit status; this applies to internal inconsistencies and\nsome cases of running out of memory. All error messages are written to the\nstandard error stream; normal output from executed commands is written to\nstandard output.\nTyping the interrupt character (usually Control-C or Delete) to the primary or\nsecondary prompt cancels the input and returns to the primary prompt. [1]\nTyping an interrupt while a command is executing raises the\nKeyboardInterrupt\nexception, which may be handled by a try\nstatement.\n16.1.2. Executable Python Scripts\u00b6\nOn BSD\u2019ish Unix systems, Python scripts can be made directly executable, like shell scripts, by putting the line\n#!/usr/bin/env python3\n(assuming that the interpreter is on the user\u2019s PATH\n) at the beginning\nof the script and giving the file an executable mode. The #!\nmust be the\nfirst two characters of the file. On some platforms, this first line must end\nwith a Unix-style line ending ('\\n'\n), not a Windows ('\\r\\n'\n) line\nending. Note that the hash, or pound, character, '#'\n, is used to start a\ncomment in Python.\nThe script can be given an executable mode, or permission, using the chmod command.\n$ chmod +x myscript.py\nOn Windows systems, there is no notion of an \u201cexecutable mode\u201d. The Python\ninstaller automatically associates .py\nfiles with python.exe\nso that\na double-click on a Python file will run it as a script. The extension can\nalso be .pyw\n, in that case, the console window that normally appears is\nsuppressed.\n16.1.3. 
The Interactive Startup File\u00b6\nWhen you use Python interactively, it is frequently handy to have some standard\ncommands executed every time the interpreter is started. You can do this by\nsetting an environment variable named PYTHONSTARTUP\nto the name of a\nfile containing your start-up commands. This is similar to the .profile\nfeature of the Unix shells.\nThis file is only read in interactive sessions, not when Python reads commands\nfrom a script, and not when /dev/tty\nis given as the explicit source of\ncommands (which otherwise behaves like an interactive session). It is executed\nin the same namespace where interactive commands are executed, so that objects\nthat it defines or imports can be used without qualification in the interactive\nsession. You can also change the prompts sys.ps1\nand sys.ps2\nin this\nfile.\nIf you want to read an additional start-up file from the current directory, you\ncan program this in the global start-up file using code like if\nos.path.isfile('.pythonrc.py'): exec(open('.pythonrc.py').read())\n.\nIf you want to use the startup file in a script, you must do this explicitly\nin the script:\nimport os\nfilename = os.environ.get('PYTHONSTARTUP')\nif filename and os.path.isfile(filename):\nwith open(filename) as fobj:\nstartup_file = fobj.read()\nexec(startup_file)\n16.1.4. The Customization Modules\u00b6\nPython provides two hooks to let you customize it: sitecustomize and usercustomize. To see how it works, you need first to find the location of your user site-packages directory. Start Python and run this code:\n>>> import site\n>>> site.getusersitepackages()\n'/home/user/.local/lib/python3.x/site-packages'\nNow you can create a file named usercustomize.py\nin that directory and\nput anything you want in it. 
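The prompt-customization and chained start-up behavior described above can be sketched as a small `PYTHONSTARTUP` file. The prompt strings and the `.pythonrc.py` file name are illustrative choices, not requirements:

```python
# A minimal sketch of a PYTHONSTARTUP file. The prompt strings and the
# .pythonrc.py name are illustrative, not required by Python.
import os
import sys

# Customize the interactive prompts (sys.ps1 / sys.ps2).
sys.ps1 = "py> "
sys.ps2 = "... "

# Optionally chain in a per-directory start-up file, as suggested above.
if os.path.isfile(".pythonrc.py"):
    with open(".pythonrc.py") as fobj:
        exec(fobj.read())
```

Because the file runs in the same namespace as the interactive session, anything it defines (here, nothing beyond the prompts) is available without qualification afterward.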
It will affect every invocation of Python, unless\nit is started with the -s\noption to disable the automatic import.\nsitecustomize works in the same way, but is typically created by an\nadministrator of the computer in the global site-packages directory, and is\nimported before usercustomize. See the documentation of the site\nmodule for more details.\nFootnotes", "code_snippets": ["\n", "\n", " ", " ", "\n", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", "\n", "\n", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 1234} +{"url": "https://docs.python.org/3/tutorial/floatingpoint.html", "title": "Floating-Point Arithmetic: Issues and Limitations", "content": "15. Floating-Point Arithmetic: Issues and Limitations\u00b6\nFloating-point numbers are represented in computer hardware as base 2 (binary)\nfractions. For example, the decimal fraction 0.625\nhas value 6/10 + 2/100 + 5/1000, and in the same way the binary fraction 0.101\nhas value 1/2 + 0/4 + 1/8. These two fractions have identical values, the only\nreal difference being that the first is written in base 10 fractional notation,\nand the second in base 2.\nUnfortunately, most decimal fractions cannot be represented exactly as binary fractions. A consequence is that, in general, the decimal floating-point numbers you enter are only approximated by the binary floating-point numbers actually stored in the machine.\nThe problem is easier to understand at first in base 10. Consider the fraction 1/3. You can approximate that as a base 10 fraction:\n0.3\nor, better,\n0.33\nor, better,\n0.333\nand so on. No matter how many digits you\u2019re willing to write down, the result will never be exactly 1/3, but will be an increasingly better approximation of 1/3.\nIn the same way, no matter how many base 2 digits you\u2019re willing to use, the decimal value 0.1 cannot be represented exactly as a base 2 fraction. 
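The claim that 0.625 is exactly representable while 0.1 is not can be checked directly with the standard-library `fractions` module, which recovers the exact rational value a float stores; a brief sketch:

```python
from fractions import Fraction

# 0.625 is 0.101 in binary (1/2 + 1/8), so the float stores it exactly.
assert Fraction(0.625) == Fraction(5, 8)

# 0.1 has no finite binary expansion, so the stored float is only close.
assert Fraction(0.1) != Fraction(1, 10)

# The exact rational value actually stored for 0.1:
print(Fraction(0.1))
```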
In base 2, 1/10 is the infinitely repeating fraction\n0.0001100110011001100110011001100110011001100110011...\nStop at any finite number of bits, and you get an approximation. On most\nmachines today, floats are approximated using a binary fraction with\nthe numerator using the first 53 bits starting with the most significant bit and\nwith the denominator as a power of two. In the case of 1/10, the binary fraction\nis 3602879701896397 / 2 ** 55\nwhich is close to but not exactly\nequal to the true value of 1/10.\nMany users are not aware of the approximation because of the way values are displayed. Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine. On most machines, if Python were to print the true decimal value of the binary approximation stored for 0.1, it would have to display:\n>>> 0.1\n0.1000000000000000055511151231257827021181583404541015625\nThat is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead:\n>>> 1 / 10\n0.1\nJust remember, even though the printed result looks like the exact value of 1/10, the actual stored value is the nearest representable binary fraction.\nInterestingly, there are many different decimal numbers that share the same\nnearest approximate binary fraction. For example, the numbers 0.1\nand\n0.10000000000000001\nand\n0.1000000000000000055511151231257827021181583404541015625\nare all\napproximated by 3602879701896397 / 2 ** 55\n. Since all of these decimal\nvalues share the same approximation, any one of them could be displayed\nwhile still preserving the invariant eval(repr(x)) == x\n.\nHistorically, the Python prompt and built-in repr()\nfunction would choose\nthe one with 17 significant digits, 0.10000000000000001\n. 
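These statements are directly verifiable: the different decimal literals all parse to the same binary fraction, and `repr()` round-trips the stored value. A runnable sketch:

```python
# Several decimal literals round to the same nearest binary fraction,
# 3602879701896397 / 2**55, so they compare equal as floats.
assert 0.1 == 3602879701896397 / 2 ** 55
assert 0.1 == 0.10000000000000001
assert 0.1 == 0.1000000000000000055511151231257827021181583404541015625

# The repr() invariant mentioned above: eval(repr(x)) == x.
assert eval(repr(0.1)) == 0.1
```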
Starting with\nPython 3.1, Python (on most systems) is now able to choose the shortest of\nthese and simply display 0.1\n.\nNote that this is in the very nature of binary floating point: this is not a bug in Python, and it is not a bug in your code either. You\u2019ll see the same kind of thing in all languages that support your hardware\u2019s floating-point arithmetic (although some languages may not display the difference by default, or in all output modes).\nFor more pleasant output, you may wish to use string formatting to produce a limited number of significant digits:\n>>> format(math.pi, '.12g') # give 12 significant digits\n'3.14159265359'\n>>> format(math.pi, '.2f') # give 2 digits after the point\n'3.14'\n>>> repr(math.pi)\n'3.141592653589793'\nIt\u2019s important to realize that this is, in a real sense, an illusion: you\u2019re simply rounding the display of the true machine value.\nOne illusion may beget another. For example, since 0.1 is not exactly 1/10, summing three values of 0.1 may not yield exactly 0.3, either:\n>>> 0.1 + 0.1 + 0.1 == 0.3\nFalse\nAlso, since the 0.1 cannot get any closer to the exact value of 1/10 and\n0.3 cannot get any closer to the exact value of 3/10, then pre-rounding with\nround()\nfunction cannot help:\n>>> round(0.1, 1) + round(0.1, 1) + round(0.1, 1) == round(0.3, 1)\nFalse\nThough the numbers cannot be made closer to their intended exact values,\nthe math.isclose()\nfunction can be useful for comparing inexact values:\n>>> math.isclose(0.1 + 0.1 + 0.1, 0.3)\nTrue\nAlternatively, the round()\nfunction can be used to compare rough\napproximations:\n>>> round(math.pi, ndigits=2) == round(22 / 7, ndigits=2)\nTrue\nBinary floating-point arithmetic holds many surprises like this. The problem with \u201c0.1\u201d is explained in precise detail below, in the \u201cRepresentation Error\u201d section. 
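The comparison idioms shown above can be collected into one runnable check; the tolerance values in the last two lines are illustrative, chosen only to show how `abs_tol` behaves near zero:

```python
import math

# Summation drifts away from the exact decimal answer...
total = 0.1 + 0.1 + 0.1
assert total != 0.3

# ...but is well within isclose()'s default relative tolerance of 1e-09.
assert math.isclose(total, 0.3)

# Near zero, a relative tolerance alone is not useful; an explicit
# absolute tolerance is needed (values here are illustrative).
assert math.isclose(1e-12, 0.0, abs_tol=1e-9)
assert not math.isclose(1e-12, 0.0)
```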
See Examples of Floating Point Problems for a pleasant summary of how binary floating point works and the kinds of problems commonly encountered in practice. Also see The Perils of Floating Point for a more complete account of other common surprises.\nAs that says near the end, \u201cthere are no easy answers.\u201d Still, don\u2019t be unduly wary of floating point! The errors in Python float operations are inherited from the floating-point hardware, and on most machines are on the order of no more than 1 part in 2**53 per operation. That\u2019s more than adequate for most tasks, but you do need to keep in mind that it\u2019s not decimal arithmetic and that every float operation can suffer a new rounding error.\nWhile pathological cases do exist, for most casual use of floating-point\narithmetic you\u2019ll see the result you expect in the end if you simply round the\ndisplay of your final results to the number of decimal digits you expect.\nstr()\nusually suffices, and for finer control see the str.format()\nmethod\u2019s format specifiers in Format String Syntax.\nFor use cases which require exact decimal representation, try using the\ndecimal\nmodule which implements decimal arithmetic suitable for\naccounting applications and high-precision applications.\nAnother form of exact arithmetic is supported by the fractions\nmodule\nwhich implements arithmetic based on rational numbers (so the numbers like\n1/3 can be represented exactly).\nIf you are a heavy user of floating-point operations you should take a look at the NumPy package and many other packages for mathematical and statistical operations supplied by the SciPy project. See .\nPython provides tools that may help on those rare occasions when you really\ndo want to know the exact value of a float. 
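The exact-arithmetic alternatives mentioned above, the `decimal` and `fractions` modules, avoid the binary rounding issue entirely; a brief sketch of each:

```python
from decimal import Decimal
from fractions import Fraction

# decimal: exact base-10 arithmetic, suitable for accounting.
# Note the string constructor: Decimal("0.1"), not Decimal(0.1).
assert Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3")

# fractions: exact rational arithmetic, so 1/3 is representable.
assert Fraction(1, 3) * 3 == 1
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
```

The string constructor matters for `Decimal`: `Decimal(0.1)` would capture the already-inexact binary float, while `Decimal("0.1")` captures the intended decimal value.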
The\nfloat.as_integer_ratio()\nmethod expresses the value of a float as a\nfraction:\n>>> x = 3.14159\n>>> x.as_integer_ratio()\n(3537115888337719, 1125899906842624)\nSince the ratio is exact, it can be used to losslessly recreate the original value:\n>>> x == 3537115888337719 / 1125899906842624\nTrue\nThe float.hex()\nmethod expresses a float in hexadecimal (base\n16), again giving the exact value stored by your computer:\n>>> x.hex()\n'0x1.921f9f01b866ep+1'\nThis precise hexadecimal representation can be used to reconstruct the float value exactly:\n>>> x == float.fromhex('0x1.921f9f01b866ep+1')\nTrue\nSince the representation is exact, it is useful for reliably porting values across different versions of Python (platform independence) and exchanging data with other languages that support the same format (such as Java and C99).\nAnother helpful tool is the sum()\nfunction which helps mitigate\nloss-of-precision during summation. It uses extended precision for\nintermediate rounding steps as values are added onto a running total.\nThat can make a difference in overall accuracy so that the errors do not\naccumulate to the point where they affect the final total:\n>>> 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 == 1.0\nFalse\n>>> sum([0.1] * 10) == 1.0\nTrue\nThe math.fsum()\ngoes further and tracks all of the \u201clost digits\u201d\nas values are added onto a running total so that the result has only a\nsingle rounding. This is slower than sum()\nbut will be more\naccurate in uncommon cases where large magnitude inputs mostly cancel\neach other out leaving a final sum near zero:\n>>> arr = [-0.10430216751806065, -266310978.67179024, 143401161448607.16,\n... 
-143401161400469.7, 266262841.31058735, -0.003244936839808227]\n>>> float(sum(map(Fraction, arr))) # Exact summation with single rounding\n8.042173697819788e-13\n>>> math.fsum(arr) # Single rounding\n8.042173697819788e-13\n>>> sum(arr) # Multiple roundings in extended precision\n8.042178034628478e-13\n>>> total = 0.0\n>>> for x in arr:\n... total += x # Multiple roundings in standard precision\n...\n>>> total # Straight addition has no correct digits!\n-0.0051575902860057365\n15.1. Representation Error\u00b6\nThis section explains the \u201c0.1\u201d example in detail, and shows how you can perform an exact analysis of cases like this yourself. Basic familiarity with binary floating-point representation is assumed.\nRepresentation error refers to the fact that some (most, actually) decimal fractions cannot be represented exactly as binary (base 2) fractions. This is the chief reason why Python (or Perl, C, C++, Java, Fortran, and many others) often won\u2019t display the exact decimal number you expect.\nWhy is that? 1/10 is not exactly representable as a binary fraction. Since at least 2000, almost all machines use IEEE 754 binary floating-point arithmetic, and almost all platforms map Python floats to IEEE 754 binary64 \u201cdouble precision\u201d values. IEEE 754 binary64 values contain 53 bits of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the form J/2**N where J is an integer containing exactly 53 bits. Rewriting\n1 / 10 ~= J / (2**N)\nas\nJ ~= 2**N / 10\nand recalling that J has exactly 53 bits (is >= 2**52\nbut < 2**53\n),\nthe best value for N is 56:\n>>> 2**52 <= 2**56 // 10 < 2**53\nTrue\nThat is, 56 is the only value for N that leaves J with exactly 53 bits. 
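The uniqueness of N = 56 can be checked by trying the neighboring exponents as well; a sketch of that check (the helper name is illustrative):

```python
# J = 2**N // 10 must have exactly 53 bits: 2**52 <= J < 2**53.
def bits_ok(n):
    j = 2 ** n // 10
    return 2 ** 52 <= j < 2 ** 53

assert bits_ok(56)
assert not bits_ok(55)  # J falls below 2**52: too few bits
assert not bits_ok(57)  # J reaches 2**53: too many bits
```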
The best possible value for J is then that quotient rounded:\n>>> q, r = divmod(2**56, 10)\n>>> r\n6\nSince the remainder is more than half of 10, the best approximation is obtained by rounding up:\n>>> q+1\n7205759403792794\nTherefore the best possible approximation to 1/10 in IEEE 754 double precision is:\n7205759403792794 / 2 ** 56\nDividing both the numerator and denominator by two reduces the fraction to:\n3602879701896397 / 2 ** 55\nNote that since we rounded up, this is actually a little bit larger than 1/10; if we had not rounded up, the quotient would have been a little bit smaller than 1/10. But in no case can it be exactly 1/10!\nSo the computer never \u201csees\u201d 1/10: what it sees is the exact fraction given above, the best IEEE 754 double approximation it can get:\n>>> 0.1 * 2 ** 55\n3602879701896397.0\nIf we multiply that fraction by 10**55, we can see the value out to 55 decimal digits:\n>>> 3602879701896397 * 10 ** 55 // 2 ** 55\n1000000000000000055511151231257827021181583404541015625\nmeaning that the exact number stored in the computer is equal to the decimal value 0.1000000000000000055511151231257827021181583404541015625. 
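The whole rounding argument above can be replayed in a few lines, confirming that the rounded quotient reduces to the fraction Python actually stores for 0.1:

```python
# Round 2**56 / 10 to the nearest integer J, then reduce the fraction.
q, r = divmod(2 ** 56, 10)
assert r == 6            # remainder > 5, so round up
j = q + 1
assert j == 7205759403792794

# Dividing numerator and denominator by two gives the reduced form,
# and that fraction is exactly the float 0.1.
assert j / 2 ** 56 == 3602879701896397 / 2 ** 55
assert 0.1 == j / 2 ** 56
```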
Instead of displaying the full decimal value, many languages (including older versions of Python), round the result to 17 significant digits:\n>>> format(0.1, '.17f')\n'0.10000000000000001'\nThe fractions\nand decimal\nmodules make these calculations\neasy:\n>>> from decimal import Decimal\n>>> from fractions import Fraction\n>>> Fraction.from_float(0.1)\nFraction(3602879701896397, 36028797018963968)\n>>> (0.1).as_integer_ratio()\n(3602879701896397, 36028797018963968)\n>>> Decimal.from_float(0.1)\nDecimal('0.1000000000000000055511151231257827021181583404541015625')\n>>> format(Decimal.from_float(0.1), '.17')\n'0.10000000000000001'", "code_snippets": ["\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 2909} +{"url": "https://docs.python.org/3/tutorial/interactive.html", "title": "Interactive Input Editing and History Substitution", "content": "14. Interactive Input Editing and History Substitution\u00b6\nSome versions of the Python interpreter support editing of the current input line and history substitution, similar to facilities found in the Korn shell and the GNU Bash shell. This is implemented using the GNU Readline library, which supports various styles of editing. This library has its own documentation which we won\u2019t duplicate here.\n14.1. Tab Completion and History Editing\u00b6\nCompletion of variable and module names is\nautomatically enabled at interpreter startup so\nthat the Tab key invokes the completion function; it looks at\nPython statement names, the current local variables, and the available\nmodule names. For dotted expressions such as string.a\n, it will evaluate\nthe expression up to the final '.'\nand then suggest completions from\nthe attributes of the resulting object. 
Note that this may execute\napplication-defined code if an object with a __getattr__()\nmethod\nis part of the expression. The default configuration also saves your\nhistory into a file named .python_history\nin your user directory.\nThe history will be available again during the next interactive interpreter\nsession.\n14.2. Alternatives to the Interactive Interpreter\u00b6\nThis facility is an enormous step forward compared to earlier versions of the\ninterpreter; however, some wishes are left: It would be nice if the proper\nindentation were suggested on continuation lines (the parser knows if an\nINDENT\ntoken is required next). The completion mechanism might\nuse the interpreter\u2019s symbol table. A command to check (or even suggest)\nmatching parentheses, quotes, etc., would also be useful.\nOne alternative enhanced interactive interpreter that has been around for quite some time is IPython, which features tab completion, object exploration and advanced history management. It can also be thoroughly customized and embedded into other applications. Another similar enhanced interactive environment is bpython.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 487} +{"url": "https://docs.python.org/3/tutorial/whatnow.html", "title": "What Now?", "content": "13. What Now?\u00b6\nReading this tutorial has probably reinforced your interest in using Python \u2014 you should be eager to apply Python to solving your real-world problems. Where should you go to learn more?\nThis tutorial is part of Python\u2019s documentation set. Some other documents in the set are:\n-\nYou should browse through this manual, which gives complete (though terse) reference material about types, functions, and the modules in the standard library. The standard Python distribution includes a lot of additional code. 
There are modules to read Unix mailboxes, retrieve documents via HTTP, generate random numbers, parse command-line options, compress data, and many other tasks. Skimming through the Library Reference will give you an idea of what\u2019s available.\nInstalling Python Modules explains how to install additional modules written by other Python users.\nThe Python Language Reference: A detailed explanation of Python\u2019s syntax and semantics. It\u2019s heavy reading, but is useful as a complete guide to the language itself.\nMore Python resources:\nhttps://www.python.org: The major Python website. It contains code, documentation, and pointers to Python-related pages around the web.\nhttps://docs.python.org: Fast access to Python\u2019s documentation.\nhttps://pypi.org: The Python Package Index, previously also nicknamed the Cheese Shop [1], is an index of user-created Python modules that are available for download. Once you begin releasing code, you can register it here so that others can find it.\nhttps://code.activestate.com/recipes/langs/python/: The Python Cookbook is a sizable collection of code examples, larger modules, and useful scripts. Particularly notable contributions are collected in a book also titled Python Cookbook (O\u2019Reilly & Associates, ISBN 0-596-00797-3.)\nhttps://pyvideo.org collects links to Python-related videos from conferences and user-group meetings.\nhttps://scipy.org: The Scientific Python project includes modules for fast array computations and manipulations plus a host of packages for such things as linear algebra, Fourier transforms, non-linear solvers, random number distributions, statistical analysis and the like.\nFor Python-related questions and problem reports, you can post to the newsgroup comp.lang.python, or send them to the mailing list at python-list@python.org. The newsgroup and mailing list are gatewayed, so messages posted to one will automatically be forwarded to the other. 
There are hundreds of postings a day, asking (and answering) questions, suggesting new features, and announcing new modules. Mailing list archives are available at https://mail.python.org/pipermail/.\nBefore posting, be sure to check the list of Frequently Asked Questions (also called the FAQ). The FAQ answers many of the questions that come up again and again, and may already contain the solution for your problem.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 716} +{"url": "https://docs.python.org/3/tutorial/venv.html", "title": "Virtual Environments and Packages", "content": "12. Virtual Environments and Packages\u00b6\n12.1. Introduction\u00b6\nPython applications will often use packages and modules that don\u2019t come as part of the standard library. Applications will sometimes need a specific version of a library, because the application may require that a particular bug has been fixed or the application may be written using an obsolete version of the library\u2019s interface.\nThis means it may not be possible for one Python installation to meet the requirements of every application. If application A needs version 1.0 of a particular module but application B needs version 2.0, then the requirements are in conflict and installing either version 1.0 or 2.0 will leave one application unable to run.\nThe solution for this problem is to create a virtual environment, a self-contained directory tree that contains a Python installation for a particular version of Python, plus a number of additional packages.\nDifferent applications can then use different virtual environments. To resolve the earlier example of conflicting requirements, application A can have its own virtual environment with version 1.0 installed while application B has another virtual environment with version 2.0. If application B requires a library be upgraded to version 3.0, this will not affect application A\u2019s environment.\n12.2. 
Creating Virtual Environments\u00b6\nThe module used to create and manage virtual environments is called\nvenv\n. venv\nwill install the Python version from which\nthe command was run (as reported by the --version\noption).\nFor instance, executing the command with python3.12\nwill install\nversion 3.12.\nTo create a virtual environment, decide upon a directory where you want to\nplace it, and run the venv\nmodule as a script with the directory path:\npython -m venv tutorial-env\nThis will create the tutorial-env\ndirectory if it doesn\u2019t exist,\nand also create directories inside it containing a copy of the Python\ninterpreter and various supporting files.\nA common directory location for a virtual environment is .venv\n.\nThis name keeps the directory typically hidden in your shell and thus\nout of the way while giving it a name that explains why the directory\nexists. It also prevents clashing with .env\nenvironment variable\ndefinition files that some tooling supports.\nOnce you\u2019ve created a virtual environment, you may activate it.\nOn Windows, run:\ntutorial-env\\Scripts\\activate\nOn Unix or MacOS, run:\nsource tutorial-env/bin/activate\n(This script is written for the bash shell. If you use the\ncsh or fish shells, there are alternate\nactivate.csh\nand activate.fish\nscripts you should use\ninstead.)\nActivating the virtual environment will change your shell\u2019s prompt to show what\nvirtual environment you\u2019re using, and modify the environment so that running\npython\nwill get you that particular version and installation of Python.\nFor example:\n$ source ~/envs/tutorial-env/bin/activate\n(tutorial-env) $ python\nPython 3.5.1 (default, May 6 2016, 10:59:36)\n...\n>>> import sys\n>>> sys.path\n['', '/usr/local/lib/python35.zip', ...,\n'~/envs/tutorial-env/lib/python3.5/site-packages']\n>>>\nTo deactivate a virtual environment, type:\ndeactivate\ninto the terminal.\n12.3. 
Managing Packages with pip\u00b6\nYou can install, upgrade, and remove packages using a program called\npip. By default pip\nwill install packages from the Python\nPackage Index. You can browse the Python\nPackage Index by going to it in your web browser.\npip\nhas a number of subcommands: \u201cinstall\u201d, \u201cuninstall\u201d,\n\u201cfreeze\u201d, etc. (Consult the Installing Python Modules guide for\ncomplete documentation for pip\n.)\nYou can install the latest version of a package by specifying a package\u2019s name:\n(tutorial-env) $ python -m pip install novas\nCollecting novas\nDownloading novas-3.1.1.3.tar.gz (136kB)\nInstalling collected packages: novas\nRunning setup.py install for novas\nSuccessfully installed novas-3.1.1.3\nYou can also install a specific version of a package by giving the\npackage name followed by ==\nand the version number:\n(tutorial-env) $ python -m pip install requests==2.6.0\nCollecting requests==2.6.0\nUsing cached requests-2.6.0-py2.py3-none-any.whl\nInstalling collected packages: requests\nSuccessfully installed requests-2.6.0\nIf you re-run this command, pip\nwill notice that the requested\nversion is already installed and do nothing. 
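From Python code, you can check what is already installed without shelling out to pip; the standard-library `importlib.metadata` module exposes installed-distribution metadata. The helper below is a sketch, and its name is an illustration, not a pip API:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(name):
    """Return the installed version of a distribution, or None.

    A small sketch; the helper name is illustrative, not a pip API.
    """
    try:
        return version(name)
    except PackageNotFoundError:
        return None

# e.g. a version string like '23.2.1' if pip is installed, else None
print(installed_version("pip"))
```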
You can supply a\ndifferent version number to get that version, or you can run python\n-m pip install --upgrade\nto upgrade the package to the latest version:\n(tutorial-env) $ python -m pip install --upgrade requests\nCollecting requests\nInstalling collected packages: requests\nFound existing installation: requests 2.6.0\nUninstalling requests-2.6.0:\nSuccessfully uninstalled requests-2.6.0\nSuccessfully installed requests-2.7.0\npython -m pip uninstall\nfollowed by one or more package names will\nremove the packages from the virtual environment.\npython -m pip show\nwill display information about a particular package:\n(tutorial-env) $ python -m pip show requests\n---\nMetadata-Version: 2.0\nName: requests\nVersion: 2.7.0\nSummary: Python HTTP for Humans.\nHome-page: http://python-requests.org\nAuthor: Kenneth Reitz\nAuthor-email: me@kennethreitz.com\nLicense: Apache 2.0\nLocation: /Users/akuchling/envs/tutorial-env/lib/python3.4/site-packages\nRequires:\npython -m pip list\nwill display all of the packages installed in\nthe virtual environment:\n(tutorial-env) $ python -m pip list\nnovas (3.1.1.3)\nnumpy (1.9.2)\npip (7.0.3)\nrequests (2.7.0)\nsetuptools (16.0)\npython -m pip freeze\nwill produce a similar list of the installed packages,\nbut the output uses the format that python -m pip install\nexpects.\nA common convention is to put this list in a requirements.txt\nfile:\n(tutorial-env) $ python -m pip freeze > requirements.txt\n(tutorial-env) $ cat requirements.txt\nnovas==3.1.1.3\nnumpy==1.9.2\nrequests==2.7.0\nThe requirements.txt\ncan then be committed to version control and\nshipped as part of an application. 
Users can then install all the\nnecessary packages with install -r\n:\n(tutorial-env) $ python -m pip install -r requirements.txt\nCollecting novas==3.1.1.3 (from -r requirements.txt (line 1))\n...\nCollecting numpy==1.9.2 (from -r requirements.txt (line 2))\n...\nCollecting requests==2.7.0 (from -r requirements.txt (line 3))\n...\nInstalling collected packages: novas, numpy, requests\nRunning setup.py install for novas\nSuccessfully installed novas-3.1.1.3 numpy-1.9.2 requests-2.7.0\npip\nhas many more options. Consult the Installing Python Modules\nguide for complete documentation for pip\n. When you\u2019ve written\na package and want to make it available on the Python Package Index,\nconsult the Python packaging user guide.", "code_snippets": [" ", " ", " ", "\n", "\\", "\\", "\n", " ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 1653} +{"url": "https://docs.python.org/3/tutorial/stdlib2.html", "title": "Brief Tour of the Standard Library \u2014 Part II", "content": "11. Brief Tour of the Standard Library \u2014 Part II\u00b6\nThis second tour covers more advanced modules that support professional programming needs. These modules rarely occur in small scripts.\n11.1. Output Formatting\u00b6\nThe reprlib\nmodule provides a version of repr()\ncustomized for\nabbreviated displays of large or deeply nested containers:\n>>> import reprlib\n>>> reprlib.repr(set('supercalifragilisticexpialidocious'))\n\"{'a', 'c', 'd', 'e', 'f', 'g', ...}\"\nThe pprint\nmodule offers more sophisticated control over printing both\nbuilt-in and user defined objects in a way that is readable by the interpreter.\nWhen the result is longer than one line, the \u201cpretty printer\u201d adds line breaks\nand indentation to more clearly reveal data structure:\n>>> import pprint\n>>> t = [[[['black', 'cyan'], 'white', ['green', 'red']], [['magenta',\n... 
'yellow'], 'blue']]]\n...\n>>> pprint.pprint(t, width=30)\n[[[['black', 'cyan'],\n'white',\n['green', 'red']],\n[['magenta', 'yellow'],\n'blue']]]\nThe textwrap\nmodule formats paragraphs of text to fit a given screen\nwidth:\n>>> import textwrap\n>>> doc = \"\"\"The wrap() method is just like fill() except that it returns\n... a list of strings instead of one big string with newlines to separate\n... the wrapped lines.\"\"\"\n...\n>>> print(textwrap.fill(doc, width=40))\nThe wrap() method is just like fill()\nexcept that it returns a list of strings\ninstead of one big string with newlines\nto separate the wrapped lines.\nThe locale\nmodule accesses a database of culture specific data formats.\nThe grouping attribute of locale\u2019s format function provides a direct way of\nformatting numbers with group separators:\n>>> import locale\n>>> locale.setlocale(locale.LC_ALL, 'English_United States.1252')\n'English_United States.1252'\n>>> conv = locale.localeconv() # get a mapping of conventions\n>>> x = 1234567.8\n>>> locale.format_string(\"%d\", x, grouping=True)\n'1,234,567'\n>>> locale.format_string(\"%s%.*f\", (conv['currency_symbol'],\n... conv['frac_digits'], x), grouping=True)\n'$1,234,567.80'\n11.2. Templating\u00b6\nThe string\nmodule includes a versatile Template\nclass\nwith a simplified syntax suitable for editing by end-users. This allows users\nto customize their applications without having to alter the application.\nThe format uses placeholder names formed by $\nwith valid Python identifiers\n(alphanumeric characters and underscores). Surrounding the placeholder with\nbraces allows it to be followed by more alphanumeric letters with no intervening\nspaces. 
Writing $$\ncreates a single escaped $\n:\n>>> from string import Template\n>>> t = Template('${village}folk send $$10 to $cause.')\n>>> t.substitute(village='Nottingham', cause='the ditch fund')\n'Nottinghamfolk send $10 to the ditch fund.'\nThe substitute()\nmethod raises a KeyError\nwhen a\nplaceholder is not supplied in a dictionary or a keyword argument. For\nmail-merge style applications, user supplied data may be incomplete and the\nsafe_substitute()\nmethod may be more appropriate \u2014\nit will leave placeholders unchanged if data is missing:\n>>> t = Template('Return the $item to $owner.')\n>>> d = dict(item='unladen swallow')\n>>> t.substitute(d)\nTraceback (most recent call last):\n...\nKeyError: 'owner'\n>>> t.safe_substitute(d)\n'Return the unladen swallow to $owner.'\nTemplate subclasses can specify a custom delimiter. For example, a batch renaming utility for a photo browser may elect to use percent signs for placeholders such as the current date, image sequence number, or file format:\n>>> import time, os.path\n>>> photofiles = ['img_1074.jpg', 'img_1076.jpg', 'img_1077.jpg']\n>>> class BatchRename(Template):\n... delimiter = '%'\n...\n>>> fmt = input('Enter rename style (%d-date %n-seqnum %f-format): ')\nEnter rename style (%d-date %n-seqnum %f-format): Ashley_%n%f\n>>> t = BatchRename(fmt)\n>>> date = time.strftime('%d%b%y')\n>>> for i, filename in enumerate(photofiles):\n... base, ext = os.path.splitext(filename)\n... newname = t.substitute(d=date, n=i, f=ext)\n... print('{0} --> {1}'.format(filename, newname))\nimg_1074.jpg --> Ashley_0.jpg\nimg_1076.jpg --> Ashley_1.jpg\nimg_1077.jpg --> Ashley_2.jpg\nAnother application for templating is separating program logic from the details of multiple output formats. This makes it possible to substitute custom templates for XML files, plain text reports, and HTML web reports.\n11.3. 
Working with Binary Data Record Layouts\u00b6\nThe struct module provides pack() and unpack() functions for working with variable length binary record formats. The following example shows how to loop through header information in a ZIP file without using the zipfile module. Pack codes \"H\" and \"I\" represent two and four byte unsigned numbers respectively. The \"<\" indicates that they are standard size and in little-endian byte order:\nimport struct\n\nwith open('myfile.zip', 'rb') as f:\n    data = f.read()\n\nstart = 0\nfor i in range(3):                      # show the first 3 file headers\n    start += 14\n    fields = struct.unpack('<IIIHH', data[start:start+16])\n    crc32, comp_size, uncomp_size, filename_size, extra_size = fields\n\n    start += 16\n    filename = data[start:start+filename_size]\n    start += filename_size\n    extra = data[start:start+extra_size]\n    print(filename, hex(crc32), comp_size, uncomp_size)\n\n    start += extra_size + comp_size     # skip to the next header\n11.6. Weak References\u00b6\nPython does automatic memory management (reference counting for most objects and garbage collection to eliminate cycles). The memory is freed shortly after the last reference to it has been eliminated.\nThis approach works fine for most applications but occasionally there is a need to track objects only as long as they are being used by something else. Unfortunately, just tracking them creates a reference that makes them permanent. The weakref module provides tools for tracking objects without creating a reference. When the object is no longer needed, it is automatically removed from a weakref table and a callback is triggered for weakref objects. Typical applications include caching objects that are expensive to create:\n>>> import weakref, gc\n>>> class A:\n...     def __init__(self, value):\n...         self.value = value\n...     def __repr__(self):\n...         return str(self.value)\n...\n>>> a = A(10)                   # create a reference\n>>> d = weakref.WeakValueDictionary()\n>>> d['primary'] = a            # does not create a reference\n>>> d['primary']                # fetch the object if it is still alive\n10\n>>> del a                       # remove the one reference\n>>> gc.collect()                # run garbage collection right away\n0\n>>> d['primary']                # entry was automatically removed\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\n    d['primary']                # entry was automatically removed\n  File \"C:/python314/lib/weakref.py\", line 46, in __getitem__\n    o = self.data[key]()\nKeyError: 'primary'\n11.7. Tools for Working with Lists\u00b6\nMany data structure needs can be met with the built-in list type. However, sometimes there is a need for alternative implementations with different performance trade-offs.\nThe array module provides an array object that is like a list that stores only homogeneous data and stores it more compactly. 
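The compact storage mentioned above can be checked with array.itemsize, which reports the bytes used per element. A minimal sketch, with the caveat that typecode sizes are platform-dependent minimums ('H' maps to C unsigned short, guaranteed only to be at least two bytes):

```python
from array import array

a = array('H', [4000, 10, 700, 22222])
# itemsize is the number of bytes each element occupies in the buffer
print(a.itemsize)
print(a.itemsize * len(a))   # total size of the compact buffer
print(sum(a))                # arrays support the usual sequence operations
```

On CPython this typically reports 2 bytes per element, far less than a list of separate int objects would need.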
The following example shows an array of numbers stored as two byte unsigned binary numbers (typecode \"H\") rather than the usual 16 bytes per entry for regular lists of Python int objects:\n>>> from array import array\n>>> a = array('H', [4000, 10, 700, 22222])\n>>> sum(a)\n26932\n>>> a[1:3]\narray('H', [10, 700])\nThe collections module provides a deque object that is like a list with faster appends and pops from the left side but slower lookups in the middle. These objects are well suited for implementing queues and breadth first tree searches:\n>>> from collections import deque\n>>> d = deque([\"task1\", \"task2\", \"task3\"])\n>>> d.append(\"task4\")\n>>> print(\"Handling\", d.popleft())\nHandling task1\nunsearched = deque([starting_node])\n\ndef breadth_first_search(unsearched):\n    node = unsearched.popleft()\n    for m in gen_moves(node):\n        if is_goal(m):\n            return m\n        unsearched.append(m)\nIn addition to alternative list implementations, the library also offers other tools such as the bisect module with functions for manipulating sorted lists:\n>>> import bisect\n>>> scores = [(100, 'perl'), (200, 'tcl'), (400, 'lua'), (500, 'python')]\n>>> bisect.insort(scores, (300, 'ruby'))\n>>> scores\n[(100, 'perl'), (200, 'tcl'), (300, 'ruby'), (400, 'lua'), (500, 'python')]\nThe heapq module provides functions for implementing heaps based on regular lists. The lowest valued entry is always kept at position zero. This is useful for applications which repeatedly access the smallest element but do not want to run a full list sort:\n>>> from heapq import heapify, heappop, heappush\n>>> data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]\n>>> heapify(data)                      # rearrange the list into heap order\n>>> heappush(data, -5)                 # add a new entry\n>>> [heappop(data) for i in range(3)]  # fetch the three smallest entries\n[-5, 0, 1]\n11.8. Decimal Floating-Point Arithmetic\u00b6\nThe decimal module offers a Decimal datatype for decimal floating-point arithmetic. 
Compared to the built-in float\nimplementation of binary floating point, the class is especially helpful for\nfinancial applications and other uses which require exact decimal representation,\ncontrol over precision,\ncontrol over rounding to meet legal or regulatory requirements,\ntracking of significant decimal places, or\napplications where the user expects the results to match calculations done by hand.\nFor example, calculating a 5% tax on a 70 cent phone charge gives different results in decimal floating point and binary floating point. The difference becomes significant if the results are rounded to the nearest cent:\n>>> from decimal import *\n>>> round(Decimal('0.70') * Decimal('1.05'), 2)\nDecimal('0.74')\n>>> round(.70 * 1.05, 2)\n0.73\nThe Decimal\nresult keeps a trailing zero, automatically\ninferring four place significance from multiplicands with two place\nsignificance. Decimal reproduces mathematics as done by hand and avoids\nissues that can arise when binary floating point cannot exactly represent\ndecimal quantities.\nExact representation enables the Decimal\nclass to perform\nmodulo calculations and equality tests that are unsuitable for binary floating\npoint:\n>>> Decimal('1.00') % Decimal('.10')\nDecimal('0.00')\n>>> 1.00 % 0.10\n0.09999999999999995\n>>> sum([Decimal('0.1')]*10) == Decimal('1.0')\nTrue\n>>> 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 == 1.0\nFalse\nThe decimal\nmodule provides arithmetic with as much precision as needed:\n>>> getcontext().prec = 36\n>>> Decimal(1) / Decimal(7)\nDecimal('0.142857142857142857142857142857142857')", "code_snippets": ["\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", "\n", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", "\n", " ", "\n", " ", " ", " ", "\n", "\n", " ", "\n", " ", " ", "\n", " ", "\n", "\n", " ", 
" ", "\n", " ", " ", "\n", "\n", "\n", "\n", ": ", "\n", "\n", "\n", "\n", " ", " ", " ", " ", "\n", "\n", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n\n", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", "\n\n", "\n", "\n", "\n", "\n\n", " ", " ", " ", " ", "\n ", " ", " ", "\n\n", " ", " ", "\n", " ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", "\n\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n\n ", " ", " ", " ", " ", " ", "\n", "\n\n", "\n ", " ", " ", "\n ", "\n ", " ", " ", "\n ", " ", " ", "\n\n ", "\n ", " ", " ", " ", " ", "\n ", "\n ", "\n ", " ", "\n\n", " ", " ", " ", "\n", "\n", "\n\n", " ", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", " ", " ", " ", "\n", " ", "\n", " ", " ", "\n", "\n", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", "\n", " ", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", " ", "\n", "\n File ", ", line ", ", in ", "\n", " ", "\n File ", ", line ", ", in ", "\n", " ", " ", "\n", ": ", "\n", " ", "\n", " ", " ", " ", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", "\n", " ", " ", " ", " ", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", "\n ", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", "\n", "\n", "\n", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", "\n", " ", "\n", " ", " ", " ", "\n", "\n", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n\n", " ", " ", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", "\n", " ", " ", "\n", " ", " ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 3436} +{"url": 
"https://docs.python.org/3/tutorial/stdlib.html", "title": "Brief Tour of the Standard Library", "content": "10. Brief Tour of the Standard Library\u00b6\n10.1. Operating System Interface\u00b6\nThe os\nmodule provides dozens of functions for interacting with the\noperating system:\n>>> import os\n>>> os.getcwd() # Return the current working directory\n'C:\\\\Python314'\n>>> os.chdir('/server/accesslogs') # Change current working directory\n>>> os.system('mkdir today') # Run the command mkdir in the system shell\n0\nBe sure to use the import os\nstyle instead of from os import *\n. This\nwill keep os.open()\nfrom shadowing the built-in open()\nfunction which\noperates much differently.\nThe built-in dir()\nand help()\nfunctions are useful as interactive\naids for working with large modules like os\n:\n>>> import os\n>>> dir(os)\n\n>>> help(os)\n\nFor daily file and directory management tasks, the shutil\nmodule provides\na higher level interface that is easier to use:\n>>> import shutil\n>>> shutil.copyfile('data.db', 'archive.db')\n'archive.db'\n>>> shutil.move('/build/executables', 'installdir')\n'installdir'\n10.2. File Wildcards\u00b6\nThe glob\nmodule provides a function for making file lists from directory\nwildcard searches:\n>>> import glob\n>>> glob.glob('*.py')\n['primes.py', 'random.py', 'quote.py']\n10.3. Command Line Arguments\u00b6\nCommon utility scripts often need to process command line arguments. These\narguments are stored in the sys\nmodule\u2019s argv attribute as a list. For\ninstance, let\u2019s take the following demo.py\nfile:\n# File demo.py\nimport sys\nprint(sys.argv)\nHere is the output from running python demo.py one two three\nat the command\nline:\n['demo.py', 'one', 'two', 'three']\nThe argparse\nmodule provides a more sophisticated mechanism to process\ncommand line arguments. 
The following script extracts one or more filenames and an optional number of lines to be displayed:\nimport argparse\n\nparser = argparse.ArgumentParser(\n    prog='top',\n    description='Show top lines from each file')\nparser.add_argument('filenames', nargs='+')\nparser.add_argument('-l', '--lines', type=int, default=10)\nargs = parser.parse_args()\nprint(args)\nWhen run at the command line with python top.py --lines=5 alpha.txt beta.txt, the script sets args.lines to 5 and args.filenames to ['alpha.txt', 'beta.txt'].\n10.4. Error Output Redirection and Program Termination\u00b6\nThe sys module also has attributes for stdin, stdout, and stderr. The latter is useful for emitting warnings and error messages to make them visible even when stdout has been redirected:\n>>> sys.stderr.write('Warning, log file not found starting a new one\\n')\nWarning, log file not found starting a new one\nThe most direct way to terminate a script is to use sys.exit().\n10.5. String Pattern Matching\u00b6\nThe re module provides regular expression tools for advanced string processing. For complex matching and manipulation, regular expressions offer succinct, optimized solutions:\n>>> import re\n>>> re.findall(r'\\bf[a-z]*', 'which foot or hand fell fastest')\n['foot', 'fell', 'fastest']\n>>> re.sub(r'(\\b[a-z]+) \\1', r'\\1', 'cat in the the hat')\n'cat in the hat'\nWhen only simple capabilities are needed, string methods are preferred because they are easier to read and debug:\n>>> 'tea for too'.replace('too', 'two')\n'tea for two'\n10.6. 
Mathematics\u00b6\nThe math module gives access to the underlying C library functions for floating-point math:\n>>> import math\n>>> math.cos(math.pi / 4)\n0.70710678118654757\n>>> math.log(1024, 2)\n10.0\nThe random module provides tools for making random selections:\n>>> import random\n>>> random.choice(['apple', 'pear', 'banana'])\n'apple'\n>>> random.sample(range(100), 10)   # sampling without replacement\n[30, 83, 16, 4, 8, 81, 41, 50, 18, 33]\n>>> random.random()    # random float from the interval [0.0, 1.0)\n0.17970987693706186\n>>> random.randrange(6)    # random integer chosen from range(6)\n4\nThe statistics module calculates basic statistical properties (the mean, median, variance, etc.) of numeric data:\n>>> import statistics\n>>> data = [2.75, 1.75, 1.25, 0.25, 0.5, 1.25, 3.5]\n>>> statistics.mean(data)\n1.6071428571428572\n>>> statistics.median(data)\n1.25\n>>> statistics.variance(data)\n1.3720238095238095\nThe SciPy project has many other modules for numerical computations.\n10.7. Internet Access\u00b6\nThere are a number of modules for accessing the internet and processing internet protocols. Two of the simplest are urllib.request for retrieving data from URLs and smtplib for sending mail:\n>>> from urllib.request import urlopen\n>>> with urlopen('https://docs.python.org/3/') as response:\n...     for line in response:\n...         line = line.decode()             # Convert bytes to a str\n...         if 'updated' in line:\n...             print(line.rstrip())         # Remove trailing newline\n...\nLast updated on Nov 11, 2025 (20:11 UTC).\n>>> import smtplib\n>>> server = smtplib.SMTP('localhost')\n>>> server.sendmail('soothsayer@example.org', 'jcaesar@example.org',\n... \"\"\"To: jcaesar@example.org\n... From: soothsayer@example.org\n...\n... Beware the Ides of March.\n... \"\"\")\n>>> server.quit()\n(Note that the second example needs a mailserver running on localhost.)\n10.8. 
Dates and Times\u00b6\nThe datetime\nmodule supplies classes for manipulating dates and times in\nboth simple and complex ways. While date and time arithmetic is supported, the\nfocus of the implementation is on efficient member extraction for output\nformatting and manipulation. The module also supports objects that are timezone\naware.\n>>> # dates are easily constructed and formatted\n>>> from datetime import date\n>>> now = date.today()\n>>> now\ndatetime.date(2003, 12, 2)\n>>> now.strftime(\"%m-%d-%y. %d %b %Y is a %A on the %d day of %B.\")\n'12-02-03. 02 Dec 2003 is a Tuesday on the 02 day of December.'\n>>> # dates support calendar arithmetic\n>>> birthday = date(1964, 7, 31)\n>>> age = now - birthday\n>>> age.days\n14368\n10.9. Data Compression\u00b6\nCommon data archiving and compression formats are directly supported by modules\nincluding: zlib\n, gzip\n, bz2\n, lzma\n, zipfile\nand\ntarfile\n.\n>>> import zlib\n>>> s = b'witch which has which witches wrist watch'\n>>> len(s)\n41\n>>> t = zlib.compress(s)\n>>> len(t)\n37\n>>> zlib.decompress(t)\nb'witch which has which witches wrist watch'\n>>> zlib.crc32(s)\n226805979\n10.10. Performance Measurement\u00b6\nSome Python users develop a deep interest in knowing the relative performance of different approaches to the same problem. Python provides a measurement tool that answers those questions immediately.\nFor example, it may be tempting to use the tuple packing and unpacking feature\ninstead of the traditional approach to swapping arguments. The timeit\nmodule quickly demonstrates a modest performance advantage:\n>>> from timeit import Timer\n>>> Timer('t=a; a=b; b=t', 'a=1; b=2').timeit()\n0.57535828626024577\n>>> Timer('a,b = b,a', 'a=1; b=2').timeit()\n0.54962537085770791\nIn contrast to timeit\n\u2019s fine level of granularity, the profile\nand\npstats\nmodules provide tools for identifying time critical sections in\nlarger blocks of code.\n10.11. 
Quality Control\u00b6\nOne approach for developing high quality software is to write tests for each function as it is developed and to run those tests frequently during the development process.\nThe doctest module provides a tool for scanning a module and validating tests embedded in a program\u2019s docstrings. Test construction is as simple as cutting-and-pasting a typical call along with its results into the docstring. This improves the documentation by providing the user with an example and it allows the doctest module to make sure the code remains true to the documentation:\ndef average(values):\n    \"\"\"Computes the arithmetic mean of a list of numbers.\n\n    >>> print(average([20, 30, 70]))\n    40.0\n    \"\"\"\n    return sum(values) / len(values)\n\nimport doctest\ndoctest.testmod()   # automatically validate the embedded tests\nThe unittest module is not as effortless as the doctest module, but it allows a more comprehensive set of tests to be maintained in a separate file:\nimport unittest\n\nclass TestStatisticalFunctions(unittest.TestCase):\n\n    def test_average(self):\n        self.assertEqual(average([20, 30, 70]), 40.0)\n        self.assertEqual(round(average([1, 5, 7]), 1), 4.3)\n        with self.assertRaises(ZeroDivisionError):\n            average([])\n        with self.assertRaises(TypeError):\n            average(20, 30, 70)\n\nunittest.main()  # Calling from the command line invokes all tests\n10.12. Batteries Included\u00b6\nPython has a \u201cbatteries included\u201d philosophy. This is best seen through the sophisticated and robust capabilities of its larger packages. For example:\nThe xmlrpc.client and xmlrpc.server modules make implementing remote procedure calls into an almost trivial task. Despite the modules\u2019 names, no direct knowledge or handling of XML is needed.\nThe email package is a library for managing email messages, including MIME and other RFC 5322-based message documents. 
Unlike smtplib and poplib which actually send and receive messages, the email package has a complete toolset for building or decoding complex message structures (including attachments) and for implementing internet encoding and header protocols.\nThe json package provides robust support for parsing this popular data interchange format. The csv module supports direct reading and writing of files in Comma-Separated Value format, commonly supported by databases and spreadsheets. XML processing is supported by the xml.etree.ElementTree, xml.dom and xml.sax packages. Together, these modules and packages greatly simplify data interchange between Python applications and other tools.\nThe sqlite3 module is a wrapper for the SQLite database library, providing a persistent database that can be updated and accessed using slightly nonstandard SQL syntax.\nInternationalization is supported by a number of modules including gettext, locale, and the codecs package.", "code_snippets": [
" ", "\n", "\n", "\n", "\n\n", "\n", "\n", "\n ", " ", " ", " ", "\n\n", "\n", " ", "\n", "\n\n", "\n\n ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", " ", "\n ", "\n ", " ", "\n ", " ", " ", "\n\n", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 2426} +{"url": "https://docs.python.org/3/tutorial/classes.html", "title": "Classes", "content": "9. Classes\u00b6\nClasses provide a means of bundling data and functionality together. Creating a new class creates a new type of object, allowing new instances of that type to be made. Each class instance can have attributes attached to it for maintaining its state. Class instances can also have methods (defined by its class) for modifying its state.\nCompared with other programming languages, Python\u2019s class mechanism adds classes with a minimum of new syntax and semantics. It is a mixture of the class mechanisms found in C++ and Modula-3. Python classes provide all the standard features of Object Oriented Programming: the class inheritance mechanism allows multiple base classes, a derived class can override any methods of its base class or classes, and a method can call the method of a base class with the same name. Objects can contain arbitrary amounts and kinds of data. As is true for modules, classes partake of the dynamic nature of Python: they are created at runtime, and can be modified further after creation.\nIn C++ terminology, normally class members (including the data members) are public (except see below Private Variables), and all member functions are virtual. As in Modula-3, there are no shorthands for referencing the object\u2019s members from its methods: the method function is declared with an explicit first argument representing the object, which is provided implicitly by the call. As in Smalltalk, classes themselves are objects. This provides semantics for importing and renaming. 
Unlike C++ and Modula-3, built-in types can be used as base classes for extension by the user. Also, like in C++, most built-in operators with special syntax (arithmetic operators, subscripting etc.) can be redefined for class instances.\n(Lacking universally accepted terminology to talk about classes, I will make occasional use of Smalltalk and C++ terms. I would use Modula-3 terms, since its object-oriented semantics are closer to those of Python than C++, but I expect that few readers have heard of it.)\n9.1. A Word About Names and Objects\u00b6\nObjects have individuality, and multiple names (in multiple scopes) can be bound to the same object. This is known as aliasing in other languages. This is usually not appreciated on a first glance at Python, and can be safely ignored when dealing with immutable basic types (numbers, strings, tuples). However, aliasing has a possibly surprising effect on the semantics of Python code involving mutable objects such as lists, dictionaries, and most other types. This is usually used to the benefit of the program, since aliases behave like pointers in some respects. For example, passing an object is cheap since only a pointer is passed by the implementation; and if a function modifies an object passed as an argument, the caller will see the change \u2014 this eliminates the need for two different argument passing mechanisms as in Pascal.\n9.2. Python Scopes and Namespaces\u00b6\nBefore introducing classes, I first have to tell you something about Python\u2019s scope rules. Class definitions play some neat tricks with namespaces, and you need to know how scopes and namespaces work to fully understand what\u2019s going on. Incidentally, knowledge about this subject is useful for any advanced Python programmer.\nLet\u2019s begin with some definitions.\nA namespace is a mapping from names to objects. 
Most namespaces are currently\nimplemented as Python dictionaries, but that\u2019s normally not noticeable in any\nway (except for performance), and it may change in the future. Examples of\nnamespaces are: the set of built-in names (containing functions such as abs()\n, and\nbuilt-in exception names); the global names in a module; and the local names in\na function invocation. In a sense the set of attributes of an object also form\na namespace. The important thing to know about namespaces is that there is\nabsolutely no relation between names in different namespaces; for instance, two\ndifferent modules may both define a function maximize\nwithout confusion \u2014\nusers of the modules must prefix it with the module name.\nBy the way, I use the word attribute for any name following a dot \u2014 for\nexample, in the expression z.real\n, real\nis an attribute of the object\nz\n. Strictly speaking, references to names in modules are attribute\nreferences: in the expression modname.funcname\n, modname\nis a module\nobject and funcname\nis an attribute of it. In this case there happens to be\na straightforward mapping between the module\u2019s attributes and the global names\ndefined in the module: they share the same namespace! [1]\nAttributes may be read-only or writable. In the latter case, assignment to\nattributes is possible. Module attributes are writable: you can write\nmodname.the_answer = 42\n. Writable attributes may also be deleted with the\ndel\nstatement. For example, del modname.the_answer\nwill remove\nthe attribute the_answer\nfrom the object named by modname\n.\nNamespaces are created at different moments and have different lifetimes. The\nnamespace containing the built-in names is created when the Python interpreter\nstarts up, and is never deleted. The global namespace for a module is created\nwhen the module definition is read in; normally, module namespaces also last\nuntil the interpreter quits. 
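The writable module attributes described above (modname.the_answer = 42, then del) can be tried without touching a real module by creating a throwaway module object; types.ModuleType and the name demo here are just stand-ins for illustration:

```python
import types

mod = types.ModuleType("demo")   # a fresh, empty module object
mod.the_answer = 42              # module attributes are writable
print(mod.the_answer)
del mod.the_answer               # ...and removable with the del statement
print(hasattr(mod, "the_answer"))
```

The same assignments work on any imported module; the synthetic module just keeps the experiment self-contained.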
The statements executed by the top-level\ninvocation of the interpreter, either read from a script file or interactively,\nare considered part of a module called __main__\n, so they have their own\nglobal namespace. (The built-in names actually also live in a module; this is\ncalled builtins\n.)\nThe local namespace for a function is created when the function is called, and deleted when the function returns or raises an exception that is not handled within the function. (Actually, forgetting would be a better way to describe what actually happens.) Of course, recursive invocations each have their own local namespace.\nA scope is a textual region of a Python program where a namespace is directly accessible. \u201cDirectly accessible\u201d here means that an unqualified reference to a name attempts to find the name in the namespace.\nAlthough scopes are determined statically, they are used dynamically. At any time during execution, there are 3 or 4 nested scopes whose namespaces are directly accessible:\nthe innermost scope, which is searched first, contains the local names\nthe scopes of any enclosing functions, which are searched starting with the nearest enclosing scope, contain non-local, but also non-global names\nthe next-to-last scope contains the current module\u2019s global names\nthe outermost scope (searched last) is the namespace containing built-in names\nIf a name is declared global, then all references and assignments go directly to\nthe next-to-last scope containing the module\u2019s global names. To rebind variables\nfound outside of the innermost scope, the nonlocal\nstatement can be\nused; if not declared nonlocal, those variables are read-only (an attempt to\nwrite to such a variable will simply create a new local variable in the\ninnermost scope, leaving the identically named outer variable unchanged).\nUsually, the local scope references the local names of the (textually) current function. 
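A minimal sketch of the pitfall noted above: without a global or nonlocal declaration, an assignment creates a new local variable and leaves the identically named outer variable unchanged (the names x and shadow are invented for illustration):

```python
x = "outer"

def shadow():
    x = "inner"   # no global/nonlocal declaration: binds a new local name
    return x

result = shadow()
print(result)   # the function saw its own local binding
print(x)        # the module-level binding was untouched
```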
Outside functions, the local scope references the same namespace as the global scope: the module\u2019s namespace. Class definitions place yet another namespace in the local scope.\nIt is important to realize that scopes are determined textually: the global scope of a function defined in a module is that module\u2019s namespace, no matter from where or by what alias the function is called. On the other hand, the actual search for names is done dynamically, at run time \u2014 however, the language definition is evolving towards static name resolution, at \u201ccompile\u201d time, so don\u2019t rely on dynamic name resolution! (In fact, local variables are already determined statically.)\nA special quirk of Python is that \u2013 if no global\nor nonlocal\nstatement is in effect \u2013 assignments to names always go into the innermost scope.\nAssignments do not copy data \u2014 they just bind names to objects. The same is true\nfor deletions: the statement del x\nremoves the binding of x\nfrom the\nnamespace referenced by the local scope. In fact, all operations that introduce\nnew names use the local scope: in particular, import\nstatements and\nfunction definitions bind the module or function name in the local scope.\nThe global\nstatement can be used to indicate that particular\nvariables live in the global scope and should be rebound there; the\nnonlocal\nstatement indicates that particular variables live in\nan enclosing scope and should be rebound there.\n9.2.1. 
Scopes and Namespaces Example\u00b6\nThis is an example demonstrating how to reference the different scopes and namespaces, and how global and nonlocal affect variable binding:\ndef scope_test():\n    def do_local():\n        spam = \"local spam\"\n\n    def do_nonlocal():\n        nonlocal spam\n        spam = \"nonlocal spam\"\n\n    def do_global():\n        global spam\n        spam = \"global spam\"\n\n    spam = \"test spam\"\n    do_local()\n    print(\"After local assignment:\", spam)\n    do_nonlocal()\n    print(\"After nonlocal assignment:\", spam)\n    do_global()\n    print(\"After global assignment:\", spam)\n\nscope_test()\nprint(\"In global scope:\", spam)\nThe output of the example code is:\nAfter local assignment: test spam\nAfter nonlocal assignment: nonlocal spam\nAfter global assignment: nonlocal spam\nIn global scope: global spam\nNote how the local assignment (which is default) didn\u2019t change scope_test's binding of spam. The nonlocal assignment changed scope_test's binding of spam, and the global assignment changed the module-level binding.\nYou can also see that there was no previous binding for spam before the global assignment.\n9.3. A First Look at Classes\u00b6\nClasses introduce a little bit of new syntax, three new object types, and some new semantics.\n9.3.1. Class Definition Syntax\u00b6\nThe simplest form of class definition looks like this:\nclass ClassName:\n    <statement-1>\n    .\n    .\n    .\n    <statement-N>\nClass definitions, like function definitions (def statements) must be executed before they have any effect. (You could conceivably place a class definition in a branch of an if statement, or inside a function.)\nIn practice, the statements inside a class definition will usually be function definitions, but other statements are allowed, and sometimes useful \u2014 we\u2019ll come back to this later. 
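As a small, hedged sketch of that last point, ordinary statements in a class body are executed once when the class definition is entered, and the names they bind become class attributes (the class name Config is invented for illustration):

```python
class Config:
    # plain assignments and other statements run at class creation time;
    # the resulting names become attributes of the class object
    debug = True
    levels = [n * 10 for n in range(1, 4)]

print(Config.debug)    # True
print(Config.levels)   # [10, 20, 30]
```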
The function definitions inside a class normally have a peculiar form of argument list, dictated by the calling conventions for methods \u2014 again, this is explained later.\nWhen a class definition is entered, a new namespace is created, and used as the local scope \u2014 thus, all assignments to local variables go into this new namespace. In particular, function definitions bind the name of the new function here.\nWhen a class definition is left normally (via the end), a class object is created. This is basically a wrapper around the contents of the namespace created by the class definition; we\u2019ll learn more about class objects in the next section. The original local scope (the one in effect just before the class definition was entered) is reinstated, and the class object is bound here to the class name given in the class definition header (ClassName in the example).\n9.3.2. Class Objects\u00b6\nClass objects support two kinds of operations: attribute references and instantiation.\nAttribute references use the standard syntax used for all attribute references in Python: obj.name. Valid attribute names are all the names that were in the class\u2019s namespace when the class object was created. So, if the class definition looked like this:\nclass MyClass:\n    \"\"\"A simple example class\"\"\"\n    i = 12345\n\n    def f(self):\n        return 'hello world'\nthen MyClass.i and MyClass.f are valid attribute references, returning an integer and a function object, respectively. Class attributes can also be assigned to, so you can change the value of MyClass.i by assignment. __doc__ is also a valid attribute, returning the docstring belonging to the class: \"A simple example class\".\nClass instantiation uses function notation. Just pretend that the class object is a parameterless function that returns a new instance of the class. 
For example (assuming the above class):\nx = MyClass()\ncreates a new instance of the class and assigns this object to the local\nvariable x\n.\nThe instantiation operation (\u201ccalling\u201d a class object) creates an empty object.\nMany classes like to create objects with instances customized to a specific\ninitial state. Therefore a class may define a special method named\n__init__()\n, like this:\ndef __init__(self):\n    self.data = []\nWhen a class defines an __init__()\nmethod, class instantiation\nautomatically invokes __init__()\nfor the newly created class instance. So\nin this example, a new, initialized instance can be obtained by:\nx = MyClass()\nOf course, the __init__()\nmethod may have arguments for greater\nflexibility. In that case, arguments given to the class instantiation operator\nare passed on to __init__()\n. For example,\n>>> class Complex:\n...     def __init__(self, realpart, imagpart):\n...         self.r = realpart\n...         self.i = imagpart\n...\n>>> x = Complex(3.0, -4.5)\n>>> x.r, x.i\n(3.0, -4.5)\n9.3.3. Instance Objects\u00b6\nNow what can we do with instance objects? The only operations understood by instance objects are attribute references. There are two kinds of valid attribute names: data attributes and methods.\nData attributes correspond to \u201cinstance variables\u201d in Smalltalk, and to \u201cdata\nmembers\u201d in C++. Data attributes need not be declared; like local variables,\nthey spring into existence when they are first assigned to. For example, if\nx\nis the instance of MyClass\ncreated above, the following piece of\ncode will print the value 16\n, without leaving a trace:\nx.counter = 1\nwhile x.counter < 10:\n    x.counter = x.counter * 2\nprint(x.counter)\ndel x.counter\nThe other kind of instance attribute reference is a method. A method is a function that \u201cbelongs to\u201d an object.\nValid method names of an instance object depend on its class. 
By definition,\nall attributes of a class that are function objects define corresponding\nmethods of its instances. So in our example, x.f\nis a valid method\nreference, since MyClass.f\nis a function, but x.i\nis not, since\nMyClass.i\nis not. But x.f\nis not the same thing as MyClass.f\n\u2014 it\nis a method object, not a function object.\n9.3.4. Method Objects\u00b6\nUsually, a method is called right after it is bound:\nx.f()\nIf x = MyClass()\n, as above, this will return the string 'hello world'\n.\nHowever, it is not necessary to call a method right away: x.f\nis a method\nobject, and can be stored away and called at a later time. For example:\nxf = x.f\nwhile True:\n    print(xf())\nwill continue to print hello world\nuntil the end of time.\nWhat exactly happens when a method is called? You may have noticed that\nx.f()\nwas called without an argument above, even though the function\ndefinition for f()\nspecified an argument. What happened to the argument?\nSurely Python raises an exception when a function that requires an argument is\ncalled without any \u2014 even if the argument isn\u2019t actually used\u2026\nActually, you may have guessed the answer: the special thing about methods is\nthat the instance object is passed as the first argument of the function. In our\nexample, the call x.f()\nis exactly equivalent to MyClass.f(x)\n. In\ngeneral, calling a method with a list of n arguments is equivalent to calling\nthe corresponding function with an argument list that is created by inserting\nthe method\u2019s instance object before the first argument.\nIn general, methods work as follows. When a non-data attribute of an instance is referenced, the instance\u2019s class is searched. If the name denotes a valid class attribute that is a function object, references to both the instance object and the function object are packed into a method object. 
When the method object is called with an argument list, a new argument list is constructed from the instance object and the argument list, and the function object is called with this new argument list.\n9.3.5. Class and Instance Variables\u00b6\nGenerally speaking, instance variables are for data unique to each instance and class variables are for attributes and methods shared by all instances of the class:\nclass Dog:\n    kind = 'canine'    # class variable shared by all instances\n\n    def __init__(self, name):\n        self.name = name    # instance variable unique to each instance\n>>> d = Dog('Fido')\n>>> e = Dog('Buddy')\n>>> d.kind # shared by all dogs\n'canine'\n>>> e.kind # shared by all dogs\n'canine'\n>>> d.name # unique to d\n'Fido'\n>>> e.name # unique to e\n'Buddy'\nAs discussed in A Word About Names and Objects, shared data can have possibly surprising effects involving mutable objects such as lists and dictionaries. For example, the tricks list in the following code should not be used as a class variable because just a single list would be shared by all Dog instances:\nclass Dog:\n    tricks = []    # mistaken use of a class variable\n\n    def __init__(self, name):\n        self.name = name\n\n    def add_trick(self, trick):\n        self.tricks.append(trick)\n>>> d = Dog('Fido')\n>>> e = Dog('Buddy')\n>>> d.add_trick('roll over')\n>>> e.add_trick('play dead')\n>>> d.tricks # unexpectedly shared by all dogs\n['roll over', 'play dead']\nCorrect design of the class should use an instance variable instead:\nclass Dog:\n    def __init__(self, name):\n        self.name = name\n        self.tricks = []    # creates a new empty list for each dog\n\n    def add_trick(self, trick):\n        self.tricks.append(trick)\n>>> d = Dog('Fido')\n>>> e = Dog('Buddy')\n>>> d.add_trick('roll over')\n>>> e.add_trick('play dead')\n>>> d.tricks\n['roll over']\n>>> e.tricks\n['play dead']\n9.4. Random Remarks\u00b6\nIf the same attribute name occurs in both an instance and in a class, then attribute lookup prioritizes the instance:\n>>> class Warehouse:\n... 
purpose = 'storage'\n... region = 'west'\n...\n>>> w1 = Warehouse()\n>>> print(w1.purpose, w1.region)\nstorage west\n>>> w2 = Warehouse()\n>>> w2.region = 'east'\n>>> print(w2.purpose, w2.region)\nstorage east\nData attributes may be referenced by methods as well as by ordinary users (\u201cclients\u201d) of an object. In other words, classes are not usable to implement pure abstract data types. In fact, nothing in Python makes it possible to enforce data hiding \u2014 it is all based upon convention. (On the other hand, the Python implementation, written in C, can completely hide implementation details and control access to an object if necessary; this can be used by extensions to Python written in C.)\nClients should use data attributes with care \u2014 clients may mess up invariants maintained by the methods by stamping on their data attributes. Note that clients may add data attributes of their own to an instance object without affecting the validity of the methods, as long as name conflicts are avoided \u2014 again, a naming convention can save a lot of headaches here.\nThere is no shorthand for referencing data attributes (or other methods!) from within methods. I find that this actually increases the readability of methods: there is no chance of confusing local variables and instance variables when glancing through a method.\nOften, the first argument of a method is called self\n. This is nothing more\nthan a convention: the name self\nhas absolutely no special meaning to\nPython. Note, however, that by not following the convention your code may be\nless readable to other Python programmers, and it is also conceivable that a\nclass browser program might be written that relies upon such a convention.\nAny function object that is a class attribute defines a method for instances of that class. It is not necessary that the function definition is textually enclosed in the class definition: assigning a function object to a local variable in the class is also ok. 
For example:\n# Function defined outside the class\ndef f1(self, x, y):\n    return min(x, x+y)\n\nclass C:\n    f = f1\n\n    def g(self):\n        return 'hello world'\n\n    h = g\nNow f\n, g\nand h\nare all attributes of class C\nthat refer to\nfunction objects, and consequently they are all methods of instances of\nC\n\u2014 h\nbeing exactly equivalent to g\n. Note that this practice\nusually only serves to confuse the reader of a program.\nMethods may call other methods by using method attributes of the self\nargument:\nclass Bag:\n    def __init__(self):\n        self.data = []\n\n    def add(self, x):\n        self.data.append(x)\n\n    def addtwice(self, x):\n        self.add(x)\n        self.add(x)\nMethods may reference global names in the same way as ordinary functions. The global scope associated with a method is the module containing its definition. (A class is never used as a global scope.) While one rarely encounters a good reason for using global data in a method, there are many legitimate uses of the global scope: for one thing, functions and modules imported into the global scope can be used by methods, as well as functions and classes defined in it. Usually, the class containing the method is itself defined in this global scope, and in the next section we\u2019ll find some good reasons why a method would want to reference its own class.\nEach value is an object, and therefore has a class (also called its type).\nIt is stored as object.__class__\n.\n9.5. Inheritance\u00b6\nOf course, a language feature would not be worthy of the name \u201cclass\u201d without supporting inheritance. The syntax for a derived class definition looks like this:\nclass DerivedClassName(BaseClassName):\n    <statement-1>\n    .\n    .\n    .\n    <statement-N>\nThe name BaseClassName\nmust be defined in a\nnamespace accessible from the scope containing the\nderived class definition. In place of a base class name, other arbitrary\nexpressions are also allowed. 
This can be useful, for example, when the base\nclass is defined in another module:\nclass DerivedClassName(modname.BaseClassName):\nExecution of a derived class definition proceeds the same as for a base class. When the class object is constructed, the base class is remembered. This is used for resolving attribute references: if a requested attribute is not found in the class, the search proceeds to look in the base class. This rule is applied recursively if the base class itself is derived from some other class.\nThere\u2019s nothing special about instantiation of derived classes:\nDerivedClassName()\ncreates a new instance of the class. Method references\nare resolved as follows: the corresponding class attribute is searched,\ndescending down the chain of base classes if necessary, and the method reference\nis valid if this yields a function object.\nDerived classes may override methods of their base classes. Because methods\nhave no special privileges when calling other methods of the same object, a\nmethod of a base class that calls another method defined in the same base class\nmay end up calling a method of a derived class that overrides it. (For C++\nprogrammers: all methods in Python are effectively virtual\n.)\nAn overriding method in a derived class may in fact want to extend rather than\nsimply replace the base class method of the same name. There is a simple way to\ncall the base class method directly: just call BaseClassName.methodname(self,\narguments)\n. This is occasionally useful to clients as well. (Note that this\nonly works if the base class is accessible as BaseClassName\nin the global\nscope.)\nPython has two built-in functions that work with inheritance:\nUse isinstance()\nto check an instance\u2019s type: isinstance(obj, int)\nwill be True\nonly if obj.__class__\nis int\nor some class derived from int\n.\nUse issubclass()\nto check class inheritance: issubclass(bool, int)\nis True\nsince bool\nis a subclass of int\n. However, issubclass(float, int)\nis False\nsince float\nis not a subclass of int\n.\n9.5.1. Multiple Inheritance\u00b6\nPython supports a form of multiple inheritance as well. A class definition with multiple base classes looks like this:\nclass DerivedClassName(Base1, Base2, Base3):\n    <statement-1>\n    .\n    .\n    .\n    <statement-N>\nFor most purposes, in the simplest cases, you can think of the search for\nattributes inherited from a parent class as depth-first, left-to-right, not\nsearching twice in the same class where there is an overlap in the hierarchy.\nThus, if an attribute is not found in DerivedClassName\n, it is searched\nfor in Base1\n, then (recursively) in the base classes of Base1\n,\nand if it is not found there, it is searched for in Base2\n, and so on.\nIn fact, it is slightly more complex than that; the method resolution order\nchanges dynamically to support cooperative calls to super()\n. This\napproach is known in some other multiple-inheritance languages as\ncall-next-method and is more powerful than the super call found in\nsingle-inheritance languages.\nDynamic ordering is necessary because all cases of multiple inheritance exhibit\none or more diamond relationships (where at least one of the parent classes\ncan be accessed through multiple paths from the bottommost class). For example,\nall classes inherit from object\n, so any case of multiple inheritance\nprovides more than one path to reach object\n. To keep the base classes\nfrom being accessed more than once, the dynamic algorithm linearizes the search\norder in a way that preserves the left-to-right ordering specified in each\nclass, that calls each parent only once, and that is monotonic (meaning that a\nclass can be subclassed without affecting the precedence order of its parents).\nTaken together, these properties make it possible to design reliable and\nextensible classes with multiple inheritance. For more detail, see\nThe Python 2.3 Method Resolution Order.\n9.6. 
Private Variables\u00b6\n\u201cPrivate\u201d instance variables that cannot be accessed except from inside an\nobject don\u2019t exist in Python. However, there is a convention that is followed\nby most Python code: a name prefixed with an underscore (e.g. _spam\n) should\nbe treated as a non-public part of the API (whether it is a function, a method\nor a data member). It should be considered an implementation detail and subject\nto change without notice.\nSince there is a valid use-case for class-private members (namely to avoid name\nclashes of names with names defined by subclasses), there is limited support for\nsuch a mechanism, called name mangling. Any identifier of the form\n__spam\n(at least two leading underscores, at most one trailing underscore)\nis textually replaced with _classname__spam\n, where classname\nis the\ncurrent class name with leading underscore(s) stripped. This mangling is done\nwithout regard to the syntactic position of the identifier, as long as it\noccurs within the definition of a class.\nSee also\nThe private name mangling specifications for details and special cases.\nName mangling is helpful for letting subclasses override methods without breaking intraclass method calls. 
For example:\nclass Mapping:\n    def __init__(self, iterable):\n        self.items_list = []\n        self.__update(iterable)\n\n    def update(self, iterable):\n        for item in iterable:\n            self.items_list.append(item)\n\n    __update = update   # private copy of original update() method\n\nclass MappingSubclass(Mapping):\n\n    def update(self, keys, values):\n        # provides new signature for update()\n        # but does not break __init__()\n        for item in zip(keys, values):\n            self.items_list.append(item)\nThe above example would work even if MappingSubclass\nwere to introduce a\n__update\nidentifier since it is replaced with _Mapping__update\nin the\nMapping\nclass and _MappingSubclass__update\nin the MappingSubclass\nclass respectively.\nNote that the mangling rules are designed mostly to avoid accidents; it still is possible to access or modify a variable that is considered private. This can even be useful in special circumstances, such as in the debugger.\nNotice that code passed to exec()\nor eval()\ndoes not consider the\nclassname of the invoking class to be the current class; this is similar to the\neffect of the global\nstatement, the effect of which is likewise restricted\nto code that is byte-compiled together. The same restriction applies to\ngetattr()\n, setattr()\nand delattr()\n, as well as when referencing\n__dict__\ndirectly.\n9.7. Odds and Ends\u00b6\nSometimes it is useful to have a data type similar to the Pascal \u201crecord\u201d or C\n\u201cstruct\u201d, bundling together a few named data items. The idiomatic approach\nis to use dataclasses\nfor this purpose:\nfrom dataclasses import dataclass\n\n@dataclass\nclass Employee:\n    name: str\n    dept: str\n    salary: int\n>>> john = Employee('john', 'computer lab', 1000)\n>>> john.dept\n'computer lab'\n>>> john.salary\n1000\nA piece of Python code that expects a particular abstract data type can often be\npassed a class that emulates the methods of that data type instead. 
For\ninstance, if you have a function that formats some data from a file object, you\ncan define a class with methods read()\nand\nreadline()\nthat get the\ndata from a string buffer instead, and pass it as an argument.\nInstance method objects have attributes, too:\nm.__self__\nis the instance\nobject with the method m()\n, and m.__func__\nis\nthe function object\ncorresponding to the method.\n9.8. Iterators\u00b6\nBy now you have probably noticed that most container objects can be looped over\nusing a for\nstatement:\nfor element in [1, 2, 3]:\n    print(element)\nfor element in (1, 2, 3):\n    print(element)\nfor key in {'one':1, 'two':2}:\n    print(key)\nfor char in "123":\n    print(char)\nfor line in open("myfile.txt"):\n    print(line, end='')\nThis style of access is clear, concise, and convenient. The use of iterators\npervades and unifies Python. Behind the scenes, the for\nstatement\ncalls iter()\non the container object. The function returns an iterator\nobject that defines the method __next__()\nwhich accesses\nelements in the container one at a time. When there are no more elements,\n__next__()\nraises a StopIteration\nexception which tells the\nfor\nloop to terminate. You can call the __next__()\nmethod\nusing the next()\nbuilt-in function; this example shows how it all works:\n>>> s = 'abc'\n>>> it = iter(s)\n>>> it\n<str_iterator object at 0x...>\n>>> next(it)\n'a'\n>>> next(it)\n'b'\n>>> next(it)\n'c'\n>>> next(it)\nTraceback (most recent call last):\n  File "<stdin>", line 1, in <module>\n    next(it)\nStopIteration\nHaving seen the mechanics behind the iterator protocol, it is easy to add\niterator behavior to your classes. Define an __iter__()\nmethod which\nreturns an object with a __next__()\nmethod. 
If the class\ndefines __next__()\n, then __iter__()\ncan just return self\n:\nclass Reverse:\n    """Iterator for looping over a sequence backwards."""\n    def __init__(self, data):\n        self.data = data\n        self.index = len(data)\n\n    def __iter__(self):\n        return self\n\n    def __next__(self):\n        if self.index == 0:\n            raise StopIteration\n        self.index = self.index - 1\n        return self.data[self.index]\n>>> rev = Reverse('spam')\n>>> iter(rev)\n<__main__.Reverse object at 0x00A1DB50>\n>>> for char in rev:\n...     print(char)\n...\nm\na\np\ns\n9.9. Generators\u00b6\nGenerators are a simple and powerful tool for creating iterators. They\nare written like regular functions but use the yield\nstatement\nwhenever they want to return data. Each time next()\nis called on it, the\ngenerator resumes where it left off (it remembers all the data values and which\nstatement was last executed). An example shows that generators can be trivially\neasy to create:\ndef reverse(data):\n    for index in range(len(data)-1, -1, -1):\n        yield data[index]\n>>> for char in reverse('golf'):\n...     print(char)\n...\nf\nl\no\ng\nAnything that can be done with generators can also be done with class-based\niterators as described in the previous section. What makes generators so\ncompact is that the __iter__()\nand __next__()\nmethods\nare created automatically.\nAnother key feature is that the local variables and execution state are\nautomatically saved between calls. This made the function easier to write and\nmuch more clear than an approach using instance variables like self.index\nand self.data\n.\nIn addition to automatic method creation and saving program state, when\ngenerators terminate, they automatically raise StopIteration\n. In\ncombination, these features make it easy to create iterators with no more effort\nthan writing a regular function.\n9.10. 
Generator Expressions\u00b6\nSome simple generators can be coded succinctly as expressions using a syntax similar to list comprehensions but with parentheses instead of square brackets. These expressions are designed for situations where the generator is used right away by an enclosing function. Generator expressions are more compact but less versatile than full generator definitions and tend to be more memory friendly than equivalent list comprehensions.\nExamples:\n>>> sum(i*i for i in range(10)) # sum of squares\n285\n>>> xvec = [10, 20, 30]\n>>> yvec = [7, 5, 3]\n>>> sum(x*y for x,y in zip(xvec, yvec)) # dot product\n260\n>>> unique_words = set(word for line in page for word in line.split())\n>>> valedictorian = max((student.gpa, student.name) for student in graduates)\n>>> data = 'golf'\n>>> list(data[i] for i in range(len(data)-1, -1, -1))\n['f', 'l', 'o', 'g']\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 8139}
{"url": "https://docs.python.org/3/faq/index.html", "title": "Python Frequently Asked Questions", "content": "
Python FAQ\nPython Frequently Asked Questions\u00b6\nGeneral Python FAQ\nProgramming FAQ\nDesign and History FAQ\nLibrary and Extension FAQ\nExtending/Embedding FAQ\nPython on Windows FAQ\nGraphic User Interface FAQ\n\u201cWhy is Python Installed on my Computer?\u201d FAQ", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 201}
{"url": "https://docs.python.org/3/installing/index.html", "title": "Installing Python Modules", "content": "Installing Python Modules\u00b6\n- Email:\nAs a popular open source development project, Python has an active supporting community of contributors and users that also make their software available for other Python developers to use under open source license terms.\nThis allows Python users to share and collaborate effectively, benefiting from the solutions others have already created to common (and sometimes even rare!) problems, as well as potentially contributing their own solutions to the common pool.\nThis guide covers the installation part of the process. For a guide to creating and sharing your own Python projects, refer to the Python packaging user guide.\nNote\nFor corporate and other institutional users, be aware that many organisations have their own policies around using and contributing to open source software. 
Please take such policies into account when making use of the distribution and installation tools provided with Python.\nKey terms\u00b6\npip\nis the preferred installer program. Starting with Python 3.4, it is included by default with the Python binary installers.\nA virtual environment is a semi-isolated Python environment that allows packages to be installed for use by a particular application, rather than being installed system wide.\nvenv\nis the standard tool for creating virtual environments, and has been part of Python since Python 3.3. Starting with Python 3.4, it defaults to installing pip\ninto all created virtual environments.\nvirtualenv\nis a third party alternative (and predecessor) to venv\n. It allows virtual environments to be used on versions of Python prior to 3.4, which either don\u2019t provide venv\nat all, or aren\u2019t able to automatically install pip\ninto created environments.\nThe Python Package Index is a public repository of open source licensed packages made available for use by other Python users.\nthe Python Packaging Authority is the group of developers and documentation authors responsible for the maintenance and evolution of the standard packaging tools and the associated metadata and file format standards. They maintain a variety of tools, documentation, and issue trackers on GitHub.\ndistutils\nis the original build and distribution system first added to the Python standard library in 1998. While direct use of distutils\nis being phased out, it still laid the foundation for the current packaging and distribution infrastructure, and it not only remains part of the standard library, but its name lives on in other ways (such as the name of the mailing list used to coordinate Python packaging standards development).\nChanged in version 3.5: The use of venv\nis now recommended for creating virtual environments.\nBasic usage\u00b6\nThe standard packaging tools are all designed to be used from the command line.\nThe following command will install the latest version of a module and its dependencies from the Python Package Index:\npython -m pip install SomePackage\nNote\nFor POSIX users (including macOS and Linux users), the examples in this guide assume the use of a virtual environment.\nFor Windows users, the examples in this guide assume that the option to adjust the system PATH environment variable was selected when installing Python.\nIt\u2019s also possible to specify an exact or minimum version directly on the\ncommand line. When using comparator operators such as >\n, <\nor some other\nspecial character which get interpreted by the shell, the package name and the\nversion should be enclosed within double quotes:\npython -m pip install SomePackage==1.0.4 # specific version\npython -m pip install "SomePackage>=1.0.4" # minimum version\nNormally, if a suitable module is already installed, attempting to install it again will have no effect. 
Upgrading existing modules must be requested explicitly:\npython -m pip install --upgrade SomePackage\nMore information and resources regarding pip\nand its capabilities can be\nfound in the Python Packaging User Guide.\nCreation of virtual environments is done through the venv\nmodule.\nInstalling packages into an active virtual environment uses the commands shown\nabove.\nHow do I \u2026?\u00b6\nThese are quick answers or links for some common tasks.\n\u2026 install pip\nin versions of Python prior to Python 3.4?\u00b6\nPython only started bundling pip\nwith Python 3.4. For earlier versions,\npip\nneeds to be \u201cbootstrapped\u201d as described in the Python Packaging\nUser Guide.\n\u2026 install packages just for the current user?\u00b6\nPassing the --user\noption to python -m pip install\nwill install a\npackage just for the current user, rather than for all users of the system.\n\u2026 install scientific Python packages?\u00b6\nA number of scientific Python packages have complex binary dependencies, and\naren\u2019t currently easy to install using pip\ndirectly. 
At this point in\ntime, it will often be easier for users to install these packages by\nother means\nrather than attempting to install them with pip\n.\n\u2026 work with multiple versions of Python installed in parallel?\u00b6\nOn Linux, macOS, and other POSIX systems, use the versioned Python commands\nin combination with the -m\nswitch to run the appropriate copy of\npip\n:\npython2 -m pip install SomePackage # default Python 2\npython2.7 -m pip install SomePackage # specifically Python 2.7\npython3 -m pip install SomePackage # default Python 3\npython3.4 -m pip install SomePackage # specifically Python 3.4\nAppropriately versioned pip\ncommands may also be available.\nOn Windows, use the py\nPython launcher in combination with the -m\nswitch:\npy -2 -m pip install SomePackage # default Python 2\npy -2.7 -m pip install SomePackage # specifically Python 2.7\npy -3 -m pip install SomePackage # default Python 3\npy -3.4 -m pip install SomePackage # specifically Python 3.4\nCommon installation issues\u00b6\nInstalling into the system Python on Linux\u00b6\nOn Linux systems, a Python installation will typically be included as part\nof the distribution. Installing into this Python installation requires\nroot access to the system, and may interfere with the operation of the\nsystem package manager and other components of the system if a component\nis unexpectedly upgraded using pip\n.\nOn such systems, it is often better to use a virtual environment or a\nper-user installation when installing packages with pip\n.\nPip not installed\u00b6\nIt is possible that pip\ndoes not get installed by default. 
One potential fix is:\npython -m ensurepip --default-pip\nThere are also additional resources for installing pip.\nInstalling binary extensions\u00b6\nPython has typically relied heavily on source based distribution, with end users being expected to compile extension modules from source as part of the installation process.\nWith the introduction of support for the binary wheel\nformat, and the\nability to publish wheels for at least Windows and macOS through the\nPython Package Index, this problem is expected to diminish over time,\nas users are more regularly able to install pre-built extensions rather\nthan needing to build them themselves.\nSome of the solutions for installing scientific software\nthat are not yet available as pre-built wheel\nfiles may also help with\nobtaining other binary extensions without needing to build them locally.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1770}
{"url": "https://docs.python.org/3/tutorial/errors.html", "title": "Errors and Exceptions", "content": "8. Errors and Exceptions\u00b6\nUntil now error messages haven\u2019t been more than mentioned, but if you have tried out the examples you have probably seen some. There are (at least) two distinguishable kinds of errors: syntax errors and exceptions.\n8.1. Syntax Errors\u00b6\nSyntax errors, also known as parsing errors, are perhaps the most common kind of complaint you get while you are still learning Python:\n>>> while True print('Hello world')\n  File "<stdin>", line 1\n    while True print('Hello world')\n               ^^^^^\nSyntaxError: invalid syntax\nThe parser repeats the offending line and displays little arrows pointing\nat the place where the error was detected. Note that this is not always the\nplace that needs to be fixed. 
In the example, the error is detected at the\nfunction print()\n, since a colon (':'\n) is missing just before it.\nThe file name (<stdin> in our example) and line number are printed so you\nknow where to look in case the input came from a file.\n8.2. Exceptions\u00b6\nEven if a statement or expression is syntactically correct, it may cause an error when an attempt is made to execute it. Errors detected during execution are called exceptions and are not unconditionally fatal: you will soon learn how to handle them in Python programs. Most exceptions are not handled by programs, however, and result in error messages as shown here:\n>>> 10 * (1/0)\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\n10 * (1/0)\n~^~\nZeroDivisionError: division by zero\n>>> 4 + spam*3\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\n4 + spam*3\n^^^^\nNameError: name 'spam' is not defined\n>>> '2' + 2\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\n'2' + 2\n~~~~^~~\nTypeError: can only concatenate str (not \"int\") to str\nThe last line of the error message indicates what happened. Exceptions come in\ndifferent types, and the type is printed as part of the message: the types in\nthe example are ZeroDivisionError\n, NameError\nand TypeError\n.\nThe string printed as the exception type is the name of the built-in exception\nthat occurred. This is true for all built-in exceptions, but need not be true\nfor user-defined exceptions (although it is a useful convention). Standard\nexception names are built-in identifiers (not reserved keywords).\nThe rest of the line provides detail based on the type of exception and what caused it.\nThe preceding part of the error message shows the context where the exception occurred, in the form of a stack traceback. In general it contains a stack traceback listing source lines; however, it will not display lines read from standard input.\nBuilt-in Exceptions lists the built-in exceptions and their meanings.\n8.3. 
Handling Exceptions\u00b6\nIt is possible to write programs that handle selected exceptions. Look at the\nfollowing example, which asks the user for input until a valid integer has been\nentered, but allows the user to interrupt the program (using Control-C or\nwhatever the operating system supports); note that a user-generated interruption\nis signalled by raising the KeyboardInterrupt\nexception.\n>>> while True:\n... try:\n... x = int(input(\"Please enter a number: \"))\n... break\n... except ValueError:\n... print(\"Oops! That was no valid number. Try again...\")\n...\nThe try\nstatement works as follows.\nFirst, the try clause (the statement(s) between the\ntry\nand except\nkeywords) is executed. If no exception occurs, the except clause is skipped and execution of the\ntry\nstatement is finished. If an exception occurs during execution of the\ntry\nclause, the rest of the clause is skipped. Then, if its type matches the exception named after the except\nkeyword, the except clause is executed, and then execution continues after the try/except block. If an exception occurs which does not match the exception named in the except clause, it is passed on to outer\ntry\nstatements; if no handler is found, it is an unhandled exception and execution stops with an error message.\nA try\nstatement may have more than one except clause, to specify\nhandlers for different exceptions. At most one handler will be executed.\nHandlers only handle exceptions that occur in the corresponding try clause,\nnot in other handlers of the same try\nstatement. An except clause\nmay name multiple exceptions as a parenthesized tuple, for example:\n... except (RuntimeError, TypeError, NameError):\n... 
pass\nA class in an except\nclause matches exceptions which are instances of the\nclass itself or one of its derived classes (but not the other way around \u2014 an\nexcept clause listing a derived class does not match instances of its base classes).\nFor example, the following code will print B, C, D in that order:\nclass B(Exception):\npass\nclass C(B):\npass\nclass D(C):\npass\nfor cls in [B, C, D]:\ntry:\nraise cls()\nexcept D:\nprint(\"D\")\nexcept C:\nprint(\"C\")\nexcept B:\nprint(\"B\")\nNote that if the except clauses were reversed (with except B\nfirst), it\nwould have printed B, B, B \u2014 the first matching except clause is triggered.\nWhen an exception occurs, it may have associated values, also known as the exception\u2019s arguments. The presence and types of the arguments depend on the exception type.\nThe except clause may specify a variable after the exception name. The\nvariable is bound to the exception instance which typically has an args\nattribute that stores the arguments. For convenience, builtin exception\ntypes define __str__()\nto print all the arguments without explicitly\naccessing .args\n.\n>>> try:\n... raise Exception('spam', 'eggs')\n... except Exception as inst:\n... print(type(inst)) # the exception type\n... print(inst.args) # arguments stored in .args\n... print(inst) # __str__ allows args to be printed directly,\n... # but may be overridden in exception subclasses\n... x, y = inst.args # unpack args\n... print('x =', x)\n... print('y =', y)\n...\n<class 'Exception'>\n('spam', 'eggs')\n('spam', 'eggs')\nx = spam\ny = eggs\nThe exception\u2019s __str__()\noutput is printed as the last part (\u2018detail\u2019)\nof the message for unhandled exceptions.\nBaseException\nis the common base class of all exceptions. 
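This class hierarchy can be probed directly with the built-in issubclass(); a minimal sketch (the specific classes checked here are just common built-ins chosen for illustration):

```python
# Probing the built-in exception hierarchy with issubclass().
# Exception and KeyboardInterrupt both derive from BaseException:
print(issubclass(Exception, BaseException))          # True
print(issubclass(KeyboardInterrupt, BaseException))  # True

# KeyboardInterrupt does NOT derive from Exception, which is why
# "except Exception:" does not swallow Ctrl-C interrupts:
print(issubclass(KeyboardInterrupt, Exception))      # False

# An except clause naming a base class also matches derived classes,
# e.g. ZeroDivisionError is caught by "except ArithmeticError:":
print(issubclass(ZeroDivisionError, ArithmeticError))  # True
```

The same relationships govern which except clause matches a raised instance.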
One of its\nsubclasses, Exception\n, is the base class of all the non-fatal exceptions.\nExceptions which are not subclasses of Exception\nare not typically\nhandled, because they are used to indicate that the program should terminate.\nThey include SystemExit\nwhich is raised by sys.exit()\nand\nKeyboardInterrupt\nwhich is raised when a user wishes to interrupt\nthe program.\nException\ncan be used as a wildcard that catches (almost) everything.\nHowever, it is good practice to be as specific as possible with the types\nof exceptions that we intend to handle, and to allow any unexpected\nexceptions to propagate on.\nThe most common pattern for handling Exception\nis to print or log\nthe exception and then re-raise it (allowing a caller to handle the\nexception as well):\nimport sys\ntry:\nf = open('myfile.txt')\ns = f.readline()\ni = int(s.strip())\nexcept OSError as err:\nprint(\"OS error:\", err)\nexcept ValueError:\nprint(\"Could not convert data to an integer.\")\nexcept Exception as err:\nprint(f\"Unexpected {err=}, {type(err)=}\")\nraise\nThe try\n\u2026 except\nstatement has an optional else\nclause, which, when present, must follow all except clauses. It is useful\nfor code that must be executed if the try clause does not raise an exception.\nFor example:\nfor arg in sys.argv[1:]:\ntry:\nf = open(arg, 'r')\nexcept OSError:\nprint('cannot open', arg)\nelse:\nprint(arg, 'has', len(f.readlines()), 'lines')\nf.close()\nThe use of the else\nclause is better than adding additional code to\nthe try\nclause because it avoids accidentally catching an exception\nthat wasn\u2019t raised by the code being protected by the try\n\u2026\nexcept\nstatement.\nException handlers do not handle only exceptions that occur immediately in the try clause, but also those that occur inside functions that are called (even indirectly) in the try clause. For example:\n>>> def this_fails():\n... x = 1/0\n...\n>>> try:\n... this_fails()\n... except ZeroDivisionError as err:\n... 
print('Handling run-time error:', err)\n...\nHandling run-time error: division by zero\n8.4. Raising Exceptions\u00b6\nThe raise\nstatement allows the programmer to force a specified\nexception to occur. For example:\n>>> raise NameError('HiThere')\nTraceback (most recent call last):\nFile \"\", line 1, in \nraise NameError('HiThere')\nNameError: HiThere\nThe sole argument to raise\nindicates the exception to be raised.\nThis must be either an exception instance or an exception class (a class that\nderives from BaseException\n, such as Exception\nor one of its\nsubclasses). If an exception class is passed, it will be implicitly\ninstantiated by calling its constructor with no arguments:\nraise ValueError # shorthand for 'raise ValueError()'\nIf you need to determine whether an exception was raised but don\u2019t intend to\nhandle it, a simpler form of the raise\nstatement allows you to\nre-raise the exception:\n>>> try:\n... raise NameError('HiThere')\n... except NameError:\n... print('An exception flew by!')\n... raise\n...\nAn exception flew by!\nTraceback (most recent call last):\nFile \"\", line 2, in \nraise NameError('HiThere')\nNameError: HiThere\n8.5. Exception Chaining\u00b6\nIf an unhandled exception occurs inside an except\nsection, it will\nhave the exception being handled attached to it and included in the error\nmessage:\n>>> try:\n... open(\"database.sqlite\")\n... except OSError:\n... 
raise RuntimeError(\"unable to handle error\")\n...\nTraceback (most recent call last):\nFile \"\", line 2, in \nopen(\"database.sqlite\")\n~~~~^^^^^^^^^^^^^^^^^^^\nFileNotFoundError: [Errno 2] No such file or directory: 'database.sqlite'\nDuring handling of the above exception, another exception occurred:\nTraceback (most recent call last):\nFile \"\", line 4, in \nraise RuntimeError(\"unable to handle error\")\nRuntimeError: unable to handle error\nTo indicate that an exception is a direct consequence of another, the\nraise\nstatement allows an optional from\nclause:\n# exc must be exception instance or None.\nraise RuntimeError from exc\nThis can be useful when you are transforming exceptions. For example:\n>>> def func():\n... raise ConnectionError\n...\n>>> try:\n... func()\n... except ConnectionError as exc:\n... raise RuntimeError('Failed to open database') from exc\n...\nTraceback (most recent call last):\nFile \"\", line 2, in \nfunc()\n~~~~^^\nFile \"\", line 2, in func\nConnectionError\nThe above exception was the direct cause of the following exception:\nTraceback (most recent call last):\nFile \"\", line 4, in \nraise RuntimeError('Failed to open database') from exc\nRuntimeError: Failed to open database\nIt also allows disabling automatic exception chaining using the from None\nidiom:\n>>> try:\n... open('database.sqlite')\n... except OSError:\n... raise RuntimeError from None\n...\nTraceback (most recent call last):\nFile \"\", line 4, in \nraise RuntimeError from None\nRuntimeError\nFor more information about chaining mechanics, see Built-in Exceptions.\n8.6. User-defined Exceptions\u00b6\nPrograms may name their own exceptions by creating a new exception class (see\nClasses for more about Python classes). 
Exceptions should typically\nbe derived from the Exception\nclass, either directly or indirectly.\nException classes can be defined which do anything any other class can do, but are usually kept simple, often only offering a number of attributes that allow information about the error to be extracted by handlers for the exception.\nMost exceptions are defined with names that end in \u201cError\u201d, similar to the naming of the standard exceptions.\nMany standard modules define their own exceptions to report errors that may occur in functions they define.\n8.7. Defining Clean-up Actions\u00b6\nThe try\nstatement has another optional clause which is intended to\ndefine clean-up actions that must be executed under all circumstances. For\nexample:\n>>> try:\n... raise KeyboardInterrupt\n... finally:\n... print('Goodbye, world!')\n...\nGoodbye, world!\nTraceback (most recent call last):\nFile \"<stdin>\", line 2, in <module>\nraise KeyboardInterrupt\nKeyboardInterrupt\nIf a finally\nclause is present, the finally\nclause will execute as the last task before the try\nstatement completes. The finally\nclause runs whether or\nnot the try\nstatement produces an exception. The following\npoints discuss more complex cases when an exception occurs:\nIf an exception occurs during execution of the\ntry\nclause, the exception may be handled by an except\nclause. If the exception is not handled by an except\nclause, the exception is re-raised after the finally\nclause has been executed. An exception could occur during execution of an\nexcept\nor else\nclause. Again, the exception is re-raised after the finally\nclause has been executed. If the\nfinally\nclause executes a break\n, continue\nor return\nstatement, exceptions are not re-raised. This can be confusing and is therefore discouraged. 
From version 3.14 the compiler emits a SyntaxWarning\nfor it (see PEP 765). If the\ntry\nstatement reaches a break\n, continue\nor return\nstatement, the finally\nclause will execute just prior to the break\n, continue\nor return\nstatement\u2019s execution. If a\nfinally\nclause includes a return\nstatement, the returned value will be the one from the finally\nclause\u2019s return\nstatement, not the value from the try\nclause\u2019s return\nstatement. This can be confusing and is therefore discouraged. From version 3.14 the compiler emits a SyntaxWarning\nfor it (see PEP 765).\nFor example:\n>>> def bool_return():\n... try:\n... return True\n... finally:\n... return False\n...\n>>> bool_return()\nFalse\nA more complicated example:\n>>> def divide(x, y):\n... try:\n... result = x / y\n... except ZeroDivisionError:\n... print(\"division by zero!\")\n... else:\n... print(\"result is\", result)\n... finally:\n... print(\"executing finally clause\")\n...\n>>> divide(2, 1)\nresult is 2.0\nexecuting finally clause\n>>> divide(2, 0)\ndivision by zero!\nexecuting finally clause\n>>> divide(\"2\", \"1\")\nexecuting finally clause\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\ndivide(\"2\", \"1\")\n~~~~~~^^^^^^^^^^\nFile \"<stdin>\", line 3, in divide\nresult = x / y\n~~^~~\nTypeError: unsupported operand type(s) for /: 'str' and 'str'\nAs you can see, the finally\nclause is executed in any event. The\nTypeError\nraised by dividing two strings is not handled by the\nexcept\nclause and therefore re-raised after the finally\nclause has been executed.\nIn real world applications, the finally\nclause is useful for\nreleasing external resources (such as files or network connections), regardless\nof whether the use of the resource was successful.\n8.8. Predefined Clean-up Actions\u00b6\nSome objects define standard clean-up actions to be undertaken when the object is no longer needed, regardless of whether or not the operation using the object succeeded or failed. 
Look at the following example, which tries to open a file and print its contents to the screen.\nfor line in open(\"myfile.txt\"):\nprint(line, end=\"\")\nThe problem with this code is that it leaves the file open for an indeterminate\namount of time after this part of the code has finished executing.\nThis is not an issue in simple scripts, but can be a problem for larger\napplications. The with\nstatement allows objects like files to be\nused in a way that ensures they are always cleaned up promptly and correctly.\nwith open(\"myfile.txt\") as f:\nfor line in f:\nprint(line, end=\"\")\nAfter the statement is executed, the file f is always closed, even if a problem was encountered while processing the lines. Objects which, like files, provide predefined clean-up actions will indicate this in their documentation.\n8.10. Enriching Exceptions with Notes\u00b6\nWhen an exception is created in order to be raised, it is usually initialized\nwith information that describes the error that has occurred. There are cases\nwhere it is useful to add information after the exception was caught. For this\npurpose, exceptions have a method add_note(note)\nthat accepts a string and\nadds it to the exception\u2019s notes list. The standard traceback rendering\nincludes all notes, in the order they were added, after the exception.\n>>> try:\n... raise TypeError('bad type')\n... except Exception as e:\n... e.add_note('Add some information')\n... e.add_note('Add some more information')\n... raise\n...\nTraceback (most recent call last):\nFile \"\", line 2, in \nraise TypeError('bad type')\nTypeError: bad type\nAdd some information\nAdd some more information\n>>>\nFor example, when collecting exceptions into an exception group, we may want to add context information for the individual errors. In the following each exception in the group has a note indicating when this error has occurred.\n>>> def f():\n... 
raise OSError('operation failed')\n...\n>>> excs = []\n>>> for i in range(3):\n... try:\n... f()\n... except Exception as e:\n... e.add_note(f'Happened in Iteration {i+1}')\n... excs.append(e)\n...\n>>> raise ExceptionGroup('We have some problems', excs)\n+ Exception Group Traceback (most recent call last):\n| File \"\", line 1, in \n| raise ExceptionGroup('We have some problems', excs)\n| ExceptionGroup: We have some problems (3 sub-exceptions)\n+-+---------------- 1 ----------------\n| Traceback (most recent call last):\n| File \"\", line 3, in \n| f()\n| ~^^\n| File \"\", line 2, in f\n| raise OSError('operation failed')\n| OSError: operation failed\n| Happened in Iteration 1\n+---------------- 2 ----------------\n| Traceback (most recent call last):\n| File \"\", line 3, in \n| f()\n| ~^^\n| File \"\", line 2, in f\n| raise OSError('operation failed')\n| OSError: operation failed\n| Happened in Iteration 2\n+---------------- 3 ----------------\n| Traceback (most recent call last):\n| File \"\", line 3, in \n| f()\n| ~^^\n| File \"\", line 2, in f\n| raise OSError('operation failed')\n| OSError: operation failed\n| Happened in Iteration 3\n+------------------------------------\n>>>", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4806}
{"url": "https://docs.python.org/3/howto/index.html", "title": "Python HOWTOs", "content": "Python HOWTOs\u00b6\nPython HOWTOs are documents that cover a specific topic in-depth. Modeled on the Linux Documentation Project\u2019s HOWTO collection, this collection is an effort to foster documentation that\u2019s more detailed than the Python Library Reference.\nGeneral:\nAdvanced development:\nDebugging and profiling:", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 77}
{"url": "https://docs.python.org/3/c-api/monitoring.html", "title": "Monitoring C API", "content": "Monitoring C API\u00b6\nAdded in version 3.13.\nAn extension may need to interact with the event monitoring system. 
Subscribing\nto events and registering callbacks can be done via the Python API exposed in\nsys.monitoring\n.\nGenerating Execution Events\u00b6\nThe functions below make it possible for an extension to fire monitoring\nevents as it emulates the execution of Python code. Each of these functions\naccepts a PyMonitoringState\nstruct which contains concise information\nabout the activation state of events, as well as the event arguments, which\ninclude a PyObject*\nrepresenting the code object, the instruction offset\nand sometimes additional, event-specific arguments (see sys.monitoring\nfor details about the signatures of the different event callbacks).\nThe codelike\nargument should be an instance of types.CodeType\nor of a type that emulates it.\nThe VM disables tracing when firing an event, so there is no need for user code to do that.\nMonitoring functions should not be called with an exception set, except those listed below as working with the current exception.\n-\ntype PyMonitoringState\u00b6\nRepresentation of the state of an event type. 
It is allocated by the user while its contents are maintained by the monitoring API functions described below.\nAll of the functions below return 0 on success and -1 (with an exception set) on error.\nSee sys.monitoring\nfor descriptions of the events.\n-\nint PyMonitoring_FirePyStartEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire a\nPY_START\nevent.\n-\nint PyMonitoring_FirePyResumeEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire a\nPY_RESUME\nevent.\n-\nint PyMonitoring_FirePyReturnEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *retval)\u00b6\nFire a\nPY_RETURN\nevent.\n-\nint PyMonitoring_FirePyYieldEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *retval)\u00b6\nFire a\nPY_YIELD\nevent.\n-\nint PyMonitoring_FireCallEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *callable, PyObject *arg0)\u00b6\nFire a\nCALL\nevent.\n-\nint PyMonitoring_FireLineEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, int lineno)\u00b6\nFire a\nLINE\nevent.\n-\nint PyMonitoring_FireJumpEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *target_offset)\u00b6\nFire a\nJUMP\nevent.\n-\nint PyMonitoring_FireBranchLeftEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *target_offset)\u00b6\nFire a\nBRANCH_LEFT\nevent.\n-\nint PyMonitoring_FireBranchRightEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *target_offset)\u00b6\nFire a\nBRANCH_RIGHT\nevent.\n-\nint PyMonitoring_FireCReturnEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *retval)\u00b6\nFire a\nC_RETURN\nevent.\n-\nint PyMonitoring_FirePyThrowEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire a\nPY_THROW\nevent with the current exception (as returned by PyErr_GetRaisedException()\n).\n-\nint PyMonitoring_FireRaiseEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire a\nRAISE\nevent with the current exception (as returned by PyErr_GetRaisedException()\n).\n-\nint PyMonitoring_FireCRaiseEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire a\nC_RAISE\nevent with the current exception (as returned by PyErr_GetRaisedException()\n).\n-\nint PyMonitoring_FireReraiseEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire a\nRERAISE\nevent with the current exception (as returned by PyErr_GetRaisedException()\n).\n-\nint PyMonitoring_FireExceptionHandledEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire an\nEXCEPTION_HANDLED\nevent with the current exception (as returned by PyErr_GetRaisedException()\n).\n-\nint PyMonitoring_FirePyUnwindEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)\u00b6\nFire a\nPY_UNWIND\nevent with the current exception (as returned by PyErr_GetRaisedException()\n).\n-\nint PyMonitoring_FireStopIterationEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *value)\u00b6\nFire a\nSTOP_ITERATION\nevent. If value\nis an instance of StopIteration\n, it is used. Otherwise, a new StopIteration\ninstance is created with value\nas its argument.\nManaging the Monitoring State\u00b6\nMonitoring states can be managed with the help of monitoring scopes. A scope would typically correspond to a Python function.\n-\nint PyMonitoring_EnterScope(PyMonitoringState *state_array, uint64_t *version, const uint8_t *event_types, Py_ssize_t length)\u00b6\nEnter a monitored scope.\nevent_types\nis an array of the event IDs for events that may be fired from the scope. 
For example, the ID of a PY_START\nevent is the value PY_MONITORING_EVENT_PY_START\n, which is numerically equal to the base-2 logarithm of sys.monitoring.events.PY_START\n.\nstate_array\nis an array with a monitoring state entry for each event in event_types\n, it is allocated by the user but populated by PyMonitoring_EnterScope()\nwith information about the activation state of the event. The size of event_types\n(and hence also of state_array\n) is given in length\n. The\nversion\nargument is a pointer to a value which should be allocated by the user together with state_array\nand initialized to 0, and then set only by PyMonitoring_EnterScope()\nitself. It allows this function to determine whether event states have changed since the previous call, and to return quickly if they have not. The scopes referred to here are lexical scopes: a function, class or method.\nPyMonitoring_EnterScope()\nshould be called whenever the lexical scope is entered. Scopes can be reentered, reusing the same state_array and version, in situations like when emulating a recursive Python function. 
When a code-like\u2019s execution is paused, such as when emulating a generator, the scope needs to be exited and re-entered. The macros for event_types are:\nMacro\nEvent\n-\nPY_MONITORING_EVENT_BRANCH_LEFT\u00b6\n-\nPY_MONITORING_EVENT_BRANCH_RIGHT\u00b6\n-\nPY_MONITORING_EVENT_CALL\u00b6\n-\nPY_MONITORING_EVENT_C_RAISE\u00b6\n-\nPY_MONITORING_EVENT_C_RETURN\u00b6\n-\nPY_MONITORING_EVENT_EXCEPTION_HANDLED\u00b6\n-\nPY_MONITORING_EVENT_INSTRUCTION\u00b6\n-\nPY_MONITORING_EVENT_JUMP\u00b6\n-\nPY_MONITORING_EVENT_LINE\u00b6\n-\nPY_MONITORING_EVENT_PY_RESUME\u00b6\n-\nPY_MONITORING_EVENT_PY_RETURN\u00b6\n-\nPY_MONITORING_EVENT_PY_START\u00b6\n-\nPY_MONITORING_EVENT_PY_THROW\u00b6\n-\nPY_MONITORING_EVENT_PY_UNWIND\u00b6\n-\nPY_MONITORING_EVENT_PY_YIELD\u00b6\n-\nPY_MONITORING_EVENT_RAISE\u00b6\n-\nPY_MONITORING_EVENT_RERAISE\u00b6\n-\nPY_MONITORING_EVENT_STOP_ITERATION\u00b6\n-\nint PyMonitoring_ExitScope(void)\u00b6\nExit the last scope that was entered with\nPyMonitoring_EnterScope()\n.\n-\nint PY_MONITORING_IS_INSTRUMENTED_EVENT(uint8_t ev)\u00b6\nReturn true if the event corresponding to the event ID ev is a local event.\nAdded in version 3.13.\nDeprecated since version 3.14: This function is soft deprecated.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1701}
{"url": "https://docs.python.org/3/faq/installed.html", "title": "\u201cWhy is Python Installed on my Computer?\u201d FAQ", "content": "\u201cWhy is Python Installed on my Computer?\u201d FAQ\u00b6\nWhat is Python?\u00b6\nPython is a programming language. It\u2019s used for many different applications. 
It\u2019s used in some high schools and colleges as an introductory programming language because Python is easy to learn, but it\u2019s also used by professional software developers at places such as Google, NASA, and Lucasfilm Ltd.\nIf you wish to learn more about Python, start with the Beginner\u2019s Guide to Python.\nWhy is Python installed on my machine?\u00b6\nIf you find Python installed on your system but don\u2019t remember installing it, there are several possible ways it could have gotten there.\nPerhaps another user on the computer wanted to learn programming and installed it; you\u2019ll have to figure out who\u2019s been using the machine and might have installed it.\nA third-party application installed on the machine might have been written in Python and included a Python installation. There are many such applications, from GUI programs to network servers and administrative scripts.\nSome Windows machines also have Python installed. At this writing we\u2019re aware of computers from Hewlett-Packard and Compaq that include Python. Apparently some of HP/Compaq\u2019s administrative tools are written in Python.\nMany Unix-compatible operating systems, such as macOS and some Linux distributions, have Python installed by default; it\u2019s included in the base installation.\nCan I delete Python?\u00b6\nThat depends on where Python came from.\nIf someone installed it deliberately, you can remove it without hurting anything. On Windows, use the Add/Remove Programs icon in the Control Panel.\nIf Python was installed by a third-party application, you can also remove it, but that application will no longer work. You should use that application\u2019s uninstaller rather than removing Python directly.\nIf Python came with your operating system, removing it is not recommended. If you remove it, whatever tools were written in Python will no longer run, and some of them might be important to you. 
Reinstalling the whole system would then be required to fix things again.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 518} +{"url": "https://docs.python.org/3/faq/gui.html", "title": null, "content": "Graphic User Interface FAQ\u00b6\nGeneral GUI Questions\u00b6\nWhat GUI toolkits exist for Python?\u00b6\nStandard builds of Python include an object-oriented interface to the Tcl/Tk widget set, called tkinter. This is probably the easiest to install (since it comes included with most binary distributions of Python) and use. For more info about Tk, including pointers to the source, see the Tcl/Tk home page. Tcl/Tk is fully portable to the macOS, Windows, and Unix platforms.\nDepending on what platform(s) you are aiming at, there are also several alternatives. A list of cross-platform and platform-specific GUI frameworks can be found on the python wiki.\nTkinter questions\u00b6\nHow do I freeze Tkinter applications?\u00b6\nFreeze is a tool to create stand-alone applications. When freezing Tkinter applications, the applications will not be truly stand-alone, as the application will still need the Tcl and Tk libraries.\nOne solution is to ship the application with the Tcl and Tk libraries, and point\nto them at run-time using the TCL_LIBRARY\nand TK_LIBRARY\nenvironment variables.\nVarious third-party freeze libraries such as py2exe and cx_Freeze have handling for Tkinter applications built-in.\nCan I have Tk events handled while waiting for I/O?\u00b6\nOn platforms other than Windows, yes, and you don\u2019t even\nneed threads! But you\u2019ll have to restructure your I/O\ncode a bit. Tk has the equivalent of Xt\u2019s XtAddInput()\ncall, which allows you\nto register a callback function which will be called from the Tk mainloop when\nI/O is possible on a file descriptor. 
See File Handlers.\nI can\u2019t get key bindings to work in Tkinter: why?\u00b6\nAn often-heard complaint is that event handlers bound\nto events with the bind()\nmethod\ndon\u2019t get handled even when the appropriate key is pressed.\nThe most common cause is that the widget to which the binding applies doesn\u2019t have \u201ckeyboard focus\u201d. Check out the Tk documentation for the focus command. Usually a widget is given the keyboard focus by clicking in it (but not for labels; see the takefocus option).", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 504} +{"url": "https://docs.python.org/3/faq/windows.html", "title": null, "content": "Python on Windows FAQ\u00b6\nHow do I run a Python program under Windows?\u00b6\nThis is not necessarily a straightforward question. If you are already familiar with running programs from the Windows command line then everything will seem obvious; otherwise, you might need a little more guidance.\nUnless you use some sort of integrated development environment, you will end up\ntyping Windows commands into what is referred to as a\n\u201cCommand prompt window\u201d. Usually you can create such a window from your\nsearch bar by searching for cmd\n. You should be able to recognize\nwhen you have started such a window because you will see a Windows \u201ccommand\nprompt\u201d, which usually looks like this:\nC:\\>\nThe letter may be different, and there might be other things after it, so you might just as easily see something like:\nD:\\YourName\\Projects\\Python>\ndepending on how your computer has been set up and what else you have recently done with it. Once you have started such a window, you are well on the way to running Python programs.\nYou need to realize that your Python scripts have to be processed by another program called the Python interpreter. The interpreter reads your script, compiles it into bytecodes, and then executes the bytecodes to run your program. 
So, how do you arrange for the interpreter to handle your Python?\nFirst, you need to make sure that your command window recognises the word\n\u201cpy\u201d as an instruction to start the interpreter. If you have opened a\ncommand window, you should try entering the command py\nand hitting\nreturn:\nC:\\Users\\YourName> py\nYou should then see something like:\nPython 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:04:45) [MSC v.1900 32 bit (Intel)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>>\nYou have started the interpreter in \u201cinteractive mode\u201d. That means you can enter Python statements or expressions interactively and have them executed or evaluated while you wait. This is one of Python\u2019s strongest features. Check it by entering a few expressions of your choice and seeing the results:\n>>> print(\"Hello\")\nHello\n>>> \"Hello\" * 3\n'HelloHelloHello'\nMany people use the interactive mode as a convenient yet highly programmable\ncalculator. When you want to end your interactive Python session,\ncall the exit()\nfunction or hold the Ctrl key down\nwhile you enter a Z, then hit the \u201cEnter\u201d key to get\nback to your Windows command prompt.\nYou may also find that you have a Start-menu entry such as >>>\nprompt in a new window. If so, the window will disappear\nafter you call the exit()\nfunction or enter the Ctrl-Z\ncharacter; Windows is running a single \u201cpython\u201d\ncommand in the window, and closes it when you terminate the interpreter.\nNow that we know the py\ncommand is recognized, you can give your\nPython script to it. You\u2019ll have to give either an absolute or a\nrelative path to the Python script. 
Let\u2019s say your Python script is\nlocated on your desktop and is named hello.py\n, and your command\nprompt is nicely opened in your home directory so you\u2019re seeing something\nsimilar to:\nC:\\Users\\YourName>\nSo now you\u2019ll ask the py\ncommand to give your script to Python by\ntyping py\nfollowed by your script path:\nC:\\Users\\YourName> py Desktop\\hello.py\nhello\nHow do I make Python scripts executable?\u00b6\nOn Windows, the standard Python installer already associates the .py\nextension with a file type (Python.File) and gives that file type an open\ncommand that runs the interpreter (D:\\Program Files\\Python\\python.exe \"%1\"\n%*\n). This is enough to make scripts executable from the command prompt as\n\u2018foo.py\u2019. If you\u2019d rather be able to execute the script by simply typing \u2018foo\u2019\nwith no extension, you need to add .py to the PATHEXT environment variable.\nWhy does Python sometimes take so long to start?\u00b6\nUsually Python starts very quickly on Windows, but occasionally there are bug reports that Python suddenly begins to take a long time to start up. This is made even more puzzling because Python will work fine on other Windows systems which appear to be configured identically.\nThe problem may be caused by a misconfiguration of virus checking software on the problem machine. Some virus scanners have been known to introduce startup overhead of two orders of magnitude when the scanner is configured to monitor all reads from the filesystem. Try checking the configuration of virus scanning software on your systems to ensure that they are indeed configured identically. McAfee, when configured to scan all file system read activity, is a particular offender.\nHow do I make an executable from a Python script?\u00b6\nSee How can I create a stand-alone binary from a Python script? 
for a list of tools that can be used to make executables.\nIs a *.pyd\nfile the same as a DLL?\u00b6\nYes, .pyd files are DLLs, but there are a few differences. If you have a DLL\nnamed foo.pyd\n, then it must have a function PyInit_foo()\n. You can then\nwrite Python \u201cimport foo\u201d, and Python will search for foo.pyd (as well as\nfoo.py, foo.pyc) and if it finds it, will attempt to call PyInit_foo()\nto\ninitialize it. You do not link your .exe with foo.lib, as that would cause\nWindows to require the DLL to be present.\nNote that the search path for foo.pyd is PYTHONPATH, not the same as the path\nthat Windows uses to search for foo.dll. Also, foo.pyd need not be present to\nrun your program, whereas if you linked your program with a DLL, the DLL is\nrequired. Of course, foo.pyd is required if you want to say import foo\n. In\na DLL, linkage is declared in the source code with __declspec(dllexport)\n.\nIn a .pyd, linkage is defined in a list of available functions.\nHow can I embed Python into a Windows application?\u00b6\nEmbedding the Python interpreter in a Windows app can be summarized as follows:\nDo not build Python into your .exe file directly. On Windows, Python must be a DLL to handle importing modules that are themselves DLLs. (This is the first key undocumented fact.) Instead, link to\npythonNN.dll\n; it is typically installed in C:\\Windows\\System\n. NN is the Python version, a number such as \u201c33\u201d for Python 3.3.\nYou can link to Python in two different ways. Load-time linking means linking against\npythonNN.lib\n, while run-time linking means linking against pythonNN.dll\n. (General note: pythonNN.lib\nis the so-called \u201cimport lib\u201d corresponding to pythonNN.dll\n. It merely defines symbols for the linker.)\nRun-time linking greatly simplifies link options; everything happens at run time. Your code must load\npythonNN.dll\nusing the Windows LoadLibraryEx()\nroutine. 
The code must also use access routines and data in pythonNN.dll\n(that is, Python\u2019s C APIs) using pointers obtained by the Windows GetProcAddress()\nroutine. Macros can make using these pointers transparent to any C code that calls routines in Python\u2019s C API.\nIf you use SWIG, it is easy to create a Python \u201cextension module\u201d that will make the app\u2019s data and methods available to Python. SWIG will handle just about all the grungy details for you. The result is C code that you link into your .exe file (!). You do not have to create a DLL file, and this also simplifies linking.\nSWIG will create an init function (a C function) whose name depends on the name of the extension module. For example, if the name of the module is leo, the init function will be called initleo(). If you use SWIG shadow classes, as you should, the init function will be called initleoc(). This initializes a mostly hidden helper class used by the shadow class.\nThe reason you can link the C code in step 2 into your .exe file is that calling the initialization function is equivalent to importing the module into Python! (This is the second key undocumented fact.)\nIn short, you can use the following code to initialize the Python interpreter with your extension module.\n#include <Python.h> ... Py_Initialize(); // Initialize Python. initmyAppc(); // Initialize (import) the helper class. PyRun_SimpleString(\"import myApp\"); // Import the shadow class.\nThere are two problems with Python\u2019s C API which will become apparent if you use a compiler other than MSVC, the compiler used to build pythonNN.dll.\nProblem 1: The so-called \u201cVery High Level\u201d functions that take\nFILE *\narguments will not work in a multi-compiler environment because each compiler\u2019s notion of a struct FILE\nwill be different. 
From an implementation standpoint these are very low level functions.\nProblem 2: SWIG generates the following code when generating wrappers to void functions:\nPy_INCREF(Py_None); _resultobj = Py_None; return _resultobj;\nAlas, Py_None is a macro that expands to a reference to a complex data structure called _Py_NoneStruct inside pythonNN.dll. Again, this code will fail in a multi-compiler environment. Replace such code by:\nreturn Py_BuildValue(\"\");\nIt may be possible to use SWIG\u2019s\n%typemap\ncommand to make the change automatically, though I have not been able to get this to work (I\u2019m a complete SWIG newbie).\nUsing a Python shell script to put up a Python interpreter window from inside your Windows app is not a good idea; the resulting window will be independent of your app\u2019s windowing system. Rather, you (or the wxPythonWindow class) should create a \u201cnative\u201d interpreter window. It is easy to connect that window to the Python interpreter. You can redirect Python\u2019s I/O to _any_ object that supports read and write, so all you need is a Python object (defined in your extension module) that contains read() and write() methods.\nHow do I keep editors from inserting tabs into my Python source?\u00b6\nThe FAQ does not recommend using tabs, and the Python style guide, PEP 8, recommends 4 spaces for distributed Python code; this is also the Emacs python-mode default.\nUnder any editor, mixing tabs and spaces is a bad idea. MSVC is no different in this respect, and is easily configured to use spaces: Take Tools \u2023 Options \u2023 Tabs\n, and for file type \u201cDefault\u201d set \u201cTab size\u201d and \u201cIndent size\u201d to 4, and select the \u201cInsert spaces\u201d radio button.\nPython raises IndentationError\nor TabError\nif mixed tabs\nand spaces are causing problems in leading whitespace.\nYou may also run the tabnanny\nmodule to check a directory tree\nin batch mode.\nHow do I check for a keypress without blocking?\u00b6\nUse the msvcrt\nmodule. 
This is a standard Windows-specific extension module.\nIt defines a function kbhit()\nwhich checks whether a keyboard hit is\npresent, and getch()\nwhich gets one character without echoing it.\nHow do I solve the missing api-ms-win-crt-runtime-l1-1-0.dll error?\u00b6\nThis can occur on Python 3.5 and later when using Windows 8.1 or earlier without all updates having been installed. First ensure your operating system is supported and is up to date, and if that does not resolve the issue, visit the Microsoft support page for guidance on manually installing the C Runtime update.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2676} +{"url": "https://docs.python.org/3/faq/extending.html", "title": null, "content": "Extending/Embedding FAQ\u00b6\nCan I create my own functions in C?\u00b6\nYes, you can create built-in modules containing functions, variables, exceptions and even new types in C. This is explained in the document Extending and Embedding the Python Interpreter.\nMost intermediate or advanced Python books will also cover this topic.\nCan I create my own functions in C++?\u00b6\nYes, using the C compatibility features found in C++. Place extern \"C\" {\n... }\naround the Python include files and put extern \"C\"\nbefore each\nfunction that is going to be called by the Python interpreter. Global or static\nC++ objects with constructors are probably not a good idea.\nWriting C is hard; are there any alternatives?\u00b6\nThere are a number of alternatives to writing your own C extensions, depending on what you\u2019re trying to do. 
Recommended third party tools offer both simpler and more sophisticated approaches to creating C and C++ extensions for Python.\nHow can I execute arbitrary Python statements from C?\u00b6\nThe highest-level function to do this is PyRun_SimpleString()\nwhich takes\na single string argument to be executed in the context of the module\n__main__\nand returns 0\nfor success and -1\nwhen an exception occurred\n(including SyntaxError\n). If you want more control, use\nPyRun_String()\n; see the source for PyRun_SimpleString()\nin\nPython/pythonrun.c\n.\nHow can I evaluate an arbitrary Python expression from C?\u00b6\nCall the function PyRun_String()\nfrom the previous question with the\nstart symbol Py_eval_input\n; it parses an expression, evaluates it and\nreturns its value.\nHow do I extract C values from a Python object?\u00b6\nThat depends on the object\u2019s type. If it\u2019s a tuple, PyTuple_Size()\nreturns its length and PyTuple_GetItem()\nreturns the item at a specified\nindex. Lists have similar functions, PyList_Size()\nand\nPyList_GetItem()\n.\nFor bytes, PyBytes_Size()\nreturns its length and\nPyBytes_AsStringAndSize()\nprovides a pointer to its value and its\nlength. Note that Python bytes objects may contain null bytes so C\u2019s\nstrlen()\nshould not be used.\nTo test the type of an object, first make sure it isn\u2019t NULL\n, and then use\nPyBytes_Check()\n, PyTuple_Check()\n, PyList_Check()\n, etc.\nThere is also a high-level API to Python objects which is provided by the\nso-called \u2018abstract\u2019 interface \u2013 read Include/abstract.h\nfor further\ndetails. It allows interfacing with any kind of Python sequence using calls\nlike PySequence_Length()\n, PySequence_GetItem()\n, etc. as well\nas many other useful protocols such as numbers (PyNumber_Index()\net\nal.) and mappings in the PyMapping APIs.\nHow do I use Py_BuildValue() to create a tuple of arbitrary length?\u00b6\nYou can\u2019t. 
Use PyTuple_Pack()\ninstead.\nHow do I call an object\u2019s method from C?\u00b6\nThe PyObject_CallMethod()\nfunction can be used to call an arbitrary\nmethod of an object. The parameters are the object, the name of the method to\ncall, a format string like that used with Py_BuildValue()\n, and the\nargument values:\nPyObject *\nPyObject_CallMethod(PyObject *object, const char *method_name,\nconst char *arg_format, ...);\nThis works for any object that has methods \u2013 whether built-in or user-defined.\nYou are responsible for eventually Py_DECREF()\n\u2018ing the return value.\nTo call, e.g., a file object\u2019s \u201cseek\u201d method with arguments 10, 0 (assuming the file object pointer is \u201cf\u201d):\nres = PyObject_CallMethod(f, \"seek\", \"(ii)\", 10, 0);\nif (res == NULL) {\n... an exception occurred ...\n}\nelse {\nPy_DECREF(res);\n}\nNote that since PyObject_CallObject()\nalways wants a tuple for the\nargument list, to call a function without arguments, pass \u201c()\u201d for the format,\nand to call a function with one argument, surround the argument in parentheses,\ne.g. \u201c(i)\u201d.\nHow do I catch the output from PyErr_Print() (or anything that prints to stdout/stderr)?\u00b6\nIn Python code, define an object that supports the write()\nmethod. Assign\nthis object to sys.stdout\nand sys.stderr\n. Call print_error, or\njust allow the standard traceback mechanism to work. Then, the output will go\nwherever your write()\nmethod sends it.\nThe easiest way to do this is to use the io.StringIO\nclass:\n>>> import io, sys\n>>> sys.stdout = io.StringIO()\n>>> print('foo')\n>>> print('hello world!')\n>>> sys.stderr.write(sys.stdout.getvalue())\nfoo\nhello world!\nA custom object to do the same would look like this:\n>>> import io, sys\n>>> class StdoutCatcher(io.TextIOBase):\n... def __init__(self):\n... self.data = []\n... def write(self, stuff):\n... 
self.data.append(stuff)\n...\n>>> import sys\n>>> sys.stdout = StdoutCatcher()\n>>> print('foo')\n>>> print('hello world!')\n>>> sys.stderr.write(''.join(sys.stdout.data))\nfoo\nhello world!\nHow do I access a module written in Python from C?\u00b6\nYou can get a pointer to the module object as follows:\nmodule = PyImport_ImportModule(\"<modulename>\");\nIf the module hasn\u2019t been imported yet (i.e. it is not yet present in\nsys.modules\n), this initializes the module; otherwise it simply returns\nthe value of sys.modules[\"<modulename>\"]\n. Note that it doesn\u2019t enter the\nmodule into any namespace \u2013 it only ensures it has been initialized and is\nstored in sys.modules\n.\nYou can then access the module\u2019s attributes (i.e. any name defined in the module) as follows:\nattr = PyObject_GetAttrString(module, \"<attrname>\");\nCalling PyObject_SetAttrString()\nto assign to variables in the module\nalso works.\nHow do I interface to C++ objects from Python?\u00b6\nDepending on your requirements, there are many approaches. To do this manually, begin by reading the \u201cExtending and Embedding\u201d document. Realize that for the Python run-time system, there isn\u2019t a whole lot of difference between C and C++ \u2013 so the strategy of building a new Python type around a C structure (pointer) type will also work for C++ objects.\nFor C++ libraries, see Writing C is hard; are there any alternatives?.\nI added a module using the Setup file and the make fails; why?\u00b6\nSetup must end in a newline; if there is no newline there, the build process fails. 
(Fixing this requires some ugly shell script hackery, and this bug is so minor that it doesn\u2019t seem worth the effort.)\nHow do I debug an extension?\u00b6\nWhen using GDB with dynamically loaded extensions, you can\u2019t set a breakpoint in your extension until your extension is loaded.\nIn your .gdbinit\nfile (or interactively), add the command:\nbr _PyImport_LoadDynamicModule\nThen, when you run GDB:\n$ gdb /local/bin/python\n(gdb) run myscript.py\n(gdb) continue # repeat until your extension is loaded\n(gdb) finish # so that your extension is loaded\n(gdb) br myfunction.c:50\n(gdb) continue\nI want to compile a Python module on my Linux system, but some files are missing. Why?\u00b6\nMost packaged versions of Python omit some files required for compiling Python extensions.\nFor Red Hat, install the python3-devel RPM to get the necessary files.\nFor Debian, run apt-get install python3-dev\n.\nHow do I tell \u201cincomplete input\u201d from \u201cinvalid input\u201d?\u00b6\nSometimes you want to emulate the Python interactive interpreter\u2019s behavior, where it gives you a continuation prompt when the input is incomplete (e.g. you typed the start of an \u201cif\u201d statement or you didn\u2019t close your parentheses or triple string quotes), but it gives you a syntax error message immediately when the input is invalid.\nIn Python you can use the codeop\nmodule, which approximates the parser\u2019s\nbehavior sufficiently. IDLE uses this, for example.\nThe easiest way to do it in C is to call PyRun_InteractiveLoop()\n(perhaps\nin a separate thread) and let the Python interpreter handle the input for\nyou. You can also set the PyOS_ReadlineFunctionPointer()\nto point at your\ncustom input function. 
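The codeop module mentioned above distinguishes the three cases directly; a small sketch:

```python
import codeop

# Complete input: compile_command() returns a ready-to-run code object.
assert codeop.compile_command("x = 1") is not None

# Incomplete input: it returns None, meaning "show a continuation prompt".
assert codeop.compile_command("if x:") is None

# Invalid input: it raises SyntaxError immediately.
try:
    codeop.compile_command("if x)")
except SyntaxError:
    print("invalid input")
```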
See Modules/readline.c\nand Parser/myreadline.c\nfor more hints.\nHow do I find undefined g++ symbols __builtin_new or __pure_virtual?\u00b6\nTo dynamically load g++ extension modules, you must recompile Python, relink it\nusing g++ (change LINKCC in the Python Modules Makefile), and link your\nextension module using g++ (e.g., g++ -shared -o mymodule.so mymodule.o\n).\nCan I create an object class with some methods implemented in C and others in Python (e.g. through inheritance)?\u00b6\nYes, you can inherit from built-in classes such as int\n, list\n,\ndict\n, etc.\nThe Boost Python Library (BPL, https://www.boost.org/libs/python/doc/index.html) provides a way of doing this from C++ (i.e. you can inherit from an extension class written in C++ using the BPL).", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2071} +{"url": "https://docs.python.org/3/faq/library.html", "title": null, "content": "Library and Extension FAQ\u00b6\nGeneral Library Questions\u00b6\nHow do I find a module or application to perform task X?\u00b6\nCheck the Library Reference to see if there\u2019s a relevant standard library module. (Eventually you\u2019ll learn what\u2019s in the standard library and will be able to skip this step.)\nFor third-party packages, search the Python Package Index or try Google or another web search engine. Searching for \u201cPython\u201d plus a keyword or two for your topic of interest will usually find something helpful.\nWhere is the math.py (socket.py, regex.py, etc.) 
source file?\u00b6\nIf you can\u2019t find a source file for a module it may be a built-in or\ndynamically loaded module implemented in C, C++ or other compiled language.\nIn this case you may not have the source file or it may be something like\nmathmodule.c\n, somewhere in a C source directory (not on the Python Path).\nThere are (at least) three kinds of modules in Python:\nmodules written in Python (.py);\nmodules written in C and dynamically loaded (.dll, .pyd, .so, .sl, etc);\nmodules written in C and linked with the interpreter; to get a list of these, type:\nimport sys print(sys.builtin_module_names)\nHow do I make a Python script executable on Unix?\u00b6\nYou need to do two things: the script file\u2019s mode must be executable and the\nfirst line must begin with #!\nfollowed by the path of the Python\ninterpreter.\nThe first is done by executing chmod +x scriptfile\nor perhaps chmod 755\nscriptfile\n.\nThe second can be done in a number of ways. The most straightforward way is to write\n#!/usr/local/bin/python\nas the very first line of your file, using the pathname for where the Python interpreter is installed on your platform.\nIf you would like the script to be independent of where the Python interpreter\nlives, you can use the env program. Almost all Unix variants support\nthe following, assuming the Python interpreter is in a directory on the user\u2019s\nPATH\n:\n#!/usr/bin/env python\nDon\u2019t do this for CGI scripts. The PATH\nvariable for CGI scripts is\noften very minimal, so you need to use the actual absolute pathname of the\ninterpreter.\nOccasionally, a user\u2019s environment is so full that the /usr/bin/env program fails; or there\u2019s no env program at all. In that case, you can try the following hack (due to Alex Rezinsky):\n#! /bin/sh\n\"\"\":\"\nexec python $0 ${1+\"$@\"}\n\"\"\"\nThe minor disadvantage is that this defines the script\u2019s __doc__ string. 
However, you can fix that by adding\n__doc__ = \"\"\"...Whatever...\"\"\"\nIs there a curses/termcap package for Python?\u00b6\nFor Unix variants: The standard Python source distribution comes with a curses module in the Modules subdirectory, though it\u2019s not compiled by default. (Note that this is not available in the Windows distribution \u2013 there is no curses module for Windows.)\nThe curses\nmodule supports basic curses features as well as many additional\nfunctions from ncurses and SYSV curses such as colour, alternative character set\nsupport, pads, and mouse support. This means the module isn\u2019t compatible with\noperating systems that only have BSD curses, but there don\u2019t seem to be any\ncurrently maintained OSes that fall into this category.\nIs there an equivalent to C\u2019s onexit() in Python?\u00b6\nThe atexit\nmodule provides a register function that is similar to C\u2019s\nonexit()\n.\nWhy don\u2019t my signal handlers work?\u00b6\nThe most common problem is that the signal handler is declared with the wrong argument list. It is called as\nhandler(signum, frame)\nso it should be declared with two parameters:\ndef handler(signum, frame):\n...\nCommon tasks\u00b6\nHow do I test a Python program or component?\u00b6\nPython comes with two testing frameworks. The doctest\nmodule finds\nexamples in the docstrings for a module and runs them, comparing the output with\nthe expected output given in the docstring.\nThe unittest\nmodule is a fancier testing framework modelled on Java and\nSmalltalk testing frameworks.\nTo make testing easier, you should use good modular design in your program. Your program should have almost all functionality encapsulated in either functions or class methods \u2013 and this sometimes has the surprising and delightful effect of making the program run faster (because local variable accesses are faster than global accesses). 
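The doctest workflow described above can be sketched in a few lines (the double() function here is only an example):

```python
import doctest

def double(x):
    """Return x repeated twice.

    >>> double(2)
    4
    >>> double("ab")
    'abab'
    """
    return x * 2

# Find the examples embedded in the docstrings of this module, run them,
# and compare the actual output with the expected output shown.
result = doctest.testmod()
assert result.failed == 0
```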
Furthermore the program should avoid depending on mutating global variables, since this makes testing much more difficult to do.\nThe \u201cglobal main logic\u201d of your program may be as simple as\nif __name__ == \"__main__\":\nmain_logic()\nat the bottom of the main module of your program.\nOnce your program is organized as a tractable collection of function and class behaviours, you should write test functions that exercise the behaviours. A test suite that automates a sequence of tests can be associated with each module. This sounds like a lot of work, but since Python is so terse and flexible it\u2019s surprisingly easy. You can make coding much more pleasant and fun by writing your test functions in parallel with the \u201cproduction code\u201d, since this makes it easy to find bugs and even design flaws earlier.\n\u201cSupport modules\u201d that are not intended to be the main module of a program may include a self-test of the module.\nif __name__ == \"__main__\":\nself_test()\nEven programs that interact with complex external interfaces may be tested when the external interfaces are unavailable by using \u201cfake\u201d interfaces implemented in Python.\nHow do I create documentation from doc strings?\u00b6\nThe pydoc\nmodule can create HTML from the doc strings in your Python\nsource code. An alternative for creating API documentation purely from\ndocstrings is epydoc. Sphinx can also include docstring content.\nHow do I get a single keypress at a time?\u00b6\nFor Unix variants there are several solutions. It\u2019s straightforward to do this using curses, but curses is a fairly large module to learn.\nThreads\u00b6\nHow do I program using threads?\u00b6\nBe sure to use the threading\nmodule and not the _thread\nmodule.\nThe threading\nmodule builds convenient abstractions on top of the\nlow-level primitives provided by the _thread\nmodule.\nNone of my threads seem to run: why?\u00b6\nAs soon as the main thread exits, all threads are killed. 
Your main thread is running too quickly, giving the threads no time to do any work.\nA simple fix is to add a sleep to the end of the program that\u2019s long enough for all the threads to finish:\nimport threading, time\ndef thread_task(name, n):\nfor i in range(n):\nprint(name, i)\nfor i in range(10):\nT = threading.Thread(target=thread_task, args=(str(i), i))\nT.start()\ntime.sleep(10) # <---------------------------!\nBut now (on many platforms) the threads don\u2019t run in parallel, but appear to run sequentially, one at a time! The reason is that the OS thread scheduler doesn\u2019t start a new thread until the previous thread is blocked.\nA simple fix is to add a tiny sleep to the start of the run function:\ndef thread_task(name, n):\ntime.sleep(0.001) # <--------------------!\nfor i in range(n):\nprint(name, i)\nfor i in range(10):\nT = threading.Thread(target=thread_task, args=(str(i), i))\nT.start()\ntime.sleep(10)\nInstead of trying to guess a good delay value for time.sleep()\n,\nit\u2019s better to use some kind of semaphore mechanism. One idea is to use the\nqueue\nmodule to create a queue object, let each thread append a token to\nthe queue when it finishes, and let the main thread read as many tokens from the\nqueue as there are threads.\nHow do I parcel out work among a bunch of worker threads?\u00b6\nThe easiest way is to use the concurrent.futures\nmodule,\nespecially the ThreadPoolExecutor\nclass.\nOr, if you want fine control over the dispatching algorithm, you can write\nyour own logic manually. Use the queue\nmodule to create a queue\ncontaining a list of jobs. The Queue\nclass maintains a\nlist of objects and has a .put(obj)\nmethod that adds items to the queue and\na .get()\nmethod to return them. The class will take care of the locking\nnecessary to ensure that each job is handed out exactly once.\nHere\u2019s a trivial example:\nimport threading, queue, time\n# The worker thread gets jobs off the queue. 
When the queue is empty, it\n# assumes there will be no more work and exits.\n# (Realistically workers will run until terminated.)\ndef worker():\nprint('Running worker')\ntime.sleep(0.1)\nwhile True:\ntry:\narg = q.get(block=False)\nexcept queue.Empty:\nprint('Worker', threading.current_thread(), end=' ')\nprint('queue empty')\nbreak\nelse:\nprint('Worker', threading.current_thread(), end=' ')\nprint('running with argument', arg)\ntime.sleep(0.5)\n# Create queue\nq = queue.Queue()\n# Start a pool of 5 workers\nfor i in range(5):\nt = threading.Thread(target=worker, name='worker %i' % (i+1))\nt.start()\n# Begin adding work to the queue\nfor i in range(50):\nq.put(i)\n# Give threads time to run\nprint('Main thread sleeping')\ntime.sleep(5)\nWhen run, this will produce the following output:\nRunning worker\nRunning worker\nRunning worker\nRunning worker\nRunning worker\nMain thread sleeping\nWorker running with argument 0\nWorker running with argument 1\nWorker running with argument 2\nWorker running with argument 3\nWorker running with argument 4\nWorker running with argument 5\n...\nConsult the module\u2019s documentation for more details; the Queue\nclass provides a featureful interface.\nWhat kinds of global value mutation are thread-safe?\u00b6\nA global interpreter lock (GIL) is used internally to ensure that only one\nthread runs in the Python VM at a time. In general, Python offers to switch\namong threads only between bytecode instructions; how frequently it switches can\nbe set via sys.setswitchinterval()\n. Each bytecode instruction and\ntherefore all the C implementation code reached from each instruction is\ntherefore atomic from the point of view of a Python program.\nIn theory, this means an exact accounting requires an exact understanding of the PVM bytecode implementation. 
In practice, it means that operations on shared variables of built-in data types (ints, lists, dicts, etc) that \u201clook atomic\u201d really are.\nFor example, the following operations are all atomic (L, L1, L2 are lists, D, D1, D2 are dicts, x, y are objects, i, j are ints):\nL.append(x)\nL1.extend(L2)\nx = L[i]\nx = L.pop()\nL1[i:j] = L2\nL.sort()\nx = y\nx.field = y\nD[x] = y\nD1.update(D2)\nD.keys()\nThese aren\u2019t:\ni = i+1\nL.append(L[-1])\nL[i] = L[j]\nD[x] = D[x] + 1\nOperations that replace other objects may invoke those other objects\u2019\n__del__()\nmethod when their reference count reaches zero, and that can\naffect things. This is especially true for the mass updates to dictionaries and\nlists. When in doubt, use a mutex!\nCan\u2019t we get rid of the Global Interpreter Lock?\u00b6\nThe global interpreter lock (GIL) is often seen as a hindrance to Python\u2019s deployment on high-end multiprocessor server machines, because a multi-threaded Python program effectively only uses one CPU, due to the insistence that (almost) all Python code can only run while the GIL is held.\nWith the approval of PEP 703 work is now underway to remove the GIL from the CPython implementation of Python. Initially it will be implemented as an optional compiler flag when building the interpreter, and so separate builds will be available with and without the GIL. Long-term, the hope is to settle on a single build, once the performance implications of removing the GIL are fully understood. Python 3.13 is likely to be the first release containing this work, although it may not be completely functional in this release.\nThe current work to remove the GIL is based on a fork of Python 3.9 with the GIL removed by Sam Gross. Prior to that, in the days of Python 1.5, Greg Stein actually implemented a comprehensive patch set (the \u201cfree threading\u201d patches) that removed the GIL and replaced it with fine-grained locking. 
Adam Olsen did a similar experiment in his python-safethread project. Unfortunately, both of these earlier experiments exhibited a sharp drop in single-thread performance (at least 30% slower), due to the amount of fine-grained locking necessary to compensate for the removal of the GIL. The Python 3.9 fork is the first attempt at removing the GIL with an acceptable performance impact.\nThe presence of the GIL in current Python releases\ndoesn\u2019t mean that you can\u2019t make good use of Python on multi-CPU machines!\nYou just have to be creative with dividing the work up between multiple\nprocesses rather than multiple threads. The\nProcessPoolExecutor\nclass in the new\nconcurrent.futures\nmodule provides an easy way of doing so; the\nmultiprocessing\nmodule provides a lower-level API in case you want\nmore control over dispatching of tasks.\nJudicious use of C extensions will also help; if you use a C extension to\nperform a time-consuming task, the extension can release the GIL while the\nthread of execution is in the C code and allow other threads to get some work\ndone. Some standard library modules such as zlib\nand hashlib\nalready do this.\nAn alternative approach to reducing the impact of the GIL is to make the GIL a per-interpreter-state lock rather than truly global. This was first implemented in Python 3.12 and is available in the C API. A Python interface to it is expected in Python 3.13. The main limitation to it at the moment is likely to be 3rd party extension modules, since these must be written with multiple interpreters in mind in order to be usable, so many older extension modules will not be usable.\nInput and Output\u00b6\nHow do I delete a file? (And other file questions\u2026)\u00b6\nUse os.remove(filename)\nor os.unlink(filename)\n; for documentation, see\nthe os\nmodule. 
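As a self-contained sketch of the file-removal and directory functions covered in this answer (the filenames are invented, and a temporary directory is used so the example cleans up after itself):

```python
import os
import shutil
import tempfile

# Scratch directory so nothing outside the example is touched.
base = tempfile.mkdtemp()

# Create a file, then delete it with os.remove().
path = os.path.join(base, "example.txt")
with open(path, "w") as f:
    f.write("hello")
os.remove(path)
print(os.path.exists(path))   # False

# os.makedirs() creates intermediate directories;
# shutil.rmtree() deletes an entire directory tree.
nested = os.path.join(base, "a", "b", "c")
os.makedirs(nested)
shutil.rmtree(base)
print(os.path.exists(base))   # False
```

os.unlink() could be substituted for os.remove() with identical effect.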
The two functions are identical; unlink()\nis simply\nthe name of the Unix system call for this function.\nTo remove a directory, use os.rmdir()\n; use os.mkdir()\nto create one.\nos.makedirs(path)\nwill create any intermediate directories in path\nthat\ndon\u2019t exist. os.removedirs(path)\nwill remove intermediate directories as\nlong as they\u2019re empty; if you want to delete an entire directory tree and its\ncontents, use shutil.rmtree()\n.\nTo rename a file, use os.rename(old_path, new_path)\n.\nTo truncate a file, open it using f = open(filename, \"rb+\")\n, and use\nf.truncate(offset)\n; offset defaults to the current seek position. There\u2019s\nalso os.ftruncate(fd, offset)\nfor files opened with os.open()\n, where\nfd is the file descriptor (a small integer).\nThe shutil\nmodule also contains a number of functions to work on files\nincluding copyfile()\n, copytree()\n, and\nrmtree()\n.\nHow do I copy a file?\u00b6\nThe shutil\nmodule contains a copyfile()\nfunction.\nNote that on Windows NTFS volumes, it does not copy\nalternate data streams\nnor resource forks\non macOS HFS+ volumes, though both are now rarely used.\nIt also doesn\u2019t copy file permissions and metadata, though using\nshutil.copy2()\ninstead will preserve most (though not all) of it.\nHow do I read (or write) binary data?\u00b6\nTo read or write complex binary data formats, it\u2019s best to use the struct\nmodule. 
It allows you to take a string containing binary data (usually numbers)\nand convert it to Python objects; and vice versa.\nFor example, the following code reads two 2-byte integers and one 4-byte integer in big-endian format from a file:\nimport struct\nwith open(filename, \"rb\") as f:\n    s = f.read(8)\n    x, y, z = struct.unpack(\">hhl\", s)\nThe \u2018>\u2019 in the format string forces big-endian data; the letter \u2018h\u2019 reads one \u201cshort integer\u201d (2 bytes), and \u2018l\u2019 reads one \u201clong integer\u201d (4 bytes) from the string.\nFor data that is more regular (e.g. a homogeneous list of ints or floats),\nyou can also use the array\nmodule.\nI can\u2019t seem to use os.read() on a pipe created with os.popen(); why?\u00b6\nos.read()\nis a low-level function which takes a file descriptor, a small\ninteger representing the opened file. os.popen()\ncreates a high-level\nfile object, the same type returned by the built-in open()\nfunction.\nThus, to read n bytes from a pipe p created with os.popen()\n, you need to\nuse p.read(n)\n.\nHow do I access the serial (RS232) port?\u00b6\nFor Win32, OSX, Linux, BSD, Jython, IronPython:\nFor Unix, see a Usenet post by Mitch Chapman:\nWhy doesn\u2019t closing sys.stdout (stdin, stderr) really close it?\u00b6\nPython file objects are a high-level layer of abstraction on low-level C file descriptors.\nFor most file objects you create in Python via the built-in open()\nfunction, f.close()\nmarks the Python file object as being closed from\nPython\u2019s point of view, and also arranges to close the underlying C file\ndescriptor. This also happens automatically in f\n\u2019s destructor, when\nf\nbecomes garbage.\nBut stdin, stdout and stderr are treated specially by Python, because of the\nspecial status also given to them by C. 
Running sys.stdout.close()\nmarks\nthe Python-level file object as being closed, but does not close the\nassociated C file descriptor.\nTo close the underlying C file descriptor for one of these three, you should\nfirst be sure that\u2019s what you really want to do (e.g., you may confuse\nextension modules trying to do I/O). If it is, use os.close()\n:\nos.close(stdin.fileno())\nos.close(stdout.fileno())\nos.close(stderr.fileno())\nOr you can use the numeric constants 0, 1 and 2, respectively.\nNetwork/Internet Programming\u00b6\nWhat WWW tools are there for Python?\u00b6\nSee the chapters titled Internet Protocols and Support and Internet Data Handling in the Library Reference Manual. Python has many modules that will help you build server-side and client-side web systems.\nA summary of available frameworks is maintained by Paul Boddie at https://wiki.python.org/moin/WebProgramming.\nWhat module should I use to help with generating HTML?\u00b6\nYou can find a collection of useful links on the Web Programming wiki page.\nHow do I send mail from a Python script?\u00b6\nUse the standard library module smtplib\n.\nHere\u2019s a very simple interactive mail sender that uses it. This method will work on any host that supports an SMTP listener.\nimport sys, smtplib\nfromaddr = input(\"From: \")\ntoaddrs = input(\"To: \").split(',')\nprint(\"Enter message, end with ^D:\")\nmsg = ''\nwhile True:\nline = sys.stdin.readline()\nif not line:\nbreak\nmsg += line\n# The actual mail send\nserver = smtplib.SMTP('localhost')\nserver.sendmail(fromaddr, toaddrs, msg)\nserver.quit()\nA Unix-only alternative uses sendmail. The location of the sendmail program\nvaries between systems; sometimes it is /usr/lib/sendmail\n, sometimes\n/usr/sbin/sendmail\n. The sendmail manual page will help you out. 
Here\u2019s\nsome sample code:\nimport os\nSENDMAIL = \"/usr/sbin/sendmail\" # sendmail location\np = os.popen(\"%s -t -i\" % SENDMAIL, \"w\")\np.write(\"To: receiver@example.com\\n\")\np.write(\"Subject: test\\n\")\np.write(\"\\n\") # blank line separating headers from body\np.write(\"Some text\\n\")\np.write(\"some more text\\n\")\nsts = p.close()\nif sts != 0:\nprint(\"Sendmail exit status\", sts)\nHow do I avoid blocking in the connect() method of a socket?\u00b6\nThe select\nmodule is commonly used to help with asynchronous I/O on\nsockets.\nTo prevent the TCP connect from blocking, you can set the socket to non-blocking\nmode. Then when you do the connect()\n,\nyou will either connect immediately\n(unlikely) or get an exception that contains the error number as .errno\n.\nerrno.EINPROGRESS\nindicates that the connection is in progress, but hasn\u2019t\nfinished yet. Different OSes will return different values, so you\u2019re going to\nhave to check what\u2019s returned on your system.\nYou can use the connect_ex()\nmethod\nto avoid creating an exception.\nIt will just return the errno value.\nTo poll, you can call connect_ex()\nagain later\n\u2013 0\nor errno.EISCONN\nindicate that you\u2019re connected \u2013 or you can pass this\nsocket to select.select()\nto check if it\u2019s writable.\nDatabases\u00b6\nAre there any interfaces to database packages in Python?\u00b6\nYes.\nInterfaces to disk-based hashes such as DBM\nand GDBM\nare also included with standard Python. There is also the\nsqlite3\nmodule, which provides a lightweight disk-based relational\ndatabase.\nSupport for most relational databases is available. 
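As a minimal, self-contained illustration of the sqlite3 module mentioned above (the table and data are invented; an in-memory database is used so nothing is written to disk):

```python
import sqlite3

# ":memory:" keeps the whole database in RAM.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE lang (name TEXT, first_appeared INTEGER)")
cur.executemany("INSERT INTO lang VALUES (?, ?)",
                [("Python", 1991), ("C", 1972)])
con.commit()

# Parameterized queries use "?" placeholders.
cur.execute("SELECT name FROM lang WHERE first_appeared > ?", (1980,))
rows = cur.fetchall()
print(rows)   # [('Python',)]
con.close()
```

Replacing ":memory:" with a filename gives a persistent, disk-based database.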
See the DatabaseProgramming wiki page for details.\nHow do you implement persistent objects in Python?\u00b6\nThe pickle\nlibrary module solves this in a very general way (though you\nstill can\u2019t store things like open files, sockets or windows), and the\nshelve\nlibrary module uses pickle and (g)dbm to create persistent\nmappings containing arbitrary Python objects.\nMathematics and Numerics\u00b6\nHow do I generate random numbers in Python?\u00b6\nThe standard module random\nimplements a random number generator. Usage\nis simple:\nimport random\nrandom.random()\nThis returns a random floating-point number in the range [0, 1).\nThere are also many other specialized generators in this module, such as:\nrandrange(a, b) chooses an integer in the range [a, b).\nuniform(a, b) chooses a floating-point number in the range [a, b).\nnormalvariate(mean, sdev) samples the normal (Gaussian) distribution.\nSome higher-level functions operate on sequences directly, such as:\nchoice(S) chooses a random element from a given sequence.\nshuffle(L) shuffles a list in-place, i.e. 
permutes it randomly.\nThere\u2019s also a Random\nclass you can instantiate to create independent\nmultiple random number generators.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 5270}
{"url": "https://docs.python.org/3/faq/design.html", "title": null, "content": "Design and History FAQ\u00b6\nWhy does Python use indentation for grouping of statements?\u00b6\nGuido van Rossum believes that using indentation for grouping is extremely elegant and contributes a lot to the clarity of the average Python program. 
Most people learn to love this feature after a while.\nSince there are no begin/end brackets there cannot be a disagreement between grouping perceived by the parser and the human reader. Occasionally C programmers will encounter a fragment of code like this:\nif (x <= y)\nx++;\ny--;\nz++;\nOnly the x++\nstatement is executed if the condition is true, but the\nindentation leads many to believe otherwise. Even experienced C programmers will\nsometimes stare at it a long time wondering as to why y\nis being decremented even\nfor x > y\n.\nBecause there are no begin/end brackets, Python is much less prone to coding-style conflicts. In C there are many different ways to place the braces. After becoming used to reading and writing code using a particular style, it is normal to feel somewhat uneasy when reading (or being required to write) in a different one.\nMany coding styles place begin/end brackets on a line by themselves. This makes programs considerably longer and wastes valuable screen space, making it harder to get a good overview of a program. Ideally, a function should fit on one screen (say, 20\u201330 lines). 20 lines of Python can do a lot more work than 20 lines of C. This is not solely due to the lack of begin/end brackets \u2013 the lack of declarations and the high-level data types are also responsible \u2013 but the indentation-based syntax certainly helps.\nWhy am I getting strange results with simple arithmetic operations?\u00b6\nSee the next question.\nWhy are floating-point calculations so inaccurate?\u00b6\nUsers are often surprised by results like this:\n>>> 1.2 - 1.0\n0.19999999999999996\nand think it is a bug in Python. It\u2019s not. This has little to do with Python, and much more to do with how the underlying platform handles floating-point numbers.\nThe float\ntype in CPython uses a C double\nfor storage. 
A\nfloat\nobject\u2019s value is stored in binary floating-point with a fixed\nprecision (typically 53 bits) and Python uses C operations, which in turn rely\non the hardware implementation in the processor, to perform floating-point\noperations. This means that as far as floating-point operations are concerned,\nPython behaves like many popular languages including C and Java.\nMany numbers that can be written easily in decimal notation cannot be expressed exactly in binary floating point. For example, after:\n>>> x = 1.2\nthe value stored for x\nis a (very good) approximation to the decimal value\n1.2\n, but is not exactly equal to it. On a typical machine, the actual\nstored value is:\n1.0011001100110011001100110011001100110011001100110011 (binary)\nwhich is exactly:\n1.1999999999999999555910790149937383830547332763671875 (decimal)\nThe typical precision of 53 bits provides Python floats with 15\u201316 decimal digits of accuracy.\nFor a fuller explanation, please see the floating-point arithmetic chapter in the Python tutorial.\nWhy are Python strings immutable?\u00b6\nThere are several advantages.\nOne is performance: knowing that a string is immutable means we can allocate space for it at creation time, and the storage requirements are fixed and unchanging. This is also one of the reasons for the distinction between tuples and lists.\nAnother advantage is that strings in Python are considered as \u201celemental\u201d as numbers. No amount of activity will change the value 8 to anything else, and in Python, no amount of activity will change the string \u201ceight\u201d to anything else.\nWhy must \u2018self\u2019 be used explicitly in method definitions and calls?\u00b6\nThe idea was borrowed from Modula-3. It turns out to be very useful, for a variety of reasons.\nFirst, it\u2019s more obvious that you are using a method or instance attribute\ninstead of a local variable. 
Reading self.x\nor self.meth()\nmakes it\nabsolutely clear that an instance variable or method is used even if you don\u2019t\nknow the class definition by heart. In C++, you can sort of tell by the lack of\na local variable declaration (assuming globals are rare or easily recognizable)\n\u2013 but in Python, there are no local variable declarations, so you\u2019d have to\nlook up the class definition to be sure. Some C++ and Java coding standards\ncall for instance attributes to have an m_\nprefix, so this explicitness is\nstill useful in those languages, too.\nSecond, it means that no special syntax is necessary if you want to explicitly\nreference or call the method from a particular class. In C++, if you want to\nuse a method from a base class which is overridden in a derived class, you have\nto use the ::\noperator \u2013 in Python you can write\nbaseclass.methodname(self, )\n. This is particularly useful\nfor __init__()\nmethods, and in general in cases where a derived class\nmethod wants to extend the base class method of the same name and thus has to\ncall the base class method somehow.\nFinally, for instance variables it solves a syntactic problem with assignment:\nsince local variables in Python are (by definition!) those variables to which a\nvalue is assigned in a function body (and that aren\u2019t explicitly declared\nglobal), there has to be some way to tell the interpreter that an assignment was\nmeant to assign to an instance variable instead of to a local variable, and it\nshould preferably be syntactic (for efficiency reasons). C++ does this through\ndeclarations, but Python doesn\u2019t have declarations and it would be a pity having\nto introduce them just for this purpose. Using the explicit self.var\nsolves\nthis nicely. Similarly, for using instance variables, having to write\nself.var\nmeans that references to unqualified names inside a method don\u2019t\nhave to search the instance\u2019s directories. 
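A small sketch of the second point, calling an overridden base-class method explicitly by passing self (the class names here are invented for illustration):

```python
class Base:
    def greet(self):
        return "base"

class Derived(Base):
    def greet(self):
        # No special operator needed: just baseclass.methodname(self).
        # This extends the base-class method of the same name.
        return Base.greet(self) + "+derived"

print(Derived().greet())   # base+derived
```

The same pattern is what makes explicit __init__() chaining to a base class straightforward.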
To put it another way, local\nvariables and instance variables live in two different namespaces, and you need\nto tell Python which namespace to use.\nWhy can\u2019t I use an assignment in an expression?\u00b6\nStarting in Python 3.8, you can!\nAssignment expressions using the walrus operator :=\nassign a variable in an\nexpression:\nwhile chunk := fp.read(200):\nprint(chunk)\nSee PEP 572 for more information.\nWhy does Python use methods for some functionality (e.g. list.index()) but functions for other (e.g. len(list))?\u00b6\nAs Guido said:\n(a) For some operations, prefix notation just reads better than postfix \u2013 prefix (and infix!) operations have a long tradition in mathematics which likes notations where the visuals help the mathematician thinking about a problem. Compare the easy with which we rewrite a formula like x*(a+b) into x*a + x*b to the clumsiness of doing the same thing using a raw OO notation.\n(b) When I read code that says len(x) I know that it is asking for the length of something. This tells me two things: the result is an integer, and the argument is some kind of container. To the contrary, when I read x.len(), I have to already know that x is some kind of container implementing an interface or inheriting from a class that has a standard len(). Witness the confusion we occasionally have when a class that is not implementing a mapping has a get() or keys() method, or something that isn\u2019t a file has a write() method.\n\u2014https://mail.python.org/pipermail/python-3000/2006-November/004643.html\nWhy is join() a string method instead of a list or tuple method?\u00b6\nStrings became much more like other standard types starting in Python 1.6, when methods were added which give the same functionality that has always been available using the functions of the string module. 
Most of these new methods have been widely accepted, but the one which appears to make some programmers feel uncomfortable is:\n\", \".join(['1', '2', '4', '8', '16'])\nwhich gives the result:\n\"1, 2, 4, 8, 16\"\nThere are two common arguments against this usage.\nThe first runs along the lines of: \u201cIt looks really ugly using a method of a string literal (string constant)\u201d, to which the answer is that it might, but a string literal is just a fixed value. If the methods are to be allowed on names bound to strings there is no logical reason to make them unavailable on literals.\nThe second objection is typically cast as: \u201cI am really telling a sequence to\njoin its members together with a string constant\u201d. Sadly, you aren\u2019t. For some\nreason there seems to be much less difficulty with having split()\nas\na string method, since in that case it is easy to see that\n\"1, 2, 4, 8, 16\".split(\", \")\nis an instruction to a string literal to return the substrings delimited by the given separator (or, by default, arbitrary runs of white space).\njoin()\nis a string method because in using it you are telling the\nseparator string to iterate over a sequence of strings and insert itself between\nadjacent elements. This method can be used with any argument which obeys the\nrules for sequence objects, including any new classes you might define yourself.\nSimilar methods exist for bytes and bytearray objects.\nHow fast are exceptions?\u00b6\nA try\n/except\nblock is extremely efficient if no exceptions\nare raised. Actually\ncatching an exception is expensive. In versions of Python prior to 2.0 it was\ncommon to use this idiom:\ntry:\nvalue = mydict[key]\nexcept KeyError:\nmydict[key] = getvalue(key)\nvalue = mydict[key]\nThis only made sense when you expected the dict to have the key almost all the time. 
If that wasn\u2019t the case, you coded it like this:\nif key in mydict:\nvalue = mydict[key]\nelse:\nvalue = mydict[key] = getvalue(key)\nFor this specific case, you could also use value = dict.setdefault(key,\ngetvalue(key))\n, but only if the getvalue()\ncall is cheap enough because it\nis evaluated in all cases.\nWhy isn\u2019t there a switch or case statement in Python?\u00b6\nIn general, structured switch statements execute one block of code\nwhen an expression has a particular value or set of values.\nSince Python 3.10 one can easily match literal values, or constants\nwithin a namespace, with a match ... case\nstatement.\nAn older alternative is a sequence of if... elif... elif... else\n.\nFor cases where you need to choose from a very large number of possibilities, you can create a dictionary mapping case values to functions to call. For example:\nfunctions = {'a': function_1,\n'b': function_2,\n'c': self.method_1}\nfunc = functions[value]\nfunc()\nFor calling methods on objects, you can simplify yet further by using the\ngetattr()\nbuilt-in to retrieve methods with a particular name:\nclass MyVisitor:\ndef visit_a(self):\n...\ndef dispatch(self, value):\nmethod_name = 'visit_' + str(value)\nmethod = getattr(self, method_name)\nmethod()\nIt\u2019s suggested that you use a prefix for the method names, such as visit_\nin\nthis example. Without such a prefix, if values are coming from an untrusted\nsource, an attacker would be able to call any method on your object.\nImitating switch with fallthrough, as with C\u2019s switch-case-default, is possible, much harder, and less needed.\nCan\u2019t you emulate threads in the interpreter instead of relying on an OS-specific thread implementation?\u00b6\nAnswer 1: Unfortunately, the interpreter pushes at least one C stack frame for each Python stack frame. Also, extensions can call back into Python at almost random moments. 
Therefore, a complete threads implementation requires thread support for C.\nAnswer 2: Fortunately, there is Stackless Python, which has a completely redesigned interpreter loop that avoids the C stack.\nWhy can\u2019t lambda expressions contain statements?\u00b6\nPython lambda expressions cannot contain statements because Python\u2019s syntactic framework can\u2019t handle statements nested inside expressions. However, in Python, this is not a serious problem. Unlike lambda forms in other languages, where they add functionality, Python lambdas are only a shorthand notation if you\u2019re too lazy to define a function.\nFunctions are already first class objects in Python, and can be declared in a local scope. Therefore the only advantage of using a lambda instead of a locally defined function is that you don\u2019t need to invent a name for the function \u2013 but that\u2019s just a local variable to which the function object (which is exactly the same type of object that a lambda expression yields) is assigned!\nCan Python be compiled to machine code, C or some other language?\u00b6\nCython compiles a modified version of Python with optional annotations into C extensions. Nuitka is an up-and-coming compiler of Python into C++ code, aiming to support the full Python language.\nHow does Python manage memory?\u00b6\nThe details of Python memory management depend on the implementation. The\nstandard implementation of Python, CPython, uses reference counting to\ndetect inaccessible objects, and another mechanism to collect reference cycles,\nperiodically executing a cycle detection algorithm which looks for inaccessible\ncycles and deletes the objects involved. The gc\nmodule provides functions\nto perform a garbage collection, obtain debugging statistics, and tune the\ncollector\u2019s parameters.\nOther implementations (such as Jython or PyPy), however, can rely on a different mechanism such as a full-blown garbage collector. 
This difference can cause some subtle porting problems if your Python code depends on the behavior of the reference counting implementation.\nIn some Python implementations, the following code (which is fine in CPython) will probably run out of file descriptors:\nfor file in very_long_list_of_files:\nf = open(file)\nc = f.read(1)\nIndeed, using CPython\u2019s reference counting and destructor scheme, each new\nassignment to f\ncloses the previous file. With a traditional GC, however,\nthose file objects will only get collected (and closed) at varying and possibly\nlong intervals.\nIf you want to write code that will work with any Python implementation,\nyou should explicitly close the file or use the with\nstatement;\nthis will work regardless of memory management scheme:\nfor file in very_long_list_of_files:\nwith open(file) as f:\nc = f.read(1)\nWhy doesn\u2019t CPython use a more traditional garbage collection scheme?\u00b6\nFor one thing, this is not a C standard feature and hence it\u2019s not portable. (Yes, we know about the Boehm GC library. It has bits of assembler code for most common platforms, not for all of them, and although it is mostly transparent, it isn\u2019t completely transparent; patches are required to get Python to work with it.)\nTraditional GC also becomes a problem when Python is embedded into other\napplications. While in a standalone Python it\u2019s fine to replace the standard\nmalloc()\nand free()\nwith versions provided by the GC library, an application\nembedding Python may want to have its own substitute for malloc()\nand free()\n,\nand may not want Python\u2019s. Right now, CPython works with anything that\nimplements malloc()\nand free()\nproperly.\nWhy isn\u2019t all memory freed when CPython exits?\u00b6\nObjects referenced from the global namespaces of Python modules are not always deallocated when Python exits. This may happen if there are circular references. 
There are also certain bits of memory that are allocated by the C library that are impossible to free (e.g. a tool like Purify will complain about these). Python is, however, aggressive about cleaning up memory on exit and does try to destroy every single object.\nIf you want to force Python to delete certain things on deallocation use the\natexit\nmodule to run a function that will force those deletions.\nWhy are there separate tuple and list data types?\u00b6\nLists and tuples, while similar in many respects, are generally used in\nfundamentally different ways. Tuples can be thought of as being similar to\nPascal records\nor C structs\n; they\u2019re small collections of related data which may\nbe of different types which are operated on as a group. For example, a\nCartesian coordinate is appropriately represented as a tuple of two or three\nnumbers.\nLists, on the other hand, are more like arrays in other languages. They tend to\nhold a varying number of objects all of which have the same type and which are\noperated on one-by-one. For example, os.listdir('.')\nreturns a list of\nstrings representing the files in the current directory. Functions which\noperate on this output would generally not break if you added another file or\ntwo to the directory.\nTuples are immutable, meaning that once a tuple has been created, you can\u2019t replace any of its elements with a new value. Lists are mutable, meaning that you can always change a list\u2019s elements. Only immutable elements can be used as dictionary keys, and hence only tuples and not lists can be used as keys.\nHow are lists implemented in CPython?\u00b6\nCPython\u2019s lists are really variable-length arrays, not Lisp-style linked lists. 
The implementation uses a contiguous array of references to other objects, and keeps a pointer to this array and the array\u2019s length in a list head structure.\nThis makes indexing a list a[i]\nan operation whose cost is independent of\nthe size of the list or the value of the index.\nWhen items are appended or inserted, the array of references is resized. Some cleverness is applied to improve the performance of appending items repeatedly; when the array must be grown, some extra space is allocated so the next few times don\u2019t require an actual resize.\nHow are dictionaries implemented in CPython?\u00b6\nCPython\u2019s dictionaries are implemented as resizable hash tables. Compared to B-trees, this gives better performance for lookup (the most common operation by far) under most circumstances, and the implementation is simpler.\nDictionaries work by computing a hash code for each key stored in the dictionary\nusing the hash()\nbuilt-in function. The hash code varies widely depending\non the key and a per-process seed; for example, 'Python'\ncould hash to\n-539294296\nwhile 'python'\n, a string that differs by a single bit, could hash\nto 1142331976\n. The hash code is then used to calculate a location in an\ninternal array where the value will be stored. Assuming that you\u2019re storing\nkeys that all have different hash values, this means that dictionaries take\nconstant time \u2013 O(1), in Big-O notation \u2013 to retrieve a key.\nWhy must dictionary keys be immutable?\u00b6\nThe hash table implementation of dictionaries uses a hash value calculated from the key value to find the key. If the key were a mutable object, its value could change, and thus its hash could also change. But since whoever changes the key object can\u2019t tell that it was being used as a dictionary key, it can\u2019t move the entry around in the dictionary. Then, when you try to look up the same object in the dictionary it won\u2019t be found because its hash value is different. 
If you tried to look up the old value it wouldn\u2019t be found either, because the value of the object found in that hash bin would be different.\nIf you want a dictionary indexed with a list, simply convert the list to a tuple\nfirst; the function tuple(L)\ncreates a tuple with the same entries as the\nlist L\n. Tuples are immutable and can therefore be used as dictionary keys.\nSome unacceptable solutions that have been proposed:\nHash lists by their address (object ID). This doesn\u2019t work because if you construct a new list with the same value it won\u2019t be found; e.g.:\nmydict = {[1, 2]: '12'} print(mydict[[1, 2]])\nwould raise a\nKeyError\nexception because the id of the[1, 2]\nused in the second line differs from that in the first line. In other words, dictionary keys should be compared using==\n, not usingis\n.Make a copy when using a list as a key. This doesn\u2019t work because the list, being a mutable object, could contain a reference to itself, and then the copying code would run into an infinite loop.\nAllow lists as keys but tell the user not to modify them. This would allow a class of hard-to-track bugs in programs when you forgot or modified a list by accident. It also invalidates an important invariant of dictionaries: every value in\nd.keys()\nis usable as a key of the dictionary.Mark lists as read-only once they are used as a dictionary key. The problem is that it\u2019s not just the top-level object that could change its value; you could use a tuple containing a list as a key. 
Entering anything as a key into a dictionary would require marking all objects reachable from there as read-only \u2013 and again, self-referential objects could cause an infinite loop.\nThere is a trick to get around this if you need to, but use it at your own risk:\nYou can wrap a mutable structure inside a class instance which has both a\n__eq__()\nand a __hash__()\nmethod.\nYou must then make sure that the\nhash value for all such wrapper objects that reside in a dictionary (or other\nhash based structure), remain fixed while the object is in the dictionary (or\nother structure).\nclass ListWrapper:\ndef __init__(self, the_list):\nself.the_list = the_list\ndef __eq__(self, other):\nreturn self.the_list == other.the_list\ndef __hash__(self):\nl = self.the_list\nresult = 98767 - len(l)*555\nfor i, el in enumerate(l):\ntry:\nresult = result + (hash(el) % 9999999) * 1001 + i\nexcept Exception:\nresult = (result % 7777777) + i * 333\nreturn result\nNote that the hash computation is complicated by the possibility that some members of the list may be unhashable and also by the possibility of arithmetic overflow.\nFurthermore it must always be the case that if o1 == o2\n(ie o1.__eq__(o2)\nis True\n) then hash(o1) == hash(o2)\n(ie, o1.__hash__() == o2.__hash__()\n),\nregardless of whether the object is in a dictionary or not. If you fail to meet\nthese restrictions dictionaries and other hash based structures will misbehave.\nIn the case of ListWrapper\n, whenever the wrapper object is in a dictionary the\nwrapped list must not change to avoid anomalies. Don\u2019t do this unless you are\nprepared to think hard about the requirements and the consequences of not\nmeeting them correctly. Consider yourself warned.\nWhy doesn\u2019t list.sort() return the sorted list?\u00b6\nIn situations where performance matters, making a copy of the list just to sort\nit would be wasteful. Therefore, list.sort()\nsorts the list in place. 
In order to remind you of that fact, it does not return the sorted list. This way, you won’t be fooled into accidentally overwriting a list when you need a sorted copy but also need to keep the unsorted version around.
If you want to return a new list, use the built-in sorted() function instead. This function creates a new list from a provided iterable, sorts it and returns it. For example, here’s how to iterate over the keys of a dictionary in sorted order:
for key in sorted(mydict):
    ...  # do whatever with mydict[key]
How do you specify and enforce an interface spec in Python?¶
An interface specification for a module as provided by languages such as C++ and Java describes the prototypes for the methods and functions of the module. Many feel that compile-time enforcement of interface specifications helps in the construction of large programs.
Python 2.6 added an abc module that lets you define Abstract Base Classes (ABCs). You can then use isinstance() and issubclass() to check whether an instance or a class implements a particular ABC. The collections.abc module defines a set of useful ABCs such as Iterable, Container, and MutableMapping.
For Python, many of the advantages of interface specifications can be obtained by an appropriate test discipline for components.
A good test suite for a module can both provide a regression test and serve as a module interface specification and a set of examples. Many Python modules can be run as a script to provide a simple “self test.” Even modules which use complex external interfaces can often be tested in isolation using trivial “stub” emulations of the external interface.
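As a quick sketch of the isinstance()/issubclass() checks described above (the Bag class is a made-up example; the ABC names come from collections.abc):

```python
from collections.abc import Container, Iterable, MutableMapping

# Built-in types already count as implementations of these ABCs.
assert isinstance([1, 2, 3], Iterable)
assert isinstance({1, 2}, Container)
assert isinstance({"a": 1}, MutableMapping)

# A user-defined class is recognized as Iterable merely by defining
# __iter__ -- no explicit registration or inheritance is needed.
class Bag:
    def __init__(self, *items):
        self._items = list(items)

    def __iter__(self):
        return iter(self._items)

assert issubclass(Bag, Iterable)
assert list(Bag(1, 2, 3)) == [1, 2, 3]
```

Such checks test only that the expected methods exist, not that they behave correctly; that part is still the job of a test suite.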
The doctest and unittest modules or third-party test frameworks can be used to construct exhaustive test suites that exercise every line of code in a module.
An appropriate testing discipline can help build large complex applications in Python as well as having interface specifications would. In fact, it can be better because an interface specification cannot test certain properties of a program. For example, the list.append() method is expected to add new elements to the end of some internal list; an interface specification cannot test that your list.append() implementation will actually do this correctly, but it’s trivial to check this property in a test suite.
Writing test suites is very helpful, and you might want to design your code to make it easily tested. One increasingly popular technique, test-driven development, calls for writing parts of the test suite first, before you write any of the actual code. Of course Python allows you to be sloppy and not write test cases at all.
Why is there no goto?¶
In the 1970s people realized that unrestricted goto could lead to messy “spaghetti” code that was hard to understand and revise. In a high-level language, it is also unneeded as long as there are ways to branch (in Python, with if statements and or, and, and if/else expressions) and loop (with while and for statements, possibly containing continue and break).
One can also use exceptions to provide a “structured goto” that works even across function calls. Many feel that exceptions can conveniently emulate all reasonable uses of the go or goto constructs of C, Fortran, and other languages. For example:
class label(Exception): pass  # declare a label

try:
    ...
    if condition: raise label()  # goto label
    ...
except label:  # where to goto
    pass
...
This doesn’t allow you to jump into the middle of a loop, but that’s usually considered an abuse of goto anyway.
Use sparingly.
Why can’t raw strings (r-strings) end with a backslash?¶
More precisely, they can’t end with an odd number of backslashes: the unpaired backslash at the end escapes the closing quote character, leaving an unterminated string.
Raw strings were designed to ease creating input for processors (chiefly regular expression engines) that want to do their own backslash escape processing. Such processors consider an unmatched trailing backslash to be an error anyway, so raw strings disallow that. In return, they allow you to pass on the string quote character by escaping it with a backslash. These rules work well when r-strings are used for their intended purpose.
If you’re trying to build Windows pathnames, note that all Windows system calls accept forward slashes too:
f = open("/mydir/file.txt")  # works fine!
If you’re trying to build a pathname for a DOS command, try e.g. one of
dir = r"\this\is\my\dos\dir" "\\"
dir = r"\this\is\my\dos\dir\ "[:-1]
dir = "\\this\\is\\my\\dos\\dir\\"
Why doesn’t Python have a “with” statement for attribute assignments?¶
Python has a with statement that wraps the execution of a block, calling code on the entrance and exit from the block. Some languages have a construct that looks like this:
with obj:
    a = 1               # equivalent to obj.a = 1
    total = total + 1   # obj.total = obj.total + 1
In Python, such a construct would be ambiguous.
Other languages, such as Object Pascal, Delphi, and C++, use static types, so it’s possible to know, in an unambiguous way, what member is being assigned to. This is the main point of static typing – the compiler always knows the scope of every variable at compile time.
Python uses dynamic types. It is impossible to know in advance which attribute will be referenced at runtime. Member attributes may be added or removed from objects on the fly.
This makes it impossible to know, from a simple reading, what attribute is being referenced: a local one, a global one, or a member attribute?
For instance, take the following incomplete snippet:
def foo(a):
    with a:
        print(x)
The snippet assumes that a must have a member attribute called x. However, there is nothing in Python that tells the interpreter this. What should happen if a is, let us say, an integer? If there is a global variable named x, will it be used inside the with block? As you see, the dynamic nature of Python makes such choices much harder.
The primary benefit of with and similar language features (reduction of code volume) can, however, easily be achieved in Python by assignment. Instead of:
function(args).mydict[index][index].a = 21
function(args).mydict[index][index].b = 42
function(args).mydict[index][index].c = 63
write this:
ref = function(args).mydict[index][index]
ref.a = 21
ref.b = 42
ref.c = 63
This also has the side-effect of increasing execution speed because name bindings are resolved at run-time in Python, and the second version only needs to perform the resolution once.
Similar proposals that would introduce syntax to further reduce code volume, such as using a ‘leading dot’, have been rejected in favour of explicitness (see https://mail.python.org/pipermail/python-ideas/2016-May/040070.html).
Why don’t generators support the with statement?¶
For technical reasons, a generator used directly as a context manager would not work correctly. When, as is most common, a generator is used as an iterator run to completion, no closing is needed. When it is, wrap it as contextlib.closing(generator) in the with statement.
Why are colons required for the if/while/def/class statements?¶
The colon is required primarily to enhance readability (one of the results of the experimental ABC language).
Consider this:
if a == b
    print(a)
versus
if a == b:
    print(a)
Notice how the second one is slightly easier to read. Notice further how a colon sets off the example in this FAQ answer; it’s a standard usage in English.
Another minor reason is that the colon makes it easier for editors with syntax highlighting; they can look for colons to decide when indentation needs to be increased instead of having to do a more elaborate parsing of the program text.
Why does Python allow commas at the end of lists and tuples?¶
Python lets you add a trailing comma at the end of lists, tuples, and dictionaries:
[1, 2, 3,]
('a', 'b', 'c',)
d = {
    "A": [1, 5],
    "B": [6, 7],  # last trailing comma is optional but good style
}
There are several reasons to allow this.
When you have a literal value for a list, tuple, or dictionary spread across multiple lines, it’s easier to add more elements because you don’t have to remember to add a comma to the previous line. The lines can also be reordered without creating a syntax error.
Accidentally omitting the comma can lead to errors that are hard to diagnose. For example:
x = [
    "fee",
    "fie"
    "foo",
    "fum"
]
This list looks like it has four elements, but it actually contains three: “fee”, “fiefoo” and “fum”.
Always adding the comma avoids this source of error.
Allowing the trailing comma may also make programmatic code generation easier.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 7616} +{"url": "https://docs.python.org/3/faq/programming.html", "title": null, "content": "Programming FAQ\u00b6\nGeneral Questions\u00b6\nIs there a source code level debugger with breakpoints, single-stepping, etc.?\u00b6\nYes.\nSeveral debuggers for Python are described
below, and the built-in function breakpoint() allows you to drop into any of them.
The pdb module is a simple but adequate console-mode debugger for Python. It is part of the standard Python library, and is documented in the Library Reference Manual. You can also write your own debugger by using the code for pdb as an example.
The IDLE interactive development environment, which is part of the standard Python distribution (normally available as Tools/scripts/idle3), includes a graphical debugger.
PythonWin is a Python IDE that includes a GUI debugger based on pdb. The PythonWin debugger colors breakpoints and has quite a few cool features such as debugging non-PythonWin programs. PythonWin is available as part of the pywin32 project and as part of the ActivePython distribution.
Eric is an IDE built on PyQt and the Scintilla editing component.
trepan3k is a gdb-like debugger.
Visual Studio Code is an IDE with debugging tools that integrates with version-control software.
There are a number of commercial Python IDEs that include graphical debuggers. They include:
Are there tools to help find bugs or perform static analysis?¶
Yes.
Pylint and Pyflakes do basic checking that will help you catch bugs sooner.
Static type checkers such as Mypy, Pyre, and Pytype can check type hints in Python source code.
How can I create a stand-alone binary from a Python script?¶
You don’t need the ability to compile Python to C code if all you want is a stand-alone program that users can download and run without having to install the Python distribution first. There are a number of tools that determine the set of modules required by a program and bind these modules together with a Python binary to produce a single executable.
One is to use the freeze tool, which is included in the Python source tree as Tools/freeze.
It converts Python byte code to C arrays; with a C compiler you can embed all your modules into a new program, which is then linked with the standard Python modules.
It works by scanning your source recursively for import statements (in both forms) and looking for the modules in the standard Python path as well as in the source directory (for built-in modules). It then turns the bytecode for modules written in Python into C code (array initializers that can be turned into code objects using the marshal module) and creates a custom-made config file that only contains those built-in modules which are actually used in the program. It then compiles the generated C code and links it with the rest of the Python interpreter to form a self-contained binary which acts exactly like your script.
The following packages can help with the creation of console and GUI executables:
Nuitka (Cross-platform)
PyInstaller (Cross-platform)
PyOxidizer (Cross-platform)
cx_Freeze (Cross-platform)
py2app (macOS only)
py2exe (Windows only)
Are there coding standards or a style guide for Python programs?¶
Yes. The coding style required for standard library modules is documented as PEP 8.
Core Language¶
Why am I getting an UnboundLocalError when the variable has a value?¶
It can be a surprise to get the UnboundLocalError in previously working code when it is modified by adding an assignment statement somewhere in the body of a function.
This code:
>>> x = 10
>>> def bar():
...     print(x)
...
>>> bar()
10
works, but this code:
>>> x = 10
>>> def foo():
...     print(x)
...     x += 1
results in an UnboundLocalError:
>>> foo()
Traceback (most recent call last):
...
UnboundLocalError: local variable 'x' referenced before assignment
This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope.
Since the last statement in foo assigns a new value to x, the compiler recognizes it as a local variable. Consequently, when the earlier print(x) attempts to print the uninitialized local variable, an error results.
In the example above you can access the outer scope variable by declaring it global:
>>> x = 10
>>> def foobar():
...     global x
...     print(x)
...     x += 1
...
>>> foobar()
10
This explicit declaration is required in order to remind you that (unlike the superficially analogous situation with class and instance variables) you are actually modifying the value of the variable in the outer scope:
>>> print(x)
11
You can do a similar thing in a nested scope using the nonlocal keyword:
>>> def foo():
...     x = 10
...     def bar():
...         nonlocal x
...         print(x)
...         x += 1
...     bar()
...     print(x)
...
>>> foo()
10
11
What are the rules for local and global variables in Python?¶
In Python, variables that are only referenced inside a function are implicitly global. If a variable is assigned a value anywhere within the function’s body, it’s assumed to be a local unless explicitly declared as global.
Though a bit surprising at first, a moment’s consideration explains this. On one hand, requiring global for assigned variables provides a bar against unintended side-effects. On the other hand, if global was required for all global references, you’d be using global all the time. You’d have to declare as global every reference to a built-in function or to a component of an imported module. This clutter would defeat the usefulness of the global declaration for identifying side-effects.
Why do lambdas defined in a loop with different values all return the same result?¶
Assume you use a for loop to define a few different lambdas (or even plain functions), e.g.:
>>> squares = []
>>> for x in range(5):
...     squares.append(lambda: x**2)
This gives you a list that contains 5 lambdas that calculate x**2.
You might expect that, when called, they would return, respectively, 0, 1, 4, 9, and 16. However, when you actually try you will see that they all return 16:
>>> squares[2]()
16
>>> squares[4]()
16
This happens because x is not local to the lambdas, but is defined in the outer scope, and it is accessed when the lambda is called — not when it is defined. At the end of the loop, the value of x is 4, so all the functions now return 4**2, i.e. 16. You can also verify this by changing the value of x and seeing how the results of the lambdas change:
>>> x = 8
>>> squares[2]()
64
In order to avoid this, you need to save the values in variables local to the lambdas, so that they don’t rely on the value of the global x:
>>> squares = []
>>> for x in range(5):
...     squares.append(lambda n=x: n**2)
Here, n=x creates a new variable n local to the lambda and computed when the lambda is defined so that it has the same value that x had at that point in the loop. This means that the value of n will be 0 in the first lambda, 1 in the second, 2 in the third, and so on. Therefore each lambda will now return the correct result:
>>> squares[2]()
4
>>> squares[4]()
16
Note that this behaviour is not peculiar to lambdas, but applies to regular functions too.
What are the “best practices” for using import in a module?¶
In general, don’t use from modulename import *. Doing so clutters the importer’s namespace, and makes it much harder for linters to detect undefined names.
Import modules at the top of a file. Doing so makes it clear what other modules your code requires and avoids questions of whether the module name is in scope.
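The namespace clutter from "from modulename import *" can be seen in a tiny sketch; here the wildcard import silently rebinds a name we had already defined:

```python
e = 2.0  # a name of our own

from math import *  # wildcard import: also binds math.e, clobbering ours

# 'e' is now Euler's number (~2.71828), not our 2.0 -- and nothing warned us.
assert e != 2.0
```

An explicit "import math" (and then math.e) avoids this, and also lets linters see exactly which names are in scope.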
Using one import per line makes it easy to add and delete module imports, but using multiple imports per line uses less screen space.
It’s good practice if you import modules in the following order:
standard library modules – e.g. sys, os, argparse, re
third-party library modules (anything installed in Python’s site-packages directory) – e.g. dateutil, requests, PIL.Image
locally developed modules
It is sometimes necessary to move imports to a function or class to avoid problems with circular imports. Gordon McMillan says:
Circular imports are fine where both modules use the “import <module>” form of import. They fail when the 2nd module wants to grab a name out of the first (“from module import name”) and the import is at the top level. That’s because names in the 1st are not yet available, because the first module is busy importing the 2nd.
In this case, if the second module is only used in one function, then the import can easily be moved into that function. By the time the import is called, the first module will have finished initializing, and the second module can do its import.
It may also be necessary to move imports out of the top level of code if some of the modules are platform-specific. In that case, it may not even be possible to import all of the modules at the top of the file. In this case, importing the correct modules in the corresponding platform-specific code is a good option.
Only move imports into a local scope, such as inside a function definition, if it’s necessary to solve a problem such as avoiding a circular import or if you are trying to reduce the initialization time of a module. This technique is especially helpful if many of the imports are unnecessary depending on how the program executes. You may also want to move imports into a function if the modules are only ever used in that function.
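The "move the import into the function" technique looks like this in miniature (json here merely stands in for any module that is expensive to load or involved in an import cycle):

```python
def report(data):
    # The import runs on the first call; later calls find the module
    # already cached in sys.modules, so they cost only a dict lookup.
    import json
    return json.dumps(data, sort_keys=True)

print(report({"b": 2, "a": 1}))  # {"a": 1, "b": 2}
```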
Note that loading a module the first time may be expensive because of the one time initialization of the module, but loading a module multiple times is virtually free, costing only a couple of dictionary lookups. Even if the module name has gone out of scope, the module is probably available in sys.modules.
How can I pass optional or keyword parameters from one function to another?¶
Collect the arguments using the * and ** specifiers in the function’s parameter list; this gives you the positional arguments as a tuple and the keyword arguments as a dictionary. You can then pass these arguments when calling another function by using * and **:
def f(x, *args, **kwargs):
    ...
    kwargs['width'] = '14.3c'
    ...
    g(x, *args, **kwargs)
What is the difference between arguments and parameters?¶
Parameters are defined by the names that appear in a function definition, whereas arguments are the values actually passed to a function when calling it. Parameters define what kind of arguments a function can accept. For example, given the function definition:
def func(foo, bar=None, **kwargs):
    pass
foo, bar and kwargs are parameters of func. However, when calling func, for example:
func(42, bar=314, extra=somevar)
the values 42, 314, and somevar are arguments.
Why did changing list ‘y’ also change list ‘x’?¶
If you wrote code like:
>>> x = []
>>> y = x
>>> y.append(10)
>>> y
[10]
>>> x
[10]
you might be wondering why appending an element to y changed x too.
There are two factors that produce this result:
Variables are simply names that refer to objects. Doing y = x doesn’t create a copy of the list – it creates a new variable y that refers to the same object x refers to.
This means that there is only one object (the list), and both x and y refer to it.
Lists are mutable, which means that you can change their content.
After the call to append(), the content of the mutable object has changed from [] to [10]. Since both the variables refer to the same object, using either name accesses the modified value [10].
If we instead assign an immutable object to x:
>>> x = 5  # ints are immutable
>>> y = x
>>> x = x + 1  # 5 can't be mutated, we are creating a new object here
>>> x
6
>>> y
5
we can see that in this case x and y are not equal anymore. This is because integers are immutable, and when we do x = x + 1 we are not mutating the int 5 by incrementing its value; instead, we are creating a new object (the int 6) and assigning it to x (that is, changing which object x refers to). After this assignment we have two objects (the ints 6 and 5) and two variables that refer to them (x now refers to 6 but y still refers to 5).
Some operations (for example y.append(10) and y.sort()) mutate the object, whereas superficially similar operations (for example y = y + [10] and sorted(y)) create a new object. In general in Python (and in all cases in the standard library) a method that mutates an object will return None to help avoid getting the two types of operations confused. So if you mistakenly write y.sort() thinking it will give you a sorted copy of y, you’ll instead end up with None, which will likely cause your program to generate an easily diagnosed error.
However, there is one class of operations where the same operation sometimes has different behaviors with different types: the augmented assignment operators.
For example, += mutates lists but not tuples or ints (a_list += [1, 2, 3] is equivalent to a_list.extend([1, 2, 3]) and mutates a_list, whereas some_tuple += (1, 2, 3) and some_int += 1 create new objects).
In other words:
If we have a mutable object (list, dict, set, etc.), we can use some specific operations to mutate it and all the variables that refer to it will see the change.
If we have an immutable object (str, int, tuple, etc.), all the variables that refer to it will always see the same value, but operations that transform that value into a new value always return a new object.
If you want to know if two variables refer to the same object or not, you can use the is operator, or the built-in function id().
How do I write a function with output parameters (call by reference)?¶
Remember that arguments are passed by assignment in Python. Since assignment just creates references to objects, there’s no alias between an argument name in the caller and callee, and so no call-by-reference per se. You can achieve the desired effect in a number of ways.
By returning a tuple of the results:
>>> def func1(a, b):
...     a = 'new-value'  # a and b are local names
...     b = b + 1        # assigned to new objects
...     return a, b      # return new values
...
>>> x, y = 'old-value', 99
>>> func1(x, y)
('new-value', 100)
This is almost always the clearest solution.
By using global variables. This isn’t thread-safe, and is not recommended.
By passing a mutable (changeable in-place) object:
>>> def func2(a):
...     a[0] = 'new-value'  # 'a' references a mutable list
...     a[1] = a[1] + 1     # changes a shared object
...
>>> args = ['old-value', 99]
>>> func2(args)
>>> args
['new-value', 100]
By passing in a dictionary that gets mutated:
>>> def func3(args):
...     args['a'] = 'new-value'    # args is a mutable dictionary
...     args['b'] = args['b'] + 1  # change it in-place
...
>>> args = {'a': 'old-value', 'b': 99}
>>> func3(args)
>>> args
{'a': 'new-value', 'b': 100}
Or bundle up values in a class instance:
>>> class Namespace:
...     def __init__(self, /, **args):
...         for key, value in args.items():
...             setattr(self, key, value)
...
>>> def func4(args):
...     args.a = 'new-value'  # args is a mutable Namespace
...     args.b = args.b + 1   # change object in-place
...
>>> args = Namespace(a='old-value', b=99)
>>> func4(args)
>>> vars(args)
{'a': 'new-value', 'b': 100}
There’s almost never a good reason to get this complicated.
Your best choice is to return a tuple containing the multiple results.
How do you make a higher order function in Python?¶
You have two choices: you can use nested scopes or you can use callable objects.
For example, suppose you wanted to define linear(a,b) which returns a function f(x) that computes the value a*x+b. Using nested scopes:
def linear(a, b):
    def result(x):
        return a * x + b
    return result
Or using a callable object:
class linear:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __call__(self, x):
        return self.a * x + self.b
In both cases, taxes = linear(0.3, 2) gives a callable object where taxes(10e6) == 0.3 * 10e6 + 2.
The callable object approach has the disadvantage that it is a bit slower and results in slightly longer code.
However, note that a collection of callables can share their signature via inheritance:
class exponential(linear):
    # __init__ inherited
    def __call__(self, x):
        return self.a * (x ** self.b)
An object can encapsulate state for several methods:
class counter:
    value = 0
    def set(self, x):
        self.value = x
    def up(self):
        self.value = self.value + 1
    def down(self):
        self.value = self.value - 1

count = counter()
inc, dec, reset = count.up, count.down, count.set
Here inc(), dec() and reset() act like functions which share the same counting variable.
How do I copy an object in Python?¶
In general, try copy.copy() or copy.deepcopy(). Not all objects can be copied, but most can.
Some objects can be copied more easily. Dictionaries have a copy() method:
newdict = olddict.copy()
Sequences can be copied by slicing:
new_l = l[:]
How can I find the methods or attributes of an object?¶
For an instance x of a user-defined class, dir(x) returns an alphabetized list of the names containing the instance attributes and methods and attributes defined by its class.
How can my code discover the name of an object?¶
Generally speaking, it can’t, because objects don’t really have names. Essentially, assignment always binds a name to a value; the same is true of def and class statements, but in that case the value is a callable. Consider the following code:
>>> class A:
...     pass
...
>>> B = A
>>> a = B()
>>> b = a
>>> print(b)
<__main__.A object at 0x16D07CC>
>>> print(a)
<__main__.A object at 0x16D07CC>
Arguably the class has a name: even though it is bound to two names and invoked through the name B the created instance is still reported as an instance of class A.
However, it is impossible to say whether the instance\u2019s name is a\nor\nb\n, since both names are bound to the same value.\nGenerally speaking it should not be necessary for your code to \u201cknow the names\u201d of particular values. Unless you are deliberately writing introspective programs, this is usually an indication that a change of approach might be beneficial.\nIn comp.lang.python, Fredrik Lundh once gave an excellent analogy in answer to this question:\nThe same way as you get the name of that cat you found on your porch: the cat (object) itself cannot tell you its name, and it doesn\u2019t really care \u2013 so the only way to find out what it\u2019s called is to ask all your neighbours (namespaces) if it\u2019s their cat (object)\u2026\n\u2026.and don\u2019t be surprised if you\u2019ll find that it\u2019s known by many names, or no name at all!\nWhat\u2019s up with the comma operator\u2019s precedence?\u00b6\nComma is not an operator in Python. Consider this session:\n>>> \"a\" in \"b\", \"a\"\n(False, 'a')\nSince the comma is not an operator, but a separator between expressions the above is evaluated as if you had entered:\n(\"a\" in \"b\"), \"a\"\nnot:\n\"a\" in (\"b\", \"a\")\nThe same is true of the various assignment operators (=\n, +=\netc). They\nare not truly operators but syntactic delimiters in assignment statements.\nIs there an equivalent of C\u2019s \u201c?:\u201d ternary operator?\u00b6\nYes, there is. The syntax is as follows:\n[on_true] if [expression] else [on_false]\nx, y = 50, 25\nsmall = x if x < y else y\nBefore this syntax was introduced in Python 2.5, a common idiom was to use logical operators:\n[expression] and [on_true] or [on_false]\nHowever, this idiom is unsafe, as it can give wrong results when on_true\nhas a false boolean value. Therefore, it is always better to use\nthe ... if ... else ...\nform.\nIs it possible to write obfuscated one-liners in Python?\u00b6\nYes. 
Usually this is done by nesting lambda\nwithin\nlambda\n. See the following three examples, slightly adapted from Ulf Bartelt:\nfrom functools import reduce\n# Primes < 1000\nprint(list(filter(None,map(lambda y:y*reduce(lambda x,y:x*y!=0,\nmap(lambda x,y=y:y%x,range(2,int(pow(y,0.5)+1))),1),range(2,1000)))))\n# First 10 Fibonacci numbers\nprint(list(map(lambda x,f=lambda x,f:(f(x-1,f)+f(x-2,f)) if x>1 else 1:\nf(x,f), range(10))))\n# Mandelbrot set\nprint((lambda Ru,Ro,Iu,Io,IM,Sx,Sy:reduce(lambda x,y:x+'\\n'+y,map(lambda y,\nIu=Iu,Io=Io,Ru=Ru,Ro=Ro,Sy=Sy,L=lambda yc,Iu=Iu,Io=Io,Ru=Ru,Ro=Ro,i=IM,\nSx=Sx,Sy=Sy:reduce(lambda x,y:x+y,map(lambda x,xc=Ru,yc=yc,Ru=Ru,Ro=Ro,\ni=i,Sx=Sx,F=lambda xc,yc,x,y,k,f=lambda xc,yc,x,y,k,f:(k<=0)or (x*x+y*y\n>=4.0) or 1+f(xc,yc,x*x-y*y+xc,2.0*x*y+yc,k-1,f):f(xc,yc,x,y,k,f):chr(\n64+F(Ru+x*(Ro-Ru)/Sx,yc,0,0,i)),range(Sx))):L(Iu+y*(Io-Iu)/Sy),range(Sy\n))))(-2.1, 0.7, -1.2, 1.2, 30, 80, 24))\n# \\___ ___/ \\___ ___/ | | |__ lines on screen\n# V V | |______ columns on screen\n# | | |__________ maximum of \"iterations\"\n# | |_________________ range on y axis\n# |____________________________ range on x axis\nDon\u2019t try this at home, kids!\nWhat does the slash(/) in the parameter list of a function mean?\u00b6\nA slash in the argument list of a function denotes that the parameters prior to\nit are positional-only. Positional-only parameters are the ones without an\nexternally usable name. Upon calling a function that accepts positional-only\nparameters, arguments are mapped to parameters based solely on their position.\nFor example, divmod()\nis a function that accepts positional-only\nparameters. Its documentation looks like this:\n>>> help(divmod)\nHelp on built-in function divmod in module builtins:\ndivmod(x, y, /)\nReturn the tuple (x//y, x%y). Invariant: div*y + mod == x.\nThe slash at the end of the parameter list means that both parameters are\npositional-only. 
Thus, calling divmod() with keyword arguments would lead to an error:

>>> divmod(x=3, y=4)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: divmod() takes no keyword arguments

Numbers and strings¶
How do I specify hexadecimal and octal integers?¶
To specify an octal digit, precede the octal value with a zero, and then a lower or uppercase "o". For example, to set the variable "a" to the octal value "10" (8 in decimal), type:

>>> a = 0o10
>>> a
8

Hexadecimal is just as easy. Simply precede the hexadecimal number with a zero, and then a lower or uppercase "x". Hexadecimal digits can be specified in lower or uppercase. For example, in the Python interpreter:

>>> a = 0xa5
>>> a
165
>>> b = 0XB2
>>> b
178

Why does -22 // 10 return -3?¶
It's primarily driven by the desire that i % j have the same sign as j. If you want that, and also want:

i == (i // j) * j + (i % j)

then integer division has to return the floor. C also requires that identity to hold, and then compilers that truncate i // j need to make i % j have the same sign as i.

There are few real use cases for i % j when j is negative. When j is positive, there are many, and in virtually all of them it's more useful for i % j to be >= 0. If the clock says 10 now, what did it say 200 hours ago? -190 % 12 == 2 is useful; -190 % 12 == -10 is a bug waiting to bite.

How do I get int literal attribute instead of SyntaxError?¶
Trying to look up an int literal attribute in the normal manner gives a SyntaxError because the period is seen as a decimal point:

>>> 1.__class__
  File "<stdin>", line 1
    1.__class__
     ^
SyntaxError: invalid decimal literal

The solution is to separate the literal from the period with either a space or parentheses.

>>> 1 .__class__
<class 'int'>
>>> (1).__class__
<class 'int'>

How do I convert a string to a number?¶
For integers, use the built-in int() type constructor, e.g.
int('144')\n== 144\n. Similarly, float()\nconverts to a floating-point number,\ne.g. float('144') == 144.0\n.\nBy default, these interpret the number as decimal, so that int('0144') ==\n144\nholds true, and int('0x144')\nraises ValueError\n. int(string,\nbase)\ntakes the base to convert from as a second optional argument, so int(\n'0x144', 16) == 324\n. If the base is specified as 0, the number is interpreted\nusing Python\u2019s rules: a leading \u20180o\u2019 indicates octal, and \u20180x\u2019 indicates a hex\nnumber.\nDo not use the built-in function eval()\nif all you need is to convert\nstrings to numbers. eval()\nwill be significantly slower and it presents a\nsecurity risk: someone could pass you a Python expression that might have\nunwanted side effects. For example, someone could pass\n__import__('os').system(\"rm -rf $HOME\")\nwhich would erase your home\ndirectory.\neval()\nalso has the effect of interpreting numbers as Python expressions,\nso that e.g. eval('09')\ngives a syntax error because Python does not allow\nleading \u20180\u2019 in a decimal number (except \u20180\u2019).\nHow do I convert a number to a string?\u00b6\nTo convert, e.g., the number 144\nto the string '144'\n, use the built-in type\nconstructor str()\n. If you want a hexadecimal or octal representation, use\nthe built-in functions hex()\nor oct()\n. For fancy formatting, see\nthe f-strings and Format String Syntax sections,\ne.g. \"{:04d}\".format(144)\nyields\n'0144'\nand \"{:.3f}\".format(1.0/3.0)\nyields '0.333'\n.\nHow do I modify a string in place?\u00b6\nYou can\u2019t, because strings are immutable. In most situations, you should\nsimply construct a new string from the various parts you want to assemble\nit from. 
However, if you need an object with the ability to modify in-place unicode data, try using an io.StringIO object or the array module:

>>> import io
>>> s = "Hello, world"
>>> sio = io.StringIO(s)
>>> sio.getvalue()
'Hello, world'
>>> sio.seek(7)
7
>>> sio.write("there!")
6
>>> sio.getvalue()
'Hello, there!'

>>> import array
>>> a = array.array('w', s)
>>> print(a)
array('w', 'Hello, world')
>>> a[0] = 'y'
>>> print(a)
array('w', 'yello, world')
>>> a.tounicode()
'yello, world'

How do I use strings to call functions/methods?¶
There are various techniques.

The best is to use a dictionary that maps strings to functions. The primary advantage of this technique is that the strings do not need to match the names of the functions. This is also the primary technique used to emulate a case construct:

def a(): pass
def b(): pass

dispatch = {'go': a, 'stop': b}  # Note lack of parens for funcs

dispatch[get_input()]()  # Note trailing parens to call function

Use the built-in function getattr():

import foo
getattr(foo, 'bar')()

Note that getattr() works on any object, including classes, class instances, modules, and so on.

This is used in several places in the standard library, like this:

class Foo:
    def do_foo(self):
        ...

    def do_bar(self):
        ...

f = getattr(foo_instance, 'do_' + opname)
f()

Use locals() to resolve the function name:

def myFunc():
    print("hello")

fname = "myFunc"
f = locals()[fname]
f()

Is there an equivalent to Perl's chomp() for removing trailing newlines from strings?¶
You can use S.rstrip("\r\n") to remove all occurrences of any line terminator from the end of the string S without removing other trailing whitespace. If the string S represents more than one line, with several empty lines at the end, the line terminators for all the blank lines will be removed:

>>> lines = ("line 1 \r\n"
...          "\r\n"
...          "\r\n")
>>> lines.rstrip("\n\r")
'line 1 '

Since this is typically only desired when reading text one line at a time, using S.rstrip() this way works well.

Is there a scanf() or sscanf() equivalent?¶
Not as such.

For simple input parsing, the easiest approach is usually to split the line into whitespace-delimited words using the split() method of string objects and then convert decimal strings to numeric values using int() or float(). split() supports an optional "sep" parameter which is useful if the line uses something other than whitespace as a separator.

For more complicated input parsing, regular expressions are more powerful than C's sscanf and better suited for the task.

What does UnicodeDecodeError or UnicodeEncodeError error mean?¶
See the Unicode HOWTO.

Can I end a raw string with an odd number of backslashes?¶
A raw string ending with an odd number of backslashes will escape the string's quote:

>>> r'C:\this\will\not\work\'
  File "<stdin>", line 1
    r'C:\this\will\not\work\'
                            ^
SyntaxError: unterminated string literal (detected at line 1)

There are several workarounds for this. One is to use regular strings and double the backslashes:

>>> 'C:\\this\\will\\work\\'
'C:\\this\\will\\work\\'

Another is to concatenate a regular string containing an escaped backslash to the raw string:

>>> r'C:\this\will\work' '\\'
'C:\\this\\will\\work\\'

It is also possible to use os.path.join() to append a backslash on Windows:

>>> os.path.join(r'C:\this\will\work', '')
'C:\\this\\will\\work\\'

Note that while a backslash will "escape" a quote for the purposes of determining where the raw string ends, no escaping occurs when interpreting the value of the raw string.
That is, the backslash remains present in the value of the raw string:

>>> r'backslash\'preserved'
"backslash\\'preserved"

Also see the specification in the language reference.

Performance¶
My program is too slow. How do I speed it up?¶
That's a tough one, in general. First, here is a list of things to remember before diving further:

Performance characteristics vary across Python implementations. This FAQ focuses on CPython.

Behaviour can vary across operating systems, especially when talking about I/O or multi-threading.

You should always find the hot spots in your program before attempting to optimize any code (see the profile module).

Writing benchmark scripts will allow you to iterate quickly when searching for improvements (see the timeit module).

It is highly recommended to have good code coverage (through unit testing or any other technique) before potentially introducing regressions hidden in sophisticated optimizations.

That being said, there are many tricks to speed up Python code. Here are some general principles which go a long way towards reaching acceptable performance levels:

Making your algorithms faster (or changing to faster ones) can yield much larger benefits than trying to sprinkle micro-optimization tricks all over your code.

Use the right data structures. Study documentation for the Built-in Types and the collections module.

When the standard library provides a primitive for doing something, it is likely (although not guaranteed) to be faster than any alternative you may come up with. This is doubly true for primitives written in C, such as builtins and some extension types. For example, be sure to use either the list.sort() built-in method or the related sorted() function to do sorting (and see the Sorting Techniques for examples of moderately advanced usage).

Abstractions tend to create indirections and force the interpreter to work more. If the levels of indirection outweigh the amount of useful work done, your program will be slower. You should avoid excessive abstraction, especially in the form of tiny functions or methods (which are also often detrimental to readability).

If you have reached the limit of what pure Python can allow, there are tools to take you further. For example, Cython can compile a slightly modified version of Python code into a C extension, and can be used on many different platforms. Cython can take advantage of compilation (and optional type annotations) to make your code significantly faster than when interpreted. If you are confident in your C programming skills, you can also write a C extension module yourself.

See also
The wiki page devoted to performance tips.

What is the most efficient way to concatenate many strings together?¶
str and bytes objects are immutable, therefore concatenating many strings together is inefficient as each concatenation creates a new object. In the general case, the total runtime cost is quadratic in the total string length.

To accumulate many str objects, the recommended idiom is to place them into a list and call str.join() at the end:

chunks = []
for s in my_strings:
    chunks.append(s)
result = ''.join(chunks)

(another reasonably efficient idiom is to use io.StringIO)

To accumulate many bytes objects, the recommended idiom is to extend a bytearray object using in-place concatenation (the += operator):

result = bytearray()
for b in my_bytes_objects:
    result += b

Sequences (Tuples/Lists)¶
How do I convert between tuples and lists?¶
The type constructor tuple(seq) converts any sequence (actually, any iterable) into a tuple with the same items in the same order. For example, tuple([1, 2, 3]) yields (1, 2, 3) and tuple('abc') yields ('a', 'b', 'c').
If the argument is a tuple, it does not make a copy but returns the same object, so it is cheap to call tuple() when you aren't sure that an object is already a tuple.

The type constructor list(seq) converts any sequence or iterable into a list with the same items in the same order. For example, list((1, 2, 3)) yields [1, 2, 3] and list('abc') yields ['a', 'b', 'c']. If the argument is a list, it makes a copy just like seq[:] would.

What's a negative index?¶
Python sequences are indexed with positive numbers and negative numbers. For positive numbers, 0 is the first index, 1 is the second index, and so forth. For negative indices, -1 is the last index, -2 is the penultimate (next to last) index, and so forth. Think of seq[-n] as the same as seq[len(seq)-n].

Using negative indices can be very convenient. For example, S[:-1] is all of the string except for its last character, which is useful for removing the trailing newline from a string.

How do I iterate over a sequence in reverse order?¶
Use the reversed() built-in function:

for x in reversed(sequence):
    ...  # do something with x ...

This won't touch your original sequence, but builds a new copy in reversed order to iterate over.

How do you remove duplicates from a list?¶
See the Python Cookbook for a long discussion of many ways to do this:

If you don't mind reordering the list, sort it and then scan from the end of the list, deleting duplicates as you go:

if mylist:
    mylist.sort()
    last = mylist[-1]
    for i in range(len(mylist)-2, -1, -1):
        if last == mylist[i]:
            del mylist[i]
        else:
            last = mylist[i]

If all elements of the list may be used as set keys (i.e.
they are all hashable) this is often faster\nmylist = list(set(mylist))\nThis converts the list into a set, thereby removing duplicates, and then back into a list.\nHow do you remove multiple items from a list?\u00b6\nAs with removing duplicates, explicitly iterating in reverse with a delete condition is one possibility. However, it is easier and faster to use slice replacement with an implicit or explicit forward iteration. Here are three variations:\nmylist[:] = filter(keep_function, mylist)\nmylist[:] = (x for x in mylist if keep_condition)\nmylist[:] = [x for x in mylist if keep_condition]\nThe list comprehension may be fastest.\nHow do you make an array in Python?\u00b6\nUse a list:\n[\"this\", 1, \"is\", \"an\", \"array\"]\nLists are equivalent to C or Pascal arrays in their time complexity; the primary difference is that a Python list can contain objects of many different types.\nThe array\nmodule also provides methods for creating arrays of fixed types\nwith compact representations, but they are slower to index than lists. Also\nnote that NumPy\nand other third party packages define array-like structures with\nvarious characteristics as well.\nTo get Lisp-style linked lists, you can emulate cons cells using tuples:\nlisp_list = (\"like\", (\"this\", (\"example\", None) ) )\nIf mutability is desired, you could use lists instead of tuples. Here the\nanalogue of a Lisp car is lisp_list[0]\nand the analogue of cdr is\nlisp_list[1]\n. 
Only do this if you're sure you really need to, because it's usually a lot slower than using Python lists.

How do I create a multidimensional list?¶
You probably tried to make a multidimensional array like this:

>>> A = [[None] * 2] * 3

This looks correct if you print it:

>>> A
[[None, None], [None, None], [None, None]]

But when you assign a value, it shows up in multiple places:

>>> A[0][0] = 5
>>> A
[[5, None], [5, None], [5, None]]

The reason is that replicating a list with * doesn't create copies, it only creates references to the existing objects. The *3 creates a list containing 3 references to the same list of length two. Changes to one row will show in all rows, which is almost certainly not what you want.

The suggested approach is to create a list of the desired length first and then fill in each element with a newly created list:

A = [None] * 3
for i in range(3):
    A[i] = [None] * 2

This generates a list containing 3 different lists of length two.
You can also use a list comprehension:

w, h = 2, 3
A = [[None] * w for i in range(h)]

Or, you can use an extension that provides a matrix datatype; NumPy is the best known.

How do I apply a method or function to a sequence of objects?¶
To call a method or function and accumulate the return values in a list, a list comprehension is an elegant solution:

result = [obj.method() for obj in mylist]

result = [function(obj) for obj in mylist]

To just run the method or function without saving the return values, a plain for loop will suffice:

for obj in mylist:
    obj.method()

for obj in mylist:
    function(obj)

Why does a_tuple[i] += ['item'] raise an exception when the addition works?¶
This is because of a combination of the fact that augmented assignment operators are assignment operators, and the difference between mutable and immutable objects in Python.

This discussion applies in general when augmented assignment operators are applied to elements of a tuple that point to mutable objects, but we'll use a list and += as our exemplar.

If you wrote:

>>> a_tuple = (1, 2)
>>> a_tuple[0] += 1
Traceback (most recent call last):
...
TypeError: 'tuple' object does not support item assignment

The reason for the exception should be immediately clear: 1 is added to the object a_tuple[0] points to (1), producing the result object, 2, but when we attempt to assign the result of the computation, 2, to element 0 of the tuple, we get an error because we can't change what an element of a tuple points to.

Under the covers, what this augmented assignment statement is doing is approximately this:

>>> result = a_tuple[0] + 1
>>> a_tuple[0] = result
Traceback (most recent call last):
...
TypeError: 'tuple' object does not support item assignment

It is the assignment part of the operation that produces the error, since a tuple is immutable.

When you write something like:

>>> a_tuple = (['foo'], 'bar')
>>> a_tuple[0] += ['item']
Traceback (most recent call last):
...
TypeError: 'tuple' object does not support item assignment

The exception is a bit more surprising, and even more surprising is the fact that even though there was an error, the append worked:

>>> a_tuple[0]
['foo', 'item']

To see why this happens, you need to know that (a) if an object implements an __iadd__() magic method, it gets called when the += augmented assignment is executed, and its return value is what gets used in the assignment statement; and (b) for lists, __iadd__() is equivalent to calling extend() on the list and returning the list. That's why we say that for lists, += is a "shorthand" for list.extend():

>>> a_list = []
>>> a_list += [1]
>>> a_list
[1]

This is equivalent to:

>>> result = a_list.__iadd__([1])
>>> a_list = result

The object pointed to by a_list has been mutated, and the pointer to the mutated object is assigned back to a_list. The end result of the assignment is a no-op, since it is a pointer to the same object that a_list was previously pointing to, but the assignment still happens.

Thus, in our tuple example what is happening is equivalent to:

>>> result = a_tuple[0].__iadd__(['item'])
>>> a_tuple[0] = result
Traceback (most recent call last):
...
TypeError: 'tuple' object does not support item assignment

The __iadd__() succeeds, and thus the list is extended, but even though result points to the same object that a_tuple[0] already points to, that final assignment still results in an error, because tuples are immutable.

I want to do a complicated sort: can you do a Schwartzian Transform in Python?¶
The technique, attributed to Randal Schwartz of the Perl community, sorts the elements of a list by a metric which maps each element to its "sort value".
In Python, use the key argument for the list.sort() method:

Isorted = L[:]
Isorted.sort(key=lambda s: int(s[10:15]))

How can I sort one list by values from another list?¶
Merge them into an iterator of tuples, sort the resulting list, and then pick out the element you want.

>>> list1 = ["what", "I'm", "sorting", "by"]
>>> list2 = ["something", "else", "to", "sort"]
>>> pairs = zip(list1, list2)
>>> pairs = sorted(pairs)
>>> pairs
[("I'm", 'else'), ('by', 'sort'), ('sorting', 'to'), ('what', 'something')]
>>> result = [x[1] for x in pairs]
>>> result
['else', 'sort', 'to', 'something']

Objects¶
What is a class?¶
A class is the particular object type created by executing a class statement. Class objects are used as templates to create instance objects, which embody both the data (attributes) and code (methods) specific to a datatype.

A class can be based on one or more other classes, called its base class(es). It then inherits the attributes and methods of its base classes. This allows an object model to be successively refined by inheritance. You might have a generic Mailbox class that provides basic accessor methods for a mailbox, and subclasses such as MboxMailbox, MaildirMailbox, OutlookMailbox that handle various specific mailbox formats.

What is a method?¶
A method is a function on some object x that you normally call as x.name(arguments...). Methods are defined as functions inside the class definition:

class C:
    def meth(self, arg):
        return arg * 2 + self.attribute

What is self?¶
Self is merely a conventional name for the first argument of a method.
A method defined as meth(self, a, b, c) should be called as x.meth(a, b, c) for some instance x of the class in which the definition occurs; the called method will think it is called as meth(x, a, b, c).

See also Why must 'self' be used explicitly in method definitions and calls?.

How do I check if an object is an instance of a given class or of a subclass of it?¶
Use the built-in function isinstance(obj, cls). You can check if an object is an instance of any of a number of classes by providing a tuple instead of a single class, e.g. isinstance(obj, (class1, class2, ...)), and can also check whether an object is one of Python's built-in types, e.g. isinstance(obj, str) or isinstance(obj, (int, float, complex)).

Note that isinstance() also checks for virtual inheritance from an abstract base class. So, the test will return True for a registered class even if it hasn't directly or indirectly inherited from it. To test for "true inheritance", scan the MRO of the class:

from collections.abc import Mapping

class P:
    pass

class C(P):
    pass

Mapping.register(P)

>>> c = C()
>>> isinstance(c, C)        # direct
True
>>> isinstance(c, P)        # indirect
True
>>> isinstance(c, Mapping)  # virtual
True

# Actual inheritance chain
>>> type(c).__mro__
(<class 'C'>, <class 'P'>, <class 'object'>)

# Test for "true inheritance"
>>> Mapping in type(c).__mro__
False

Note that most programs do not use isinstance() on user-defined classes very often. If you are developing the classes yourself, a more proper object-oriented style is to define methods on the classes that encapsulate a particular behaviour, instead of checking the object's class and doing a different thing based on what class it is. For example, if you have a function that does something:

def search(obj):
    if isinstance(obj, Mailbox):
        ...  # code to search a mailbox
    elif isinstance(obj, Document):
        ...  # code to search a document
    elif ...

A better approach is to define a search() method on all the classes and just call it:

class Mailbox:
    def search(self):
        ...  # code to search a mailbox

class Document:
    def search(self):
        ...  # code to search a document

obj.search()

What is delegation?¶
Delegation is an object oriented technique (also called a design pattern). Let's say you have an object x and want to change the behaviour of just one of its methods. You can create a new class that provides a new implementation of the method you're interested in changing and delegates all other methods to the corresponding method of x.

Python programmers can easily implement delegation. For example, the following class implements a class that behaves like a file but converts all written data to uppercase:

class UpperOut:
    def __init__(self, outfile):
        self._outfile = outfile

    def write(self, s):
        self._outfile.write(s.upper())

    def __getattr__(self, name):
        return getattr(self._outfile, name)

Here the UpperOut class redefines the write() method to convert the argument string to uppercase before calling the underlying self._outfile.write() method. All other methods are delegated to the underlying self._outfile object. The delegation is accomplished via the __getattr__() method; consult the language reference for more information about controlling attribute access.

Note that for more general cases delegation can get trickier. When attributes must be set as well as retrieved, the class must define a __setattr__() method too, and it must do so carefully.
The basic implementation of __setattr__() is roughly equivalent to the following:

class X:
    ...
    def __setattr__(self, name, value):
        self.__dict__[name] = value
    ...

Many __setattr__() implementations call object.__setattr__() to set an attribute on self without causing infinite recursion:

class X:
    def __setattr__(self, name, value):
        # Custom logic here...
        object.__setattr__(self, name, value)

Alternatively, it is possible to set attributes by inserting entries into self.__dict__ directly.

How do I call a method defined in a base class from a derived class that extends it?¶
Use the built-in super() function:

class Derived(Base):
    def meth(self):
        super().meth()  # calls Base.meth

In the example, super() will automatically determine the instance from which it was called (the self value), look up the method resolution order (MRO) with type(self).__mro__, and return the next in line after Derived in the MRO: Base.

How can I organize my code to make it easier to change the base class?¶
You could assign the base class to an alias and derive from the alias. Then all you have to change is the value assigned to the alias. Incidentally, this trick is also handy if you want to decide dynamically (e.g. depending on availability of resources) which base class to use. Example:

class Base:
    ...

BaseAlias = Base

class Derived(BaseAlias):
    ...

How do I create static class data and static class methods?¶
Both static data and static methods (in the sense of C++ or Java) are supported in Python.

For static data, simply define a class attribute. To assign a new value to the attribute, you have to explicitly use the class name in the assignment:

class C:
    count = 0  # number of times C.__init__ called

    def __init__(self):
        C.count = C.count + 1

    def getcount(self):
        return C.count  # or return self.count

c.count also refers to C.count for any c such that isinstance(c, C) holds, unless overridden by c itself or by some class on the base-class search path from c.__class__ back to C.

Caution: within a method of C, an assignment like self.count = 42 creates a new and unrelated instance named "count" in self's own dict. Rebinding of a class-static data name must always specify the class, whether inside a method or not:

C.count = 314

Static methods are possible:

class C:
    @staticmethod
    def static(arg1, arg2, arg3):
        # No 'self' parameter!
        ...

However, a far more straightforward way to get the effect of a static method is via a simple module-level function:

def getcount():
    return C.count

If your code is structured so as to define one class (or tightly related class hierarchy) per module, this supplies the desired encapsulation.

How can I overload constructors (or methods) in Python?¶
This answer actually applies to all methods, but the question usually comes up first in the context of constructors.

In C++ you'd write

class C {
    C() { cout << "No arguments\n"; }
    C(int i) { cout << "Argument is " << i << "\n"; }
};

In Python you have to write a single constructor that catches all cases using default arguments.
For example:

class C:
    def __init__(self, i=None):
        if i is None:
            print("No arguments")
        else:
            print("Argument is", i)

This is not entirely equivalent, but close enough in practice.

You could also try a variable-length argument list, e.g.

def __init__(self, *args):
    ...

The same approach works for all method definitions.

I try to use __spam and I get an error about _SomeClassName__spam.¶
Variable names with double leading underscores are "mangled" to provide a simple but effective way to define class private variables. Any identifier of the form __spam (at least two leading underscores, at most one trailing underscore) is textually replaced with _classname__spam, where classname is the current class name with any leading underscores stripped.

The identifier can be used unchanged within the class, but to access it outside the class, the mangled name must be used:

class A:
    def __one(self):
        return 1

    def two(self):
        return 2 * self.__one()

class B(A):
    def three(self):
        return 3 * self._A__one()

four = 4 * A()._A__one()

In particular, this does not guarantee privacy, since an outside user can still deliberately access the private attribute; many Python programmers never bother to use private variable names at all.

See also
The private name mangling specifications for details and special cases.

My class defines __del__ but it is not called when I delete the object.¶
There are several possible reasons for this.

The del statement does not necessarily call __del__() – it simply decrements the object's reference count, and if this reaches zero __del__() is called.

If your data structures contain circular links (e.g. a tree where each child has a parent reference and each parent has a list of children) the reference counts will never go back to zero.
Once in a while Python runs an algorithm to detect\nsuch cycles, but the garbage collector might run some time after the last\nreference to your data structure vanishes, so your __del__()\nmethod may be\ncalled at an inconvenient and random time. This is inconvenient if you\u2019re trying\nto reproduce a problem. Worse, the order in which object\u2019s __del__()\nmethods are executed is arbitrary. You can run gc.collect()\nto force a\ncollection, but there are pathological cases where objects will never be\ncollected.\nDespite the cycle collector, it\u2019s still a good idea to define an explicit\nclose()\nmethod on objects to be called whenever you\u2019re done with them. The\nclose()\nmethod can then remove attributes that refer to subobjects. Don\u2019t\ncall __del__()\ndirectly \u2013 __del__()\nshould call close()\nand\nclose()\nshould make sure that it can be called more than once for the same\nobject.\nAnother way to avoid cyclical references is to use the weakref\nmodule,\nwhich allows you to point to objects without incrementing their reference count.\nTree data structures, for instance, should use weak references for their parent\nand sibling references (if they need them!).\nFinally, if your __del__()\nmethod raises an exception, a warning message\nis printed to sys.stderr\n.\nHow do I get a list of all instances of a given class?\u00b6\nPython does not keep track of all instances of a class (or of a built-in type). You can program the class\u2019s constructor to keep track of all instances by keeping a list of weak references to each instance.\nWhy does the result of id()\nappear to be not unique?\u00b6\nThe id()\nbuiltin returns an integer that is guaranteed to be unique during\nthe lifetime of the object. Since in CPython, this is the object\u2019s memory\naddress, it happens frequently that after an object is deleted from memory, the\nnext freshly created object is allocated at the same position in memory. 
This\nis illustrated by this example:\n>>> id(1000)\n13901272\n>>> id(2000)\n13901272\nThe two ids belong to different integer objects that are created before, and\ndeleted immediately after execution of the id()\ncall. To be sure that\nobjects whose id you want to examine are still alive, create another reference\nto the object:\n>>> a = 1000; b = 2000\n>>> id(a)\n13901272\n>>> id(b)\n13891296\nWhen can I rely on identity tests with the is operator?\u00b6\nThe is\noperator tests for object identity. The test a is b\nis\nequivalent to id(a) == id(b)\n.\nThe most important property of an identity test is that an object is always\nidentical to itself, a is a\nalways returns True\n. Identity tests are\nusually faster than equality tests. And unlike equality tests, identity tests\nare guaranteed to return a boolean True\nor False\n.\nHowever, identity tests can only be substituted for equality tests when object identity is assured. Generally, there are three circumstances where identity is guaranteed:\nAssignments create new names but do not change object identity. After the assignment\nnew = old\n, it is guaranteed thatnew is old\n.Putting an object in a container that stores object references does not change object identity. After the list assignment\ns[0] = x\n, it is guaranteed thats[0] is x\n.If an object is a singleton, it means that only one instance of that object can exist. After the assignments\na = None\nandb = None\n, it is guaranteed thata is b\nbecauseNone\nis a singleton.\nIn most other circumstances, identity tests are inadvisable and equality tests\nare preferred. 
In particular, identity tests should not be used to check\nconstants such as int\nand str\nwhich aren\u2019t guaranteed to be\nsingletons:\n>>> a = 1000\n>>> b = 500\n>>> c = b + 500\n>>> a is c\nFalse\n>>> a = 'Python'\n>>> b = 'Py'\n>>> c = b + 'thon'\n>>> a is c\nFalse\nLikewise, new instances of mutable containers are never identical:\n>>> a = []\n>>> b = []\n>>> a is b\nFalse\nIn the standard library code, you will see several common patterns for correctly using identity tests:\nAs recommended by PEP 8, an identity test is the preferred way to check for\nNone\n. This reads like plain English in code and avoids confusion with other objects that may have boolean values that evaluate to false.\nDetecting optional arguments can be tricky when\nNone\nis a valid input value. In those situations, you can create a singleton sentinel object guaranteed to be distinct from other objects. For example, here is how to implement a method that behaves like\ndict.pop()\n:\n_sentinel = object()\n\ndef pop(self, key, default=_sentinel):\n    if key in self:\n        value = self[key]\n        del self[key]\n        return value\n    if default is _sentinel:\n        raise KeyError(key)\n    return default\nContainer implementations sometimes need to augment equality tests with identity tests. This prevents the code from being confused by objects such as\nfloat('NaN')\nthat are not equal to themselves.\nFor example, here is the implementation of\ncollections.abc.Sequence.__contains__()\n:\ndef __contains__(self, value):\n    for v in self:\n        if v is value or v == value:\n            return True\n    return False\nHow can a subclass control what data is stored in an immutable instance?\u00b6\nWhen subclassing an immutable type, override the __new__()\nmethod\ninstead of the __init__()\nmethod. 
The latter only runs after an\ninstance is created, which is too late to alter data in an immutable\ninstance.\nAll of these immutable classes have a different signature than their parent class:\nfrom datetime import date\n\nclass FirstOfMonthDate(date):\n    \"Always choose the first day of the month\"\n    def __new__(cls, year, month, day):\n        return super().__new__(cls, year, month, 1)\n\nclass NamedInt(int):\n    \"Allow text names for some numbers\"\n    xlat = {'zero': 0, 'one': 1, 'ten': 10}\n    def __new__(cls, value):\n        value = cls.xlat.get(value, value)\n        return super().__new__(cls, value)\n\nclass TitleStr(str):\n    \"Convert str to name suitable for a URL path\"\n    def __new__(cls, s):\n        s = s.lower().replace(' ', '-')\n        s = ''.join([c for c in s if c.isalnum() or c == '-'])\n        return super().__new__(cls, s)\nThe classes can be used like this:\n>>> FirstOfMonthDate(2012, 2, 14)\nFirstOfMonthDate(2012, 2, 1)\n>>> NamedInt('ten')\n10\n>>> NamedInt(20)\n20\n>>> TitleStr('Blog: Why Python Rocks')\n'blog-why-python-rocks'\nHow do I cache method calls?\u00b6\nThe two principal tools for caching methods are\nfunctools.cached_property()\nand functools.lru_cache()\n. The\nformer stores results at the instance level and the latter at the class\nlevel.\nThe cached_property approach only works with methods that do not take any arguments. It does not create a reference to the instance. The cached method result will be kept only as long as the instance is alive.\nThe advantage is that when an instance is no longer used, the cached method result will be released right away. The disadvantage is that if instances accumulate, so too will the accumulated method results. They can grow without bound.\nThe lru_cache approach works with methods that have hashable arguments. It creates a reference to the instance unless special efforts are made to pass in weak references.\nThe advantage of the least recently used algorithm is that the cache is bounded by the specified maxsize. 
The disadvantage is that instances are kept alive until they age out of the cache or until the cache is cleared.\nThis example shows the various techniques:\nclass Weather:\n    \"Lookup weather information on a government website\"\n\n    def __init__(self, station_id):\n        self._station_id = station_id\n        # The _station_id is private and immutable\n\n    def current_temperature(self):\n        \"Latest hourly observation\"\n        # Do not cache this because old results\n        # can be out of date.\n\n    @cached_property\n    def location(self):\n        \"Return the longitude/latitude coordinates of the station\"\n        # Result only depends on the station_id\n\n    @lru_cache(maxsize=20)\n    def historic_rainfall(self, date, units='mm'):\n        \"Rainfall on a given date\"\n        # Depends on the station_id, date, and units.\nThe above example assumes that the station_id never changes. If the relevant instance attributes are mutable, the cached_property approach can\u2019t be made to work because it cannot detect changes to the attributes.\nTo make the lru_cache approach work when the station_id is mutable,\nthe class needs to define the __eq__()\nand __hash__()\nmethods so that the cache can detect relevant attribute updates:\nclass Weather:\n    \"Example with a mutable station identifier\"\n\n    def __init__(self, station_id):\n        self.station_id = station_id\n\n    def change_station(self, station_id):\n        self.station_id = station_id\n\n    def __eq__(self, other):\n        return self.station_id == other.station_id\n\n    def __hash__(self):\n        return hash(self.station_id)\n\n    @lru_cache(maxsize=20)\n    def historic_rainfall(self, date, units='cm'):\n        'Rainfall on a given date'\n        # Depends on the station_id, date, and units.\nModules\u00b6\nHow do I create a .pyc file?\u00b6\nWhen a module is imported for the first time (or when the source file has\nchanged since the current compiled file was created) a .pyc\nfile containing\nthe compiled code should be created in a __pycache__\nsubdirectory of the\ndirectory containing the .py\nfile. 
The .pyc\nfile will have a\nfilename that starts with the same name as the .py\nfile, and ends with\n.pyc\n, with a middle component that depends on the particular python\nbinary that created it. (See PEP 3147 for details.)\nOne reason that a .pyc\nfile may not be created is a permissions problem\nwith the directory containing the source file, meaning that the __pycache__\nsubdirectory cannot be created. This can happen, for example, if you develop as\none user but run as another, such as if you are testing with a web server.\nUnless the PYTHONDONTWRITEBYTECODE\nenvironment variable is set,\ncreation of a .pyc file is automatic if you\u2019re importing a module and Python\nhas the ability (permissions, free space, etc\u2026) to create a __pycache__\nsubdirectory and write the compiled module to that subdirectory.\nRunning Python on a top level script is not considered an import and no\n.pyc\nwill be created. For example, if you have a top-level module\nfoo.py\nthat imports another module xyz.py\n, when you run foo\n(by\ntyping python foo.py\nas a shell command), a .pyc\nwill be created for\nxyz\nbecause xyz\nis imported, but no .pyc\nfile will be created for\nfoo\nsince foo.py\nisn\u2019t being imported.\nIf you need to create a .pyc\nfile for foo\n\u2013 that is, to create a\n.pyc\nfile for a module that is not imported \u2013 you can, using the\npy_compile\nand compileall\nmodules.\nThe py_compile\nmodule can manually compile any module. One way is to use\nthe compile()\nfunction in that module interactively:\n>>> import py_compile\n>>> py_compile.compile('foo.py')\nThis will write the .pyc\nto a __pycache__\nsubdirectory in the same\nlocation as foo.py\n(or you can override that with the optional parameter\ncfile\n).\nYou can also automatically compile all files in a directory or directories using\nthe compileall\nmodule. 
You can do it from the shell prompt by running\ncompileall.py\nand providing the path of a directory containing Python files\nto compile:\npython -m compileall .\nHow do I find the current module name?\u00b6\nA module can find out its own module name by looking at the predefined global\nvariable __name__\n. If this has the value '__main__'\n, the program is\nrunning as a script. Many modules that are usually used by importing them also\nprovide a command-line interface or a self-test, and only execute this code\nafter checking __name__\n:\ndef main():\n    print('Running test...')\n    ...\n\nif __name__ == '__main__':\n    main()\nHow can I have modules that mutually import each other?\u00b6\nSuppose you have the following modules:\nfoo.py\n:\nfrom bar import bar_var\nfoo_var = 1\nbar.py\n:\nfrom foo import foo_var\nbar_var = 2\nThe problem is that the interpreter will perform the following steps:\nmain imports\nfoo\nEmpty globals for\nfoo\nare created\nfoo\nis compiled and starts executing\nfoo\nimports\nbar\nEmpty globals for\nbar\nare created\nbar\nis compiled and starts executing\nbar\nimports\nfoo\n(which is a no-op since there already is a module named\nfoo\n)\nThe import mechanism tries to read\nfoo_var\nfrom\nfoo\nglobals, to set\nbar.foo_var = foo.foo_var\nThe last step fails, because Python isn\u2019t done with interpreting foo\nyet and\nthe global symbol dictionary for foo\nis still empty.\nThe same thing happens when you use import foo\n, and then try to access\nfoo.foo_var\nin global code.\nThere are (at least) three possible workarounds for this problem.\nGuido van Rossum recommends avoiding all uses of from <module> import ...\n,\nand placing all code inside functions. Initializations of global variables and\nclass variables should use constants or built-in functions only. 
This means\neverything from an imported module is referenced as <module>.<name>\n.\nJim Roskind suggests performing steps in the following order in each module:\nexports (globals, functions, and classes that don\u2019t need imported base classes)\nimport\nstatements\nactive code (including globals that are initialized from imported values).\nVan Rossum doesn\u2019t like this approach much because the imports appear in a strange place, but it does work.\nMatthias Urlichs recommends restructuring your code so that the recursive import is not necessary in the first place.\nThese solutions are not mutually exclusive.\n__import__('x.y.z') returns <module 'x'>; how do I get z?\u00b6\nConsider using the convenience function import_module()\nfrom\nimportlib\ninstead:\nz = importlib.import_module('x.y.z')\nWhen I edit an imported module and reimport it, the changes don\u2019t show up. Why does this happen?\u00b6\nFor reasons of efficiency as well as consistency, Python only reads the module file on the first time a module is imported. If it didn\u2019t, in a program consisting of many modules where each one imports the same basic module, the basic module would be parsed and re-parsed many times. To force re-reading of a changed module, do this:\nimport importlib\nimport modname\nimportlib.reload(modname)\nWarning: this technique is not 100% fool-proof. In particular, modules containing statements like\nfrom modname import some_objects\nwill continue to work with the old version of the imported objects. If the module contains class definitions, existing class instances will not be updated to use the new class definition. 
This can result in the following paradoxical behaviour:\n>>> import importlib\n>>> import cls\n>>> c = cls.C() # Create an instance of C\n>>> importlib.reload(cls)\n<module 'cls' from 'cls.py'>\n>>> isinstance(c, cls.C) # isinstance is false?!?\nFalse\nThe nature of the problem is made clear if you print out the \u201cidentity\u201d of the class objects:\n>>> hex(id(c.__class__))\n'0x7352a0'\n>>> hex(id(cls.C))\n'0x4198d0'", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 16050} +{"url": "https://docs.python.org/3/faq/general.html", "title": null, "content": "General Python FAQ\u00b6\nGeneral Information\u00b6\nWhat is Python?\u00b6\nPython is an interpreted, interactive, object-oriented programming language. It incorporates modules, exceptions, dynamic typing, very high level dynamic data types, and classes. It supports multiple programming paradigms beyond object-oriented programming, such as procedural and functional programming. Python combines remarkable power with very clear syntax. It has interfaces to many system calls and libraries, as well as to various window systems, and is extensible in C or C++. It is also usable as an extension language for applications that need a programmable interface. Finally, Python is portable: it runs on many Unix variants including Linux and macOS, and on Windows.\nTo find out more, start with The Python Tutorial. The Beginner\u2019s Guide to Python links to other introductory tutorials and resources for learning Python.\nWhat is the Python Software Foundation?\u00b6\nThe Python Software Foundation is an independent non-profit organization that holds the copyright on Python versions 2.1 and newer. The PSF\u2019s mission is to advance open source technology related to the Python programming language and to publicize the use of Python. 
The PSF\u2019s home page is at https://www.python.org/psf/.\nDonations to the PSF are tax-exempt in the US. If you use Python and find it helpful, please contribute via the PSF donation page.\nAre there copyright restrictions on the use of Python?\u00b6\nYou can do anything you want with the source, as long as you leave the copyrights in and display those copyrights in any documentation about Python that you produce. If you honor the copyright rules, it\u2019s OK to use Python for commercial use, to sell copies of Python in source or binary form (modified or unmodified), or to sell products that incorporate Python in some form. We would still like to know about all commercial use of Python, of course.\nSee the license page to find further explanations and the full text of the PSF License.\nThe Python logo is trademarked, and in certain cases permission is required to use it. Consult the Trademark Usage Policy for more information.\nWhy was Python created in the first place?\u00b6\nHere\u2019s a very brief summary of what started it all, written by Guido van Rossum:\nI had extensive experience with implementing an interpreted language in the ABC group at CWI, and from working with this group I had learned a lot about language design. This is the origin of many Python features, including the use of indentation for statement grouping and the inclusion of very-high-level data types (although the details are all different in Python).\nI had a number of gripes about the ABC language, but also liked many of its features. It was impossible to extend the ABC language (or its implementation) to remedy my complaints \u2013 in fact its lack of extensibility was one of its biggest problems. I had some experience with using Modula-2+ and talked with the designers of Modula-3 and read the Modula-3 report. Modula-3 is the origin of the syntax and semantics used for exceptions, and some other Python features.\nI was working in the Amoeba distributed operating system group at CWI. 
We needed a better way to do system administration than by writing either C programs or Bourne shell scripts, since Amoeba had its own system call interface which wasn\u2019t easily accessible from the Bourne shell. My experience with error handling in Amoeba made me acutely aware of the importance of exceptions as a programming language feature.\nIt occurred to me that a scripting language with a syntax like ABC but with access to the Amoeba system calls would fill the need. I realized that it would be foolish to write an Amoeba-specific language, so I decided that I needed a language that was generally extensible.\nDuring the 1989 Christmas holidays, I had a lot of time on my hand, so I decided to give it a try. During the next year, while still mostly working on it in my own time, Python was used in the Amoeba project with increasing success, and the feedback from colleagues made me add many early improvements.\nIn February 1991, after just over a year of development, I decided to post to USENET. The rest is in the\nMisc/HISTORY\nfile.\nWhat is Python good for?\u00b6\nPython is a high-level general-purpose programming language that can be applied to many different classes of problems.\nThe language comes with a large standard library that covers areas such as string processing (regular expressions, Unicode, calculating differences between files), internet protocols (HTTP, FTP, SMTP, XML-RPC, POP, IMAP), software engineering (unit testing, logging, profiling, parsing Python code), and operating system interfaces (system calls, filesystems, TCP/IP sockets). Look at the table of contents for The Python Standard Library to get an idea of what\u2019s available. A wide variety of third-party extensions are also available. 
Consult the Python Package Index to find packages of interest to you.\nHow does the Python version numbering scheme work?\u00b6\nPython versions are numbered \u201cA.B.C\u201d or \u201cA.B\u201d:\nA is the major version number \u2013 it is only incremented for really major changes in the language.\nB is the minor version number \u2013 it is incremented for less earth-shattering changes.\nC is the micro version number \u2013 it is incremented for each bugfix release.\nNot all releases are bugfix releases. In the run-up to a new feature release, a series of development releases are made, denoted as alpha, beta, or release candidate. Alphas are early releases in which interfaces aren\u2019t yet finalized; it\u2019s not unexpected to see an interface change between two alpha releases. Betas are more stable, preserving existing interfaces but possibly adding new modules, and release candidates are frozen, making no changes except as needed to fix critical bugs.\nAlpha, beta and release candidate versions have an additional suffix:\nThe suffix for an alpha version is \u201caN\u201d for some small number N.\nThe suffix for a beta version is \u201cbN\u201d for some small number N.\nThe suffix for a release candidate version is \u201crcN\u201d for some small number N.\nIn other words, all versions labeled 2.0aN precede the versions labeled 2.0bN, which precede versions labeled 2.0rcN, and those precede 2.0.\nYou may also find version numbers with a \u201c+\u201d suffix, e.g. \u201c2.2+\u201d. These are unreleased versions, built directly from the CPython development repository. In practice, after a final minor release is made, the version is incremented to the next minor version, which becomes the \u201ca0\u201d version, e.g. \u201c2.4a0\u201d.\nSee the Developer\u2019s Guide\nfor more information about the development cycle, and\nPEP 387 to learn more about Python\u2019s backward compatibility policy. 
See also\nthe documentation for sys.version\n, sys.hexversion\n, and\nsys.version_info\n.\nHow do I obtain a copy of the Python source?\u00b6\nThe latest Python source distribution is always available from python.org, at https://www.python.org/downloads/. The latest development sources can be obtained at https://github.com/python/cpython/.\nThe source distribution is a gzipped tar file containing the complete C source, Sphinx-formatted documentation, Python library modules, example programs, and several useful pieces of freely distributable software. The source will compile and run out of the box on most UNIX platforms.\nConsult the Getting Started section of the Python Developer\u2019s Guide for more information on getting the source code and compiling it.\nHow do I get documentation on Python?\u00b6\nThe standard documentation for the current stable version of Python is available at https://docs.python.org/3/. EPUB, plain text, and downloadable HTML versions are also available at https://docs.python.org/3/download.html.\nThe documentation is written in reStructuredText and processed by the Sphinx documentation tool. The reStructuredText source for the documentation is part of the Python source distribution.\nI\u2019ve never programmed before. Is there a Python tutorial?\u00b6\nThere are numerous tutorials and books available. The standard documentation includes The Python Tutorial.\nConsult the Beginner\u2019s Guide to find information for beginning Python programmers, including lists of tutorials.\nIs there a newsgroup or mailing list devoted to Python?\u00b6\nThere is a newsgroup, comp.lang.python, and a mailing list, python-list. The newsgroup and mailing list are gatewayed into each other \u2013 if you can read news it\u2019s unnecessary to subscribe to the mailing list. 
comp.lang.python is high-traffic, receiving hundreds of postings every day, and Usenet readers are often more able to cope with this volume.\nAnnouncements of new software releases and events can be found in comp.lang.python.announce, a low-traffic moderated list that receives about five postings per day. It\u2019s available as the python-announce mailing list.\nMore info about other mailing lists and newsgroups can be found at https://www.python.org/community/lists/.\nHow do I get a beta test version of Python?\u00b6\nAlpha and beta releases are available from https://www.python.org/downloads/. All releases are announced on the comp.lang.python and comp.lang.python.announce newsgroups and on the Python home page at https://www.python.org/; an RSS feed of news is available.\nYou can also access the development version of Python through Git. See The Python Developer\u2019s Guide for details.\nHow do I submit bug reports and patches for Python?\u00b6\nTo report a bug or submit a patch, use the issue tracker at https://github.com/python/cpython/issues.\nFor more information on how Python is developed, consult the Python Developer\u2019s Guide.\nAre there any published articles about Python that I can reference?\u00b6\nIt\u2019s probably best to cite your favorite book about Python.\nThe very first article about Python was written in 1991 and is now quite outdated.\nGuido van Rossum and Jelke de Boer, \u201cInteractively Testing Remote Servers Using the Python Programming Language\u201d, CWI Quarterly, Volume 4, Issue 4 (December 1991), Amsterdam, pp 283\u2013303.\nAre there any books on Python?\u00b6\nYes, there are many, and more are being published. 
See the python.org wiki at https://wiki.python.org/moin/PythonBooks for a list.\nYou can also search online bookstores for \u201cPython\u201d and filter out the Monty Python references; or perhaps search for \u201cPython\u201d and \u201clanguage\u201d.\nWhere in the world is www.python.org located?\u00b6\nThe Python project\u2019s infrastructure is located all over the world and is managed by the Python Infrastructure Team. Details here.\nWhy is it called Python?\u00b6\nWhen he began implementing Python, Guido van Rossum was also reading the published scripts from \u201cMonty Python\u2019s Flying Circus\u201d, a BBC comedy series from the 1970s. Van Rossum thought he needed a name that was short, unique, and slightly mysterious, so he decided to call the language Python.\nDo I have to like \u201cMonty Python\u2019s Flying Circus\u201d?\u00b6\nNo, but it helps. :)\nPython in the real world\u00b6\nHow stable is Python?\u00b6\nVery stable. New, stable releases have been coming out roughly every 6 to 18 months since 1991, and this seems likely to continue. As of version 3.9, Python will have a new feature release every 12 months (PEP 602).\nThe developers issue bugfix releases of older versions, so the stability of existing releases gradually improves. Bugfix releases, indicated by a third component of the version number (e.g. 3.5.3, 3.6.2), are managed for stability; only fixes for known problems are included in a bugfix release, and it\u2019s guaranteed that interfaces will remain the same throughout a series of bugfix releases.\nThe latest stable releases can always be found on the Python download page. Python 3.x is the recommended version and supported by most widely used libraries. 
Python 2.x is not maintained anymore.\nHow many people are using Python?\u00b6\nThere are probably millions of users, though it\u2019s difficult to obtain an exact count.\nPython is available for free download, so there are no sales figures, and it\u2019s available from many different sites and packaged with many Linux distributions, so download statistics don\u2019t tell the whole story either.\nThe comp.lang.python newsgroup is very active, but not all Python users post to the group or even read it.\nHave any significant projects been done in Python?\u00b6\nSee https://www.python.org/about/success for a list of projects that use Python. Consulting the proceedings for past Python conferences will reveal contributions from many different companies and organizations.\nHigh-profile Python projects include the Mailman mailing list manager and the Zope application server. Several Linux distributions, most notably Red Hat, have written part or all of their installer and system administration software in Python. Companies that use Python internally include Google, Yahoo, and Lucasfilm Ltd.\nWhat new developments are expected for Python in the future?\u00b6\nSee https://peps.python.org/ for the Python Enhancement Proposals (PEPs). PEPs are design documents describing a suggested new feature for Python, providing a concise technical specification and a rationale. Look for a PEP titled \u201cPython X.Y Release Schedule\u201d, where X.Y is a version that hasn\u2019t been publicly released yet.\nNew development is discussed on the python-dev mailing list.\nIs it reasonable to propose incompatible changes to Python?\u00b6\nIn general, no. There are already millions of lines of Python code around the world, so any change in the language that invalidates more than a very small fraction of existing programs has to be frowned upon. 
Even if you can provide a conversion program, there\u2019s still the problem of updating all documentation; many books have been written about Python, and we don\u2019t want to invalidate them all at a single stroke.\nProviding a gradual upgrade path is necessary if a feature has to be changed. PEP 5 describes the procedure followed for introducing backward-incompatible changes while minimizing disruption for users.\nIs Python a good language for beginning programmers?\u00b6\nYes.\nIt is still common to start students with a procedural and statically typed language such as Pascal, C, or a subset of C++ or Java. Students may be better served by learning Python as their first language. Python has a very simple and consistent syntax and a large standard library and, most importantly, using Python in a beginning programming course lets students concentrate on important programming skills such as problem decomposition and data type design. With Python, students can be quickly introduced to basic concepts such as loops and procedures. They can probably even work with user-defined objects in their very first course.\nFor a student who has never programmed before, using a statically typed language seems unnatural. It presents additional complexity that the student must master and slows the pace of the course. The students are trying to learn to think like a computer, decompose problems, design consistent interfaces, and encapsulate data. While learning to use a statically typed language is important in the long term, it is not necessarily the best topic to address in the students\u2019 first programming course.\nMany other aspects of Python make it a good first language. Like Java, Python has a large standard library so that students can be assigned programming projects very early in the course that do something. Assignments aren\u2019t restricted to the standard four-function calculator and check balancing programs. 
By using the standard library, students can gain the satisfaction of working on realistic applications as they learn the fundamentals of programming. Using the standard library also teaches students about code reuse. Third-party modules such as PyGame are also helpful in extending the students\u2019 reach.\nPython\u2019s interactive interpreter enables students to test language features while they\u2019re programming. They can keep a window with the interpreter running while they enter their program\u2019s source in another window. If they can\u2019t remember the methods for a list, they can do something like this:\n>>> L = []\n>>> dir(L)\n['__add__', '__class__', '__contains__', '__delattr__', '__delitem__',\n'__dir__', '__doc__', '__eq__', '__format__', '__ge__',\n'__getattribute__', '__getitem__', '__gt__', '__hash__', '__iadd__',\n'__imul__', '__init__', '__iter__', '__le__', '__len__', '__lt__',\n'__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__',\n'__repr__', '__reversed__', '__rmul__', '__setattr__', '__setitem__',\n'__sizeof__', '__str__', '__subclasshook__', 'append', 'clear',\n'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove',\n'reverse', 'sort']\n>>> [d for d in dir(L) if '__' not in d]\n['append', 'clear', 'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']\n>>> help(L.append)\nHelp on built-in function append:\nappend(...)\nL.append(object) -> None -- append object to end\n>>> L.append(1)\n>>> L\n[1]\nWith the interpreter, documentation is never far from the student as they are programming.\nThere are also good IDEs for Python. IDLE is a cross-platform IDE for Python that is written in Python using Tkinter. Emacs users will be happy to know that there is a very good Python mode for Emacs. All of these programming environments provide syntax highlighting, auto-indenting, and access to the interactive interpreter while coding. 
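The `dir()` filtering shown above can also be written with `str.startswith`, which keeps single-underscore names out as well; a small illustrative variation:

```python
# Public (non-underscore) methods of the built-in list type,
# equivalent to the dir() filtering shown above.
public = [name for name in dir(list) if not name.startswith('_')]
print(public)
# → ['append', 'clear', 'copy', 'count', 'extend', 'index',
#    'insert', 'pop', 'remove', 'reverse', 'sort']
```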
Consult the Python wiki for a full list of Python editing environments.\nIf you want to discuss Python\u2019s use in education, you may be interested in joining the edu-sig mailing list.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4345}
Security modules like Yama may further restrict\ntracing.\nTo temporarily relax ptrace restrictions (until reboot), run:\necho 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope\nNote\nDisabling ptrace_scope\nreduces system hardening and should only be done\nin trusted environments.\nIf running inside a container, use --cap-add=SYS_PTRACE\nor\n--privileged\n, and run as root if needed.\nTry re-running the command with elevated privileges:\nsudo -E !!\nmacOS\nTo attach to another process, you typically need to run your debugging tool\nwith elevated privileges. This can be done by using sudo\nor running as\nroot.\nEven when attaching to processes you own, macOS may block debugging unless the debugger is run with root privileges due to system security restrictions.\nWindows\nTo attach to another process, you usually need to run your debugging tool with administrative privileges. Start the command prompt or terminal as Administrator.\nSome processes may still be inaccessible even with Administrator rights,\nunless you have the SeDebugPrivilege\nprivilege enabled.\nTo resolve file or folder access issues, adjust the security permissions:\nRight-click the file or folder and select Properties.\nGo to the Security tab to view users and groups with access.\nClick Edit to modify permissions.\nSelect your user account.\nIn Permissions, check Read or Full control as needed.\nClick Apply, then OK to confirm.\nNote\nEnsure you\u2019ve satisfied all Permission requirements before proceeding.\nThis section describes the low-level protocol that enables external tools to inject and execute a Python script within a running CPython process.\nThis mechanism forms the basis of the sys.remote_exec()\nfunction, which\ninstructs a remote Python process to execute a .py\nfile. However, this\nsection does not document the usage of that function. 
Instead, it provides a\ndetailed explanation of the underlying protocol, which takes as input the\npid\nof a target Python process and the path to a Python source file to be\nexecuted. This information supports independent reimplementation of the\nprotocol, regardless of programming language.\nWarning\nThe execution of the injected script depends on the interpreter reaching a safe evaluation point. As a result, execution may be delayed depending on the runtime state of the target process.\nOnce injected, the script is executed by the interpreter within the target process the next time a safe evaluation point is reached. This approach enables remote execution capabilities without modifying the behavior or structure of the running Python application.\nSubsequent sections provide a step-by-step description of the protocol, including techniques for locating interpreter structures in memory, safely accessing internal fields, and triggering code execution. Platform-specific variations are noted where applicable, and example implementations are included to clarify each operation.\nLocating the PyRuntime structure\u00b6\nCPython places the PyRuntime\nstructure in a dedicated binary section to\nhelp external tools find it at runtime. The name and format of this section\nvary by platform. For example, .PyRuntime\nis used on ELF systems, and\n__DATA,__PyRuntime\nis used on macOS. Tools can find the offset of this\nstructure by examining the binary on disk.\nThe PyRuntime\nstructure contains CPython\u2019s global interpreter state and\nprovides access to other internal data, including the list of interpreters,\nthread states, and debugger support fields.\nTo work with a remote Python process, a debugger must first find the memory\naddress of the PyRuntime\nstructure in the target process. 
This address\ncan\u2019t be hardcoded or calculated from a symbol name, because it depends on\nwhere the operating system loaded the binary.\nThe method for finding PyRuntime\ndepends on the platform, but the steps are\nthe same in general:\nFind the base address where the Python binary or shared library was loaded in the target process.\nUse the on-disk binary to locate the offset of the\n.PyRuntime\nsection.\nAdd the section offset to the base address to compute the address in memory.\nThe sections below explain how to do this on each supported platform and include example code.\nLinux (ELF)\nTo find the PyRuntime\nstructure on Linux:\nRead the process\u2019s memory map (for example,\n/proc/[pid]/maps\n) to find the address where the Python executable or libpython\nwas loaded.\nParse the ELF section headers in the binary to get the offset of the\n.PyRuntime\nsection.\nAdd that offset to the base address from step 1 to get the memory address of\nPyRuntime\n.\nThe following is an example implementation:\ndef find_py_runtime_linux(pid: int) -> int:\n    # Step 1: Try to find the Python executable in memory\n    binary_path, base_address = find_mapped_binary(\n        pid, name_contains=\"python\"\n    )\n    # Step 2: Fall back to the shared library if the executable is not found\n    if binary_path is None:\n        binary_path, base_address = find_mapped_binary(\n            pid, name_contains=\"libpython\"\n        )\n    # Step 3: Parse ELF headers to get the .PyRuntime section offset\n    section_offset = parse_elf_section_offset(\n        binary_path, \".PyRuntime\"\n    )\n    # Step 4: Compute the PyRuntime address in memory\n    return base_address + section_offset\nOn Linux systems, there are two main approaches to reading memory from another\nprocess. The first is through the /proc\nfilesystem, specifically by reading from\n/proc/[pid]/mem\n, which provides direct access to the process\u2019s memory. This\nrequires appropriate permissions: either being the same user as the target\nprocess or having root access. 
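The helper `find_mapped_binary` used in the example above is left abstract. As a rough, Linux-only sketch (not part of CPython's documented API), it could be implemented by scanning `/proc/[pid]/maps` for the first mapping whose backing file name matches:

```python
def find_mapped_binary(pid, name_contains):
    """Scan /proc/[pid]/maps for the first mapping whose backing file
    name contains `name_contains`. Returns (path, base_address), or
    (None, None) if nothing matches or the map cannot be read.
    Illustrative sketch only; Linux-specific."""
    try:
        with open(f"/proc/{pid}/maps") as f:
            for line in f:
                parts = line.split()
                # Format: address perms offset dev inode [pathname]
                if len(parts) < 6:
                    continue  # anonymous mapping, no backing file
                path = parts[5]
                if name_contains in path.rsplit("/", 1)[-1]:
                    base = int(parts[0].split("-")[0], 16)
                    return path, base
    except OSError:
        pass  # no such pid, or no permission to read its maps
    return None, None
```

Calling it with the target pid and `"python"` (then `"libpython"` as a fallback) mirrors steps 1 and 2 of `find_py_runtime_linux` above.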
The second approach is using the\nprocess_vm_readv()\nsystem call which provides a more efficient way to copy\nmemory between processes. While ptrace\u2019s PTRACE_PEEKTEXT\noperation can also be\nused to read memory, it is significantly slower as it only reads one word at a\ntime and requires multiple context switches between the tracer and tracee\nprocesses.\nFor parsing ELF sections, the process involves reading and interpreting the ELF file format structures from the binary file on disk. The ELF header contains a pointer to the section header table. Each section header contains metadata about a section including its name (stored in a separate string table), offset, and size. To find a specific section like .PyRuntime, you need to walk through these headers and match the section name. The section header then provides the offset where that section exists in the file, which can be used to calculate its runtime address when the binary is loaded into memory.\nYou can read more about the ELF file format in the ELF specification.\nmacOS (Mach-O)\nTo find the PyRuntime\nstructure on macOS:\nCall\ntask_for_pid()\nto get themach_port_t\ntask port for the target process. This handle is needed to read memory using APIs likemach_vm_read_overwrite\nandmach_vm_region\n.Scan the memory regions to find the one containing the Python executable or\nlibpython\n.Load the binary file from disk and parse the Mach-O headers to find the section named\nPyRuntime\nin the__DATA\nsegment. 
On macOS, symbol names are automatically prefixed with an underscore, so thePyRuntime\nsymbol appears as_PyRuntime\nin the symbol table, but the section name is not affected.\nThe following is an example implementation:\ndef find_py_runtime_macos(pid: int) -> int:\n# Step 1: Get access to the process's memory\nhandle = get_memory_access_handle(pid)\n# Step 2: Try to find the Python executable in memory\nbinary_path, base_address = find_mapped_binary(\nhandle, name_contains=\"python\"\n)\n# Step 3: Fallback to libpython if the executable is not found\nif binary_path is None:\nbinary_path, base_address = find_mapped_binary(\nhandle, name_contains=\"libpython\"\n)\n# Step 4: Parse Mach-O headers to get __DATA,__PyRuntime section offset\nsection_offset = parse_macho_section_offset(\nbinary_path, \"__DATA\", \"__PyRuntime\"\n)\n# Step 5: Compute the PyRuntime address in memory\nreturn base_address + section_offset\nOn macOS, accessing another process\u2019s memory requires using Mach-O specific APIs\nand file formats. The first step is obtaining a task_port\nhandle via\ntask_for_pid()\n, which provides access to the target process\u2019s memory space.\nThis handle enables memory operations through APIs like\nmach_vm_read_overwrite()\n.\nThe process memory can be examined using mach_vm_region()\nto scan through the\nvirtual memory space, while proc_regionfilename()\nhelps identify which binary\nfiles are loaded at each memory region. When the Python binary or library is\nfound, its Mach-O headers need to be parsed to locate the PyRuntime\nstructure.\nThe Mach-O format organizes code and data into segments and sections. The\nPyRuntime\nstructure lives in a section named __PyRuntime\nwithin the\n__DATA\nsegment. The actual runtime address calculation involves finding the\n__TEXT\nsegment which serves as the binary\u2019s base address, then locating the\n__DATA\nsegment containing our target section. 
The final address is computed by\ncombining the base address with the appropriate section offsets from the Mach-O\nheaders.\nNote that accessing another process\u2019s memory on macOS typically requires elevated privileges - either root access or special security entitlements granted to the debugging process.\nWindows (PE)\nTo find the PyRuntime\nstructure on Windows:\nUse the ToolHelp API to enumerate all modules loaded in the target process. This is done using functions such as CreateToolhelp32Snapshot, Module32First, and Module32Next.\nIdentify the module corresponding to\npython.exe\norpythonXY.dll\n, whereX\nandY\nare the major and minor version numbers of the Python version, and record its base address.Locate the\nPyRuntim\nsection. Due to the PE format\u2019s 8-character limit on section names (defined asIMAGE_SIZEOF_SHORT_NAME\n), the original namePyRuntime\nis truncated. This section contains thePyRuntime\nstructure.Retrieve the section\u2019s relative virtual address (RVA) and add it to the base address of the module.\nThe following is an example implementation:\ndef find_py_runtime_windows(pid: int) -> int:\n# Step 1: Try to find the Python executable in memory\nbinary_path, base_address = find_loaded_module(\npid, name_contains=\"python\"\n)\n# Step 2: Fallback to shared pythonXY.dll if the executable is not\n# found\nif binary_path is None:\nbinary_path, base_address = find_loaded_module(\npid, name_contains=\"python3\"\n)\n# Step 3: Parse PE section headers to get the RVA of the PyRuntime\n# section. 
The section name appears as \"PyRuntim\" due to the\n# 8-character limit defined by the PE format (IMAGE_SIZEOF_SHORT_NAME).\nsection_rva = parse_pe_section_offset(binary_path, \"PyRuntim\")\n# Step 4: Compute PyRuntime address in memory\nreturn base_address + section_rva\nOn Windows, accessing another process\u2019s memory requires using the Windows API\nfunctions like CreateToolhelp32Snapshot()\nand Module32First()/Module32Next()\nto enumerate loaded modules. The OpenProcess()\nfunction provides a handle to\naccess the target process\u2019s memory space, enabling memory operations through\nReadProcessMemory()\n.\nThe process memory can be examined by enumerating loaded modules to find the\nPython binary or DLL. When found, its PE headers need to be parsed to locate the\nPyRuntime\nstructure.\nThe PE format organizes code and data into sections. The PyRuntime\nstructure\nlives in a section named \u201cPyRuntim\u201d (truncated from \u201cPyRuntime\u201d due to PE\u2019s\n8-character name limit). The actual runtime address calculation involves finding\nthe module\u2019s base address from the module entry, then locating our target\nsection in the PE headers. The final address is computed by combining the base\naddress with the section\u2019s virtual address from the PE section headers.\nNote that accessing another process\u2019s memory on Windows typically requires\nappropriate privileges - either administrative access or the SeDebugPrivilege\nprivilege granted to the debugging process.\nReading _Py_DebugOffsets\u00b6\nOnce the address of the PyRuntime\nstructure has been determined, the next\nstep is to read the _Py_DebugOffsets\nstructure located at the beginning of\nthe PyRuntime\nblock.\nThis structure provides version-specific field offsets that are needed to safely read interpreter and thread state memory. 
These offsets vary between CPython versions and must be checked before use to ensure they are compatible.\nTo read and check the debug offsets, follow these steps:\nRead memory from the target process starting at the\nPyRuntime\naddress, covering the same number of bytes as the\n_Py_DebugOffsets\nstructure. This structure is located at the very start of the\nPyRuntime\nmemory block. Its layout is defined in CPython\u2019s internal headers and stays the same within a given minor version, but may change in major versions.\nCheck that the structure contains valid data:\nThe\ncookie\nfield must match the expected debug marker.\nThe\nversion\nfield must match the version of the Python interpreter used by the debugger. If either the debugger or the target process is using a pre-release version (for example, an alpha, beta, or release candidate), the versions must match exactly.\nThe\nfree_threaded\nfield must have the same value in both the debugger and the target process.\nIf the structure is valid, the offsets it contains can be used to locate fields in memory. 
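The validation just described can be sketched with `struct`. The field sizes, order, cookie value, and version encoding below are illustrative assumptions, not CPython's actual layout, which must be taken from the interpreter's internal headers:

```python
import struct
from dataclasses import dataclass

# Hypothetical fixed-size header: 8-byte cookie, 64-bit version,
# 64-bit free_threaded flag (layout assumed for illustration).
HEADER_FMT = "<8sQQ"
HEADER_SIZE = struct.calcsize(HEADER_FMT)

EXPECTED_COOKIE = b"xdebugpy"      # assumed marker value
LOCAL_PYTHON_VERSION = 0x030E0000  # assumed hex-encoded version
LOCAL_FREE_THREADED = 0

@dataclass
class DebugOffsets:
    cookie: bytes
    version: int
    free_threaded: int

def parse_and_validate(data: bytes) -> DebugOffsets:
    """Deserialize the header read from the target process and apply
    the three checks described above."""
    cookie, version, free_threaded = struct.unpack_from(HEADER_FMT, data)
    if cookie != EXPECTED_COOKIE:
        raise RuntimeError("Invalid or missing debug cookie")
    if version != LOCAL_PYTHON_VERSION:
        raise RuntimeError("Mismatch between caller and target versions")
    if free_threaded != LOCAL_FREE_THREADED:
        raise RuntimeError("Mismatch in free-threaded configuration")
    return DebugOffsets(cookie, version, free_threaded)
```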
If any check fails, the debugger should stop the operation to avoid reading memory in the wrong format.\nThe following is an example implementation that reads and checks\n_Py_DebugOffsets\n:\ndef read_debug_offsets(pid: int, py_runtime_addr: int) -> DebugOffsets:\n    # Step 1: Read memory from the target process at the PyRuntime address\n    data = read_process_memory(\n        pid, address=py_runtime_addr, size=DEBUG_OFFSETS_SIZE\n    )\n    # Step 2: Deserialize the raw bytes into a _Py_DebugOffsets structure\n    debug_offsets = parse_debug_offsets(data)\n    # Step 3: Validate the contents of the structure\n    if debug_offsets.cookie != EXPECTED_COOKIE:\n        raise RuntimeError(\"Invalid or missing debug cookie\")\n    if debug_offsets.version != LOCAL_PYTHON_VERSION:\n        raise RuntimeError(\n            \"Mismatch between caller and target Python versions\"\n        )\n    if debug_offsets.free_threaded != LOCAL_FREE_THREADED:\n        raise RuntimeError(\"Mismatch in free-threaded configuration\")\n    return debug_offsets\nWarning\nProcess suspension recommended\nTo avoid race conditions and ensure memory consistency, it is strongly recommended that the target process be suspended before performing any operations that read or write internal interpreter state. The Python runtime may concurrently mutate interpreter data structures\u2014such as creating or destroying threads\u2014during normal execution. This can result in invalid memory reads or writes.\nA debugger may suspend execution by attaching to the process with ptrace\nor by sending a SIGSTOP\nsignal. Execution should only be resumed after\ndebugger-side memory operations are complete.\nNote\nSome tools, such as profilers or sampling-based debuggers, may operate on a running process without suspension. In such cases, tools must be explicitly designed to handle partially updated or inconsistent memory. 
For most debugger implementations, suspending the process remains the safest and most robust approach.\nLocating the interpreter and thread state\u00b6\nBefore code can be injected and executed in a remote Python process, the\ndebugger must choose a thread in which to schedule execution. This is necessary\nbecause the control fields used to perform remote code injection are located in\nthe _PyRemoteDebuggerSupport\nstructure, which is embedded in a\nPyThreadState\nobject. These fields are modified by the debugger to request\nexecution of injected scripts.\nThe PyThreadState\nstructure represents a thread running inside a Python\ninterpreter. It maintains the thread\u2019s evaluation context and contains the\nfields required for debugger coordination. Locating a valid PyThreadState\nis therefore a key prerequisite for triggering execution remotely.\nA thread is typically selected based on its role or ID. In most cases, the main thread is used, but some tools may target a specific thread by its native thread ID. Once the target thread is chosen, the debugger must locate both the interpreter and the associated thread state structures in memory.\nThe relevant internal structures are defined as follows:\nPyInterpreterState\nrepresents an isolated Python interpreter instance. Each interpreter maintains its own set of imported modules, built-in state, and thread state list. Although most Python applications use a single interpreter, CPython supports multiple interpreters in the same process.PyThreadState\nrepresents a thread running within an interpreter. It contains execution state and the control fields used by the debugger.\nTo locate a thread:\nUse the offset\nruntime_state.interpreters_head\nto obtain the address of the first interpreter in thePyRuntime\nstructure. This is the entry point to the linked list of active interpreters.Use the offset\ninterpreter_state.threads_main\nto access the main thread state associated with the selected interpreter. 
This is typically the most reliable thread to target.Optionally, use the offset\ninterpreter_state.threads_head\nto iterate through the linked list of all thread states. EachPyThreadState\nstructure contains anative_thread_id\nfield, which may be compared to a target thread ID to find a specific thread.Once a valid\nPyThreadState\nhas been found, its address can be used in later steps of the protocol, such as writing debugger control fields and scheduling execution.\nThe following is an example implementation that locates the main thread state:\ndef find_main_thread_state(\npid: int, py_runtime_addr: int, debug_offsets: DebugOffsets,\n) -> int:\n# Step 1: Read interpreters_head from PyRuntime\ninterp_head_ptr = (\npy_runtime_addr + debug_offsets.runtime_state.interpreters_head\n)\ninterp_addr = read_pointer(pid, interp_head_ptr)\nif interp_addr == 0:\nraise RuntimeError(\"No interpreter found in the target process\")\n# Step 2: Read the threads_main pointer from the interpreter\nthreads_main_ptr = (\ninterp_addr + debug_offsets.interpreter_state.threads_main\n)\nthread_state_addr = read_pointer(pid, threads_main_ptr)\nif thread_state_addr == 0:\nraise RuntimeError(\"Main thread state is not available\")\nreturn thread_state_addr\nThe following example demonstrates how to locate a thread by its native thread ID:\ndef find_thread_by_id(\npid: int,\ninterp_addr: int,\ndebug_offsets: DebugOffsets,\ntarget_tid: int,\n) -> int:\n# Start at threads_head and walk the linked list\nthread_ptr = read_pointer(\npid,\ninterp_addr + debug_offsets.interpreter_state.threads_head\n)\nwhile thread_ptr:\nnative_tid_ptr = (\nthread_ptr + debug_offsets.thread_state.native_thread_id\n)\nnative_tid = read_int(pid, native_tid_ptr)\nif native_tid == target_tid:\nreturn thread_ptr\nthread_ptr = read_pointer(\npid,\nthread_ptr + debug_offsets.thread_state.next\n)\nraise RuntimeError(\"Thread with the given ID was not found\")\nOnce a valid thread state has been located, the debugger can 
proceed with modifying its control fields and scheduling execution, as described in the next section.\nWriting control information\u00b6\nOnce a valid PyThreadState\nstructure has been identified, the debugger may\nmodify control fields within it to schedule the execution of a specified Python\nscript. These control fields are checked periodically by the interpreter, and\nwhen set correctly, they trigger the execution of remote code at a safe point\nin the evaluation loop.\nEach PyThreadState\ncontains a _PyRemoteDebuggerSupport\nstructure used\nfor communication between the debugger and the interpreter. The locations of\nits fields are defined by the _Py_DebugOffsets\nstructure and include the\nfollowing:\ndebugger_script_path\n: A fixed-size buffer that holds the full path to a Python source file (.py\n). This file must be accessible and readable by the target process when execution is triggered.debugger_pending_call\n: An integer flag. Setting this to1\ntells the interpreter that a script is ready to be executed.eval_breaker\n: A field checked by the interpreter during execution. Setting bit 5 (_PY_EVAL_PLEASE_STOP_BIT\n, value1U << 5\n) in this field causes the interpreter to pause and check for debugger activity.\nTo complete the injection, the debugger must perform the following steps:\nWrite the full script path into the\ndebugger_script_path\nbuffer.Set\ndebugger_pending_call\nto1\n.Read the current value of\neval_breaker\n, set bit 5 (_PY_EVAL_PLEASE_STOP_BIT\n), and write the updated value back. 
This signals the interpreter to check for debugger activity.\nThe following is an example implementation:\ndef inject_script(\n    pid: int,\n    thread_state_addr: int,\n    debug_offsets: DebugOffsets,\n    script_path: str\n) -> None:\n    # Compute the base offset of _PyRemoteDebuggerSupport\n    support_base = (\n        thread_state_addr +\n        debug_offsets.debugger_support.remote_debugger_support\n    )\n    # Step 1: Write the script path into debugger_script_path\n    script_path_ptr = (\n        support_base +\n        debug_offsets.debugger_support.debugger_script_path\n    )\n    write_string(pid, script_path_ptr, script_path)\n    # Step 2: Set debugger_pending_call to 1\n    pending_ptr = (\n        support_base +\n        debug_offsets.debugger_support.debugger_pending_call\n    )\n    write_int(pid, pending_ptr, 1)\n    # Step 3: Set _PY_EVAL_PLEASE_STOP_BIT (bit 5, value 1 << 5) in\n    # eval_breaker\n    eval_breaker_ptr = (\n        thread_state_addr +\n        debug_offsets.debugger_support.eval_breaker\n    )\n    breaker = read_int(pid, eval_breaker_ptr)\n    breaker |= (1 << 5)\n    write_int(pid, eval_breaker_ptr, breaker)\nOnce these fields are set, the debugger may resume the process (if it was suspended). The interpreter will process the request at the next safe evaluation point, load the script from disk, and execute it.\nIt is the responsibility of the debugger to ensure that the script file remains present and accessible to the target process during execution.\nNote\nScript execution is asynchronous. The script file cannot be deleted immediately after injection. The debugger should wait until the injected script has produced an observable effect before removing the file. This effect depends on what the script is designed to do. For example, a debugger might wait until the remote process connects back to a socket before removing the script. 
Once such an effect is observed, it is safe to assume the file is no longer needed.\nSummary\u00b6\nTo inject and execute a Python script in a remote process:\nLocate the\nPyRuntime\nstructure in the target process\u2019s memory.\nRead and validate the\n_Py_DebugOffsets\nstructure at the beginning of\nPyRuntime\n.\nUse the offsets to locate a valid\nPyThreadState\n.\nWrite the path to a Python script into\ndebugger_script_path\n.\nSet the\ndebugger_pending_call\nflag to 1\n.\nSet\n_PY_EVAL_PLEASE_STOP_BIT\nin the\neval_breaker\nfield.\nResume the process (if suspended). The script will execute at the next safe evaluation point.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 5877}
{"url": "https://docs.python.org/3/tutorial/inputoutput.html", "title": "Input and Output", "content": "7. Input and Output\u00b6\nThere are several ways to present the output of a program; data can be printed in a human-readable form, or written to a file for future use. This chapter will discuss some of the possibilities.\n7.1. Fancier Output Formatting\u00b6\nSo far we\u2019ve encountered two ways of writing values: expression statements and\nthe print()\nfunction. (A third way is using the write()\nmethod\nof file objects; the standard output file can be referenced as sys.stdout\n.\nSee the Library Reference for more information on this.)\nOften you\u2019ll want more control over the formatting of your output than simply printing space-separated values. There are several ways to format output.\nTo use formatted string literals, begin a string with\nf\nor F\nbefore the opening quotation mark or triple quotation mark. Inside this string, you can write a Python expression between\n{\nand\n}\ncharacters that can refer to variables or literal values.\n>>> year = 2016\n>>> event = 'Referendum'\n>>> f'Results of the {year} {event}'\n'Results of the 2016 Referendum'\nThe\nstr.format()\nmethod of strings requires more manual effort. 
You\u2019ll still use{\nand}\nto mark where a variable will be substituted and can provide detailed formatting directives, but you\u2019ll also need to provide the information to be formatted. In the following code block there are two examples of how to format variables:>>> yes_votes = 42_572_654 >>> total_votes = 85_705_149 >>> percentage = yes_votes / total_votes >>> '{:-9} YES votes {:2.2%}'.format(yes_votes, percentage) ' 42572654 YES votes 49.67%'\nNotice how the\nyes_votes\nare padded with spaces and a negative sign only for negative numbers. The example also printspercentage\nmultiplied by 100, with 2 decimal places and followed by a percent sign (see Format Specification Mini-Language for details).Finally, you can do all the string handling yourself by using string slicing and concatenation operations to create any layout you can imagine. The string type has some methods that perform useful operations for padding strings to a given column width.\nWhen you don\u2019t need fancy output but just want a quick display of some\nvariables for debugging purposes, you can convert any value to a string with\nthe repr()\nor str()\nfunctions.\nThe str()\nfunction is meant to return representations of values which are\nfairly human-readable, while repr()\nis meant to generate representations\nwhich can be read by the interpreter (or will force a SyntaxError\nif\nthere is no equivalent syntax). For objects which don\u2019t have a particular\nrepresentation for human consumption, str()\nwill return the same value as\nrepr()\n. Many values, such as numbers or structures like lists and\ndictionaries, have the same representation using either function. 
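The claim that repr() output can be read back by the interpreter can be demonstrated with a round trip through eval() (shown purely as an illustration; eval() on untrusted input is unsafe):

```python
s = 'hello, world\n'

# repr() produces interpreter-readable text: evaluating it again
# yields an equal string. str() returns the value itself for strings.
assert eval(repr(s)) == s
assert str(s) == s

# Numbers and simple containers look the same under both functions.
assert str(42) == repr(42) == '42'
assert str([1, 2]) == repr([1, 2]) == '[1, 2]'
print('round trip ok')
# → round trip ok
```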
Strings, in\nparticular, have two distinct representations.\nSome examples:\n>>> s = 'Hello, world.'\n>>> str(s)\n'Hello, world.'\n>>> repr(s)\n\"'Hello, world.'\"\n>>> str(1/7)\n'0.14285714285714285'\n>>> x = 10 * 3.25\n>>> y = 200 * 200\n>>> s = 'The value of x is ' + repr(x) + ', and y is ' + repr(y) + '...'\n>>> print(s)\nThe value of x is 32.5, and y is 40000...\n>>> # The repr() of a string adds string quotes and backslashes:\n>>> hello = 'hello, world\\n'\n>>> hellos = repr(hello)\n>>> print(hellos)\n'hello, world\\n'\n>>> # The argument to repr() may be any Python object:\n>>> repr((x, y, ('spam', 'eggs')))\n\"(32.5, 40000, ('spam', 'eggs'))\"\nThe string\nmodule contains support for a simple templating approach\nbased upon regular expressions, via string.Template\n.\nThis offers yet another way to substitute values into strings,\nusing placeholders like $x\nand replacing them with values from a dictionary.\nThis syntax is easy to use, although it offers much less control for formatting.\n7.1.1. Formatted String Literals\u00b6\nFormatted string literals (also called f-strings for\nshort) let you include the value of Python expressions inside a string by\nprefixing the string with f\nor F\nand writing expressions as\n{expression}\n.\nAn optional format specifier can follow the expression. This allows greater control over how the value is formatted. The following example rounds pi to three places after the decimal:\n>>> import math\n>>> print(f'The value of pi is approximately {math.pi:.3f}.')\nThe value of pi is approximately 3.142.\nPassing an integer after the ':'\nwill cause that field to be a minimum\nnumber of characters wide. This is useful for making columns line up.\n>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 7678}\n>>> for name, phone in table.items():\n... 
print(f'{name:10} ==> {phone:10d}')\n...\nSjoerd ==> 4127\nJack ==> 4098\nDcab ==> 7678\nOther modifiers can be used to convert the value before it is formatted.\n'!a'\napplies ascii()\n, '!s'\napplies str()\n, and '!r'\napplies repr()\n:\n>>> animals = 'eels'\n>>> print(f'My hovercraft is full of {animals}.')\nMy hovercraft is full of eels.\n>>> print(f'My hovercraft is full of {animals!r}.')\nMy hovercraft is full of 'eels'.\nThe =\nspecifier can be used to expand an expression to the text of the\nexpression, an equal sign, then the representation of the evaluated expression:\n>>> bugs = 'roaches'\n>>> count = 13\n>>> area = 'living room'\n>>> print(f'Debugging {bugs=} {count=} {area=}')\nDebugging bugs='roaches' count=13 area='living room'\nSee self-documenting expressions for more information\non the =\nspecifier. For a reference on these format specifications, see\nthe reference guide for the Format Specification Mini-Language.\n7.1.2. The String format() Method\u00b6\nBasic usage of the str.format()\nmethod looks like this:\n>>> print('We are the {} who say \"{}!\"'.format('knights', 'Ni'))\nWe are the knights who say \"Ni!\"\nThe brackets and characters within them (called format fields) are replaced with\nthe objects passed into the str.format()\nmethod. A number in the\nbrackets can be used to refer to the position of the object passed into the\nstr.format()\nmethod.\n>>> print('{0} and {1}'.format('spam', 'eggs'))\nspam and eggs\n>>> print('{1} and {0}'.format('spam', 'eggs'))\neggs and spam\nIf keyword arguments are used in the str.format()\nmethod, their values\nare referred to by using the name of the argument.\n>>> print('This {food} is {adjective}.'.format(\n... food='spam', adjective='absolutely horrible'))\nThis spam is absolutely horrible.\nPositional and keyword arguments can be arbitrarily combined:\n>>> print('The story of {0}, {1}, and {other}.'.format('Bill', 'Manfred',\n... 
other='Georg'))\nThe story of Bill, Manfred, and Georg.\nIf you have a really long format string that you don\u2019t want to split up, it\nwould be nice if you could reference the variables to be formatted by name\ninstead of by position. This can be done by simply passing the dict and using\nsquare brackets '[]'\nto access the keys.\n>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678}\n>>> print('Jack: {0[Jack]:d}; Sjoerd: {0[Sjoerd]:d}; '\n... 'Dcab: {0[Dcab]:d}'.format(table))\nJack: 4098; Sjoerd: 4127; Dcab: 8637678\nThis could also be done by passing the table\ndictionary as keyword arguments with the **\nnotation.\n>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678}\n>>> print('Jack: {Jack:d}; Sjoerd: {Sjoerd:d}; Dcab: {Dcab:d}'.format(**table))\nJack: 4098; Sjoerd: 4127; Dcab: 8637678\nThis is particularly useful in combination with the built-in function\nvars()\n, which returns a dictionary containing all local variables:\n>>> table = {k: str(v) for k, v in vars().items()}\n>>> message = \" \".join([f'{k}: ' + '{' + k +'};' for k in table.keys()])\n>>> print(message.format(**table))\n__name__: __main__; __doc__: None; __package__: None; __loader__: ...\nAs an example, the following lines produce a tidily aligned set of columns giving integers and their squares and cubes:\n>>> for x in range(1, 11):\n... print('{0:2d} {1:3d} {2:4d}'.format(x, x*x, x*x*x))\n...\n1 1 1\n2 4 8\n3 9 27\n4 16 64\n5 25 125\n6 36 216\n7 49 343\n8 64 512\n9 81 729\n10 100 1000\nFor a complete overview of string formatting with str.format()\n, see\nFormat String Syntax.\n7.1.3. Manual String Formatting\u00b6\nHere\u2019s the same table of squares and cubes, formatted manually:\n>>> for x in range(1, 11):\n... print(repr(x).rjust(2), repr(x*x).rjust(3), end=' ')\n... # Note use of 'end' on previous line\n... 
print(repr(x*x*x).rjust(4))\n...\n1 1 1\n2 4 8\n3 9 27\n4 16 64\n5 25 125\n6 36 216\n7 49 343\n8 64 512\n9 81 729\n10 100 1000\n(Note that the one space between each column was added by the\nway print()\nworks: it always adds spaces between its arguments.)\nThe str.rjust()\nmethod of string objects right-justifies a string in a\nfield of a given width by padding it with spaces on the left. There are\nsimilar methods str.ljust()\nand str.center()\n. These methods do\nnot write anything, they just return a new string. If the input string is too\nlong, they don\u2019t truncate it, but return it unchanged; this will mess up your\ncolumn lay-out but that\u2019s usually better than the alternative, which would be\nlying about a value. (If you really want truncation you can always add a\nslice operation, as in x.ljust(n)[:n]\n.)\nThere is another method, str.zfill()\n, which pads a numeric string on the\nleft with zeros. It understands about plus and minus signs:\n>>> '12'.zfill(5)\n'00012'\n>>> '-3.14'.zfill(7)\n'-003.14'\n>>> '3.14159265359'.zfill(5)\n'3.14159265359'\n7.1.4. Old string formatting\u00b6\nThe % operator (modulo) can also be used for string formatting.\nGiven format % values\n(where format is a string),\n%\nconversion specifications in format are replaced with\nzero or more elements of values.\nThis operation is commonly known as string\ninterpolation. For example:\n>>> import math\n>>> print('The value of pi is approximately %5.3f.' % math.pi)\nThe value of pi is approximately 3.142.\nMore information can be found in the printf-style String Formatting section.\n7.2. Reading and Writing Files\u00b6\nopen()\nreturns a file object, and is most commonly used with\ntwo positional arguments and one keyword argument:\nopen(filename, mode, encoding=None)\n>>> f = open('workfile', 'w', encoding=\"utf-8\")\nThe first argument is a string containing the filename. 
The second argument is\nanother string containing a few characters describing the way in which the file\nwill be used. mode can be 'r'\nwhen the file will only be read, 'w'\nfor only writing (an existing file with the same name will be erased), and\n'a'\nopens the file for appending; any data written to the file is\nautomatically added to the end. 'r+'\nopens the file for both reading and\nwriting. The mode argument is optional; 'r'\nwill be assumed if it\u2019s\nomitted.\nNormally, files are opened in text mode, meaning that you read and write\nstrings from and to the file, encoded in a specific encoding.\nIf encoding is not specified, the default is platform dependent\n(see open()\n).\nBecause UTF-8 is the modern de-facto standard, encoding=\"utf-8\"\nis\nrecommended unless you know that you need to use a different encoding.\nAppending a 'b'\nto the mode opens the file in binary mode.\nBinary mode data is read and written as bytes\nobjects.\nYou cannot specify an encoding when opening a file in binary mode.\nIn text mode, the default when reading is to convert platform-specific line\nendings (\\n\non Unix, \\r\\n\non Windows) to just \\n\n. When writing in\ntext mode, the default is to convert occurrences of \\n\nback to\nplatform-specific line endings. This behind-the-scenes modification\nto file data is fine for text files, but will corrupt binary data like that in\nJPEG\nor EXE\nfiles. Be very careful to use binary mode when\nreading and writing such files.\nIt is good practice to use the with\nkeyword when dealing\nwith file objects. The advantage is that the file is properly closed\nafter its suite finishes, even if an exception is raised at some\npoint. Using with\nis also much shorter than writing\nequivalent try\n-finally\nblocks:\n>>> with open('workfile', encoding=\"utf-8\") as f:\n... 
read_data = f.read()\n>>> # We can check that the file has been automatically closed.\n>>> f.closed\nTrue\nIf you\u2019re not using the with\nkeyword, then you should call\nf.close()\nto close the file and immediately free up any system\nresources used by it.\nWarning\nCalling f.write()\nwithout using the with\nkeyword or calling\nf.close()\nmight result in the arguments\nof f.write()\nnot being completely written to the disk, even if the\nprogram exits successfully.\nAfter a file object is closed, either by a with\nstatement\nor by calling f.close()\n, attempts to use the file object will\nautomatically fail.\n>>> f.close()\n>>> f.read()\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nValueError: I/O operation on closed file.\n7.2.1. Methods of File Objects\u00b6\nThe rest of the examples in this section will assume that a file object called\nf\nhas already been created.\nTo read a file\u2019s contents, call f.read(size)\n, which reads some quantity of\ndata and returns it as a string (in text mode) or bytes object (in binary mode).\nsize is an optional numeric argument. When size is omitted or negative, the\nentire contents of the file will be read and returned; it\u2019s your problem if the\nfile is twice as large as your machine\u2019s memory. Otherwise, at most size\ncharacters (in text mode) or size bytes (in binary mode) are read and returned.\nIf the end of the file has been reached, f.read()\nwill return an empty\nstring (''\n).\n>>> f.read()\n'This is the entire file.\\n'\n>>> f.read()\n''\nf.readline()\nreads a single line from the file; a newline character (\\n\n)\nis left at the end of the string, and is only omitted on the last line of the\nfile if the file doesn\u2019t end in a newline. 
This makes the return value\nunambiguous; if f.readline()\nreturns an empty string, the end of the file\nhas been reached, while a blank line is represented by '\\n'\n, a string\ncontaining only a single newline.\n>>> f.readline()\n'This is the first line of the file.\\n'\n>>> f.readline()\n'Second line of the file\\n'\n>>> f.readline()\n''\nFor reading lines from a file, you can loop over the file object. This is memory efficient, fast, and leads to simple code:\n>>> for line in f:\n... print(line, end='')\n...\nThis is the first line of the file.\nSecond line of the file\nIf you want to read all the lines of a file in a list you can also use\nlist(f)\nor f.readlines()\n.\nf.write(string)\nwrites the contents of string to the file, returning\nthe number of characters written.\n>>> f.write('This is a test\\n')\n15\nOther types of objects need to be converted \u2013 either to a string (in text mode) or a bytes object (in binary mode) \u2013 before writing them:\n>>> value = ('the answer', 42)\n>>> s = str(value) # convert the tuple to string\n>>> f.write(s)\n18\nf.tell()\nreturns an integer giving the file object\u2019s current position in the file\nrepresented as number of bytes from the beginning of the file when in binary mode and\nan opaque number when in text mode.\nTo change the file object\u2019s position, use f.seek(offset, whence)\n. The position is computed\nfrom adding offset to a reference point; the reference point is selected by\nthe whence argument. A whence value of 0 measures from the beginning\nof the file, 1 uses the current file position, and 2 uses the end of the file as\nthe reference point. 
whence can be omitted and defaults to 0, using the\nbeginning of the file as the reference point.\n>>> f = open('workfile', 'rb+')\n>>> f.write(b'0123456789abcdef')\n16\n>>> f.seek(5) # Go to the 6th byte in the file\n5\n>>> f.read(1)\nb'5'\n>>> f.seek(-3, 2) # Go to the 3rd byte before the end\n13\n>>> f.read(1)\nb'd'\nIn text files (those opened without a b\nin the mode string), only seeks\nrelative to the beginning of the file are allowed (the exception being seeking\nto the very file end with seek(0, 2)\n) and the only valid offset values are\nthose returned from the f.tell()\n, or zero. Any other offset value produces\nundefined behaviour.\nFile objects have some additional methods, such as isatty()\nand\ntruncate()\nwhich are less frequently used; consult the Library\nReference for a complete guide to file objects.\n7.2.2. Saving structured data with json\n\u00b6\nStrings can easily be written to and read from a file. Numbers take a bit more\neffort, since the read()\nmethod only returns strings, which will have to\nbe passed to a function like int()\n, which takes a string like '123'\nand returns its numeric value 123. When you want to save more complex data\ntypes like nested lists and dictionaries, parsing and serializing by hand\nbecomes complicated.\nRather than having users constantly writing and debugging code to save\ncomplicated data types to files, Python allows you to use the popular data\ninterchange format called JSON (JavaScript Object Notation). The standard module called json\ncan take Python\ndata hierarchies, and convert them to string representations; this process is\ncalled serializing. Reconstructing the data from the string representation\nis called deserializing. Between serializing and deserializing, the\nstring representing the object may have been stored in a file or data, or\nsent over a network connection to some distant machine.\nNote\nThe JSON format is commonly used by modern applications to allow for data exchange. 
Many programmers are already familiar with it, which makes it a good choice for interoperability.\nIf you have an object x\n, you can view its JSON string representation with a\nsimple line of code:\n>>> import json\n>>> x = [1, 'simple', 'list']\n>>> json.dumps(x)\n'[1, \"simple\", \"list\"]'\nAnother variant of the dumps()\nfunction, called dump()\n,\nsimply serializes the object to a text file. So if f\nis a\ntext file object opened for writing, we can do this:\njson.dump(x, f)\nTo decode the object again, if f\nis a binary file or\ntext file object which has been opened for reading:\nx = json.load(f)\nNote\nJSON files must be encoded in UTF-8. Use encoding=\"utf-8\"\nwhen opening a\nJSON file as a text file for both reading and writing.\nThis simple serialization technique can handle lists and dictionaries, but\nserializing arbitrary class instances in JSON requires a bit of extra effort.\nThe reference for the json\nmodule contains an explanation of this.\nSee also\npickle\n- the pickle module\nContrary to JSON, pickle is a protocol which allows the serialization of arbitrarily complex Python objects. As such, it is specific to Python and cannot be used to communicate with applications written in other languages. 
It is also insecure by default: deserializing pickle data coming from an untrusted source can execute arbitrary code, if the data was crafted by a skilled attacker.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4570}
{"url": 
"https://docs.python.org/3/using/configure.html", "title": "Configure Python", "content": "3. Configure Python\u00b6\n3.1. Build Requirements\u00b6\nTo build CPython, you will need:\nA C11 compiler. Optional C11 features are not required.\nOn Windows, Microsoft Visual Studio 2017 or later is required.\nSupport for IEEE 754 floating-point numbers and floating-point Not-a-Number (NaN).\nSupport for threads.\nChanged in version 3.5: On Windows, Visual Studio 2015 or later is now required.\nChanged in version 3.6: Selected C99 features, like \nand static inline\nfunctions,\nare now required.\nChanged in version 3.7: Thread support is now required.\nChanged in version 3.11: C11 compiler, IEEE 754 and NaN support are now required. On Windows, Visual Studio 2017 or later is required.\nSee also PEP 7 \u201cStyle Guide for C Code\u201d and PEP 11 \u201cCPython platform support\u201d.\n3.1.1. Requirements for optional modules\u00b6\nSome optional modules of the standard library require third-party libraries installed for development (for example, header files must be available).\nMissing requirements are reported in the configure\noutput.\nModules that are missing due to missing dependencies are listed near the end\nof the make\noutput,\nsometimes using an internal name, for example, _ctypes\nfor ctypes\nmodule.\nIf you distribute a CPython interpreter without optional modules, it\u2019s best practice to advise users, who generally expect that standard library modules are available.\nDependencies to build optional modules are:\nDependency |\nMinimum version |\nPython module |\n|---|---|---|\n3.3.0 recommended |\n||\n2.5.0 |\n||\n|\n||\n3.0.18 recommended\n(1.1.1 minimum)\n|\n||\n3.15.2 |\n||\n8.5.12 |\n||\n1.2.2.1 |\n||\n1.4.5 |\nNote that the table does not include all optional modules; in particular,\nplatform-specific modules like winreg\nare not listed here.\nSee also\nThe devguide includes a full list of dependencies required to build all modules and instructions on how 
to install them on common platforms.\n--with-system-expat\nallows building with an external libexpat library.\nChanged in version 3.1: Tcl/Tk version 8.3.1 is now required for tkinter\n.\nChanged in version 3.5: Tcl/Tk version 8.4 is now required for tkinter\n.\nChanged in version 3.10: OpenSSL 1.1.1 is now required for hashlib\nand ssl\n.\nSQLite 3.7.15 is now required for sqlite3\n.\nChanged in version 3.11: Tcl/Tk version 8.5.12 is now required for tkinter\n.\nChanged in version 3.13: SQLite 3.15.2 is now required for sqlite3\n.\n3.2. Generated files\u00b6\nTo reduce build dependencies, Python source code contains multiple generated files. Commands to regenerate all generated files:\nmake regen-all\nmake regen-stdlib-module-names\nmake regen-limited-abi\nmake regen-configure\nThe Makefile.pre.in\nfile documents generated files, their inputs, and tools used\nto regenerate them. Search for regen-*\nmake targets.\n3.2.1. configure script\u00b6\nThe make regen-configure\ncommand regenerates the aclocal.m4\nfile and\nthe configure\nscript using the Tools/build/regen-configure.sh\nshell\nscript which uses an Ubuntu container to get the same tools versions and have a\nreproducible output.\nThe container is optional; the following command can be run locally:\nautoreconf -ivf -Werror\nThe generated files can change depending on the exact versions of the\ntools used.\nThe container that CPython uses has\nAutoconf 2.72,\naclocal\nfrom Automake 1.16.5,\nand pkg-config 1.8.1.\nChanged in version 3.13: Autoconf 2.71 and aclocal 1.16.5 are now used to regenerate\nconfigure\n.\nChanged in version 3.14: Autoconf 2.72 is now used to regenerate configure\n.\n3.3. Configure Options\u00b6\nList all configure\nscript options using:\n./configure --help\nSee also the Misc/SpecialBuilds.txt\nin the Python source distribution.\n3.3.1. 
General Options\u00b6\n- --enable-loadable-sqlite-extensions\u00b6\nSupport loadable extensions in the\n_sqlite\nextension module (default is no) of the sqlite3\nmodule.\nSee the\nsqlite3.Connection.enable_load_extension()\nmethod of the sqlite3\nmodule.\nAdded in version 3.6.\n- --enable-big-digits=[15|30]\u00b6\nDefine the size in bits of Python\nint\ndigits: 15 or 30 bits.\nBy default, the digit size is 30.\nDefine\nPYLONG_BITS_IN_DIGIT\nto 15\nor 30\n.\n- --with-suffix=SUFFIX\u00b6\nSet the Python executable suffix to SUFFIX.\nThe default suffix is\n.exe\non Windows and macOS (python.exe\nexecutable), .js\non Emscripten node, .html\non Emscripten browser, .wasm\non WASI, and an empty string on other platforms (python\nexecutable).\nChanged in version 3.11: The default suffix on WASM platforms is one of\n.js\n, .html\nor .wasm\n.\n- --with-tzpath=\u00b6\nSelect the default time zone search path for\nzoneinfo.TZPATH\n. See the Compile-time configuration of the zoneinfo\nmodule.\nDefault:\n/usr/share/zoneinfo:/usr/lib/zoneinfo:/usr/share/lib/zoneinfo:/etc/zoneinfo\n.\nSee the\nos.pathsep\npath separator.\nAdded in version 3.9.\n- --without-decimal-contextvar\u00b6\nBuild the\n_decimal\nextension module using a thread-local context rather than a coroutine-local context (default); see the decimal\nmodule.\nSee\ndecimal.HAVE_CONTEXTVAR\nand the contextvars\nmodule.\nAdded in version 3.9.\n- --with-dbmliborder=\u00b6\nOverride the order in which db backends are checked for the\ndbm\nmodule.\nA valid value is a colon (\n:\n) separated string with the backend names: ndbm\n; gdbm\n; bdb\n.\n- --without-c-locale-coercion\u00b6\nDisable C locale coercion to a UTF-8 based locale (enabled by default).\nDon\u2019t define the\nPY_COERCE_C_LOCALE\nmacro.\nSee\nPYTHONCOERCECLOCALE\nand PEP 538.\n- --with-platlibdir=DIRNAME\u00b6\nPython library directory name (default is\nlib\n).\nFedora and SuSE use\nlib64\non 64-bit platforms.\nSee\nsys.platlibdir\n.\nAdded in version 3.9.\n- --with-wheel-pkg-dir=PATH\u00b6\nDirectory of wheel 
packages used by the\nensurepip\nmodule (none by default).\nSome Linux distribution packaging policies recommend against bundling dependencies. For example, Fedora installs wheel packages in the\n/usr/share/python-wheels/\ndirectory and doesn\u2019t install the ensurepip._bundled\npackage.\nAdded in version 3.10.\n- --with-pkg-config=[check|yes|no]\u00b6\nWhether configure should use pkg-config to detect build dependencies.\ncheck\n(default): pkg-config is optional\nyes\n: pkg-config is mandatory\nno\n: configure does not use pkg-config even when present\nAdded in version 3.11.\n- --enable-pystats\u00b6\nTurn on internal Python performance statistics gathering.\nBy default, statistics gathering is off. Use the\npython3 -X pystats\ncommand or set the PYTHONSTATS=1\nenvironment variable to turn on statistics gathering at Python startup.\nAt Python exit, dump statistics if statistics gathering was on and not cleared.\nEffects:\nAdd the\n-X pystats\ncommand line option.\nAdd the\nPYTHONSTATS\nenvironment variable.\nDefine the\nPy_STATS\nmacro.\nAdd functions to the\nsys\nmodule:\nsys._stats_on()\n: Turns on statistics gathering.\nsys._stats_off()\n: Turns off statistics gathering.\nsys._stats_clear()\n: Clears the statistics.\nsys._stats_dump()\n: Dumps statistics to file, and clears the statistics.\nThe statistics will be dumped to an arbitrary (probably unique) file in\n/tmp/py_stats/\n(Unix) or C:\\temp\\py_stats\\\n(Windows). 
If that directory does not exist, results will be printed on stderr.\nUse\nTools/scripts/summarize_stats.py\nto read the stats.\nStatistics:\nOpcode:\nSpecialization: success, failure, hit, deferred, miss, deopt, failures;\nExecution count;\nPair count.\nCall:\nInlined Python calls;\nPyEval calls;\nFrames pushed;\nFrame object created;\nEval calls: vector, generator, legacy, function VECTORCALL, build class, slot, function \u201cex\u201d, API, method.\nObject:\nincref and decref;\ninterpreter incref and decref;\nallocations: all, 512 bytes, 4 kiB, big;\nfree;\nto/from free lists;\ndictionary materialized/dematerialized;\ntype cache;\noptimization attempts;\noptimization traces created/executed;\nuops executed.\nGarbage collector:\nGarbage collections;\nObjects visited;\nObjects collected.\nAdded in version 3.11.\n- --disable-gil\u00b6\nEnables support for running Python without the global interpreter lock (GIL): free-threaded build.\nDefines the\nPy_GIL_DISABLED\nmacro and adds \"t\"\nto sys.abiflags\n.\nSee Free-threaded CPython for more detail.\nAdded in version 3.13.\n- --enable-experimental-jit=[no|yes|yes-off|interpreter]\u00b6\nIndicate how to integrate the experimental just-in-time compiler.\nno\n: Don\u2019t build the JIT.\nyes\n: Enable the JIT. To disable it at runtime, set the environment variable PYTHON_JIT=0\n.\nyes-off\n: Build the JIT, but disable it by default. To enable it at runtime, set the environment variable PYTHON_JIT=1\n.\ninterpreter\n: Enable the \u201cJIT interpreter\u201d (only useful for those debugging the JIT itself). To disable it at runtime, set the environment variable PYTHON_JIT=0\n.\n--enable-experimental-jit=no\nis the default behavior if the option is not provided, and --enable-experimental-jit\nis shorthand for --enable-experimental-jit=yes\n. 
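The configure-time and runtime toggles described above can be sketched as a build recipe (a minimal sketch for a Unix source checkout; the parallelism level is an illustrative assumption, not part of the documentation):

```shell
# Build the JIT but leave it disabled by default ("yes-off", as documented above).
./configure --enable-experimental-jit=yes-off
make -j4

# Opt in at runtime via the documented environment variable.
PYTHON_JIT=1 ./python -c 'import sys; print(sys.version)'
```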
See Tools/jit/README.md\nfor more information, including how to install the necessary build-time dependencies.\nNote\nWhen building CPython with JIT enabled, ensure that your system has Python 3.11 or later installed.\nAdded in version 3.13.\n- PKG_CONFIG\u00b6\nPath to\npkg-config\nutility.\n- PKG_CONFIG_LIBDIR\u00b6\n- PKG_CONFIG_PATH\u00b6\npkg-config\noptions.\n3.3.2. C compiler options\u00b6\n- CC\u00b6\nC compiler command.\n- CFLAGS\u00b6\nC compiler flags.\n- CPP\u00b6\nC preprocessor command.\n- CPPFLAGS\u00b6\nC preprocessor flags, e.g.\n-Iinclude_dir\n.\n3.3.3. Linker options\u00b6\n- LDFLAGS\u00b6\nLinker flags, e.g.\n-Llibrary_directory\n.\n- LIBS\u00b6\nLibraries to pass to the linker, e.g.\n-llibrary\n.\n- MACHDEP\u00b6\nName for machine-dependent library files.\n3.3.4. Options for third-party dependencies\u00b6\nAdded in version 3.11.\n- BZIP2_CFLAGS\u00b6\n- BZIP2_LIBS\u00b6\nC compiler and linker flags to link Python to\nlibbz2\n, used by the bz2\nmodule, overriding pkg-config\n.\n- CURSES_CFLAGS\u00b6\n- CURSES_LIBS\u00b6\nC compiler and linker flags for\nlibncurses\nor libncursesw\n, used by the curses\nmodule, overriding pkg-config\n.\n- GDBM_CFLAGS\u00b6\n- GDBM_LIBS\u00b6\nC compiler and linker flags for\ngdbm\n.\n- LIBEDIT_CFLAGS\u00b6\n- LIBEDIT_LIBS\u00b6\nC compiler and linker flags for\nlibedit\n, used by the readline\nmodule, overriding pkg-config\n.\n- LIBFFI_CFLAGS\u00b6\n- LIBMPDEC_CFLAGS\u00b6\n- LIBMPDEC_LIBS\u00b6\nC compiler and linker flags for\nlibmpdec\n, used by the decimal\nmodule, overriding pkg-config\n.\nNote\nThese environment variables have no effect unless\n--with-system-libmpdec\nis specified.\n- LIBLZMA_CFLAGS\u00b6\n- LIBREADLINE_CFLAGS\u00b6\n- LIBREADLINE_LIBS\u00b6\nC compiler and linker flags for\nlibreadline\n, used by the readline\nmodule, overriding pkg-config\n.\n- LIBSQLITE3_CFLAGS\u00b6\n- LIBSQLITE3_LIBS\u00b6\nC compiler and linker flags for\nlibsqlite3\n, used by the sqlite3\nmodule, overriding pkg-config\n.\n- LIBUUID_CFLAGS\u00b6\n- 
LIBZSTD_CFLAGS\u00b6\n- LIBZSTD_LIBS\u00b6\nC compiler and linker flags for\nlibzstd\n, used by the compression.zstd\nmodule, overriding pkg-config\n.\nAdded in version 3.14.\n- PANEL_CFLAGS\u00b6\n- PANEL_LIBS\u00b6\nC compiler and linker flags for\nlibpanel\nor libpanelw\n, used by the curses.panel\nmodule, overriding pkg-config\n.\n- TCLTK_CFLAGS\u00b6\n- TCLTK_LIBS\u00b6\nC compiler and linker flags for TCLTK, overriding\npkg-config\n.\n- ZLIB_CFLAGS\u00b6\n3.3.5. WebAssembly Options\u00b6\n- --enable-wasm-dynamic-linking\u00b6\nTurn on dynamic linking support for WASM.\nDynamic linking enables\ndlopen\n. File size of the executable increases due to limited dead code elimination and additional features.\nAdded in version 3.11.\n- --enable-wasm-pthreads\u00b6\nTurn on pthreads support for WASM.\nAdded in version 3.11.\n3.3.6. Install Options\u00b6\n- --prefix=PREFIX\u00b6\nInstall architecture-independent files in PREFIX. On Unix, it defaults to\n/usr/local\n.\nThis value can be retrieved at runtime using\nsys.prefix\n.\nAs an example, one can use\n--prefix=\"$HOME/.local/\"\nto install Python in the home directory.\n- --exec-prefix=EPREFIX\u00b6\nInstall architecture-dependent files in EPREFIX; defaults to\n--prefix\n.\nThis value can be retrieved at runtime using\nsys.exec_prefix\n.\n3.3.7. Performance options\u00b6\nConfiguring Python using --enable-optimizations --with-lto\n(PGO + LTO) is\nrecommended for best performance. The experimental --enable-bolt\nflag can\nalso be used to improve performance.\n- --enable-optimizations\u00b6\nEnable Profile Guided Optimization (PGO) using\nPROFILE_TASK\n(disabled by default).\nThe C compiler Clang requires the\nllvm-profdata\nprogram for PGO. 
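The recommended PGO + LTO configuration above can be sketched end to end (a minimal sketch for a Unix build tree; the -j value and the altinstall step are illustrative assumptions, not part of the documentation):

```shell
# Profile Guided Optimization plus Link Time Optimization, as recommended above.
./configure --enable-optimizations --with-lto

# The PGO build runs PROFILE_TASK during the build, so it is slower than a default build.
make -j4

# altinstall avoids overwriting the system "python3" binary.
sudo make altinstall
```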
On macOS, GCC also requires it: GCC is just an alias to Clang on macOS.\nDisable also semantic interposition in libpython if\n--enable-shared\nand GCC is used: add -fno-semantic-interposition\nto the compiler and linker flags.\nNote\nDuring the build, you may encounter compiler warnings about profile data not being available for some source files. These warnings are harmless, as only a subset of the code is exercised during profile data acquisition. To disable these warnings on Clang, manually suppress them by adding\n-Wno-profile-instr-unprofiled\nto CFLAGS\n.\nAdded in version 3.6.\nChanged in version 3.10: Use\n-fno-semantic-interposition\non GCC.\n- PROFILE_TASK\u00b6\nEnvironment variable used in the Makefile: Python command line arguments for the PGO generation task.\nDefault:\n-m test --pgo --timeout=$(TESTTIMEOUT)\n.\nAdded in version 3.8.\nChanged in version 3.13: Task failure is no longer ignored silently.\n- --with-lto=[full|thin|no|yes]\u00b6\nEnable Link Time Optimization (LTO) in any build (disabled by default).\nThe C compiler Clang requires\nllvm-ar\nfor LTO (ar\non macOS), as well as an LTO-aware linker (ld.gold\nor lld\n).\nAdded in version 3.6.\nAdded in version 3.11: To use the ThinLTO feature, use\n--with-lto=thin\non Clang.\nChanged in version 3.12: Use ThinLTO as the default optimization policy on Clang if the compiler accepts the flag.\n- --enable-bolt\u00b6\nEnable usage of the BOLT post-link binary optimizer (disabled by default).\nBOLT is part of the LLVM project but is not always included in their binary distributions. This flag requires that\nllvm-bolt\nand merge-fdata\nare available.\nBOLT is still a fairly new project so this flag should be considered experimental for now. Because this tool operates on machine code its success is dependent on a combination of the build environment + the other optimization configure args + the CPU architecture, and not all combinations are supported. BOLT versions before LLVM 16 are known to crash BOLT under some scenarios. 
Use of LLVM 16 or newer for BOLT optimization is strongly encouraged.\nThe\nBOLT_INSTRUMENT_FLAGS\nand BOLT_APPLY_FLAGS\nconfigure variables can be defined to override the default set of arguments for llvm-bolt to instrument and apply BOLT data to binaries, respectively.\nAdded in version 3.12.\n- BOLT_APPLY_FLAGS\u00b6\nArguments to\nllvm-bolt\nwhen creating a BOLT optimized binary.\nAdded in version 3.12.\n- BOLT_INSTRUMENT_FLAGS\u00b6\nArguments to\nllvm-bolt\nwhen instrumenting binaries.\nAdded in version 3.12.\n- --with-computed-gotos\u00b6\nEnable computed gotos in the evaluation loop (enabled by default on supported compilers).\n- --with-tail-call-interp\u00b6\nEnable interpreters using tail calls in CPython. If enabled, enabling PGO (\n--enable-optimizations\n) is highly recommended. This option specifically requires a C compiler with proper tail call support, and the preserve_none calling convention. For example, Clang 19 and newer support this feature.\nAdded in version 3.14.\n- --without-mimalloc\u00b6\nDisable the fast mimalloc allocator (enabled by default).\nSee also the\nPYTHONMALLOC\nenvironment variable.\n- --without-pymalloc\u00b6\nDisable the specialized Python memory allocator pymalloc (enabled by default).\nSee also the\nPYTHONMALLOC\nenvironment variable.\n- --without-doc-strings\u00b6\nDisable static documentation strings to reduce the memory footprint (enabled by default). Documentation strings defined in Python are not affected.\nDon\u2019t define the\nWITH_DOC_STRINGS\nmacro.\nSee the\nPyDoc_STRVAR()\nmacro.\n- --enable-profiling\u00b6\nEnable C-level code profiling with\ngprof\n(disabled by default).\n- --with-strict-overflow\u00b6\nAdd\n-fstrict-overflow\nto the C compiler flags (by default we add -fno-strict-overflow\ninstead).\n- --without-remote-debug\u00b6\nDeactivate remote debugging support described in PEP 768 (enabled by default). 
When this flag is provided the code that allows the interpreter to schedule the execution of a Python file in a separate process as described in PEP 768 is not compiled. This includes both the functionality to schedule code to be executed and the functionality to receive code to be executed.\n-\nPy_REMOTE_DEBUG\u00b6\nThis macro is defined by default, unless Python is configured with\n--without-remote-debug\n.Note that even if the macro is defined, remote debugging may not be available (for example, on an incompatible platform).\nAdded in version 3.14.\n-\nPy_REMOTE_DEBUG\u00b6\n3.3.8. Python Debug Build\u00b6\nA debug build is Python built with the --with-pydebug\nconfigure\noption.\nEffects of a debug build:\nDisplay all warnings by default: the list of default warning filters is empty in the\nwarnings\nmodule.Add\nd\ntosys.abiflags\n.Add\nsys.gettotalrefcount()\nfunction.Add\n-X showrefcount\ncommand line option.Add\n-d\ncommand line option andPYTHONDEBUG\nenvironment variable to debug the parser.Add support for the\n__lltrace__\nvariable: enable low-level tracing in the bytecode evaluation loop if the variable is defined.Install debug hooks on memory allocators to detect buffer overflow and other memory errors.\nDefine\nPy_DEBUG\nandPy_REF_DEBUG\nmacros.Add runtime checks: code surrounded by\n#ifdef Py_DEBUG\nand#endif\n. Enableassert(...)\nand_PyObject_ASSERT(...)\nassertions: don\u2019t set theNDEBUG\nmacro (see also the--with-assertions\nconfigure option). 
Main runtime checks:Add sanity checks on the function arguments.\nUnicode and int objects are created with their memory filled with a pattern to detect usage of uninitialized objects.\nEnsure that functions which can clear or replace the current exception are not called with an exception raised.\nCheck that deallocator functions don\u2019t change the current exception.\nThe garbage collector (\ngc.collect()\nfunction) runs some basic checks on objects consistency.The\nPy_SAFE_DOWNCAST()\nmacro checks for integer underflow and overflow when downcasting from wide types to narrow types.\nSee also the Python Development Mode and the\n--with-trace-refs\nconfigure option.\nChanged in version 3.8: Release builds and debug builds are now ABI compatible: defining the\nPy_DEBUG\nmacro no longer implies the Py_TRACE_REFS\nmacro (see the\n--with-trace-refs\noption).\n3.3.9. Debug options\u00b6\n- --with-pydebug\u00b6\nBuild Python in debug mode: define the\nPy_DEBUG\nmacro (disabled by default).\n- --with-trace-refs\u00b6\nEnable tracing references for debugging purpose (disabled by default).\nEffects:\nDefine the\nPy_TRACE_REFS\nmacro.Add\nsys.getobjects()\nfunction.Add\nPYTHONDUMPREFS\nenvironment variable.\nThe\nPYTHONDUMPREFS\nenvironment variable can be used to dump objects and reference counts still alive at Python exit.Statically allocated objects are not traced.\nAdded in version 3.8.\nChanged in version 3.13: This build is now ABI compatible with release build and debug build.\n- --with-assertions\u00b6\nBuild with C assertions enabled (default is no):\nassert(...);\nand_PyObject_ASSERT(...);\n.If set, the\nNDEBUG\nmacro is not defined in theOPT\ncompiler variable.See also the\n--with-pydebug\noption (debug build) which also enables assertions.Added in version 3.6.\n- --with-valgrind\u00b6\nEnable Valgrind support (default is no).\n- --with-dtrace\u00b6\nEnable DTrace support (default is no).\nSee Instrumenting CPython with DTrace and SystemTap.\nAdded in version 
3.6.\n- --with-address-sanitizer\u00b6\nEnable AddressSanitizer memory error detector,\nasan\n(default is no). To improve ASan detection capabilities you may also want to combine this with--without-pymalloc\nto disable the specialized small-object allocator whose allocations are not tracked by ASan.Added in version 3.6.\n- --with-memory-sanitizer\u00b6\nEnable MemorySanitizer allocation error detector,\nmsan\n(default is no).Added in version 3.6.\n- --with-undefined-behavior-sanitizer\u00b6\nEnable UndefinedBehaviorSanitizer undefined behaviour detector,\nubsan\n(default is no).Added in version 3.6.\n- --with-thread-sanitizer\u00b6\nEnable ThreadSanitizer data race detector,\ntsan\n(default is no).Added in version 3.13.\n3.3.10. Linker options\u00b6\nEnable building a shared Python library:\nlibpython\n(default is no).\n- --without-static-libpython\u00b6\nDo not build\nlibpythonMAJOR.MINOR.a\nand do not installpython.o\n(built and enabled by default).Added in version 3.10.\n3.3.11. Libraries options\u00b6\n- --with-libs='lib1 ...'\u00b6\nLink against additional libraries (default is no).\n- --with-system-expat\u00b6\nBuild the\npyexpat\nmodule using an installedexpat\nlibrary (default is no).\n- --with-system-libmpdec\u00b6\nBuild the\n_decimal\nextension module using an installedmpdecimal\nlibrary, see thedecimal\nmodule (default is yes).Added in version 3.3.\nChanged in version 3.13: Default to using the installed\nmpdecimal\nlibrary.Changed in version 3.15: A bundled copy of the library will no longer be selected implicitly if an installed\nmpdecimal\nlibrary is not found. 
In Python 3.15 only, it can still be selected explicitly using--with-system-libmpdec=no\nor--without-system-libmpdec\n.Deprecated since version 3.13, will be removed in version 3.16: A copy of the\nmpdecimal\nlibrary sources will no longer be distributed with Python 3.16.See also\n- --with-readline=readline|editline\u00b6\nDesignate a backend library for the\nreadline\nmodule.readline: Use readline as the backend.\neditline: Use editline as the backend.\nAdded in version 3.10.\n- --without-readline\u00b6\nDon\u2019t build the\nreadline\nmodule (built by default).Don\u2019t define the\nHAVE_LIBREADLINE\nmacro.Added in version 3.10.\n- --with-libm=STRING\u00b6\nOverride\nlibm\nmath library to STRING (default is system-dependent).\n- --with-libc=STRING\u00b6\nOverride\nlibc\nC library to STRING (default is system-dependent).\n- --with-openssl=DIR\u00b6\nRoot of the OpenSSL directory.\nAdded in version 3.7.\n- --with-openssl-rpath=[no|auto|DIR]\u00b6\nSet runtime library directory (rpath) for OpenSSL libraries:\nno\n(default): don\u2019t set rpath;auto\n: auto-detect rpath from--with-openssl\nandpkg-config\n;DIR: set an explicit rpath.\nAdded in version 3.10.\n3.3.12. 
Security Options\u00b6\n- --with-hash-algorithm=[fnv|siphash13|siphash24]\u00b6\nSelect hash algorithm for use in\nPython/pyhash.c\n:\nsiphash13\n(default);\nsiphash24\n;\nfnv\n.\nAdded in version 3.4.\nAdded in version 3.11:\nsiphash13\nwas added and is the new default.\n- --with-builtin-hashlib-hashes=md5,sha1,sha256,sha512,sha3,blake2\u00b6\nBuilt-in hash modules:\nmd5\n;\nsha1\n;\nsha256\n;\nsha512\n;\nsha3\n(with shake);\nblake2\n.\nAdded in version 3.9.\n- --with-ssl-default-suites=[python|openssl|STRING]\u00b6\nOverride the OpenSSL default cipher suites string:\npython\n(default): use Python\u2019s preferred selection;\nopenssl\n: leave OpenSSL\u2019s defaults untouched;\nSTRING: use a custom string.\nSee the\nssl\nmodule.\nAdded in version 3.7.\nChanged in version 3.10: The settings\npython\nand STRING also set TLS 1.2 as the minimum protocol version.\n- --disable-safety\u00b6\nDisable compiler options that are recommended by OpenSSF for security reasons and that have no performance overhead. If this option is not enabled, CPython is built with these safety compiler options, with no slowdown. When this option is enabled, CPython is not built with the compiler options listed below.\nThe following compiler options are disabled with\n--disable-safety\n:\n-fstack-protector-strong: Enable run-time checks for stack-based buffer overflows.\n-Wtrampolines: Enable warnings about trampolines that require executable stacks.\nAdded in version 3.14.\n- --enable-slower-safety\u00b6\nEnable compiler options that are recommended by OpenSSF for security reasons and that incur a performance overhead. If this option is not enabled, CPython is not built with safety compiler options that have a performance impact. 
When this option is enabled, CPython will be built with the compiler options listed below.\nThe following compiler options are enabled with\n--enable-slower-safety\n:-D_FORTIFY_SOURCE=3: Fortify sources with compile- and run-time checks for unsafe libc usage and buffer overflows.\nAdded in version 3.14.\n3.3.13. macOS Options\u00b6\nSee Mac/README.rst.\n- --enable-universalsdk\u00b6\n- --enable-universalsdk=SDKDIR\u00b6\nCreate a universal binary build. SDKDIR specifies which macOS SDK should be used to perform the build (default is no).\n- --enable-framework\u00b6\n- --enable-framework=INSTALLDIR\u00b6\nCreate a Python.framework rather than a traditional Unix install. Optional INSTALLDIR specifies the installation path (default is no).\n- --with-universal-archs=ARCH\u00b6\nSpecify the kind of universal binary that should be created. This option is only valid when\n--enable-universalsdk\nis set.Options:\nuniversal2\n(x86-64 and arm64);32-bit\n(PPC and i386);64-bit\n(PPC64 and x86-64);3-way\n(i386, PPC and x86-64);intel\n(i386 and x86-64);intel-32\n(i386);intel-64\n(x86-64);all\n(PPC, i386, PPC64 and x86-64).\nNote that values for this configuration item are not the same as the identifiers used for universal binary wheels on macOS. See the Python Packaging User Guide for details on the packaging platform compatibility tags used on macOS\n- --with-framework-name=FRAMEWORK\u00b6\nSpecify the name for the python framework on macOS only valid when\n--enable-framework\nis set (default:Python\n).\n- --with-app-store-compliance\u00b6\n- --with-app-store-compliance=PATCH-FILE\u00b6\nThe Python standard library contains strings that are known to trigger automated inspection tool errors when submitted for distribution by the macOS and iOS App Stores. If enabled, this option will apply the list of patches that are known to correct app store compliance. A custom patch file can also be specified. This option is disabled by default.\nAdded in version 3.13.\n3.3.14. 
iOS Options\u00b6\nSee iOS/README.rst.\n- --enable-framework=INSTALLDIR\u00b6\nCreate a Python.framework. Unlike macOS, the INSTALLDIR argument specifying the installation path is mandatory.\n- --with-framework-name=FRAMEWORK\u00b6\nSpecify the name for the framework (default:\nPython\n).\n3.3.15. Cross Compiling Options\u00b6\nCross compiling, also known as cross building, can be used to build Python for another CPU architecture or platform. Cross compiling requires a Python interpreter for the build platform. The version of the build Python must match the version of the cross compiled host Python.\n- --build=BUILD\u00b6\nconfigure for building on BUILD, usually guessed by config.guess.\n- --host=HOST\u00b6\ncross-compile to build programs to run on HOST (target platform)\n- --with-build-python=path/to/python\u00b6\npath to build\npython\nbinary for cross compilingAdded in version 3.11.\n- CONFIG_SITE=file\u00b6\nAn environment variable that points to a file with configure overrides.\nExample config.site file:\n# config.site-aarch64 ac_cv_buggy_getaddrinfo=no ac_cv_file__dev_ptmx=yes ac_cv_file__dev_ptc=no\n- HOSTRUNNER\u00b6\nProgram to run CPython for the host platform for cross-compilation.\nAdded in version 3.11.\nCross compiling example:\nCONFIG_SITE=config.site-aarch64 ../configure \\\n--build=x86_64-pc-linux-gnu \\\n--host=aarch64-unknown-linux-gnu \\\n--with-build-python=../x86_64/python\n3.4. Python Build System\u00b6\n3.4.1. Main files of the build system\u00b6\nconfigure.ac\n=>configure\n;Makefile.pre.in\n=>Makefile\n(created byconfigure\n);pyconfig.h\n(created byconfigure\n);Modules/Setup\n: C extensions built by the Makefile usingModule/makesetup\nshell script;\n3.4.2. 
Main build steps\u00b6\nC files (\n.c\n) are built as object files (.o\n).A static\nlibpython\nlibrary (.a\n) is created from objects files.python.o\nand the staticlibpython\nlibrary are linked into the finalpython\nprogram.C extensions are built by the Makefile (see\nModules/Setup\n).\n3.4.3. Main Makefile targets\u00b6\n3.4.3.1. make\u00b6\nFor the most part, when rebuilding after editing some code or\nrefreshing your checkout from upstream, all you need to do is execute\nmake\n, which (per Make\u2019s semantics) builds the default target, the\nfirst one defined in the Makefile. By tradition (including in the\nCPython project) this is usually the all\ntarget. The\nconfigure\nscript expands an autoconf\nvariable,\n@DEF_MAKE_ALL_RULE@\nto describe precisely which targets make\nall\nwill build. The three choices are:\nprofile-opt\n(configured with--enable-optimizations\n)build_wasm\n(chosen if the host platform matcheswasm32-wasi*\norwasm32-emscripten\n)build_all\n(configured without explicitly using either of the others)\nDepending on the most recent source file changes, Make will rebuild\nany targets (object files and executables) deemed out-of-date,\nincluding running configure\nagain if necessary. Source/target\ndependencies are many and maintained manually however, so Make\nsometimes doesn\u2019t have all the information necessary to correctly\ndetect all targets which need to be rebuilt. Depending on which\ntargets aren\u2019t rebuilt, you might experience a number of problems. If\nyou have build or test problems which you can\u2019t otherwise explain,\nmake clean && make\nshould work around most dependency problems, at\nthe expense of longer build times.\n3.4.3.2. make platform\u00b6\nBuild the python\nprogram, but don\u2019t build the standard library\nextension modules. This generates a file named platform\nwhich\ncontains a single line describing the details of the build platform,\ne.g., macosx-14.3-arm64-3.12\nor linux-x86_64-3.13\n.\n3.4.3.3. 
make profile-opt\u00b6\nBuild Python using profile-guided optimization (PGO). You can use the\nconfigure --enable-optimizations\noption to make this the\ndefault target of the make\ncommand (make all\nor just\nmake\n).\n3.4.3.4. make clean\u00b6\nRemove built files.\n3.4.3.5. make distclean\u00b6\nIn addition to the work done by make clean\n, remove files\ncreated by the configure script. configure\nwill have to be run\nbefore building again. [6]\n3.4.3.6. make install\u00b6\nBuild the all\ntarget and install Python.\n3.4.3.7. make test\u00b6\nBuild the all\ntarget and run the Python test suite with the\n--fast-ci\noption without GUI tests. Variables:\nTESTOPTS\n: additional regrtest command-line options.TESTPYTHONOPTS\n: additional Python command-line options.TESTTIMEOUT\n: timeout in seconds (default: 10 minutes).\n3.4.3.8. make ci\u00b6\nThis is similar to make test\n, but uses the -ugui\nto also run GUI tests.\nAdded in version 3.14.\n3.4.3.9. make buildbottest\u00b6\nThis is similar to make test\n, but uses the --slow-ci\noption and default timeout of 20 minutes, instead of --fast-ci\noption.\n3.4.3.10. make regen-all\u00b6\nRegenerate (almost) all generated files. These include (but are not\nlimited to) bytecode cases, and parser generator file.\nmake regen-stdlib-module-names\nand autoconf\nmust be run\nseparately for the remaining generated files.\n3.4.4. 
C extensions\u00b6\nSome C extensions are built as built-in modules, like the sys\nmodule.\nThey are built with the Py_BUILD_CORE_BUILTIN\nmacro defined.\nBuilt-in modules have no __file__\nattribute:\n>>> import sys\n>>> sys\n<module 'sys' (built-in)>\n>>> sys.__file__\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nAttributeError: module 'sys' has no attribute '__file__'\nOther C extensions are built as dynamic libraries, like the _asyncio\nmodule.\nThey are built with the Py_BUILD_CORE_MODULE\nmacro defined.\nExample on Linux x86-64:\n>>> import _asyncio\n>>> _asyncio\n<module '_asyncio' from '/usr/lib64/python3.9/lib-dynload/_asyncio.cpython-39-x86_64-linux-gnu.so'>\n>>> _asyncio.__file__\n'/usr/lib64/python3.9/lib-dynload/_asyncio.cpython-39-x86_64-linux-gnu.so'\nModules/Setup\nis used to generate Makefile targets to build C extensions.\nAt the beginning of the file, C extensions are built as built-in modules.\nExtensions defined after the *shared*\nmarker are built as dynamic libraries.\nThe PyAPI_FUNC()\n, PyAPI_DATA()\nand\nPyMODINIT_FUNC\nmacros of Include/exports.h\nare defined\ndifferently depending on whether the Py_BUILD_CORE_MODULE\nmacro is defined:\nUse\nPy_EXPORTED_SYMBOL\nif the Py_BUILD_CORE_MODULE\nmacro is defined;\nUse\nPy_IMPORTED_SYMBOL\notherwise.\nIf the Py_BUILD_CORE_BUILTIN\nmacro is used by mistake on a C extension\nbuilt as a shared library, its PyInit_xxx()\nfunction is not exported,\ncausing an ImportError\non import.\n3.5. Compiler and linker flags\u00b6\nOptions set by the ./configure\nscript and environment variables and used by\nMakefile\n.\n3.5.1. 
Preprocessor flags\u00b6\n- CONFIGURE_CPPFLAGS\u00b6\nValue of\nCPPFLAGS\nvariable passed to the./configure\nscript.Added in version 3.6.\n- CPPFLAGS\u00b6\n(Objective) C/C++ preprocessor flags, e.g.\n-Iinclude_dir\nif you have headers in a nonstandard directory include_dir.Both\nCPPFLAGS\nandLDFLAGS\nneed to contain the shell\u2019s value to be able to build extension modules using the directories specified in the environment variables.\n- BASECPPFLAGS\u00b6\nAdded in version 3.4.\n- PY_CPPFLAGS\u00b6\nExtra preprocessor flags added for building the interpreter object files.\nDefault:\n$(BASECPPFLAGS) -I. -I$(srcdir)/Include $(CONFIGURE_CPPFLAGS) $(CPPFLAGS)\n.Added in version 3.2.\n3.5.2. Compiler flags\u00b6\n- CC\u00b6\nC compiler command.\nExample:\ngcc -pthread\n.\n- CXX\u00b6\nC++ compiler command.\nExample:\ng++ -pthread\n.\n- CFLAGS\u00b6\nC compiler flags.\n- CFLAGS_NODIST\u00b6\nCFLAGS_NODIST\nis used for building the interpreter and stdlib C extensions. Use it when a compiler flag should not be part ofCFLAGS\nonce Python is installed (gh-65320).In particular,\nCFLAGS\nshould not contain:the compiler flag\n-I\n(for setting the search path for include files). The-I\nflags are processed from left to right, and any flags inCFLAGS\nwould take precedence over user- and package-supplied-I\nflags.hardening flags such as\n-Werror\nbecause distributions cannot control whether packages installed by users conform to such heightened standards.\nAdded in version 3.5.\n- COMPILEALL_OPTS\u00b6\nOptions passed to the\ncompileall\ncommand line when building PYC files inmake install\n. 
Default:-j0\n.Added in version 3.12.\n- EXTRA_CFLAGS\u00b6\nExtra C compiler flags.\n- CONFIGURE_CFLAGS_NODIST\u00b6\nValue of\nCFLAGS_NODIST\nvariable passed to the./configure\nscript.Added in version 3.5.\n- BASECFLAGS\u00b6\nBase compiler flags.\n- OPT\u00b6\nOptimization flags.\n- CFLAGS_ALIASING\u00b6\nStrict or non-strict aliasing flags used to compile\nPython/dtoa.c\n.Added in version 3.7.\n- CCSHARED\u00b6\nCompiler flags used to build a shared library.\nFor example,\n-fPIC\nis used on Linux and on BSD.\n- CFLAGSFORSHARED\u00b6\nExtra C flags added for building the interpreter object files.\nDefault:\n$(CCSHARED)\nwhen--enable-shared\nis used, or an empty string otherwise.\n- PY_CFLAGS\u00b6\nDefault:\n$(BASECFLAGS) $(OPT) $(CONFIGURE_CFLAGS) $(CFLAGS) $(EXTRA_CFLAGS)\n.\n- PY_CFLAGS_NODIST\u00b6\nDefault:\n$(CONFIGURE_CFLAGS_NODIST) $(CFLAGS_NODIST) -I$(srcdir)/Include/internal\n.Added in version 3.5.\n- PY_STDMODULE_CFLAGS\u00b6\nC flags used for building the interpreter object files.\nDefault:\n$(PY_CFLAGS) $(PY_CFLAGS_NODIST) $(PY_CPPFLAGS) $(CFLAGSFORSHARED)\n.Added in version 3.7.\n- PY_CORE_CFLAGS\u00b6\nDefault:\n$(PY_STDMODULE_CFLAGS) -DPy_BUILD_CORE\n.Added in version 3.2.\n- PY_BUILTIN_MODULE_CFLAGS\u00b6\nCompiler flags to build a standard library extension module as a built-in module, like the\nposix\nmodule.Default:\n$(PY_STDMODULE_CFLAGS) -DPy_BUILD_CORE_BUILTIN\n.Added in version 3.8.\n- PURIFY\u00b6\nPurify command. Purify is a memory debugger program.\nDefault: empty string (not used).\n3.5.3. Linker flags\u00b6\n- LINKCC\u00b6\nLinker command used to build programs like\npython\nand_testembed\n.Default:\n$(PURIFY) $(CC)\n.\n- CONFIGURE_LDFLAGS\u00b6\nValue of\nLDFLAGS\nvariable passed to the./configure\nscript.Avoid assigning\nCFLAGS\n,LDFLAGS\n, etc. 
so users can use them on the command line to append to these values without stomping the pre-set values.Added in version 3.2.\n- LDFLAGS_NODIST\u00b6\nLDFLAGS_NODIST\nis used in the same manner asCFLAGS_NODIST\n. Use it when a linker flag should not be part ofLDFLAGS\nonce Python is installed (gh-65320).In particular,\nLDFLAGS\nshould not contain:the compiler flag\n-L\n(for setting the search path for libraries). The-L\nflags are processed from left to right, and any flags inLDFLAGS\nwould take precedence over user- and package-supplied-L\nflags.\n- CONFIGURE_LDFLAGS_NODIST\u00b6\nValue of\nLDFLAGS_NODIST\nvariable passed to the./configure\nscript.Added in version 3.8.\n- LDFLAGS\u00b6\nLinker flags, e.g.\n-Llib_dir\nif you have libraries in a nonstandard directory lib_dir.Both\nCPPFLAGS\nandLDFLAGS\nneed to contain the shell\u2019s value to be able to build extension modules using the directories specified in the environment variables.\n- LIBS\u00b6\nLinker flags to pass libraries to the linker when linking the Python executable.\nExample:\n-lrt\n.\n- LDSHARED\u00b6\nCommand to build a shared library.\nDefault:\n@LDSHARED@ $(PY_LDFLAGS)\n.\n- BLDSHARED\u00b6\nCommand to build\nlibpython\nshared library.Default:\n@BLDSHARED@ $(PY_CORE_LDFLAGS)\n.\n- PY_LDFLAGS\u00b6\nDefault:\n$(CONFIGURE_LDFLAGS) $(LDFLAGS)\n.\n- PY_LDFLAGS_NODIST\u00b6\nDefault:\n$(CONFIGURE_LDFLAGS_NODIST) $(LDFLAGS_NODIST)\n.Added in version 3.8.\n- PY_CORE_LDFLAGS\u00b6\nLinker flags used for building the interpreter object files.\nAdded in version 3.8.\nFootnotes\ngit clean -fdx\nis an even more extreme way to \u201cclean\u201d your\ncheckout. It removes all files not known to Git.\nWhen bug hunting using git bisect\n, this is\nrecommended between probes\nto guarantee a completely clean build. 
Use with care, as it\nwill delete all files not checked into Git, including your\nnew, uncommitted work.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 9110} +{"url": "https://docs.python.org/3/download.html", "title": "Download Python 3.14 documentation", "content": "Download Python 3.14 documentation\nLast updated on: Feb 18, 2026 (17:01 UTC).\nDownload an archive containing all the documentation for this version of Python:\n| Format | Packed as .zip | Packed as .tar.bz2 |\n|---|---|---|\n| HTML | Download | Download |\n| Plain text | Download | Download |\n| Texinfo | Download | Download |\n| EPUB | Download |\nWe no longer provide pre-built PDFs of the documentation.\nTo build a PDF archive, follow the instructions in the\nDeveloper's Guide\nand run make dist-pdf\nin the Doc/\ndirectory of a copy of the CPython repository.\nSee the directory listing for file sizes.\nProblems\nOpen an issue if you have comments or suggestions for the Python documentation.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 171} +{"url": "https://docs.python.org/3/tutorial/modules.html", "title": "Modules", "content": "6. Modules\u00b6\nIf you quit from the Python interpreter and enter it again, the definitions you have made (functions and variables) are lost. Therefore, if you want to write a somewhat longer program, you are better off using a text editor to prepare the input for the interpreter and running it with that file as input instead. This is known as creating a script. As your program gets longer, you may want to split it into several files for easier maintenance. You may also want to use a handy function that you\u2019ve written in several programs without copying its definition into each program.\nTo support this, Python has a way to put definitions in a file and use them in a script or in an interactive instance of the interpreter. 
Such a file is called a module; definitions from a module can be imported into other modules or into the main module (the collection of variables that you have access to in a script executed at the top level and in calculator mode).\nA module is a file containing Python definitions and statements. The file name\nis the module name with the suffix .py\nappended. Within a module, the\nmodule\u2019s name (as a string) is available as the value of the global variable\n__name__\n. For instance, use your favorite text editor to create a file\ncalled fibo.py\nin the current directory with the following contents:\n# Fibonacci numbers module\ndef fib(n):\n    \"\"\"Write Fibonacci series up to n.\"\"\"\n    a, b = 0, 1\n    while a < n:\n        print(a, end=' ')\n        a, b = b, a+b\n    print()\n\ndef fib2(n):\n    \"\"\"Return Fibonacci series up to n.\"\"\"\n    result = []\n    a, b = 0, 1\n    while a < n:\n        result.append(a)\n        a, b = b, a+b\n    return result\nNow enter the Python interpreter and import this module with the following command:\n>>> import fibo\nThis does not add the names of the functions defined in fibo\ndirectly to\nthe current namespace (see Python Scopes and Namespaces for more details);\nit only adds the module name fibo\nthere. Using\nthe module name you can access the functions:\n>>> fibo.fib(1000)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987\n>>> fibo.fib2(100)\n[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]\n>>> fibo.__name__\n'fibo'\nIf you intend to use a function often you can assign it to a local name:\n>>> fib = fibo.fib\n>>> fib(500)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377\n6.1. More on Modules\u00b6\nA module can contain executable statements as well as function definitions. These statements are intended to initialize the module. They are executed only the first time the module name is encountered in an import statement. 
[1] (They are also run if the file is executed as a script.)\nEach module has its own private namespace, which is used as the global namespace\nby all functions defined in the module. Thus, the author of a module can\nuse global variables in the module without worrying about accidental clashes\nwith a user\u2019s global variables. On the other hand, if you know what you are\ndoing you can touch a module\u2019s global variables with the same notation used to\nrefer to its functions, modname.itemname\n.\nModules can import other modules. It is customary but not required to place all\nimport\nstatements at the beginning of a module (or script, for that\nmatter). The imported module names, if placed at the top level of a module\n(outside any functions or classes), are added to the module\u2019s global namespace.\nThere is a variant of the import\nstatement that imports names from a\nmodule directly into the importing module\u2019s namespace. For example:\n>>> from fibo import fib, fib2\n>>> fib(500)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377\nThis does not introduce the module name from which the imports are taken in the\nlocal namespace (so in the example, fibo\nis not defined).\nThere is even a variant to import all names that a module defines:\n>>> from fibo import *\n>>> fib(500)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377\nThis imports all names except those beginning with an underscore (_\n).\nIn most cases Python programmers do not use this facility since it introduces\nan unknown set of names into the interpreter, possibly hiding some things\nyou have already defined.\nNote that in general the practice of importing *\nfrom a module or package is\nfrowned upon, since it often causes poorly readable code. 
However, it is okay to\nuse it to save typing in interactive sessions.\nIf the module name is followed by as\n, then the name\nfollowing as\nis bound directly to the imported module.\n>>> import fibo as fib\n>>> fib.fib(500)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377\nThis is effectively importing the module in the same way that import fibo\nwill do, with the only difference of it being available as fib\n.\nIt can also be used when utilising from\nwith similar effects:\n>>> from fibo import fib as fibonacci\n>>> fibonacci(500)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377\nNote\nFor efficiency reasons, each module is only imported once per interpreter\nsession. Therefore, if you change your modules, you must restart the\ninterpreter \u2013 or, if it\u2019s just one module you want to test interactively,\nuse importlib.reload()\n, e.g. import importlib;\nimportlib.reload(modulename)\n.\n6.1.1. Executing modules as scripts\u00b6\nWhen you run a Python module with\npython fibo.py <arguments>\nthe code in the module will be executed, just as if you imported it, but with\nthe __name__\nset to \"__main__\"\n. That means that by adding this code at\nthe end of your module:\nif __name__ == \"__main__\":\n    import sys\n    fib(int(sys.argv[1]))\nyou can make the file usable as a script as well as an importable module, because the code that parses the command line only runs if the module is executed as the \u201cmain\u201d file:\n$ python fibo.py 50\n0 1 1 2 3 5 8 13 21 34\nIf the module is imported, the code is not run:\n>>> import fibo\n>>>\nThis is often used either to provide a convenient user interface to a module, or for testing purposes (running the module as a script executes a test suite).\n6.1.2. The Module Search Path\u00b6\nWhen a module named spam\nis imported, the interpreter first searches for\na built-in module with that name. These module names are listed in\nsys.builtin_module_names\n. 
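This first step of the search can be observed from a running interpreter; a minimal sketch (the fibo example module is the one created earlier in this chapter):

```python
import sys

# Modules compiled into the interpreter are found before sys.path is
# ever consulted; their names are listed in sys.builtin_module_names.
print('sys' in sys.builtin_module_names)    # True
print('fibo' in sys.builtin_module_names)   # False: fibo.py must be found on sys.path
```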
If not found, it then searches for a file\nnamed spam.py\nin a list of directories given by the variable\nsys.path\n. sys.path\nis initialized from these locations:\nThe directory containing the input script (or the current directory when no file is specified).\nPYTHONPATH\n(a list of directory names, with the same syntax as the shell variablePATH\n).The installation-dependent default (by convention including a\nsite-packages\ndirectory, handled by thesite\nmodule).\nMore details are at The initialization of the sys.path module search path.\nNote\nOn file systems which support symlinks, the directory containing the input script is calculated after the symlink is followed. In other words the directory containing the symlink is not added to the module search path.\nAfter initialization, Python programs can modify sys.path\n. The\ndirectory containing the script being run is placed at the beginning of the\nsearch path, ahead of the standard library path. This means that scripts in that\ndirectory will be loaded instead of modules of the same name in the library\ndirectory. This is an error unless the replacement is intended. See section\nStandard Modules for more information.\n6.1.3. \u201cCompiled\u201d Python files\u00b6\nTo speed up loading modules, Python caches the compiled version of each module\nin the __pycache__\ndirectory under the name module.version.pyc\n,\nwhere the version encodes the format of the compiled file; it generally contains\nthe Python version number. For example, in CPython release 3.3 the compiled\nversion of spam.py would be cached as __pycache__/spam.cpython-33.pyc\n. This\nnaming convention allows compiled modules from different releases and different\nversions of Python to coexist.\nPython checks the modification date of the source against the compiled version to see if it\u2019s out of date and needs to be recompiled. This is a completely automatic process. 
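The cached file name used for a given source file can be computed without importing anything, using importlib.util.cache_from_source; the exact version tag in the output depends on the interpreter running the code:

```python
import importlib.util

# Compute the __pycache__ path the interpreter would use for spam.py.
# The tag (e.g. 'cpython-313') varies with the running interpreter.
print(importlib.util.cache_from_source('spam.py'))

# An explicit optimization level adds an 'opt-' tag to the file name.
print(importlib.util.cache_from_source('spam.py', optimization=2))
```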
Also, the compiled modules are platform-independent, so the same library can be shared among systems with different architectures.\nPython does not check the cache in two circumstances. First, it always recompiles and does not store the result for the module that\u2019s loaded directly from the command line. Second, it does not check the cache if there is no source module. To support a non-source (compiled only) distribution, the compiled module must be in the source directory, and there must not be a source module.\nSome tips for experts:\nYou can use the\n-O\nor-OO\nswitches on the Python command to reduce the size of a compiled module. The-O\nswitch removes assert statements, the-OO\nswitch removes both assert statements and __doc__ strings. Since some programs may rely on having these available, you should only use this option if you know what you\u2019re doing. \u201cOptimized\u201d modules have anopt-\ntag and are usually smaller. Future releases may change the effects of optimization.A program doesn\u2019t run any faster when it is read from a\n.pyc\nfile than when it is read from a.py\nfile; the only thing that\u2019s faster about.pyc\nfiles is the speed with which they are loaded.The module\ncompileall\ncan create .pyc files for all modules in a directory.There is more detail on this process, including a flow chart of the decisions, in PEP 3147.\n6.2. Standard Modules\u00b6\nPython comes with a library of standard modules, described in a separate\ndocument, the Python Library Reference (\u201cLibrary Reference\u201d hereafter). Some\nmodules are built into the interpreter; these provide access to operations that\nare not part of the core of the language but are nevertheless built in, either\nfor efficiency or to provide access to operating system primitives such as\nsystem calls. The set of such modules is a configuration option which also\ndepends on the underlying platform. For example, the winreg\nmodule is only\nprovided on Windows systems. 
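Whether a given standard module is compiled into the interpreter, loaded from a file, or simply absent on the current platform can be checked with importlib.util.find_spec; a small sketch (the winreg result shown assumes a non-Windows system):

```python
import importlib.util

# Built-in modules report the special origin 'built-in'.
print(importlib.util.find_spec('sys').origin)   # built-in

# Pure-Python standard modules report the source file they load from.
print(importlib.util.find_spec('os').origin)

# Platform-only modules such as winreg yield None where unavailable.
print(importlib.util.find_spec('winreg'))       # None except on Windows
```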
One particular module deserves some attention:\nsys\n, which is built into every Python interpreter. The variables\nsys.ps1\nand sys.ps2\ndefine the strings used as primary and secondary\nprompts:\n>>> import sys\n>>> sys.ps1\n'>>> '\n>>> sys.ps2\n'... '\n>>> sys.ps1 = 'C> '\nC> print('Yuck!')\nYuck!\nC>\nThese two variables are only defined if the interpreter is in interactive mode.\nThe variable sys.path\nis a list of strings that determines the interpreter\u2019s\nsearch path for modules. It is initialized to a default path taken from the\nenvironment variable PYTHONPATH\n, or from a built-in default if\nPYTHONPATH\nis not set. You can modify it using standard list\noperations:\n>>> import sys\n>>> sys.path.append('/ufs/guido/lib/python')\n6.3. The dir()\nFunction\u00b6\nThe built-in function dir()\nis used to find out which names a module\ndefines. It returns a sorted list of strings:\n>>> import fibo, sys\n>>> dir(fibo)\n['__name__', 'fib', 'fib2']\n>>> dir(sys)\n['__breakpointhook__', '__displayhook__', '__doc__', '__excepthook__',\n'__interactivehook__', '__loader__', '__name__', '__package__', '__spec__',\n'__stderr__', '__stdin__', '__stdout__', '__unraisablehook__',\n'_clear_type_cache', '_current_frames', '_debugmallocstats', '_framework',\n'_getframe', '_git', '_home', '_xoptions', 'abiflags', 'addaudithook',\n'api_version', 'argv', 'audit', 'base_exec_prefix', 'base_prefix',\n'breakpointhook', 'builtin_module_names', 'byteorder', 'call_tracing',\n'callstats', 'copyright', 'displayhook', 'dont_write_bytecode', 'exc_info',\n'excepthook', 'exec_prefix', 'executable', 'exit', 'flags', 'float_info',\n'float_repr_style', 'get_asyncgen_hooks', 'get_coroutine_origin_tracking_depth',\n'getallocatedblocks', 'getdefaultencoding', 'getdlopenflags',\n'getfilesystemencodeerrors', 'getfilesystemencoding', 'getprofile',\n'getrecursionlimit', 'getrefcount', 'getsizeof', 'getswitchinterval',\n'gettrace', 'hash_info', 'hexversion', 'implementation', 
'int_info',\n'intern', 'is_finalizing', 'last_traceback', 'last_type', 'last_value',\n'maxsize', 'maxunicode', 'meta_path', 'modules', 'path', 'path_hooks',\n'path_importer_cache', 'platform', 'prefix', 'ps1', 'ps2', 'pycache_prefix',\n'set_asyncgen_hooks', 'set_coroutine_origin_tracking_depth', 'setdlopenflags',\n'setprofile', 'setrecursionlimit', 'setswitchinterval', 'settrace', 'stderr',\n'stdin', 'stdout', 'thread_info', 'unraisablehook', 'version', 'version_info',\n'warnoptions']\nWithout arguments, dir()\nlists the names you have defined currently:\n>>> a = [1, 2, 3, 4, 5]\n>>> import fibo\n>>> fib = fibo.fib\n>>> dir()\n['__builtins__', '__name__', 'a', 'fib', 'fibo', 'sys']\nNote that it lists all types of names: variables, modules, functions, etc.\ndir()\ndoes not list the names of built-in functions and variables. If you\nwant a list of those, they are defined in the standard module\nbuiltins\n:\n>>> import builtins\n>>> dir(builtins)\n['ArithmeticError', 'AssertionError', 'AttributeError', 'BaseException',\n'BlockingIOError', 'BrokenPipeError', 'BufferError', 'BytesWarning',\n'ChildProcessError', 'ConnectionAbortedError', 'ConnectionError',\n'ConnectionRefusedError', 'ConnectionResetError', 'DeprecationWarning',\n'EOFError', 'Ellipsis', 'EnvironmentError', 'Exception', 'False',\n'FileExistsError', 'FileNotFoundError', 'FloatingPointError',\n'FutureWarning', 'GeneratorExit', 'IOError', 'ImportError',\n'ImportWarning', 'IndentationError', 'IndexError', 'InterruptedError',\n'IsADirectoryError', 'KeyError', 'KeyboardInterrupt', 'LookupError',\n'MemoryError', 'NameError', 'None', 'NotADirectoryError', 'NotImplemented',\n'NotImplementedError', 'OSError', 'OverflowError',\n'PendingDeprecationWarning', 'PermissionError', 'ProcessLookupError',\n'ReferenceError', 'ResourceWarning', 'RuntimeError', 'RuntimeWarning',\n'StopIteration', 'SyntaxError', 'SyntaxWarning', 'SystemError',\n'SystemExit', 'TabError', 'TimeoutError', 'True', 'TypeError',\n'UnboundLocalError', 
'UnicodeDecodeError', 'UnicodeEncodeError',\n'UnicodeError', 'UnicodeTranslateError', 'UnicodeWarning', 'UserWarning',\n'ValueError', 'Warning', 'ZeroDivisionError', '_', '__build_class__',\n'__debug__', '__doc__', '__import__', '__name__', '__package__', 'abs',\n'all', 'any', 'ascii', 'bin', 'bool', 'bytearray', 'bytes', 'callable',\n'chr', 'classmethod', 'compile', 'complex', 'copyright', 'credits',\n'delattr', 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'exec', 'exit',\n'filter', 'float', 'format', 'frozenset', 'getattr', 'globals', 'hasattr',\n'hash', 'help', 'hex', 'id', 'input', 'int', 'isinstance', 'issubclass',\n'iter', 'len', 'license', 'list', 'locals', 'map', 'max', 'memoryview',\n'min', 'next', 'object', 'oct', 'open', 'ord', 'pow', 'print', 'property',\n'quit', 'range', 'repr', 'reversed', 'round', 'set', 'setattr', 'slice',\n'sorted', 'staticmethod', 'str', 'sum', 'super', 'tuple', 'type', 'vars',\n'zip']\n6.4. Packages\u00b6\nPackages are a way of structuring Python\u2019s module namespace by using \u201cdotted\nmodule names\u201d. For example, the module name A.B\ndesignates a submodule\nnamed B\nin a package named A\n. Just like the use of modules saves the\nauthors of different modules from having to worry about each other\u2019s global\nvariable names, the use of dotted module names saves the authors of multi-module\npackages like NumPy or Pillow from having to worry about\neach other\u2019s module names.\nSuppose you want to design a collection of modules (a \u201cpackage\u201d) for the uniform\nhandling of sound files and sound data. 
There are many different sound file formats (usually recognized by their extension, for example: .wav, .aiff, .au), so you may need to create and maintain a growing collection of modules for the conversion between the various file formats. There are also many different operations you might want to perform on sound data (such as mixing, adding echo, applying an equalizer function, creating an artificial stereo effect), so in addition you will be writing a never-ending stream of modules to perform these operations. Here\u2019s a possible structure for your package (expressed in terms of a hierarchical filesystem):\nsound/                          Top-level package\n      __init__.py               Initialize the sound package\n      formats/                  Subpackage for file format conversions\n              __init__.py\n              wavread.py\n              wavwrite.py\n              aiffread.py\n              aiffwrite.py\n              auread.py\n              auwrite.py\n              ...\n      effects/                  Subpackage for sound effects\n              __init__.py\n              echo.py\n              surround.py\n              reverse.py\n              ...\n      filters/                  Subpackage for filters\n              __init__.py\n              equalizer.py\n              vocoder.py\n              karaoke.py\n              ...\nWhen importing the package, Python searches through the directories on sys.path looking for the package subdirectory.\nThe __init__.py files are required to make Python treat directories containing the file as packages (unless using a namespace package, a relatively advanced feature). This prevents directories with a common name, such as string, from unintentionally hiding valid modules that occur later on the module search path. In the simplest case, __init__.py can just be an empty file, but it can also execute initialization code for the package or set the __all__ variable, described later.\nUsers of the package can import individual modules from the package, for example:\nimport sound.effects.echo\nThis loads the submodule sound.effects.echo. 
It must be referenced with\nits full name.\nsound.effects.echo.echofilter(input, output, delay=0.7, atten=4)\nAn alternative way of importing the submodule is:\nfrom sound.effects import echo\nThis also loads the submodule echo\n, and makes it available without its\npackage prefix, so it can be used as follows:\necho.echofilter(input, output, delay=0.7, atten=4)\nYet another variation is to import the desired function or variable directly:\nfrom sound.effects.echo import echofilter\nAgain, this loads the submodule echo\n, but this makes its function\nechofilter()\ndirectly available:\nechofilter(input, output, delay=0.7, atten=4)\nNote that when using from package import item\n, the item can be either a\nsubmodule (or subpackage) of the package, or some other name defined in the\npackage, like a function, class or variable. The import\nstatement first\ntests whether the item is defined in the package; if not, it assumes it is a\nmodule and attempts to load it. If it fails to find it, an ImportError\nexception is raised.\nContrarily, when using syntax like import item.subitem.subsubitem\n, each item\nexcept for the last must be a package; the last item can be a module or a\npackage but can\u2019t be a class or function or variable defined in the previous\nitem.\n6.4.1. Importing * From a Package\u00b6\nNow what happens when the user writes from sound.effects import *\n? Ideally,\none would hope that this somehow goes out to the filesystem, finds which\nsubmodules are present in the package, and imports them all. This could take a\nlong time and importing sub-modules might have unwanted side-effects that should\nonly happen when the sub-module is explicitly imported.\nThe only solution is for the package author to provide an explicit index of the\npackage. 
The import statement uses the following convention: if a package\u2019s __init__.py code defines a list named __all__, it is taken to be the list of module names that should be imported when from package import * is encountered. It is up to the package author to keep this list up-to-date when a new version of the package is released. Package authors may also decide not to support it, if they don\u2019t see a use for importing * from their package. For example, the file sound/effects/__init__.py could contain the following code:\n__all__ = [\"echo\", \"surround\", \"reverse\"]\nThis would mean that from sound.effects import * would import the three named submodules of the sound.effects package.\nBe aware that submodules might become shadowed by locally defined names. For example, if you added a reverse function to the sound/effects/__init__.py file, the from sound.effects import * would only import the two submodules echo and surround, but not the reverse submodule, because it is shadowed by the locally defined reverse function:\n__all__ = [\n    \"echo\",      # refers to the 'echo.py' file\n    \"surround\",  # refers to the 'surround.py' file\n    \"reverse\",   # !!! refers to the 'reverse' function now !!!\n]\n\ndef reverse(msg: str):  # <-- this name shadows the 'reverse.py' submodule\n    return msg[::-1]    # in the case of a 'from sound.effects import *'\nIf __all__ is not defined, the statement from sound.effects import * does not import all submodules from the package sound.effects into the current namespace; it only ensures that the package sound.effects has been imported (possibly running any initialization code in __init__.py) and then imports whatever names are defined in the package. This includes any names defined (and submodules explicitly loaded) by __init__.py. It also includes any submodules of the package that were explicitly loaded by previous import statements. 
Consider this code:\nimport sound.effects.echo\nimport sound.effects.surround\nfrom sound.effects import *\nIn this example, the echo and surround modules are imported in the current namespace because they are defined in the sound.effects package when the from...import statement is executed. (This also works when __all__ is defined.)\nAlthough certain modules are designed to export only names that follow certain patterns when you use import *, it is still considered bad practice in production code.\nRemember, there is nothing wrong with using from package import specific_submodule! In fact, this is the recommended notation unless the importing module needs to use submodules with the same name from different packages.\n6.4.2. Intra-package References\u00b6\nWhen packages are structured into subpackages (as with the sound package in the example), you can use absolute imports to refer to submodules of sibling packages. For example, if the module sound.filters.vocoder needs to use the echo module in the sound.effects package, it can use from sound.effects import echo.\nYou can also write relative imports, with the from module import name form of import statement. These imports use leading dots to indicate the current and parent packages involved in the relative import. From the surround module for example, you might use:\nfrom . import echo\nfrom .. import formats\nfrom ..filters import equalizer\nNote that relative imports are based on the name of the current module\u2019s package. Since the main module does not have a package, modules intended for use as the main module of a Python application must always use absolute imports.\n6.4.3. Packages in Multiple Directories\u00b6\nPackages support one more special attribute, __path__. This is initialized to be a sequence of strings containing the name of the directory holding the package\u2019s __init__.py before the code in that file is executed. 
This variable can be modified; doing so affects future searches for modules and subpackages contained in the package.\nWhile this feature is not often needed, it can be used to extend the set of modules found in a package.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 5661}
+{"url": "https://docs.python.org/3/tutorial/datastructures.html", "title": "Data Structures", "content": "5. Data Structures\u00b6\nThis chapter describes some things you\u2019ve learned about already in more detail, and adds some new things as well.\n5.1. More on Lists\u00b6\nThe list data type has some more methods. Here are all of the methods of list objects:\n- list.append(x)\nAdd an item to the end of the list. 
Similar to\na[len(a):] = [x]\n.\n- list.extend(iterable)\nExtend the list by appending all the items from the iterable. Similar to\na[len(a):] = iterable\n.\n- list.insert(i, x)\nInsert an item at a given position. The first argument is the index of the element before which to insert, so\na.insert(0, x)\ninserts at the front of the list, anda.insert(len(a), x)\nis equivalent toa.append(x)\n.\n- list.remove(x)\nRemove the first item from the list whose value is equal to x. It raises a\nValueError\nif there is no such item.\n- list.pop([i])\nRemove the item at the given position in the list, and return it. If no index is specified,\na.pop()\nremoves and returns the last item in the list. It raises anIndexError\nif the list is empty or the index is outside the list range.\n- list.clear()\nRemove all items from the list. Similar to\ndel a[:]\n.\n- list.index(x[, start[, end]])\nReturn zero-based index of the first occurrence of x in the list. Raises a\nValueError\nif there is no such item.The optional arguments start and end are interpreted as in the slice notation and are used to limit the search to a particular subsequence of the list. The returned index is computed relative to the beginning of the full sequence rather than the start argument.\n- list.count(x)\nReturn the number of times x appears in the list.\n- list.sort(*, key=None, reverse=False)\nSort the items of the list in place (the arguments can be used for sort customization, see\nsorted()\nfor their explanation).\n- list.reverse()\nReverse the elements of the list in place.\n- list.copy()\nReturn a shallow copy of the list. 
Similar to\na[:]\n.\nAn example that uses most of the list methods:\n>>> fruits = ['orange', 'apple', 'pear', 'banana', 'kiwi', 'apple', 'banana']\n>>> fruits.count('apple')\n2\n>>> fruits.count('tangerine')\n0\n>>> fruits.index('banana')\n3\n>>> fruits.index('banana', 4) # Find next banana starting at position 4\n6\n>>> fruits.reverse()\n>>> fruits\n['banana', 'apple', 'kiwi', 'banana', 'pear', 'apple', 'orange']\n>>> fruits.append('grape')\n>>> fruits\n['banana', 'apple', 'kiwi', 'banana', 'pear', 'apple', 'orange', 'grape']\n>>> fruits.sort()\n>>> fruits\n['apple', 'apple', 'banana', 'banana', 'grape', 'kiwi', 'orange', 'pear']\n>>> fruits.pop()\n'pear'\nYou might have noticed that methods like insert\n, remove\nor sort\nthat\nonly modify the list have no return value printed \u2013 they return the default\nNone\n. [1] This is a design principle for all mutable data structures in\nPython.\nAnother thing you might notice is that not all data can be sorted or\ncompared. For instance, [None, 'hello', 10]\ndoesn\u2019t sort because\nintegers can\u2019t be compared to strings and None\ncan\u2019t be compared to\nother types. Also, there are some types that don\u2019t have a defined\nordering relation. For example, 3+4j < 5+7j\nisn\u2019t a valid\ncomparison.\n5.1.1. Using Lists as Stacks\u00b6\nThe list methods make it very easy to use a list as a stack, where the last\nelement added is the first element retrieved (\u201clast-in, first-out\u201d). To add an\nitem to the top of the stack, use append()\n. To retrieve an item from the\ntop of the stack, use pop()\nwithout an explicit index. For example:\n>>> stack = [3, 4, 5]\n>>> stack.append(6)\n>>> stack.append(7)\n>>> stack\n[3, 4, 5, 6, 7]\n>>> stack.pop()\n7\n>>> stack\n[3, 4, 5, 6]\n>>> stack.pop()\n6\n>>> stack.pop()\n5\n>>> stack\n[3, 4]\n5.1.2. 
Using Lists as Queues\u00b6\nIt is also possible to use a list as a queue, where the first element added is the first element retrieved (\u201cfirst-in, first-out\u201d); however, lists are not efficient for this purpose. While appends and pops from the end of list are fast, doing inserts or pops from the beginning of a list is slow (because all of the other elements have to be shifted by one).\nTo implement a queue, use collections.deque\nwhich was designed to\nhave fast appends and pops from both ends. For example:\n>>> from collections import deque\n>>> queue = deque([\"Eric\", \"John\", \"Michael\"])\n>>> queue.append(\"Terry\") # Terry arrives\n>>> queue.append(\"Graham\") # Graham arrives\n>>> queue.popleft() # The first to arrive now leaves\n'Eric'\n>>> queue.popleft() # The second to arrive now leaves\n'John'\n>>> queue # Remaining queue in order of arrival\ndeque(['Michael', 'Terry', 'Graham'])\n5.1.3. List Comprehensions\u00b6\nList comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.\nFor example, assume we want to create a list of squares, like:\n>>> squares = []\n>>> for x in range(10):\n... squares.append(x**2)\n...\n>>> squares\n[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]\nNote that this creates (or overwrites) a variable named x\nthat still exists\nafter the loop completes. We can calculate the list of squares without any\nside effects using:\nsquares = list(map(lambda x: x**2, range(10)))\nor, equivalently:\nsquares = [x**2 for x in range(10)]\nwhich is more concise and readable.\nA list comprehension consists of brackets containing an expression followed\nby a for\nclause, then zero or more for\nor if\nclauses. 
The result will be a new list resulting from evaluating the expression in the context of the for and if clauses which follow it.\nFor example, this listcomp combines the elements of two lists if they are not equal:\n>>> [(x, y) for x in [1,2,3] for y in [3,1,4] if x != y]\n[(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]\nand it\u2019s equivalent to:\n>>> combs = []\n>>> for x in [1,2,3]:\n...     for y in [3,1,4]:\n...         if x != y:\n...             combs.append((x, y))\n...\n>>> combs\n[(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]\nNote how the order of the for and if statements is the same in both these snippets.\nIf the expression is a tuple (e.g. the (x, y) in the previous example), it must be parenthesized.\n>>> vec = [-4, -2, 0, 2, 4]\n>>> # create a new list with the values doubled\n>>> [x*2 for x in vec]\n[-8, -4, 0, 4, 8]\n>>> # filter the list to exclude negative numbers\n>>> [x for x in vec if x >= 0]\n[0, 2, 4]\n>>> # apply a function to all the elements\n>>> [abs(x) for x in vec]\n[4, 2, 0, 2, 4]\n>>> # call a method on each element\n>>> freshfruit = [' banana', ' loganberry ', 'passion fruit ']\n>>> [weapon.strip() for weapon in freshfruit]\n['banana', 'loganberry', 'passion fruit']\n>>> # create a list of 2-tuples like (number, square)\n>>> [(x, x**2) for x in range(6)]\n[(0, 0), (1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]\n>>> # the tuple must be parenthesized, otherwise an error is raised\n>>> [x, x**2 for x in range(6)]\nFile \"<stdin>\", line 1\n[x, x**2 for x in range(6)]\n^^^^^^^\nSyntaxError: did you forget parentheses around the comprehension target?\n>>> # flatten a list using a listcomp with two 'for'\n>>> vec = [[1,2,3], [4,5,6], [7,8,9]]\n>>> [num for elem in vec for num in elem]\n[1, 2, 3, 4, 5, 6, 7, 8, 9]\nList comprehensions can contain complex expressions and nested functions:\n>>> from math import pi\n>>> [str(round(pi, i)) for i in range(1, 6)]\n['3.1', '3.14', '3.142', '3.1416', '3.14159']\n5.1.4. 
Nested List Comprehensions\u00b6\nThe initial expression in a list comprehension can be any arbitrary expression, including another list comprehension.\nConsider the following example of a 3x4 matrix implemented as a list of 3 lists of length 4:\n>>> matrix = [\n... [1, 2, 3, 4],\n... [5, 6, 7, 8],\n... [9, 10, 11, 12],\n... ]\nThe following list comprehension will transpose rows and columns:\n>>> [[row[i] for row in matrix] for i in range(4)]\n[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]\nAs we saw in the previous section, the inner list comprehension is evaluated in\nthe context of the for\nthat follows it, so this example is\nequivalent to:\n>>> transposed = []\n>>> for i in range(4):\n... transposed.append([row[i] for row in matrix])\n...\n>>> transposed\n[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]\nwhich, in turn, is the same as:\n>>> transposed = []\n>>> for i in range(4):\n... # the following 3 lines implement the nested listcomp\n... transposed_row = []\n... for row in matrix:\n... transposed_row.append(row[i])\n... transposed.append(transposed_row)\n...\n>>> transposed\n[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]\nIn the real world, you should prefer built-in functions to complex flow statements.\nThe zip()\nfunction would do a great job for this use case:\n>>> list(zip(*matrix))\n[(1, 5, 9), (2, 6, 10), (3, 7, 11), (4, 8, 12)]\nSee Unpacking Argument Lists for details on the asterisk in this line.\n5.2. The del\nstatement\u00b6\nThere is a way to remove an item from a list given its index instead of its\nvalue: the del\nstatement. This differs from the pop()\nmethod\nwhich returns a value. The del\nstatement can also be used to remove\nslices from a list or clear the entire list (which we did earlier by assignment\nof an empty list to the slice). 
For example:\n>>> a = [-1, 1, 66.25, 333, 333, 1234.5]\n>>> del a[0]\n>>> a\n[1, 66.25, 333, 333, 1234.5]\n>>> del a[2:4]\n>>> a\n[1, 66.25, 1234.5]\n>>> del a[:]\n>>> a\n[]\ndel can also be used to delete entire variables:\n>>> del a\nReferencing the name a hereafter is an error (at least until another value is assigned to it). We\u2019ll find other uses for del later.\n5.3. Tuples and Sequences\u00b6\nWe saw that lists and strings have many common properties, such as indexing and slicing operations. They are two examples of sequence data types (see Sequence Types \u2014 list, tuple, range). Since Python is an evolving language, other sequence data types may be added. There is also another standard sequence data type: the tuple.\nA tuple consists of a number of values separated by commas, for instance:\n>>> t = 12345, 54321, 'hello!'\n>>> t[0]\n12345\n>>> t\n(12345, 54321, 'hello!')\n>>> # Tuples may be nested:\n>>> u = t, (1, 2, 3, 4, 5)\n>>> u\n((12345, 54321, 'hello!'), (1, 2, 3, 4, 5))\n>>> # Tuples are immutable:\n>>> t[0] = 88888\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nTypeError: 'tuple' object does not support item assignment\n>>> # but they can contain mutable objects:\n>>> v = ([1, 2, 3], [3, 2, 1])\n>>> v\n([1, 2, 3], [3, 2, 1])\nAs you see, on output tuples are always enclosed in parentheses, so that nested tuples are interpreted correctly; they may be input with or without surrounding parentheses, although often parentheses are necessary anyway (if the tuple is part of a larger expression). 
It is not possible to assign to the individual items of a tuple, however it is possible to create tuples which contain mutable objects, such as lists.\nThough tuples may seem similar to lists, they are often used in different\nsituations and for different purposes.\nTuples are immutable, and usually contain a heterogeneous sequence of\nelements that are accessed via unpacking (see later in this section) or indexing\n(or even by attribute in the case of namedtuples\n).\nLists are mutable, and their elements are usually homogeneous and are\naccessed by iterating over the list.\nA special problem is the construction of tuples containing 0 or 1 items: the syntax has some extra quirks to accommodate these. Empty tuples are constructed by an empty pair of parentheses; a tuple with one item is constructed by following a value with a comma (it is not sufficient to enclose a single value in parentheses). Ugly, but effective. For example:\n>>> empty = ()\n>>> singleton = 'hello', # <-- note trailing comma\n>>> len(empty)\n0\n>>> len(singleton)\n1\n>>> singleton\n('hello',)\nThe statement t = 12345, 54321, 'hello!'\nis an example of tuple packing:\nthe values 12345\n, 54321\nand 'hello!'\nare packed together in a tuple.\nThe reverse operation is also possible:\n>>> x, y, z = t\nThis is called, appropriately enough, sequence unpacking and works for any sequence on the right-hand side. Sequence unpacking requires that there are as many variables on the left side of the equals sign as there are elements in the sequence. Note that multiple assignment is really just a combination of tuple packing and sequence unpacking.\n5.4. Sets\u00b6\nPython also includes a data type for sets. A set is an unordered collection with no duplicate elements. Basic uses include membership testing and eliminating duplicate entries. 
Set objects also support mathematical operations like union, intersection, difference, and symmetric difference.\nCurly braces or the set()\nfunction can be used to create sets. Note: to\ncreate an empty set you have to use set()\n, not {}\n; the latter creates an\nempty dictionary, a data structure that we discuss in the next section.\nHere is a brief demonstration:\n>>> basket = {'apple', 'orange', 'apple', 'pear', 'orange', 'banana'}\n>>> print(basket) # show that duplicates have been removed\n{'orange', 'banana', 'pear', 'apple'}\n>>> 'orange' in basket # fast membership testing\nTrue\n>>> 'crabgrass' in basket\nFalse\n>>> # Demonstrate set operations on unique letters from two words\n>>>\n>>> a = set('abracadabra')\n>>> b = set('alacazam')\n>>> a # unique letters in a\n{'a', 'r', 'b', 'c', 'd'}\n>>> a - b # letters in a but not in b\n{'r', 'd', 'b'}\n>>> a | b # letters in a or b or both\n{'a', 'c', 'r', 'd', 'b', 'm', 'z', 'l'}\n>>> a & b # letters in both a and b\n{'a', 'c'}\n>>> a ^ b # letters in a or b but not both\n{'r', 'd', 'b', 'm', 'z', 'l'}\nSimilarly to list comprehensions, set comprehensions are also supported:\n>>> a = {x for x in 'abracadabra' if x not in 'abc'}\n>>> a\n{'r', 'd'}\n5.5. Dictionaries\u00b6\nAnother useful data type built into Python is the dictionary (see\nMapping Types \u2014 dict). Dictionaries are sometimes found in other languages as\n\u201cassociative memories\u201d or \u201cassociative arrays\u201d. Unlike sequences, which are\nindexed by a range of numbers, dictionaries are indexed by keys, which can be\nany immutable type; strings and numbers can always be keys. 
Tuples can be used\nas keys if they contain only strings, numbers, or tuples; if a tuple contains\nany mutable object either directly or indirectly, it cannot be used as a key.\nYou can\u2019t use lists as keys, since lists can be modified in place using index\nassignments, slice assignments, or methods like append()\nand\nextend()\n.\nIt is best to think of a dictionary as a set of key: value pairs,\nwith the requirement that the keys are unique (within one dictionary). A pair of\nbraces creates an empty dictionary: {}\n. Placing a comma-separated list of\nkey:value pairs within the braces adds initial key:value pairs to the\ndictionary; this is also the way dictionaries are written on output.\nThe main operations on a dictionary are storing a value with some key and\nextracting the value given the key. It is also possible to delete a key:value\npair with del\n. If you store using a key that is already in use, the old\nvalue associated with that key is forgotten.\nExtracting a value for a non-existent key by subscripting (d[key]\n) raises a\nKeyError\n. To avoid getting this error when trying to access a possibly\nnon-existent key, use the get()\nmethod instead, which returns\nNone\n(or a specified default value) if the key is not in the dictionary.\nPerforming list(d)\non a dictionary returns a list of all the keys\nused in the dictionary, in insertion order (if you want it sorted, just use\nsorted(d)\ninstead). 
To check whether a single key is in the dictionary, use the in keyword.\nHere is a small example using a dictionary:\n>>> tel = {'jack': 4098, 'sape': 4139}\n>>> tel['guido'] = 4127\n>>> tel\n{'jack': 4098, 'sape': 4139, 'guido': 4127}\n>>> tel['jack']\n4098\n>>> tel['irv']\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nKeyError: 'irv'\n>>> print(tel.get('irv'))\nNone\n>>> del tel['sape']\n>>> tel['irv'] = 4127\n>>> tel\n{'jack': 4098, 'guido': 4127, 'irv': 4127}\n>>> list(tel)\n['jack', 'guido', 'irv']\n>>> sorted(tel)\n['guido', 'irv', 'jack']\n>>> 'guido' in tel\nTrue\n>>> 'jack' not in tel\nFalse\nThe dict() constructor builds dictionaries directly from sequences of key-value pairs:\n>>> dict([('sape', 4139), ('guido', 4127), ('jack', 4098)])\n{'sape': 4139, 'guido': 4127, 'jack': 4098}\nIn addition, dict comprehensions can be used to create dictionaries from arbitrary key and value expressions:\n>>> {x: x**2 for x in (2, 4, 6)}\n{2: 4, 4: 16, 6: 36}\nWhen the keys are simple strings, it is sometimes easier to specify pairs using keyword arguments:\n>>> dict(sape=4139, guido=4127, jack=4098)\n{'sape': 4139, 'guido': 4127, 'jack': 4098}\n5.6. Looping Techniques\u00b6\nWhen looping through dictionaries, the key and corresponding value can be retrieved at the same time using the items() method.\n>>> knights = {'gallahad': 'the pure', 'robin': 'the brave'}\n>>> for k, v in knights.items():\n...     print(k, v)\n...\ngallahad the pure\nrobin the brave\nWhen looping through a sequence, the position index and corresponding value can be retrieved at the same time using the enumerate() function.\n>>> for i, v in enumerate(['tic', 'tac', 'toe']):\n...     print(i, v)\n...\n0 tic\n1 tac\n2 toe\nTo loop over two or more sequences at the same time, the entries can be paired with the zip() function.\n>>> questions = ['name', 'quest', 'favorite color']\n>>> answers = ['lancelot', 'the holy grail', 'blue']\n>>> for q, a in zip(questions, answers):\n...     
print('What is your {0}? It is {1}.'.format(q, a))\n...\nWhat is your name? It is lancelot.\nWhat is your quest? It is the holy grail.\nWhat is your favorite color? It is blue.\nTo loop over a sequence in reverse, first specify the sequence in a forward\ndirection and then call the reversed()\nfunction.\n>>> for i in reversed(range(1, 10, 2)):\n... print(i)\n...\n9\n7\n5\n3\n1\nTo loop over a sequence in sorted order, use the sorted()\nfunction which\nreturns a new sorted list while leaving the source unaltered.\n>>> basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']\n>>> for i in sorted(basket):\n... print(i)\n...\napple\napple\nbanana\norange\norange\npear\nUsing set()\non a sequence eliminates duplicate elements. The use of\nsorted()\nin combination with set()\nover a sequence is an idiomatic\nway to loop over unique elements of the sequence in sorted order.\n>>> basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']\n>>> for f in sorted(set(basket)):\n... print(f)\n...\napple\nbanana\norange\npear\nIt is sometimes tempting to change a list while you are looping over it; however, it is often simpler and safer to create a new list instead.\n>>> import math\n>>> raw_data = [56.2, float('NaN'), 51.7, 55.3, 52.5, float('NaN'), 47.8]\n>>> filtered_data = []\n>>> for value in raw_data:\n... if not math.isnan(value):\n... filtered_data.append(value)\n...\n>>> filtered_data\n[56.2, 51.7, 55.3, 52.5, 47.8]\n5.7. More on Conditions\u00b6\nThe conditions used in while\nand if\nstatements can contain any\noperators, not just comparisons.\nThe comparison operators in\nand not in\nare membership tests that\ndetermine whether a value is in (or not in) a container. The operators is\nand is not\ncompare whether two objects are really the same object. All\ncomparison operators have the same priority, which is lower than that of all\nnumerical operators.\nComparisons can be chained. 
For example, a < b == c\ntests whether a\nis\nless than b\nand moreover b\nequals c\n.\nComparisons may be combined using the Boolean operators and\nand or\n, and\nthe outcome of a comparison (or of any other Boolean expression) may be negated\nwith not\n. These have lower priorities than comparison operators; between\nthem, not\nhas the highest priority and or\nthe lowest, so that A and\nnot B or C\nis equivalent to (A and (not B)) or C\n. As always, parentheses\ncan be used to express the desired composition.\nThe Boolean operators and\nand or\nare so-called short-circuit\noperators: their arguments are evaluated from left to right, and evaluation\nstops as soon as the outcome is determined. For example, if A\nand C\nare\ntrue but B\nis false, A and B and C\ndoes not evaluate the expression\nC\n. When used as a general value and not as a Boolean, the return value of a\nshort-circuit operator is the last evaluated argument.\nIt is possible to assign the result of a comparison or other Boolean expression to a variable. For example,\n>>> string1, string2, string3 = '', 'Trondheim', 'Hammer Dance'\n>>> non_null = string1 or string2 or string3\n>>> non_null\n'Trondheim'\nNote that in Python, unlike C, assignment inside expressions must be done\nexplicitly with the\nwalrus operator :=\n.\nThis avoids a common class of problems encountered in C programs: typing =\nin an expression when ==\nwas intended.\n5.8. Comparing Sequences and Other Types\u00b6\nSequence objects typically may be compared to other objects with the same sequence type. The comparison uses lexicographical ordering: first the first two items are compared, and if they differ this determines the outcome of the comparison; if they are equal, the next two items are compared, and so on, until either sequence is exhausted. If two items to be compared are themselves sequences of the same type, the lexicographical comparison is carried out recursively. 
If all items of two sequences compare equal, the sequences are considered equal. If one sequence is an initial sub-sequence of the other, the shorter sequence is the smaller (lesser) one. Lexicographical ordering for strings uses the Unicode code point number to order individual characters. Some examples of comparisons between sequences of the same type:\n(1, 2, 3) < (1, 2, 4)\n[1, 2, 3] < [1, 2, 4]\n'ABC' < 'C' < 'Pascal' < 'Python'\n(1, 2, 3, 4) < (1, 2, 4)\n(1, 2) < (1, 2, -1)\n(1, 2, 3) == (1.0, 2.0, 3.0)\n(1, 2, ('aa', 'ab')) < (1, 2, ('abc', 'a'), 4)\nNote that comparing objects of different types with <\nor >\nis legal\nprovided that the objects have appropriate comparison methods. For example,\nmixed numeric types are compared according to their numeric value, so 0 equals\n0.0, etc. Otherwise, rather than providing an arbitrary ordering, the\ninterpreter will raise a TypeError\nexception.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 5614}
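The dictionary and looping idioms described in the tutorial record above can be condensed into one small, self-contained sketch (the variable names mirror the tutorial's examples; the recap itself is an editorial illustration, not part of the original page):

```python
# Recap of the idioms above: dict storage/lookup, get() for missing keys,
# insertion-ordered keys, items()/enumerate()/zip() loops, and
# sorted(set(...)) for unique elements in sorted order.
tel = {'jack': 4098, 'sape': 4139}
tel['guido'] = 4127                      # storing a value with a key
assert tel['jack'] == 4098               # extracting the value given the key
assert tel.get('irv') is None            # get() avoids KeyError for missing keys
del tel['sape']                          # deleting a key:value pair
assert list(tel) == ['jack', 'guido']    # keys in insertion order
assert sorted(tel) == ['guido', 'jack']  # sorted view of the keys
assert 'guido' in tel and 'sape' not in tel

# Looping techniques: items(), enumerate(), zip().
knights = {'gallahad': 'the pure', 'robin': 'the brave'}
assert [(k, v) for k, v in knights.items()] == [
    ('gallahad', 'the pure'), ('robin', 'the brave')]
assert list(enumerate(['tic', 'tac'])) == [(0, 'tic'), (1, 'tac')]
assert list(zip(['name'], ['lancelot'])) == [('name', 'lancelot')]

# Unique elements in sorted order.
basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
assert sorted(set(basket)) == ['apple', 'banana', 'orange', 'pear']
```

Every assertion holds on any modern CPython, since dicts preserve insertion order (guaranteed since Python 3.7).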
The fields will be described in the order in which they occur in the structure.\nIn addition to the following quick reference, the Examples section provides at-a-glance insight into the meaning and use of PyTypeObject.\nQuick Reference\u00b6\n[The \u201ctp slots\u201d, \u201csub-slots\u201d, and \u201cslot typedefs\u201d quick-reference tables did not survive extraction; see the original page for the tabular mapping between PyTypeObject slots, special methods, and slot typedefs.]\nSee Slot Type typedefs below for more detail.\nPyTypeObject Definition\u00b6\nThe structure definition for PyTypeObject can be found in Include/cpython/object.h.
For convenience of reference, this repeats the\ndefinition found there:\ntypedef struct _typeobject {\nPyObject_VAR_HEAD\nconst char *tp_name; /* For printing, in format \".\" */\nPy_ssize_t tp_basicsize, tp_itemsize; /* For allocation */\n/* Methods to implement standard operations */\ndestructor tp_dealloc;\nPy_ssize_t tp_vectorcall_offset;\ngetattrfunc tp_getattr;\nsetattrfunc tp_setattr;\nPyAsyncMethods *tp_as_async; /* formerly known as tp_compare (Python 2)\nor tp_reserved (Python 3) */\nreprfunc tp_repr;\n/* Method suites for standard classes */\nPyNumberMethods *tp_as_number;\nPySequenceMethods *tp_as_sequence;\nPyMappingMethods *tp_as_mapping;\n/* More standard operations (here for binary compatibility) */\nhashfunc tp_hash;\nternaryfunc tp_call;\nreprfunc tp_str;\ngetattrofunc tp_getattro;\nsetattrofunc tp_setattro;\n/* Functions to access object as input/output buffer */\nPyBufferProcs *tp_as_buffer;\n/* Flags to define presence of optional/expanded features */\nunsigned long tp_flags;\nconst char *tp_doc; /* Documentation string */\n/* Assigned meaning in release 2.0 */\n/* call function for all accessible objects */\ntraverseproc tp_traverse;\n/* delete references to contained objects */\ninquiry tp_clear;\n/* Assigned meaning in release 2.1 */\n/* rich comparisons */\nrichcmpfunc tp_richcompare;\n/* weak reference enabler */\nPy_ssize_t tp_weaklistoffset;\n/* Iterators */\ngetiterfunc tp_iter;\niternextfunc tp_iternext;\n/* Attribute descriptor and subclassing stuff */\nPyMethodDef *tp_methods;\nPyMemberDef *tp_members;\nPyGetSetDef *tp_getset;\n// Strong reference on a heap type, borrowed reference on a static type\nPyTypeObject *tp_base;\nPyObject *tp_dict;\ndescrgetfunc tp_descr_get;\ndescrsetfunc tp_descr_set;\nPy_ssize_t tp_dictoffset;\ninitproc tp_init;\nallocfunc tp_alloc;\nnewfunc tp_new;\nfreefunc tp_free; /* Low-level free-memory routine */\ninquiry tp_is_gc; /* For PyObject_IS_GC */\nPyObject *tp_bases;\nPyObject *tp_mro; /* method 
resolution order */\nPyObject *tp_cache; /* no longer used */\nvoid *tp_subclasses; /* for static builtin types this is an index */\nPyObject *tp_weaklist; /* not used for static builtin types */\ndestructor tp_del;\n/* Type attribute cache version tag. Added in version 2.6.\n* If zero, the cache is invalid and must be initialized.\n*/\nunsigned int tp_version_tag;\ndestructor tp_finalize;\nvectorcallfunc tp_vectorcall;\n/* bitset of which type-watchers care about this type */\nunsigned char tp_watched;\n/* Number of tp_version_tag values used.\n* Set to _Py_ATTR_CACHE_UNUSED if the attribute cache is\n* disabled for this type (e.g. due to custom MRO entries).\n* Otherwise, limited to MAX_VERSIONS_PER_CLASS (defined elsewhere).\n*/\nuint16_t tp_versions_used;\n} PyTypeObject;\nPyObject Slots\u00b6\nThe type object structure extends the PyVarObject structure. The ob_size field is used for dynamic types (created by type_new(), usually called from a class statement). Note that PyType_Type (the metatype) initializes tp_itemsize, which means that its instances (i.e. type objects) must have the ob_size field.\nThe type object\u2019s reference count is initialized to 1 by the PyObject_HEAD_INIT macro. Note that for statically allocated type objects, the type\u2019s instances (objects whose ob_type points back to the type) do not count as references. But for dynamically allocated type objects, the instances do count as references.\nInheritance:\nThis field is not inherited by subtypes.\nThis is the type\u2019s type, in other words its metatype. It is initialized by the argument to the PyObject_HEAD_INIT macro, and its value should normally be &PyType_Type. However, for dynamically loadable extension modules that must be usable on Windows (at least), the compiler complains that this is not a valid initializer. Therefore, the convention is to pass NULL to the PyObject_HEAD_INIT macro and to initialize this field explicitly at the start of the module\u2019s initialization function, before doing anything else. This is typically done like this:\nFoo_Type.ob_type = &PyType_Type;\nThis should be done before any instances of the type are created. PyType_Ready() checks if ob_type is NULL, and if so, initializes it to the ob_type field of the base class. PyType_Ready() will not change this field if it is non-zero.\nInheritance:\nThis field is inherited by subtypes.\nPyVarObject Slots\u00b6\nFor statically allocated type objects, this should be initialized to zero. For dynamically allocated type objects, this field has a special internal meaning.\nThis field should be accessed using the Py_SIZE() macro.\nInheritance:\nThis field is not inherited by subtypes.\nPyTypeObject Slots\u00b6\nEach slot has a section describing inheritance. If PyType_Ready() may set a value when the field is set to NULL then there will also be a \u201cDefault\u201d section. (Note that many fields set on PyBaseObject_Type and PyType_Type effectively act as defaults.)\n-\nconst char *PyTypeObject.tp_name\u00b6\nPointer to a NUL-terminated string containing the name of the type. For types that are accessible as module globals, the string should be the full module name, followed by a dot, followed by the type name; for built-in types, it should be just the type name. If the module is a submodule of a package, the full package name is part of the full module name. For example, a type named T defined in module M in subpackage Q in package P should have the tp_name initializer \"P.Q.M.T\".\nFor dynamically allocated type objects, this should just be the type name, and the module name explicitly stored in the type dict as the value for key '__module__'.\nFor statically allocated type objects, the tp_name field should contain a dot. Everything before the last dot is made accessible as the __module__ attribute, and everything after the last dot is made accessible as the __name__ attribute.\nIf no dot is present, the entire tp_name field is made accessible as the __name__ attribute, and the __module__ attribute is undefined (unless explicitly set in the dictionary, as explained above). This means your type will be impossible to pickle. Additionally, it will not be listed in module documentations created with pydoc.\nThis field must not be NULL. It is the only required field in PyTypeObject (other than potentially tp_itemsize).\nInheritance:\nThis field is not inherited by subtypes.\n-\nPy_ssize_t PyTypeObject.tp_basicsize\u00b6\n-\nPy_ssize_t PyTypeObject.tp_itemsize\u00b6\nThese fields allow calculating the size in bytes of instances of the type.\nThere are two kinds of types: types with fixed-length instances have a zero tp_itemsize field, types with variable-length instances have a non-zero tp_itemsize field. For a type with fixed-length instances, all instances have the same size, given in tp_basicsize. (Exceptions to this rule can be made using PyUnstable_Object_GC_NewWithExtraData().)\nFor a type with variable-length instances, the instances must have an ob_size field, and the instance size is tp_basicsize plus N times tp_itemsize, where N is the \u201clength\u201d of the object.\nFunctions like PyObject_NewVar() will take the value of N as an argument, and store it in the instance\u2019s ob_size field. Note that the ob_size field may later be used for other purposes. For example, int instances use the bits of ob_size in an implementation-defined way; the underlying storage and its size should be accessed using PyLong_Export().\nNote\nThe ob_size field should be accessed using the Py_SIZE() and Py_SET_SIZE() macros.\nAlso, the presence of an ob_size field in the instance layout doesn\u2019t mean that the instance structure is variable-length.
For example, the list type has fixed-length instances, yet those instances have an ob_size field. (As with int, avoid reading lists\u2019 ob_size directly. Call PyList_Size() instead.)\nThe tp_basicsize includes the size needed for data of the type\u2019s tp_base, plus any extra data needed by each instance.\nThe correct way to set tp_basicsize is to use the sizeof operator on the struct used to declare the instance layout. This struct must include the struct used to declare the base type. In other words, tp_basicsize must be greater than or equal to the base\u2019s tp_basicsize.\nSince every type is a subtype of object, this struct must include PyObject or PyVarObject (depending on whether ob_size should be included). These are usually defined by the macro PyObject_HEAD or PyObject_VAR_HEAD, respectively.\nThe basic size does not include the GC header size, as that header is not part of PyObject_HEAD.\nFor cases where the struct used to declare the base type is unknown, see PyType_Spec.basicsize and PyType_FromMetaclass().\nNotes about alignment:\ntp_basicsize must be a multiple of _Alignof(PyObject). When using sizeof on a struct that includes PyObject_HEAD, as recommended, the compiler ensures this. When not using a C struct, or when using compiler extensions like __attribute__((packed)), it is up to you.\nIf the variable items require a particular alignment, tp_basicsize and tp_itemsize must each be a multiple of that alignment. For example, if a type\u2019s variable part stores a double, it is your responsibility that both fields are a multiple of _Alignof(double).\nInheritance:\nThese fields are inherited separately by subtypes. (That is, if the field is set to zero, PyType_Ready() will copy the value from the base type, indicating that the instances do not need additional storage.)\nIf the base type has a non-zero tp_itemsize, it is generally not safe to set tp_itemsize to a different non-zero value in a subtype (though this depends on the implementation of the base type).\n-\ndestructor PyTypeObject.tp_dealloc\u00b6\nThe corresponding slot ID Py_tp_dealloc is part of the Stable ABI.\nA pointer to the instance destructor function. The function signature is:\nvoid tp_dealloc(PyObject *self);\nThe destructor function should remove all references which the instance owns (e.g., call Py_CLEAR()), free all memory buffers owned by the instance, and call the type\u2019s tp_free function to free the object itself.\nIf you may call functions that may set the error indicator, you must use PyErr_GetRaisedException() and PyErr_SetRaisedException() to ensure you don\u2019t clobber a preexisting error indicator (the deallocation could have occurred while processing a different error):\nstatic void foo_dealloc(foo_object *self) {\n    PyObject *exc = PyErr_GetRaisedException();\n    ...\n    PyErr_SetRaisedException(exc);\n}\nThe dealloc handler itself must not raise an exception; if it hits an error case it should call PyErr_FormatUnraisable() to log (and clear) an unraisable exception.\nNo guarantees are made about when an object is destroyed, except:\nPython will destroy an object immediately or some time after the final reference to the object is deleted, unless its finalizer (tp_finalize) subsequently resurrects the object.\nAn object will not be destroyed while it is being automatically finalized (tp_finalize) or automatically cleared (tp_clear).\nCPython currently destroys an object immediately from Py_DECREF() when the new reference count is zero, but this may change in a future version.\nIt is recommended to call PyObject_CallFinalizerFromDealloc() at the beginning of tp_dealloc to guarantee that the object is always finalized before destruction.\nIf the type supports garbage collection (the Py_TPFLAGS_HAVE_GC flag is set), the destructor should call PyObject_GC_UnTrack() before clearing any member fields.\nIt is permissible to call tp_clear from tp_dealloc to reduce code duplication and to guarantee that the object is always cleared before destruction. Beware that tp_clear might have already been called.\nIf the type is heap allocated (Py_TPFLAGS_HEAPTYPE), the deallocator should release the owned reference to its type object (via Py_DECREF()) after calling the type deallocator. See the example code below:\nstatic void foo_dealloc(PyObject *op) {\n    foo_object *self = (foo_object *) op;\n    PyObject_GC_UnTrack(self);\n    Py_CLEAR(self->ref);\n    Py_TYPE(self)->tp_free(self);\n}\ntp_dealloc must leave the exception status unchanged. If it needs to call something that might raise an exception, the exception state must be backed up first and restored later (after logging any exceptions with PyErr_WriteUnraisable()).\nExample:\nstatic void foo_dealloc(PyObject *self) {\n    PyObject *exc = PyErr_GetRaisedException();\n    if (PyObject_CallFinalizerFromDealloc(self) < 0) {\n        // self was resurrected.\n        goto done;\n    }\n    PyTypeObject *tp = Py_TYPE(self);\n    if (tp->tp_flags & Py_TPFLAGS_HAVE_GC) {\n        PyObject_GC_UnTrack(self);\n    }\n    // Optional, but convenient to avoid code duplication.\n    if (tp->tp_clear && tp->tp_clear(self) < 0) {\n        PyErr_WriteUnraisable(self);\n    }\n    // Any additional destruction goes here.\n    tp->tp_free(self);\n    self = NULL;  // In case PyErr_WriteUnraisable() is called below.\n    if (tp->tp_flags & Py_TPFLAGS_HEAPTYPE) {\n        Py_CLEAR(tp);\n    }\ndone:\n    // Optional, if something was called that might have raised an\n    // exception.\n    if (PyErr_Occurred()) {\n        PyErr_WriteUnraisable(self);\n    }\n    PyErr_SetRaisedException(exc);\n}\ntp_dealloc may be called from any Python thread, not just the thread which created the object (if the object becomes part of a refcount cycle, that cycle might be collected by a garbage collection on any thread). This is not a problem for Python API calls, since the thread on which tp_dealloc is called has an attached thread state. However, if the object being destroyed in turn destroys objects from some other C library, care should be taken to ensure that destroying those objects on the thread which called tp_dealloc will not violate any assumptions of the library.\nInheritance:\nThis field is inherited by subtypes.\nSee also\nObject Life Cycle for details about how this slot relates to other slots.\n-\nPy_ssize_t PyTypeObject.tp_vectorcall_offset\u00b6\nAn optional offset to a per-instance function that implements calling the object using the vectorcall protocol, a more efficient alternative to the simpler tp_call.\nThis field is only used if the flag Py_TPFLAGS_HAVE_VECTORCALL is set.
If so, this must be a positive integer containing the offset in the instance of avectorcallfunc\npointer.The vectorcallfunc pointer may be\nNULL\n, in which case the instance behaves as ifPy_TPFLAGS_HAVE_VECTORCALL\nwas not set: calling the instance falls back totp_call\n.Any class that sets\nPy_TPFLAGS_HAVE_VECTORCALL\nmust also settp_call\nand make sure its behaviour is consistent with the vectorcallfunc function. This can be done by setting tp_call toPyVectorcall_Call()\n.Changed in version 3.8: Before version 3.8, this slot was named\ntp_print\n. In Python 2.x, it was used for printing to a file. In Python 3.0 to 3.7, it was unused.Changed in version 3.12: Before version 3.12, it was not recommended for mutable heap types to implement the vectorcall protocol. When a user sets\n__call__\nin Python code, only tp_call is updated, likely making it inconsistent with the vectorcall function. Since 3.12, setting__call__\nwill disable vectorcall optimization by clearing thePy_TPFLAGS_HAVE_VECTORCALL\nflag.Inheritance:\nThis field is always inherited. However, the\nPy_TPFLAGS_HAVE_VECTORCALL\nflag is not always inherited. If it\u2019s not set, then the subclass won\u2019t use vectorcall, except whenPyVectorcall_Call()\nis explicitly called.\n-\ngetattrfunc PyTypeObject.tp_getattr\u00b6\nThe corresponding slot ID\nPy_tp_getattr\nis part of the Stable ABI.An optional pointer to the get-attribute-string function.\nThis field is deprecated. 
When it is defined, it should point to a function that acts the same as the\ntp_getattro\nfunction, but taking a C string instead of a Python string object to give the attribute name.Inheritance:\nGroup:\ntp_getattr\n,tp_getattro\nThis field is inherited by subtypes together with\ntp_getattro\n: a subtype inherits bothtp_getattr\nandtp_getattro\nfrom its base type when the subtype\u2019stp_getattr\nandtp_getattro\nare bothNULL\n.\n-\nsetattrfunc PyTypeObject.tp_setattr\u00b6\nThe corresponding slot ID\nPy_tp_setattr\nis part of the Stable ABI.An optional pointer to the function for setting and deleting attributes.\nThis field is deprecated. When it is defined, it should point to a function that acts the same as the\ntp_setattro\nfunction, but taking a C string instead of a Python string object to give the attribute name.Inheritance:\nGroup:\ntp_setattr\n,tp_setattro\nThis field is inherited by subtypes together with\ntp_setattro\n: a subtype inherits bothtp_setattr\nandtp_setattro\nfrom its base type when the subtype\u2019stp_setattr\nandtp_setattro\nare bothNULL\n.\n-\nPyAsyncMethods *PyTypeObject.tp_as_async\u00b6\nPointer to an additional structure that contains fields relevant only to objects which implement awaitable and asynchronous iterator protocols at the C-level. See Async Object Structures for details.\nAdded in version 3.5: Formerly known as\ntp_compare\nandtp_reserved\n.Inheritance:\nThe\ntp_as_async\nfield is not inherited, but the contained fields are inherited individually.\n-\nreprfunc PyTypeObject.tp_repr\u00b6\nThe corresponding slot ID\nPy_tp_repr\nis part of the Stable ABI.An optional pointer to a function that implements the built-in function\nrepr()\n.The signature is the same as for\nPyObject_Repr()\n:PyObject *tp_repr(PyObject *self);\nThe function must return a string or a Unicode object. Ideally, this function should return a string that, when passed to\neval()\n, given a suitable environment, returns an object with the same value. 
If this is not feasible, it should return a string starting with'<'\nand ending with'>'\nfrom which both the type and the value of the object can be deduced.Inheritance:\nThis field is inherited by subtypes.\nDefault:\nWhen this field is not set, a string of the form\n<%s object at %p>\nis returned, where%s\nis replaced by the type name, and%p\nby the object\u2019s memory address.\n-\nPyNumberMethods *PyTypeObject.tp_as_number\u00b6\nPointer to an additional structure that contains fields relevant only to objects which implement the number protocol. These fields are documented in Number Object Structures.\nInheritance:\nThe\ntp_as_number\nfield is not inherited, but the contained fields are inherited individually.\n-\nPySequenceMethods *PyTypeObject.tp_as_sequence\u00b6\nPointer to an additional structure that contains fields relevant only to objects which implement the sequence protocol. These fields are documented in Sequence Object Structures.\nInheritance:\nThe\ntp_as_sequence\nfield is not inherited, but the contained fields are inherited individually.\n-\nPyMappingMethods *PyTypeObject.tp_as_mapping\u00b6\nPointer to an additional structure that contains fields relevant only to objects which implement the mapping protocol. These fields are documented in Mapping Object Structures.\nInheritance:\nThe\ntp_as_mapping\nfield is not inherited, but the contained fields are inherited individually.\n-\nhashfunc PyTypeObject.tp_hash\u00b6\nThe corresponding slot ID\nPy_tp_hash\nis part of the Stable ABI.An optional pointer to a function that implements the built-in function\nhash()\n.The signature is the same as for\nPyObject_Hash()\n:Py_hash_t tp_hash(PyObject *);\nThe value\n-1\nshould not be returned as a normal return value; when an error occurs during the computation of the hash value, the function should set an exception and return-1\n.When this field is not set (and\ntp_richcompare\nis not set), an attempt to take the hash of the object raisesTypeError\n. 
This is the same as setting it to PyObject_HashNotImplemented().
This field can be set explicitly to PyObject_HashNotImplemented() to block inheritance of the hash method from a parent type. This is interpreted as the equivalent of __hash__ = None at the Python level, causing isinstance(o, collections.abc.Hashable) to correctly return False. Note that the converse is also true: setting __hash__ = None on a class at the Python level will result in the tp_hash slot being set to PyObject_HashNotImplemented().
Inheritance:
Group: tp_hash, tp_richcompare
This field is inherited by subtypes together with tp_richcompare: a subtype inherits both tp_richcompare and tp_hash when the subtype’s tp_richcompare and tp_hash are both NULL.
Default:
-
ternaryfunc PyTypeObject.tp_call¶
The corresponding slot ID Py_tp_call is part of the Stable ABI.
An optional pointer to a function that implements calling the object. This should be NULL if the object is not callable. The signature is the same as for PyObject_Call():
PyObject *tp_call(PyObject *self, PyObject *args, PyObject *kwargs);
Inheritance:
This field is inherited by subtypes.
-
reprfunc PyTypeObject.tp_str¶
The corresponding slot ID Py_tp_str is part of the Stable ABI.
An optional pointer to a function that implements the built-in operation str(). (Note that str is a type now, and str() calls the constructor for that type. This constructor calls PyObject_Str() to do the actual work, and PyObject_Str() will call this handler.)
The signature is the same as for PyObject_Str():
PyObject *tp_str(PyObject *self);
The function must return a string or a Unicode object.
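The __hash__ = None equivalence noted above for tp_hash can be observed from pure Python (the Unhashable class is a hypothetical illustration):

```python
import collections.abc

class Unhashable:
    # The Python-level equivalent of setting tp_hash to
    # PyObject_HashNotImplemented() at the C level: blocks
    # inheritance of the hash method from the parent type.
    __hash__ = None

assert not isinstance(Unhashable(), collections.abc.Hashable)
try:
    hash(Unhashable())
except TypeError as exc:
    print(type(exc).__name__)  # TypeError
```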
It should be a \u201cfriendly\u201d string representation of the object, as this is the representation that will be used, among other things, by the\nprint()\nfunction.Inheritance:\nThis field is inherited by subtypes.\nDefault:\nWhen this field is not set,\nPyObject_Repr()\nis called to return a string representation.\n-\ngetattrofunc PyTypeObject.tp_getattro\u00b6\nThe corresponding slot ID\nPy_tp_getattro\nis part of the Stable ABI.An optional pointer to the get-attribute function.\nThe signature is the same as for\nPyObject_GetAttr()\n:PyObject *tp_getattro(PyObject *self, PyObject *attr);\nIt is usually convenient to set this field to\nPyObject_GenericGetAttr()\n, which implements the normal way of looking for object attributes.Inheritance:\nGroup:\ntp_getattr\n,tp_getattro\nThis field is inherited by subtypes together with\ntp_getattr\n: a subtype inherits bothtp_getattr\nandtp_getattro\nfrom its base type when the subtype\u2019stp_getattr\nandtp_getattro\nare bothNULL\n.Default:\n-\nsetattrofunc PyTypeObject.tp_setattro\u00b6\nThe corresponding slot ID\nPy_tp_setattro\nis part of the Stable ABI.An optional pointer to the function for setting and deleting attributes.\nThe signature is the same as for\nPyObject_SetAttr()\n:int tp_setattro(PyObject *self, PyObject *attr, PyObject *value);\nIn addition, setting value to\nNULL\nto delete an attribute must be supported. It is usually convenient to set this field toPyObject_GenericSetAttr()\n, which implements the normal way of setting object attributes.Inheritance:\nGroup:\ntp_setattr\n,tp_setattro\nThis field is inherited by subtypes together with\ntp_setattr\n: a subtype inherits bothtp_setattr\nandtp_setattro\nfrom its base type when the subtype\u2019stp_setattr\nandtp_setattro\nare bothNULL\n.Default:\n-\nPyBufferProcs *PyTypeObject.tp_as_buffer\u00b6\nPointer to an additional structure that contains fields relevant only to objects which implement the buffer interface. 
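The default behaviour of tp_str described above (calling PyObject_Repr() when the slot is not set) is mirrored at the Python level by str() falling back to __repr__. A short sketch, using a hypothetical OnlyRepr class:

```python
class OnlyRepr:
    # No __str__ is defined, so str() falls back to the repr,
    # mirroring the tp_str default of calling PyObject_Repr().
    def __repr__(self):
        return "<OnlyRepr instance>"

print(str(OnlyRepr()))  # <OnlyRepr instance>
```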
These fields are documented in Buffer Object Structures.\nInheritance:\nThe\ntp_as_buffer\nfield is not inherited, but the contained fields are inherited individually.\n-\nunsigned long PyTypeObject.tp_flags\u00b6\nThis field is a bit mask of various flags. Some flags indicate variant semantics for certain situations; others are used to indicate that certain fields in the type object (or in the extension structures referenced via\ntp_as_number\n,tp_as_sequence\n,tp_as_mapping\n, andtp_as_buffer\n) that were historically not always present are valid; if such a flag bit is clear, the type fields it guards must not be accessed and must be considered to have a zero orNULL\nvalue instead.Inheritance:\nInheritance of this field is complicated. Most flag bits are inherited individually, i.e. if the base type has a flag bit set, the subtype inherits this flag bit. The flag bits that pertain to extension structures are strictly inherited if the extension structure is inherited, i.e. the base type\u2019s value of the flag bit is copied into the subtype together with a pointer to the extension structure. The\nPy_TPFLAGS_HAVE_GC\nflag bit is inherited together with thetp_traverse\nandtp_clear\nfields, i.e. if thePy_TPFLAGS_HAVE_GC\nflag bit is clear in the subtype and thetp_traverse\nandtp_clear\nfields in the subtype exist and haveNULL\nvalues.Default:\nPyBaseObject_Type\nusesPy_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE\n.Bit Masks:\nThe following bit masks are currently defined; these can be ORed together using the\n|\noperator to form the value of thetp_flags\nfield. The macroPyType_HasFeature()\ntakes a type and a flags value, tp and f, and checks whethertp->tp_flags & f\nis non-zero.-\nPy_TPFLAGS_HEAPTYPE\u00b6\nThis bit is set when the type object itself is allocated on the heap, for example, types created dynamically using\nPyType_FromSpec()\n. 
In this case, theob_type\nfield of its instances is considered a reference to the type, and the type object is INCREF\u2019ed when a new instance is created, and DECREF\u2019ed when an instance is destroyed (this does not apply to instances of subtypes; only the type referenced by the instance\u2019s ob_type gets INCREF\u2019ed or DECREF\u2019ed). Heap types should also support garbage collection as they can form a reference cycle with their own module object.Inheritance:\n???\n-\nPy_TPFLAGS_BASETYPE\u00b6\n- Part of the Stable ABI.\nThis bit is set when the type can be used as the base type of another type. If this bit is clear, the type cannot be subtyped (similar to a \u201cfinal\u201d class in Java).\nInheritance:\n???\n-\nPy_TPFLAGS_READY\u00b6\nThis bit is set when the type object has been fully initialized by\nPyType_Ready()\n.Inheritance:\n???\n-\nPy_TPFLAGS_READYING\u00b6\nThis bit is set while\nPyType_Ready()\nis in the process of initializing the type object.Inheritance:\n???\n-\nPy_TPFLAGS_HAVE_GC\u00b6\n- Part of the Stable ABI.\nThis bit is set when the object supports garbage collection. If this bit is set, memory for new instances (see\ntp_alloc\n) must be allocated usingPyObject_GC_New\norPyType_GenericAlloc()\nand deallocated (seetp_free\n) usingPyObject_GC_Del()\n. More information in section Supporting Cyclic Garbage Collection.Inheritance:\nGroup:\nPy_TPFLAGS_HAVE_GC\n,tp_traverse\n,tp_clear\nThe\nPy_TPFLAGS_HAVE_GC\nflag bit is inherited together with thetp_traverse\nandtp_clear\nfields, i.e. if thePy_TPFLAGS_HAVE_GC\nflag bit is clear in the subtype and thetp_traverse\nandtp_clear\nfields in the subtype exist and haveNULL\nvalues.\n-\nPy_TPFLAGS_DEFAULT\u00b6\n- Part of the Stable ABI.\nThis is a bitmask of all the bits that pertain to the existence of certain fields in the type object and its extension structures. 
Currently, it includes the following bits:\nPy_TPFLAGS_HAVE_STACKLESS_EXTENSION\n.Inheritance:\n???\n-\nPy_TPFLAGS_METHOD_DESCRIPTOR\u00b6\n- Part of the Stable ABI since version 3.8.\nThis bit indicates that objects behave like unbound methods.\nIf this flag is set for\ntype(meth)\n, then:meth.__get__(obj, cls)(*args, **kwds)\n(withobj\nnot None) must be equivalent tometh(obj, *args, **kwds)\n.meth.__get__(None, cls)(*args, **kwds)\nmust be equivalent tometh(*args, **kwds)\n.\nThis flag enables an optimization for typical method calls like\nobj.meth()\n: it avoids creating a temporary \u201cbound method\u201d object forobj.meth\n.Added in version 3.8.\nInheritance:\nThis flag is never inherited by types without the\nPy_TPFLAGS_IMMUTABLETYPE\nflag set. For extension types, it is inherited whenevertp_descr_get\nis inherited.\n-\nPy_TPFLAGS_MANAGED_DICT\u00b6\nThis bit indicates that instances of the class have a\n__dict__\nattribute, and that the space for the dictionary is managed by the VM.If this flag is set,\nPy_TPFLAGS_HAVE_GC\nshould also be set.The type traverse function must call\nPyObject_VisitManagedDict()\nand its clear function must callPyObject_ClearManagedDict()\n.Added in version 3.12.\nInheritance:\nThis flag is inherited unless the\ntp_dictoffset\nfield is set in a superclass.\n-\nPy_TPFLAGS_MANAGED_WEAKREF\u00b6\nThis bit indicates that instances of the class should be weakly referenceable.\nAdded in version 3.12.\nInheritance:\nThis flag is inherited unless the\ntp_weaklistoffset\nfield is set in a superclass.\n-\nPy_TPFLAGS_ITEMS_AT_END\u00b6\n- Part of the Stable ABI since version 3.12.\nOnly usable with variable-size types, i.e. 
ones with non-zero\ntp_itemsize\n.Indicates that the variable-sized portion of an instance of this type is at the end of the instance\u2019s memory area, at an offset of\nPy_TYPE(obj)->tp_basicsize\n(which may be different in each subclass).When setting this flag, be sure that all superclasses either use this memory layout, or are not variable-sized. Python does not check this.\nAdded in version 3.12.\nInheritance:\nThis flag is inherited.\n-\nPy_TPFLAGS_LONG_SUBCLASS\u00b6\n-\nPy_TPFLAGS_LIST_SUBCLASS\u00b6\n-\nPy_TPFLAGS_TUPLE_SUBCLASS\u00b6\n-\nPy_TPFLAGS_BYTES_SUBCLASS\u00b6\n-\nPy_TPFLAGS_UNICODE_SUBCLASS\u00b6\n-\nPy_TPFLAGS_DICT_SUBCLASS\u00b6\n-\nPy_TPFLAGS_BASE_EXC_SUBCLASS\u00b6\n-\nPy_TPFLAGS_TYPE_SUBCLASS\u00b6\nFunctions such as\nPyLong_Check()\nwill callPyType_FastSubclass()\nwith one of these flags to quickly determine if a type is a subclass of a built-in type; such specific checks are faster than a generic check, likePyObject_IsInstance()\n. Custom types that inherit from built-ins should have theirtp_flags\nset appropriately, or the code that interacts with such types will behave differently depending on what kind of check is used.\n-\nPy_TPFLAGS_HAVE_FINALIZE\u00b6\nThis bit is set when the\ntp_finalize\nslot is present in the type structure.Added in version 3.4.\nDeprecated since version 3.8: This flag isn\u2019t necessary anymore, as the interpreter assumes the\ntp_finalize\nslot is always present in the type structure.\n-\nPy_TPFLAGS_HAVE_VECTORCALL\u00b6\n- Part of the Stable ABI since version 3.12.\nThis bit is set when the class implements the vectorcall protocol. See\ntp_vectorcall_offset\nfor details.Inheritance:\nThis bit is inherited if\ntp_call\nis also inherited.Added in version 3.8: as\n_Py_TPFLAGS_HAVE_VECTORCALL\nChanged in version 3.9.\nRenamed to the current name, without the leading underscore. 
The old provisional name is soft deprecated.\nChanged in version 3.12: This flag is now removed from a class when the class\u2019s\n__call__()\nmethod is reassigned.This flag can now be inherited by mutable classes.\n-\nPy_TPFLAGS_IMMUTABLETYPE\u00b6\nThis bit is set for type objects that are immutable: type attributes cannot be set nor deleted.\nPyType_Ready()\nautomatically applies this flag to static types.Inheritance:\nThis flag is not inherited.\nAdded in version 3.10.\n-\nPy_TPFLAGS_DISALLOW_INSTANTIATION\u00b6\nDisallow creating instances of the type: set\ntp_new\nto NULL and don\u2019t create the__new__\nkey in the type dictionary.The flag must be set before creating the type, not after. For example, it must be set before\nPyType_Ready()\nis called on the type.The flag is set automatically on static types if\ntp_base\nis NULL or&PyBaseObject_Type\nandtp_new\nis NULL.Inheritance:\nThis flag is not inherited. However, subclasses will not be instantiable unless they provide a non-NULL\ntp_new\n(which is only possible via the C API).Note\nTo disallow instantiating a class directly but allow instantiating its subclasses (e.g. for an abstract base class), do not use this flag. Instead, make\ntp_new\nonly succeed for subclasses.Added in version 3.10.\n-\nPy_TPFLAGS_MAPPING\u00b6\nThis bit indicates that instances of the class may match mapping patterns when used as the subject of a\nmatch\nblock. 
It is automatically set when registering or subclassing collections.abc.Mapping, and unset when registering collections.abc.Sequence.
Note
Py_TPFLAGS_MAPPING and Py_TPFLAGS_SEQUENCE are mutually exclusive; it is an error to enable both flags simultaneously.
Inheritance:
This flag is inherited by types that do not already set Py_TPFLAGS_SEQUENCE.
See also
PEP 634 – Structural Pattern Matching: Specification
Added in version 3.10.
-
Py_TPFLAGS_SEQUENCE¶
This bit indicates that instances of the class may match sequence patterns when used as the subject of a match block. It is automatically set when registering or subclassing collections.abc.Sequence, and unset when registering collections.abc.Mapping.
Note
Py_TPFLAGS_MAPPING and Py_TPFLAGS_SEQUENCE are mutually exclusive; it is an error to enable both flags simultaneously.
Inheritance:
This flag is inherited by types that do not already set Py_TPFLAGS_MAPPING.
See also
PEP 634 – Structural Pattern Matching: Specification
Added in version 3.10.
-
Py_TPFLAGS_VALID_VERSION_TAG¶
Internal. Do not set or unset this flag. To indicate that a class has changed, call PyType_Modified().
Warning
This flag is present in header files but is not used. It will be removed in a future version of CPython.
-
const char *PyTypeObject.tp_doc¶
The corresponding slot ID Py_tp_doc is part of the Stable ABI.
An optional pointer to a NUL-terminated C string giving the docstring for this type object. This is exposed as the __doc__ attribute on the type and instances of the type.
Inheritance:
This field is not inherited by subtypes.
-
traverseproc PyTypeObject.tp_traverse¶
The corresponding slot ID Py_tp_traverse is part of the Stable ABI.
An optional pointer to a traversal function for the garbage collector. This is only used if the Py_TPFLAGS_HAVE_GC flag bit is set.
The signature is:int tp_traverse(PyObject *self, visitproc visit, void *arg);\nMore information about Python\u2019s garbage collection scheme can be found in section Supporting Cyclic Garbage Collection.\nThe\ntp_traverse\npointer is used by the garbage collector to detect reference cycles. A typical implementation of atp_traverse\nfunction simply callsPy_VISIT()\non each of the instance\u2019s members that are Python objects that the instance owns. For example, this is functionlocal_traverse()\nfrom the_thread\nextension module:static int local_traverse(PyObject *op, visitproc visit, void *arg) { localobject *self = (localobject *) op; Py_VISIT(self->args); Py_VISIT(self->kw); Py_VISIT(self->dict); return 0; }\nNote that\nPy_VISIT()\nis called only on those members that can participate in reference cycles. Although there is also aself->key\nmember, it can only beNULL\nor a Python string and therefore cannot be part of a reference cycle.On the other hand, even if you know a member can never be part of a cycle, as a debugging aid you may want to visit it anyway just so the\ngc\nmodule\u2019sget_referents()\nfunction will include it.Heap types (\nPy_TPFLAGS_HEAPTYPE\n) must visit their type with:Py_VISIT(Py_TYPE(self));\nIt is only needed since Python 3.9. To support Python 3.8 and older, this line must be conditional:\n#if PY_VERSION_HEX >= 0x03090000 Py_VISIT(Py_TYPE(self)); #endif\nIf the\nPy_TPFLAGS_MANAGED_DICT\nbit is set in thetp_flags\nfield, the traverse function must callPyObject_VisitManagedDict()\nlike this:PyObject_VisitManagedDict((PyObject*)self, visit, arg);\nWarning\nWhen implementing\ntp_traverse\n, only the members that the instance owns (by having strong references to them) must be visited. 
For instance, if an object supports weak references via thetp_weaklist\nslot, the pointer supporting the linked list (what tp_weaklist points to) must not be visited as the instance does not directly own the weak references to itself (the weakreference list is there to support the weak reference machinery, but the instance has no strong reference to the elements inside it, as they are allowed to be removed even if the instance is still alive).Warning\nThe traversal function must not have any side effects. It must not modify the reference counts of any Python objects nor create or destroy any Python objects.\nNote that\nPy_VISIT()\nrequires the visit and arg parameters tolocal_traverse()\nto have these specific names; don\u2019t name them just anything.Instances of heap-allocated types hold a reference to their type. Their traversal function must therefore either visit\nPy_TYPE(self)\n, or delegate this responsibility by callingtp_traverse\nof another heap-allocated type (such as a heap-allocated superclass). If they do not, the type object may not be garbage-collected.Note\nThe\ntp_traverse\nfunction can be called from any thread.Changed in version 3.9: Heap-allocated types are expected to visit\nPy_TYPE(self)\nintp_traverse\n. In earlier versions of Python, due to bug 40217, doing this may lead to crashes in subclasses.Inheritance:\nGroup:\nPy_TPFLAGS_HAVE_GC\n,tp_traverse\n,tp_clear\nThis field is inherited by subtypes together with\ntp_clear\nand thePy_TPFLAGS_HAVE_GC\nflag bit: the flag bit,tp_traverse\n, andtp_clear\nare all inherited from the base type if they are all zero in the subtype.\n-\ninquiry PyTypeObject.tp_clear\u00b6\nThe corresponding slot ID\nPy_tp_clear\nis part of the Stable ABI.An optional pointer to a clear function. The signature is:\nint tp_clear(PyObject *);\nThe purpose of this function is to break reference cycles that are causing a cyclic isolate so that the objects can be safely destroyed. 
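The cycle-breaking role of tp_clear described above can be observed from Python through the gc module. In this sketch (the Node class is hypothetical), a reference cycle keeps two unreachable objects alive until the collector clears them:

```python
import gc
import weakref

class Node:
    pass

a, b = Node(), Node()
a.other, b.other = b, a      # create a reference cycle (a cyclic isolate)
probe = weakref.ref(a)
del a, b                     # unreachable, but kept alive by the cycle
gc.collect()                 # the collector breaks the cycle (via tp_clear)
print(probe() is None)  # True
```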
A cleared object is a partially destroyed object; the object is not obligated to satisfy design invariants held during normal use.\ntp_clear\ndoes not need to delete references to objects that can\u2019t participate in reference cycles, such as Python strings or Python integers. However, it may be convenient to clear all references, and write the type\u2019stp_dealloc\nfunction to invoketp_clear\nto avoid code duplication. (Beware thattp_clear\nmight have already been called. Prefer calling idempotent functions likePy_CLEAR()\n.)Any non-trivial cleanup should be performed in\ntp_finalize\ninstead oftp_clear\n.Note\nIf\ntp_clear\nfails to break a reference cycle then the objects in the cyclic isolate may remain indefinitely uncollectable (\u201cleak\u201d). Seegc.garbage\n.Note\nReferents (direct and indirect) might have already been cleared; they are not guaranteed to be in a consistent state.\nNote\nThe\ntp_clear\nfunction can be called from any thread.Note\nAn object is not guaranteed to be automatically cleared before its destructor (\ntp_dealloc\n) is called.This function differs from the destructor (\ntp_dealloc\n) in the following ways:The purpose of clearing an object is to remove references to other objects that might participate in a reference cycle. The purpose of the destructor, on the other hand, is a superset: it must release all resources it owns, including references to objects that cannot participate in a reference cycle (e.g., integers) as well as the object\u2019s own memory (by calling\ntp_free\n).When\ntp_clear\nis called, other objects might still hold references to the object being cleared. Because of this,tp_clear\nmust not deallocate the object\u2019s own memory (tp_free\n). The destructor, on the other hand, is only called when no (strong) references exist, and as such, must safely destroy the object itself by deallocating it.tp_clear\nmight never be automatically called. 
An object\u2019s destructor, on the other hand, will be automatically called some time after the object becomes unreachable (i.e., either there are no references to the object or the object is a member of a cyclic isolate).\nNo guarantees are made about when, if, or how often Python automatically clears an object, except:\nPython will not automatically clear an object if it is reachable, i.e., there is a reference to it and it is not a member of a cyclic isolate.\nPython will not automatically clear an object if it has not been automatically finalized (see\ntp_finalize\n). (If the finalizer resurrected the object, the object may or may not be automatically finalized again before it is cleared.)If an object is a member of a cyclic isolate, Python will not automatically clear it if any member of the cyclic isolate has not yet been automatically finalized (\ntp_finalize\n).Python will not destroy an object until after any automatic calls to its\ntp_clear\nfunction have returned. This ensures that the act of breaking a reference cycle does not invalidate theself\npointer whiletp_clear\nis still executing.Python will not automatically call\ntp_clear\nmultiple times concurrently.\nCPython currently only automatically clears objects as needed to break reference cycles in a cyclic isolate, but future versions might clear objects regularly before their destruction.\nTaken together, all\ntp_clear\nfunctions in the system must combine to break all reference cycles. This is subtle, and if in any doubt supply atp_clear\nfunction. For example, the tuple type does not implement atp_clear\nfunction, because it\u2019s possible to prove that no reference cycle can be composed entirely of tuples. Therefore thetp_clear\nfunctions of other types are responsible for breaking any cycle containing a tuple. 
This isn\u2019t immediately obvious, and there\u2019s rarely a good reason to avoid implementingtp_clear\n.Implementations of\ntp_clear\nshould drop the instance\u2019s references to those of its members that may be Python objects, and set its pointers to those members toNULL\n, as in the following example:static int local_clear(PyObject *op) { localobject *self = (localobject *) op; Py_CLEAR(self->key); Py_CLEAR(self->args); Py_CLEAR(self->kw); Py_CLEAR(self->dict); return 0; }\nThe\nPy_CLEAR()\nmacro should be used, because clearing references is delicate: the reference to the contained object must not be released (viaPy_DECREF()\n) until after the pointer to the contained object is set toNULL\n. This is because releasing the reference may cause the contained object to become trash, triggering a chain of reclamation activity that may include invoking arbitrary Python code (due to finalizers, or weakref callbacks, associated with the contained object). If it\u2019s possible for such code to reference self again, it\u2019s important that the pointer to the contained object beNULL\nat that time, so that self knows the contained object can no longer be used. 
ThePy_CLEAR()\nmacro performs the operations in a safe order.If the\nPy_TPFLAGS_MANAGED_DICT\nbit is set in thetp_flags\nfield, the clear function must callPyObject_ClearManagedDict()\nlike this:PyObject_ClearManagedDict((PyObject*)self);\nMore information about Python\u2019s garbage collection scheme can be found in section Supporting Cyclic Garbage Collection.\nInheritance:\nGroup:\nPy_TPFLAGS_HAVE_GC\n,tp_traverse\n,tp_clear\nThis field is inherited by subtypes together with\ntp_traverse\nand thePy_TPFLAGS_HAVE_GC\nflag bit: the flag bit,tp_traverse\n, andtp_clear\nare all inherited from the base type if they are all zero in the subtype.See also\nObject Life Cycle for details about how this slot relates to other slots.\n-\nrichcmpfunc PyTypeObject.tp_richcompare\u00b6\nThe corresponding slot ID\nPy_tp_richcompare\nis part of the Stable ABI.An optional pointer to the rich comparison function, whose signature is:\nPyObject *tp_richcompare(PyObject *self, PyObject *other, int op);\nThe first parameter is guaranteed to be an instance of the type that is defined by\nPyTypeObject\n.The function should return the result of the comparison (usually\nPy_True\norPy_False\n). If the comparison is undefined, it must returnPy_NotImplemented\n, if another error occurred it must returnNULL\nand set an exception condition.The following constants are defined to be used as the third argument for\ntp_richcompare\nand forPyObject_RichCompare()\n:Constant\nComparison\n-\nPy_LT\u00b6\n<\n-\nPy_LE\u00b6\n<=\n-\nPy_EQ\u00b6\n==\n-\nPy_NE\u00b6\n!=\n-\nPy_GT\u00b6\n>\n-\nPy_GE\u00b6\n>=\nThe following macro is defined to ease writing rich comparison functions:\n-\nPy_RETURN_RICHCOMPARE(VAL_A, VAL_B, op)\u00b6\nReturn\nPy_True\norPy_False\nfrom the function, depending on the result of a comparison. VAL_A and VAL_B must be orderable by C comparison operators (for example, they may be C ints or floats). 
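At the Python level, returning Py_NotImplemented from tp_richcompare corresponds to returning NotImplemented from a comparison method such as __eq__. A hypothetical sketch:

```python
class Amount:
    """Hypothetical value type; __eq__ mirrors a tp_richcompare slot."""
    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        if not isinstance(other, Amount):
            # The C-level equivalent is returning Py_NotImplemented,
            # which lets the other operand's comparison be tried.
            return NotImplemented
        return self.value == other.value

print(Amount(3) == Amount(3))  # True
print(Amount(3) == "3")        # False (both sides decline; falls back to identity)
```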
The third argument specifies the requested operation, as for PyObject_RichCompare().
The returned value is a new strong reference.
On error, sets an exception and returns NULL from the function.
Added in version 3.7.
Inheritance:
Group: tp_hash, tp_richcompare
This field is inherited by subtypes together with tp_hash: a subtype inherits tp_richcompare and tp_hash when the subtype’s tp_richcompare and tp_hash are both NULL.
Default:
PyBaseObject_Type provides a tp_richcompare implementation, which may be inherited. However, if only tp_hash is defined, not even the inherited function is used and instances of the type will not be able to participate in any comparisons.
-
Py_ssize_t PyTypeObject.tp_weaklistoffset¶
While this field is still supported, Py_TPFLAGS_MANAGED_WEAKREF should be used instead, if at all possible.
If the instances of this type are weakly referenceable, this field is greater than zero and contains the offset in the instance structure of the weak reference list head (ignoring the GC header, if present); this offset is used by PyObject_ClearWeakRefs() and the PyWeakref_* functions. The instance structure needs to include a field of type PyObject* which is initialized to NULL.
Do not confuse this field with tp_weaklist; that is the list head for weak references to the type object itself.
It is an error to set both the Py_TPFLAGS_MANAGED_WEAKREF bit and tp_weaklistoffset.
Inheritance:
This field is inherited by subtypes, but see the rules listed below. A subtype may override this offset; this means that the subtype uses a different weak reference list head than the base type.
Since the list head is always found via\ntp_weaklistoffset\n, this should not be a problem.Default:\nIf the\nPy_TPFLAGS_MANAGED_WEAKREF\nbit is set in thetp_flags\nfield, thentp_weaklistoffset\nwill be set to a negative value, to indicate that it is unsafe to use this field.\n-\ngetiterfunc PyTypeObject.tp_iter\u00b6\nThe corresponding slot ID\nPy_tp_iter\nis part of the Stable ABI.An optional pointer to a function that returns an iterator for the object. Its presence normally signals that the instances of this type are iterable (although sequences may be iterable without this function).\nThis function has the same signature as\nPyObject_GetIter()\n:PyObject *tp_iter(PyObject *self);\nInheritance:\nThis field is inherited by subtypes.\n-\niternextfunc PyTypeObject.tp_iternext\u00b6\nThe corresponding slot ID\nPy_tp_iternext\nis part of the Stable ABI.An optional pointer to a function that returns the next item in an iterator. The signature is:\nPyObject *tp_iternext(PyObject *self);\nWhen the iterator is exhausted, it must return\nNULL\n; aStopIteration\nexception may or may not be set. When another error occurs, it must returnNULL\ntoo. 
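The tp_iter and tp_iternext slots correspond to __iter__ and __next__ at the Python level. A minimal sketch of an iterator type (the Countdown class is hypothetical):

```python
class Countdown:
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        # tp_iter for an iterator type: return the instance itself,
        # not a new iterator instance.
        return self

    def __next__(self):
        if self.n <= 0:
            # The C slot signals exhaustion by returning NULL
            # (a StopIteration exception may or may not be set);
            # in Python, StopIteration is raised.
            raise StopIteration
        self.n -= 1
        return self.n + 1

print(list(Countdown(3)))  # [3, 2, 1]
```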
Its presence signals that the instances of this type are iterators.Iterator types should also define the\ntp_iter\nfunction, and that function should return the iterator instance itself (not a new iterator instance).This function has the same signature as\nPyIter_Next()\n.Inheritance:\nThis field is inherited by subtypes.\n-\nstruct PyMethodDef *PyTypeObject.tp_methods\u00b6\nThe corresponding slot ID\nPy_tp_methods\nis part of the Stable ABI.An optional pointer to a static\nNULL\n-terminated array ofPyMethodDef\nstructures, declaring regular methods of this type.For each entry in the array, an entry is added to the type\u2019s dictionary (see\ntp_dict\nbelow) containing a method descriptor.Inheritance:\nThis field is not inherited by subtypes (methods are inherited through a different mechanism).\n-\nstruct PyMemberDef *PyTypeObject.tp_members\u00b6\nThe corresponding slot ID\nPy_tp_members\nis part of the Stable ABI.An optional pointer to a static\nNULL\n-terminated array ofPyMemberDef\nstructures, declaring regular data members (fields or slots) of instances of this type.For each entry in the array, an entry is added to the type\u2019s dictionary (see\ntp_dict\nbelow) containing a member descriptor.Inheritance:\nThis field is not inherited by subtypes (members are inherited through a different mechanism).\n-\nstruct PyGetSetDef *PyTypeObject.tp_getset\u00b6\nThe corresponding slot ID\nPy_tp_getset\nis part of the Stable ABI.An optional pointer to a static\nNULL\n-terminated array ofPyGetSetDef\nstructures, declaring computed attributes of instances of this type.For each entry in the array, an entry is added to the type\u2019s dictionary (see\ntp_dict\nbelow) containing a getset descriptor.Inheritance:\nThis field is not inherited by subtypes (computed attributes are inherited through a different mechanism).\n-\nPyTypeObject *PyTypeObject.tp_base\u00b6\nThe corresponding slot ID\nPy_tp_base\nis part of the Stable ABI.An optional pointer to a base type from which 
type properties are inherited. At this level, only single inheritance is supported; multiple inheritance requires dynamically creating a type object by calling the metatype.
Note
Slot initialization is subject to the rules of initializing globals. C99 requires the initializers to be “address constants”. Function designators like PyType_GenericNew(), with implicit conversion to a pointer, are valid C99 address constants.
However, the unary ‘&’ operator applied to a non-static variable like PyBaseObject_Type is not required to produce an address constant. Compilers may support this (gcc does), MSVC does not. Both compilers are strictly standard conforming in this particular behavior.
Consequently, tp_base should be set in the extension module’s init function.
Inheritance:
This field is not inherited by subtypes (obviously).
Default:
This field defaults to &PyBaseObject_Type (which to Python programmers is known as the type object).
-
PyObject *PyTypeObject.tp_dict¶
The type’s dictionary is stored here by PyType_Ready().
This field should normally be initialized to NULL before PyType_Ready is called; it may also be initialized to a dictionary containing initial attributes for the type. Once PyType_Ready() has initialized the type, extra attributes for the type may be added to this dictionary only if they don’t correspond to overloaded operations (like __add__()). Once initialization for the type has finished, this field should be treated as read-only.
Some types may not store their dictionary in this slot. Use PyType_GetDict() to retrieve the dictionary for an arbitrary type.
Changed in version 3.12: Internals detail: For static builtin types, this is always NULL. Instead, the dict for such types is stored on PyInterpreterState.
UsePyType_GetDict()\nto get the dict for an arbitrary type.Inheritance:\nThis field is not inherited by subtypes (though the attributes defined in here are inherited through a different mechanism).\nDefault:\nIf this field is\nNULL\n,PyType_Ready()\nwill assign a new dictionary to it.Warning\nIt is not safe to use\nPyDict_SetItem()\non or otherwise modifytp_dict\nwith the dictionary C-API.\n-\ndescrgetfunc PyTypeObject.tp_descr_get\u00b6\nThe corresponding slot ID\nPy_tp_descr_get\nis part of the Stable ABI.An optional pointer to a \u201cdescriptor get\u201d function.\nThe function signature is:\nPyObject * tp_descr_get(PyObject *self, PyObject *obj, PyObject *type);\nInheritance:\nThis field is inherited by subtypes.\n-\ndescrsetfunc PyTypeObject.tp_descr_set\u00b6\nThe corresponding slot ID\nPy_tp_descr_set\nis part of the Stable ABI.An optional pointer to a function for setting and deleting a descriptor\u2019s value.\nThe function signature is:\nint tp_descr_set(PyObject *self, PyObject *obj, PyObject *value);\nThe value argument is set to\nNULL\nto delete the value.Inheritance:\nThis field is inherited by subtypes.\n-\nPy_ssize_t PyTypeObject.tp_dictoffset\u00b6\nWhile this field is still supported,\nPy_TPFLAGS_MANAGED_DICT\nshould be used instead, if at all possible.If the instances of this type have a dictionary containing instance variables, this field is non-zero and contains the offset in the instances of the type of the instance variable dictionary; this offset is used by\nPyObject_GenericGetAttr()\n.Do not confuse this field with\ntp_dict\n; that is the dictionary for attributes of the type object itself.The value specifies the offset of the dictionary from the start of the instance structure.\nThe\ntp_dictoffset\nshould be regarded as write-only. To get the pointer to the dictionary callPyObject_GenericGetDict()\n. 
CallingPyObject_GenericGetDict()\nmay need to allocate memory for the dictionary, so it may be more efficient to callPyObject_GetAttr()\nwhen accessing an attribute on the object.It is an error to set both the\nPy_TPFLAGS_MANAGED_DICT\nbit andtp_dictoffset\n.Inheritance:\nThis field is inherited by subtypes. A subtype should not override this offset; doing so could be unsafe, if C code tries to access the dictionary at the previous offset. To properly support inheritance, use\nPy_TPFLAGS_MANAGED_DICT\n.Default:\nThis slot has no default. For static types, if the field is\nNULL\nthen no__dict__\ngets created for instances.If the\nPy_TPFLAGS_MANAGED_DICT\nbit is set in thetp_flags\nfield, thentp_dictoffset\nwill be set to-1\n, to indicate that it is unsafe to use this field.\n-\ninitproc PyTypeObject.tp_init\u00b6\nThe corresponding slot ID\nPy_tp_init\nis part of the Stable ABI.An optional pointer to an instance initialization function.\nThis function corresponds to the\n__init__()\nmethod of classes. Like__init__()\n, it is possible to create an instance without calling__init__()\n, and it is possible to reinitialize an instance by calling its__init__()\nmethod again.The function signature is:\nint tp_init(PyObject *self, PyObject *args, PyObject *kwds);\nThe self argument is the instance to be initialized; the args and kwds arguments represent positional and keyword arguments of the call to\n__init__()\n.The\ntp_init\nfunction, if notNULL\n, is called when an instance is created normally by calling its type, after the type\u2019stp_new\nfunction has returned an instance of the type. 
If thetp_new\nfunction returns an instance of some other type that is not a subtype of the original type, notp_init\nfunction is called; iftp_new\nreturns an instance of a subtype of the original type, the subtype\u2019stp_init\nis called.Returns\n0\non success,-1\nand sets an exception on error.Inheritance:\nThis field is inherited by subtypes.\nDefault:\nFor static types this field does not have a default.\n-\nallocfunc PyTypeObject.tp_alloc\u00b6\nThe corresponding slot ID\nPy_tp_alloc\nis part of the Stable ABI.An optional pointer to an instance allocation function.\nThe function signature is:\nPyObject *tp_alloc(PyTypeObject *self, Py_ssize_t nitems);\nInheritance:\nStatic subtypes inherit this slot, which will be\nPyType_GenericAlloc()\nif inherited fromobject\n.Heap subtypes do not inherit this slot.\nDefault:\nFor heap subtypes, this field is always set to\nPyType_GenericAlloc()\n.For static subtypes, this slot is inherited (see above).\n-\nnewfunc PyTypeObject.tp_new\u00b6\nThe corresponding slot ID\nPy_tp_new\nis part of the Stable ABI.An optional pointer to an instance creation function.\nThe function signature is:\nPyObject *tp_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds);\nThe subtype argument is the type of the object being created; the args and kwds arguments represent positional and keyword arguments of the call to the type. Note that subtype doesn\u2019t have to equal the type whose\ntp_new\nfunction is called; it may be a subtype of that type (but not an unrelated type).The\ntp_new\nfunction should callsubtype->tp_alloc(subtype, nitems)\nto allocate space for the object, and then do only as much further initialization as is absolutely necessary. Initialization that can safely be ignored or repeated should be placed in thetp_init\nhandler. 
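Since __new__() and __init__() map onto tp_new and tp_init, the calling rules above can be observed directly from Python. A small sketch (class names are illustrative, not from the source):

```python
class Base:
    def __init__(self, *args):
        # tp_init analogue: runs only when __new__ (tp_new) returned
        # an instance of this class or of one of its subclasses.
        self.initialized = True

class Sub(Base):
    pass

class Weird:
    def __new__(cls):
        # tp_new returns an instance of an unrelated type,
        # so __init__ (tp_init) is never called.
        return 42

    def __init__(self):
        raise AssertionError("never reached")

print(Base().initialized)   # True
print(Weird())              # 42 -- __init__ was skipped
```

Calling Sub() likewise runs Sub's (here inherited) __init__ because __new__ returned an instance of a subtype of the type being called.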
A good rule of thumb is that for immutable types, all initialization should take place intp_new\n, while for mutable types, most initialization should be deferred totp_init\n.Set the\nPy_TPFLAGS_DISALLOW_INSTANTIATION\nflag to disallow creating instances of the type in Python.Inheritance:\nThis field is inherited by subtypes, except it is not inherited by static types whose\ntp_base\nisNULL\nor&PyBaseObject_Type\n.Default:\nFor static types this field has no default. This means if the slot is defined as\nNULL\n, the type cannot be called to create new instances; presumably there is some other way to create instances, like a factory function.\n-\nfreefunc PyTypeObject.tp_free\u00b6\nThe corresponding slot ID\nPy_tp_free\nis part of the Stable ABI.An optional pointer to an instance deallocation function. Its signature is:\nvoid tp_free(void *self);\nThis function must free the memory allocated by\ntp_alloc\n.Inheritance:\nStatic subtypes inherit this slot, which will be\nPyObject_Free()\nif inherited fromobject\n. Exception: If the type supports garbage collection (i.e., thePy_TPFLAGS_HAVE_GC\nflag is set intp_flags\n) and it would inheritPyObject_Free()\n, then this slot is not inherited but instead defaults toPyObject_GC_Del()\n.Heap subtypes do not inherit this slot.\nDefault:\nFor heap subtypes, this slot defaults to a deallocator suitable to match\nPyType_GenericAlloc()\nand the value of thePy_TPFLAGS_HAVE_GC\nflag.For static subtypes, this slot is inherited (see above).\n-\ninquiry PyTypeObject.tp_is_gc\u00b6\nThe corresponding slot ID\nPy_tp_is_gc\nis part of the Stable ABI.An optional pointer to a function called by the garbage collector.\nThe garbage collector needs to know whether a particular object is collectible or not. Normally, it is sufficient to look at the object\u2019s type\u2019s\ntp_flags\nfield, and check thePy_TPFLAGS_HAVE_GC\nflag bit. 
But some types have a mixture of statically and dynamically allocated instances, and the statically allocated instances are not collectible. Such types should define this function; it should return1\nfor a collectible instance, and0\nfor a non-collectible instance. The signature is:int tp_is_gc(PyObject *self);\n(The only example of this is types themselves. The metatype,\nPyType_Type\n, defines this function to distinguish between statically and dynamically allocated types.)Inheritance:\nThis field is inherited by subtypes.\nDefault:\nThis slot has no default. If this field is\nNULL\n,Py_TPFLAGS_HAVE_GC\nis used as the functional equivalent.\n-\nPyObject *PyTypeObject.tp_bases\u00b6\nThe corresponding slot ID\nPy_tp_bases\nis part of the Stable ABI.Tuple of base types.\nThis field should be set to\nNULL\nand treated as read-only. Python will fill it in when the type isinitialized\n.For dynamically created classes, the\nPy_tp_bases\nslot\ncan be used instead of the bases argument ofPyType_FromSpecWithBases()\n. The argument form is preferred.Warning\nMultiple inheritance does not work well for statically defined types. If you set\ntp_bases\nto a tuple, Python will not raise an error, but some slots will only be inherited from the first base.Inheritance:\nThis field is not inherited.\n-\nPyObject *PyTypeObject.tp_mro\u00b6\nTuple containing the expanded set of base types, starting with the type itself and ending with\nobject\n, in Method Resolution Order.This field should be set to\nNULL\nand treated as read-only. Python will fill it in when the type isinitialized\n.Inheritance:\nThis field is not inherited; it is calculated fresh by\nPyType_Ready()\n.\n-\nPyObject *PyTypeObject.tp_cache\u00b6\nUnused. Internal use only.\nInheritance:\nThis field is not inherited.\n-\nvoid *PyTypeObject.tp_subclasses\u00b6\nA collection of subclasses. Internal use only. 
May be an invalid pointer.\nTo get a list of subclasses, call the Python method\n__subclasses__()\n.Changed in version 3.12: For some types, this field does not hold a valid PyObject*. The type was changed to void* to indicate this.\nInheritance:\nThis field is not inherited.\n-\nPyObject *PyTypeObject.tp_weaklist\u00b6\nWeak reference list head, for weak references to this type object. Not inherited. Internal use only.\nChanged in version 3.12: Internals detail: For the static builtin types this is always\nNULL\n, even if weakrefs are added. Instead, the weakrefs for each are stored onPyInterpreterState\n. Use the public C-API or the internal_PyObject_GET_WEAKREFS_LISTPTR()\nmacro to avoid the distinction.Inheritance:\nThis field is not inherited.\n-\ndestructor PyTypeObject.tp_del\u00b6\nThe corresponding slot ID\nPy_tp_del\nis part of the Stable ABI.This field is deprecated. Use\ntp_finalize\ninstead.\n-\nunsigned int PyTypeObject.tp_version_tag\u00b6\nUsed to index into the method cache. Internal use only.\nInheritance:\nThis field is not inherited.\n-\ndestructor PyTypeObject.tp_finalize\u00b6\nThe corresponding slot ID\nPy_tp_finalize\nis part of the Stable ABI since version 3.5.An optional pointer to an instance finalization function. This is the C implementation of the\n__del__()\nspecial method. Its signature is:void tp_finalize(PyObject *self);\nThe primary purpose of finalization is to perform any non-trivial cleanup that must be performed before the object is destroyed, while the object and any other objects it directly or indirectly references are still in a consistent state. The finalizer is allowed to execute arbitrary Python code.\nBefore Python automatically finalizes an object, some of the object\u2019s direct or indirect referents might have themselves been automatically finalized. 
However, none of the referents will have been automatically cleared (\ntp_clear\n) yet.Other non-finalized objects might still be using a finalized object, so the finalizer must leave the object in a sane state (e.g., invariants are still met).\nNote\nAfter Python automatically finalizes an object, Python might start automatically clearing (\ntp_clear\n) the object and its referents (direct and indirect). Cleared objects are not guaranteed to be in a consistent state; a finalized object must be able to tolerate cleared referents.Note\nAn object is not guaranteed to be automatically finalized before its destructor (\ntp_dealloc\n) is called. It is recommended to callPyObject_CallFinalizerFromDealloc()\nat the beginning oftp_dealloc\nto guarantee that the object is always finalized before destruction.Note\nThe\ntp_finalize\nfunction can be called from any thread, although the GIL will be held.Note\nThe\ntp_finalize\nfunction can be called during shutdown, after some global variables have been deleted. See the documentation of the__del__()\nmethod for details.When Python finalizes an object, it behaves like the following algorithm:\nPython might mark the object as finalized. Currently, Python always marks objects whose type supports garbage collection (i.e., the\nPy_TPFLAGS_HAVE_GC\nflag is set intp_flags\n) and never marks other types of objects; this might change in a future version.If the object is not marked as finalized and its\ntp_finalize\nfinalizer function is non-NULL\n, the finalizer function is called.If the finalizer function was called and the finalizer made the object reachable (i.e., there is a reference to the object and it is not a member of a cyclic isolate), then the finalizer is said to have resurrected the object. 
It is unspecified whether the finalizer can also resurrect the object by adding a new reference to the object that does not make it reachable, i.e., the object is (still) a member of a cyclic isolate.\nIf the finalizer resurrected the object, the object\u2019s pending destruction is canceled and the object\u2019s finalized mark might be removed if present. Currently, Python never removes the finalized mark; this might change in a future version.\nAutomatic finalization refers to any finalization performed by Python except via calls to\nPyObject_CallFinalizer()\norPyObject_CallFinalizerFromDealloc()\n. No guarantees are made about when, if, or how often an object is automatically finalized, except:Python will not automatically finalize an object if it is reachable, i.e., there is a reference to it and it is not a member of a cyclic isolate.\nPython will not automatically finalize an object if finalizing it would not mark the object as finalized. Currently, this applies to objects whose type does not support garbage collection, i.e., the\nPy_TPFLAGS_HAVE_GC\nflag is not set. 
Such objects can still be manually finalized by callingPyObject_CallFinalizer()\norPyObject_CallFinalizerFromDealloc()\n.Python will not automatically finalize any two members of a cyclic isolate concurrently.\nPython will not automatically finalize an object after it has automatically cleared (\ntp_clear\n) the object.If an object is a member of a cyclic isolate, Python will not automatically finalize it after automatically clearing (see\ntp_clear\n) any other member.Python will automatically finalize every member of a cyclic isolate before it automatically clears (see\ntp_clear\n) any of them.If Python is going to automatically clear an object (\ntp_clear\n), it will automatically finalize the object first.\nPython currently only automatically finalizes objects that are members of a cyclic isolate, but future versions might finalize objects regularly before their destruction.\nTo manually finalize an object, do not call this function directly; call\nPyObject_CallFinalizer()\norPyObject_CallFinalizerFromDealloc()\ninstead.tp_finalize\nshould leave the current exception status unchanged. The recommended way to write a non-trivial finalizer is to back up the exception at the beginning by callingPyErr_GetRaisedException()\nand restore the exception at the end by callingPyErr_SetRaisedException()\n. If an exception is encountered in the middle of the finalizer, log and clear it withPyErr_WriteUnraisable()\norPyErr_FormatUnraisable()\n. For example:\nstatic void\nfoo_finalize(PyObject *self)\n{\n    // Save the current exception, if any.\n    PyObject *exc = PyErr_GetRaisedException();\n\n    // ...\n\n    if (do_something_that_might_raise() != success_indicator) {\n        PyErr_WriteUnraisable(self);\n        goto done;\n    }\n\ndone:\n    // Restore the saved exception. This silently discards any exception\n    // raised above, so be sure to call PyErr_WriteUnraisable first if\n    // necessary.\n    
PyErr_SetRaisedException(exc); }\nInheritance:\nThis field is inherited by subtypes.\nAdded in version 3.4.\nChanged in version 3.8: Before version 3.8 it was necessary to set the\nPy_TPFLAGS_HAVE_FINALIZE\nflags bit in order for this field to be used. This is no longer required.See also\nPEP 442: \u201cSafe object finalization\u201d\nObject Life Cycle for details about how this slot relates to other slots.\n-\nvectorcallfunc PyTypeObject.tp_vectorcall\u00b6\nThe corresponding slot ID\nPy_tp_vectorcall\nis part of the Stable ABI since version 3.14.A vectorcall function to use for calls of this type object (rather than instances). In other words,\ntp_vectorcall\ncan be used to optimizetype.__call__\n, which typically returns a new instance of type.As with any vectorcall function, if\ntp_vectorcall\nisNULL\n, the tp_call protocol (Py_TYPE(type)->tp_call\n) is used instead.Note\nThe vectorcall protocol requires that the vectorcall function has the same behavior as the corresponding\ntp_call\n. This means thattype->tp_vectorcall\nmust match the behavior ofPy_TYPE(type)->tp_call\n.Specifically, if type uses the default metaclass,\ntype->tp_vectorcall\nmust behave the same as PyType_Type->tp_call, which:calls\ntype->tp_new\n,if the result is a subclass of type, calls\ntype->tp_init\non the result oftp_new\n, andreturns the result of\ntp_new\n.\nTypically,\ntp_vectorcall\nis overridden to optimize this process for specifictp_new\nandtp_init\n. When doing this for user-subclassable types, note that both can be overridden (using__new__()\nand__init__()\n, respectively).Inheritance:\nThis field is never inherited.\nAdded in version 3.9: (the field exists since 3.8 but it\u2019s only used since 3.9)\n-\nunsigned char PyTypeObject.tp_watched\u00b6\nInternal. 
Do not use.\nAdded in version 3.12.\nStatic Types\u00b6\nTraditionally, types defined in C code are static, that is,\na static PyTypeObject\nstructure is defined directly in code\nand initialized using PyType_Ready()\n.\nThis results in types that are limited relative to types defined in Python:\nStatic types are limited to one base, i.e. they cannot use multiple inheritance.\nStatic type objects (but not necessarily their instances) are immutable. It is not possible to add or modify the type object\u2019s attributes from Python.\nStatic type objects are shared across sub-interpreters, so they should not include any subinterpreter-specific state.\nAlso, since PyTypeObject\nis only part of the Limited API as an opaque struct, any extension modules using static types must be\ncompiled for a specific Python minor version.\nHeap Types\u00b6\nAn alternative to static types is heap-allocated types,\nor heap types for short, which correspond closely to classes created by\nPython\u2019s class\nstatement. Heap types have the Py_TPFLAGS_HEAPTYPE\nflag set.\nThis is done by filling a PyType_Spec\nstructure and calling\nPyType_FromSpec()\n, PyType_FromSpecWithBases()\n,\nPyType_FromModuleAndSpec()\n, or PyType_FromMetaclass()\n.\nNumber Object Structures\u00b6\n-\ntype PyNumberMethods\u00b6\nThis structure holds pointers to the functions which an object uses to implement the number protocol. 
Each function is used by the function of similar name documented in the Number Protocol section.\nHere is the structure definition:\ntypedef struct {\n    binaryfunc nb_add;\n    binaryfunc nb_subtract;\n    binaryfunc nb_multiply;\n    binaryfunc nb_remainder;\n    binaryfunc nb_divmod;\n    ternaryfunc nb_power;\n    unaryfunc nb_negative;\n    unaryfunc nb_positive;\n    unaryfunc nb_absolute;\n    inquiry nb_bool;\n    unaryfunc nb_invert;\n    binaryfunc nb_lshift;\n    binaryfunc nb_rshift;\n    binaryfunc nb_and;\n    binaryfunc nb_xor;\n    binaryfunc nb_or;\n    unaryfunc nb_int;\n    void *nb_reserved;\n    unaryfunc nb_float;\n    binaryfunc nb_inplace_add;\n    binaryfunc nb_inplace_subtract;\n    binaryfunc nb_inplace_multiply;\n    binaryfunc nb_inplace_remainder;\n    ternaryfunc nb_inplace_power;\n    binaryfunc nb_inplace_lshift;\n    binaryfunc nb_inplace_rshift;\n    binaryfunc nb_inplace_and;\n    binaryfunc nb_inplace_xor;\n    binaryfunc nb_inplace_or;\n    binaryfunc nb_floor_divide;\n    binaryfunc nb_true_divide;\n    binaryfunc nb_inplace_floor_divide;\n    binaryfunc nb_inplace_true_divide;\n    unaryfunc nb_index;\n    binaryfunc nb_matrix_multiply;\n    binaryfunc nb_inplace_matrix_multiply;\n} PyNumberMethods;\nNote\nBinary and ternary functions must check the type of all their operands, and implement the necessary conversions (at least one of the operands is an instance of the defined type). If the operation is not defined for the given operands, binary and ternary functions must return\nPy_NotImplemented\n; if another error occurred, they must returnNULL\nand set an exception.Note\nThe\nnb_reserved\nfield should always beNULL\n. 
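The Py_NotImplemented rule in the note above has the same shape as Python-level operator dispatch: a binary method that cannot handle its operand returns NotImplemented (rather than raising), so the interpreter can try the reflected operation on the other operand. An illustrative sketch (Meters is a made-up class):

```python
class Meters:
    def __init__(self, n):
        self.n = n

    def __add__(self, other):        # nb_add at the C level
        if isinstance(other, Meters):
            return Meters(self.n + other.n)
        # Not our type: signal "not implemented" instead of raising,
        # so Python can go on to try other.__radd__(self).
        return NotImplemented

    def __radd__(self, other):       # reflected addition
        if isinstance(other, int):   # accept plain ints on the left
            return Meters(other + self.n)
        return NotImplemented

print((Meters(2) + Meters(3)).n)   # 5
print((10 + Meters(1)).n)          # 11 -- int.__add__ returned
                                   #       NotImplemented first
```

Only when both sides return NotImplemented does the interpreter raise TypeError, which is why a slot must never raise merely because it does not recognise the other operand's type.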
It was previously callednb_long\n, and was renamed in Python 3.0.1.\n-\nbinaryfunc PyNumberMethods.nb_add\u00b6\nThe corresponding slot ID\nPy_nb_add\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_subtract\u00b6\nThe corresponding slot ID\nPy_nb_subtract\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_multiply\u00b6\nThe corresponding slot ID\nPy_nb_multiply\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_remainder\u00b6\nThe corresponding slot ID\nPy_nb_remainder\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_divmod\u00b6\nThe corresponding slot ID\nPy_nb_divmod\nis part of the Stable ABI.\n-\nternaryfunc PyNumberMethods.nb_power\u00b6\nThe corresponding slot ID\nPy_nb_power\nis part of the Stable ABI.\n-\nunaryfunc PyNumberMethods.nb_negative\u00b6\nThe corresponding slot ID\nPy_nb_negative\nis part of the Stable ABI.\n-\nunaryfunc PyNumberMethods.nb_positive\u00b6\nThe corresponding slot ID\nPy_nb_positive\nis part of the Stable ABI.\n-\nunaryfunc PyNumberMethods.nb_absolute\u00b6\nThe corresponding slot ID\nPy_nb_absolute\nis part of the Stable ABI.\n-\ninquiry PyNumberMethods.nb_bool\u00b6\nThe corresponding slot ID\nPy_nb_bool\nis part of the Stable ABI.\n-\nunaryfunc PyNumberMethods.nb_invert\u00b6\nThe corresponding slot ID\nPy_nb_invert\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_lshift\u00b6\nThe corresponding slot ID\nPy_nb_lshift\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_rshift\u00b6\nThe corresponding slot ID\nPy_nb_rshift\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_and\u00b6\nThe corresponding slot ID\nPy_nb_and\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_xor\u00b6\nThe corresponding slot ID\nPy_nb_xor\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_or\u00b6\nThe corresponding slot ID\nPy_nb_or\nis part of the Stable ABI.\n-\nunaryfunc PyNumberMethods.nb_int\u00b6\nThe corresponding slot ID\nPy_nb_int\nis 
part of the Stable ABI.\n-\nvoid *PyNumberMethods.nb_reserved\u00b6\n-\nunaryfunc PyNumberMethods.nb_float\u00b6\nThe corresponding slot ID\nPy_nb_float\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_inplace_add\u00b6\nThe corresponding slot ID\nPy_nb_inplace_add\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_inplace_subtract\u00b6\nThe corresponding slot ID\nPy_nb_inplace_subtract\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_inplace_multiply\u00b6\nThe corresponding slot ID\nPy_nb_inplace_multiply\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_inplace_remainder\u00b6\nThe corresponding slot ID\nPy_nb_inplace_remainder\nis part of the Stable ABI.\n-\nternaryfunc PyNumberMethods.nb_inplace_power\u00b6\nThe corresponding slot ID\nPy_nb_inplace_power\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_inplace_lshift\u00b6\nThe corresponding slot ID\nPy_nb_inplace_lshift\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_inplace_rshift\u00b6\nThe corresponding slot ID\nPy_nb_inplace_rshift\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_inplace_and\u00b6\nThe corresponding slot ID\nPy_nb_inplace_and\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_inplace_xor\u00b6\nThe corresponding slot ID\nPy_nb_inplace_xor\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_inplace_or\u00b6\nThe corresponding slot ID\nPy_nb_inplace_or\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_floor_divide\u00b6\nThe corresponding slot ID\nPy_nb_floor_divide\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_true_divide\u00b6\nThe corresponding slot ID\nPy_nb_true_divide\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_inplace_floor_divide\u00b6\nThe corresponding slot ID\nPy_nb_inplace_floor_divide\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_inplace_true_divide\u00b6\nThe corresponding slot 
ID\nPy_nb_inplace_true_divide\nis part of the Stable ABI.\n-\nunaryfunc PyNumberMethods.nb_index\u00b6\nThe corresponding slot ID\nPy_nb_index\nis part of the Stable ABI.\n-\nbinaryfunc PyNumberMethods.nb_matrix_multiply\u00b6\nThe corresponding slot ID\nPy_nb_matrix_multiply\nis part of the Stable ABI since version 3.5.\n-\nbinaryfunc PyNumberMethods.nb_inplace_matrix_multiply\u00b6\nThe corresponding slot ID\nPy_nb_inplace_matrix_multiply\nis part of the Stable ABI since version 3.5.\nMapping Object Structures\u00b6\n-\ntype PyMappingMethods\u00b6\nThis structure holds pointers to the functions which an object uses to implement the mapping protocol. It has three members:\n-\nlenfunc PyMappingMethods.mp_length\u00b6\nThe corresponding slot ID\nPy_mp_length\nis part of the Stable ABI.This function is used by\nPyMapping_Size()\nandPyObject_Size()\n, and has the same signature. This slot may be set toNULL\nif the object has no defined length.\n-\nbinaryfunc PyMappingMethods.mp_subscript\u00b6\nThe corresponding slot ID\nPy_mp_subscript\nis part of the Stable ABI.This function is used by\nPyObject_GetItem()\nandPySequence_GetSlice()\n, and has the same signature asPyObject_GetItem()\n. This slot must be filled for thePyMapping_Check()\nfunction to return1\n, it can beNULL\notherwise.\n-\nobjobjargproc PyMappingMethods.mp_ass_subscript\u00b6\nThe corresponding slot ID\nPy_mp_ass_subscript\nis part of the Stable ABI.This function is used by\nPyObject_SetItem()\n,PyObject_DelItem()\n,PySequence_SetSlice()\nandPySequence_DelSlice()\n. It has the same signature asPyObject_SetItem()\n, but v can also be set toNULL\nto delete an item. 
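At the Python level, the three slots of PyMappingMethods correspond to __len__(), __getitem__(), and __setitem__()/__delitem__(); the single mp_ass_subscript slot covers both assignment and deletion, distinguished by whether v is NULL. A small illustrative sketch (CaseFoldDict is a made-up class):

```python
class CaseFoldDict:
    """Toy mapping whose keys compare case-insensitively."""

    def __init__(self):
        self._data = {}

    def __len__(self):                    # mp_length
        return len(self._data)

    def __getitem__(self, key):           # mp_subscript
        return self._data[key.lower()]

    def __setitem__(self, key, value):    # mp_ass_subscript, v != NULL
        self._data[key.lower()] = value

    def __delitem__(self, key):           # mp_ass_subscript, v == NULL
        del self._data[key.lower()]

d = CaseFoldDict()
d["Spam"] = 1
print(d["SPAM"])   # 1
del d["spam"]
print(len(d))      # 0
```

This also illustrates why filling mp_subscript is what makes PyMapping_Check() return 1: subscription, not key type, is what defines the mapping protocol.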
If this slot isNULL\n, the object does not support item assignment and deletion.\nSequence Object Structures\u00b6\n-\ntype PySequenceMethods\u00b6\nThis structure holds pointers to the functions which an object uses to implement the sequence protocol.\n-\nlenfunc PySequenceMethods.sq_length\u00b6\nThe corresponding slot ID\nPy_sq_length\nis part of the Stable ABI.This function is used by\nPySequence_Size()\nandPyObject_Size()\n, and has the same signature. It is also used for handling negative indices via thesq_item\nand thesq_ass_item\nslots.\n-\nbinaryfunc PySequenceMethods.sq_concat\u00b6\nThe corresponding slot ID\nPy_sq_concat\nis part of the Stable ABI.This function is used by\nPySequence_Concat()\nand has the same signature. It is also used by the+\noperator, after trying the numeric addition via thenb_add\nslot.\n-\nssizeargfunc PySequenceMethods.sq_repeat\u00b6\nThe corresponding slot ID\nPy_sq_repeat\nis part of the Stable ABI.This function is used by\nPySequence_Repeat()\nand has the same signature. It is also used by the*\noperator, after trying numeric multiplication via thenb_multiply\nslot.\n-\nssizeargfunc PySequenceMethods.sq_item\u00b6\nThe corresponding slot ID\nPy_sq_item\nis part of the Stable ABI.This function is used by\nPySequence_GetItem()\nand has the same signature. It is also used byPyObject_GetItem()\n, after trying the subscription via themp_subscript\nslot. This slot must be filled for thePySequence_Check()\nfunction to return1\n, it can beNULL\notherwise.Negative indexes are handled as follows: if the\nsq_length\nslot is filled, it is called and the sequence length is used to compute a positive index which is passed tosq_item\n. Ifsq_length\nisNULL\n, the index is passed as is to the function.\n-\nssizeobjargproc PySequenceMethods.sq_ass_item\u00b6\nThe corresponding slot ID\nPy_sq_ass_item\nis part of the Stable ABI.This function is used by\nPySequence_SetItem()\nand has the same signature. 
It is also used byPyObject_SetItem()\nandPyObject_DelItem()\n, after trying the item assignment and deletion via themp_ass_subscript\nslot. This slot may be left toNULL\nif the object does not support item assignment and deletion.\n-\nobjobjproc PySequenceMethods.sq_contains\u00b6\nThe corresponding slot ID\nPy_sq_contains\nis part of the Stable ABI.This function may be used by\nPySequence_Contains()\nand has the same signature. This slot may be left toNULL\n, in this casePySequence_Contains()\nsimply traverses the sequence until it finds a match.\n-\nbinaryfunc PySequenceMethods.sq_inplace_concat\u00b6\nThe corresponding slot ID\nPy_sq_inplace_concat\nis part of the Stable ABI.This function is used by\nPySequence_InPlaceConcat()\nand has the same signature. It should modify its first operand, and return it. This slot may be left toNULL\n, in this casePySequence_InPlaceConcat()\nwill fall back toPySequence_Concat()\n. It is also used by the augmented assignment+=\n, after trying numeric in-place addition via thenb_inplace_add\nslot.\n-\nssizeargfunc PySequenceMethods.sq_inplace_repeat\u00b6\nThe corresponding slot ID\nPy_sq_inplace_repeat\nis part of the Stable ABI.This function is used by\nPySequence_InPlaceRepeat()\nand has the same signature. It should modify its first operand, and return it. This slot may be left toNULL\n, in this casePySequence_InPlaceRepeat()\nwill fall back toPySequence_Repeat()\n. It is also used by the augmented assignment*=\n, after trying numeric in-place multiplication via thenb_inplace_multiply\nslot.\nBuffer Object Structures\u00b6\n-\ntype PyBufferProcs\u00b6\nThis structure holds pointers to the functions required by the Buffer protocol. 
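The in-place fallback described for sq_inplace_concat and sq_inplace_repeat above mirrors how Python resolves augmented assignment: += tries the in-place method first and falls back to ordinary concatenation when it is missing. A Python-level sketch (class names are illustrative):

```python
class Bag:
    def __init__(self, items=()):
        self.items = list(items)

    def __add__(self, other):        # sq_concat analogue
        return Bag(self.items + other.items)

    def __iadd__(self, other):       # sq_inplace_concat analogue
        self.items.extend(other.items)
        return self                  # must return the modified object

a = Bag([1]); orig_a = a
a += Bag([2])
assert a is orig_a               # modified in place via __iadd__

class NoIAdd:
    def __init__(self, items=()):
        self.items = list(items)
    def __add__(self, other):
        return NoIAdd(self.items + other.items)

b = NoIAdd([1]); orig_b = b
b += NoIAdd([2])
assert b is not orig_b           # fell back to __add__: a new object
```

The "should modify its first operand, and return it" requirement above is why __iadd__ ends with `return self`; the returned object is what gets rebound to the left-hand name.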
The protocol defines how an exporter object can expose its internal data to consumer objects.\n-\ngetbufferproc PyBufferProcs.bf_getbuffer\u00b6\nThe corresponding slot ID\nPy_bf_getbuffer\nis part of the Stable ABI since version 3.11.The signature of this function is:\nint (PyObject *exporter, Py_buffer *view, int flags);\nHandle a request to exporter to fill in view as specified by flags. Except for point (3), an implementation of this function MUST take these steps:\nCheck if the request can be met. If not, raise\nBufferError\n, set view->obj toNULL\nand return-1\n.Fill in the requested fields.\nIncrement an internal counter for the number of exports.\nSet view->obj to exporter and increment view->obj.\nReturn\n0\n.\nIf exporter is part of a chain or tree of buffer providers, two main schemes can be used:\nRe-export: Each member of the tree acts as the exporting object and sets view->obj to a new reference to itself.\nRedirect: The buffer request is redirected to the root object of the tree. Here, view->obj will be a new reference to the root object.\nThe individual fields of view are described in section Buffer structure, the rules how an exporter must react to specific requests are in section Buffer request types.\nAll memory pointed to in the\nPy_buffer\nstructure belongs to the exporter and must remain valid until there are no consumers left.format\n,shape\n,strides\n,suboffsets\nandinternal\nare read-only for the consumer.PyBuffer_FillInfo()\nprovides an easy way of exposing a simple bytes buffer while dealing correctly with all request types.PyObject_GetBuffer()\nis the interface for the consumer that wraps this function.\n-\nreleasebufferproc PyBufferProcs.bf_releasebuffer\u00b6\nThe corresponding slot ID\nPy_bf_releasebuffer\nis part of the Stable ABI since version 3.11.The signature of this function is:\nvoid (PyObject *exporter, Py_buffer *view);\nHandle a request to release the resources of the buffer. 
If no resources need to be released,\nPyBufferProcs.bf_releasebuffer\nmay beNULL\n. Otherwise, a standard implementation of this function will take these optional steps:Decrement an internal counter for the number of exports.\nIf the counter is\n0\n, free all memory associated with view.\nThe exporter MUST use the\ninternal\nfield to keep track of buffer-specific resources. This field is guaranteed to remain constant, while a consumer MAY pass a copy of the original buffer as the view argument.This function MUST NOT decrement view->obj, since that is done automatically in\nPyBuffer_Release()\n(this scheme is useful for breaking reference cycles).PyBuffer_Release()\nis the interface for the consumer that wraps this function.\nAsync Object Structures\u00b6\nAdded in version 3.5.\n-\ntype PyAsyncMethods\u00b6\nThis structure holds pointers to the functions required to implement awaitable and asynchronous iterator objects.\nHere is the structure definition:\ntypedef struct { unaryfunc am_await; unaryfunc am_aiter; unaryfunc am_anext; sendfunc am_send; } PyAsyncMethods;\n-\nunaryfunc PyAsyncMethods.am_await\u00b6\nThe corresponding slot ID\nPy_am_await\nis part of the Stable ABI since version 3.5.The signature of this function is:\nPyObject *am_await(PyObject *self);\nThe returned object must be an iterator, i.e.\nPyIter_Check()\nmust return1\nfor it.This slot may be set to\nNULL\nif an object is not an awaitable.\n-\nunaryfunc PyAsyncMethods.am_aiter\u00b6\nThe corresponding slot ID\nPy_am_aiter\nis part of the Stable ABI since version 3.5.The signature of this function is:\nPyObject *am_aiter(PyObject *self);\nMust return an asynchronous iterator object. 
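am_await corresponds to __await__() at the Python level: the iterator it returns drives the await, and the iterator's final StopIteration value becomes the result of the await expression. A minimal illustrative sketch (the Ready class is made up):

```python
import asyncio

class Ready:
    """Awaitable that resolves immediately to a fixed value."""

    def __init__(self, value):
        self.value = value

    def __await__(self):             # am_await: must return an iterator
        if False:
            yield                    # makes this a generator function
        return self.value            # carried out via StopIteration

async def main():
    return await Ready(42)

print(asyncio.run(main()))   # 42
```

The unreachable `yield` is a common trick to make __await__ a generator function, so calling it yields an iterator as the protocol (and PyIter_Check() at the C level) requires.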
See\n__anext__()\nfor details.This slot may be set to\nNULL\nif an object does not implement asynchronous iteration protocol.\n-\nunaryfunc PyAsyncMethods.am_anext\u00b6\nThe corresponding slot ID\nPy_am_anext\nis part of the Stable ABI since version 3.5.The signature of this function is:\nPyObject *am_anext(PyObject *self);\nMust return an awaitable object. See\n__anext__()\nfor details. This slot may be set toNULL\n.\n-\nsendfunc PyAsyncMethods.am_send\u00b6\nThe corresponding slot ID\nPy_am_send\nis part of the Stable ABI since version 3.10.The signature of this function is:\nPySendResult am_send(PyObject *self, PyObject *arg, PyObject **result);\nSee\nPyIter_Send()\nfor details. This slot may be set toNULL\n.Added in version 3.10.\nSlot Type typedefs\u00b6\n-\ntypedef PyObject *(*allocfunc)(PyTypeObject *cls, Py_ssize_t nitems)\u00b6\n- Part of the Stable ABI.\nThe purpose of this function is to separate memory allocation from memory initialization. It should return a pointer to a block of memory of adequate length for the instance, suitably aligned, and initialized to zeros, but with\nob_refcnt\nset to1\nandob_type\nset to the type argument. 
If the type\u2019s tp_itemsize\nis non-zero, the object\u2019s ob_size\nfield should be initialized to nitems and the length of the allocated memory block should be tp_basicsize + nitems*tp_itemsize\n, rounded up to a multiple of sizeof(void*)\n; otherwise, nitems is not used and the length of the block should be tp_basicsize\n. This function should not do any other instance initialization, not even to allocate additional memory; that should be done by\ntp_new\n.\n-\ntypedef void (*destructor)(PyObject*)\u00b6\n- Part of the Stable ABI.\n-\ntypedef PyObject *(*newfunc)(PyTypeObject*, PyObject*, PyObject*)\u00b6\n- Part of the Stable ABI.\nSee\ntp_new\n.\n-\ntypedef PyObject *(*reprfunc)(PyObject*)\u00b6\n- Part of the Stable ABI.\nSee\ntp_repr\n.\n-\ntypedef PyObject *(*getattrfunc)(PyObject *self, char *attr)\u00b6\n- Part of the Stable ABI.\nReturn the value of the named attribute for the object.\n-\ntypedef int (*setattrfunc)(PyObject *self, char *attr, PyObject *value)\u00b6\n- Part of the Stable ABI.\nSet the value of the named attribute for the object. The value argument is set to\nNULL\nto delete the attribute.\n-\ntypedef PyObject *(*getattrofunc)(PyObject *self, PyObject *attr)\u00b6\n- Part of the Stable ABI.\nReturn the value of the named attribute for the object.\nSee\ntp_getattro\n.\n-\ntypedef int (*setattrofunc)(PyObject *self, PyObject *attr, PyObject *value)\u00b6\n- Part of the Stable ABI.\nSet the value of the named attribute for the object. 
The value argument is set to\nNULL\nto delete the attribute.See\ntp_setattro\n.\n-\ntypedef PyObject *(*descrgetfunc)(PyObject*, PyObject*, PyObject*)\u00b6\n- Part of the Stable ABI.\nSee\ntp_descr_get\n.\n-\ntypedef int (*descrsetfunc)(PyObject*, PyObject*, PyObject*)\u00b6\n- Part of the Stable ABI.\nSee\ntp_descr_set\n.\n-\ntypedef Py_hash_t (*hashfunc)(PyObject*)\u00b6\n- Part of the Stable ABI.\nSee\ntp_hash\n.\n-\ntypedef PyObject *(*richcmpfunc)(PyObject*, PyObject*, int)\u00b6\n- Part of the Stable ABI.\nSee\ntp_richcompare\n.\n-\ntypedef PyObject *(*getiterfunc)(PyObject*)\u00b6\n- Part of the Stable ABI.\nSee\ntp_iter\n.\n-\ntypedef PyObject *(*iternextfunc)(PyObject*)\u00b6\n- Part of the Stable ABI.\nSee\ntp_iternext\n.\n-\ntypedef Py_ssize_t (*lenfunc)(PyObject*)\u00b6\n- Part of the Stable ABI.\n-\ntypedef int (*getbufferproc)(PyObject*, Py_buffer*, int)\u00b6\n- Part of the Stable ABI since version 3.12.\n-\ntypedef void (*releasebufferproc)(PyObject*, Py_buffer*)\u00b6\n- Part of the Stable ABI since version 3.12.\n-\ntypedef PyObject *(*unaryfunc)(PyObject*)\u00b6\n- Part of the Stable ABI.\n-\ntypedef PyObject *(*binaryfunc)(PyObject*, PyObject*)\u00b6\n- Part of the Stable ABI.\n-\ntypedef PyObject *(*ssizeargfunc)(PyObject*, Py_ssize_t)\u00b6\n- Part of the Stable ABI.\n-\ntypedef int (*ssizeobjargproc)(PyObject*, Py_ssize_t, PyObject*)\u00b6\n- Part of the Stable ABI.\n-\ntypedef int (*objobjproc)(PyObject*, PyObject*)\u00b6\n- Part of the Stable ABI.\n-\ntypedef int (*objobjargproc)(PyObject*, PyObject*, PyObject*)\u00b6\n- Part of the Stable ABI.\nExamples\u00b6\nThe following are simple examples of Python type definitions. They include common usage you may encounter. Some demonstrate tricky corner cases. 
For more examples, practical info, and a tutorial, see Defining Extension Types: Tutorial and Defining Extension Types: Assorted Topics.\nA basic static type:\ntypedef struct {\nPyObject_HEAD\nconst char *data;\n} MyObject;\nstatic PyTypeObject MyObject_Type = {\nPyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"mymod.MyObject\",\n.tp_basicsize = sizeof(MyObject),\n.tp_doc = PyDoc_STR(\"My objects\"),\n.tp_new = myobj_new,\n.tp_dealloc = (destructor)myobj_dealloc,\n.tp_repr = (reprfunc)myobj_repr,\n};\nYou may also find older code (especially in the CPython code base) with a more verbose initializer:\nstatic PyTypeObject MyObject_Type = {\nPyVarObject_HEAD_INIT(NULL, 0)\n\"mymod.MyObject\", /* tp_name */\nsizeof(MyObject), /* tp_basicsize */\n0, /* tp_itemsize */\n(destructor)myobj_dealloc, /* tp_dealloc */\n0, /* tp_vectorcall_offset */\n0, /* tp_getattr */\n0, /* tp_setattr */\n0, /* tp_as_async */\n(reprfunc)myobj_repr, /* tp_repr */\n0, /* tp_as_number */\n0, /* tp_as_sequence */\n0, /* tp_as_mapping */\n0, /* tp_hash */\n0, /* tp_call */\n0, /* tp_str */\n0, /* tp_getattro */\n0, /* tp_setattro */\n0, /* tp_as_buffer */\n0, /* tp_flags */\nPyDoc_STR(\"My objects\"), /* tp_doc */\n0, /* tp_traverse */\n0, /* tp_clear */\n0, /* tp_richcompare */\n0, /* tp_weaklistoffset */\n0, /* tp_iter */\n0, /* tp_iternext */\n0, /* tp_methods */\n0, /* tp_members */\n0, /* tp_getset */\n0, /* tp_base */\n0, /* tp_dict */\n0, /* tp_descr_get */\n0, /* tp_descr_set */\n0, /* tp_dictoffset */\n0, /* tp_init */\n0, /* tp_alloc */\nmyobj_new, /* tp_new */\n};\nA type that supports weakrefs, instance dicts, and hashing:\ntypedef struct {\nPyObject_HEAD\nconst char *data;\n} MyObject;\nstatic PyTypeObject MyObject_Type = {\nPyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"mymod.MyObject\",\n.tp_basicsize = sizeof(MyObject),\n.tp_doc = PyDoc_STR(\"My objects\"),\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE |\nPy_TPFLAGS_HAVE_GC | Py_TPFLAGS_MANAGED_DICT 
|\nPy_TPFLAGS_MANAGED_WEAKREF,\n.tp_new = myobj_new,\n.tp_traverse = (traverseproc)myobj_traverse,\n.tp_clear = (inquiry)myobj_clear,\n.tp_alloc = PyType_GenericNew,\n.tp_dealloc = (destructor)myobj_dealloc,\n.tp_repr = (reprfunc)myobj_repr,\n.tp_hash = (hashfunc)myobj_hash,\n.tp_richcompare = PyBaseObject_Type.tp_richcompare,\n};\nA str subclass that cannot be subclassed and cannot be called\nto create instances (e.g. uses a separate factory func) using\nPy_TPFLAGS_DISALLOW_INSTANTIATION\nflag:\ntypedef struct {\nPyUnicodeObject raw;\nchar *extra;\n} MyStr;\nstatic PyTypeObject MyStr_Type = {\nPyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"mymod.MyStr\",\n.tp_basicsize = sizeof(MyStr),\n.tp_base = NULL, // set to &PyUnicode_Type in module init\n.tp_doc = PyDoc_STR(\"my custom str\"),\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_DISALLOW_INSTANTIATION,\n.tp_repr = (reprfunc)myobj_repr,\n};\nThe simplest static type with fixed-length instances:\ntypedef struct {\nPyObject_HEAD\n} MyObject;\nstatic PyTypeObject MyObject_Type = {\nPyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"mymod.MyObject\",\n};\nThe simplest static type with variable-length instances:\ntypedef struct {\nPyObject_VAR_HEAD\nconst char *data[1];\n} MyObject;\nstatic PyTypeObject MyObject_Type = {\nPyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"mymod.MyObject\",\n.tp_basicsize = sizeof(MyObject) - sizeof(char *),\n.tp_itemsize = sizeof(char *),\n};", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 23070} +{"url": "https://docs.python.org/3/tutorial/controlflow.html", "title": "More Control Flow Tools", "content": "4. More Control Flow Tools\u00b6\nAs well as the while\nstatement just introduced, Python uses a few more\nthat we will encounter in this chapter.\n4.1. if\nStatements\u00b6\nPerhaps the most well-known statement type is the if\nstatement. For\nexample:\n>>> x = int(input(\"Please enter an integer: \"))\nPlease enter an integer: 42\n>>> if x < 0:\n... x = 0\n... 
print('Negative changed to zero')\n... elif x == 0:\n... print('Zero')\n... elif x == 1:\n... print('Single')\n... else:\n... print('More')\n...\nMore\nThere can be zero or more elif\nparts, and the else\npart is\noptional. The keyword \u2018elif\n\u2019 is short for \u2018else if\u2019, and is useful\nto avoid excessive indentation. An if\n\u2026 elif\n\u2026\nelif\n\u2026 sequence is a substitute for the switch\nor\ncase\nstatements found in other languages.\nIf you\u2019re comparing the same value to several constants, or checking for specific types or\nattributes, you may also find the match\nstatement useful. For more\ndetails see match Statements.\n4.2. for\nStatements\u00b6\nThe for\nstatement in Python differs a bit from what you may be used\nto in C or Pascal. Rather than always iterating over an arithmetic progression\nof numbers (like in Pascal), or giving the user the ability to define both the\niteration step and halting condition (as C), Python\u2019s for\nstatement\niterates over the items of any sequence (a list or a string), in the order that\nthey appear in the sequence. For example (no pun intended):\n>>> # Measure some strings:\n>>> words = ['cat', 'window', 'defenestrate']\n>>> for w in words:\n... print(w, len(w))\n...\ncat 3\nwindow 6\ndefenestrate 12\nCode that modifies a collection while iterating over that same collection can be tricky to get right. Instead, it is usually more straight-forward to loop over a copy of the collection or to create a new collection:\n# Create a sample collection\nusers = {'Hans': 'active', '\u00c9l\u00e9onore': 'inactive', '\u666f\u592a\u90ce': 'active'}\n# Strategy: Iterate over a copy\nfor user, status in users.copy().items():\nif status == 'inactive':\ndel users[user]\n# Strategy: Create a new collection\nactive_users = {}\nfor user, status in users.items():\nif status == 'active':\nactive_users[user] = status\n4.3. 
The range()\nFunction\u00b6\nIf you do need to iterate over a sequence of numbers, the built-in function\nrange()\ncomes in handy. It generates arithmetic progressions:\n>>> for i in range(5):\n... print(i)\n...\n0\n1\n2\n3\n4\nThe given end point is never part of the generated sequence; range(10)\ngenerates\n10 values, the legal indices for items of a sequence of length 10. It\nis possible to let the range start at another number, or to specify a different\nincrement (even negative; sometimes this is called the \u2018step\u2019):\n>>> list(range(5, 10))\n[5, 6, 7, 8, 9]\n>>> list(range(0, 10, 3))\n[0, 3, 6, 9]\n>>> list(range(-10, -100, -30))\n[-10, -40, -70]\nTo iterate over the indices of a sequence, you can combine range()\nand\nlen()\nas follows:\n>>> a = ['Mary', 'had', 'a', 'little', 'lamb']\n>>> for i in range(len(a)):\n... print(i, a[i])\n...\n0 Mary\n1 had\n2 a\n3 little\n4 lamb\nIn most such cases, however, it is convenient to use the enumerate()\nfunction, see Looping Techniques.\nA strange thing happens if you just print a range:\n>>> range(10)\nrange(0, 10)\nIn many ways the object returned by range()\nbehaves as if it is a list,\nbut in fact it isn\u2019t. It is an object which returns the successive items of\nthe desired sequence when you iterate over it, but it doesn\u2019t really make\nthe list, thus saving space.\nWe say such an object is iterable, that is, suitable as a target for\nfunctions and constructs that expect something from which they can\nobtain successive items until the supply is exhausted. We have seen that\nthe for\nstatement is such a construct, while an example of a function\nthat takes an iterable is sum()\n:\n>>> sum(range(4)) # 0 + 1 + 2 + 3\n6\nLater we will see more functions that return iterables and take iterables as\narguments. In chapter Data Structures, we will discuss list()\nin more\ndetail.\n4.4. 
break\nand continue\nStatements\u00b6\nThe break\nstatement breaks out of the innermost enclosing\nfor\nor while\nloop:\n>>> for n in range(2, 10):\n... for x in range(2, n):\n... if n % x == 0:\n... print(f\"{n} equals {x} * {n//x}\")\n... break\n...\n4 equals 2 * 2\n6 equals 2 * 3\n8 equals 2 * 4\n9 equals 3 * 3\nThe continue\nstatement continues with the next\niteration of the loop:\n>>> for num in range(2, 10):\n... if num % 2 == 0:\n... print(f\"Found an even number {num}\")\n... continue\n... print(f\"Found an odd number {num}\")\n...\nFound an even number 2\nFound an odd number 3\nFound an even number 4\nFound an odd number 5\nFound an even number 6\nFound an odd number 7\nFound an even number 8\nFound an odd number 9\n4.5. else\nClauses on Loops\u00b6\nIn a for\nor while\nloop the break\nstatement\nmay be paired with an else\nclause. If the loop finishes without\nexecuting the break\n, the else\nclause executes.\nIn a for\nloop, the else\nclause is executed\nafter the loop finishes its final iteration, that is, if no break occurred.\nIn a while\nloop, it\u2019s executed after the loop\u2019s condition becomes false.\nIn either kind of loop, the else\nclause is not executed if the\nloop was terminated by a break\n. Of course, other ways of ending the\nloop early, such as a return\nor a raised exception, will also skip\nexecution of the else\nclause.\nThis is exemplified in the following for\nloop,\nwhich searches for prime numbers:\n>>> for n in range(2, 10):\n... for x in range(2, n):\n... if n % x == 0:\n... print(n, 'equals', x, '*', n//x)\n... break\n... else:\n... # loop fell through without finding a factor\n... print(n, 'is a prime number')\n...\n2 is a prime number\n3 is a prime number\n4 equals 2 * 2\n5 is a prime number\n6 equals 2 * 3\n7 is a prime number\n8 equals 2 * 4\n9 equals 3 * 3\n(Yes, this is the correct code. 
Look closely: the else\nclause belongs to\nthe for\nloop, not the if\nstatement.)\nOne way to think of the else clause is to imagine it paired with the if\ninside the loop. As the loop executes, it will run a sequence like\nif/if/if/else. The if\nis inside the loop, encountered a number of times. If\nthe condition is ever true, a break\nwill happen. If the condition is never\ntrue, the else\nclause outside the loop will execute.\nWhen used with a loop, the else\nclause has more in common with the else\nclause of a try\nstatement than it does with that of if\nstatements: a try\nstatement\u2019s else\nclause runs when no exception\noccurs, and a loop\u2019s else\nclause runs when no break\noccurs. For more on\nthe try\nstatement and exceptions, see Handling Exceptions.\n4.6. pass\nStatements\u00b6\nThe pass\nstatement does nothing. It can be used when a statement is\nrequired syntactically but the program requires no action. For example:\n>>> while True:\n... pass # Busy-wait for keyboard interrupt (Ctrl+C)\n...\nThis is commonly used for creating minimal classes:\n>>> class MyEmptyClass:\n... pass\n...\nAnother place pass\ncan be used is as a place-holder for a function or\nconditional body when you are working on new code, allowing you to keep thinking\nat a more abstract level. The pass\nis silently ignored:\n>>> def initlog(*args):\n... pass # Remember to implement this!\n...\nFor this last case, many people use the ellipsis literal ...\ninstead of\npass\n. This use has no special meaning to Python, and is not part of\nthe language definition (you could use any constant expression here), but\n...\nis used conventionally as a placeholder body as well.\nSee The Ellipsis Object.\n4.7. match\nStatements\u00b6\nA match\nstatement takes an expression and compares its value to successive\npatterns given as one or more case blocks. 
This is superficially\nsimilar to a switch statement in C, Java or JavaScript (and many\nother languages), but it\u2019s more similar to pattern matching in\nlanguages like Rust or Haskell. Only the first pattern that matches\ngets executed and it can also extract components (sequence elements\nor object attributes) from the value into variables. If no case matches,\nnone of the branches is executed.\nThe simplest form compares a subject value against one or more literals:\ndef http_error(status):\nmatch status:\ncase 400:\nreturn \"Bad request\"\ncase 404:\nreturn \"Not found\"\ncase 418:\nreturn \"I'm a teapot\"\ncase _:\nreturn \"Something's wrong with the internet\"\nNote the last block: the \u201cvariable name\u201d _\nacts as a wildcard and\nnever fails to match.\nYou can combine several literals in a single pattern using |\n(\u201cor\u201d):\ncase 401 | 403 | 404:\nreturn \"Not allowed\"\nPatterns can look like unpacking assignments, and can be used to bind variables:\n# point is an (x, y) tuple\nmatch point:\ncase (0, 0):\nprint(\"Origin\")\ncase (0, y):\nprint(f\"Y={y}\")\ncase (x, 0):\nprint(f\"X={x}\")\ncase (x, y):\nprint(f\"X={x}, Y={y}\")\ncase _:\nraise ValueError(\"Not a point\")\nStudy that one carefully! The first pattern has two literals, and can\nbe thought of as an extension of the literal pattern shown above. But\nthe next two patterns combine a literal and a variable, and the\nvariable binds a value from the subject (point\n). 
The fourth\npattern captures two values, which makes it conceptually similar to\nthe unpacking assignment (x, y) = point\n.\nIf you are using classes to structure your data you can use the class name followed by an argument list resembling a constructor, but with the ability to capture attributes into variables:\nclass Point:\ndef __init__(self, x, y):\nself.x = x\nself.y = y\ndef where_is(point):\nmatch point:\ncase Point(x=0, y=0):\nprint(\"Origin\")\ncase Point(x=0, y=y):\nprint(f\"Y={y}\")\ncase Point(x=x, y=0):\nprint(f\"X={x}\")\ncase Point():\nprint(\"Somewhere else\")\ncase _:\nprint(\"Not a point\")\nYou can use positional parameters with some builtin classes that provide an\nordering for their attributes (e.g. dataclasses). You can also define a specific\nposition for attributes in patterns by setting the __match_args__\nspecial\nattribute in your classes. If it\u2019s set to (\u201cx\u201d, \u201cy\u201d), the following patterns are all\nequivalent (and all bind the y\nattribute to the var\nvariable):\nPoint(1, var)\nPoint(1, y=var)\nPoint(x=1, y=var)\nPoint(y=var, x=1)\nA recommended way to read patterns is to look at them as an extended form of what you\nwould put on the left of an assignment, to understand which variables would be set to\nwhat.\nOnly the standalone names (like var\nabove) are assigned to by a match statement.\nDotted names (like foo.bar\n), attribute names (the x=\nand y=\nabove) or class names\n(recognized by the \u201c(\u2026)\u201d next to them like Point\nabove) are never assigned to.\nPatterns can be arbitrarily nested. 
For example, if we have a short\nlist of Points, with __match_args__\nadded, we could match it like this:\nclass Point:\n__match_args__ = ('x', 'y')\ndef __init__(self, x, y):\nself.x = x\nself.y = y\nmatch points:\ncase []:\nprint(\"No points\")\ncase [Point(0, 0)]:\nprint(\"The origin\")\ncase [Point(x, y)]:\nprint(f\"Single point {x}, {y}\")\ncase [Point(0, y1), Point(0, y2)]:\nprint(f\"Two on the Y axis at {y1}, {y2}\")\ncase _:\nprint(\"Something else\")\nWe can add an if\nclause to a pattern, known as a \u201cguard\u201d. If the\nguard is false, match\ngoes on to try the next case block. Note\nthat value capture happens before the guard is evaluated:\nmatch point:\ncase Point(x, y) if x == y:\nprint(f\"Y=X at {x}\")\ncase Point(x, y):\nprint(f\"Not on the diagonal\")\nSeveral other key features of this statement:\nLike unpacking assignments, tuple and list patterns have exactly the same meaning and actually match arbitrary sequences. An important exception is that they don\u2019t match iterators or strings.\nSequence patterns support extended unpacking:\n[x, y, *rest]\nand (x, y, *rest)\nwork similar to unpacking assignments. The name after *\nmay also be _\n, so (x, y, *_)\nmatches a sequence of at least two items without binding the remaining items. Mapping patterns:\n{\"bandwidth\": b, \"latency\": l}\ncaptures the \"bandwidth\"\nand \"latency\"\nvalues from a dictionary. Unlike sequence patterns, extra keys are ignored. An unpacking like **rest\nis also supported. (But **_\nwould be redundant, so it is not allowed.) Subpatterns may be captured using the\nas\nkeyword:\ncase (Point(x1, y1), Point(x2, y2) as p2): ...\nwill capture the second element of the input as\np2\n(as long as the input is a sequence of two points). Most literals are compared by equality, however the singletons\nTrue\n, False\nand None\nare compared by identity. Patterns may use named constants. 
These must be dotted names to prevent them from being interpreted as capture variables:\nfrom enum import Enum class Color(Enum): RED = 'red' GREEN = 'green' BLUE = 'blue' color = Color(input(\"Enter your choice of 'red', 'blue' or 'green': \")) match color: case Color.RED: print(\"I see red!\") case Color.GREEN: print(\"Grass is green\") case Color.BLUE: print(\"I'm feeling the blues :(\")\nFor a more detailed explanation and additional examples, you can look into PEP 636 which is written in a tutorial format.\n4.8. Defining Functions\u00b6\nWe can create a function that writes the Fibonacci series to an arbitrary boundary:\n>>> def fib(n): # write Fibonacci series less than n\n... \"\"\"Print a Fibonacci series less than n.\"\"\"\n... a, b = 0, 1\n... while a < n:\n... print(a, end=' ')\n... a, b = b, a+b\n... print()\n...\n>>> # Now call the function we just defined:\n>>> fib(2000)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597\nThe keyword def\nintroduces a function definition. It must be\nfollowed by the function name and the parenthesized list of formal parameters.\nThe statements that form the body of the function start at the next line, and\nmust be indented.\nThe first statement of the function body can optionally be a string literal; this string literal is the function\u2019s documentation string, or docstring. (More about docstrings can be found in the section Documentation Strings.) There are tools which use docstrings to automatically produce online or printed documentation, or to let the user interactively browse through code; it\u2019s good practice to include docstrings in code that you write, so make a habit of it.\nThe execution of a function introduces a new symbol table used for the local\nvariables of the function. 
More precisely, all variable assignments in a\nfunction store the value in the local symbol table; whereas variable references\nfirst look in the local symbol table, then in the local symbol tables of\nenclosing functions, then in the global symbol table, and finally in the table\nof built-in names. Thus, global variables and variables of enclosing functions\ncannot be directly assigned a value within a function (unless, for global\nvariables, named in a global\nstatement, or, for variables of enclosing\nfunctions, named in a nonlocal\nstatement), although they may be\nreferenced.\nThe actual parameters (arguments) to a function call are introduced in the local symbol table of the called function when it is called; thus, arguments are passed using call by value (where the value is always an object reference, not the value of the object). [1] When a function calls another function, or calls itself recursively, a new local symbol table is created for that call.\nA function definition associates the function name with the function object in the current symbol table. The interpreter recognizes the object pointed to by that name as a user-defined function. Other names can also point to that same function object and can also be used to access the function:\n>>> fib\n\n>>> f = fib\n>>> f(100)\n0 1 1 2 3 5 8 13 21 34 55 89\nComing from other languages, you might object that fib\nis not a function but\na procedure since it doesn\u2019t return a value. In fact, even functions without a\nreturn\nstatement do return a value, albeit a rather boring one. This\nvalue is called None\n(it\u2019s a built-in name). Writing the value None\nis\nnormally suppressed by the interpreter if it would be the only value written.\nYou can see it if you really want to using print()\n:\n>>> fib(0)\n>>> print(fib(0))\nNone\nIt is simple to write a function that returns a list of the numbers of the Fibonacci series, instead of printing it:\n>>> def fib2(n): # return Fibonacci series up to n\n... 
\"\"\"Return a list containing the Fibonacci series up to n.\"\"\"\n... result = []\n... a, b = 0, 1\n... while a < n:\n... result.append(a) # see below\n... a, b = b, a+b\n... return result\n...\n>>> f100 = fib2(100) # call it\n>>> f100 # write the result\n[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]\nThis example, as usual, demonstrates some new Python features:\nThe\nreturn\nstatement returns with a value from a function.return\nwithout an expression argument returnsNone\n. Falling off the end of a function also returnsNone\n.The statement\nresult.append(a)\ncalls a method of the list objectresult\n. A method is a function that \u2018belongs\u2019 to an object and is namedobj.methodname\n, whereobj\nis some object (this may be an expression), andmethodname\nis the name of a method that is defined by the object\u2019s type. Different types define different methods. Methods of different types may have the same name without causing ambiguity. (It is possible to define your own object types and methods, using classes, see Classes) The methodappend()\nshown in the example is defined for list objects; it adds a new element at the end of the list. In this example it is equivalent toresult = result + [a]\n, but more efficient.\n4.9. More on Defining Functions\u00b6\nIt is also possible to define functions with a variable number of arguments. There are three forms, which can be combined.\n4.9.1. Default Argument Values\u00b6\nThe most useful form is to specify a default value for one or more arguments. This creates a function that can be called with fewer arguments than it is defined to allow. 
For example:\ndef ask_ok(prompt, retries=4, reminder='Please try again!'):\nwhile True:\nreply = input(prompt)\nif reply in {'y', 'ye', 'yes'}:\nreturn True\nif reply in {'n', 'no', 'nop', 'nope'}:\nreturn False\nretries = retries - 1\nif retries < 0:\nraise ValueError('invalid user response')\nprint(reminder)\nThis function can be called in several ways:\ngiving only the mandatory argument:\nask_ok('Do you really want to quit?')\ngiving one of the optional arguments:\nask_ok('OK to overwrite the file?', 2)\nor even giving all arguments:\nask_ok('OK to overwrite the file?', 2, 'Come on, only yes or no!')\nThis example also introduces the in\nkeyword. This tests whether or\nnot a sequence contains a certain value.\nThe default values are evaluated at the point of function definition in the defining scope, so that\ni = 5\ndef f(arg=i):\nprint(arg)\ni = 6\nf()\nwill print 5\n.\nImportant warning: The default value is evaluated only once. This makes a difference when the default is a mutable object such as a list, dictionary, or instances of most classes. For example, the following function accumulates the arguments passed to it on subsequent calls:\ndef f(a, L=[]):\nL.append(a)\nreturn L\nprint(f(1))\nprint(f(2))\nprint(f(3))\nThis will print\n[1]\n[1, 2]\n[1, 2, 3]\nIf you don\u2019t want the default to be shared between subsequent calls, you can write the function like this instead:\ndef f(a, L=None):\nif L is None:\nL = []\nL.append(a)\nreturn L\n4.9.2. Keyword Arguments\u00b6\nFunctions can also be called using keyword arguments\nof the form kwarg=value\n. For instance, the following function:\ndef parrot(voltage, state='a stiff', action='voom', type='Norwegian Blue'):\nprint(\"-- This parrot wouldn't\", action, end=' ')\nprint(\"if you put\", voltage, \"volts through it.\")\nprint(\"-- Lovely plumage, the\", type)\nprint(\"-- It's\", state, \"!\")\naccepts one required argument (voltage\n) and three optional arguments\n(state\n, action\n, and type\n). 
This function can be called in any\nof the following ways:\nparrot(1000) # 1 positional argument\nparrot(voltage=1000) # 1 keyword argument\nparrot(voltage=1000000, action='VOOOOOM') # 2 keyword arguments\nparrot(action='VOOOOOM', voltage=1000000) # 2 keyword arguments\nparrot('a million', 'bereft of life', 'jump') # 3 positional arguments\nparrot('a thousand', state='pushing up the daisies') # 1 positional, 1 keyword\nbut all the following calls would be invalid:\nparrot() # required argument missing\nparrot(voltage=5.0, 'dead') # non-keyword argument after a keyword argument\nparrot(110, voltage=220) # duplicate value for the same argument\nparrot(actor='John Cleese') # unknown keyword argument\nIn a function call, keyword arguments must follow positional arguments.\nAll the keyword arguments passed must match one of the arguments\naccepted by the function (e.g. actor\nis not a valid argument for the\nparrot\nfunction), and their order is not important. This also includes\nnon-optional arguments (e.g. parrot(voltage=1000)\nis valid too).\nNo argument may receive a value more than once.\nHere\u2019s an example that fails due to this restriction:\n>>> def function(a):\n... pass\n...\n>>> function(0, a=0)\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: function() got multiple values for argument 'a'\nWhen a final formal parameter of the form **name\nis present, it receives a\ndictionary (see Mapping Types \u2014 dict) containing all keyword arguments except for\nthose corresponding to a formal parameter. This may be combined with a formal\nparameter of the form *name\n(described in the next subsection) which\nreceives a tuple containing the positional\narguments beyond the formal parameter list. (*name\nmust occur\nbefore **name\n.) 
For example, if we define a function like this:\ndef cheeseshop(kind, *arguments, **keywords):\nprint(\"-- Do you have any\", kind, \"?\")\nprint(\"-- I'm sorry, we're all out of\", kind)\nfor arg in arguments:\nprint(arg)\nprint(\"-\" * 40)\nfor kw in keywords:\nprint(kw, \":\", keywords[kw])\nIt could be called like this:\ncheeseshop(\"Limburger\", \"It's very runny, sir.\",\n\"It's really very, VERY runny, sir.\",\nshopkeeper=\"Michael Palin\",\nclient=\"John Cleese\",\nsketch=\"Cheese Shop Sketch\")\nand of course it would print:\n-- Do you have any Limburger ?\n-- I'm sorry, we're all out of Limburger\nIt's very runny, sir.\nIt's really very, VERY runny, sir.\n----------------------------------------\nshopkeeper : Michael Palin\nclient : John Cleese\nsketch : Cheese Shop Sketch\nNote that the order in which the keyword arguments are printed is guaranteed to match the order in which they were provided in the function call.\n4.9.3. Special parameters\u00b6\nBy default, arguments may be passed to a Python function either by position or explicitly by keyword. For readability and performance, it makes sense to restrict the way arguments can be passed so that a developer need only look at the function definition to determine if items are passed by position, by position or keyword, or by keyword.\nA function definition may look like:\ndef f(pos1, pos2, /, pos_or_kwd, *, kwd1, kwd2):\n----------- ---------- ----------\n| | |\n| Positional or keyword |\n| - Keyword only\n-- Positional only\nwhere /\nand *\nare optional. If used, these symbols indicate the kind of\nparameter by how the arguments may be passed to the function:\npositional-only, positional-or-keyword, and keyword-only. Keyword parameters\nare also referred to as named parameters.\n4.9.3.1. Positional-or-Keyword Arguments\u00b6\nIf /\nand *\nare not present in the function definition, arguments may\nbe passed to a function by position or by keyword.\n4.9.3.2. 
Positional-Only Parameters\u00b6\nLooking at this in a bit more detail, it is possible to mark certain parameters\nas positional-only. If positional-only, the parameters\u2019 order matters, and\nthe parameters cannot be passed by keyword. Positional-only parameters are\nplaced before a /\n(forward-slash). The /\nis used to logically\nseparate the positional-only parameters from the rest of the parameters.\nIf there is no /\nin the function definition, there are no positional-only\nparameters.\nParameters following the /\nmay be positional-or-keyword or keyword-only.\n4.9.3.3. Keyword-Only Arguments\u00b6\nTo mark parameters as keyword-only, indicating the parameters must be passed\nby keyword argument, place an *\nin the arguments list just before the first\nkeyword-only parameter.\n4.9.3.4. Function Examples\u00b6\nConsider the following example function definitions paying close attention to the\nmarkers /\nand *\n:\n>>> def standard_arg(arg):\n... print(arg)\n...\n>>> def pos_only_arg(arg, /):\n... print(arg)\n...\n>>> def kwd_only_arg(*, arg):\n... print(arg)\n...\n>>> def combined_example(pos_only, /, standard, *, kwd_only):\n... 
print(pos_only, standard, kwd_only)\nThe first function definition, standard_arg\n, the most familiar form,\nplaces no restrictions on the calling convention and arguments may be\npassed by position or keyword:\n>>> standard_arg(2)\n2\n>>> standard_arg(arg=2)\n2\nThe second function pos_only_arg\nis restricted to only use positional\nparameters as there is a /\nin the function definition:\n>>> pos_only_arg(1)\n1\n>>> pos_only_arg(arg=1)\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: pos_only_arg() got some positional-only arguments passed as keyword arguments: 'arg'\nThe third function kwd_only_arg\nonly allows keyword arguments as indicated\nby a *\nin the function definition:\n>>> kwd_only_arg(3)\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: kwd_only_arg() takes 0 positional arguments but 1 was given\n>>> kwd_only_arg(arg=3)\n3\nAnd the last uses all three calling conventions in the same function definition:\n>>> combined_example(1, 2, 3)\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: combined_example() takes 2 positional arguments but 3 were given\n>>> combined_example(1, 2, kwd_only=3)\n1 2 3\n>>> combined_example(1, standard=2, kwd_only=3)\n1 2 3\n>>> combined_example(pos_only=1, standard=2, kwd_only=3)\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: combined_example() got some positional-only arguments passed as keyword arguments: 'pos_only'\nFinally, consider this function definition which has a potential collision between the positional argument name\nand **kwds\nwhich has name\nas a key:\ndef foo(name, **kwds):\nreturn 'name' in kwds\nThere is no possible call that will make it return True\nas the keyword 'name'\nwill always bind to the first parameter. 
For example:\n>>> foo(1, **{'name': 2})\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nTypeError: foo() got multiple values for argument 'name'\n>>>\nBut using /\n(positional only arguments), it is possible since it allows name\nas a positional argument and 'name'\nas a key in the keyword arguments:\n>>> def foo(name, /, **kwds):\n... return 'name' in kwds\n...\n>>> foo(1, **{'name': 2})\nTrue\nIn other words, the names of positional-only parameters can be used in\n**kwds\nwithout ambiguity.\n4.9.3.5. Recap\u00b6\nThe use case will determine which parameters to use in the function definition:\ndef f(pos1, pos2, /, pos_or_kwd, *, kwd1, kwd2):\nAs guidance:\nUse positional-only if you want the name of the parameters to not be available to the user. This is useful when parameter names have no real meaning, if you want to enforce the order of the arguments when the function is called or if you need to take some positional parameters and arbitrary keywords.\nUse keyword-only when names have meaning and the function definition is more understandable by being explicit with names or you want to prevent users relying on the position of the argument being passed.\nFor an API, use positional-only to prevent breaking API changes if the parameter\u2019s name is modified in the future.\n4.9.4. Arbitrary Argument Lists\u00b6\nFinally, the least frequently used option is to specify that a function can be called with an arbitrary number of arguments. These arguments will be wrapped up in a tuple (see Tuples and Sequences). Before the variable number of arguments, zero or more normal arguments may occur.\ndef write_multiple_items(file, separator, *args):\n    file.write(separator.join(args))\nNormally, these variadic arguments will be last in the list of formal\nparameters, because they scoop up all remaining input arguments that are\npassed to the function. 
Any formal parameters which occur after the *args\nparameter are \u2018keyword-only\u2019 arguments, meaning that they can only be used as\nkeywords rather than positional arguments.\n>>> def concat(*args, sep=\"/\"):\n... return sep.join(args)\n...\n>>> concat(\"earth\", \"mars\", \"venus\")\n'earth/mars/venus'\n>>> concat(\"earth\", \"mars\", \"venus\", sep=\".\")\n'earth.mars.venus'\n4.9.5. Unpacking Argument Lists\u00b6\nThe reverse situation occurs when the arguments are already in a list or tuple\nbut need to be unpacked for a function call requiring separate positional\narguments. For instance, the built-in range()\nfunction expects separate\nstart and stop arguments. If they are not available separately, write the\nfunction call with the *\n-operator to unpack the arguments out of a list\nor tuple:\n>>> list(range(3, 6)) # normal call with separate arguments\n[3, 4, 5]\n>>> args = [3, 6]\n>>> list(range(*args)) # call with arguments unpacked from a list\n[3, 4, 5]\nIn the same fashion, dictionaries can deliver keyword arguments with the\n**\n-operator:\n>>> def parrot(voltage, state='a stiff', action='voom'):\n... print(\"-- This parrot wouldn't\", action, end=' ')\n... print(\"if you put\", voltage, \"volts through it.\", end=' ')\n... print(\"E's\", state, \"!\")\n...\n>>> d = {\"voltage\": \"four million\", \"state\": \"bleedin' demised\", \"action\": \"VOOM\"}\n>>> parrot(**d)\n-- This parrot wouldn't VOOM if you put four million volts through it. E's bleedin' demised !\n4.9.6. Lambda Expressions\u00b6\nSmall anonymous functions can be created with the lambda\nkeyword.\nThis function returns the sum of its two arguments: lambda a, b: a+b\n.\nLambda functions can be used wherever function objects are required. They are\nsyntactically restricted to a single expression. Semantically, they are just\nsyntactic sugar for a normal function definition. 
Like nested function\ndefinitions, lambda functions can reference variables from the containing\nscope:\n>>> def make_incrementor(n):\n... return lambda x: x + n\n...\n>>> f = make_incrementor(42)\n>>> f(0)\n42\n>>> f(1)\n43\nThe above example uses a lambda expression to return a function. Another use\nis to pass a small function as an argument. For instance, list.sort()\ntakes a sorting key function key which can be a lambda function:\n>>> pairs = [(1, 'one'), (2, 'two'), (3, 'three'), (4, 'four')]\n>>> pairs.sort(key=lambda pair: pair[1])\n>>> pairs\n[(4, 'four'), (1, 'one'), (3, 'three'), (2, 'two')]\n4.9.7. Documentation Strings\u00b6\nHere are some conventions about the content and formatting of documentation strings.\nThe first line should always be a short, concise summary of the object\u2019s purpose. For brevity, it should not explicitly state the object\u2019s name or type, since these are available by other means (except if the name happens to be a verb describing a function\u2019s operation). This line should begin with a capital letter and end with a period.\nIf there are more lines in the documentation string, the second line should be blank, visually separating the summary from the rest of the description. The following lines should be one or more paragraphs describing the object\u2019s calling conventions, its side effects, etc.\nThe Python parser strips indentation from multi-line string literals when they serve as module, class, or function docstrings.\nHere is an example of a multi-line docstring:\n>>> def my_function():\n... \"\"\"Do nothing, but document it.\n...\n... No, really, it doesn't do anything:\n...\n... >>> my_function()\n... >>>\n... \"\"\"\n... pass\n...\n>>> print(my_function.__doc__)\nDo nothing, but document it.\nNo, really, it doesn't do anything:\n>>> my_function()\n>>>\n4.9.8. 
Function Annotations\u00b6\nFunction annotations are completely optional metadata information about the types used by user-defined functions (see PEP 3107 and PEP 484 for more information).\nAnnotations are stored in the __annotations__\nattribute of the function as a dictionary and have no effect on any other part of the\nfunction. Parameter annotations are defined by a colon after the parameter name, followed\nby an expression evaluating to the value of the annotation. Return annotations are\ndefined by a literal ->\n, followed by an expression, between the parameter\nlist and the colon denoting the end of the def\nstatement. The\nfollowing example has a required argument, an optional argument, and the return\nvalue annotated:\n>>> def f(ham: str, eggs: str = 'eggs') -> str:\n... print(\"Annotations:\", f.__annotations__)\n... print(\"Arguments:\", ham, eggs)\n... return ham + ' and ' + eggs\n...\n>>> f('spam')\nAnnotations: {'ham': <class 'str'>, 'return': <class 'str'>, 'eggs': <class 'str'>}\nArguments: spam eggs\n'spam and eggs'\n4.10. Intermezzo: Coding Style\u00b6\nNow that you are about to write longer, more complex pieces of Python, it is a good time to talk about coding style. Most languages can be written (or more concisely, formatted) in different styles; some are more readable than others. Making it easy for others to read your code is always a good idea, and adopting a nice coding style helps tremendously for that.\nFor Python, PEP 8 has emerged as the style guide that most projects adhere to; it promotes a very readable and eye-pleasing coding style. Every Python developer should read it at some point; here are the most important points extracted for you:\nUse 4-space indentation, and no tabs.\n4 spaces are a good compromise between small indentation (allows greater nesting depth) and large indentation (easier to read). 
Tabs introduce confusion, and are best left out.\nWrap lines so that they don\u2019t exceed 79 characters.\nThis helps users with small displays and makes it possible to have several code files side-by-side on larger displays.\nUse blank lines to separate functions and classes, and larger blocks of code inside functions.\nWhen possible, put comments on a line of their own.\nUse docstrings.\nUse spaces around operators and after commas, but not directly inside bracketing constructs:\na = f(1, 2) + g(3, 4)\n.\nName your classes and functions consistently; the convention is to use\nUpperCamelCase\nfor classes and lowercase_with_underscores\nfor functions and methods. Always use self\nas the name for the first method argument (see A First Look at Classes for more on classes and methods).\nDon\u2019t use fancy encodings if your code is meant to be used in international environments. Python\u2019s default, UTF-8, or even plain ASCII work best in any case.\nLikewise, don\u2019t use non-ASCII characters in identifiers if there is only the slightest chance people speaking a different language will read or maintain the code.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 8619}
+{"url": "https://docs.python.org/3/tutorial/introduction.html", "title": "An Informal Introduction to Python", "content": "3. An Informal Introduction to Python\u00b6\nIn the following examples, input and output are distinguished by the presence or absence of prompts (>>> and \u2026): to repeat the example, you must type everything after the prompt, when the prompt appears; lines that do not begin with a prompt are output from the interpreter. Note that a secondary prompt on a line by itself in an example means you must type a blank line; this is used to end a multi-line command.\nYou can use the \u201cCopy\u201d button (it appears in the upper-right corner when hovering over or tapping a code example), which strips prompts and omits output, to copy and paste the input lines into your interpreter.\nMany of the examples in this manual, even those entered at the interactive\nprompt, include comments. Comments in Python start with the hash character,\n#\n, and extend to the end of the physical line. A comment may appear at the\nstart of a line or following whitespace or code, but not within a string\nliteral. A hash character within a string literal is just a hash character.\nSince comments are to clarify code and are not interpreted by Python, they may\nbe omitted when typing in examples.\nSome examples:\n# this is the first comment\nspam = 1 # and this is the second comment\n# ... and now a third!\ntext = \"# This is not a comment because it's inside quotes.\"\n3.1. Using Python as a Calculator\u00b6\nLet\u2019s try some simple Python commands. Start the interpreter and wait for the\nprimary prompt, >>>\n. (It shouldn\u2019t take long.)\n3.1.1. 
Numbers\u00b6\nThe interpreter acts as a simple calculator: you can type an expression into it\nand it will write the value. Expression syntax is straightforward: the\noperators +\n, -\n, *\nand /\ncan be used to perform\narithmetic; parentheses (()\n) can be used for grouping.\nFor example:\n>>> 2 + 2\n4\n>>> 50 - 5*6\n20\n>>> (50 - 5*6) / 4\n5.0\n>>> 8 / 5 # division always returns a floating-point number\n1.6\nThe integer numbers (e.g. 2\n, 4\n, 20\n) have type int\n,\nthe ones with a fractional part (e.g. 5.0\n, 1.6\n) have type\nfloat\n. We will see more about numeric types later in the tutorial.\nDivision (/\n) always returns a float. To do floor division and\nget an integer result you can use the //\noperator; to calculate\nthe remainder you can use %\n:\n>>> 17 / 3 # classic division returns a float\n5.666666666666667\n>>>\n>>> 17 // 3 # floor division discards the fractional part\n5\n>>> 17 % 3 # the % operator returns the remainder of the division\n2\n>>> 5 * 3 + 2 # floored quotient * divisor + remainder\n17\nWith Python, it is possible to use the **\noperator to calculate powers [1]:\n>>> 5 ** 2 # 5 squared\n25\n>>> 2 ** 7 # 2 to the power of 7\n128\nThe equal sign (=\n) is used to assign a value to a variable. Afterwards, no\nresult is displayed before the next interactive prompt:\n>>> width = 20\n>>> height = 5 * 9\n>>> width * height\n900\nIf a variable is not \u201cdefined\u201d (assigned a value), trying to use it will give you an error:\n>>> n # try to access an undefined variable\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nNameError: name 'n' is not defined\nThere is full support for floating point; operators with mixed type operands convert the integer operand to floating point:\n>>> 4 * 3.75 - 1\n14.0\nIn interactive mode, the last printed expression is assigned to the variable\n_\n. 
This means that when you are using Python as a desk calculator, it is\nsomewhat easier to continue calculations, for example:\n>>> tax = 12.5 / 100\n>>> price = 100.50\n>>> price * tax\n12.5625\n>>> price + _\n113.0625\n>>> round(_, 2)\n113.06\nThis variable should be treated as read-only by the user. Don\u2019t explicitly assign a value to it \u2014 you would create an independent local variable with the same name masking the built-in variable with its magic behavior.\nIn addition to int\nand float\n, Python supports other types of\nnumbers, such as Decimal\nand Fraction\n.\nPython also has built-in support for complex numbers,\nand uses the j\nor J\nsuffix to indicate the imaginary part\n(e.g. 3+5j\n).\n3.1.2. Text\u00b6\nPython can manipulate text (represented by type str\n, so-called\n\u201cstrings\u201d) as well as numbers. This includes characters \u201c!\n\u201d, words\n\u201crabbit\n\u201d, names \u201cParis\n\u201d, sentences \u201cGot your back.\n\u201d, etc.\n\u201cYay! :)\n\u201d. They can be enclosed in single quotes ('...'\n) or double\nquotes (\"...\"\n) with the same result [2].\n>>> 'spam eggs' # single quotes\n'spam eggs'\n>>> \"Paris rabbit got your back :)! Yay!\" # double quotes\n'Paris rabbit got your back :)! Yay!'\n>>> '1975' # digits and numerals enclosed in quotes are also strings\n'1975'\nTo quote a quote, we need to \u201cescape\u201d it, by preceding it with \\\n.\nAlternatively, we can use the other type of quotation marks:\n>>> 'doesn\\'t' # use \\' to escape the single quote...\n\"doesn't\"\n>>> \"doesn't\" # ...or use double quotes instead\n\"doesn't\"\n>>> '\"Yes,\" they said.'\n'\"Yes,\" they said.'\n>>> \"\\\"Yes,\\\" they said.\"\n'\"Yes,\" they said.'\n>>> '\"Isn\\'t,\" they said.'\n'\"Isn\\'t,\" they said.'\nIn the Python shell, the string definition and output string can look\ndifferent. 
The print()\nfunction produces a more readable output, by\nomitting the enclosing quotes and by printing escaped and special characters:\n>>> s = 'First line.\\nSecond line.' # \\n means newline\n>>> s # without print(), special characters are included in the string\n'First line.\\nSecond line.'\n>>> print(s) # with print(), special characters are interpreted, so \\n produces new line\nFirst line.\nSecond line.\nIf you don\u2019t want characters prefaced by \\\nto be interpreted as\nspecial characters, you can use raw strings by adding an r\nbefore\nthe first quote:\n>>> print('C:\\some\\name') # here \\n means newline!\nC:\\some\name\n>>> print(r'C:\\some\\name') # note the r before the quote\nC:\\some\\name\nThere is one subtle aspect to raw strings: a raw string may not end in\nan odd number of \\\ncharacters; see\nthe FAQ entry for more information\nand workarounds.\nString literals can span multiple lines. One way is using triple-quotes:\n\"\"\"...\"\"\"\nor '''...'''\n. End-of-line characters are automatically\nincluded in the string, but it\u2019s possible to prevent this by adding a \\\nat\nthe end of the line. In the following example, the initial newline is not\nincluded:\n>>> print(\"\"\"\\\n... Usage: thingy [OPTIONS]\n... -h Display this usage message\n... -H hostname Hostname to connect to\n... \"\"\")\nUsage: thingy [OPTIONS]\n-h Display this usage message\n-H hostname Hostname to connect to\n>>>\nStrings can be concatenated (glued together) with the +\noperator, and\nrepeated with *\n:\n>>> # 3 times 'un', followed by 'ium'\n>>> 3 * 'un' + 'ium'\n'unununium'\nTwo or more string literals (i.e. the ones enclosed between quotes) next to each other are automatically concatenated.\n>>> 'Py' 'thon'\n'Python'\nThis feature is particularly useful when you want to break long strings:\n>>> text = ('Put several strings within parentheses '\n... 
'to have them joined together.')\n>>> text\n'Put several strings within parentheses to have them joined together.'\nThis only works with two literals though, not with variables or expressions:\n>>> prefix = 'Py'\n>>> prefix 'thon' # can't concatenate a variable and a string literal\nFile \"<stdin>\", line 1\nprefix 'thon'\n^^^^^^\nSyntaxError: invalid syntax\n>>> ('un' * 3) 'ium'\nFile \"<stdin>\", line 1\n('un' * 3) 'ium'\n^^^^^\nSyntaxError: invalid syntax\nIf you want to concatenate variables or a variable and a literal, use +\n:\n>>> prefix + 'thon'\n'Python'\nStrings can be indexed (subscripted), with the first character having index 0. There is no separate character type; a character is simply a string of size one:\n>>> word = 'Python'\n>>> word[0] # character in position 0\n'P'\n>>> word[5] # character in position 5\n'n'\nIndices may also be negative numbers, to start counting from the right:\n>>> word[-1] # last character\n'n'\n>>> word[-2] # second-last character\n'o'\n>>> word[-6]\n'P'\nNote that since -0 is the same as 0, negative indices start from -1.\nIn addition to indexing, slicing is also supported. While indexing is used to obtain individual characters, slicing allows you to obtain a substring:\n>>> word[0:2] # characters from position 0 (included) to 2 (excluded)\n'Py'\n>>> word[2:5] # characters from position 2 (included) to 5 (excluded)\n'tho'\nSlice indices have useful defaults; an omitted first index defaults to zero, an omitted second index defaults to the size of the string being sliced.\n>>> word[:2] # character from the beginning to position 2 (excluded)\n'Py'\n>>> word[4:] # characters from position 4 (included) to the end\n'on'\n>>> word[-2:] # characters from the second-last (included) to the end\n'on'\nNote how the start is always included, and the end always excluded. 
This\nmakes sure that s[:i] + s[i:]\nis always equal to s\n:\n>>> word[:2] + word[2:]\n'Python'\n>>> word[:4] + word[4:]\n'Python'\nOne way to remember how slices work is to think of the indices as pointing between characters, with the left edge of the first character numbered 0. Then the right edge of the last character of a string of n characters has index n, for example:\n+---+---+---+---+---+---+\n| P | y | t | h | o | n |\n+---+---+---+---+---+---+\n0 1 2 3 4 5 6\n-6 -5 -4 -3 -2 -1\nThe first row of numbers gives the position of the indices 0\u20266 in the string; the second row gives the corresponding negative indices. The slice from i to j consists of all characters between the edges labeled i and j, respectively.\nFor non-negative indices, the length of a slice is the difference of the\nindices, if both are within bounds. For example, the length of word[1:3]\nis\n2.\nAttempting to use an index that is too large will result in an error:\n>>> word[42] # the word only has 6 characters\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nIndexError: string index out of range\nHowever, out of range slice indexes are handled gracefully when used for slicing:\n>>> word[4:42]\n'on'\n>>> word[42:]\n''\nPython strings cannot be changed \u2014 they are immutable. 
Therefore, assigning to an indexed position in the string results in an error:\n>>> word[0] = 'J'\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nTypeError: 'str' object does not support item assignment\n>>> word[2:] = 'py'\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nTypeError: 'str' object does not support item assignment\nIf you need a different string, you should create a new one:\n>>> 'J' + word[1:]\n'Jython'\n>>> word[:2] + 'py'\n'Pypy'\nThe built-in function len()\nreturns the length of a string:\n>>> s = 'supercalifragilisticexpialidocious'\n>>> len(s)\n34\nSee also\n- Text Sequence Type \u2014 str\nStrings are examples of sequence types, and support the common operations supported by such types.\n- String Methods\nStrings support a large number of methods for basic transformations and searching.\n- f-strings\nString literals that have embedded expressions.\n- Format String Syntax\nInformation about string formatting with\nstr.format()\n.\n- printf-style String Formatting\nThe old formatting operations invoked when strings are the left operand of the\n%\noperator are described in more detail here.\n3.1.3. Lists\u00b6\nPython knows a number of compound data types, used to group together other values. The most versatile is the list, which can be written as a list of comma-separated values (items) between square brackets. Lists might contain items of different types, but usually the items all have the same type.\n>>> squares = [1, 4, 9, 16, 25]\n>>> squares\n[1, 4, 9, 16, 25]\nLike strings (and all other built-in sequence types), lists can be indexed and sliced:\n>>> squares[0] # indexing returns the item\n1\n>>> squares[-1]\n25\n>>> squares[-3:] # slicing returns a new list\n[9, 16, 25]\nLists also support operations like concatenation:\n>>> squares + [36, 49, 64, 81, 100]\n[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]\nUnlike strings, which are immutable, lists are a mutable type, i.e. 
it is possible to change their content:\n>>> cubes = [1, 8, 27, 65, 125] # something's wrong here\n>>> 4 ** 3 # the cube of 4 is 64, not 65!\n64\n>>> cubes[3] = 64 # replace the wrong value\n>>> cubes\n[1, 8, 27, 64, 125]\nYou can also add new items at the end of the list, by using\nthe list.append()\nmethod (we will see more about methods later):\n>>> cubes.append(216) # add the cube of 6\n>>> cubes.append(7 ** 3) # and the cube of 7\n>>> cubes\n[1, 8, 27, 64, 125, 216, 343]\nSimple assignment in Python never copies data. When you assign a list to a variable, the variable refers to the existing list. Any changes you make to the list through one variable will be seen through all other variables that refer to it.:\n>>> rgb = [\"Red\", \"Green\", \"Blue\"]\n>>> rgba = rgb\n>>> id(rgb) == id(rgba) # they reference the same object\nTrue\n>>> rgba.append(\"Alph\")\n>>> rgb\n[\"Red\", \"Green\", \"Blue\", \"Alph\"]\nAll slice operations return a new list containing the requested elements. This means that the following slice returns a shallow copy of the list:\n>>> correct_rgba = rgba[:]\n>>> correct_rgba[-1] = \"Alpha\"\n>>> correct_rgba\n[\"Red\", \"Green\", \"Blue\", \"Alpha\"]\n>>> rgba\n[\"Red\", \"Green\", \"Blue\", \"Alph\"]\nAssignment to slices is also possible, and this can even change the size of the list or clear it entirely:\n>>> letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g']\n>>> letters\n['a', 'b', 'c', 'd', 'e', 'f', 'g']\n>>> # replace some values\n>>> letters[2:5] = ['C', 'D', 'E']\n>>> letters\n['a', 'b', 'C', 'D', 'E', 'f', 'g']\n>>> # now remove them\n>>> letters[2:5] = []\n>>> letters\n['a', 'b', 'f', 'g']\n>>> # clear the list by replacing all the elements with an empty list\n>>> letters[:] = []\n>>> letters\n[]\nThe built-in function len()\nalso applies to lists:\n>>> letters = ['a', 'b', 'c', 'd']\n>>> len(letters)\n4\nIt is possible to nest lists (create lists containing other lists), for example:\n>>> a = ['a', 'b', 'c']\n>>> n = [1, 2, 3]\n>>> x 
= [a, n]\n>>> x\n[['a', 'b', 'c'], [1, 2, 3]]\n>>> x[0]\n['a', 'b', 'c']\n>>> x[0][1]\n'b'\n3.2. First Steps Towards Programming\u00b6\nOf course, we can use Python for more complicated tasks than adding two and two together. For instance, we can write an initial sub-sequence of the Fibonacci series as follows:\n>>> # Fibonacci series:\n>>> # the sum of two elements defines the next\n>>> a, b = 0, 1\n>>> while a < 10:\n... print(a)\n... a, b = b, a+b\n...\n0\n1\n1\n2\n3\n5\n8\nThis example introduces several new features.\nThe first line contains a multiple assignment: the variables\na\nandb\nsimultaneously get the new values 0 and 1. On the last line this is used again, demonstrating that the expressions on the right-hand side are all evaluated first before any of the assignments take place. The right-hand side expressions are evaluated from the left to the right.The\nwhile\nloop executes as long as the condition (here:a < 10\n) remains true. In Python, like in C, any non-zero integer value is true; zero is false. The condition may also be a string or list value, in fact any sequence; anything with a non-zero length is true, empty sequences are false. The test used in the example is a simple comparison. The standard comparison operators are written the same as in C:<\n(less than),>\n(greater than),==\n(equal to),<=\n(less than or equal to),>=\n(greater than or equal to) and!=\n(not equal to).The body of the loop is indented: indentation is Python\u2019s way of grouping statements. At the interactive prompt, you have to type a tab or space(s) for each indented line. In practice you will prepare more complicated input for Python with a text editor; all decent text editors have an auto-indent facility. When a compound statement is entered interactively, it must be followed by a blank line to indicate completion (since the parser cannot guess when you have typed the last line). 
Note that each line within a basic block must be indented by the same amount.\nThe\nprint()\nfunction writes the value of the argument(s) it is given. It differs from just writing the expression you want to write (as we did earlier in the calculator examples) in the way it handles multiple arguments, floating-point quantities, and strings. Strings are printed without quotes, and a space is inserted between items, so you can format things nicely, like this:\n>>> i = 256*256\n>>> print('The value of i is', i)\nThe value of i is 65536\nThe keyword argument end can be used to avoid the newline after the output, or end the output with a different string:\n>>> a, b = 0, 1\n>>> while a < 1000:\n... print(a, end=',')\n... a, b = b, a+b\n...\n0,1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4159}
+{"url": "https://docs.python.org/3/tutorial/interpreter.html", "title": "Using the Python Interpreter", "content": "2. Using the Python Interpreter\u00b6\n2.1. 
Invoking the Interpreter\u00b6\nThe Python interpreter is usually installed as /usr/local/bin/python3.14\non those machines where it is available; putting /usr/local/bin\nin your\nUnix shell\u2019s search path makes it possible to start it by typing the command:\npython3.14\nto the shell. [1] Since the choice of the directory where the interpreter lives\nis an installation option, other places are possible; check with your local\nPython guru or system administrator. (E.g., /usr/local/python\nis a\npopular alternative location.)\nOn Windows machines where you have installed Python from the Microsoft Store, the python3.14\ncommand will be available. If you have\nthe py.exe launcher installed, you can use the py\ncommand. See Python install manager for other ways to launch Python.\nTyping an end-of-file character (Control-D on Unix, Control-Z on\nWindows) at the primary prompt causes the interpreter to exit with a zero exit\nstatus. If that doesn\u2019t work, you can exit the interpreter by typing the\nfollowing command: quit()\n.\nThe interpreter\u2019s line-editing features include interactive editing, history\nsubstitution and code completion on most systems.\nPerhaps the quickest check to see whether command line editing is supported is\ntyping a word in on the Python prompt, then pressing Left arrow (or Control-b).\nIf the cursor moves, you have command line editing; see Appendix\nInteractive Input Editing and History Substitution for an introduction to the keys.\nIf nothing appears to happen, or if a sequence like ^[[D\nor ^B\nappears,\ncommand line editing isn\u2019t available; you\u2019ll only be able to use\nbackspace to remove characters from the current line.\nThe interpreter operates somewhat like the Unix shell: when called with standard input connected to a tty device, it reads and executes commands interactively; when called with a file name argument or with a file as standard input, it reads and executes a script from that file.\nA second way of 
starting the interpreter is python -c command [arg] ...\n,\nwhich executes the statement(s) in command, analogous to the shell\u2019s\n-c\noption. Since Python statements often contain spaces or other\ncharacters that are special to the shell, it is usually advised to quote\ncommand in its entirety.\nSome Python modules are also useful as scripts. These can be invoked using\npython -m module [arg] ...\n, which executes the source file for module as\nif you had spelled out its full name on the command line.\nWhen a script file is used, it is sometimes useful to be able to run the script\nand enter interactive mode afterwards. This can be done by passing -i\nbefore the script.\nAll command line options are described in Command line and environment.\n2.1.1. Argument Passing\u00b6\nWhen known to the interpreter, the script name and additional arguments\nthereafter are turned into a list of strings and assigned to the argv\nvariable in the sys\nmodule. You can access this list by executing import\nsys\n. The length of the list is at least one; when no script and no arguments\nare given, sys.argv[0]\nis an empty string. When the script name is given as\n'-'\n(meaning standard input), sys.argv[0]\nis set to '-'\n. When\n-c\ncommand is used, sys.argv[0]\nis set to '-c'\n. When\n-m\nmodule is used, sys.argv[0]\nis set to the full name of the\nlocated module. Options found after -c\ncommand or -m\nmodule are not consumed by the Python interpreter\u2019s option processing but\nleft in sys.argv\nfor the command or module to handle.\n2.1.2. Interactive Mode\u00b6\nWhen commands are read from a tty, the interpreter is said to be in interactive\nmode. In this mode it prompts for the next command with the primary prompt,\nusually three greater-than signs (>>>\n); for continuation lines it prompts\nwith the secondary prompt, by default three dots (...\n). 
The interpreter\nprints a welcome message stating its version number and a copyright notice\nbefore printing the first prompt:\n$ python3.14\nPython 3.14 (default, April 4 2024, 09:25:04)\n[GCC 10.2.0] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>>\nContinuation lines are needed when entering a multi-line construct. As an\nexample, take a look at this if\nstatement:\n>>> the_world_is_flat = True\n>>> if the_world_is_flat:\n... print(\"Be careful not to fall off!\")\n...\nBe careful not to fall off!\nFor more on interactive mode, see Interactive Mode.\n2.2. The Interpreter and Its Environment\u00b6\n2.2.1. Source Code Encoding\u00b6\nBy default, Python source files are treated as encoded in UTF-8. In that encoding, characters of most languages in the world can be used simultaneously in string literals, identifiers and comments \u2014 although the standard library only uses ASCII characters for identifiers, a convention that any portable code should follow. To display all these characters properly, your editor must recognize that the file is UTF-8, and it must use a font that supports all the characters in the file.\nTo declare an encoding other than the default one, a special comment line should be added as the first line of the file. The syntax is as follows:\n# -*- coding: encoding -*-\nwhere encoding is one of the valid codecs\nsupported by Python.\nFor example, to declare that Windows-1252 encoding is to be used, the first line of your source code file should be:\n# -*- coding: cp1252 -*-\nOne exception to the first line rule is when the source code starts with a UNIX \u201cshebang\u201d line. In this case, the encoding declaration should be added as the second line of the file. 
For example:\n#!/usr/bin/env python3\n# -*- coding: cp1252 -*-\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1385} +{"url": "https://docs.python.org/3/library/security_warnings.html", "title": "Security Considerations", "content": "Security Considerations\u00b6\nThe following modules have specific security considerations:\nhashlib\n: all constructors take a \u201cusedforsecurity\u201d keyword-only argument disabling known insecure and blocked algorithms\nhttp.server\nis not suitable for production use, only implementing basic security checks. See the security considerations.\nrandom\nshouldn\u2019t be used for security purposes, use\nsecrets\ninstead\nshelve\n: shelve is based on pickle and thus unsuitable for dealing with untrusted sources\ntempfile\n: mktemp is deprecated due to vulnerability to race conditions\nzipfile\n: maliciously prepared .zip files can cause disk volume exhaustion\nThe -I\ncommand line option can be used to run Python in isolated\nmode. 
When it cannot be used, the -P\noption or the\nPYTHONSAFEPATH\nenvironment variable can be used to not prepend a\npotentially unsafe path to sys.path\nsuch as the current directory, the\nscript\u2019s directory or an empty string.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 230} +{"url": "https://docs.python.org/3/c-api/init.html", "title": "Initialization, Finalization, and Threads", "content": "Initialization, Finalization, and Threads\u00b6\nSee Python Initialization Configuration for details on how to configure the interpreter prior to initialization.\nBefore Python Initialization\u00b6\nIn an application embedding Python, the Py_Initialize()\nfunction must\nbe called before using any other Python/C API functions; with the exception of\na few functions and the global configuration variables.\nThe following functions can be safely called before Python is initialized:\nFunctions that initialize the interpreter:\nthe runtime pre-initialization functions covered in Python Initialization Configuration\nConfiguration functions:\nPyInitFrozenExtensions()\nthe configuration functions covered in Python Initialization Configuration\nInformative functions:\nUtilities:\nthe status reporting and utility functions covered in Python Initialization Configuration\nMemory allocators:\nSynchronization:\nNote\nDespite their apparent similarity to some of the functions listed above,\nthe following functions should not be called before the interpreter has\nbeen initialized: Py_EncodeLocale()\n, Py_GetPath()\n,\nPy_GetPrefix()\n, Py_GetExecPrefix()\n,\nPy_GetProgramFullPath()\n, Py_GetPythonHome()\n,\nPy_GetProgramName()\n, PyEval_InitThreads()\n, and\nPy_RunMain()\n.\nGlobal configuration variables\u00b6\nPython has variables for the global configuration to control different features and options. 
By default, these flags are controlled by command line options.\nWhen a flag is set by an option, the value of the flag is the number of times\nthat the option was set. For example, -b\nsets Py_BytesWarningFlag\nto 1 and -bb\nsets Py_BytesWarningFlag\nto 2.\n-\nint Py_BytesWarningFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.bytes_warning\nshould be used instead, see Python Initialization Configuration.Issue a warning when comparing\nbytes\norbytearray\nwithstr\norbytes\nwithint\n. Issue an error if greater or equal to2\n.Set by the\n-b\noption.Deprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_DebugFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.parser_debug\nshould be used instead, see Python Initialization Configuration.Turn on parser debugging output (for expert only, depending on compilation options).\nSet by the\n-d\noption and thePYTHONDEBUG\nenvironment variable.Deprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_DontWriteBytecodeFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.write_bytecode\nshould be used instead, see Python Initialization Configuration.If set to non-zero, Python won\u2019t try to write\n.pyc\nfiles on the import of source modules.Set by the\n-B\noption and thePYTHONDONTWRITEBYTECODE\nenvironment variable.Deprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_FrozenFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.pathconfig_warnings\nshould be used instead, see Python Initialization Configuration.Suppress error messages when calculating the module search path in\nPy_GetPath()\n.Private flag used by\n_freeze_module\nandfrozenmain\nprograms.Deprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_HashRandomizationFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.hash_seed\nandPyConfig.use_hash_seed\nshould be used instead, 
see Python Initialization Configuration.Set to\n1\nif thePYTHONHASHSEED\nenvironment variable is set to a non-empty string.If the flag is non-zero, read the\nPYTHONHASHSEED\nenvironment variable to initialize the secret hash seed.Deprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_IgnoreEnvironmentFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.use_environment\nshould be used instead, see Python Initialization Configuration.Ignore all\nPYTHON*\nenvironment variables, e.g.PYTHONPATH\nandPYTHONHOME\n, that might be set.Deprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_InspectFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.inspect\nshould be used instead, see Python Initialization Configuration.When a script is passed as first argument or the\n-c\noption is used, enter interactive mode after executing the script or the command, even whensys.stdin\ndoes not appear to be a terminal.Set by the\n-i\noption and thePYTHONINSPECT\nenvironment variable.Deprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_InteractiveFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.interactive\nshould be used instead, see Python Initialization Configuration.Set by the\n-i\noption.Deprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_IsolatedFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.isolated\nshould be used instead, see Python Initialization Configuration.Run Python in isolated mode. 
In isolated mode\nsys.path\ncontains neither the script\u2019s directory nor the user\u2019s site-packages directory.Set by the\n-I\noption.Added in version 3.4.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_LegacyWindowsFSEncodingFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyPreConfig.legacy_windows_fs_encoding\nshould be used instead, see Python Initialization Configuration.If the flag is non-zero, use the\nmbcs\nencoding withreplace\nerror handler, instead of the UTF-8 encoding withsurrogatepass\nerror handler, for the filesystem encoding and error handler.Set to\n1\nif thePYTHONLEGACYWINDOWSFSENCODING\nenvironment variable is set to a non-empty string.See PEP 529 for more details.\nAvailability: Windows.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_LegacyWindowsStdioFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.legacy_windows_stdio\nshould be used instead, see Python Initialization Configuration.If the flag is non-zero, use\nio.FileIO\ninstead ofio._WindowsConsoleIO\nforsys\nstandard streams.Set to\n1\nif thePYTHONLEGACYWINDOWSSTDIO\nenvironment variable is set to a non-empty string.See PEP 528 for more details.\nAvailability: Windows.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_NoSiteFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.site_import\nshould be used instead, see Python Initialization Configuration.Disable the import of the module\nsite\nand the site-dependent manipulations ofsys.path\nthat it entails. 
Also disable these manipulations ifsite\nis explicitly imported later (callsite.main()\nif you want them to be triggered).Set by the\n-S\noption.Deprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_NoUserSiteDirectory\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.user_site_directory\nshould be used instead, see Python Initialization Configuration.Don\u2019t add the\nuser site-packages directory\ntosys.path\n.Set by the\n-s\nand-I\noptions, and thePYTHONNOUSERSITE\nenvironment variable.Deprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_OptimizeFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.optimization_level\nshould be used instead, see Python Initialization Configuration.Set by the\n-O\noption and thePYTHONOPTIMIZE\nenvironment variable.Deprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_QuietFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.quiet\nshould be used instead, see Python Initialization Configuration.Don\u2019t display the copyright and version messages even in interactive mode.\nSet by the\n-q\noption.Added in version 3.2.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_UnbufferedStdioFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.buffered_stdio\nshould be used instead, see Python Initialization Configuration.Force the stdout and stderr streams to be unbuffered.\nSet by the\n-u\noption and thePYTHONUNBUFFERED\nenvironment variable.Deprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_VerboseFlag\u00b6\nThis API is kept for backward compatibility: setting\nPyConfig.verbose\nshould be used instead, see Python Initialization Configuration.Print a message each time a module is initialized, showing the place (filename or built-in module) from which it is loaded. 
If greater or equal to\n2\n, print a message for each file that is checked for when searching for a module. Also provides information on module cleanup at exit.Set by the\n-v\noption and thePYTHONVERBOSE\nenvironment variable.Deprecated since version 3.12, will be removed in version 3.15.\nInitializing and finalizing the interpreter\u00b6\n-\nvoid Py_Initialize()\u00b6\n- Part of the Stable ABI.\nInitialize the Python interpreter. In an application embedding Python, this should be called before using any other Python/C API functions; see Before Python Initialization for the few exceptions.\nThis initializes the table of loaded modules (\nsys.modules\n), and creates the fundamental modulesbuiltins\n,__main__\nandsys\n. It also initializes the module search path (sys.path\n). It does not setsys.argv\n; use the Python Initialization Configuration API for that. This is a no-op when called for a second time (without callingPy_FinalizeEx()\nfirst). There is no return value; it is a fatal error if the initialization fails.Use\nPy_InitializeFromConfig()\nto customize the Python Initialization Configuration.Note\nOn Windows, changes the console mode from\nO_TEXT\ntoO_BINARY\n, which will also affect non-Python uses of the console using the C Runtime.\n-\nvoid Py_InitializeEx(int initsigs)\u00b6\n- Part of the Stable ABI.\nThis function works like\nPy_Initialize()\nif initsigs is1\n. 
If initsigs is0\n, it skips initialization registration of signal handlers, which may be useful when CPython is embedded as part of a larger application.Use\nPy_InitializeFromConfig()\nto customize the Python Initialization Configuration.\n-\nPyStatus Py_InitializeFromConfig(const PyConfig *config)\u00b6\nInitialize Python from config configuration, as described in Initialization with PyConfig.\nSee the Python Initialization Configuration section for details on pre-initializing the interpreter, populating the runtime configuration structure, and querying the returned status structure.\n-\nint Py_IsInitialized()\u00b6\n- Part of the Stable ABI.\nReturn true (nonzero) when the Python interpreter has been initialized, false (zero) if not. After\nPy_FinalizeEx()\nis called, this returns false untilPy_Initialize()\nis called again.\n-\nint Py_IsFinalizing()\u00b6\n- Part of the Stable ABI since version 3.13.\nReturn true (non-zero) if the main Python interpreter is shutting down. Return false (zero) otherwise.\nAdded in version 3.13.\n-\nint Py_FinalizeEx()\u00b6\n- Part of the Stable ABI since version 3.6.\nUndo all initializations made by\nPy_Initialize()\nand subsequent use of Python/C API functions, and destroy all sub-interpreters (seePy_NewInterpreter()\nbelow) that were created and not yet destroyed since the last call toPy_Initialize()\n. This is a no-op when called for a second time (without callingPy_Initialize()\nagain first).Since this is the reverse of\nPy_Initialize()\n, it should be called in the same thread with the same interpreter active. That means the main thread and the main interpreter. This should never be called whilePy_RunMain()\nis running.Normally the return value is\n0\n. If there were errors during finalization (flushing buffered data),-1\nis returned.Note that Python will do a best effort at freeing all memory allocated by the Python interpreter. 
Therefore, any C-Extension should make sure to correctly clean up all of the previously allocated PyObjects before using them in subsequent calls to\nPy_Initialize()\n. Otherwise it could introduce vulnerabilities and incorrect behavior.This function is provided for a number of reasons. An embedding application might want to restart Python without having to restart the application itself. An application that has loaded the Python interpreter from a dynamically loadable library (or DLL) might want to free all memory allocated by Python before unloading the DLL. During a hunt for memory leaks in an application a developer might want to free all memory allocated by Python before exiting from the application.\nBugs and caveats: The destruction of modules and objects in modules is done in random order; this may cause destructors (\n__del__()\nmethods) to fail when they depend on other objects (even functions) or modules. Dynamically loaded extension modules loaded by Python are not unloaded. Small amounts of memory allocated by the Python interpreter may not be freed (if you find a leak, please report it). Memory tied up in circular references between objects is not freed. Interned strings will all be deallocated regardless of their reference count. Some memory allocated by extension modules may not be freed. Some extensions may not work properly if their initialization routine is called more than once; this can happen if an application callsPy_Initialize()\nandPy_FinalizeEx()\nmore than once.Py_FinalizeEx()\nmust not be called recursively from within itself. 
Therefore, it must not be called by any code that may be run as part of the interpreter shutdown process, such asatexit\nhandlers, object finalizers, or any code that may be run while flushing the stdout and stderr files.Raises an auditing event\ncpython._PySys_ClearAuditHooks\nwith no arguments.Added in version 3.6.\n-\nvoid Py_Finalize()\u00b6\n- Part of the Stable ABI.\nThis is a backwards-compatible version of\nPy_FinalizeEx()\nthat disregards the return value.\n-\nint Py_BytesMain(int argc, char **argv)\u00b6\n- Part of the Stable ABI since version 3.8.\nSimilar to\nPy_Main()\nbut argv is an array of bytes strings, allowing the calling application to delegate the text decoding step to the CPython runtime.Added in version 3.8.\n-\nint Py_Main(int argc, wchar_t **argv)\u00b6\n- Part of the Stable ABI.\nThe main program for the standard interpreter, encapsulating a full initialization/finalization cycle, as well as additional behaviour to implement reading configurations settings from the environment and command line, and then executing\n__main__\nin accordance with Command line.This is made available for programs which wish to support the full CPython command line interface, rather than just embedding a Python runtime in a larger application.\nThe argc and argv parameters are similar to those which are passed to a C program\u2019s\nmain()\nfunction, except that the argv entries are first converted towchar_t\nusingPy_DecodeLocale()\n. 
It is also important to note that the argument list entries may be modified to point to strings other than those passed in (however, the contents of the strings pointed to by the argument list are not modified).The return value is\n2\nif the argument list does not represent a valid Python command line, and otherwise the same asPy_RunMain()\n.In terms of the CPython runtime configuration APIs documented in the runtime configuration section (and without accounting for error handling),\nPy_Main\nis approximately equivalent to:PyConfig config; PyConfig_InitPythonConfig(&config); PyConfig_SetArgv(&config, argc, argv); Py_InitializeFromConfig(&config); PyConfig_Clear(&config); Py_RunMain();\nIn normal usage, an embedding application will call this function instead of calling\nPy_Initialize()\n,Py_InitializeEx()\norPy_InitializeFromConfig()\ndirectly, and all settings will be applied as described elsewhere in this documentation. If this function is instead called after a preceding runtime initialization API call, then exactly which environmental and command line configuration settings will be updated is version dependent (as it depends on which settings correctly support being modified after they have already been set once when the runtime was first initialized).\n-\nint Py_RunMain(void)\u00b6\nExecutes the main module in a fully configured CPython runtime.\nExecutes the command (\nPyConfig.run_command\n), the script (PyConfig.run_filename\n) or the module (PyConfig.run_module\n) specified on the command line or in the configuration. 
If none of these values are set, runs the interactive Python prompt (REPL) using the__main__\nmodule\u2019s global namespace.If\nPyConfig.inspect\nis not set (the default), the return value will be0\nif the interpreter exits normally (that is, without raising an exception), the exit status of an unhandledSystemExit\n, or1\nfor any other unhandled exception.If\nPyConfig.inspect\nis set (such as when the-i\noption is used), rather than returning when the interpreter exits, execution will instead resume in an interactive Python prompt (REPL) using the__main__\nmodule\u2019s global namespace. If the interpreter exited with an exception, it is immediately raised in the REPL session. The function return value is then determined by the way the REPL session terminates:0\n,1\n, or the status of aSystemExit\n, as specified above.This function always finalizes the Python interpreter before it returns.\nSee Python Configuration for an example of a customized Python that always runs in isolated mode using\nPy_RunMain()\n.\n-\nint PyUnstable_AtExit(PyInterpreterState *interp, void (*func)(void*), void *data)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nRegister an\natexit\ncallback for the target interpreter interp. This is similar toPy_AtExit()\n, but takes an explicit interpreter and data pointer for the callback.There must be an attached thread state for interp.\nAdded in version 3.13.\nProcess-wide parameters\u00b6\n-\nvoid Py_SetProgramName(const wchar_t *name)\u00b6\n- Part of the Stable ABI.\nThis API is kept for backward compatibility: setting\nPyConfig.program_name\nshould be used instead, see Python Initialization Configuration.This function should be called before\nPy_Initialize()\nis called for the first time, if it is called at all. It tells the interpreter the value of theargv[0]\nargument to themain()\nfunction of the program (converted to wide characters). 
This is used byPy_GetPath()\nand some other functions below to find the Python run-time libraries relative to the interpreter executable. The default value is'python'\n. The argument should point to a zero-terminated wide character string in static storage whose contents will not change for the duration of the program\u2019s execution. No code in the Python interpreter will change the contents of this storage.Use\nPy_DecodeLocale()\nto decode a bytes string to get a wchar_t* string.Deprecated since version 3.11, will be removed in version 3.15.\n-\nwchar_t *Py_GetProgramName()\u00b6\n- Part of the Stable ABI.\nReturn the program name set with\nPyConfig.program_name\n, or the default. The returned string points into static storage; the caller should not modify its value.This function should not be called before\nPy_Initialize()\n, otherwise it returnsNULL\n.Changed in version 3.10: It now returns\nNULL\nif called beforePy_Initialize()\n.Deprecated since version 3.13, will be removed in version 3.15: Use\nPyConfig_Get(\"executable\")\n(sys.executable\n) instead.\n-\nwchar_t *Py_GetPrefix()\u00b6\n- Part of the Stable ABI.\nReturn the prefix for installed platform-independent files. This is derived through a number of complicated rules from the program name set with\nPyConfig.program_name\nand some environment variables; for example, if the program name is'/usr/local/bin/python'\n, the prefix is'/usr/local'\n. The returned string points into static storage; the caller should not modify its value. This corresponds to the prefix variable in the top-levelMakefile\nand the--prefix\nargument to the configure script at build time. The value is available to Python code assys.base_prefix\n. It is only useful on Unix. 
See also the next function.This function should not be called before\nPy_Initialize()\n, otherwise it returnsNULL\n.Changed in version 3.10: It now returns\nNULL\nif called beforePy_Initialize()\n.Deprecated since version 3.13, will be removed in version 3.15: Use\nPyConfig_Get(\"base_prefix\")\n(sys.base_prefix\n) instead. UsePyConfig_Get(\"prefix\")\n(sys.prefix\n) if virtual environments need to be handled.\n-\nwchar_t *Py_GetExecPrefix()\u00b6\n- Part of the Stable ABI.\nReturn the exec-prefix for installed platform-dependent files. This is derived through a number of complicated rules from the program name set with\nPyConfig.program_name\nand some environment variables; for example, if the program name is'/usr/local/bin/python'\n, the exec-prefix is'/usr/local'\n. The returned string points into static storage; the caller should not modify its value. This corresponds to the exec_prefix variable in the top-levelMakefile\nand the--exec-prefix\nargument to the configure script at build time. The value is available to Python code assys.base_exec_prefix\n. It is only useful on Unix.Background: The exec-prefix differs from the prefix when platform dependent files (such as executables and shared libraries) are installed in a different directory tree. In a typical installation, platform dependent files may be installed in the\n/usr/local/plat\nsubtree while platform independent may be installed in/usr/local\n.Generally speaking, a platform is a combination of hardware and software families, e.g. Sparc machines running the Solaris 2.x operating system are considered the same platform, but Intel machines running Solaris 2.x are another platform, and Intel machines running Linux are yet another platform. Different major revisions of the same operating system generally also form different platforms. 
Non-Unix operating systems are a different story; the installation strategies on those systems are so different that the prefix and exec-prefix are meaningless, and set to the empty string. Note that compiled Python bytecode files are platform independent (but not independent from the Python version by which they were compiled!).\nSystem administrators will know how to configure the mount or automount programs to share\n/usr/local\nbetween platforms while having/usr/local/plat\nbe a different filesystem for each platform.This function should not be called before\nPy_Initialize()\n, otherwise it returnsNULL\n.Changed in version 3.10: It now returns\nNULL\nif called beforePy_Initialize()\n.Deprecated since version 3.13, will be removed in version 3.15: Use\nPyConfig_Get(\"base_exec_prefix\")\n(sys.base_exec_prefix\n) instead. UsePyConfig_Get(\"exec_prefix\")\n(sys.exec_prefix\n) if virtual environments need to be handled.\n-\nwchar_t *Py_GetProgramFullPath()\u00b6\n- Part of the Stable ABI.\nReturn the full program name of the Python executable; this is computed as a side-effect of deriving the default module search path from the program name (set by\nPyConfig.program_name\n). The returned string points into static storage; the caller should not modify its value. The value is available to Python code assys.executable\n.This function should not be called before\nPy_Initialize()\n, otherwise it returnsNULL\n.Changed in version 3.10: It now returns\nNULL\nif called beforePy_Initialize()\n.Deprecated since version 3.13, will be removed in version 3.15: Use\nPyConfig_Get(\"executable\")\n(sys.executable\n) instead.\n-\nwchar_t *Py_GetPath()\u00b6\n- Part of the Stable ABI.\nReturn the default module search path; this is computed from the program name (set by\nPyConfig.program_name\n) and some environment variables. The returned string consists of a series of directory names separated by a platform dependent delimiter character. 
The delimiter character is':'\non Unix and macOS,';'\non Windows. The returned string points into static storage; the caller should not modify its value. The listsys.path\nis initialized with this value on interpreter startup; it can be (and usually is) modified later to change the search path for loading modules.This function should not be called before\nPy_Initialize()\n, otherwise it returnsNULL\n.Changed in version 3.10: It now returns\nNULL\nif called beforePy_Initialize()\n.Deprecated since version 3.13, will be removed in version 3.15: Use\nPyConfig_Get(\"module_search_paths\")\n(sys.path\n) instead.\n-\nconst char *Py_GetVersion()\u00b6\n- Part of the Stable ABI.\nReturn the version of this Python interpreter. This is a string that looks something like\n\"3.0a5+ (py3k:63103M, May 12 2008, 00:53:55) \\n[GCC 4.2.3]\"\nThe first word (up to the first space character) is the current Python version; the first characters are the major and minor version separated by a period. The returned string points into static storage; the caller should not modify its value. The value is available to Python code as\nsys.version\n.See also the\nPy_Version\nconstant.\n-\nconst char *Py_GetPlatform()\u00b6\n- Part of the Stable ABI.\nReturn the platform identifier for the current platform. On Unix, this is formed from the \u201cofficial\u201d name of the operating system, converted to lower case, followed by the major revision number; e.g., for Solaris 2.x, which is also known as SunOS 5.x, the value is\n'sunos5'\n. On macOS, it is'darwin'\n. On Windows, it is'win'\n. The returned string points into static storage; the caller should not modify its value. 
The value is available to Python code as sys.platform.
- const char *Py_GetCopyright()
- Part of the Stable ABI.
Return the official copyright string for the current Python version, for example
'Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam'
The returned string points into static storage; the caller should not modify its value. The value is available to Python code as sys.copyright.
- const char *Py_GetCompiler()
- Part of the Stable ABI.
Return an indication of the compiler used to build the current Python version, in square brackets, for example:
"[GCC 2.7.2.2]"
The returned string points into static storage; the caller should not modify its value. The value is available to Python code as part of the variable sys.version.
- const char *Py_GetBuildInfo()
- Part of the Stable ABI.
Return information about the sequence number and build date and time of the current Python interpreter instance, for example
"#67, Aug 1 1997, 22:34:28"
The returned string points into static storage; the caller should not modify its value. The value is available to Python code as part of the variable sys.version.
- void PySys_SetArgvEx(int argc, wchar_t **argv, int updatepath)
- Part of the Stable ABI.
This API is kept for backward compatibility: setting PyConfig.argv, PyConfig.parse_argv and PyConfig.safe_path should be used instead; see Python Initialization Configuration.
Set sys.argv based on argc and argv. These parameters are similar to those passed to the program's main() function, with the difference that the first entry should refer to the script file to be executed rather than the executable hosting the Python interpreter. If there isn't a script that will be run, the first entry in argv can be an empty string. If this function fails to initialize sys.argv, a fatal condition is signalled using Py_FatalError().
If updatepath is zero, this is all the function does.
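The sys.path update performed for a non-zero updatepath matches what the interpreter itself does when it runs a script: the script's directory ends up at the front of sys.path. A quick pure-Python check of that behaviour (the temp-file name below is an arbitrary choice for this sketch):

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmpdir:
    # Write a tiny script that reports the first sys.path entry.
    script = os.path.join(tmpdir, "show_path0.py")  # hypothetical name
    with open(script, "w") as f:
        f.write("import sys; print(sys.path[0])")
    # Run it; sys.path[0] should be the directory containing the script.
    out = subprocess.run([sys.executable, script],
                         capture_output=True, text=True, check=True)
    path0 = out.stdout.strip()
    assert os.path.realpath(path0) == os.path.realpath(tmpdir)
```

Running with the -I option (isolated mode) suppresses this prepending, which is the behaviour PySys_SetArgvEx exposes through updatepath.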
If updatepath is non-zero, the function also modifies sys.path according to the following algorithm:
If the name of an existing script is passed in argv[0], the absolute path of the directory where the script is located is prepended to sys.path.
Otherwise (that is, if argc is 0 or argv[0] doesn't point to an existing file name), an empty string is prepended to sys.path, which is the same as prepending the current working directory (".").
Use Py_DecodeLocale() to decode a bytes string to get a wchar_t* string.
See also the PyConfig.orig_argv and PyConfig.argv members of the Python Initialization Configuration.
Note
It is recommended that applications embedding the Python interpreter for purposes other than executing a single script pass 0 as updatepath, and update sys.path themselves if desired. See CVE 2008-5983.
On versions before 3.1.3, you can achieve the same effect by manually popping the first sys.path element after having called PySys_SetArgv(), for example using:
PyRun_SimpleString("import sys; sys.path.pop(0)\n");
Added in version 3.1.3.
Deprecated since version 3.11, will be removed in version 3.15.
- void PySys_SetArgv(int argc, wchar_t **argv)
- Part of the Stable ABI.
This API is kept for backward compatibility: setting PyConfig.argv and PyConfig.parse_argv should be used instead; see Python Initialization Configuration.
This function works like PySys_SetArgvEx() with updatepath set to 1, unless the Python interpreter was started with the -I option.
Use Py_DecodeLocale() to decode a bytes string to get a wchar_t* string.
See also the PyConfig.orig_argv and PyConfig.argv members of the Python Initialization Configuration.
Changed in version 3.4: The updatepath value depends on -I.
Deprecated since version 3.11, will be removed in version 3.15.
- void Py_SetPythonHome(const wchar_t *home)
- Part of the Stable ABI.
This API is kept for backward compatibility: setting PyConfig.home should be used instead; see Python
Initialization Configuration.
Set the default "home" directory, that is, the location of the standard Python libraries. See PYTHONHOME for the meaning of the argument string.
The argument should point to a zero-terminated character string in static storage whose contents will not change for the duration of the program's execution. No code in the Python interpreter will change the contents of this storage.
Use Py_DecodeLocale() to decode a bytes string to get a wchar_t* string.
Deprecated since version 3.11, will be removed in version 3.15.
- wchar_t *Py_GetPythonHome()
- Part of the Stable ABI.
Return the default "home", that is, the value set by PyConfig.home, or the value of the PYTHONHOME environment variable if it is set.
This function should not be called before Py_Initialize(), otherwise it returns NULL.
Changed in version 3.10: It now returns NULL if called before Py_Initialize().
Deprecated since version 3.13, will be removed in version 3.15: Use PyConfig_Get("home") or the PYTHONHOME environment variable instead.
Thread State and the Global Interpreter Lock
Unless on a free-threaded build of CPython, the Python interpreter is not fully thread-safe. In order to support multi-threaded Python programs, there's a global lock, called the global interpreter lock or GIL, that must be held by the current thread before it can safely access Python objects. Without the lock, even the simplest operations could cause problems in a multi-threaded program: for example, when two threads simultaneously increment the reference count of the same object, the reference count could end up being incremented only once instead of twice.
Therefore, the rule exists that only the thread that has acquired the GIL may operate on Python objects or call Python/C API functions. In order to emulate concurrency of execution, the interpreter regularly tries to switch threads (see sys.setswitchinterval()).
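The switch interval just mentioned is readable and writable from Python, and the GIL's release around blocking C calls such as time.sleep() can be observed by timing threads. A small sketch; the 0.001 and 0.2 second figures are arbitrary choices for this demonstration:

```python
import sys
import threading
import time

# The GIL's switch interval is exposed through sys.
old_interval = sys.getswitchinterval()
sys.setswitchinterval(0.001)
assert abs(sys.getswitchinterval() - 0.001) < 1e-9
sys.setswitchinterval(old_interval)  # restore the default

# Blocking C-level calls like time.sleep() release the GIL, so four
# threads sleeping 0.2 s each finish in roughly 0.2 s, not 0.8 s.
start = time.monotonic()
threads = [threading.Thread(target=time.sleep, args=(0.2,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert time.monotonic() - start < 0.6  # would be >= 0.8 if run serially
```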
The lock is also released around potentially blocking I/O operations like reading or writing a file, so that other Python threads can run in the meantime.
The Python interpreter keeps some thread-specific bookkeeping information inside a data structure called PyThreadState, known as a thread state. Each OS thread has a thread-local pointer to a PyThreadState; a thread state referenced by this pointer is considered to be attached.
A thread can only have one attached thread state at a time. An attached thread state is typically analogous to holding the GIL, except on free-threaded builds. On builds with the GIL enabled, attaching a thread state will block until the GIL can be acquired. However, even on builds with the GIL disabled, it is still required to have an attached thread state to call most of the C API.
In general, there will always be an attached thread state when using Python's C API. Only in some specific cases (such as in a Py_BEGIN_ALLOW_THREADS block) will the thread not have an attached thread state. If uncertain, check if PyThreadState_GetUnchecked() returns NULL.
Detaching the thread state from extension code
Most extension code manipulating the thread state has the following simple structure:
Save the thread state in a local variable.
... Do some blocking I/O operation ...
Restore the thread state from the local variable.
This is so common that a pair of macros exists to simplify it:
Py_BEGIN_ALLOW_THREADS
... Do some blocking I/O operation ...
Py_END_ALLOW_THREADS
The Py_BEGIN_ALLOW_THREADS macro opens a new block and declares a hidden local variable; the Py_END_ALLOW_THREADS macro closes the block.
The block above expands to the following code:
PyThreadState *_save;
_save = PyEval_SaveThread();
... Do some blocking I/O operation ...
PyEval_RestoreThread(_save);
Here is how these functions work:
The attached thread state holds the GIL for the entire interpreter.
When detaching the attached thread state, the GIL is released, allowing other threads to attach a thread state to their own thread, thus acquiring the GIL and being able to start executing. The pointer to the prior attached thread state is stored as a local variable. Upon reaching Py_END_ALLOW_THREADS, the thread state that was previously attached is passed to PyEval_RestoreThread(). This function will block until another thread releases its thread state, thus allowing the old thread state to get re-attached and the C API to be callable again.
On free-threaded builds, the GIL is normally out of the question, but detaching the thread state is still required for blocking I/O and long-running operations. The difference is that threads don't have to wait for the GIL to be released before attaching their thread state, allowing true multi-core parallelism.
Note
Calling system I/O functions is the most common use case for detaching the thread state, but it can also be useful before calling long-running computations which don't need access to Python objects, such as compression or cryptographic functions operating over memory buffers. For example, the standard zlib and hashlib modules detach the thread state when compressing or hashing data.
Non-Python created threads
When threads are created using the dedicated Python APIs (such as the threading module), a thread state is automatically associated with them and the code shown above is therefore correct. However, when threads are created from C (for example by a third-party library with its own thread management), they don't hold the GIL, because they don't have an attached thread state.
If you need to call Python code from these threads (often this will be part of a callback API provided by the aforementioned third-party library), you must first register these threads with the interpreter by creating an attached thread state before you can start using the Python/C API.
When you are done, you should detach the thread state, and finally free it.
The PyGILState_Ensure() and PyGILState_Release() functions do all of the above automatically. The typical idiom for calling into Python from a C thread is:
PyGILState_STATE gstate;
gstate = PyGILState_Ensure();
/* Perform Python actions here. */
result = CallSomeFunction();
/* evaluate result or handle exception */
/* Release the thread. No Python API allowed beyond this point. */
PyGILState_Release(gstate);
Note that the PyGILState_* functions assume there is only one global interpreter (created automatically by Py_Initialize()). Python supports the creation of additional interpreters (using Py_NewInterpreter()), but mixing multiple interpreters and the PyGILState_* API is unsupported. This is because PyGILState_Ensure() and similar functions default to attaching a thread state for the main interpreter, meaning that the thread can't safely interact with the calling subinterpreter.
Supporting subinterpreters in non-Python threads
If you would like to support subinterpreters with non-Python created threads, you must use the PyThreadState_* API instead of the traditional PyGILState_* API. In particular, you must store the interpreter state from the calling function and pass it to PyThreadState_New(), which will ensure that the thread state is targeting the correct interpreter:
/* The return value of PyInterpreterState_Get() from the
   function that created this thread. */
PyInterpreterState *interp = ThreadData->interp;
PyThreadState *tstate = PyThreadState_New(interp);
PyThreadState_Swap(tstate);
/* GIL of the subinterpreter is now held.
   Perform Python actions here. */
result = CallSomeFunction();
/* evaluate result or handle exception */
/* Destroy the thread state. No Python API allowed beyond this point.
*/
PyThreadState_Clear(tstate);
PyThreadState_DeleteCurrent();
Cautions about fork()
Another important thing to note about threads is their behaviour in the face of the C fork() call. On most systems with fork(), after a process forks only the thread that issued the fork will exist. This has a concrete impact both on how locks must be handled and on all stored state in CPython's runtime.
The fact that only the "current" thread remains means any locks held by other threads will never be released. Python solves this for os.fork() by acquiring the locks it uses internally before the fork, and releasing them afterwards. In addition, it resets any Lock objects in the child. When extending or embedding Python, there is no way to inform Python of additional (non-Python) locks that need to be acquired before or reset after a fork. OS facilities such as pthread_atfork() would need to be used to accomplish the same thing.
Additionally, when extending or embedding Python, calling fork() directly rather than through os.fork() (and returning to or calling into Python) may result in a deadlock by one of Python's internal locks being held by a thread that is defunct after the fork. PyOS_AfterFork_Child() tries to reset the necessary locks, but is not always able to.
The fact that all other threads go away also means that CPython's runtime state there must be cleaned up properly, which os.fork() does. This means finalizing all other PyThreadState objects belonging to the current interpreter and all other PyInterpreterState objects.
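For code written in Python rather than C, os.register_at_fork() plays the role that pthread_atfork() plays above: it lets you register callbacks that run around each os.fork(), for example to acquire and reset your own locks. A minimal sketch (POSIX-only, guarded accordingly):

```python
import os

events = []

if hasattr(os, "fork"):  # fork() does not exist on Windows
    # Callbacks fire around every subsequent os.fork() call.
    os.register_at_fork(
        before=lambda: events.append("before"),
        after_in_parent=lambda: events.append("after_in_parent"),
    )
    pid = os.fork()
    if pid == 0:
        # Child process: exit immediately, skipping cleanup handlers.
        os._exit(0)
    os.waitpid(pid, 0)
    assert events == ["before", "after_in_parent"]
```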
Due to this and the special nature of the "main" interpreter, fork() should only be called in that interpreter's "main" thread, where the CPython global runtime was originally initialized. The only exception is if exec() will be called immediately after.
Cautions regarding runtime finalization
In the late stage of interpreter shutdown, after attempting to wait for non-daemon threads to exit (though this can be interrupted by KeyboardInterrupt) and running the atexit functions, the runtime is marked as finalizing: Py_IsFinalizing() and sys.is_finalizing() return true. At this point, only the finalization thread that initiated finalization (typically the main thread) is allowed to acquire the GIL.
If any thread, other than the finalization thread, attempts to attach a thread state during finalization, either explicitly or implicitly, the thread enters a permanently blocked state where it remains until the program exits. In most cases this is harmless, but this can result in deadlock if a later stage of finalization attempts to acquire a lock owned by the blocked thread, or otherwise waits on the blocked thread.
Gross? Yes. This prevents random crashes and/or unexpectedly skipped C++ finalizations further up the call stack when such threads were forcibly exited here in CPython 3.13 and earlier. The CPython runtime thread state C APIs have never had any error reporting or handling expectations at thread state attachment time that would've allowed for graceful exit from this situation.
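The finalizing flag described above is observable from Python through sys.is_finalizing(). During normal execution it is false; note that atexit callbacks also still see it as false, since the runtime is only marked as finalizing after they have run:

```python
import sys

# While a program is running normally, the runtime is not finalizing;
# the flag only becomes true late in interpreter shutdown.
assert sys.is_finalizing() is False
```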
Changing that would require new stable C APIs and rewriting the majority of C code in the CPython ecosystem to use those with error handling.
High-level API
These are the most commonly used types and functions when writing C extension code, or when embedding the Python interpreter:
- type PyInterpreterState
- Part of the Limited API (as an opaque struct).
This data structure represents the state shared by a number of cooperating threads. Threads belonging to the same interpreter share their module administration and a few other internal items. There are no public members in this structure.
Threads belonging to different interpreters initially share nothing, except process state like available memory, open file descriptors and such. The global interpreter lock is also shared by all threads, regardless of which interpreter they belong to.
Changed in version 3.12: PEP 684 introduced the possibility of a per-interpreter GIL. See Py_NewInterpreterFromConfig().
- type PyThreadState
- Part of the Limited API (as an opaque struct).
This data structure represents the state of a single thread. The only public data member is:
- PyInterpreterState *interp
This thread's interpreter state.
- void PyEval_InitThreads()
- Part of the Stable ABI.
Deprecated function which does nothing.
In Python 3.6 and older, this function created the GIL if it didn't exist.
Changed in version 3.9: The function now does nothing.
Changed in version 3.7: This function is now called by Py_Initialize(), so you don't have to call it yourself anymore.
Changed in version 3.2: This function cannot be called before Py_Initialize() anymore.
Deprecated since version 3.9.
- PyThreadState *PyEval_SaveThread()
- Part of the Stable ABI.
Detach the attached thread state and return it.
The thread will have no thread state upon returning.
- void PyEval_RestoreThread(PyThreadState *tstate)
- Part of the Stable ABI.
Set the attached thread state to tstate. The passed thread state should not be attached, otherwise a deadlock ensues. tstate will be attached upon returning.
Note
Calling this function from a thread when the runtime is finalizing will hang the thread until the program exits, even if the thread was not created by Python. Refer to Cautions regarding runtime finalization for more details.
Changed in version 3.14: Hangs the current thread, rather than terminating it, if called while the interpreter is finalizing.
- PyThreadState *PyThreadState_Get()
- Part of the Stable ABI.
Return the attached thread state. If the thread has no attached thread state (such as when inside of a Py_BEGIN_ALLOW_THREADS block), then this issues a fatal error (so that the caller needn't check for NULL).
See also PyThreadState_GetUnchecked().
- PyThreadState *PyThreadState_GetUnchecked()
Similar to PyThreadState_Get(), but don't kill the process with a fatal error if it is NULL.
The caller is responsible for checking if the result is NULL.
Added in version 3.13: In Python 3.5 to 3.12, the function was private and known as _PyThreadState_UncheckedGet().
- PyThreadState *PyThreadState_Swap(PyThreadState *tstate)
- Part of the Stable ABI.
Set the attached thread state to tstate, and return the thread state that was attached prior to calling.
This function is safe to call without an attached thread state; it will simply return NULL indicating that there was no prior thread state.
Note
Similar to PyGILState_Ensure(), this function will hang the thread if the runtime is finalizing.
The following functions use thread-local storage, and are not compatible with sub-interpreters:
- type PyGILState_STATE
- Part of the Stable ABI.
The type of the value returned by PyGILState_Ensure() and passed to PyGILState_Release().
- enumerator PyGILState_LOCKED
The GIL was already held when PyGILState_Ensure() was called.
- enumerator PyGILState_UNLOCKED
The GIL was not held when PyGILState_Ensure() was called.
- PyGILState_STATE PyGILState_Ensure()
- Part of the Stable ABI.
Ensure that the current thread is ready to call the Python C API regardless of the current state of Python, or of the attached thread state. This may be called as many times as desired by a thread as long as each call is matched with a call to PyGILState_Release(). In general, other thread-related APIs may be used between PyGILState_Ensure() and PyGILState_Release() calls as long as the thread state is restored to its previous state before the Release(). For example, normal usage of the Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS macros is acceptable.
The return value is an opaque "handle" to the attached thread state when PyGILState_Ensure() was called, and must be passed to PyGILState_Release() to ensure Python is left in the same state.
Even though recursive calls are allowed, these handles cannot be shared: each unique call to PyGILState_Ensure() must save the handle for its call to PyGILState_Release().
When the function returns, there will be an attached thread state and the thread will be able to call arbitrary Python code. Failure is a fatal error.
Warning
Calling this function when the runtime is finalizing is unsafe. Doing so will either hang the thread until the program ends, or fully crash the interpreter in rare cases. Refer to Cautions regarding runtime finalization for more details.
Changed in version 3.14: Hangs the current thread, rather than terminating it, if called while the interpreter is finalizing.
- void PyGILState_Release(PyGILState_STATE)
- Part of the Stable ABI.
Release any resources previously acquired. After this call, Python's state will be the same as it was prior to the corresponding PyGILState_Ensure() call (but generally this state will be unknown to the caller, hence the use of the GILState API).
Every call to PyGILState_Ensure() must be matched by a call to PyGILState_Release() on the same thread.
- PyThreadState *PyGILState_GetThisThreadState()
- Part of the Stable ABI.
Get the attached thread state for this thread. May return NULL if no GILState API has been used on the current thread. Note that the main thread always has such a thread state, even if no auto-thread-state call has been made on the main thread. This is mainly a helper/diagnostic function.
Note
This function may return non-NULL even when the thread state is detached. Prefer PyThreadState_Get() or PyThreadState_GetUnchecked() for most cases.
- int PyGILState_Check()
Return 1 if the current thread is holding the GIL and 0 otherwise. This function can be called from any thread at any time. Only if it has had its thread state initialized via PyGILState_Ensure() will it return 1. This is mainly a helper/diagnostic function.
It can be useful for example in callback contexts or memory allocation functions when knowing that the GIL is locked can allow the caller to perform sensitive actions or otherwise behave differently.
Note
If the current Python process has ever created a subinterpreter, this function will always return 1. Prefer PyThreadState_GetUnchecked() for most cases.
Added in version 3.4.
The following macros are normally used without a trailing semicolon; look for example usage in the Python source distribution.
- Py_BEGIN_ALLOW_THREADS
- Part of the Stable ABI.
This macro expands to { PyThreadState *_save; _save = PyEval_SaveThread();. Note that it contains an opening brace; it must be matched with a following Py_END_ALLOW_THREADS macro. See above for further discussion of this macro.
- Py_END_ALLOW_THREADS
- Part of the Stable ABI.
This macro expands to PyEval_RestoreThread(_save); }. Note that it contains a closing brace; it must be matched with an earlier Py_BEGIN_ALLOW_THREADS macro. See above for further discussion of this macro.
- Py_BLOCK_THREADS
- Part of the Stable ABI.
This macro expands to PyEval_RestoreThread(_save);: it is equivalent to Py_END_ALLOW_THREADS without the closing brace.
- Py_UNBLOCK_THREADS
- Part of the Stable ABI.
This macro expands to _save = PyEval_SaveThread();: it is equivalent to Py_BEGIN_ALLOW_THREADS without the opening brace and variable declaration.
Low-level API
All of the following functions must be called after Py_Initialize().
Changed in version 3.7: Py_Initialize() now initializes the GIL and sets an attached thread state.
- PyInterpreterState *PyInterpreterState_New()
- Part of the Stable ABI.
Create a new interpreter state object.
An attached thread state is not needed, but may optionally exist if it is necessary to serialize calls to this function.
Raises an auditing event cpython.PyInterpreterState_New with no arguments.
- void PyInterpreterState_Clear(PyInterpreterState *interp)
- Part of the Stable ABI.
Reset all information in an interpreter state object. There must be an attached thread state for the interpreter.
Raises an auditing event cpython.PyInterpreterState_Clear with no arguments.
- void PyInterpreterState_Delete(PyInterpreterState *interp)
- Part of the Stable ABI.
Destroy an interpreter state object. There should not be an attached thread state for the target interpreter. The interpreter state must have been reset with a previous call to PyInterpreterState_Clear().
- PyThreadState *PyThreadState_New(PyInterpreterState *interp)
- Part of the Stable ABI.
Create a new thread state object belonging to the given interpreter object. An attached thread state is not needed.
- void PyThreadState_Clear(PyThreadState *tstate)
- Part of the Stable ABI.
Reset all information in a thread state object. tstate must be attached.
Changed in version 3.9: This function now calls the PyThreadState.on_delete callback. Previously, that happened in PyThreadState_Delete().
Changed in version 3.13: The PyThreadState.on_delete callback was removed.
- void PyThreadState_Delete(PyThreadState *tstate)
- Part of the Stable ABI.
Destroy a thread state object. tstate should not be attached to any thread.
tstate must have been reset with a previous call to PyThreadState_Clear().
- void PyThreadState_DeleteCurrent(void)
Detach the attached thread state (which must have been reset with a previous call to PyThreadState_Clear()) and then destroy it.
No thread state will be attached upon returning.
- PyFrameObject *PyThreadState_GetFrame(PyThreadState *tstate)
- Part of the Stable ABI since version 3.10.
Get the current frame of the Python thread state tstate.
Return a strong reference. Return NULL if no frame is currently executing.
See also PyEval_GetFrame().
tstate must not be NULL, and must be attached.
Added in version 3.9.
- uint64_t PyThreadState_GetID(PyThreadState *tstate)
- Part of the Stable ABI since version 3.10.
Get the unique thread state identifier of the Python thread state tstate.
tstate must not be NULL, and must be attached.
Added in version 3.9.
- PyInterpreterState *PyThreadState_GetInterpreter(PyThreadState *tstate)
- Part of the Stable ABI since version 3.10.
Get the interpreter of the Python thread state tstate.
tstate must not be NULL, and must be attached.
Added in version 3.9.
- void PyThreadState_EnterTracing(PyThreadState *tstate)
Suspend tracing and profiling in the Python thread state tstate.
Resume them using the PyThreadState_LeaveTracing() function.
Added in version 3.11.
- void PyThreadState_LeaveTracing(PyThreadState *tstate)
Resume tracing and profiling in the Python thread state tstate suspended by the PyThreadState_EnterTracing() function.
See also the PyEval_SetTrace() and PyEval_SetProfile() functions.
Added in version 3.11.
- int PyUnstable_ThreadState_SetStackProtection(PyThreadState *tstate, void *stack_start_addr, size_t stack_size)
- This is Unstable API. It may change without warning in minor releases.
Set the stack protection start address and stack protection size of a Python thread state.
On success, return 0.
On failure, set an exception and return -1.
CPython implements recursion control for C code by raising RecursionError when it notices that the machine execution stack is close to overflow. See for example the Py_EnterRecursiveCall() function. For this, it needs to know the location of the current thread's stack, which it normally gets from the operating system. When the stack is changed, for example using context switching techniques like the Boost library's boost::context, you must call PyUnstable_ThreadState_SetStackProtection() to inform CPython of the change.
Call PyUnstable_ThreadState_SetStackProtection() either before or after changing the stack. Do not call any other Python C API between the call and the stack change.
See PyUnstable_ThreadState_ResetStackProtection() for undoing this operation.
Added in version 3.14.1.
Warning
This function was added in a bugfix release, and extensions that use it will be incompatible with Python 3.14.0. Most packaging tools for Python are not able to handle this incompatibility automatically, and will need explicit configuration. When using PyPA standards (wheels and source distributions), specify Requires-Python: != 3.14.0.* in core metadata.
- void PyUnstable_ThreadState_ResetStackProtection(PyThreadState *tstate)
- This is Unstable API. It may change without warning in minor releases.
Reset the stack protection start address and stack protection size of a Python thread state to the operating system defaults.
See PyUnstable_ThreadState_SetStackProtection() for an explanation.
Added in version 3.14.1.
Warning
This function was added in a bugfix release, and extensions that use it will be incompatible with Python 3.14.0. Most packaging tools for Python are not able to handle this incompatibility automatically, and will need explicit configuration.
When using PyPA standards (wheels and source distributions), specify Requires-Python: != 3.14.0.* in core metadata.
- PyInterpreterState *PyInterpreterState_Get(void)
- Part of the Stable ABI since version 3.9.
Get the current interpreter.
Issue a fatal error if there is no attached thread state. It cannot return NULL.
Added in version 3.9.
- int64_t PyInterpreterState_GetID(PyInterpreterState *interp)
- Part of the Stable ABI since version 3.7.
Return the interpreter's unique ID. If there was any error in doing so then -1 is returned and an error is set.
The caller must have an attached thread state.
Added in version 3.7.
- PyObject *PyInterpreterState_GetDict(PyInterpreterState *interp)
- Return value: Borrowed reference. Part of the Stable ABI since version 3.8.
Return a dictionary in which interpreter-specific data may be stored. If this function returns NULL then no exception has been raised and the caller should assume no interpreter-specific dict is available.
This is not a replacement for PyModule_GetState(), which extensions should use to store interpreter-specific state information.
The returned dictionary is borrowed from the interpreter and is valid until interpreter shutdown.
Added in version 3.8.
- typedef PyObject *(*_PyFrameEvalFunction)(PyThreadState *tstate, _PyInterpreterFrame *frame, int throwflag)
Type of a frame evaluation function.
The throwflag parameter is used by the throw() method of generators: if non-zero, handle the current exception.
Changed in version 3.9: The function now takes a tstate parameter.
Changed in version 3.11: The frame parameter changed from PyFrameObject* to _PyInterpreterFrame*.
- _PyFrameEvalFunction _PyInterpreterState_GetEvalFrameFunc(PyInterpreterState *interp)
Get the frame evaluation function.
See the PEP 523 "Adding a frame evaluation API to CPython".
Added in version 3.9.
- void
_PyInterpreterState_SetEvalFrameFunc(PyInterpreterState *interp, _PyFrameEvalFunction eval_frame)
Set the frame evaluation function.
See the PEP 523 "Adding a frame evaluation API to CPython".
Added in version 3.9.
- PyObject *PyThreadState_GetDict()
- Return value: Borrowed reference. Part of the Stable ABI.
Return a dictionary in which extensions can store thread-specific state information. Each extension should use a unique key to store state in the dictionary. It is okay to call this function when no thread state is attached. If this function returns NULL, no exception has been raised and the caller should assume no thread state is attached.
- int PyThreadState_SetAsyncExc(unsigned long id, PyObject *exc)
- Part of the Stable ABI.
Asynchronously raise an exception in a thread. The id argument is the thread id of the target thread; exc is the exception object to be raised. This function does not steal any references to exc. To prevent naive misuse, you must write your own C extension to call this. Must be called with an attached thread state. Returns the number of thread states modified; this is normally one, but will be zero if the thread id isn't found. If exc is NULL, the pending exception (if any) for the thread is cleared. This raises no exceptions.
Changed in version 3.7: The type of the id parameter changed from long to unsigned long.
- void PyEval_AcquireThread(PyThreadState *tstate)
- Part of the Stable ABI.
Attach tstate, which must not be NULL or already attached, to the current thread.
The calling thread must not already have an attached thread state.
Note
Calling this function from a thread when the runtime is finalizing will hang the thread until the program exits, even if the thread was not created by Python.
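As an aside, the PyThreadState_SetAsyncExc() entry above can also be exercised from Python through ctypes.pythonapi. This is a CPython-specific backdoor shown purely for illustration; the entry above explicitly warns against naive misuse, and production code should go through a proper C extension:

```python
import ctypes
import threading
import time

results = []

def worker():
    try:
        while True:
            time.sleep(0.01)  # the async exception lands between bytecodes
    except RuntimeError:
        results.append("interrupted")

t = threading.Thread(target=worker)
t.start()
time.sleep(0.05)

# Raise RuntimeError asynchronously in the worker thread; the return
# value is the number of thread states modified (normally 1).
modified = ctypes.pythonapi.PyThreadState_SetAsyncExc(
    ctypes.c_ulong(t.ident), ctypes.py_object(RuntimeError))
assert modified == 1
t.join(timeout=5)
assert results == ["interrupted"]
```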
Refer to Cautions regarding runtime finalization for more details.\nChanged in version 3.8: Updated to be consistent with\nPyEval_RestoreThread()\n,Py_END_ALLOW_THREADS()\n, andPyGILState_Ensure()\n, and terminate the current thread if called while the interpreter is finalizing.Changed in version 3.14: Hangs the current thread, rather than terminating it, if called while the interpreter is finalizing.\nPyEval_RestoreThread()\nis a higher-level function which is always available (even when threads have not been initialized).\n-\nvoid PyEval_ReleaseThread(PyThreadState *tstate)\u00b6\n- Part of the Stable ABI.\nDetach the attached thread state. The tstate argument, which must not be\nNULL\n, is only used to check that it represents the attached thread state \u2014 if it isn\u2019t, a fatal error is reported.PyEval_SaveThread()\nis a higher-level function which is always available (even when threads have not been initialized).\nSub-interpreter support\u00b6\nWhile in most uses, you will only embed a single Python interpreter, there are cases where you need to create several independent interpreters in the same process and perhaps even in the same thread. Sub-interpreters allow you to do that.\nThe \u201cmain\u201d interpreter is the first one created when the runtime initializes.\nIt is usually the only Python interpreter in a process. Unlike sub-interpreters,\nthe main interpreter has unique process-global responsibilities like signal\nhandling. It is also responsible for execution during runtime initialization and\nis usually the active interpreter during runtime finalization. The\nPyInterpreterState_Main()\nfunction returns a pointer to its state.\nYou can switch between sub-interpreters using the PyThreadState_Swap()\nfunction. You can create and destroy them using the following functions:\n-\ntype PyInterpreterConfig\u00b6\nStructure containing most parameters to configure a sub-interpreter. 
Its values are used only in\nPy_NewInterpreterFromConfig()\nand never modified by the runtime.Added in version 3.12.\nStructure fields:\n-\nint use_main_obmalloc\u00b6\nIf this is\n0\nthen the sub-interpreter will use its own \u201cobject\u201d allocator state. Otherwise it will use (share) the main interpreter\u2019s.If this is\n0\nthencheck_multi_interp_extensions\nmust be1\n(non-zero). If this is1\nthengil\nmust not bePyInterpreterConfig_OWN_GIL\n.\n-\nint allow_fork\u00b6\nIf this is\n0\nthen the runtime will not support forking the process in any thread where the sub-interpreter is currently active. Otherwise fork is unrestricted.Note that the\nsubprocess\nmodule still works when fork is disallowed.\n-\nint allow_exec\u00b6\nIf this is\n0\nthen the runtime will not support replacing the current process via exec (e.g.os.execv()\n) in any thread where the sub-interpreter is currently active. Otherwise exec is unrestricted.Note that the\nsubprocess\nmodule still works when exec is disallowed.\n-\nint allow_threads\u00b6\nIf this is\n0\nthen the sub-interpreter\u2019sthreading\nmodule won\u2019t create threads. Otherwise threads are allowed.\n-\nint allow_daemon_threads\u00b6\nIf this is\n0\nthen the sub-interpreter\u2019sthreading\nmodule won\u2019t create daemon threads. Otherwise daemon threads are allowed (as long asallow_threads\nis non-zero).\n-\nint check_multi_interp_extensions\u00b6\nIf this is\n0\nthen all extension modules may be imported, including legacy (single-phase init) modules, in any thread where the sub-interpreter is currently active. Otherwise only multi-phase init extension modules (see PEP 489) may be imported. (Also seePy_mod_multiple_interpreters\n.)This must be\n1\n(non-zero) ifuse_main_obmalloc\nis0\n.\n-\nint gil\u00b6\nThis determines the operation of the GIL for the sub-interpreter. 
It may be one of the following:\n-\nPyInterpreterConfig_DEFAULT_GIL\u00b6\nUse the default selection (\nPyInterpreterConfig_SHARED_GIL\n).\n-\nPyInterpreterConfig_SHARED_GIL\u00b6\nUse (share) the main interpreter\u2019s GIL.\n-\nPyInterpreterConfig_OWN_GIL\u00b6\nUse the sub-interpreter\u2019s own GIL.\nIf this is\nPyInterpreterConfig_OWN_GIL\nthen\nPyInterpreterConfig.use_main_obmalloc\nmust be\n0\n.\n-\nPyStatus Py_NewInterpreterFromConfig(PyThreadState **tstate_p, const PyInterpreterConfig *config)\u00b6\nCreate a new sub-interpreter. This is an (almost) totally separate environment for the execution of Python code. In particular, the new interpreter has separate, independent versions of all imported modules, including the fundamental modules\nbuiltins\n,\n__main__\nand\nsys\n. The table of loaded modules (\nsys.modules\n) and the module search path (\nsys.path\n) are also separate. The new environment has no\nsys.argv\nvariable. It has new standard I/O stream file objects\nsys.stdin\n,\nsys.stdout\nand\nsys.stderr\n(however these refer to the same underlying file descriptors). The given config controls the options with which the interpreter is initialized.\nUpon success, tstate_p will be set to the first thread state created in the new sub-interpreter. This thread state is attached. Note that no actual thread is created; see the discussion of thread states below. If creation of the new interpreter is unsuccessful, tstate_p is set to\nNULL\n; no exception is set since the exception state is stored in the attached thread state, which might not exist. Like all other Python/C API functions, an attached thread state must be present before calling this function, but it might be detached upon returning. On success, the returned thread state will be attached. If the sub-interpreter is created with its own GIL then the attached thread state of the calling interpreter will be detached. 
When the function returns, the new interpreter\u2019s thread state will be attached to the current thread and the previous interpreter\u2019s attached thread state will remain detached.\nAdded in version 3.12.\nSub-interpreters are most effective when isolated from each other, with certain functionality restricted:\nPyInterpreterConfig config = { .use_main_obmalloc = 0, .allow_fork = 0, .allow_exec = 0, .allow_threads = 1, .allow_daemon_threads = 0, .check_multi_interp_extensions = 1, .gil = PyInterpreterConfig_OWN_GIL, }; PyThreadState *tstate = NULL; PyStatus status = Py_NewInterpreterFromConfig(&tstate, &config); if (PyStatus_Exception(status)) { Py_ExitStatusException(status); }\nNote that the config is used only briefly and does not get modified. During initialization the config\u2019s values are converted into various\nPyInterpreterState\nvalues. A read-only copy of the config may be stored internally on thePyInterpreterState\n.Extension modules are shared between (sub-)interpreters as follows:\nFor modules using multi-phase initialization, e.g.\nPyModule_FromDefAndSpec()\n, a separate module object is created and initialized for each interpreter. Only C-level static and global variables are shared between these module objects.For modules using single-phase initialization, e.g.\nPyModule_Create()\n, the first time a particular extension is imported, it is initialized normally, and a (shallow) copy of its module\u2019s dictionary is squirreled away. When the same extension is imported by another (sub-)interpreter, a new module is initialized and filled with the contents of this copy; the extension\u2019sinit\nfunction is not called. 
Objects in the module\u2019s dictionary thus end up shared across (sub-)interpreters, which might cause unwanted behavior (see Bugs and caveats below).Note that this is different from what happens when an extension is imported after the interpreter has been completely re-initialized by calling\nPy_FinalizeEx()\nandPy_Initialize()\n; in that case, the extension\u2019sinitmodule\nfunction is called again. As with multi-phase initialization, this means that only C-level static and global variables are shared between these modules.\n-\nPyThreadState *Py_NewInterpreter(void)\u00b6\n- Part of the Stable ABI.\nCreate a new sub-interpreter. This is essentially just a wrapper around\nPy_NewInterpreterFromConfig()\nwith a config that preserves the existing behavior. The result is an unisolated sub-interpreter that shares the main interpreter\u2019s GIL, allows fork/exec, allows daemon threads, and allows single-phase init modules.\n-\nvoid Py_EndInterpreter(PyThreadState *tstate)\u00b6\n- Part of the Stable ABI.\nDestroy the (sub-)interpreter represented by the given thread state. The given thread state must be attached. When the call returns, there will be no attached thread state. All thread states associated with this interpreter are destroyed.\nPy_FinalizeEx()\nwill destroy all sub-interpreters that haven\u2019t been explicitly destroyed at that point.\nA Per-Interpreter GIL\u00b6\nUsing Py_NewInterpreterFromConfig()\nyou can create\na sub-interpreter that is completely isolated from other interpreters,\nincluding having its own GIL. The most important benefit of this\nisolation is that such an interpreter can execute Python code without\nbeing blocked by other interpreters or blocking any others. Thus a\nsingle Python process can truly take advantage of multiple CPU cores\nwhen running Python code. 
The isolation also encourages a different\napproach to concurrency than that of just using threads.\n(See PEP 554 and PEP 684.)\nUsing an isolated interpreter requires vigilance in preserving that\nisolation. That especially means not sharing any objects or mutable\nstate without guarantees about thread-safety. Even objects that are\notherwise immutable (e.g. None\n, (1, 5)\n) can\u2019t normally be shared\nbecause of the refcount. One simple but less-efficient approach around\nthis is to use a global lock around all use of some state (or object).\nAlternately, effectively immutable objects (like integers or strings)\ncan be made safe in spite of their refcounts by making them immortal.\nIn fact, this has been done for the builtin singletons, small integers,\nand a number of other builtin objects.\nIf you preserve isolation then you will have access to proper multi-core computing without the complications that come with free-threading. Failure to preserve isolation will expose you to the full consequences of free-threading, including races and hard-to-debug crashes.\nAside from that, one of the main challenges of using multiple isolated interpreters is how to communicate between them safely (not break isolation) and efficiently. The runtime and stdlib do not provide any standard approach to this yet. A future stdlib module would help mitigate the effort of preserving isolation and expose effective tools for communicating (and sharing) data between interpreters.\nAdded in version 3.12.\nBugs and caveats\u00b6\nBecause sub-interpreters (and the main interpreter) are part of the same\nprocess, the insulation between them isn\u2019t perfect \u2014 for example, using\nlow-level file operations like os.close()\nthey can\n(accidentally or maliciously) affect each other\u2019s open files. 
Because of the\nway extensions are shared between (sub-)interpreters, some extensions may not\nwork properly; this is especially likely when using single-phase initialization\nor (static) global variables.\nIt is possible to insert objects created in one sub-interpreter into\na namespace of another (sub-)interpreter; this should be avoided if possible.\nSpecial care should be taken to avoid sharing user-defined functions, methods, instances or classes between sub-interpreters, since import operations executed by such objects may affect the wrong (sub-)interpreter\u2019s dictionary of loaded modules. It is equally important to avoid sharing objects from which the above are reachable.\nAlso note that combining this functionality with PyGILState_*\nAPIs\nis delicate, because these APIs assume a bijection between Python thread states\nand OS-level threads, an assumption broken by the presence of sub-interpreters.\nIt is highly recommended that you don\u2019t switch sub-interpreters between a pair\nof matching PyGILState_Ensure()\nand PyGILState_Release()\ncalls.\nFurthermore, extensions (such as ctypes\n) using these APIs to allow calling\nof Python code from non-Python created threads will probably be broken when using\nsub-interpreters.\nAsynchronous Notifications\u00b6\nA mechanism is provided to make asynchronous notifications to the main interpreter thread. These notifications take the form of a function pointer and a void pointer argument.\n-\nint Py_AddPendingCall(int (*func)(void*), void *arg)\u00b6\n- Part of the Stable ABI.\nSchedule a function to be called from the main interpreter thread. On success,\n0\nis returned and func is queued for being called in the main thread. On failure,-1\nis returned without setting any exception.When successfully queued, func will be eventually called from the main interpreter thread with the argument arg. 
It will be called asynchronously with respect to normally running Python code, but with both these conditions met:\non a bytecode boundary;\nwith the main thread holding an attached thread state (func can therefore use the full C API).\nfunc must return\n0\non success, or\n-1\non failure with an exception set. func won\u2019t be interrupted to perform another asynchronous notification recursively, but it can still be interrupted to switch threads if the thread state is detached. This function doesn\u2019t need an attached thread state. However, to call this function in a subinterpreter, the caller must have an attached thread state. Otherwise, the function func can be scheduled to be called from the wrong interpreter.\nWarning\nThis is a low-level function, only useful for very special cases. There is no guarantee that func will be called as quickly as possible. If the main thread is busy executing a system call, func won\u2019t be called before the system call returns. This function is generally not suitable for calling Python code from arbitrary C threads. Instead, use the PyGILState API.\nAdded in version 3.1.\nChanged in version 3.9: If this function is called in a subinterpreter, the function func is now scheduled to be called from the subinterpreter, rather than being called from the main interpreter. Each subinterpreter now has its own list of scheduled calls.\nChanged in version 3.12: This function now always schedules func to be run in the main interpreter.\n-\nint Py_MakePendingCalls(void)\u00b6\n- Part of the Stable ABI.\nExecute all pending calls. This is usually executed automatically by the interpreter.\nThis function returns\n0\non success, and returns\n-1\nwith an exception set on failure. If this is not called in the main thread of the main interpreter, this function does nothing and returns\n0\n. 
The caller must hold an attached thread state.\nAdded in version 3.1.\nChanged in version 3.12: This function only runs pending calls in the main interpreter.\nProfiling and Tracing\u00b6\nThe Python interpreter provides some low-level support for attaching profiling and execution tracing facilities. These are used for profiling, debugging, and coverage analysis tools.\nThis C interface allows the profiling or tracing code to avoid the overhead of calling through Python-level callable objects, making a direct C function call instead. The essential attributes of the facility have not changed; the interface allows trace functions to be installed per-thread, and the basic events reported to the trace function are the same as had been reported to the Python-level trace functions in previous versions.\n-\ntypedef int (*Py_tracefunc)(PyObject *obj, PyFrameObject *frame, int what, PyObject *arg)\u00b6\nThe type of the trace function registered using\nPyEval_SetProfile()\nand\nPyEval_SetTrace()\n. The first parameter is the object passed to the registration function as obj, frame is the frame object to which the event pertains, what is one of the constants\nPyTrace_CALL\n,\nPyTrace_EXCEPTION\n,\nPyTrace_LINE\n,\nPyTrace_RETURN\n,\nPyTrace_C_CALL\n,\nPyTrace_C_EXCEPTION\n,\nPyTrace_C_RETURN\n, or\nPyTrace_OPCODE\n, and arg depends on the value of what:\nPyTrace_CALL: always\nPy_None\n.\nPyTrace_EXCEPTION: exception information as returned by\nsys.exc_info()\n.\nPyTrace_LINE: always\nPy_None\n.\nPyTrace_RETURN: value being returned to the caller, or\nNULL\nif caused by an exception.\nPyTrace_C_CALL: function object being called.\nPyTrace_C_EXCEPTION: function object being called.\nPyTrace_C_RETURN: function object being called.\nPyTrace_OPCODE: always\nPy_None\n.\n-\nint PyTrace_CALL\u00b6\nThe value of the what parameter to a\nPy_tracefunc\nfunction when a new call to a function or method is being reported, or a new entry into a generator. 
Note that the creation of the iterator for a generator function is not reported as there is no control transfer to the Python bytecode in the corresponding frame.\n-\nint PyTrace_EXCEPTION\u00b6\nThe value of the what parameter to a\nPy_tracefunc\nfunction when an exception has been raised. The callback function is called with this value for what after any bytecode whose processing leaves the exception set within the frame being executed. The effect of this is that as exception propagation causes the Python stack to unwind, the callback is called upon return to each frame as the exception propagates. Only trace functions receive these events; they are not needed by the profiler.\n-\nint PyTrace_LINE\u00b6\nThe value passed as the what parameter to a\nPy_tracefunc\nfunction (but not a profiling function) when a line-number event is being reported. It may be disabled for a frame by setting\nf_trace_lines\nto 0 on that frame.\n-\nint PyTrace_RETURN\u00b6\nThe value for the what parameter to\nPy_tracefunc\nfunctions when a call is about to return.\n-\nint PyTrace_C_CALL\u00b6\nThe value for the what parameter to\nPy_tracefunc\nfunctions when a C function is about to be called.\n-\nint PyTrace_C_EXCEPTION\u00b6\nThe value for the what parameter to\nPy_tracefunc\nfunctions when a C function has raised an exception.\n-\nint PyTrace_C_RETURN\u00b6\nThe value for the what parameter to\nPy_tracefunc\nfunctions when a C function has returned.\n-\nint PyTrace_OPCODE\u00b6\nThe value for the what parameter to\nPy_tracefunc\nfunctions (but not profiling functions) when a new opcode is about to be executed. This event is not emitted by default: it must be explicitly requested by setting\nf_trace_opcodes\nto 1 on the frame.\n-\nvoid PyEval_SetProfile(Py_tracefunc func, PyObject *obj)\u00b6\nSet the profiler function to func. The obj parameter is passed to the function as its first parameter, and may be any Python object, or\nNULL\n. 
If the profile function needs to maintain state, using a different value for obj for each thread provides a convenient and thread-safe place to store it. The profile function is called for all monitored events except\nPyTrace_LINE\n,\nPyTrace_OPCODE\n, and\nPyTrace_EXCEPTION\n. See also the\nsys.setprofile()\nfunction. The caller must have an attached thread state.\n-\nvoid PyEval_SetProfileAllThreads(Py_tracefunc func, PyObject *obj)\u00b6\nLike\nPyEval_SetProfile()\nbut sets the profile function in all running threads belonging to the current interpreter, instead of only on the current thread. The caller must have an attached thread state.\nLike\nPyEval_SetProfile()\n, this function ignores any exceptions raised while setting the profile functions in all threads.\nAdded in version 3.12.\n-\nvoid PyEval_SetTrace(Py_tracefunc func, PyObject *obj)\u00b6\nSet the tracing function to func. This is similar to\nPyEval_SetProfile()\n, except the tracing function does receive line-number events and per-opcode events, but does not receive any event related to C function objects being called. Any trace function registered using\nPyEval_SetTrace()\nwill not receive\nPyTrace_C_CALL\n,\nPyTrace_C_EXCEPTION\nor\nPyTrace_C_RETURN\nas a value for the what parameter. See also the\nsys.settrace()\nfunction. The caller must have an attached thread state.\n-\nvoid PyEval_SetTraceAllThreads(Py_tracefunc func, PyObject *obj)\u00b6\nLike\nPyEval_SetTrace()\nbut sets the tracing function in all running threads belonging to the current interpreter, instead of only on the current thread. The caller must have an attached thread state.\nLike\nPyEval_SetTrace()\n, this function ignores any exceptions raised while setting the trace functions in all threads.\nAdded in version 3.12.\nReference tracing\u00b6\nAdded in version 3.13.\n-\ntypedef int (*PyRefTracer)(PyObject*, int event, void *data)\u00b6\nThe type of the trace function registered using\nPyRefTracer_SetTracer()\n. 
The first parameter is a Python object that has just been created (when event is set to\nPyRefTracer_CREATE\n) or about to be destroyed (when event is set to\nPyRefTracer_DESTROY\n). The data argument is the opaque pointer that was provided when\nPyRefTracer_SetTracer()\nwas called.\nAdded in version 3.13.\n-\nint PyRefTracer_CREATE\u00b6\nThe value for the event parameter to\nPyRefTracer\nfunctions when a Python object has been created.\n-\nint PyRefTracer_DESTROY\u00b6\nThe value for the event parameter to\nPyRefTracer\nfunctions when a Python object has been destroyed.\n-\nint PyRefTracer_SetTracer(PyRefTracer tracer, void *data)\u00b6\nRegister a reference tracer function. The function will be called when a new Python object has been created or when an object is going to be destroyed. If data is provided, this opaque pointer will be passed to the tracer function whenever it is called. Return\n0\non success. Set an exception and return\n-1\non error. Note that tracer functions must not create Python objects, as doing so would make the call re-entrant. The tracer also must not clear any existing exception or set an exception. A thread state will be active every time the tracer function is called.\nThere must be an attached thread state when calling this function.\nAdded in version 3.13.\n-\nPyRefTracer PyRefTracer_GetTracer(void **data)\u00b6\nGet the registered reference tracer function and the value of the opaque data pointer that was registered when\nPyRefTracer_SetTracer()\nwas called. 
If no tracer was registered, this function will return NULL and will set the data pointer to NULL. There must be an attached thread state when calling this function.\nAdded in version 3.13.\nAdvanced Debugger Support\u00b6\nThese functions are only intended to be used by advanced debugging tools.\n-\nPyInterpreterState *PyInterpreterState_Head()\u00b6\nReturn the interpreter state object at the head of the list of all such objects.\n-\nPyInterpreterState *PyInterpreterState_Main()\u00b6\nReturn the main interpreter state object.\n-\nPyInterpreterState *PyInterpreterState_Next(PyInterpreterState *interp)\u00b6\nReturn the next interpreter state object after interp from the list of all such objects.\n-\nPyThreadState *PyInterpreterState_ThreadHead(PyInterpreterState *interp)\u00b6\nReturn the pointer to the first\nPyThreadState\nobject in the list of threads associated with the interpreter interp.\n-\nPyThreadState *PyThreadState_Next(PyThreadState *tstate)\u00b6\nReturn the next thread state object after tstate from the list of all such objects belonging to the same\nPyInterpreterState\nobject.\nThread Local Storage Support\u00b6\nThe Python interpreter provides low-level support for thread-local storage\n(TLS) which wraps the underlying native TLS implementation to support the\nPython-level thread local storage API (threading.local\n). The\nCPython C level APIs are similar to those offered by pthreads and Windows:\nuse a thread key and functions to associate a void* value per\nthread.\nA thread state does not need to be attached when calling these functions; they supply their own locking.\nNote that Python.h\ndoes not include the declaration of the TLS APIs;\nyou need to include pythread.h\nto use thread-local storage.\nNote\nNone of these API functions handle memory management on behalf of the void* values. You need to allocate and deallocate them yourself. 
If the void* values happen to be PyObject*, these functions don\u2019t do refcount operations on them either.\nThread Specific Storage (TSS) API\u00b6\nTSS API is introduced to supersede the use of the existing TLS API within the\nCPython interpreter. This API uses a new type Py_tss_t\ninstead of\nint to represent thread keys.\nAdded in version 3.7.\nSee also\n\u201cA New C-API for Thread-Local Storage in CPython\u201d (PEP 539)\n-\ntype Py_tss_t\u00b6\nThis data structure represents the state of a thread key, the definition of which may depend on the underlying TLS implementation, and it has an internal field representing the key\u2019s initialization state. There are no public members in this structure.\nWhen Py_LIMITED_API is not defined, static allocation of this type by\nPy_tss_NEEDS_INIT\nis allowed.\n-\nPy_tss_NEEDS_INIT\u00b6\nThis macro expands to the initializer for\nPy_tss_t\nvariables. Note that this macro won\u2019t be defined with Py_LIMITED_API.\nDynamic Allocation\u00b6\nDynamic allocation of the Py_tss_t\n, required in extension modules\nbuilt with Py_LIMITED_API, where static allocation of this type\nis not possible due to its implementation being opaque at build time.\n-\nPy_tss_t *PyThread_tss_alloc()\u00b6\n- Part of the Stable ABI since version 3.7.\nReturn a value which is the same state as a value initialized with\nPy_tss_NEEDS_INIT\n, orNULL\nin the case of dynamic allocation failure.\n-\nvoid PyThread_tss_free(Py_tss_t *key)\u00b6\n- Part of the Stable ABI since version 3.7.\nFree the given key allocated by\nPyThread_tss_alloc()\n, after first callingPyThread_tss_delete()\nto ensure any associated thread locals have been unassigned. This is a no-op if the key argument isNULL\n.Note\nA freed key becomes a dangling pointer. You should reset the key to\nNULL\n.\nMethods\u00b6\nThe parameter key of these functions must not be NULL\n. 
Moreover, the\nbehaviors of PyThread_tss_set()\nand PyThread_tss_get()\nare\nundefined if the given Py_tss_t\nhas not been initialized by\nPyThread_tss_create()\n.\n-\nint PyThread_tss_is_created(Py_tss_t *key)\u00b6\n- Part of the Stable ABI since version 3.7.\nReturn a non-zero value if the given\nPy_tss_t\nhas been initialized byPyThread_tss_create()\n.\n-\nint PyThread_tss_create(Py_tss_t *key)\u00b6\n- Part of the Stable ABI since version 3.7.\nReturn a zero value on successful initialization of a TSS key. The behavior is undefined if the value pointed to by the key argument is not initialized by\nPy_tss_NEEDS_INIT\n. This function can be called repeatedly on the same key \u2013 calling it on an already initialized key is a no-op and immediately returns success.\n-\nvoid PyThread_tss_delete(Py_tss_t *key)\u00b6\n- Part of the Stable ABI since version 3.7.\nDestroy a TSS key to forget the values associated with the key across all threads, and change the key\u2019s initialization state to uninitialized. A destroyed key is able to be initialized again by\nPyThread_tss_create()\n. This function can be called repeatedly on the same key \u2013 calling it on an already destroyed key is a no-op.\n-\nint PyThread_tss_set(Py_tss_t *key, void *value)\u00b6\n- Part of the Stable ABI since version 3.7.\nReturn a zero value to indicate successfully associating a void* value with a TSS key in the current thread. Each thread has a distinct mapping of the key to a void* value.\n-\nvoid *PyThread_tss_get(Py_tss_t *key)\u00b6\n- Part of the Stable ABI since version 3.7.\nReturn the void* value associated with a TSS key in the current thread. 
This returns\nNULL\nif no value is associated with the key in the current thread.\nThread Local Storage (TLS) API\u00b6\nDeprecated since version 3.7: This API is superseded by Thread Specific Storage (TSS) API.\nNote\nThis version of the API does not support platforms where the native TLS key\nis defined in a way that cannot be safely cast to int\n. On such platforms,\nPyThread_create_key()\nwill return immediately with a failure status,\nand the other TLS functions will all be no-ops on such platforms.\nDue to the compatibility problem noted above, this version of the API should not be used in new code.\n-\nint PyThread_create_key()\u00b6\n- Part of the Stable ABI.\n-\nvoid PyThread_delete_key(int key)\u00b6\n- Part of the Stable ABI.\n-\nint PyThread_set_key_value(int key, void *value)\u00b6\n- Part of the Stable ABI.\n-\nvoid *PyThread_get_key_value(int key)\u00b6\n- Part of the Stable ABI.\n-\nvoid PyThread_delete_key_value(int key)\u00b6\n- Part of the Stable ABI.\n-\nvoid PyThread_ReInitTLS()\u00b6\n- Part of the Stable ABI.\nSynchronization Primitives\u00b6\nThe C-API provides a basic mutual exclusion lock.\n-\ntype PyMutex\u00b6\nA mutual exclusion lock. The\nPyMutex\nshould be initialized to zero to represent the unlocked state. For example:PyMutex mutex = {0};\nInstances of\nPyMutex\nshould not be copied or moved. Both the contents and address of aPyMutex\nare meaningful, and it must remain at a fixed, writable location in memory.Note\nA\nPyMutex\ncurrently occupies one byte, but the size should be considered unstable. The size may change in future Python releases without a deprecation period.Added in version 3.13.\n-\nvoid PyMutex_Lock(PyMutex *m)\u00b6\nLock mutex m. If another thread has already locked it, the calling thread will block until the mutex is unlocked. While blocked, the thread will temporarily detach the thread state if one exists.\nAdded in version 3.13.\n-\nvoid PyMutex_Unlock(PyMutex *m)\u00b6\nUnlock mutex m. 
The mutex must be locked \u2014 otherwise, the function will issue a fatal error.\nAdded in version 3.13.\n-\nint PyMutex_IsLocked(PyMutex *m)\u00b6\nReturns non-zero if the mutex m is currently locked, zero otherwise.\nNote\nThis function is intended for use in assertions and debugging only and should not be used to make concurrency control decisions, as the lock state may change immediately after the check.\nAdded in version 3.14.\nPython Critical Section API\u00b6\nThe critical section API provides a deadlock avoidance layer on top of per-object locks for free-threaded CPython. They are intended to replace reliance on the global interpreter lock, and are no-ops in versions of Python with the global interpreter lock.\nCritical sections are intended to be used for custom types implemented\nin C-API extensions. They should generally not be used with built-in types like\nlist\nand dict\nbecause their public C-APIs\nalready use critical sections internally, with the notable\nexception of PyDict_Next()\n, which requires critical section\nto be acquired externally.\nCritical sections avoid deadlocks by implicitly suspending active critical\nsections, hence, they do not provide exclusive access such as provided by\ntraditional locks like PyMutex\n. When a critical section is started,\nthe per-object lock for the object is acquired. If the code executed inside the\ncritical section calls C-API functions then it can suspend the critical section thereby\nreleasing the per-object lock, so other threads can acquire the per-object lock\nfor the same object.\nVariants that accept PyMutex\npointers rather than Python objects are also\navailable. 
Use these variants to start a critical section in a situation where\nthere is no PyObject\n\u2013 for example, when working with a C type that\ndoes not extend or wrap PyObject\nbut still needs to call into the C\nAPI in a manner that might lead to deadlocks.\nThe functions and structs used by the macros are exposed for cases where C macros are not available. They should only be used as in the given macro expansions. Note that the sizes and contents of the structures may change in future Python versions.\nNote\nOperations that need to lock two objects at once must use\nPy_BEGIN_CRITICAL_SECTION2\n. You cannot use nested critical\nsections to lock more than one object at once, because the inner critical\nsection may suspend the outer critical sections. This API does not provide\na way to lock more than two objects at once.\nExample usage:\nstatic PyObject *\nset_field(MyObject *self, PyObject *value)\n{\nPy_BEGIN_CRITICAL_SECTION(self);\nPy_SETREF(self->field, Py_XNewRef(value));\nPy_END_CRITICAL_SECTION();\nPy_RETURN_NONE;\n}\nIn the above example, Py_SETREF\ncalls Py_DECREF\n, which\ncan call arbitrary code through an object\u2019s deallocation function. 
The critical section API avoids potential deadlocks due to reentrancy and lock ordering by allowing the runtime to temporarily suspend the critical section if the code triggered by the finalizer blocks and calls PyEval_SaveThread().
-
Py_BEGIN_CRITICAL_SECTION(op)\u00b6
Acquires the per-object lock for the object op and begins a critical section.
In the free-threaded build, this macro expands to:
{ PyCriticalSection _py_cs; PyCriticalSection_Begin(&_py_cs, (PyObject*)(op))
In the default build, this macro expands to {.
Added in version 3.13.
-
Py_BEGIN_CRITICAL_SECTION_MUTEX(m)\u00b6
Locks the mutex m and begins a critical section.
In the free-threaded build, this macro expands to:
{ PyCriticalSection _py_cs; PyCriticalSection_BeginMutex(&_py_cs, m)
Note that unlike Py_BEGIN_CRITICAL_SECTION, there is no cast for the argument of the macro - it must be a PyMutex pointer.
On the default build, this macro expands to {.
Added in version 3.14.
-
Py_END_CRITICAL_SECTION()\u00b6
Ends the critical section and releases the per-object lock.
In the free-threaded build, this macro expands to:
PyCriticalSection_End(&_py_cs); }
In the default build, this macro expands to }.
Added in version 3.13.
-
Py_BEGIN_CRITICAL_SECTION2(a, b)\u00b6
Acquires the per-object locks for the objects a and b and begins a critical section.
The locks are acquired in a consistent order (lowest address first) to avoid lock ordering deadlocks.
In the free-threaded build, this macro expands to:
{ PyCriticalSection2 _py_cs2; PyCriticalSection2_Begin(&_py_cs2, (PyObject*)(a), (PyObject*)(b))
In the default build, this macro expands to {.
Added in version 3.13.
-
Py_BEGIN_CRITICAL_SECTION2_MUTEX(m1, m2)\u00b6
Locks the mutexes m1 and m2 and begins a critical section.
In the free-threaded build, this macro expands to:
{ PyCriticalSection2 _py_cs2; PyCriticalSection2_BeginMutex(&_py_cs2, m1, m2)
Note that unlike Py_BEGIN_CRITICAL_SECTION2, there is no cast for the arguments of the macro - they must be PyMutex pointers.
On the default build, this macro expands to {.
Added in version 3.14.
-
Py_END_CRITICAL_SECTION2()\u00b6
Ends the critical section and releases the per-object locks.
In the free-threaded build, this macro expands to:
PyCriticalSection2_End(&_py_cs2); }
In the default build, this macro expands to }.
Added in version 3.13.
Legacy Locking APIs\u00b6
These APIs are obsolete since Python 3.13 with the introduction of PyMutex.
Changed in version 3.15: These APIs are now a simple wrapper around PyMutex.
-
type PyThread_type_lock\u00b6
A pointer to a mutual exclusion lock.
-
type PyLockStatus\u00b6
The result of acquiring a lock with a timeout.
-
enumerator PY_LOCK_FAILURE\u00b6
Failed to acquire the lock.
-
enumerator PY_LOCK_ACQUIRED\u00b6
The lock was successfully acquired.
-
enumerator PY_LOCK_INTR\u00b6
The lock was interrupted by a signal.
-
PyThread_type_lock PyThread_allocate_lock(void)\u00b6
- Part of the Stable ABI.
Allocate a new lock.
On success, this function returns a lock; on failure, this function returns 0 without an exception set.
The caller does not need to hold an attached thread state.
Changed in version 3.15: This function now always uses PyMutex.
In prior versions, this would use a lock provided by the operating system.\n-\nvoid PyThread_free_lock(PyThread_type_lock lock)\u00b6\n- Part of the Stable ABI.\nDestroy lock. The lock should not be held by any thread when calling this.\nThe caller does not need to hold an attached thread state.\n-\nPyLockStatus PyThread_acquire_lock_timed(PyThread_type_lock lock, long long microseconds, int intr_flag)\u00b6\n- Part of the Stable ABI.\nAcquire lock with a timeout.\nThis will wait for microseconds microseconds to acquire the lock. If the timeout expires, this function returns\nPY_LOCK_FAILURE\n. If microseconds is-1\n, this will wait indefinitely until the lock has been released.If intr_flag is\n1\n, acquiring the lock may be interrupted by a signal, in which case this function returnsPY_LOCK_INTR\n. Upon interruption, it\u2019s generally expected that the caller makes a call toPy_MakePendingCalls()\nto propagate an exception to Python code.If the lock is successfully acquired, this function returns\nPY_LOCK_ACQUIRED\n.The caller does not need to hold an attached thread state.\n-\nint PyThread_acquire_lock(PyThread_type_lock lock, int waitflag)\u00b6\n- Part of the Stable ABI.\nAcquire lock.\nIf waitflag is\n1\nand another thread currently holds the lock, this function will wait until the lock can be acquired and will always return1\n.If waitflag is\n0\nand another thread holds the lock, this function will not wait and instead return0\n. If the lock is not held by any other thread, then this function will acquire it and return1\n.Unlike\nPyThread_acquire_lock_timed()\n, acquiring the lock cannot be interrupted by a signal.The caller does not need to hold an attached thread state.\n-\nint PyThread_release_lock(PyThread_type_lock lock)\u00b6\n- Part of the Stable ABI.\nRelease lock. 
If lock is not held, then this function issues a fatal error.\nThe caller does not need to hold an attached thread state.\nOperating System Thread APIs\u00b6\n-\nPYTHREAD_INVALID_THREAD_ID\u00b6\nSentinel value for an invalid thread ID.\nThis is currently equivalent to\n(unsigned long)-1\n.\n-\nunsigned long PyThread_start_new_thread(void (*func)(void*), void *arg)\u00b6\n- Part of the Stable ABI.\nStart function func in a new thread with argument arg. The resulting thread is not intended to be joined.\nfunc must not be\nNULL\n, but arg may beNULL\n.On success, this function returns the identifier of the new thread; on failure, this returns\nPYTHREAD_INVALID_THREAD_ID\n.The caller does not need to hold an attached thread state.\n-\nunsigned long PyThread_get_thread_ident(void)\u00b6\n- Part of the Stable ABI.\nReturn the identifier of the current thread, which will never be zero.\nThis function cannot fail, and the caller does not need to hold an attached thread state.\nSee also\n-\nPyObject *PyThread_GetInfo(void)\u00b6\n- Part of the Stable ABI since version 3.3.\nGet general information about the current thread in the form of a struct sequence object. 
This information is accessible as sys.thread_info in Python.
On success, this returns a new strong reference to the thread information; on failure, this returns NULL with an exception set.
The caller must hold an attached thread state.
-
PY_HAVE_THREAD_NATIVE_ID\u00b6
This macro is defined when the system supports native thread IDs.
-
unsigned long PyThread_get_thread_native_id(void)\u00b6
- Part of the Stable ABI on platforms with native thread IDs.
Get the native identifier of the current thread as it was assigned by the operating system\u2019s kernel, which will never be less than zero.
This function is only available when PY_HAVE_THREAD_NATIVE_ID is defined.
This function cannot fail, and the caller does not need to hold an attached thread state.
See also
-
void PyThread_exit_thread(void)\u00b6
- Part of the Stable ABI.
Terminate the current thread. This function is generally considered unsafe and should be avoided. It is kept solely for backwards compatibility.
This function is only safe to call if all functions in the full call stack are written to safely allow it.
Warning
If the current system uses POSIX threads (also known as \u201cpthreads\u201d), this calls pthread_exit(3), which attempts to unwind the stack and call C++ destructors on some libc implementations. However, if a noexcept function is reached, it may terminate the process. Other systems, such as macOS, do no unwinding.
On Windows, this function calls _endthreadex(), which kills the thread without calling C++ destructors.
In any case, there is a risk of corruption on the thread\u2019s stack.
Deprecated since version 3.14.
-
void PyThread_init_thread(void)\u00b6
- Part of the Stable ABI.
Initialize PyThread* APIs.
Python executes this function automatically, so there\u2019s little need to call it from an extension module.\n-\nint PyThread_set_stacksize(size_t size)\u00b6\n- Part of the Stable ABI.\nSet the stack size of the current thread to size bytes.\nThis function returns\n0\non success,-1\nif size is invalid, or-2\nif the system does not support changing the stack size. This function does not set exceptions.The caller does not need to hold an attached thread state.\n-\nsize_t PyThread_get_stacksize(void)\u00b6\n- Part of the Stable ABI.\nReturn the stack size of the current thread in bytes, or\n0\nif the system\u2019s default stack size is in use.The caller does not need to hold an attached thread state.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 23972} +{"url": "https://docs.python.org/3/library/removed.html", "title": "Removed Modules", "content": "Removed Modules\u00b6\nThe modules described in this chapter have been removed from the Python standard library. 
They are documented here to help people find replacements.
aifc \u2014 Read and write AIFF and AIFC files
asynchat \u2014 Asynchronous socket command/response handler
asyncore \u2014 Asynchronous socket handler
audioop \u2014 Manipulate raw audio data
cgi \u2014 Common Gateway Interface support
cgitb \u2014 Traceback manager for CGI scripts
chunk \u2014 Read IFF chunked data
crypt \u2014 Function to check Unix passwords
distutils \u2014 Building and installing Python modules
imghdr \u2014 Determine the type of an image
imp \u2014 Access the import internals
mailcap \u2014 Mailcap file handling
msilib \u2014 Read and write Microsoft Installer files
nis \u2014 Interface to Sun\u2019s NIS (Yellow Pages)
nntplib \u2014 NNTP protocol client
ossaudiodev \u2014 Access to OSS-compatible audio devices
pipes \u2014 Interface to shell pipelines
smtpd \u2014 SMTP Server
sndhdr \u2014 Determine type of sound files
spwd \u2014 The shadow password database
sunau \u2014 Read and write Sun AU files
telnetlib \u2014 Telnet client
uu \u2014 Encode and decode uuencode files
xdrlib \u2014 Encode and decode XDR data", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 267}
+{"url": "https://docs.python.org/3/library/getopt.html", "title": " \u2014 C-style parser for command line options", "content": "getopt \u2014 C-style parser for command line options\u00b6
Source code: Lib/getopt.py
Note
This module is considered feature complete. A more declarative and extensible alternative to this API is provided in the optparse module. Further functional enhancements for command line parameter processing are provided either as third party modules on PyPI, or else as features in the argparse module.
This module helps scripts to parse the command line arguments in sys.argv. It supports the same conventions as the Unix getopt() function (including the special meanings of arguments of the form \u2018-\u2019 and \u2018--\u2018).
Long\noptions similar to those supported by GNU software may be used as well via an\noptional third argument.\nUsers who are unfamiliar with the Unix getopt()\nfunction should consider\nusing the argparse\nmodule instead. Users who are familiar with the Unix\ngetopt()\nfunction, but would like to get equivalent behavior while\nwriting less code and getting better help and error messages should consider\nusing the optparse\nmodule. See Choosing an argument parsing library for\nadditional details.\nThis module provides two functions and an exception:\n- getopt.getopt(args, shortopts, longopts=[])\u00b6\nParses command line options and parameter list. args is the argument list to be parsed, without the leading reference to the running program. Typically, this means\nsys.argv[1:]\n. shortopts is the string of option letters that the script wants to recognize, with options that require an argument followed by a colon (':'\n) and options that accept an optional argument followed by two colons ('::'\n); i.e., the same format that Unixgetopt()\nuses.Note\nUnlike GNU\ngetopt()\n, after a non-option argument, all further arguments are considered also non-options. This is similar to the way non-GNU Unix systems work.longopts, if specified, must be a list of strings with the names of the long options which should be supported. The leading\n'--'\ncharacters should not be included in the option name. Long options which require an argument should be followed by an equal sign ('='\n). Long options which accept an optional argument should be followed by an equal sign and question mark ('=?'\n). To accept only long options, shortopts should be an empty string. Long options on the command line can be recognized so long as they provide a prefix of the option name that matches exactly one of the accepted options. 
For example, if longopts is ['foo', 'frob'], the option --fo will match as --foo, but --f will not match uniquely, so GetoptError will be raised.
The return value consists of two elements: the first is a list of (option, value) pairs; the second is the list of program arguments left after the option list was stripped (this is a trailing slice of args). Each option-and-value pair returned has the option as its first element, prefixed with a hyphen for short options (e.g., '-x') or two hyphens for long options (e.g., '--long-option'), and the option argument as its second element, or an empty string if the option has no argument. The options occur in the list in the same order in which they were found, thus allowing multiple occurrences. Long and short options may be mixed.
Changed in version 3.14: Optional arguments are supported.
- getopt.gnu_getopt(args, shortopts, longopts=[])\u00b6
This function works like getopt(), except that GNU style scanning mode is used by default. This means that option and non-option arguments may be intermixed. The getopt() function stops processing options as soon as a non-option argument is encountered.
If the first character of the option string is '+', or if the environment variable POSIXLY_CORRECT is set, then option processing stops as soon as a non-option argument is encountered.
If the first character of the option string is '-', non-option arguments that are followed by options are added to the list of option-and-value pairs as a pair that has None as its first element and the list of non-option arguments as its second element. The second element of the gnu_getopt() result is a list of program arguments after the last option.
Changed in version 3.14: Support for returning intermixed options and non-option arguments in order.
- exception getopt.GetoptError\u00b6
This is raised when an unrecognized option is found in the argument list or when an option requiring an argument is given none.
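The long-option prefix matching described above can be exercised directly; the following is a minimal sketch (not part of the original page) using the same hypothetical 'foo'/'frob' long options from the text:

```python
import getopt

# '--fo' is a prefix of exactly one accepted option ('foo'),
# so it is expanded to the full name '--foo'.
optlist, args = getopt.getopt(['--fo', 'a1'], '', ['foo', 'frob'])
print(optlist, args)  # [('--foo', '')] ['a1']

# '--f' is a prefix of both 'foo' and 'frob', so the match is
# ambiguous and GetoptError is raised.
try:
    getopt.getopt(['--f'], '', ['foo', 'frob'])
except getopt.GetoptError as err:
    print(err)
```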
The argument to the exception is a string indicating the cause of the error. For long options, an argument given to an option which does not require one will also cause this exception to be raised. The attributes\nmsg\nandopt\ngive the error message and related option; if there is no specific option to which the exception relates,opt\nis an empty string.\n- exception getopt.error\u00b6\nAlias for\nGetoptError\n; for backward compatibility.\nAn example using only Unix style options:\n>>> import getopt\n>>> args = '-a -b -cfoo -d bar a1 a2'.split()\n>>> args\n['-a', '-b', '-cfoo', '-d', 'bar', 'a1', 'a2']\n>>> optlist, args = getopt.getopt(args, 'abc:d:')\n>>> optlist\n[('-a', ''), ('-b', ''), ('-c', 'foo'), ('-d', 'bar')]\n>>> args\n['a1', 'a2']\nUsing long option names is equally easy:\n>>> s = '--condition=foo --testing --output-file abc.def -x a1 a2'\n>>> args = s.split()\n>>> args\n['--condition=foo', '--testing', '--output-file', 'abc.def', '-x', 'a1', 'a2']\n>>> optlist, args = getopt.getopt(args, 'x', [\n... 
'condition=', 'output-file=', 'testing'])
>>> optlist
[('--condition', 'foo'), ('--testing', ''), ('--output-file', 'abc.def'), ('-x', '')]
>>> args
['a1', 'a2']
Optional arguments should be specified explicitly:
>>> s = '-Con -C --color=off --color a1 a2'
>>> args = s.split()
>>> args
['-Con', '-C', '--color=off', '--color', 'a1', 'a2']
>>> optlist, args = getopt.getopt(args, 'C::', ['color=?'])
>>> optlist
[('-C', 'on'), ('-C', ''), ('--color', 'off'), ('--color', '')]
>>> args
['a1', 'a2']
The order of options and non-option arguments can be preserved:
>>> s = 'a1 -x a2 a3 a4 --long a5 a6'
>>> args = s.split()
>>> args
['a1', '-x', 'a2', 'a3', 'a4', '--long', 'a5', 'a6']
>>> optlist, args = getopt.gnu_getopt(args, '-x:', ['long='])
>>> optlist
[(None, ['a1']), ('-x', 'a2'), (None, ['a3', 'a4']), ('--long', 'a5')]
>>> args
['a6']
In a script, typical usage is something like this:
import getopt, sys

def main():
    try:
        opts, args = getopt.getopt(sys.argv[1:], "ho:v", ["help", "output="])
    except getopt.GetoptError as err:
        # print help information and exit:
        print(err)  # will print something like "option -a not recognized"
        usage()
        sys.exit(2)
    output = None
    verbose = False
    for o, a in opts:
        if o == "-v":
            verbose = True
        elif o in ("-h", "--help"):
            usage()
            sys.exit()
        elif o in ("-o", "--output"):
            output = a
        else:
            assert False, "unhandled option"
    process(args, output=output, verbose=verbose)

if __name__ == "__main__":
    main()

Note that an equivalent command line interface could be produced with less code and more informative help and error messages by using the optparse module:
import optparse

if __name__ == '__main__':
    parser = optparse.OptionParser()
    parser.add_option('-o', '--output')
    parser.add_option('-v', dest='verbose', action='store_true')
    opts, args = parser.parse_args()
    process(args, output=opts.output, verbose=opts.verbose)

A roughly equivalent command line interface for this case
can also be produced by using the argparse module:
import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('-o', '--output')
    parser.add_argument('-v', dest='verbose', action='store_true')
    parser.add_argument('rest', nargs='*')
    args = parser.parse_args()
    process(args.rest, output=args.output, verbose=args.verbose)

See Choosing an argument parsing library for details on how the argparse version of this code differs in behaviour from the optparse (and getopt) version.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1939}
+{"url": "https://docs.python.org/3/library/superseded.html", "title": "Superseded Modules", "content": "Superseded Modules\u00b6
The modules described in this chapter have been superseded by other modules for most use cases, and are retained primarily to preserve backwards compatibility.
Modules may appear in this chapter because they only cover a limited subset of a problem space, and a more generally applicable solution is available elsewhere in the standard library (for example, getopt covers the very specific task of \u201cmimic the C getopt() API in Python\u201d, rather than the broader command line option parsing and argument parsing capabilities offered by optparse and argparse).
Alternatively, modules may appear in this chapter because they are deprecated outright, and awaiting removal in a future release, or they are soft deprecated and their use is actively discouraged in new projects.
With the removal of various obsolete modules through PEP 594, there are currently no modules in this latter category.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 227}
+{"url": "https://docs.python.org/3/library/cmdline.html", "title": "Modules command-line interface (CLI)", "content": "Modules command-line interface (CLI)\u00b6
The following modules have a command-line interface.
encodings.rot_13
this
See also the Python command-line interface.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 69}
+{"url": "https://docs.python.org/3/library/syslog.html", "title": " \u2014 Unix syslog library routines", "content": "syslog \u2014 Unix syslog library routines\u00b6
This module provides an interface to the Unix syslog library routines. Refer to the Unix manual pages for a detailed description of the syslog facility.
Availability: Unix, not WASI, not iOS.
This module wraps the system syslog family of routines. A pure Python library that can speak to a syslog server is available in the logging.handlers module as SysLogHandler.
The module defines the following functions:
- syslog.syslog(message)\u00b6
- syslog.syslog(priority, message)
Send the string message to the system logger. A trailing newline is added if necessary. Each message is tagged with a priority composed of a facility and a level. The optional priority argument, which defaults to LOG_INFO, determines the message priority.
If the facility is not encoded in priority using logical-or (LOG_INFO | LOG_USER\n), the value given in theopenlog()\ncall is used.If\nopenlog()\nhas not been called prior to the call tosyslog()\n,openlog()\nwill be called with no arguments.Raises an auditing event\nsyslog.syslog\nwith argumentspriority\n,message\n.Changed in version 3.2: In previous versions,\nopenlog()\nwould not be called automatically if it wasn\u2019t called prior to the call tosyslog()\n, deferring to the syslog implementation to callopenlog()\n.Changed in version 3.12: This function is restricted in subinterpreters. (Only code that runs in multiple interpreters is affected and the restriction is not relevant for most users.)\nopenlog()\nmust be called in the main interpreter beforesyslog()\nmay be used in a subinterpreter. Otherwise it will raiseRuntimeError\n.\n- syslog.openlog([ident[, logoption[, facility]]])\u00b6\nLogging options of subsequent\nsyslog()\ncalls can be set by callingopenlog()\n.syslog()\nwill callopenlog()\nwith no arguments if the log is not currently open.The optional ident keyword argument is a string which is prepended to every message, and defaults to\nsys.argv[0]\nwith leading path components stripped. The optional logoption keyword argument (default is 0) is a bit field \u2013 see below for possible values to combine. The optional facility keyword argument (default isLOG_USER\n) sets the default facility for messages which do not have a facility explicitly encoded.Raises an auditing event\nsyslog.openlog\nwith argumentsident\n,logoption\n,facility\n.Changed in version 3.2: In previous versions, keyword arguments were not allowed, and ident was required.\nChanged in version 3.12: This function is restricted in subinterpreters. (Only code that runs in multiple interpreters is affected and the restriction is not relevant for most users.) This may only be called in the main interpreter. 
It will raise\nRuntimeError\nif called in a subinterpreter.\n- syslog.closelog()\u00b6\nReset the syslog module values and call the system library\ncloselog()\n.This causes the module to behave as it does when initially imported. For example,\nopenlog()\nwill be called on the firstsyslog()\ncall (ifopenlog()\nhasn\u2019t already been called), and ident and otheropenlog()\nparameters are reset to defaults.Raises an auditing event\nsyslog.closelog\nwith no arguments.Changed in version 3.12: This function is restricted in subinterpreters. (Only code that runs in multiple interpreters is affected and the restriction is not relevant for most users.) This may only be called in the main interpreter. It will raise\nRuntimeError\nif called in a subinterpreter.\n- syslog.setlogmask(maskpri)\u00b6\nSet the priority mask to maskpri and return the previous mask value. Calls to\nsyslog()\nwith a priority level not set in maskpri are ignored. The default is to log all priorities. The functionLOG_MASK(pri)\ncalculates the mask for the individual priority pri. 
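The mask helpers for setlogmask() can be sketched as follows (an illustration, not from the original page; it assumes a Unix system where the syslog module is available):

```python
import syslog

# Keep LOG_WARNING and everything more severe, dropping the rest;
# setlogmask() returns the previous mask so it can be restored.
previous = syslog.setlogmask(syslog.LOG_UPTO(syslog.LOG_WARNING))

syslog.syslog(syslog.LOG_DEBUG, 'suppressed by the mask')
syslog.syslog(syslog.LOG_WARNING, 'still logged')

# A mask for a single priority is just that priority's LOG_MASK() bit.
assert syslog.LOG_MASK(syslog.LOG_ERR) == 1 << syslog.LOG_ERR

syslog.setlogmask(previous)  # restore the original mask
```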
The function LOG_UPTO(pri) calculates the mask for all priorities up to and including pri.
Raises an auditing event syslog.setlogmask with argument maskpri.
The module defines the following constants:
- syslog.LOG_EMERG\u00b6
- syslog.LOG_ALERT\u00b6
- syslog.LOG_CRIT\u00b6
- syslog.LOG_ERR\u00b6
- syslog.LOG_WARNING\u00b6
- syslog.LOG_NOTICE\u00b6
- syslog.LOG_INFO\u00b6
- syslog.LOG_DEBUG\u00b6
Priority levels (high to low).
- syslog.LOG_AUTH\u00b6
- syslog.LOG_AUTHPRIV\u00b6
- syslog.LOG_CRON\u00b6
- syslog.LOG_DAEMON\u00b6
- syslog.LOG_FTP\u00b6
- syslog.LOG_INSTALL\u00b6
- syslog.LOG_KERN\u00b6
- syslog.LOG_LAUNCHD\u00b6
- syslog.LOG_LPR\u00b6
- syslog.LOG_MAIL\u00b6
- syslog.LOG_NETINFO\u00b6
- syslog.LOG_NEWS\u00b6
- syslog.LOG_RAS\u00b6
- syslog.LOG_REMOTEAUTH\u00b6
- syslog.LOG_SYSLOG\u00b6
- syslog.LOG_USER\u00b6
- syslog.LOG_UUCP\u00b6
- syslog.LOG_LOCAL0\u00b6
- syslog.LOG_LOCAL1\u00b6
- syslog.LOG_LOCAL2\u00b6
- syslog.LOG_LOCAL3\u00b6
- syslog.LOG_LOCAL4\u00b6
- syslog.LOG_LOCAL5\u00b6
- syslog.LOG_LOCAL6\u00b6
- syslog.LOG_LOCAL7\u00b6
Facilities, depending on availability in <syslog.h> for LOG_AUTHPRIV, LOG_FTP, LOG_NETINFO, LOG_REMOTEAUTH, LOG_INSTALL and LOG_RAS.
Changed in version 3.13: Added LOG_FTP, LOG_NETINFO, LOG_REMOTEAUTH, LOG_INSTALL, LOG_RAS, and LOG_LAUNCHD.
- syslog.LOG_PID\u00b6
- syslog.LOG_CONS\u00b6
- syslog.LOG_NDELAY\u00b6
- syslog.LOG_ODELAY\u00b6
- syslog.LOG_NOWAIT\u00b6
- syslog.LOG_PERROR\u00b6
Log options, depending on availability in <syslog.h> for LOG_ODELAY, LOG_NOWAIT and LOG_PERROR.
Examples\u00b6
Simple example\u00b6
A simple set of examples:
import syslog

syslog.syslog('Processing started')
if error:
    syslog.syslog(syslog.LOG_ERR, 'Processing started')

An example of setting some log options; these would include the process ID in logged messages, and write the messages to the destination facility used for mail logging:
syslog.openlog(logoption=syslog.LOG_PID,
facility=syslog.LOG_MAIL)\nsyslog.syslog('E-mail processing initiated...')", "code_snippets": ["\n\n", "\n", " ", "\n ", " ", "\n", " ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 1366} +{"url": "https://docs.python.org/3/library/resource.html", "title": " \u2014 Resource usage information", "content": "resource\n\u2014 Resource usage information\u00b6\nThis module provides basic mechanisms for measuring and controlling system resources utilized by a program.\nAvailability: Unix, not WASI.\nSymbolic constants are used to specify particular system resources and to request usage information about either the current process or its children.\nAn OSError\nis raised on syscall failure.\nResource Limits\u00b6\nResources usage can be limited using the setrlimit()\nfunction described\nbelow. Each resource is controlled by a pair of limits: a soft limit and a hard\nlimit. The soft limit is the current limit, and may be lowered or raised by a\nprocess over time. The soft limit can never exceed the hard limit. The hard\nlimit can be lowered to any value greater than the soft limit, but not raised.\n(Only processes with the effective UID of the super-user can raise a hard\nlimit.)\nThe specific resources that can be limited are system dependent. They are described in the getrlimit(2) man page. The resources listed below are supported when the underlying operating system supports them; resources which cannot be checked or controlled by the operating system are not defined in this module for those platforms.\n- resource.RLIM_INFINITY\u00b6\nConstant used to represent the limit for an unlimited resource.\n- resource.getrlimit(resource)\u00b6\nReturns a tuple\n(soft, hard)\nwith the current soft and hard limits of resource. RaisesValueError\nif an invalid resource is specified, orerror\nif the underlying system call fails unexpectedly.\n- resource.setrlimit(resource, limits)\u00b6\nSets new limits of consumption of resource. 
The limits argument must be a tuple (soft, hard) of two integers describing the new limits. A value of RLIM_INFINITY can be used to request a limit that is unlimited.
Raises ValueError if an invalid resource is specified, if the new soft limit exceeds the hard limit, or if a process tries to raise its hard limit. Specifying a limit of RLIM_INFINITY when the hard or system limit for that resource is not unlimited will result in a ValueError. A process with the effective UID of super-user can request any valid limit value, including unlimited, but ValueError will still be raised if the requested limit exceeds the system imposed limit.
setrlimit may also raise error if the underlying system call fails.
VxWorks only supports setting RLIMIT_NOFILE.
Raises an auditing event resource.setrlimit with arguments resource, limits.
- resource.prlimit(pid, resource[, limits])\u00b6
Combines setrlimit() and getrlimit() in one function and supports getting and setting the resource limits of an arbitrary process. If pid is 0, then the call applies to the current process. resource and limits have the same meaning as in setrlimit(), except that limits is optional.
When limits is not given, the function returns the resource limit of the process pid. When limits is given, the resource limit of the process is set and the former resource limit is returned.
Raises ProcessLookupError when pid can\u2019t be found and PermissionError when the user doesn\u2019t have CAP_SYS_RESOURCE for the process.
Raises an auditing event resource.prlimit with arguments pid, resource, limits.
Availability: Linux >= 2.6.36 with glibc >= 2.13.
Added in version 3.4.
These symbols define resources whose consumption can be controlled using the setrlimit() and getrlimit() functions described above. The values of these symbols are exactly the constants used by C programs.
The Unix man page for getrlimit(2) lists the available resources.
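The soft/hard pair described above can be read and re-applied like this (a sketch, not from the original page; RLIMIT_NOFILE is used only because it is widely available):

```python
import resource

# Query the current soft and hard limits for open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# The soft limit may be lowered freely (it only affects future
# allocations), as long as it stays at or below the hard limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (min(soft, 256), hard))

# Raising the soft limit back up to its previous value is also allowed.
resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```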
Note that not all systems use the same symbol or same value to denote the same resource. This module does not attempt to mask platform differences \u2014 symbols not defined for a platform will not be available from this module on that platform.\n- resource.RLIMIT_CORE\u00b6\nThe maximum size (in bytes) of a core file that the current process can create. This may result in the creation of a partial core file if a larger core would be required to contain the entire process image.\n- resource.RLIMIT_CPU\u00b6\nThe maximum amount of processor time (in seconds) that a process can use. If this limit is exceeded, a\nSIGXCPU\nsignal is sent to the process. (See thesignal\nmodule documentation for information about how to catch this signal and do something useful, e.g. flush open files to disk.)\n- resource.RLIMIT_FSIZE\u00b6\nThe maximum size of a file which the process may create.\n- resource.RLIMIT_DATA\u00b6\nThe maximum size (in bytes) of the process\u2019s heap.\n- resource.RLIMIT_STACK\u00b6\nThe maximum size (in bytes) of the call stack for the current process. This only affects the stack of the main thread in a multi-threaded process.\n- resource.RLIMIT_RSS\u00b6\nThe maximum resident set size that should be made available to the process.\n- resource.RLIMIT_NPROC\u00b6\nThe maximum number of processes the current process may create.\n- resource.RLIMIT_NOFILE\u00b6\nThe maximum number of open file descriptors for the current process.\n- resource.RLIMIT_OFILE\u00b6\nThe BSD name for\nRLIMIT_NOFILE\n.\n- resource.RLIMIT_MEMLOCK\u00b6\nThe maximum address space which may be locked in memory.\n- resource.RLIMIT_VMEM\u00b6\nThe largest area of mapped memory which the process may occupy. 
Usually an alias of\nRLIMIT_AS\n.Availability: Solaris, FreeBSD, NetBSD.\n- resource.RLIMIT_AS\u00b6\nThe maximum area (in bytes) of address space which may be taken by the process.\n- resource.RLIMIT_MSGQUEUE\u00b6\nThe number of bytes that can be allocated for POSIX message queues.\nAvailability: Linux >= 2.6.8.\nAdded in version 3.4.\n- resource.RLIMIT_NICE\u00b6\nThe ceiling for the process\u2019s nice level (calculated as 20 - rlim_cur).\nAvailability: Linux >= 2.6.12.\nAdded in version 3.4.\n- resource.RLIMIT_RTPRIO\u00b6\nThe ceiling of the real-time priority.\nAvailability: Linux >= 2.6.12.\nAdded in version 3.4.\n- resource.RLIMIT_RTTIME\u00b6\nThe time limit (in microseconds) on CPU time that a process can spend under real-time scheduling without making a blocking syscall.\nAvailability: Linux >= 2.6.25.\nAdded in version 3.4.\n- resource.RLIMIT_SIGPENDING\u00b6\nThe number of signals which the process may queue.\nAvailability: Linux >= 2.6.8.\nAdded in version 3.4.\n- resource.RLIMIT_SBSIZE\u00b6\nThe maximum size (in bytes) of socket buffer usage for this user. This limits the amount of network memory, and hence the amount of mbufs, that this user may hold at any time.\nAvailability: FreeBSD, NetBSD.\nAdded in version 3.4.\n- resource.RLIMIT_SWAP\u00b6\nThe maximum size (in bytes) of the swap space that may be reserved or used by all of this user id\u2019s processes. This limit is enforced only if bit 1 of the vm.overcommit sysctl is set. 
Please see tuning(7) for a complete description of this sysctl.\nAvailability: FreeBSD >= 8.\nAdded in version 3.4.\n- resource.RLIMIT_NPTS\u00b6\nThe maximum number of pseudo-terminals created by this user id.\nAvailability: FreeBSD >= 8.\nAdded in version 3.4.\n- resource.RLIMIT_KQUEUES\u00b6\nThe maximum number of kqueues this user id is allowed to create.\nAvailability: FreeBSD >= 11.\nAdded in version 3.10.\nResource Usage\u00b6\nThese functions are used to retrieve resource usage information:\n- resource.getrusage(who)\u00b6\nThis function returns an object that describes the resources consumed by either the current process or its children, as specified by the who parameter. The who parameter should be specified using one of the\nRUSAGE_*\nconstants described below. A simple example:\nfrom resource import *\nimport time\n\n# a non CPU-bound task\ntime.sleep(3)\nprint(getrusage(RUSAGE_SELF))\n\n# a CPU-bound task\nfor i in range(10 ** 8):\n    _ = 1 + 1\nprint(getrusage(RUSAGE_SELF))\nThe fields of the return value each describe how a particular system resource has been used, e.g. amount of time spent running in user mode or number of times the process was swapped out of main memory. Some values are dependent on the clock tick interval, e.g. the amount of memory the process is using.\nFor backward compatibility, the return value is also accessible as a tuple of 16 elements.\nThe fields\nru_utime\nand ru_stime\nof the return value are floating-point values representing the amount of time spent executing in user mode and the amount of time spent executing in system mode, respectively. The remaining values are integers. Consult the getrusage(2) man page for detailed information about these values. 
A brief summary is presented here:\nIndex | Field | Resource\n0 | ru_utime | time in user mode (float seconds)\n1 | ru_stime | time in system mode (float seconds)\n2 | ru_maxrss | maximum resident set size\n3 | ru_ixrss | shared memory size\n4 | ru_idrss | unshared memory size\n5 | ru_isrss | unshared stack size\n6 | ru_minflt | page faults not requiring I/O\n7 | ru_majflt | page faults requiring I/O\n8 | ru_nswap | number of swap outs\n9 | ru_inblock | block input operations\n10 | ru_oublock | block output operations\n11 | ru_msgsnd | messages sent\n12 | ru_msgrcv | messages received\n13 | ru_nsignals | signals received\n14 | ru_nvcsw | voluntary context switches\n15 | ru_nivcsw | involuntary context switches\nThis function will raise a\nValueError\nif an invalid who parameter is specified. It may also raise the error\nexception in unusual circumstances.\n- resource.getpagesize()\u00b6\nReturns the number of bytes in a system page. (This need not be the same as the hardware page size.)\nThe following RUSAGE_*\nsymbols are passed to the getrusage()\nfunction to specify which process\u2019s information should be provided for.\n- resource.RUSAGE_SELF\u00b6\nPass to\ngetrusage()\nto request resources consumed by the calling process, which is the sum of resources used by all threads in the process.\n- resource.RUSAGE_CHILDREN\u00b6\nPass to\ngetrusage()\nto request resources consumed by child processes of the calling process which have been terminated and waited for.\n- resource.RUSAGE_BOTH\u00b6\nPass to\ngetrusage()\nto request resources consumed by both the current process and child processes. May not be available on all systems.\n- resource.RUSAGE_THREAD\u00b6\nPass to\ngetrusage()\nto request resources consumed by the current thread. 
May not be available on all systems. Added in version 3.2.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2456}
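The record above describes `getrusage()` and the `RLIMIT_*` constants in prose only; as a minimal, hedged sketch of the companion `getrlimit()`/`setrlimit()` calls (the cap of 256 descriptors is an arbitrary illustration value, and the snippet assumes a Unix platform, since `resource` is Unix-only):

```python
import resource

# Query the current soft and hard limits on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Lower only the soft limit; an unprivileged process may not raise
# the hard limit, but it may lower the soft limit freely.
new_soft = min(soft, 256)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

# getrusage() reports consumption so far; ru_utime and ru_stime
# are float seconds, the remaining fields are integers.
usage = resource.getrusage(resource.RUSAGE_SELF)
print(usage.ru_utime, usage.ru_stime, usage.ru_maxrss)
```

Lowering the soft limit takes effect immediately for the calling process and is inherited by its children.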
FICLONE\nand\nFICLONERANGE\nconstants, which allow to share some data of one file with\nanother file by reflinking on some filesystems (e.g., btrfs, OCFS2, and\nXFS). This behavior is commonly referred to as \u201ccopy-on-write\u201d.\nChanged in version 3.13: On Linux >= 2.6.32, the fcntl\nmodule exposes the\nF_GETOWN_EX\n, F_SETOWN_EX\n, F_OWNER_TID\n, F_OWNER_PID\n, F_OWNER_PGRP\nconstants, which allow to direct I/O availability signals\nto a specific thread, process, or process group.\nOn Linux >= 4.13, the fcntl\nmodule exposes the\nF_GET_RW_HINT\n, F_SET_RW_HINT\n, F_GET_FILE_RW_HINT\n,\nF_SET_FILE_RW_HINT\n, and RWH_WRITE_LIFE_*\nconstants, which allow\nto inform the kernel about the relative expected lifetime of writes on\na given inode or via a particular open file description.\nOn Linux >= 5.1 and NetBSD, the fcntl\nmodule exposes the\nF_SEAL_FUTURE_WRITE\nconstant for use with F_ADD_SEALS\nand\nF_GET_SEALS\noperations.\nOn FreeBSD, the fcntl\nmodule exposes the F_READAHEAD\n, F_ISUNIONSTACK\n, and F_KINFO\nconstants.\nOn macOS and FreeBSD, the fcntl\nmodule exposes the F_RDAHEAD\nconstant.\nOn NetBSD and AIX, the fcntl\nmodule exposes the F_CLOSEM\nconstant.\nOn NetBSD, the fcntl\nmodule exposes the F_MAXFD\nconstant.\nOn macOS and NetBSD, the fcntl\nmodule exposes the F_GETNOSIGPIPE\nand F_SETNOSIGPIPE\nconstant.\nChanged in version 3.14: On Linux >= 6.1, the fcntl\nmodule exposes the F_DUPFD_QUERY\nto query a file descriptor pointing to the same file.\nThe module defines the following functions:\n- fcntl.fcntl(fd, cmd, arg=0, /)\u00b6\nPerform the operation cmd on file descriptor fd (file objects providing a\nfileno()\nmethod are accepted as well). The values used for cmd are operating system dependent, and are available as constants in thefcntl\nmodule, using the same names as used in the relevant C header files. The argument arg can either be an integer value, a bytes-like object, or a string. 
The type and size of arg must match the type and size of the argument of the operation as specified in the relevant C documentation. When arg is an integer, the function returns the integer return value of the C\nfcntl()\ncall. When the argument is a bytes-like object, it represents a binary structure, for example, created by\nstruct.pack()\n. A string value is encoded to binary using the UTF-8 encoding. The binary data is copied to a buffer whose address is passed to the C fcntl()\ncall. The return value after a successful call is the contents of the buffer, converted to a bytes\nobject. The length of the returned object will be the same as the length of the arg argument. This is limited to 1024 bytes. If the\nfcntl()\ncall fails, an OSError\nis raised.\nNote\nIf the type or the size of arg does not match the type or size of the argument of the operation (for example, if an integer is passed when a pointer is expected, or the information returned in the buffer by the operating system is larger than 1024 bytes), this is most likely to result in a segmentation violation or a more subtle data corruption.\nRaises an auditing event\nfcntl.fcntl\nwith arguments fd, cmd, arg.\nChanged in version 3.14: Added support for arbitrary bytes-like objects, not only\nbytes\n.\n- fcntl.ioctl(fd, request, arg=0, mutate_flag=True, /)\u00b6\nThis function is identical to the\nfcntl()\nfunction, except that the argument handling is even more complicated. The request parameter is limited to values that can fit in 32 or 64 bits, depending on the platform. Additional constants of interest for use as the request argument can be found in the\ntermios\nmodule, under the same names as used in the relevant C header files. The parameter arg can be an integer, a bytes-like object, or a string. 
The type and size of arg must match the type and size of the argument of the operation as specified in the relevant C documentation.\nIf arg does not support the read-write buffer interface or the mutate_flag is false, behavior is as for the\nfcntl()\nfunction. If arg supports the read-write buffer interface (like\nbytearray\n) and mutate_flag is true (the default), then the buffer is (in effect) passed to the underlying ioctl()\nsystem call, the latter\u2019s return code is passed back to the calling Python, and the buffer\u2019s new contents reflect the action of the ioctl()\n. This is a slight simplification, because if the supplied buffer is less than 1024 bytes long it is first copied into a static buffer 1024 bytes long which is then passed to ioctl()\nand copied back into the supplied buffer. If the\nioctl()\ncall fails, an OSError\nexception is raised.\nNote\nIf the type or size of arg does not match the type or size of the operation\u2019s argument (for example, if an integer is passed when a pointer is expected, or the information returned in the buffer by the operating system is larger than 1024 bytes, or the size of the mutable bytes-like object is too small), this is most likely to result in a segmentation violation or a more subtle data corruption.\nAn example:\n>>> import array, fcntl, struct, termios, os\n>>> os.getpgrp()\n13341\n>>> struct.unpack('h', fcntl.ioctl(0, termios.TIOCGPGRP, \" \"))[0]\n13341\n>>> buf = array.array('h', [0])\n>>> fcntl.ioctl(0, termios.TIOCGPGRP, buf, 1)\n0\n>>> buf\narray('h', [13341])\nRaises an auditing event\nfcntl.ioctl\nwith arguments fd, request, arg.\nChanged in version 3.14: The GIL is always released during a system call. System calls failing with EINTR are automatically retried.\n- fcntl.flock(fd, operation, /)\u00b6\nPerform the lock operation operation on file descriptor fd (file objects providing a\nfileno()\nmethod are accepted as well). See the Unix manual flock(2) for details. 
(On some systems, this function is emulated using fcntl()\n.) If the\nflock()\ncall fails, an OSError\nexception is raised.\nRaises an auditing event\nfcntl.flock\nwith arguments fd, operation.\n- fcntl.lockf(fd, cmd, len=0, start=0, whence=0, /)\u00b6\nThis is essentially a wrapper around the\nfcntl()\nlocking calls. fd is the file descriptor (file objects providing a fileno()\nmethod are accepted as well) of the file to lock or unlock, and cmd is one of the following values:\n- fcntl.LOCK_UN\u00b6\nRelease an existing lock.\n- fcntl.LOCK_SH\u00b6\nAcquire a shared lock.\n- fcntl.LOCK_EX\u00b6\nAcquire an exclusive lock.\n- fcntl.LOCK_NB\u00b6\nBitwise OR with any of the other three\nLOCK_*\nconstants to make the request non-blocking.\nIf\nLOCK_NB\nis used and the lock cannot be acquired, an OSError\nwill be raised and the exception will have an errno attribute set to EACCES\nor EAGAIN\n(depending on the operating system; for portability, check for both values). On at least some systems, LOCK_EX\ncan only be used if the file descriptor refers to a file opened for writing.\nlen is the number of bytes to lock, start is the byte offset at which the lock starts, relative to whence, and whence is as with\nio.IOBase.seek()\n, specifically:\n0\n\u2013 relative to the start of the file (os.SEEK_SET\n)\n1\n\u2013 relative to the current buffer position (os.SEEK_CUR\n)\n2\n\u2013 relative to the end of the file (os.SEEK_END\n)\nThe default for start is 0, which means to start at the beginning of the file. The default for len is 0 which means to lock to the end of the file. 
The default for whence is also 0.\nRaises an auditing event\nfcntl.lockf\nwith arguments fd, cmd, len, start, whence.\nExamples (all on a SVR4 compliant system):\nimport struct, fcntl, os\nf = open(...)\nrv = fcntl.fcntl(f, fcntl.F_SETFL, os.O_NDELAY)\nlockdata = struct.pack('hhllhh', fcntl.F_WRLCK, 0, 0, 0, 0, 0)\nrv = fcntl.fcntl(f, fcntl.F_SETLKW, lockdata)\nNote that in the first example the return value variable rv will hold an\ninteger value; in the second example it will hold a bytes\nobject. The\nstructure layout for the lockdata variable is system dependent \u2014 therefore\nusing the flock()\ncall may be better.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2298}
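As a runnable counterpart to the `lockf()`/`flock()` material above, here is a minimal advisory-locking sketch; the temporary file is just a stand-in lock target, and `LOCK_NB` makes a contended lock fail fast with `OSError` instead of blocking (Unix only, since `fcntl` is Unix-only):

```python
import fcntl
import os
import tempfile

# Create a scratch file to serve as the lock target.
fd, path = tempfile.mkstemp()
try:
    # Exclusive, non-blocking lock: raises OSError with errno set to
    # EACCES or EAGAIN if another process already holds the lock.
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)

    # ... critical section: this process is the sole lock holder ...

    fcntl.flock(fd, fcntl.LOCK_UN)  # release the lock explicitly
finally:
    os.close(fd)
    os.unlink(path)
```

The lock is also dropped automatically when the descriptor is closed, but releasing it explicitly keeps the critical section's extent obvious.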
Like its identity, an object\u2019s type is also unchangeable.\n[1]\nThe value of some objects can change. Objects whose value can change are said to be mutable; objects whose value is unchangeable once they are created are called immutable. (The value of an immutable container object that contains a reference to a mutable object can change when the latter\u2019s value is changed; however the container is still considered immutable, because the collection of objects it contains cannot be changed. So, immutability is not strictly the same as having an unchangeable value, it is more subtle.) An object\u2019s mutability is determined by its type; for instance, numbers, strings and tuples are immutable, while dictionaries and lists are mutable.\nObjects are never explicitly destroyed; however, when they become unreachable they may be garbage-collected. An implementation is allowed to postpone garbage collection or omit it altogether \u2014 it is a matter of implementation quality how garbage collection is implemented, as long as no objects are collected that are still reachable.\nCPython implementation detail: CPython currently uses a reference-counting scheme with (optional) delayed\ndetection of cyclically linked garbage, which collects most objects as soon\nas they become unreachable, but is not guaranteed to collect garbage\ncontaining circular references. See the documentation of the gc\nmodule for information on controlling the collection of cyclic garbage.\nOther implementations act differently and CPython may change.\nDo not depend on immediate finalization of objects when they become\nunreachable (so you should always close files explicitly).\nNote that the use of the implementation\u2019s tracing or debugging facilities may\nkeep objects alive that would normally be collectable. 
Also note that catching\nan exception with a try\n\u2026except\nstatement may keep\nobjects alive.\nSome objects contain references to \u201cexternal\u201d resources such as open files or\nwindows. It is understood that these resources are freed when the object is\ngarbage-collected, but since garbage collection is not guaranteed to happen,\nsuch objects also provide an explicit way to release the external resource,\nusually a close()\nmethod. Programs are strongly recommended to explicitly\nclose such objects. The try\n\u2026finally\nstatement\nand the with\nstatement provide convenient ways to do this.\nSome objects contain references to other objects; these are called containers. Examples of containers are tuples, lists and dictionaries. The references are part of a container\u2019s value. In most cases, when we talk about the value of a container, we imply the values, not the identities of the contained objects; however, when we talk about the mutability of a container, only the identities of the immediately contained objects are implied. So, if an immutable container (like a tuple) contains a reference to a mutable object, its value changes if that mutable object is changed.\nTypes affect almost all aspects of object behavior. Even the importance of\nobject identity is affected in some sense: for immutable types, operations that\ncompute new values may actually return a reference to any existing object with\nthe same type and value, while for mutable objects this is not allowed.\nFor example, after a = 1; b = 1\n, a and b may or may not refer to\nthe same object with the value one, depending on the implementation.\nThis is because int\nis an immutable type, so the reference to 1\ncan be reused. 
This behaviour depends on the implementation used, so should\nnot be relied upon, but is something to be aware of when making use of object\nidentity tests.\nHowever, after c = []; d = []\n, c and d are guaranteed to refer to two\ndifferent, unique, newly created empty lists. (Note that e = f = []\nassigns\nthe same object to both e and f.)\n3.2. The standard type hierarchy\u00b6\nBelow is a list of the types that are built into Python. Extension modules (written in C, Java, or other languages, depending on the implementation) can define additional types. Future versions of Python may add types to the type hierarchy (e.g., rational numbers, efficiently stored arrays of integers, etc.), although such additions will often be provided via the standard library instead.\nSome of the type descriptions below contain a paragraph listing \u2018special attributes.\u2019 These are attributes that provide access to the implementation and are not intended for general use. Their definition may change in the future.\n3.2.1. None\u00b6\nThis type has a single value. There is a single object with this value. This\nobject is accessed through the built-in name None\n. It is used to signify the\nabsence of a value in many situations, e.g., it is returned from functions that\ndon\u2019t explicitly return anything. Its truth value is false.\n3.2.2. NotImplemented\u00b6\nThis type has a single value. There is a single object with this value. This\nobject is accessed through the built-in name NotImplemented\n. Numeric methods\nand rich comparison methods should return this value if they do not implement the\noperation for the operands provided. (The interpreter will then try the\nreflected operation, or some other fallback, depending on the operator.) 
It\nshould not be evaluated in a boolean context.\nSee Implementing the arithmetic operations for more details.\nChanged in version 3.9: Evaluating NotImplemented\nin a boolean context was deprecated.\nChanged in version 3.14: Evaluating NotImplemented\nin a boolean context now raises a TypeError\n.\nIt previously evaluated to True\nand emitted a DeprecationWarning\nsince Python 3.9.\n3.2.3. Ellipsis\u00b6\nThis type has a single value. There is a single object with this value. This\nobject is accessed through the literal ...\nor the built-in name\nEllipsis\n. Its truth value is true.\n3.2.4. numbers.Number\n\u00b6\nThese are created by numeric literals and returned as results by arithmetic operators and arithmetic built-in functions. Numeric objects are immutable; once created their value never changes. Python numbers are of course strongly related to mathematical numbers, but subject to the limitations of numerical representation in computers.\nThe string representations of the numeric classes, computed by\n__repr__()\nand __str__()\n, have the following\nproperties:\nThey are valid numeric literals which, when passed to their class constructor, produce an object having the value of the original numeric.\nThe representation is in base 10, when possible.\nLeading zeros, possibly excepting a single zero before a decimal point, are not shown.\nTrailing zeros, possibly excepting a single zero after a decimal point, are not shown.\nA sign is shown only when the number is negative.\nPython distinguishes between integers, floating-point numbers, and complex numbers:\n3.2.4.1. 
numbers.Integral\n\u00b6\nThese represent elements from the mathematical set of integers (positive and negative).\nNote\nThe rules for integer representation are intended to give the most meaningful interpretation of shift and mask operations involving negative integers.\nThere are two types of integers:\n- Integers (\nint\n) These represent numbers in an unlimited range, subject to available (virtual) memory only. For the purpose of shift and mask operations, a binary representation is assumed, and negative numbers are represented in a variant of 2\u2019s complement which gives the illusion of an infinite string of sign bits extending to the left.\n- Booleans (\nbool\n) These represent the truth values False and True. The two objects representing the values\nFalse\nandTrue\nare the only Boolean objects. The Boolean type is a subtype of the integer type, and Boolean values behave like the values 0 and 1, respectively, in almost all contexts, the exception being that when converted to a string, the strings\"False\"\nor\"True\"\nare returned, respectively.\n3.2.4.2. numbers.Real\n(float\n)\u00b6\nThese represent machine-level double precision floating-point numbers. You are at the mercy of the underlying machine architecture (and C or Java implementation) for the accepted range and handling of overflow. Python does not support single-precision floating-point numbers; the savings in processor and memory usage that are usually the reason for using these are dwarfed by the overhead of using objects in Python, so there is no reason to complicate the language with two kinds of floating-point numbers.\n3.2.4.3. numbers.Complex\n(complex\n)\u00b6\nThese represent complex numbers as a pair of machine-level double precision\nfloating-point numbers. The same caveats apply as for floating-point numbers.\nThe real and imaginary parts of a complex number z\ncan be retrieved through\nthe read-only attributes z.real\nand z.imag\n.\n3.2.5. 
Sequences\u00b6\nThese represent finite ordered sets indexed by non-negative numbers. The\nbuilt-in function len()\nreturns the number of items of a sequence. When\nthe length of a sequence is n, the index set contains the numbers 0, 1,\n\u2026, n-1. Item i of sequence a is selected by a[i]\n. Some sequences,\nincluding built-in sequences, interpret negative subscripts by adding the\nsequence length. For example, a[-2]\nequals a[n-2]\n, the second to last\nitem of sequence a with length n\n.\nThe resulting value must be a nonnegative integer less than the number of items\nin the sequence. If it is not, an IndexError\nis raised.\nSequences also support slicing: a[start:stop]\nselects all items with index k such\nthat start <=\nk <\nstop. When used as an expression, a slice is a\nsequence of the same type. The comment above about negative subscripts also applies\nto negative slice positions.\nNote that no error is raised if a slice position is less than zero or larger\nthan the length of the sequence.\nIf start is missing or None\n, slicing behaves as if start was zero.\nIf stop is missing or None\n, slicing behaves as if stop was equal to\nthe length of the sequence.\nSome sequences also support \u201cextended slicing\u201d with a third \u201cstep\u201d parameter:\na[i:j:k]\nselects all items of a with index x where x = i + n*k\n, n\n>=\n0\nand i <=\nx <\nj.\nSequences are distinguished according to their mutability:\n3.2.5.1. Immutable sequences\u00b6\nAn object of an immutable sequence type cannot change once it is created. (If the object contains references to other objects, these other objects may be mutable and may be changed; however, the collection of objects directly referenced by an immutable object cannot change.)\nThe following types are immutable sequences:\n- Strings\nA string (\nstr\n) is a sequence of values that represent characters, or more formally, Unicode code points. 
All the code points in the range 0\nto 0x10FFFF\ncan be represented in a string.\nPython doesn\u2019t have a dedicated character type. Instead, every code point in the string is represented as a string object with length\n1\n. The built-in function\nord()\nconverts a code point from its string form to an integer in the range 0\nto 0x10FFFF\n; chr()\nconverts an integer in the range 0\nto 0x10FFFF\nto the corresponding length-1\nstring object. str.encode()\ncan be used to convert a str\nto bytes\nusing the given text encoding, and bytes.decode()\ncan be used to achieve the opposite.\n- Tuples\nThe items of a\ntuple\nare arbitrary Python objects. Tuples of two or more items are formed by comma-separated lists of expressions. A tuple of one item (a \u2018singleton\u2019) can be formed by affixing a comma to an expression (an expression by itself does not create a tuple, since parentheses must be usable for grouping of expressions). An empty tuple can be formed by an empty pair of parentheses.\n- Bytes\nA\nbytes\nobject is an immutable array. The items are 8-bit bytes, represented by integers in the range 0 <= x < 256. Bytes literals (like b'abc'\n) and the built-in bytes()\nconstructor can be used to create bytes objects. Also, bytes objects can be decoded to strings via the decode()\nmethod.\n3.2.5.2. Mutable sequences\u00b6\nMutable sequences can be changed after they are created. The subscription and\nslicing notations can be used as the target of assignment and del\n(delete) statements.\nNote\nThe collections\nand array\nmodules provide\nadditional examples of mutable sequence types.\nThere are currently two intrinsic mutable sequence types:\n- Lists\nThe items of a list are arbitrary Python objects. Lists are formed by placing a comma-separated list of expressions in square brackets. (Note that there are no special cases needed to form lists of length 0 or 1.)\n- Byte Arrays\nA bytearray object is a mutable array. They are created by the built-in\nbytearray()\nconstructor. 
Aside from being mutable (and hence unhashable), byte arrays otherwise provide the same interface and functionality as immutablebytes\nobjects.\n3.2.6. Set types\u00b6\nThese represent unordered, finite sets of unique, immutable objects. As such,\nthey cannot be indexed by any subscript. However, they can be iterated over, and\nthe built-in function len()\nreturns the number of items in a set. Common\nuses for sets are fast membership testing, removing duplicates from a sequence,\nand computing mathematical operations such as intersection, union, difference,\nand symmetric difference.\nFor set elements, the same immutability rules apply as for dictionary keys. Note\nthat numeric types obey the normal rules for numeric comparison: if two numbers\ncompare equal (e.g., 1\nand 1.0\n), only one of them can be contained in a\nset.\nThere are currently two intrinsic set types:\n- Sets\nThese represent a mutable set. They are created by the built-in\nset()\nconstructor and can be modified afterwards by several methods, such asadd()\n.- Frozen sets\nThese represent an immutable set. They are created by the built-in\nfrozenset()\nconstructor. As a frozenset is immutable and hashable, it can be used again as an element of another set, or as a dictionary key.\n3.2.7. Mappings\u00b6\nThese represent finite sets of objects indexed by arbitrary index sets. The\nsubscript notation a[k]\nselects the item indexed by k\nfrom the mapping\na\n; this can be used in expressions and as the target of assignments or\ndel\nstatements. The built-in function len()\nreturns the number\nof items in a mapping.\nThere is currently a single intrinsic mapping type:\n3.2.7.1. Dictionaries\u00b6\nThese represent finite sets of objects indexed by nearly arbitrary values. 
The\nonly types of values not acceptable as keys are values containing lists or\ndictionaries or other mutable types that are compared by value rather than by\nobject identity, the reason being that the efficient implementation of\ndictionaries requires a key\u2019s hash value to remain constant. Numeric types used\nfor keys obey the normal rules for numeric comparison: if two numbers compare\nequal (e.g., 1\nand 1.0\n) then they can be used interchangeably to index\nthe same dictionary entry.\nDictionaries preserve insertion order, meaning that keys will be produced in the same order they were added sequentially over the dictionary. Replacing an existing key does not change the order, however removing a key and re-inserting it will add it to the end instead of keeping its old place.\nDictionaries are mutable; they can be created by the {}\nnotation (see\nsection Dictionary displays).\nThe extension modules dbm.ndbm\nand dbm.gnu\nprovide\nadditional examples of mapping types, as does the collections\nmodule.\nChanged in version 3.7: Dictionaries did not preserve insertion order in versions of Python before 3.6. In CPython 3.6, insertion order was preserved, but it was considered an implementation detail at that time rather than a language guarantee.\n3.2.8. Callable types\u00b6\nThese are the types to which the function call operation (see section Calls) can be applied:\n3.2.8.1. User-defined functions\u00b6\nA user-defined function object is created by a function definition (see section Function definitions). It should be called with an argument list containing the same number of items as the function\u2019s formal parameter list.\n3.2.8.1.1. Special read-only attributes\u00b6\nAttribute |\nMeaning |\n|---|---|\n|\nA reference to the Added in version 3.10. |\n|\nA reference to the |\n|\nA cell object has the attribute |\n3.2.8.1.2. 
Special writable attributes\u00b6\nMost of these attributes check the type of the assigned value:\nAttribute |\nMeaning |\n|---|---|\n|\nThe function\u2019s documentation string, or |\n|\nThe function\u2019s name.\nSee also: |\n|\nThe function\u2019s qualified name.\nSee also: Added in version 3.3. |\n|\nThe name of the module the function was defined in,\nor |\n|\nA |\n|\nThe code object representing the compiled function body. |\n|\nThe namespace supporting arbitrary function attributes.\nSee also: |\n|\nA Changed in version 3.14: Annotations are now lazily evaluated. See PEP 649. |\n|\nThe annotate function for this function, or Added in version 3.14. |\n|\nA |\n|\nA Added in version 3.12. |\nFunction objects also support getting and setting arbitrary attributes, which can be used, for example, to attach metadata to functions. Regular attribute dot-notation is used to get and set such attributes.\nCPython implementation detail: CPython\u2019s current implementation only supports function attributes on user-defined functions. Function attributes on built-in functions may be supported in the future.\nAdditional information about a function\u2019s definition can be retrieved from its\ncode object\n(accessible via the __code__\nattribute).\n3.2.8.2. 
Instance methods¶
An instance method object combines a class, a class instance and any callable object (normally a user-defined function).
Special read-only attributes:
- method.__self__¶ Refers to the class instance object to which the method is bound.
- method.__func__¶ Refers to the original function object.
- method.__doc__¶ The method's documentation (same as __func__.__doc__).
- method.__name__¶ The name of the method (same as __func__.__name__).
- method.__module__¶ The name of the module the method was defined in, or None if unavailable.
Methods also support accessing (but not setting) the arbitrary function attributes on the underlying function object.
User-defined method objects may be created when getting an attribute of a class (perhaps via an instance of that class), if that attribute is a user-defined function object or a classmethod object.
When an instance method object is created by retrieving a user-defined function object from a class via one of its instances, its __self__ attribute is the instance, and the method object is said to be bound. The new method's __func__ attribute is the original function object.
When an instance method object is created by retrieving a classmethod object from a class or instance, its __self__ attribute is the class itself, and its __func__ attribute is the function object underlying the class method.
When an instance method object is called, the underlying function (__func__) is called, inserting the class instance (__self__) in front of the argument list.
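The binding mechanics described above can be observed directly; a minimal sketch (C and D are hypothetical classes used only for illustration):

```python
# A hypothetical class to demonstrate method binding.
class C:
    def f(self, arg):
        return (self, arg)

x = C()
m = x.f                         # attribute access on the instance creates a bound method

# __self__ is the instance, __func__ is the original function object
assert m.__self__ is x
assert m.__func__ is C.__dict__["f"]

# calling x.f(1) inserts the instance in front of the argument list
assert x.f(1) == C.f(x, 1)

# a classmethod binds to the class itself instead
class D:
    @classmethod
    def g(cls, arg):
        return (cls, arg)

assert D.g.__self__ is D
assert D().g(2) == D.g(2) == (D, 2)
```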
For instance, when C is a class which contains a definition for a function f(), and x is an instance of C, calling x.f(1) is equivalent to calling C.f(x, 1).
When an instance method object is derived from a classmethod object, the "class instance" stored in __self__ will actually be the class itself, so that calling either x.f(1) or C.f(1) is equivalent to calling f(C, 1) where f is the underlying function.
It is important to note that user-defined functions which are attributes of a class instance are not converted to bound methods; this only happens when the function is an attribute of the class.
3.2.8.3. Generator functions¶
A function or method which uses the yield statement (see section The yield statement) is called a generator function. Such a function, when called, always returns an iterator object which can be used to execute the body of the function: calling the iterator's iterator.__next__() method will cause the function to execute until it provides a value using the yield statement. When the function executes a return statement or falls off the end, a StopIteration exception is raised and the iterator will have reached the end of the set of values to be returned.
3.2.8.4. Coroutine functions¶
A function or method which is defined using async def is called a coroutine function. Such a function, when called, returns a coroutine object. It may contain await expressions, as well as async with and async for statements. See also the Coroutine Objects section.
3.2.8.5. Asynchronous generator functions¶
A function or method which is defined using async def and which uses the yield statement is called an asynchronous generator function.
Such a function, when called, returns an asynchronous iterator object which can be used in an async for statement to execute the body of the function.
Calling the asynchronous iterator's aiterator.__anext__ method will return an awaitable which when awaited will execute until it provides a value using the yield expression. When the function executes an empty return statement or falls off the end, a StopAsyncIteration exception is raised and the asynchronous iterator will have reached the end of the set of values to be yielded.
3.2.8.6. Built-in functions¶
A built-in function object is a wrapper around a C function. Examples of built-in functions are len() and math.sin() (math is a standard built-in module). The number and type of the arguments are determined by the C function. Special read-only attributes:
__doc__ is the function's documentation string, or None if unavailable. See function.__doc__.
__name__ is the function's name. See function.__name__.
__self__ is set to None (but see the next item).
__module__ is the name of the module the function was defined in or None if unavailable. See function.__module__.
3.2.8.7. Built-in methods¶
This is really a different disguise of a built-in function, this time containing an object passed to the C function as an implicit extra argument. An example of a built-in method is alist.append(), assuming alist is a list object. In this case, the special read-only attribute __self__ is set to the object denoted by alist. (The attribute has the same semantics as it does with other instance methods.)
3.2.8.8. Classes¶
Classes are callable. These objects normally act as factories for new instances of themselves, but variations are possible for class types that override __new__(). The arguments of the call are passed to __new__() and, in the typical case, to __init__() to initialize the new instance.
3.2.8.9.
Class Instances\u00b6\nInstances of arbitrary classes can be made callable by defining a\n__call__()\nmethod in their class.\n3.2.9. Modules\u00b6\nModules are a basic organizational unit of Python code, and are created by\nthe import system as invoked either by the\nimport\nstatement, or by calling\nfunctions such as importlib.import_module()\nand built-in\n__import__()\n. A module object has a namespace implemented by a\ndictionary\nobject (this is the dictionary referenced by the\n__globals__\nattribute of functions defined in the module). Attribute references are\ntranslated to lookups in this dictionary, e.g., m.x\nis equivalent to\nm.__dict__[\"x\"]\n. A module object does not contain the code object used\nto initialize the module (since it isn\u2019t needed once the initialization is\ndone).\nAttribute assignment updates the module\u2019s namespace dictionary, e.g.,\nm.x = 1\nis equivalent to m.__dict__[\"x\"] = 1\n.\n3.2.9.2. Other writable attributes on module objects\u00b6\nAs well as the import-related attributes listed above, module objects also have the following writable attributes:\n- module.__doc__\u00b6\nThe module\u2019s documentation string, or\nNone\nif unavailable. See also:__doc__ attributes\n.\n- module.__annotations__\u00b6\nA dictionary containing variable annotations collected during module body execution. For best practices on working with\n__annotations__\n, seeannotationlib\n.Changed in version 3.14: Annotations are now lazily evaluated. See PEP 649.\n- module.__annotate__\u00b6\nThe annotate function for this module, or\nNone\nif the module has no annotations. See also:__annotate__\nattributes.Added in version 3.14.\n3.2.9.3. Module dictionaries\u00b6\nModule objects also have the following special read-only attribute:\n- module.__dict__\u00b6\nThe module\u2019s namespace as a dictionary object. 
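The namespace equivalence described above (m.x is a lookup in m.__dict__) can be sketched with a throwaway module object; "demo" is a hypothetical module name used only for illustration:

```python
import types

# Build a fresh, empty module object rather than importing a real one.
m = types.ModuleType("demo")

m.x = 1                        # equivalent to m.__dict__["x"] = 1
assert m.__dict__["x"] == 1

m.__dict__["y"] = 2            # writing to the dict is visible as an attribute
assert m.y == 2
```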
Uniquely among the attributes listed here,\n__dict__\ncannot be accessed as a global variable from within a module; it can only be accessed as an attribute on module objects.CPython implementation detail: Because of the way CPython clears module dictionaries, the module dictionary will be cleared when the module falls out of scope even if the dictionary still has live references. To avoid this, copy the dictionary or keep the module around while using its dictionary directly.\n3.2.10. Custom classes\u00b6\nCustom class types are typically created by class definitions (see section\nClass definitions). A class has a namespace implemented by a dictionary object.\nClass attribute references are translated to lookups in this dictionary, e.g.,\nC.x\nis translated to C.__dict__[\"x\"]\n(although there are a number of\nhooks which allow for other means of locating attributes). When the attribute\nname is not found there, the attribute search continues in the base classes.\nThis search of the base classes uses the C3 method resolution order which\nbehaves correctly even in the presence of \u2018diamond\u2019 inheritance structures\nwhere there are multiple inheritance paths leading back to a common ancestor.\nAdditional details on the C3 MRO used by Python can be found at\nThe Python 2.3 Method Resolution Order.\nWhen a class attribute reference (for class C\n, say) would yield a\nclass method object, it is transformed into an instance method object whose\n__self__\nattribute is C\n.\nWhen it would yield a staticmethod\nobject,\nit is transformed into the object wrapped by the static method\nobject. See section Implementing Descriptors for another way in which attributes\nretrieved from a class may differ from those actually contained in its\n__dict__\n.\nClass attribute assignments update the class\u2019s dictionary, never the dictionary of a base class.\nA class object can be called (see above) to yield a class instance (see below).\n3.2.10.1. 
Special attributes¶
- type.__name__¶ The class's name. See also: __name__ attributes.
- type.__qualname__¶ The class's qualified name. See also: __qualname__ attributes.
- type.__module__¶ The name of the module in which the class was defined.
- type.__dict__¶ A mapping proxy providing a read-only view of the class's namespace.
- type.__bases__¶ A tuple containing the class's bases.
- type.__base__¶ CPython implementation detail: The single base class in the inheritance chain that is responsible for the memory layout of instances. This attribute corresponds to the tp_base slot of the class at the C level.
- type.__doc__¶ The class's documentation string, or None if undefined.
- type.__annotations__¶ A dictionary containing variable annotations collected during class body execution. For best practices on working with __annotations__, see annotationlib. Warning: this attribute does not exist on certain builtin classes; on user-defined classes without annotations, it is an empty dictionary. Changed in version 3.14: Annotations are now lazily evaluated. See PEP 649.
- type.__annotate__¶ The annotate function for this class, or None if the class has no annotations. Added in version 3.14.
- type.__type_params__¶ A tuple containing the type parameters of a generic class. Added in version 3.12.
- type.__static_attributes__¶ A tuple containing names of attributes of this class which are assigned through self.X from any function in its body. Added in version 3.13.
- type.__firstlineno__¶ The line number of the first line of the class definition, including decorators. Setting the __module__ attribute removes the __firstlineno__ item from the type's dict. Added in version 3.13.
- type.__mro__¶ The tuple of classes that are considered when looking for base classes during method resolution.
3.2.10.2. Special methods¶
In addition to the special attributes described above, all Python classes also have the following two methods available:
- type.mro()¶ This method can be overridden by a metaclass to customize the method resolution order for its instances. It is called at class instantiation, and its result is stored in __mro__.
- type.__subclasses__()¶ Each class keeps a list of weak references to its immediate subclasses. This method returns a list of all those references still alive. The list is in definition order. Example:
>>> class A: pass
>>> class B(A): pass
>>> A.__subclasses__()
[<class 'B'>]
3.2.11. Class instances¶
A class instance is created by calling a class object (see above). A class instance has a namespace implemented as a dictionary which is the first place in which attribute references are searched.
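The C3 method resolution order and the subclass registry discussed above can be inspected directly; A, B, C, and D are hypothetical classes forming the 'diamond' shape mentioned earlier:

```python
# A diamond hierarchy: two inheritance paths from D back to the ancestor A.
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass

# C3 linearization: D first, its bases left-to-right, the common
# ancestor only once, then object.
assert D.__mro__ == (D, B, C, A, object)

# __subclasses__() lists live immediate subclasses, in definition order.
assert A.__subclasses__() == [B, C]
```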
When an attribute is not found\nthere, and the instance\u2019s class has an attribute by that name, the search\ncontinues with the class attributes. If a class attribute is found that is a\nuser-defined function object, it is transformed into an instance method\nobject whose __self__\nattribute is the instance. Static method and\nclass method objects are also transformed; see above under \u201cClasses\u201d. See\nsection Implementing Descriptors for another way in which attributes of a class\nretrieved via its instances may differ from the objects actually stored in\nthe class\u2019s __dict__\n. If no class attribute is found, and the\nobject\u2019s class has a __getattr__()\nmethod, that is called to satisfy\nthe lookup.\nAttribute assignments and deletions update the instance\u2019s dictionary, never a\nclass\u2019s dictionary. If the class has a __setattr__()\nor\n__delattr__()\nmethod, this is called instead of updating the instance\ndictionary directly.\nClass instances can pretend to be numbers, sequences, or mappings if they have methods with certain special names. See section Special method names.\n3.2.11.1. Special attributes\u00b6\n- object.__class__\u00b6\nThe class to which a class instance belongs.\n3.2.12. I/O objects (also known as file objects)\u00b6\nA file object represents an open file. Various shortcuts are\navailable to create file objects: the open()\nbuilt-in function, and\nalso os.popen()\n, os.fdopen()\n, and the\nmakefile()\nmethod of socket objects (and perhaps by\nother functions or methods provided by extension modules).\nThe objects sys.stdin\n, sys.stdout\nand sys.stderr\nare\ninitialized to file objects corresponding to the interpreter\u2019s standard\ninput, output and error streams; they are all open in text mode and\ntherefore follow the interface defined by the io.TextIOBase\nabstract class.\n3.2.13. Internal types\u00b6\nA few types used internally by the interpreter are exposed to the user. 
Their definitions may change with future versions of the interpreter, but they are mentioned here for completeness.
3.2.13.1. Code objects¶
Code objects represent byte-compiled executable Python code, or bytecode. The difference between a code object and a function object is that the function object contains an explicit reference to the function's globals (the module in which it was defined), while a code object contains no context; also the default argument values are stored in the function object, not in the code object (because they represent values calculated at run-time). Unlike function objects, code objects are immutable and contain no references (directly or indirectly) to mutable objects.
3.2.13.1.1. Special read-only attributes¶
- codeobject.co_name¶ The function name.
- codeobject.co_qualname¶ The fully qualified function name. Added in version 3.11.
- codeobject.co_argcount¶ The total number of positional parameters (including positional-only parameters and parameters with default values) that the function has.
- codeobject.co_posonlyargcount¶ The number of positional-only parameters (including arguments with default values) that the function has.
- codeobject.co_kwonlyargcount¶ The number of keyword-only parameters (including arguments with default values) that the function has.
- codeobject.co_nlocals¶ The number of local variables used by the function (including parameters).
- codeobject.co_varnames¶ A tuple containing the names of the local variables in the function (starting with the parameter names).
- codeobject.co_cellvars¶ A tuple containing the names of local variables that are referenced by nested functions inside the function.
- codeobject.co_freevars¶ A tuple containing the names of the function's free (closure) variables. Note: references to global and builtin names are not included.
- codeobject.co_code¶ A string representing the sequence of bytecode instructions in the function.
- codeobject.co_consts¶ A tuple containing the literals used by the bytecode in the function.
- codeobject.co_names¶ A tuple containing the names used by the bytecode in the function.
- codeobject.co_filename¶ The name of the file from which the code was compiled.
- codeobject.co_firstlineno¶ The line number of the first line of the function.
- codeobject.co_lnotab¶ A string encoding the mapping from bytecode offsets to line numbers. For details, see the source code of the interpreter. Deprecated since version 3.12: This attribute of code objects is deprecated, and may be removed in Python 3.15.
- codeobject.co_stacksize¶ The required stack size of the code object.
- codeobject.co_flags¶ An integer encoding a number of flags for the interpreter.
The following flag bits are defined for co_flags: bit 0x04 is set if the function uses the *arguments syntax to accept an arbitrary number of positional arguments; bit 0x08 is set if the function uses the **keywords syntax to accept arbitrary keyword arguments; bit 0x20 is set if the function is a generator. See Code Objects Bit Flags for details on the semantics of each flag that might be present.
Future feature declarations (for example, from __future__ import division) also use bits in co_flags to indicate whether a code object was compiled with a particular feature enabled. See compiler_flag.
Other bits in co_flags are reserved for internal use.
If a code object represents a function and has a docstring, the CO_HAS_DOCSTRING bit is set in co_flags and the first item in co_consts is the docstring of the function.
3.2.13.1.2. Methods on code objects¶
- codeobject.co_positions()¶ Returns an iterable over the source code positions of each bytecode instruction in the code object.
The iterator returns tuples containing (start_line, end_line, start_column, end_column). The i-th tuple corresponds to the position of the source code that compiled to the i-th code unit. Column information is given as 0-indexed utf-8 byte offsets on the given source line.
This positional information can be missing.
A non-exhaustive list of cases where this may happen:
Running the interpreter with -X no_debug_ranges.
Loading a pyc file compiled while using -X no_debug_ranges.
Position tuples corresponding to artificial instructions.
Line and column numbers that can't be represented due to implementation specific limitations.
When this occurs, some or all of the tuple elements can be None. Added in version 3.11.
Note
This feature requires storing column positions in code objects, which may result in a small increase of disk usage of compiled Python files or interpreter memory usage. To avoid storing the extra information and/or deactivate printing the extra traceback information, the -X no_debug_ranges command line flag or the PYTHONNODEBUGRANGES environment variable can be used.
- codeobject.co_lines()¶ Returns an iterator that yields information about successive ranges of bytecodes. Each item yielded is a (start, end, lineno) tuple:
start (an int) represents the offset (inclusive) of the start of the bytecode range
end (an int) represents the offset (exclusive) of the end of the bytecode range
lineno is an int representing the line number of the bytecode range, or None if the bytecodes in the given range have no line number
The items yielded will have the following properties:
The first range yielded will have a start of 0.
The (start, end) ranges will be non-decreasing and consecutive. That is, for any pair of tuples, the start of the second will be equal to the end of the first.
No range will be backwards: end >= start for all triples.
The last tuple yielded will have end equal to the size of the bytecode.
Zero-width ranges, where start == end, are allowed.
Zero-width ranges are used for lines that are present in the source code, but have been eliminated by the bytecode compiler. Added in version 3.10.
See also
- PEP 626 - Precise line numbers for debugging and other tools. The PEP that introduced the co_lines() method.
- codeobject.replace(**kwargs)¶ Return a copy of the code object with new values for the specified fields. Code objects are also supported by the generic function copy.replace(). Added in version 3.8.
3.2.13.2. Frame objects¶
Frame objects represent execution frames. They may occur in traceback objects, and are also passed to registered trace functions.
3.2.13.2.1. Special read-only attributes¶
- frame.f_back¶ Points to the previous stack frame (towards the caller), or None if this is the bottom stack frame.
- frame.f_code¶ The code object being executed in this frame. Accessing this attribute raises an auditing event object.__getattr__ with arguments obj and "f_code".
- frame.f_locals¶ The mapping used by the frame to look up local variables. If the frame refers to an optimized scope, this may return a write-through proxy object. Changed in version 3.13: Return a proxy for optimized scopes.
- frame.f_globals¶ The dictionary used by the frame to look up global variables.
- frame.f_builtins¶ The dictionary used by the frame to look up built-in (intrinsic) names.
- frame.f_lasti¶ The "precise instruction" of the frame object (this is an index into the bytecode string of the code object).
- frame.f_generator¶ The generator or coroutine object that owns this frame, or None otherwise. Added in version 3.14.
3.2.13.2.2. Special writable attributes¶
- frame.f_trace¶ If not None, this is a function called for various events during code execution (this is used by debuggers).
- frame.f_trace_lines¶ Set this attribute to False to disable triggering a tracing event for each source line.
- frame.f_trace_opcodes¶ Set this attribute to True to allow per-opcode events to be requested.
- frame.f_lineno¶ The current line number of the frame – writing to this from within a trace function jumps to the given line (only for the bottom-most frame). A debugger can implement a Jump command (aka Set Next Statement) by writing to this attribute.
3.2.13.2.3. Frame object methods¶
Frame objects support one method:
- frame.clear()¶ This method clears all references to local variables held by the frame.
Also, if the frame belonged to a generator, the generator is finalized. This helps break reference cycles involving frame objects (for example when catching an exception and storing its traceback for later use).
RuntimeError is raised if the frame is currently executing or suspended. Added in version 3.4.
Changed in version 3.13: Attempting to clear a suspended frame raises RuntimeError (as has always been the case for executing frames).
3.2.13.3. Traceback objects¶
Traceback objects represent the stack trace of an exception. A traceback object is implicitly created when an exception occurs, and may also be explicitly created by calling types.TracebackType.
Changed in version 3.7: Traceback objects can now be explicitly instantiated from Python code.
For implicitly created tracebacks, when the search for an exception handler unwinds the execution stack, at each unwound level a traceback object is inserted in front of the current traceback. When an exception handler is entered, the stack trace is made available to the program. (See section The try statement.) It is accessible as the third item of the tuple returned by sys.exc_info(), and as the __traceback__ attribute of the caught exception.
When the program contains no suitable handler, the stack trace is written (nicely formatted) to the standard error stream; if the interpreter is interactive, it is also made available to the user as sys.last_traceback.
For explicitly created tracebacks, it is up to the creator of the traceback to determine how the tb_next attributes should be linked to form a full stack trace.
Special read-only attributes:
- traceback.tb_frame¶ Points to the execution frame of the current level. Accessing this attribute raises an auditing event object.__getattr__ with arguments obj and "tb_frame".
- traceback.tb_lineno¶ Gives the line number where the exception occurred.
- traceback.tb_lasti¶ Indicates the "precise instruction".
The line number and last instruction in the traceback may differ from the line number of its frame object if the exception occurred in a try statement with no matching except clause or with a finally clause.
- traceback.tb_next¶ The special writable attribute tb_next is the next level in the stack trace (towards the frame where the exception occurred), or None if there is no next level. Changed in version 3.7: This attribute is now writable.
3.2.13.4. Slice objects¶
Slice objects are used to represent slices for __getitem__() methods. They are also created by the built-in slice() function.
Special read-only attributes: start is the lower bound; stop is the upper bound; step is the step value; each is None if omitted. These attributes can have any type.
Slice objects support one method:
- slice.indices(self, length)¶ This method takes a single integer argument length and computes information about the slice that the slice object would describe if applied to a sequence of length items. It returns a tuple of three integers; respectively these are the start and stop indices and the step or stride length of the slice. Missing or out-of-bounds indices are handled in a manner consistent with regular slices.
3.2.13.5. Static method objects¶
Static method objects provide a way of defeating the transformation of function objects to method objects described above. A static method object is a wrapper around any other object, usually a user-defined method object. When a static method object is retrieved from a class or a class instance, the object actually returned is the wrapped object, which is not subject to any further transformation. Static method objects are also callable. Static method objects are created by the built-in staticmethod() constructor.
3.2.13.6.
Class method objects\u00b6\nA class method object, like a static method object, is a wrapper around another\nobject that alters the way in which that object is retrieved from classes and\nclass instances. The behaviour of class method objects upon such retrieval is\ndescribed above, under \u201cinstance methods\u201d. Class method objects are created\nby the built-in classmethod()\nconstructor.\n3.3. Special method names\u00b6\nA class can implement certain operations that are invoked by special syntax\n(such as arithmetic operations or subscripting and slicing) by defining methods\nwith special names. This is Python\u2019s approach to operator overloading,\nallowing classes to define their own behavior with respect to language\noperators. For instance, if a class defines a method named\n__getitem__()\n,\nand x\nis an instance of this class, then x[i]\nis roughly equivalent\nto type(x).__getitem__(x, i)\n. Except where mentioned, attempts to execute an\noperation raise an exception when no appropriate method is defined (typically\nAttributeError\nor TypeError\n).\nSetting a special method to None\nindicates that the corresponding\noperation is not available. For example, if a class sets\n__iter__()\nto None\n, the class is not iterable, so calling\niter()\non its instances will raise a TypeError\n(without\nfalling back to __getitem__()\n). [2]\nWhen implementing a class that emulates any built-in type, it is important that the emulation only be implemented to the degree that it makes sense for the object being modelled. For example, some sequences may work well with retrieval of individual elements, but extracting a slice may not make sense. (One example of this is the NodeList interface in the W3C\u2019s Document Object Model.)\n3.3.1. 
Basic customization\u00b6\n- object.__new__(cls[, ...])\u00b6\nCalled to create a new instance of class cls.\n__new__()\nis a static method (special-cased so you need not declare it as such) that takes the class of which an instance was requested as its first argument. The remaining arguments are those passed to the object constructor expression (the call to the class). The return value of__new__()\nshould be the new object instance (usually an instance of cls).Typical implementations create a new instance of the class by invoking the superclass\u2019s\n__new__()\nmethod usingsuper().__new__(cls[, ...])\nwith appropriate arguments and then modifying the newly created instance as necessary before returning it.If\n__new__()\nis invoked during object construction and it returns an instance of cls, then the new instance\u2019s__init__()\nmethod will be invoked like__init__(self[, ...])\n, where self is the new instance and the remaining arguments are the same as were passed to the object constructor.If\n__new__()\ndoes not return an instance of cls, then the new instance\u2019s__init__()\nmethod will not be invoked.__new__()\nis intended mainly to allow subclasses of immutable types (like int, str, or tuple) to customize instance creation. It is also commonly overridden in custom metaclasses in order to customize class creation.\n- object.__init__(self[, ...])\u00b6\nCalled after the instance has been created (by\n__new__()\n), but before it is returned to the caller. The arguments are those passed to the class constructor expression. 
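The division of labour between __new__() and __init__() described above can be sketched as follows; Distance and Weird are hypothetical classes used only for illustration:

```python
# Subclassing an immutable type: the value must be fixed in __new__,
# because float instances cannot be changed by the time __init__ runs.
class Distance(float):
    def __new__(cls, value, unit):
        return super().__new__(cls, value)

    def __init__(self, value, unit):
        # __init__ still runs (the instance is of cls) and can attach
        # ordinary mutable state.
        self.unit = unit

d = Distance(3.0, "m")
assert isinstance(d, float) and d == 3.0 and d.unit == "m"

# If __new__ does not return an instance of cls, __init__ is skipped.
class Weird:
    def __new__(cls):
        return 42                      # not an instance of cls ...

    def __init__(self):
        raise AssertionError("never runs")

assert Weird() == 42                   # ... so __init__ was not invoked
```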
If a base class has an__init__()\nmethod, the derived class\u2019s__init__()\nmethod, if any, must explicitly call it to ensure proper initialization of the base class part of the instance; for example:super().__init__([args...])\n.Because\n__new__()\nand__init__()\nwork together in constructing objects (__new__()\nto create it, and__init__()\nto customize it), no non-None\nvalue may be returned by__init__()\n; doing so will cause aTypeError\nto be raised at runtime.\n- object.__del__(self)\u00b6\nCalled when the instance is about to be destroyed. This is also called a finalizer or (improperly) a destructor. If a base class has a\n__del__()\nmethod, the derived class\u2019s__del__()\nmethod, if any, must explicitly call it to ensure proper deletion of the base class part of the instance.It is possible (though not recommended!) for the\n__del__()\nmethod to postpone destruction of the instance by creating a new reference to it. This is called object resurrection. It is implementation-dependent whether__del__()\nis called a second time when a resurrected object is about to be destroyed; the current CPython implementation only calls it once.It is not guaranteed that\n__del__()\nmethods are called for objects that still exist when the interpreter exits.weakref.finalize\nprovides a straightforward way to register a cleanup function to be called when an object is garbage collected.Note\ndel x\ndoesn\u2019t directly callx.__del__()\n\u2014 the former decrements the reference count forx\nby one, and the latter is only called whenx\n\u2019s reference count reaches zero.CPython implementation detail: It is possible for a reference cycle to prevent the reference count of an object from going to zero. In this case, the cycle will be later detected and deleted by the cyclic garbage collector. A common cause of reference cycles is when an exception has been caught in a local variable. 
The frame\u2019s locals then reference the exception, which references its own traceback, which references the locals of all frames caught in the traceback.\nSee also\nDocumentation for the\ngc\nmodule.Warning\nDue to the precarious circumstances under which\n__del__()\nmethods are invoked, exceptions that occur during their execution are ignored, and a warning is printed tosys.stderr\ninstead. In particular:__del__()\ncan be invoked when arbitrary code is being executed, including from any arbitrary thread. If__del__()\nneeds to take a lock or invoke any other blocking resource, it may deadlock as the resource may already be taken by the code that gets interrupted to execute__del__()\n.__del__()\ncan be executed during interpreter shutdown. As a consequence, the global variables it needs to access (including other modules) may already have been deleted or set toNone\n. Python guarantees that globals whose name begins with a single underscore are deleted from their module before other globals are deleted; if no other references to such globals exist, this may help in assuring that imported modules are still available at the time when the__del__()\nmethod is called.\n- object.__repr__(self)\u00b6\nCalled by the\nrepr()\nbuilt-in function to compute the \u201cofficial\u201d string representation of an object. If at all possible, this should look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment). If this is not possible, a string of the form<...some useful description...>\nshould be returned. The return value must be a string object. If a class defines__repr__()\nbut not__str__()\n, then__repr__()\nis also used when an \u201cinformal\u201d string representation of instances of that class is required.This is typically used for debugging, so it is important that the representation is information-rich and unambiguous. 
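A minimal sketch of the contract described above (Point is a hypothetical class): the repr looks like a recreating expression, and because no __str__() is defined, str() falls back to it:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __repr__(self):
        # Unambiguous; reads like an expression that recreates the object.
        return f"Point({self.x!r}, {self.y!r})"

p = Point(1, 2)
assert repr(p) == "Point(1, 2)"
# No __str__ defined, so the "informal" representation uses __repr__ too.
assert str(p) == "Point(1, 2)"
```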
A default implementation is provided by the\nobject\nclass itself.\n- object.__str__(self)\u00b6\nCalled by\nstr(object)\n, the default__format__()\nimplementation, and the built-in functionprint()\n, to compute the \u201cinformal\u201d or nicely printable string representation of an object. The return value must be a str object.This method differs from\nobject.__repr__()\nin that there is no expectation that__str__()\nreturn a valid Python expression: a more convenient or concise representation can be used.The default implementation defined by the built-in type\nobject\ncallsobject.__repr__()\n.\n- object.__bytes__(self)\u00b6\nCalled by bytes to compute a byte-string representation of an object. This should return a\nbytes\nobject. Theobject\nclass itself does not provide this method.\n- object.__format__(self, format_spec)\u00b6\nCalled by the\nformat()\nbuilt-in function, and by extension, evaluation of formatted string literals and thestr.format()\nmethod, to produce a \u201cformatted\u201d string representation of an object. The format_spec argument is a string that contains a description of the formatting options desired. The interpretation of the format_spec argument is up to the type implementing__format__()\n, however most classes will either delegate formatting to one of the built-in types, or use a similar formatting option syntax.See Format Specification Mini-Language for a description of the standard formatting syntax.\nThe return value must be a string object.\nThe default implementation by the\nobject\nclass should be given an empty format_spec string. 
It delegates to __str__().
Changed in version 3.4: The __format__ method of object itself raises a TypeError if passed any non-empty string.
Changed in version 3.7: object.__format__(x, '') is now equivalent to str(x) rather than format(str(x), '').
- object.__lt__(self, other)¶
- object.__le__(self, other)¶
- object.__eq__(self, other)¶
- object.__ne__(self, other)¶
- object.__gt__(self, other)¶
- object.__ge__(self, other)¶
These are the so-called “rich comparison” methods. The correspondence between operator symbols and method names is as follows: x<y calls x.__lt__(y), x<=y calls x.__le__(y), x==y calls x.__eq__(y), x!=y calls x.__ne__(y), x>y calls x.__gt__(y), and x>=y calls x.__ge__(y).
A rich comparison method may return the singleton NotImplemented if it does not implement the operation for a given pair of arguments. By convention, False and True are returned for a successful comparison. However, these methods can return any value, so if the comparison operator is used in a Boolean context (e.g., in the condition of an if statement), Python will call bool() on the value to determine if the result is true or false.
By default, object implements __eq__() by using is, returning NotImplemented in the case of a false comparison: True if x is y else NotImplemented. For __ne__(), by default it delegates to __eq__() and inverts the result unless it is NotImplemented. There are no other implied relationships among the comparison operators or default implementations; for example, the truth of (x<y or x==y) does not imply x<=y.
- object.__hash__(self)¶
Called by built-in function hash() and for operations on members of hashed collections including set, frozenset, and dict. The __hash__() method should return an integer. The only required property is that objects which compare equal have the same hash value. Classes which inherit a __hash__() method from object but override __eq__() have their __hash__() implicitly set to None, making their instances unhashable.
If a class that does not override __eq__() wishes to suppress hash support, it should include __hash__ = None in the class definition. A class which defines its own __hash__() that explicitly raises a TypeError would be incorrectly identified as hashable by an isinstance(obj, collections.abc.Hashable) call.
Note
By default, the __hash__() values of str and bytes objects are “salted” with an unpredictable random value.
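A short sketch of the hash-suppression pattern just described, plus a look at the salted hashes of str objects (the Anon class is illustrative):

```python
from collections.abc import Hashable

class Anon:
    """Illustrative class that defines equality but opts out of hashing."""
    def __eq__(self, other):
        if not isinstance(other, Anon):
            return NotImplemented
        return True

    __hash__ = None  # the recommended way to suppress hash support

a = Anon()
print(isinstance(a, Hashable))  # False -- correctly reported unhashable
try:
    hash(a)
except TypeError as exc:
    print(exc)                  # unhashable type: 'Anon'

# By contrast, str objects hash fine, but the value is salted per
# process: it varies between interpreter runs unless PYTHONHASHSEED
# is fixed.
print(isinstance(hash("spam"), int))  # True
```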
Although they remain constant within an individual Python process, they are not predictable between repeated invocations of Python.
This is intended to provide protection against a denial-of-service caused by carefully chosen inputs that exploit the worst case performance of a dict insertion, O(n²) complexity. See http://ocert.org/advisories/ocert-2011-003.html for details.
Changing hash values affects the iteration order of sets. Python has never made guarantees about this ordering (and it typically varies between 32-bit and 64-bit builds).
See also
PYTHONHASHSEED.
Changed in version 3.3: Hash randomization is enabled by default.
- object.__bool__(self)¶
Called to implement truth value testing and the built-in operation bool(); should return False or True. When this method is not defined, __len__() is called, if it is defined, and the object is considered true if its result is nonzero. If a class defines neither __len__() nor __bool__() (which is true of the object class itself), all its instances are considered true.
3.3.2. Customizing attribute access¶
The following methods can be defined to customize the meaning of attribute access (use of, assignment to, or deletion of x.name) for class instances.
- object.__getattr__(self, name)¶
Called when the default attribute access fails with an AttributeError (either __getattribute__() raises an AttributeError because name is not an instance attribute or an attribute in the class tree for self; or __get__() of a name property raises AttributeError). This method should either return the (computed) attribute value or raise an AttributeError exception. The object class itself does not provide this method.
Note that if the attribute is found through the normal mechanism, __getattr__() is not called. (This is an intentional asymmetry between __getattr__() and __setattr__().)
This is done both for efficiency reasons and because otherwise __getattr__() would have no way to access other attributes of the instance. Note that at least for instance variables, you can take total control by not inserting any values in the instance attribute dictionary (but instead inserting them in another object). See the __getattribute__() method below for a way to actually get total control over attribute access.
- object.__getattribute__(self, name)¶
Called unconditionally to implement attribute accesses for instances of the class. If the class also defines __getattr__(), the latter will not be called unless __getattribute__() either calls it explicitly or raises an AttributeError. This method should return the (computed) attribute value or raise an AttributeError exception. In order to avoid infinite recursion in this method, its implementation should always call the base class method with the same name to access any attributes it needs, for example, object.__getattribute__(self, name).
Note
This method may still be bypassed when looking up special methods as the result of implicit invocation via language syntax or built-in functions. See Special method lookup.
For certain sensitive attribute accesses, raises an auditing event object.__getattr__ with arguments obj and name.
- object.__setattr__(self, name, value)¶
Called when an attribute assignment is attempted. This is called instead of the normal mechanism (i.e. store the value in the instance dictionary). name is the attribute name, value is the value to be assigned to it.
If __setattr__() wants to assign to an instance attribute, it should call the base class method with the same name, for example, object.__setattr__(self, name, value).
For certain sensitive attribute assignments, raises an auditing event object.__setattr__ with arguments obj, name, value.
- object.__delattr__(self, name)¶
Like __setattr__() but for attribute deletion instead of assignment.
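The three hooks can be combined in one short sketch; Proxy and its target are hypothetical:

```python
class Proxy:
    """Illustrative wrapper: unknown attribute reads fall through
    to a wrapped target object."""
    def __init__(self, target):
        object.__setattr__(self, "_target", target)

    def __getattr__(self, name):
        # Only called when normal lookup has already failed.
        return getattr(self._target, name)

    def __setattr__(self, name, value):
        # Called for *every* assignment; delegate the actual store.
        object.__setattr__(self, name, value)

    def __delattr__(self, name):
        object.__delattr__(self, name)

p = Proxy("abc")
print(p.upper())   # "ABC" -- resolved on the wrapped str via __getattr__
p.x = 1            # goes through __setattr__
del p.x            # goes through __delattr__
```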
This should only be implemented if del obj.name is meaningful for the object.
For certain sensitive attribute deletions, raises an auditing event object.__delattr__ with arguments obj and name.
- object.__dir__(self)¶
Called when dir() is called on the object. An iterable must be returned. dir() converts the returned iterable to a list and sorts it.
3.3.2.1. Customizing module attribute access¶
Special names __getattr__ and __dir__ can be also used to customize access to module attributes. The __getattr__ function at the module level should accept one argument which is the name of an attribute and return the computed value or raise an AttributeError. If an attribute is not found on a module object through the normal lookup, i.e. object.__getattribute__(), then __getattr__ is searched in the module __dict__ before raising an AttributeError. If found, it is called with the attribute name and the result is returned.
The __dir__ function should accept no arguments, and return an iterable of strings that represents the names accessible on module. If present, this function overrides the standard dir() search on a module.
- module.__class__¶
For a more fine grained customization of the module behavior (setting attributes, properties, etc.), one can set the __class__ attribute of a module object to a subclass of types.ModuleType.
For example:
import sys
from types import ModuleType

class VerboseModule(ModuleType):
    def __repr__(self):
        return f'Verbose {self.__name__}'
    def __setattr__(self, attr, value):
        print(f'Setting {attr}...')
        super().__setattr__(attr, value)

sys.modules[__name__].__class__ = VerboseModule
Note
Defining module __getattr__ and setting module __class__ only affect lookups made using the attribute access syntax – directly accessing the module globals (whether by code within the module, or via a reference to the module’s globals dictionary) is unaffected.
Changed in version 3.5: __class__ module attribute is now writable.
Added in version 3.7: __getattr__ and __dir__ module attributes.
See also
- PEP 562 - Module __getattr__ and __dir__
Describes the __getattr__ and __dir__ functions on modules.
3.3.2.2. Implementing Descriptors¶
The following methods only apply when an instance of the class containing the method (a so-called descriptor class) appears in an owner class (the descriptor must be in either the owner’s class dictionary or in the class dictionary for one of its parents). In the examples below, “the attribute” refers to the attribute whose name is the key of the property in the owner class’ __dict__. The object class itself does not implement any of these protocols.
- object.__get__(self, instance, owner=None)¶
Called to get the attribute of the owner class (class attribute access) or of an instance of that class (instance attribute access). The optional owner argument is the owner class, while instance is the instance that the attribute was accessed through, or None when the attribute is accessed through the owner.
This method should return the computed attribute value or raise an AttributeError exception.
PEP 252 specifies that __get__() is callable with one or two arguments.
Python’s own built-in descriptors support this specification; however, it is likely that some third-party tools have descriptors that require both arguments. Python’s own __getattribute__() implementation always passes in both arguments whether they are required or not.
- object.__set__(self, instance, value)¶
Called to set the attribute on an instance instance of the owner class to a new value, value.
Note, adding __set__() or __delete__() changes the kind of descriptor to a “data descriptor”. See Invoking Descriptors for more details.
- object.__delete__(self, instance)¶
Called to delete the attribute on an instance instance of the owner class.
Instances of descriptors may also have the __objclass__ attribute present:
- object.__objclass__¶
The attribute __objclass__ is interpreted by the inspect module as specifying the class where this object was defined (setting this appropriately can assist in runtime introspection of dynamic class attributes). For callables, it may indicate that an instance of the given type (or a subclass) is expected or required as the first positional argument (for example, CPython sets this attribute for unbound methods that are implemented in C).
3.3.2.3. Invoking Descriptors¶
In general, a descriptor is an object attribute with “binding behavior”, one whose attribute access has been overridden by methods in the descriptor protocol: __get__(), __set__(), and __delete__(). If any of those methods are defined for an object, it is said to be a descriptor.
The default behavior for attribute access is to get, set, or delete the attribute from an object’s dictionary.
For instance, a.x has a lookup chain starting with a.__dict__['x'], then type(a).__dict__['x'], and continuing through the base classes of type(a) excluding metaclasses.
However, if the looked-up value is an object defining one of the descriptor methods, then Python may override the default behavior and invoke the descriptor method instead. Where this occurs in the precedence chain depends on which descriptor methods were defined and how they were called.
The starting point for descriptor invocation is a binding, a.x. How the arguments are assembled depends on a:
- Direct Call
The simplest and least common call is when user code directly invokes a descriptor method: x.__get__(a).
- Instance Binding
If binding to an object instance, a.x is transformed into the call: type(a).__dict__['x'].__get__(a, type(a)).
- Class Binding
If binding to a class, A.x is transformed into the call: A.__dict__['x'].__get__(None, A).
- Super Binding
A dotted lookup such as super(A, a).x searches a.__class__.__mro__ for a base class B following A and then returns B.__dict__['x'].__get__(a, A). If not a descriptor, x is returned unchanged.
For instance bindings, the precedence of descriptor invocation depends on which descriptor methods are defined. A descriptor can define any combination of __get__(), __set__() and __delete__(). If it does not define __get__(), then accessing the attribute will return the descriptor object itself unless there is a value in the object’s instance dictionary. If the descriptor defines __set__() and/or __delete__(), it is a data descriptor; if it defines neither, it is a non-data descriptor. Normally, data descriptors define both __get__() and __set__(), while non-data descriptors have just the __get__() method. Data descriptors with __get__() and __set__() (and/or __delete__()) defined always override a redefinition in an instance dictionary.
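That precedence rule can be checked directly; this sketch uses a hypothetical read-only data descriptor:

```python
class Ten:
    """Illustrative data descriptor: defines both __get__ and __set__."""
    def __get__(self, instance, owner=None):
        return 10

    def __set__(self, instance, value):
        raise AttributeError("read-only attribute")

class A:
    x = Ten()

a = A()
print(a.x)            # 10
a.__dict__["x"] = 99  # write straight into the instance dictionary
print(a.x)            # still 10: the data descriptor wins the lookup
```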
In contrast, non-data descriptors can be overridden by instances.
Python methods (including those decorated with @staticmethod and @classmethod) are implemented as non-data descriptors. Accordingly, instances can redefine and override methods. This allows individual instances to acquire behaviors that differ from other instances of the same class.
The property() function is implemented as a data descriptor. Accordingly, instances cannot override the behavior of a property.
3.3.2.4. __slots__¶
__slots__ allow us to explicitly declare data members (like properties) and deny the creation of __dict__ and __weakref__ (unless explicitly declared in __slots__ or available in a parent.)
The space saved over using __dict__ can be significant. Attribute lookup speed can be significantly improved as well.
- object.__slots__¶
This class variable can be assigned a string, iterable, or sequence of strings with variable names used by instances. __slots__ reserves space for the declared variables and prevents the automatic creation of __dict__ and __weakref__ for each instance.
Notes on using __slots__:
When inheriting from a class without __slots__, the __dict__ and __weakref__ attribute of the instances will always be accessible.
Without a __dict__ variable, instances cannot be assigned new variables not listed in the __slots__ definition. Attempts to assign to an unlisted variable name raises AttributeError. If dynamic assignment of new variables is desired, then add '__dict__' to the sequence of strings in the __slots__ declaration.
Without a __weakref__ variable for each instance, classes defining __slots__ do not support weak references to its instances. If weak reference support is needed, then add '__weakref__' to the sequence of strings in the __slots__ declaration.
__slots__ are implemented at the class level by creating descriptors for each variable name.
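A minimal sketch of the behaviors listed above (Vec is illustrative; the member_descriptor type name is a CPython detail):

```python
class Vec:
    """Illustrative class: __slots__ replaces the per-instance __dict__
    with descriptors for the listed names."""
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x, self.y = x, y

v = Vec(1, 2)
print(hasattr(v, "__dict__"))   # False: no instance dictionary
try:
    v.z = 3                     # not declared in __slots__
except AttributeError as exc:
    print(exc)
# Each slot name is an ordinary descriptor on the class (CPython):
print(type(Vec.x).__name__)     # member_descriptor
```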
As a result, class attributes cannot be used to set default values for instance variables defined by __slots__; otherwise, the class attribute would overwrite the descriptor assignment.
The action of a __slots__ declaration is not limited to the class where it is defined. __slots__ declared in parents are available in child classes. However, instances of a child subclass will get a __dict__ and __weakref__ unless the subclass also defines __slots__ (which should only contain names of any additional slots).
If a class defines a slot also defined in a base class, the instance variable defined by the base class slot is inaccessible (except by retrieving its descriptor directly from the base class). This renders the meaning of the program undefined. In the future, a check may be added to prevent this.
TypeError will be raised if nonempty __slots__ are defined for a class derived from a “variable-length” built-in type such as int, bytes, and tuple.
Any non-string iterable may be assigned to __slots__.
If a dictionary is used to assign __slots__, the dictionary keys will be used as the slot names. The values of the dictionary can be used to provide per-attribute docstrings that will be recognised by inspect.getdoc() and displayed in the output of help().
__class__ assignment works only if both classes have the same __slots__.
Multiple inheritance with multiple slotted parent classes can be used, but only one parent is allowed to have attributes created by slots (the other bases must have empty slot layouts) - violations raise TypeError.
If an iterator is used for __slots__ then a descriptor is created for each of the iterator’s values. However, the __slots__ attribute will be an empty iterator.
3.3.3. Customizing class creation¶
Whenever a class inherits from another class, __init_subclass__() is called on the parent class. This way, it is possible to write classes which change the behavior of subclasses.
This is closely related to class decorators, but where class decorators only affect the specific class they’re applied to, __init_subclass__ solely applies to future subclasses of the class defining the method.
- classmethod object.__init_subclass__(cls)¶
This method is called whenever the containing class is subclassed. cls is then the new subclass. If defined as a normal instance method, this method is implicitly converted to a class method.
Keyword arguments which are given to a new class are passed to the parent class’s __init_subclass__. For compatibility with other classes using __init_subclass__, one should take out the needed keyword arguments and pass the others over to the base class, as in:
class Philosopher:
    def __init_subclass__(cls, /, default_name, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.default_name = default_name

class AustralianPhilosopher(Philosopher, default_name="Bruce"):
    pass
The default implementation object.__init_subclass__ does nothing, but raises an error if it is called with any arguments.
Note
The metaclass hint metaclass is consumed by the rest of the type machinery, and is never passed to __init_subclass__ implementations. The actual metaclass (rather than the explicit hint) can be accessed as type(cls).
Added in version 3.6.
When a class is created, type.__new__() scans the class variables and makes callbacks to those with a __set_name__() hook.
- object.__set_name__(self, owner, name)¶
Automatically called at the time the owning class owner is created. The object has been assigned to name in that class:
class A:
    x = C()  # Automatically calls: x.__set_name__(A, 'x')
If the class variable is assigned after the class is created, __set_name__() will not be called automatically.
If needed, __set_name__() can be called directly:
class A:
    pass

c = C()
A.x = c                  # The hook is not called
c.__set_name__(A, 'x')   # Manually invoke the hook
See Creating the class object for more details.
Added in version 3.6.
3.3.3.1. Metaclasses¶
By default, classes are constructed using type(). The class body is executed in a new namespace and the class name is bound locally to the result of type(name, bases, namespace).
The class creation process can be customized by passing the metaclass keyword argument in the class definition line, or by inheriting from an existing class that included such an argument. In the following example, both MyClass and MySubclass are instances of Meta:
class Meta(type):
    pass

class MyClass(metaclass=Meta):
    pass

class MySubclass(MyClass):
    pass
Any other keyword arguments that are specified in the class definition are passed through to all metaclass operations described below.
When a class definition is executed, the following steps occur:
MRO entries are resolved;
the appropriate metaclass is determined;
the class namespace is prepared;
the class body is executed;
the class object is created.
3.3.3.2. Resolving MRO entries¶
- object.__mro_entries__(self, bases)¶
If a base that appears in a class definition is not an instance of type, then an __mro_entries__() method is searched on the base. If an __mro_entries__() method is found, the base is substituted with the result of a call to __mro_entries__() when creating the class. The method is called with the original bases tuple passed to the bases parameter, and must return a tuple of classes that will be used instead of the base.
The returned tuple may be empty: in these cases, the original base is ignored.\nSee also\ntypes.resolve_bases()\nDynamically resolve bases that are not instances of\ntype\n.types.get_original_bases()\nRetrieve a class\u2019s \u201coriginal bases\u201d prior to modifications by\n__mro_entries__()\n.- PEP 560\nCore support for typing module and generic types.\n3.3.3.3. Determining the appropriate metaclass\u00b6\nThe appropriate metaclass for a class definition is determined as follows:\nif no bases and no explicit metaclass are given, then\ntype()\nis used;if an explicit metaclass is given and it is not an instance of\ntype()\n, then it is used directly as the metaclass;if an instance of\ntype()\nis given as the explicit metaclass, or bases are defined, then the most derived metaclass is used.\nThe most derived metaclass is selected from the explicitly specified\nmetaclass (if any) and the metaclasses (i.e. type(cls)\n) of all specified\nbase classes. The most derived metaclass is one which is a subtype of all\nof these candidate metaclasses. If none of the candidate metaclasses meets\nthat criterion, then the class definition will fail with TypeError\n.\n3.3.3.4. Preparing the class namespace\u00b6\nOnce the appropriate metaclass has been identified, then the class namespace\nis prepared. If the metaclass has a __prepare__\nattribute, it is called\nas namespace = metaclass.__prepare__(name, bases, **kwds)\n(where the\nadditional keyword arguments, if any, come from the class definition). The\n__prepare__\nmethod should be implemented as a\nclassmethod\n. The\nnamespace returned by __prepare__\nis passed in to __new__\n, but when\nthe final class object is created the namespace is copied into a new dict\n.\nIf the metaclass has no __prepare__\nattribute, then the class namespace\nis initialised as an empty ordered mapping.\nSee also\n- PEP 3115 - Metaclasses in Python 3000\nIntroduced the\n__prepare__\nnamespace hook\n3.3.3.5. 
Executing the class body\u00b6\nThe class body is executed (approximately) as\nexec(body, globals(), namespace)\n. The key difference from a normal\ncall to exec()\nis that lexical scoping allows the class body (including\nany methods) to reference names from the current and outer scopes when the\nclass definition occurs inside a function.\nHowever, even when the class definition occurs inside the function, methods\ndefined inside the class still cannot see names defined at the class scope.\nClass variables must be accessed through the first parameter of instance or\nclass methods, or through the implicit lexically scoped __class__\nreference\ndescribed in the next section.\n3.3.3.6. Creating the class object\u00b6\nOnce the class namespace has been populated by executing the class body,\nthe class object is created by calling\nmetaclass(name, bases, namespace, **kwds)\n(the additional keywords\npassed here are the same as those passed to __prepare__\n).\nThis class object is the one that will be referenced by the zero-argument\nform of super()\n. __class__\nis an implicit closure reference\ncreated by the compiler if any methods in a class body refer to either\n__class__\nor super\n. This allows the zero argument form of\nsuper()\nto correctly identify the class being defined based on\nlexical scoping, while the class or instance that was used to make the\ncurrent call is identified based on the first argument passed to the method.\nCPython implementation detail: In CPython 3.6 and later, the __class__\ncell is passed to the metaclass\nas a __classcell__\nentry in the class namespace. 
If present, this must be propagated up to the type.__new__ call in order for the class to be initialised correctly. Failing to do so will result in a RuntimeError in Python 3.8.
When using the default metaclass type, or any metaclass that ultimately calls type.__new__, the following additional customization steps are invoked after creating the class object:
The type.__new__ method collects all of the attributes in the class namespace that define a __set_name__() method;
Those __set_name__ methods are called with the class being defined and the assigned name of that particular attribute;
The __init_subclass__() hook is called on the immediate parent of the new class in its method resolution order.
After the class object is created, it is passed to the class decorators included in the class definition (if any) and the resulting object is bound in the local namespace as the defined class.
When a new class is created by type.__new__, the object provided as the namespace parameter is copied to a new ordered mapping and the original object is discarded. The new copy is wrapped in a read-only proxy, which becomes the __dict__ attribute of the class object.
See also
- PEP 3135 - New super
Describes the implicit __class__ closure reference
3.3.3.7. Uses for metaclasses¶
The potential uses for metaclasses are boundless. Some ideas that have been explored include enum, logging, interface checking, automatic delegation, automatic property creation, proxies, frameworks, and automatic resource locking/synchronization.
3.3.4.
Customizing instance and subclass checks¶
The following methods are used to override the default behavior of the isinstance() and issubclass() built-in functions.
In particular, the metaclass abc.ABCMeta implements these methods in order to allow the addition of Abstract Base Classes (ABCs) as “virtual base classes” to any class or type (including built-in types), including other ABCs.
- type.__instancecheck__(self, instance)¶
Return true if instance should be considered a (direct or indirect) instance of class. If defined, called to implement isinstance(instance, class).
- type.__subclasscheck__(self, subclass)¶
Return true if subclass should be considered a (direct or indirect) subclass of class. If defined, called to implement issubclass(subclass, class).
Note that these methods are looked up on the type (metaclass) of a class. They cannot be defined as class methods in the actual class. This is consistent with the lookup of special methods that are called on instances, only in this case the instance is itself a class.
See also
- PEP 3119 - Introducing Abstract Base Classes
Includes the specification for customizing isinstance() and issubclass() behavior through __instancecheck__() and __subclasscheck__(), with motivation for this functionality in the context of adding Abstract Base Classes (see the abc module) to the language.
3.3.5.
Emulating generic types¶
When using type annotations, it is often useful to parameterize a generic type using Python’s square-brackets notation. For example, the annotation list[int] might be used to signify a list in which all the elements are of type int.
See also
- PEP 484 - Type Hints
Introducing Python’s framework for type annotations
- Generic Alias Types
Documentation for objects representing parameterized generic classes
- Generics, user-defined generics and typing.Generic
Documentation on how to implement generic classes that can be parameterized at runtime and understood by static type-checkers.
A class can generally only be parameterized if it defines the special class method __class_getitem__().
- classmethod object.__class_getitem__(cls, key)¶
Return an object representing the specialization of a generic class by type arguments found in key.
When defined on a class, __class_getitem__() is automatically a class method. As such, there is no need for it to be decorated with @classmethod when it is defined.
3.3.5.1. The purpose of __class_getitem__¶
The purpose of __class_getitem__() is to allow runtime parameterization of standard-library generic classes in order to more easily apply type hints to these classes.
To implement custom generic classes that can be parameterized at runtime and understood by static type-checkers, users should either inherit from a standard library class that already implements __class_getitem__(), or inherit from typing.Generic, which has its own implementation of __class_getitem__().
Custom implementations of __class_getitem__() on classes defined outside of the standard library may not be understood by third-party type-checkers such as mypy. Using __class_getitem__() on any class for purposes other than type hinting is discouraged.
3.3.5.2.
__class_getitem__ versus __getitem__¶
Usually, the subscription of an object using square brackets will call the __getitem__() instance method defined on the object’s class. However, if the object being subscribed is itself a class, the class method __class_getitem__() may be called instead. __class_getitem__() should return a GenericAlias object if it is properly defined.
Presented with the expression obj[x], the Python interpreter follows something like the following process to decide whether __getitem__() or __class_getitem__() should be called:
from inspect import isclass

def subscribe(obj, x):
    """Return the result of the expression 'obj[x]'"""
    class_of_obj = type(obj)
    # If the class of obj defines __getitem__,
    # call class_of_obj.__getitem__(obj, x)
    if hasattr(class_of_obj, '__getitem__'):
        return class_of_obj.__getitem__(obj, x)
    # Else, if obj is a class and defines __class_getitem__,
    # call obj.__class_getitem__(x)
    elif isclass(obj) and hasattr(obj, '__class_getitem__'):
        return obj.__class_getitem__(x)
    # Else, raise an exception
    else:
        raise TypeError(
            f"'{class_of_obj.__name__}' object is not subscriptable"
        )
In Python, all classes are themselves instances of other classes. The class of a class is known as that class’s metaclass, and most classes have the type class as their metaclass.
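The hook itself is easy to try at runtime; Queue below is hypothetical, and real code would usually inherit from typing.Generic instead of writing this by hand:

```python
from types import GenericAlias

class Queue:
    """Illustrative container that supports Queue[int] at runtime."""
    def __class_getitem__(cls, item):
        # Return the standard GenericAlias object, as recommended.
        return GenericAlias(cls, item)

alias = Queue[int]
print(alias.__origin__ is Queue)  # True
print(alias.__args__)             # (<class 'int'>,)
```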
type does not define __getitem__(), meaning that expressions such as list[int], dict[str, float] and tuple[str, bytes] all result in __class_getitem__() being called:
>>> # list has class "type" as its metaclass, like most classes:
>>> type(list)
<class 'type'>
>>> type(dict) == type(list) == type(tuple) == type(str) == type(bytes)
True
>>> # "list[int]" calls "list.__class_getitem__(int)"
>>> list[int]
list[int]
>>> # list.__class_getitem__ returns a GenericAlias object:
>>> type(list[int])
<class 'types.GenericAlias'>
However, if a class has a custom metaclass that defines __getitem__(), subscribing the class may result in different behaviour. An example of this can be found in the enum module:
>>> from enum import Enum
>>> class Menu(Enum):
...     """A breakfast menu"""
...     SPAM = 'spam'
...     BACON = 'bacon'
...
>>> # Enum classes have a custom metaclass:
>>> type(Menu)
<class 'enum.EnumMeta'>
>>> # EnumMeta defines __getitem__,
>>> # so __class_getitem__ is not called,
>>> # and the result is not a GenericAlias object:
>>> Menu['SPAM']
<Menu.SPAM: 'spam'>
>>> type(Menu['SPAM'])
<enum 'Menu'>
See also
- PEP 560 - Core Support for typing module and generic types
Introducing __class_getitem__(), and outlining when a subscription results in __class_getitem__() being called instead of __getitem__()
3.3.6. Emulating callable objects¶
- object.__call__(self[, args...])¶
Called when the instance is “called” as a function; if this method is defined, x(arg1, arg2, ...) roughly translates to type(x).__call__(x, arg1, ...). The object class itself does not provide this method.
3.3.7. Emulating container types¶
The following methods can be defined to implement container objects. None of them are provided by the object class itself. Containers usually are sequences (such as lists or tuples) or mappings (like dictionaries), but can represent other containers as well. The first set of methods is used either to emulate a sequence or to emulate a mapping; the difference is that for a sequence, the allowable keys should be the integers k for which 0 <= k < N where N is the length of the sequence, or slice objects, which define a range of items.
It is also recommended that mappings provide the methods keys(), values(), items(), get(), clear(), setdefault(), pop(), popitem(), copy(), and update() behaving similar to those for Python’s standard dictionary objects. The collections.abc module provides a MutableMapping abstract base class to help create those methods from a base set of __getitem__(), __setitem__(), __delitem__(), and keys().
Mutable sequences should provide methods append(), clear(), count(), extend(), index(), insert(), pop(), remove(), and reverse(), like Python standard list objects.
Finally, sequence types should implement addition (meaning concatenation) and multiplication (meaning repetition) by defining the methods __add__(), __radd__(), __iadd__(), __mul__(), __rmul__() and __imul__() described below; they should not define other numerical operators.
It is recommended that both mappings and sequences implement the __contains__() method to allow efficient use of the in operator; for mappings, in should search the mapping’s keys; for sequences, it should search through the values. It is further recommended that both mappings and sequences implement the __iter__() method to allow efficient iteration through the container; for mappings, __iter__() should iterate through the object’s keys; for sequences, it should iterate through the values.
- object.__len__(self)¶
Called to implement the built-in function len(). Should return the length of the object, an integer >= 0. Also, an object that doesn’t define a __bool__() method and whose __len__() method returns zero is considered to be false in a Boolean context.
CPython implementation detail: In CPython, the length is required to be at most sys.maxsize. If the length is larger than sys.maxsize some features (such as len()) may raise OverflowError.
To prevent raisingOverflowError\nby truth value testing, an object must define a__bool__()\nmethod.\n- object.__length_hint__(self)\u00b6\nCalled to implement\noperator.length_hint()\n. Should return an estimated length for the object (which may be greater or less than the actual length). The length must be an integer>=\n0. The return value may also beNotImplemented\n, which is treated the same as if the__length_hint__\nmethod didn\u2019t exist at all. This method is purely an optimization and is never required for correctness.Added in version 3.4.\nNote\nSlicing is done exclusively with the following three methods. A call like\na[1:2] = b\nis translated to\na[slice(1, 2, None)] = b\nand so forth. Missing slice items are always filled in with None\n.\n- object.__getitem__(self, subscript)\u00b6\nCalled to implement subscription, that is,\nself[subscript]\n. See Subscriptions and slicings for details on the syntax.There are two types of built-in objects that support subscription via\n__getitem__()\n:sequences, where subscript (also called index) should be an integer or a\nslice\nobject. See the sequence documentation for the expected behavior, including handlingslice\nobjects and negative indices.mappings, where subscript is also called the key. See mapping documentation for the expected behavior.\nIf subscript is of an inappropriate type,\n__getitem__()\nshould raiseTypeError\n. If subscript has an inappropriate value,__getitem__()\nshould raise anLookupError\nor one of its subclasses (IndexError\nfor sequences;KeyError\nfor mappings).Note\nThe sequence iteration protocol (used, for example, in\nfor\nloops), expects that anIndexError\nwill be raised for illegal indexes to allow proper detection of the end of a sequence.Note\nWhen subscripting a class, the special class method\n__class_getitem__()\nmay be called instead of__getitem__()\n. 
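That class-subscription dispatch can be observed directly. `Box` below is a hypothetical class whose `__class_getitem__` mirrors what `list` does by returning a `types.GenericAlias`:

```python
# Sketch: subscribing a class calls __class_getitem__ (a class method),
# not __getitem__, because type itself defines no __getitem__.
import types

class Box:
    def __class_getitem__(cls, item):
        # Mirror list.__class_getitem__: produce a GenericAlias.
        return types.GenericAlias(cls, item)

alias = Box[int]                   # calls Box.__class_getitem__(int)
print(alias.__origin__ is Box)     # True
print(alias.__args__)              # (<class 'int'>,)
print(type(list[int]) is types.GenericAlias)   # True
```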
See __class_getitem__ versus __getitem__ for more details.\n- object.__setitem__(self, key, value)\u00b6\nCalled to implement assignment to\nself[key]\n. Same note as for__getitem__()\n. This should only be implemented for mappings if the objects support changes to the values for keys, or if new keys can be added, or for sequences if elements can be replaced. The same exceptions should be raised for improper key values as for the__getitem__()\nmethod.\n- object.__delitem__(self, key)\u00b6\nCalled to implement deletion of\nself[key]\n. Same note as for__getitem__()\n. This should only be implemented for mappings if the objects support removal of keys, or for sequences if elements can be removed from the sequence. The same exceptions should be raised for improper key values as for the__getitem__()\nmethod.\n- object.__missing__(self, key)\u00b6\nCalled by\ndict\n.__getitem__()\nto implementself[key]\nfor dict subclasses when key is not in the dictionary.\n- object.__iter__(self)\u00b6\nThis method is called when an iterator is required for a container. This method should return a new iterator object that can iterate over all the objects in the container. For mappings, it should iterate over the keys of the container.\n- object.__reversed__(self)\u00b6\nCalled (if present) by the\nreversed()\nbuilt-in to implement reverse iteration. It should return a new iterator object that iterates over all the objects in the container in reverse order.If the\n__reversed__()\nmethod is not provided, thereversed()\nbuilt-in will fall back to using the sequence protocol (__len__()\nand__getitem__()\n). Objects that support the sequence protocol should only provide__reversed__()\nif they can provide an implementation that is more efficient than the one provided byreversed()\n.\nThe membership test operators (in\nand not in\n) are normally\nimplemented as an iteration through a container. 
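The `__missing__` hook described above is easy to see in a small `dict` subclass (a throwaway counter, invented for illustration); note that it is consulted only by `d[key]`, not by `get()`:

```python
# Sketch of dict.__getitem__'s __missing__ hook: called when the key
# is absent from a dict subclass.
class CountingDict(dict):
    def __missing__(self, key):
        # Treat absent keys as zero, like a simple counter.
        return 0

tally = CountingDict()
tally["spam"] += 1     # __missing__ supplies 0, then __setitem__ stores 1
print(tally["spam"])   # 1
print(tally["eggs"])   # 0 -- returned by __missing__, not stored
```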
However, container objects can\nsupply the following special method with a more efficient implementation, which\nalso does not require the object be iterable.\n- object.__contains__(self, item)\u00b6\nCalled to implement membership test operators. Should return true if item is in self, false otherwise. For mapping objects, this should consider the keys of the mapping rather than the values or the key-item pairs.\nFor objects that don\u2019t define\n__contains__()\n, the membership test first tries iteration via__iter__()\n, then the old sequence iteration protocol via__getitem__()\n, see this section in the language reference.\n3.3.8. Emulating numeric types\u00b6\nThe following methods can be defined to emulate numeric objects. Methods corresponding to operations that are not supported by the particular kind of number implemented (e.g., bitwise operations for non-integral numbers) should be left undefined.\n- object.__add__(self, other)\u00b6\n- object.__sub__(self, other)\u00b6\n- object.__mul__(self, other)\u00b6\n- object.__matmul__(self, other)\u00b6\n- object.__truediv__(self, other)\u00b6\n- object.__floordiv__(self, other)\u00b6\n- object.__mod__(self, other)\u00b6\n- object.__divmod__(self, other)\u00b6\n- object.__pow__(self, other[, modulo])\u00b6\n- object.__lshift__(self, other)\u00b6\n- object.__rshift__(self, other)\u00b6\n- object.__and__(self, other)\u00b6\n- object.__xor__(self, other)\u00b6\n- object.__or__(self, other)\u00b6\nThese methods are called to implement the binary arithmetic operations (\n+\n,-\n,*\n,@\n,/\n,//\n,%\n,divmod()\n,pow()\n,**\n,<<\n,>>\n,&\n,^\n,|\n). For instance, to evaluate the expressionx + y\n, where x is an instance of a class that has an__add__()\nmethod,type(x).__add__(x, y)\nis called. The__divmod__()\nmethod should be the equivalent to using__floordiv__()\nand__mod__()\n; it should not be related to__truediv__()\n. 
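A constant-time `__contains__` can be sketched with an invented `Evens` class; without this method, `in` would have to iterate (or would fail, since `Evens` is not iterable at all):

```python
# Sketch: membership testing in O(1) via __contains__, with no
# iteration protocol required.
class Evens:
    def __contains__(self, item):
        return isinstance(item, int) and item % 2 == 0

print(4 in Evens())       # True
print(3 in Evens())       # False
print(5 not in Evens())   # True -- "not in" negates __contains__
```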
Note that__pow__()\nshould be defined to accept an optional third argument if the three-argument version of the built-inpow()\nfunction is to be supported.If one of those methods does not support the operation with the supplied arguments, it should return\nNotImplemented\n.\n- object.__radd__(self, other)\u00b6\n- object.__rsub__(self, other)\u00b6\n- object.__rmul__(self, other)\u00b6\n- object.__rmatmul__(self, other)\u00b6\n- object.__rtruediv__(self, other)\u00b6\n- object.__rfloordiv__(self, other)\u00b6\n- object.__rmod__(self, other)\u00b6\n- object.__rdivmod__(self, other)\u00b6\n- object.__rpow__(self, other[, modulo])\u00b6\n- object.__rlshift__(self, other)\u00b6\n- object.__rrshift__(self, other)\u00b6\n- object.__rand__(self, other)\u00b6\n- object.__rxor__(self, other)\u00b6\n- object.__ror__(self, other)\u00b6\nThese methods are called to implement the binary arithmetic operations (\n+\n,-\n,*\n,@\n,/\n,//\n,%\n,divmod()\n,pow()\n,**\n,<<\n,>>\n,&\n,^\n,|\n) with reflected (swapped) operands. These functions are only called if the operands are of different types, when the left operand does not support the corresponding operation [3], or the right operand\u2019s class is derived from the left operand\u2019s class. [4] For instance, to evaluate the expressionx - y\n, where y is an instance of a class that has an__rsub__()\nmethod,type(y).__rsub__(y, x)\nis called iftype(x).__sub__(x, y)\nreturnsNotImplemented\nortype(y)\nis a subclass oftype(x)\n. [5]Note that\n__rpow__()\nshould be defined to accept an optional third argument if the three-argument version of the built-inpow()\nfunction is to be supported.Changed in version 3.14: Three-argument\npow()\nnow try calling__rpow__()\nif necessary. 
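The interplay of `NotImplemented` and the reflected methods can be sketched with an invented `Meters` class; returning `NotImplemented` for unknown operand types is what lets Python try the other operand's method:

```python
# Sketch: __add__ returns NotImplemented for unsupported operands so
# that Python can fall back to the other operand's (reflected) method.
class Meters:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if isinstance(other, Meters):
            return Meters(self.value + other.value)
        if isinstance(other, (int, float)):
            return Meters(self.value + other)
        return NotImplemented      # let the other type have a try

    __radd__ = __add__             # addition here is commutative

total = 1 + Meters(2)   # int.__add__ fails -> Meters.__radd__ is called
print(total.value)      # 3
```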
Previously it was only called in two-argumentpow()\nand the binary power operator.Note\nIf the right operand\u2019s type is a subclass of the left operand\u2019s type and that subclass provides a different implementation of the reflected method for the operation, this method will be called before the left operand\u2019s non-reflected method. This behavior allows subclasses to override their ancestors\u2019 operations.\n- object.__iadd__(self, other)\u00b6\n- object.__isub__(self, other)\u00b6\n- object.__imul__(self, other)\u00b6\n- object.__imatmul__(self, other)\u00b6\n- object.__itruediv__(self, other)\u00b6\n- object.__ifloordiv__(self, other)\u00b6\n- object.__imod__(self, other)\u00b6\n- object.__ipow__(self, other[, modulo])\u00b6\n- object.__ilshift__(self, other)\u00b6\n- object.__irshift__(self, other)\u00b6\n- object.__iand__(self, other)\u00b6\n- object.__ixor__(self, other)\u00b6\n- object.__ior__(self, other)\u00b6\nThese methods are called to implement the augmented arithmetic assignments (\n+=\n,-=\n,*=\n,@=\n,/=\n,//=\n,%=\n,**=\n,<<=\n,>>=\n,&=\n,^=\n,|=\n). These methods should attempt to do the operation in-place (modifying self) and return the result (which could be, but does not have to be, self). If a specific method is not defined, or if that method returnsNotImplemented\n, the augmented assignment falls back to the normal methods. For instance, if x is an instance of a class with an__iadd__()\nmethod,x += y\nis equivalent tox = x.__iadd__(y)\n. If__iadd__()\ndoes not exist, or ifx.__iadd__(y)\nreturnsNotImplemented\n,x.__add__(y)\nandy.__radd__(x)\nare considered, as with the evaluation ofx + y\n. 
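An in-place `__iadd__` can be sketched as follows (`Bag` is invented for this example); the key points are that it mutates `self` and returns it, so `x += y` rebinds `x` to the same object:

```python
# Sketch: __iadd__ mutates self and returns it. Without it, b += y
# would fall back to __add__/__radd__ and rebind b to a new object.
class Bag:
    def __init__(self, items=()):
        self.items = list(items)

    def __iadd__(self, other):
        self.items.extend(other)   # mutate in place...
        return self                # ...and return self, as recommended

b = Bag([1, 2])
alias = b
b += [3]                          # equivalent to b = b.__iadd__([3])
print(b.items)                    # [1, 2, 3]
print(alias is b)                 # True -- still the same object
```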
In certain situations, augmented assignment can result in unexpected errors (see Why does a_tuple[i] += [\u2018item\u2019] raise an exception when the addition works?), but this behavior is in fact part of the data model.\n- object.__neg__(self)\u00b6\n- object.__pos__(self)\u00b6\n- object.__abs__(self)\u00b6\n- object.__invert__(self)\u00b6\nCalled to implement the unary arithmetic operations (\n-\n,+\n,abs()\nand~\n).\n- object.__complex__(self)\u00b6\n- object.__int__(self)\u00b6\n- object.__float__(self)\u00b6\nCalled to implement the built-in functions\ncomplex()\n,int()\nandfloat()\n. Should return a value of the appropriate type.\n- object.__index__(self)\u00b6\nCalled to implement\noperator.index()\n, and whenever Python needs to losslessly convert the numeric object to an integer object (such as in slicing, or in the built-inbin()\n,hex()\nandoct()\nfunctions). Presence of this method indicates that the numeric object is an integer type. Must return an integer.If\n__int__()\n,__float__()\nand__complex__()\nare not defined then corresponding built-in functionsint()\n,float()\nandcomplex()\nfall back to__index__()\n.\n- object.__round__(self[, ndigits])\u00b6\n- object.__trunc__(self)\u00b6\n- object.__floor__(self)\u00b6\n- object.__ceil__(self)\u00b6\nCalled to implement the built-in function\nround()\nandmath\nfunctionstrunc()\n,floor()\nandceil()\n. Unless ndigits is passed to__round__()\nall these methods should return the value of the object truncated to anIntegral\n(typically anint\n).Changed in version 3.14:\nint()\nno longer delegates to the__trunc__()\nmethod.\n3.3.9. With Statement Context Managers\u00b6\nA context manager is an object that defines the runtime context to be\nestablished when executing a with\nstatement. The context manager\nhandles the entry into, and the exit from, the desired runtime context for the\nexecution of the block of code. 
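The `__index__` lossless-conversion hook described above can be sketched with a hypothetical `Nibble` class; defining it lets the object be used anywhere Python needs an integer, such as slicing or `hex()`:

```python
# Sketch: __index__ lets an object stand in for an integer wherever
# Python needs one losslessly (slicing, hex(), range(), ...).
class Nibble:
    def __init__(self, bits):
        self.bits = bits

    def __index__(self):
        return int(self.bits, 2)   # parse the binary string

n = Nibble("1010")
print(hex(n))                      # 0xa
print([10, 20, 30][Nibble("01"):]) # [20, 30] -- used as a slice index
```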
Context managers are normally invoked using the\nwith\nstatement (described in section The with statement), but can also be\nused by directly invoking their methods.\nTypical uses of context managers include saving and restoring various kinds of global state, locking and unlocking resources, closing opened files, etc.\nFor more information on context managers, see Context Manager Types.\nThe object\nclass itself does not provide the context manager methods.\n- object.__enter__(self)\u00b6\nEnter the runtime context related to this object. The\nwith\nstatement will bind this method\u2019s return value to the target(s) specified in theas\nclause of the statement, if any.\n- object.__exit__(self, exc_type, exc_value, traceback)\u00b6\nExit the runtime context related to this object. The parameters describe the exception that caused the context to be exited. If the context was exited without an exception, all three arguments will be\nNone\n.If an exception is supplied, and the method wishes to suppress the exception (i.e., prevent it from being propagated), it should return a true value. Otherwise, the exception will be processed normally upon exit from this method.\nNote that\n__exit__()\nmethods should not reraise the passed-in exception; this is the caller\u2019s responsibility.\n3.3.10. Customizing positional arguments in class pattern matching\u00b6\nWhen using a class name in a pattern, positional arguments in the pattern are not\nallowed by default, i.e. case MyClass(x, y)\nis typically invalid without special\nsupport in MyClass\n. To be able to use that kind of pattern, the class needs to\ndefine a __match_args__ attribute.\n- object.__match_args__\u00b6\nThis class variable can be assigned a tuple of strings. When this class is used in a class pattern with positional arguments, each positional argument will be converted into a keyword argument, using the corresponding value in __match_args__ as the keyword. 
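The `__enter__`/`__exit__` pair described above can be sketched with a typical save-and-restore manager (`RedirectStdout` is invented here; the standard library offers `contextlib.redirect_stdout` for real use):

```python
# Sketch of the context manager protocol: save a piece of global
# state in __enter__ and restore it in __exit__.
import io
import sys

class RedirectStdout:
    """Hypothetical manager that swaps sys.stdout temporarily."""

    def __init__(self, new_target):
        self._new = new_target

    def __enter__(self):
        self._old = sys.stdout
        sys.stdout = self._new
        return self._new           # bound to the 'as' target, if any

    def __exit__(self, exc_type, exc_value, traceback):
        sys.stdout = self._old
        return False               # do not suppress exceptions

buf = io.StringIO()
with RedirectStdout(buf):
    print("captured")
print(repr(buf.getvalue()))        # 'captured\n'
```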
The absence of this attribute is equivalent to setting it to\n()\n.\nFor example, if MyClass.__match_args__\nis (\"left\", \"center\", \"right\")\nthat means\nthat case MyClass(x, y)\nis equivalent to case MyClass(left=x, center=y)\n. Note\nthat the number of arguments in the pattern must be smaller than or equal to the number\nof elements in __match_args__; if it is larger, the pattern match attempt will raise\na TypeError\n.\nAdded in version 3.10.\nSee also\n- PEP 634 - Structural Pattern Matching\nThe specification for the Python\nmatch\nstatement.\n3.3.11. Emulating buffer types\u00b6\nThe buffer protocol provides a way for Python\nobjects to expose efficient access to a low-level memory array. This protocol\nis implemented by builtin types such as bytes\nand memoryview\n,\nand third-party libraries may define additional buffer types.\nWhile buffer types are usually implemented in C, it is also possible to implement the protocol in Python.\n- object.__buffer__(self, flags)\u00b6\nCalled when a buffer is requested from self (for example, by the\nmemoryview\nconstructor). The flags argument is an integer representing the kind of buffer requested, affecting for example whether the returned buffer is read-only or writable.inspect.BufferFlags\nprovides a convenient way to interpret the flags. The method must return amemoryview\nobject.\n- object.__release_buffer__(self, buffer)\u00b6\nCalled when a buffer is no longer needed. The buffer argument is a\nmemoryview\nobject that was previously returned by__buffer__()\n. The method must release any resources associated with the buffer. This method should returnNone\n. Buffer objects that do not need to perform any cleanup are not required to implement this method.\nAdded in version 3.12.\nSee also\n- PEP 688 - Making the buffer protocol accessible in Python\nIntroduces the Python\n__buffer__\nand__release_buffer__\nmethods.collections.abc.Buffer\nABC for buffer types.\n3.3.12. 
Annotations\u00b6\nFunctions, classes, and modules may contain annotations, which are a way to associate information (usually type hints) with a symbol.\n- object.__annotations__\u00b6\nThis attribute contains the annotations for an object. It is lazily evaluated, so accessing the attribute may execute arbitrary code and raise exceptions. If evaluation is successful, the attribute is set to a dictionary mapping from variable names to annotations.\nChanged in version 3.14: Annotations are now lazily evaluated.\n- object.__annotate__(format)\u00b6\nAn annotate function. Returns a new dictionary object mapping attribute/parameter names to their annotation values.\nTakes a format parameter specifying the format in which annotations values should be provided. It must be a member of the\nannotationlib.Format\nenum, or an integer with a value corresponding to a member of the enum.If an annotate function doesn\u2019t support the requested format, it must raise\nNotImplementedError\n. Annotate functions must always supportVALUE\nformat; they must not raiseNotImplementedError()\nwhen called with this format.When called with\nVALUE\nformat, an annotate function may raiseNameError\n; it must not raiseNameError\nwhen called requesting any other format.If an object does not have any annotations,\n__annotate__\nshould preferably be set toNone\n(it can\u2019t be deleted), rather than set to a function that returns an empty dict.Added in version 3.14.\nSee also\n- PEP 649 \u2014 Deferred evaluation of annotation using descriptors\nIntroduces lazy evaluation of annotations and the\n__annotate__\nfunction.\n3.3.13. Special method lookup\u00b6\nFor custom classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object\u2019s type, not in the object\u2019s instance dictionary. That behaviour is the reason why the following code raises an exception:\n>>> class C:\n... 
pass\n...\n>>> c = C()\n>>> c.__len__ = lambda: 5\n>>> len(c)\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: object of type 'C' has no len()\nThe rationale behind this behaviour lies with a number of special methods such\nas __hash__()\nand __repr__()\nthat are implemented\nby all objects,\nincluding type objects. If the implicit lookup of these methods used the\nconventional lookup process, they would fail when invoked on the type object\nitself:\n>>> 1 .__hash__() == hash(1)\nTrue\n>>> int.__hash__() == hash(int)\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: descriptor '__hash__' of 'int' object needs an argument\nIncorrectly attempting to invoke an unbound method of a class in this way is sometimes referred to as \u2018metaclass confusion\u2019, and is avoided by bypassing the instance when looking up special methods:\n>>> type(1).__hash__(1) == hash(1)\nTrue\n>>> type(int).__hash__(int) == hash(int)\nTrue\nIn addition to bypassing any instance attributes in the interest of\ncorrectness, implicit special method lookup generally also bypasses the\n__getattribute__()\nmethod even of the object\u2019s metaclass:\n>>> class Meta(type):\n... def __getattribute__(*args):\n... print(\"Metaclass getattribute invoked\")\n... return type.__getattribute__(*args)\n...\n>>> class C(object, metaclass=Meta):\n... def __len__(self):\n... return 10\n... def __getattribute__(*args):\n... print(\"Class getattribute invoked\")\n... 
return object.__getattribute__(*args)\n...\n>>> c = C()\n>>> c.__len__() # Explicit lookup via instance\nClass getattribute invoked\n10\n>>> type(c).__len__(c) # Explicit lookup via type\nMetaclass getattribute invoked\n10\n>>> len(c) # Implicit lookup\n10\nBypassing the __getattribute__()\nmachinery in this fashion\nprovides significant scope for speed optimisations within the\ninterpreter, at the cost of some flexibility in the handling of\nspecial methods (the special method must be set on the class\nobject itself in order to be consistently invoked by the interpreter).\n3.4. Coroutines\u00b6\n3.4.1. Awaitable Objects\u00b6\nAn awaitable object generally implements an __await__()\nmethod.\nCoroutine objects returned from async def\nfunctions\nare awaitable.\nNote\nThe generator iterator objects returned from generators\ndecorated with types.coroutine()\nare also awaitable, but they do not implement __await__()\n.\n- object.__await__(self)\u00b6\nMust return an iterator. Should be used to implement awaitable objects. For instance,\nasyncio.Future\nimplements this method to be compatible with theawait\nexpression. Theobject\nclass itself is not awaitable and does not provide this method.\nAdded in version 3.5.\nSee also\nPEP 492 for additional information about awaitable objects.\n3.4.2. Coroutine Objects\u00b6\nCoroutine objects are awaitable objects.\nA coroutine\u2019s execution can be controlled by calling __await__()\nand\niterating over the result. When the coroutine has finished executing and\nreturns, the iterator raises StopIteration\n, and the exception\u2019s\nvalue\nattribute holds the return value. If the\ncoroutine raises an exception, it is propagated by the iterator. Coroutines\nshould not directly raise unhandled StopIteration\nexceptions.\nCoroutines also have the methods listed below, which are analogous to those of generators (see Generator-iterator methods). 
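Driving a coroutine by hand shows both pieces described above: `__await__()` returns an iterator, and when the coroutine returns, `StopIteration.value` carries the return value (the `add` coroutine is a throwaway example):

```python
# Sketch: manually iterating a coroutine's __await__() result.
async def add(a, b):
    return a + b

coro = add(2, 3)
it = coro.__await__()          # an iterator over the coroutine's steps
try:
    it.send(None)              # start it; with nothing to await, it
                               # runs to completion immediately
except StopIteration as exc:
    result = exc.value         # the coroutine's return value
print(result)                  # 5
```

This is essentially what event loops such as `asyncio` do on your behalf, resuming the iterator each time an awaited operation completes.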
However, unlike generators, coroutines do not directly support iteration.\nChanged in version 3.5.2: It is a RuntimeError\nto await on a coroutine more than once.\n- coroutine.send(value)\u00b6\nStarts or resumes execution of the coroutine. If value is\nNone\n, this is equivalent to advancing the iterator returned by__await__()\n. If value is notNone\n, this method delegates to thesend()\nmethod of the iterator that caused the coroutine to suspend. The result (return value,StopIteration\n, or other exception) is the same as when iterating over the__await__()\nreturn value, described above.\n- coroutine.throw(value)\u00b6\n- coroutine.throw(type[, value[, traceback]])\nRaises the specified exception in the coroutine. This method delegates to the\nthrow()\nmethod of the iterator that caused the coroutine to suspend, if it has such a method. Otherwise, the exception is raised at the suspension point. The result (return value,StopIteration\n, or other exception) is the same as when iterating over the__await__()\nreturn value, described above. If the exception is not caught in the coroutine, it propagates back to the caller.Changed in version 3.12: The second signature (type[, value[, traceback]]) is deprecated and may be removed in a future version of Python.\n- coroutine.close()\u00b6\nCauses the coroutine to clean itself up and exit. If the coroutine is suspended, this method first delegates to the\nclose()\nmethod of the iterator that caused the coroutine to suspend, if it has such a method. Then it raisesGeneratorExit\nat the suspension point, causing the coroutine to immediately clean itself up. Finally, the coroutine is marked as having finished executing, even if it was never started.Coroutine objects are automatically closed using the above process when they are about to be destroyed.\n3.4.3. 
Asynchronous Iterators\u00b6\nAn asynchronous iterator can call asynchronous code in\nits __anext__\nmethod.\nAsynchronous iterators can be used in an async for\nstatement.\nThe object\nclass itself does not provide these methods.\n- object.__aiter__(self)\u00b6\nMust return an asynchronous iterator object.\n- object.__anext__(self)\u00b6\nMust return an awaitable resulting in a next value of the iterator. Should raise a\nStopAsyncIteration\nerror when the iteration is over.\nAn example of an asynchronous iterable object:\nclass Reader:\nasync def readline(self):\n...\ndef __aiter__(self):\nreturn self\nasync def __anext__(self):\nval = await self.readline()\nif val == b'':\nraise StopAsyncIteration\nreturn val\nAdded in version 3.5.\nChanged in version 3.7: Prior to Python 3.7, __aiter__()\ncould return an awaitable\nthat would resolve to an\nasynchronous iterator.\nStarting with Python 3.7, __aiter__()\nmust return an\nasynchronous iterator object. Returning anything else\nwill result in a TypeError\nerror.\n3.4.4. 
Asynchronous Context Managers\u00b6\nAn asynchronous context manager is a context manager that is able to\nsuspend execution in its __aenter__\nand __aexit__\nmethods.\nAsynchronous context managers can be used in an async with\nstatement.\nThe object\nclass itself does not provide these methods.\n- object.__aenter__(self)\u00b6\nSemantically similar to\n__enter__()\n, the only difference being that it must return an awaitable.\n- object.__aexit__(self, exc_type, exc_value, traceback)\u00b6\nSemantically similar to\n__exit__()\n, the only difference being that it must return an awaitable.\nAn example of an asynchronous context manager class:\nclass AsyncContextManager:\nasync def __aenter__(self):\nawait log('entering context')\nasync def __aexit__(self, exc_type, exc, tb):\nawait log('exiting context')\nAdded in version 3.5.\nFootnotes", "code_snippets": ["\n ", " ", " ", " ", "\n", "\n", " ", "\n\n", "\n ", "\n ", " ", "\n\n ", " ", " ", "\n ", "\n ", " ", "\n\n", " ", " ", "\n", "\n ", " ", " ", " ", "\n ", "\n ", " ", " ", "\n\n", " ", "\n ", "\n", "\n ", " ", " ", " ", "\n", "\n ", "\n\n", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", "\n", "\n ", "\n\n", "\n ", "\n\n", "\n ", "\n", " ", "\n\n", " ", "\n", "\n\n ", " ", " ", "\n\n ", "\n ", "\n ", " ", " ", "\n ", " ", " ", "\n\n ", "\n ", "\n ", " ", " ", " ", " ", "\n ", " ", "\n\n ", "\n ", "\n ", " ", "\n ", "\n ", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", "\n", " ", " ", " ", "\n", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", " ", " ", " ", "\n", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", "\n", " ", "\n", " ", " ", "\n", "\n", " ", "\n", " 
", "\n", " ", " ", "\n", " ", "\n", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", "\n", " ", "\n", "\n", "\n", " ", "\n", "\n", "\n ", " ", "\n ", "\n\n ", "\n ", " ", "\n\n ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", "\n", "\n ", " ", "\n ", " ", "\n\n ", " ", " ", " ", " ", "\n ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 27042} +{"url": "https://docs.python.org/3/reference/expressions.html", "title": "Expressions", "content": "6. Expressions\u00b6\nThis chapter explains the meaning of the elements of expressions in Python.\nSyntax Notes: In this and the following chapters, grammar notation will be used to describe syntax, not lexical analysis.\nWhen (one alternative of) a syntax rule has the form:\nname: othername\nand no semantics are given, the semantics of this form of name\nare the same\nas for othername\n.\n6.1. Arithmetic conversions\u00b6\nWhen a description of an arithmetic operator below uses the phrase \u201cthe numeric arguments are converted to a common real type\u201d, this means that the operator implementation for built-in numeric types works as described in the Numeric Types section of the standard library documentation.\nSome additional rules apply for certain operators and non-numeric operands\n(for example, a string as a left argument to the %\noperator).\nExtensions must define their own conversion behavior.\n6.2. Atoms\u00b6\nAtoms are the most basic elements of expressions. The simplest atoms are names or literals. Forms enclosed in parentheses, brackets or braces are also categorized syntactically as atoms.\nFormally, the syntax for atoms is:\natom: | 'True' | 'False' | 'None' | '...' |identifier\n|literal\n|enclosure\nenclosure: |parenth_form\n|list_display\n|dict_display\n|set_display\n|generator_expression\n|yield_atom\n6.2.1. 
Built-in constants\u00b6\nThe keywords True\n, False\n, and None\nname\nbuilt-in constants.\nThe token ...\nnames the Ellipsis\nconstant.\nEvaluation of these atoms yields the corresponding value.\nNote\nSeveral more built-in constants are available as global variables, but only the ones mentioned here are keywords. In particular, these names cannot be reassigned or used as attributes:\n>>> False = 123\nFile \"\", line 1\nFalse = 123\n^^^^^\nSyntaxError: cannot assign to False\n6.2.2. Identifiers (Names)\u00b6\nAn identifier occurring as an atom is a name. See section Names (identifiers and keywords) for lexical definition and section Naming and binding for documentation of naming and binding.\nWhen the name is bound to an object, evaluation of the atom yields that object.\nWhen a name is not bound, an attempt to evaluate it raises a NameError\nexception.\n6.2.2.1. Private name mangling\u00b6\nWhen an identifier that textually occurs in a class definition begins with two or more underscore characters and does not end in two or more underscores, it is considered a private name of that class.\nSee also\nThe class specifications.\nMore precisely, private names are transformed to a longer form before code is generated for them. If the transformed name is longer than 255 characters, implementation-defined truncation may happen.\nThe transformation is independent of the syntactical context in which the identifier is used but only the following private identifiers are mangled:\nAny name used as the name of a variable that is assigned or read or any name of an attribute being accessed.\nThe\n__name__\nattribute of nested functions, classes, and type aliases is however not mangled.The name of imported modules, e.g.,\n__spam\ninimport __spam\n. 
If the module is part of a package (i.e., its name contains a dot), the name is not mangled, e.g., the__foo\ninimport __foo.bar\nis not mangled.The name of an imported member, e.g.,\n__f\ninfrom spam import __f\n.\nThe transformation rule is defined as follows:\nThe class name, with leading underscores removed and a single leading underscore inserted, is inserted in front of the identifier, e.g., the identifier\n__spam\noccurring in a class namedFoo\n,_Foo\nor__Foo\nis transformed to_Foo__spam\n.If the class name consists only of underscores, the transformation is the identity, e.g., the identifier\n__spam\noccurring in a class named_\nor__\nis left as is.\n6.2.3. Literals\u00b6\nA literal is a textual representation of a value. Python supports numeric, string and bytes literals. Format strings and template strings are treated as string literals.\nNumeric literals consist of a single NUMBER\ntoken, which names an integer, floating-point number, or an imaginary number.\nSee the Numeric literals section in Lexical analysis documentation for details.\nString and bytes literals may consist of several tokens. See section String literal concatenation for details.\nNote that negative and complex numbers, like -3\nor 3+4.2j\n,\nare syntactically not literals, but unary or\nbinary arithmetic operations involving the -\nor +\noperator.\nEvaluation of a literal yields an object of the given type\n(int\n, float\n, complex\n, str\n,\nbytes\n, or Template\n) with the given value.\nThe value may be approximated in the case of floating-point\nand imaginary literals.\nThe formal grammar for literals is:\nliteral:strings\n|NUMBER\n6.2.3.1. Literals and object identity\u00b6\nAll literals correspond to immutable data types, and hence the object\u2019s identity is less important than its value. 
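The private name mangling transformation described earlier can be checked directly (class `Foo` is invented for this sketch):

```python
# Sketch: inside class Foo, the identifier __spam is rewritten to
# _Foo__spam at compile time.
class Foo:
    def __init__(self):
        self.__spam = 42           # actually stored as _Foo__spam

f = Foo()
print(f._Foo__spam)                # 42 -- the mangled name
print(hasattr(f, "__spam"))        # False -- no mangling outside the class
```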
Multiple evaluations of literals with the same value (either the same occurrence in the program text or a different occurrence) may obtain the same object or a different object with the same value.\nCPython implementation detail\nFor example, in CPython, small integers with the same value evaluate to the same object:\n>>> x = 7\n>>> y = 7\n>>> x is y\nTrue\nHowever, large integers evaluate to different objects:\n>>> x = 123456789\n>>> y = 123456789\n>>> x is y\nFalse\nThis behavior may change in future versions of CPython. In particular, the boundary between \u201csmall\u201d and \u201clarge\u201d integers has already changed in the past.\nCPython will emit a SyntaxWarning\nwhen you compare literals\nusing is\n:\n>>> x = 7\n>>> x is 7\n:1: SyntaxWarning: \"is\" with 'int' literal. Did you mean \"==\"?\nTrue\nSee When can I rely on identity tests with the is operator? for more information.\nTemplate strings are immutable but may reference mutable\nobjects as Interpolation\nvalues.\nFor the purposes of this section, two t-strings have the \u201csame value\u201d if\nboth their structure and the identity of the values match.\nCPython implementation detail: Currently, each evaluation of a template string results in a different object.\n6.2.3.2. String literal concatenation\u00b6\nMultiple adjacent string or bytes literals, possibly using different quoting conventions, are allowed, and their meaning is the same as their concatenation:\n>>> \"hello\" 'world'\n\"helloworld\"\nThis feature is defined at the syntactical level, so it only works with literals. To concatenate string expressions at run time, the \u2018+\u2019 operator may be used:\n>>> greeting = \"Hello\"\n>>> space = \" \"\n>>> name = \"Blaise\"\n>>> print(greeting + space + name) # not: print(greeting space name)\nHello Blaise\nLiteral concatenation can freely mix raw strings, triple-quoted strings, and formatted string literals. 
For example:
>>> "Hello" r', ' f"{name}!"
'Hello, Blaise!'
This feature can be used to reduce the number of backslashes needed, to split long strings conveniently across long lines, or even to add comments to parts of strings. For example:
re.compile("[A-Za-z_]"       # letter or underscore
           "[A-Za-z0-9_]*"   # letter, digit or underscore
          )
However, bytes literals may only be combined with other bytes literals; not with string literals of any kind. Also, template string literals may only be combined with other template string literals:
>>> t"Hello" t"{name}!"
Template(strings=('Hello', '!'), interpolations=(...))
Formally:
strings: (STRING | fstring)+ | tstring+
6.2.4. Parenthesized forms¶
A parenthesized form is an optional expression list enclosed in parentheses:
parenth_form: "(" [starred_expression] ")"
A parenthesized expression list yields whatever that expression list yields: if the list contains at least one comma, it yields a tuple; otherwise, it yields the single expression that makes up the expression list.
An empty pair of parentheses yields an empty tuple object. Since tuples are immutable, the same rules as for literals apply (i.e., two occurrences of the empty tuple may or may not yield the same object).
Note that tuples are not formed by the parentheses, but rather by use of the comma. The exception is the empty tuple, for which parentheses are required — allowing unparenthesized "nothing" in expressions would cause ambiguities and allow common typos to pass uncaught.
6.2.5. 
Displays for lists, sets and dictionaries¶
For constructing a list, a set or a dictionary Python provides special syntax called "displays", each of them in two flavors:
either the container contents are listed explicitly, or
they are computed via a set of looping and filtering instructions, called a comprehension.
Common syntax elements for comprehensions are:
comprehension: assignment_expression comp_for
comp_for: ["async"] "for" target_list "in" or_test [comp_iter]
comp_iter: comp_for | comp_if
comp_if: "if" or_test [comp_iter]
The comprehension consists of a single expression followed by at least one for clause and zero or more for or if clauses. In this case, the elements of the new container are those that would be produced by considering each of the for or if clauses a block, nesting from left to right, and evaluating the expression to produce an element each time the innermost block is reached.
However, aside from the iterable expression in the leftmost for clause, the comprehension is executed in a separate implicitly nested scope. This ensures that names assigned to in the target list don't "leak" into the enclosing scope.
The iterable expression in the leftmost for clause is evaluated directly in the enclosing scope and then passed as an argument to the implicitly nested scope. Subsequent for clauses and any filter condition in the leftmost for clause cannot be evaluated in the enclosing scope as they may depend on the values obtained from the leftmost iterable.
For example:
[x*y for x in range(10) for y in range(x, x+10)].
To ensure the comprehension always results in a container of the appropriate type, yield and yield from expressions are prohibited in the implicitly nested scope.
Since Python 3.6, in an async def function, an async for clause may be used to iterate over an asynchronous iterator. A comprehension in an async def function may consist of either a for or async for clause following the leading expression, may contain additional for or async for clauses, and may also use await expressions.
If a comprehension contains async for clauses, or if it contains await expressions or other asynchronous comprehensions anywhere except the iterable expression in the leftmost for clause, it is called an asynchronous comprehension. An asynchronous comprehension may suspend the execution of the coroutine function in which it appears.
See also PEP 530.
Added in version 3.6: Asynchronous comprehensions were introduced.
Changed in version 3.8: yield and yield from prohibited in the implicitly nested scope.
Changed in version 3.11: Asynchronous comprehensions are now allowed inside comprehensions in asynchronous functions. Outer comprehensions implicitly become asynchronous.
6.2.6. List displays¶
A list display is a possibly empty series of expressions enclosed in square brackets:
list_display: "[" [flexible_expression_list | comprehension] "]"
A list display yields a new list object, the contents being specified by either a list of expressions or a comprehension. When a comma-separated list of expressions is supplied, its elements are evaluated from left to right and placed into the list object in that order. When a comprehension is supplied, the list is constructed from the elements resulting from the comprehension.
6.2.7. 
Set displays¶
A set display is denoted by curly braces and distinguishable from dictionary displays by the lack of colons separating keys and values:
set_display: "{" (flexible_expression_list | comprehension) "}"
A set display yields a new mutable set object, the contents being specified by either a sequence of expressions or a comprehension. When a comma-separated list of expressions is supplied, its elements are evaluated from left to right and added to the set object. When a comprehension is supplied, the set is constructed from the elements resulting from the comprehension.
An empty set cannot be constructed with {}; this literal constructs an empty dictionary.
6.2.8. Dictionary displays¶
A dictionary display is a possibly empty series of dict items (key/value pairs) enclosed in curly braces:
dict_display: "{" [dict_item_list | dict_comprehension] "}"
dict_item_list: dict_item ("," dict_item)* [","]
dict_item: expression ":" expression | "**" or_expr
dict_comprehension: expression ":" expression comp_for
A dictionary display yields a new dictionary object.
If a comma-separated sequence of dict items is given, they are evaluated from left to right to define the entries of the dictionary: each key object is used as a key into the dictionary to store the corresponding value. This means that you can specify the same key multiple times in the dict item list, and the final dictionary's value for that key will be the last one given.
A double asterisk ** denotes dictionary unpacking. Its operand must be a mapping. Each mapping item is added to the new dictionary.
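The ** unpacking behavior can be sketched with a short example (the names defaults and overrides are illustrative, not taken from the reference text):

```python
# Illustrative mappings whose items are unpacked into a new dictionary display.
defaults = {"host": "localhost", "port": 8080}
overrides = {"port": 9090}

# Each mapping item is added to the new dictionary; for a repeated
# key such as "port", the later value replaces the earlier one,
# while the key keeps its original insertion position.
config = {**defaults, **overrides, "debug": True}
print(config)  # {'host': 'localhost', 'port': 9090, 'debug': True}
```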
Later values replace values already set by earlier dict items and earlier dictionary unpackings.
Added in version 3.5: Unpacking into dictionary displays, originally proposed by PEP 448.
A dict comprehension, in contrast to list and set comprehensions, needs two expressions separated with a colon followed by the usual "for" and "if" clauses. When the comprehension is run, the resulting key and value elements are inserted in the new dictionary in the order they are produced.
Restrictions on the types of the key values are listed earlier in section The standard type hierarchy. (To summarize, the key type should be hashable, which excludes all mutable objects.) Clashes between duplicate keys are not detected; the last value (textually rightmost in the display) stored for a given key value prevails.
Changed in version 3.8: Prior to Python 3.8, in dict comprehensions, the evaluation order of key and value was not well-defined. In CPython, the value was evaluated before the key. Starting with 3.8, the key is evaluated before the value, as proposed by PEP 572.
6.2.9. Generator expressions¶
A generator expression is a compact generator notation in parentheses:
generator_expression: "(" expression comp_for ")"
A generator expression yields a new generator object. Its syntax is the same as for comprehensions, except that it is enclosed in parentheses instead of brackets or curly braces.
Variables used in the generator expression are evaluated lazily when the __next__() method is called for the generator object (in the same fashion as normal generators).
However, the iterable expression in the leftmost for clause is immediately evaluated, and the iterator is immediately created for that iterable, so that an error produced while creating the iterator will be emitted at the point where the generator expression is defined, rather than at the point where the first value is retrieved. Subsequent for clauses and any filter condition in the leftmost for clause cannot be evaluated in the enclosing scope as they may depend on the values obtained from the leftmost iterable. For example: (x*y for x in range(10) for y in range(x, x+10)).
The parentheses can be omitted on calls with only one argument. See section Calls for details.
To avoid interfering with the expected operation of the generator expression itself, yield and yield from expressions are prohibited in the implicitly defined generator.
If a generator expression contains either async for clauses or await expressions it is called an asynchronous generator expression. An asynchronous generator expression returns a new asynchronous generator object, which is an asynchronous iterator (see Asynchronous Iterators).
Added in version 3.6: Asynchronous generator expressions were introduced.
Changed in version 3.7: Prior to Python 3.7, asynchronous generator expressions could only appear in async def coroutines. Starting with 3.7, any function can use asynchronous generator expressions.
Changed in version 3.8: yield and yield from prohibited in the implicitly nested scope.
6.2.10. Yield expressions¶
yield_atom: "(" yield_expression ")"
yield_from: "yield" "from" expression
yield_expression: "yield" yield_list | yield_from
The yield expression is used when defining a generator function or an asynchronous generator function and thus can only be used in the body of a function definition.
Using a yield expression in a function's body causes that function to be a generator function, and using it in an async def function's body causes that coroutine function to be an asynchronous generator function. For example:
def gen():  # defines a generator function
    yield 123

async def agen():  # defines an asynchronous generator function
    yield 123
Due to their side effects on the containing scope, yield expressions are not permitted as part of the implicitly defined scopes used to implement comprehensions and generator expressions.
Changed in version 3.8: Yield expressions prohibited in the implicitly nested scopes used to implement comprehensions and generator expressions.
Generator functions are described below, while asynchronous generator functions are described separately in section Asynchronous generator functions.
When a generator function is called, it returns an iterator known as a generator. That generator then controls the execution of the generator function. The execution starts when one of the generator's methods is called. At that time, the execution proceeds to the first yield expression, where it is suspended again, returning the value of yield_list to the generator's caller, or None if yield_list is omitted. By suspended, we mean that all local state is retained, including the current bindings of local variables, the instruction pointer, the internal evaluation stack, and the state of any exception handling.
When the execution is resumed by calling one of the generator's methods, the function can proceed exactly as if the yield expression were just another external call. The value of the yield expression after resuming depends on the method which resumed the execution. If __next__() is used (typically via either a for or the next() builtin) then the result is None.
Otherwise, if send()\nis used, then\nthe result will be the value passed in to that method.\nAll of this makes generator functions quite similar to coroutines; they yield multiple times, they have more than one entry point and their execution can be suspended. The only difference is that a generator function cannot control where the execution should continue after it yields; the control is always transferred to the generator\u2019s caller.\nYield expressions are allowed anywhere in a try\nconstruct. If the\ngenerator is not resumed before it is\nfinalized (by reaching a zero reference count or by being garbage collected),\nthe generator-iterator\u2019s close()\nmethod will be called,\nallowing any pending finally\nclauses to execute.\nWhen yield from \nis used, the supplied expression must be an\niterable. The values produced by iterating that iterable are passed directly\nto the caller of the current generator\u2019s methods. Any values passed in with\nsend()\nand any exceptions passed in with\nthrow()\nare passed to the underlying iterator if it has the\nappropriate methods. If this is not the case, then send()\nwill raise AttributeError\nor TypeError\n, while\nthrow()\nwill just raise the passed in exception immediately.\nWhen the underlying iterator is complete, the value\nattribute of the raised StopIteration\ninstance becomes the value of\nthe yield expression. 
It can be either set explicitly when raising StopIteration, or automatically when the subiterator is a generator (by returning a value from the subgenerator).
Changed in version 3.3: Added yield from to delegate control flow to a subiterator.
The parentheses may be omitted when the yield expression is the sole expression on the right hand side of an assignment statement.
See also
- PEP 255 - Simple Generators
The proposal for adding generators and the yield statement to Python.
- PEP 342 - Coroutines via Enhanced Generators
The proposal to enhance the API and syntax of generators, making them usable as simple coroutines.
- PEP 380 - Syntax for Delegating to a Subgenerator
The proposal to introduce the yield_from syntax, making delegation to subgenerators easy.
- PEP 525 - Asynchronous Generators
The proposal that expanded on PEP 492 by adding generator capabilities to coroutine functions.
6.2.10.1. Generator-iterator methods¶
This subsection describes the methods of a generator iterator. They can be used to control the execution of a generator function.
Note that calling any of the generator methods below when the generator is already executing raises a ValueError exception.
- generator.__next__()¶
Starts the execution of a generator function or resumes it at the last executed yield expression. When a generator function is resumed with a __next__() method, the current yield expression always evaluates to None. The execution then continues to the next yield expression, where the generator is suspended again, and the value of the yield_list is returned to __next__()'s caller. If the generator exits without yielding another value, a StopIteration exception is raised.
This method is normally called implicitly, e.g. by a for loop, or by the built-in next() function.
- generator.send(value)¶
Resumes the execution and "sends" a value into the generator function.
The value argument becomes the result of the current yield expression. The send() method returns the next value yielded by the generator, or raises StopIteration if the generator exits without yielding another value. When send() is called to start the generator, it must be called with None as the argument, because there is no yield expression that could receive the value.
- generator.throw(value)¶
- generator.throw(type[, value[, traceback]])
Raises an exception at the point where the generator was paused, and returns the next value yielded by the generator function. If the generator exits without yielding another value, a StopIteration exception is raised. If the generator function does not catch the passed-in exception, or raises a different exception, then that exception propagates to the caller.
In typical use, this is called with a single exception instance similar to the way the raise keyword is used.
For backwards compatibility, however, the second signature is supported, following a convention from older versions of Python. The type argument should be an exception class, and value should be an exception instance. If the value is not provided, the type constructor is called to get an instance. If traceback is provided, it is set on the exception, otherwise any existing __traceback__ attribute stored in value may be cleared.
Changed in version 3.12: The second signature (type[, value[, traceback]]) is deprecated and may be removed in a future version of Python.
- generator.close()¶
Raises a GeneratorExit exception at the point where the generator function was paused (equivalent to calling throw(GeneratorExit)). The exception is raised by the yield expression where the generator was paused. If the generator function catches the exception and returns a value, this value is returned from close(). If the generator function is already closed, or raises GeneratorExit (by not catching the exception), close() returns None.
If the generator yields a value, a RuntimeError is raised. If the generator raises any other exception, it is propagated to the caller. If the generator has already exited due to an exception or normal exit, close() returns None and has no other effect.
Changed in version 3.13: If a generator returns a value upon being closed, the value is returned by close().
6.2.10.2. Examples¶
Here is a simple example that demonstrates the behavior of generators and generator functions:
>>> def echo(value=None):
...     print("Execution starts when 'next()' is called for the first time.")
...     try:
...         while True:
...             try:
...                 value = (yield value)
...             except Exception as e:
...                 value = e
...     finally:
...         print("Don't forget to clean up when 'close()' is called.")
...
>>> generator = echo(1)
>>> print(next(generator))
Execution starts when 'next()' is called for the first time.
1
>>> print(next(generator))
None
>>> print(generator.send(2))
2
>>> generator.throw(TypeError, "spam")
TypeError('spam',)
>>> generator.close()
Don't forget to clean up when 'close()' is called.
For examples using yield from, see PEP 380: Syntax for Delegating to a Subgenerator in "What's New in Python."
6.2.10.3. Asynchronous generator functions¶
The presence of a yield expression in a function or method defined using async def further defines the function as an asynchronous generator function.
When an asynchronous generator function is called, it returns an asynchronous iterator known as an asynchronous generator object. That object then controls the execution of the generator function. An asynchronous generator object is typically used in an async for statement in a coroutine function analogously to how a generator object would be used in a for statement.
Calling one of the asynchronous generator's methods returns an awaitable object, and the execution starts when this object is awaited on.
At that time, the execution proceeds to the first yield expression, where it is suspended again, returning the value of yield_list to the awaiting coroutine. As with a generator, suspension means that all local state is retained, including the current bindings of local variables, the instruction pointer, the internal evaluation stack, and the state of any exception handling.
When the execution is resumed by awaiting on the next object returned by the asynchronous generator's methods, the function can proceed exactly as if the yield expression were just another external call. The value of the yield expression after resuming depends on the method which resumed the execution. If __anext__() is used then the result is None. Otherwise, if asend() is used, then the result will be the value passed in to that method.
If an asynchronous generator happens to exit early by break, the caller task being cancelled, or other exceptions, the generator's async cleanup code will run and possibly raise exceptions or access context variables in an unexpected context, perhaps after the lifetime of tasks it depends on, or during the event loop shutdown when the async-generator garbage collection hook is called.
To prevent this, the caller must explicitly close the async generator by calling the aclose() method to finalize the generator and ultimately detach it from the event loop.
In an asynchronous generator function, yield expressions are allowed anywhere in a try construct. However, if an asynchronous generator is not resumed before it is finalized (by reaching a zero reference count or by being garbage collected), then a yield expression within a try construct could result in a failure to execute pending finally clauses.
In this case, it is the responsibility of the event loop or scheduler running the asynchronous generator to call the asynchronous generator-iterator's aclose() method and run the resulting coroutine object, thus allowing any pending finally clauses to execute.
To take care of finalization upon event loop termination, an event loop should define a finalizer function which takes an asynchronous generator-iterator and presumably calls aclose() and executes the coroutine. This finalizer may be registered by calling sys.set_asyncgen_hooks(). When first iterated over, an asynchronous generator-iterator will store the registered finalizer to be called upon finalization. For a reference example of a finalizer method see the implementation of asyncio.Loop.shutdown_asyncgens in Lib/asyncio/base_events.py.
The expression yield from is a syntax error when used in an asynchronous generator function.
6.2.10.4. Asynchronous generator-iterator methods¶
This subsection describes the methods of an asynchronous generator iterator, which are used to control the execution of a generator function.
- async agen.__anext__()¶
Returns an awaitable which when run starts to execute the asynchronous generator or resumes it at the last executed yield expression. When an asynchronous generator function is resumed with an __anext__() method, the current yield expression always evaluates to None in the returned awaitable, which when run will continue to the next yield expression. The value of the yield_list of the yield expression is the value of the StopIteration exception raised by the completing coroutine.
If the asynchronous generator exits without yielding another value, the awaitable instead raises a StopAsyncIteration exception, signalling that the asynchronous iteration has completed.
This method is normally called implicitly by an async for loop.
- async agen.asend(value)¶
Returns an awaitable which when run resumes the execution of the asynchronous generator. As with the send() method for a generator, this "sends" a value into the asynchronous generator function, and the value argument becomes the result of the current yield expression. The awaitable returned by the asend() method will return the next value yielded by the generator as the value of the raised StopIteration, or raises StopAsyncIteration if the asynchronous generator exits without yielding another value. When asend() is called to start the asynchronous generator, it must be called with None as the argument, because there is no yield expression that could receive the value.
- async agen.athrow(value)¶
- async agen.athrow(type[, value[, traceback]])
Returns an awaitable that raises an exception of type type at the point where the asynchronous generator was paused, and returns the next value yielded by the generator function as the value of the raised StopIteration exception. If the asynchronous generator exits without yielding another value, a StopAsyncIteration exception is raised by the awaitable. If the generator function does not catch the passed-in exception, or raises a different exception, then when the awaitable is run that exception propagates to the caller of the awaitable.
Changed in version 3.12: The second signature (type[, value[, traceback]]) is deprecated and may be removed in a future version of Python.
- async agen.aclose()¶
Returns an awaitable that when run will throw a GeneratorExit into the asynchronous generator function at the point where it was paused.
If the asynchronous generator function then exits gracefully, is already closed, or raises GeneratorExit (by not catching the exception), then the returned awaitable will raise a StopIteration exception. Any further awaitables returned by subsequent calls to the asynchronous generator will raise a StopAsyncIteration exception. If the asynchronous generator yields a value, a RuntimeError is raised by the awaitable. If the asynchronous generator raises any other exception, it is propagated to the caller of the awaitable. If the asynchronous generator has already exited due to an exception or normal exit, then further calls to aclose() will return an awaitable that does nothing.
6.3. Primaries¶
Primaries represent the most tightly bound operations of the language. Their syntax is:
primary: atom | attributeref | subscription | call
6.3.1. Attribute references¶
An attribute reference is a primary followed by a period and a name:
attributeref: primary "." identifier
The primary must evaluate to an object of a type that supports attribute references, which most objects do. This object is then asked to produce the attribute whose name is the identifier. The type and value produced is determined by the object. Multiple evaluations of the same attribute reference may yield different objects.
This production can be customized by overriding the __getattribute__() method or the __getattr__() method. The __getattribute__() method is called first and either returns a value or raises AttributeError if the attribute is not available.
If an AttributeError is raised and the object has a __getattr__() method, that method is called as a fallback.
6.3.2. 
Subscriptions and slicings¶
The subscription syntax is usually used for selecting an element from a container – for example, to get a value from a dict:
>>> digits_by_name = {'one': 1, 'two': 2}
>>> digits_by_name['two'] # Subscripting a dictionary using the key 'two'
2
In the subscription syntax, the object being subscribed – a primary – is followed by a subscript in square brackets. In the simplest case, the subscript is a single expression.
Depending on the type of the object being subscribed, the subscript is sometimes called a key (for mappings), index (for sequences), or type argument (for generic types). Syntactically, these are all equivalent:
>>> colors = ['red', 'blue', 'green', 'black']
>>> colors[3] # Subscripting a list using the index 3
'black'
>>> list[str] # Parameterizing the list type using the type argument str
list[str]
At runtime, the interpreter will evaluate the primary and the subscript, and call the primary's __getitem__() or __class_getitem__() special method with the subscript as argument. For more details on which of these methods is called, see __class_getitem__ versus __getitem__.
To show how subscription works, we can define a custom object that implements __getitem__() and prints out the value of the subscript:
>>> class SubscriptionDemo:
...     def __getitem__(self, key):
...         print(f'subscripted with: {key!r}')
...
>>> demo = SubscriptionDemo()
>>> demo[1]
subscripted with: 1
>>> demo['a' * 3]
subscripted with: 'aaa'
See __getitem__() documentation for how built-in types handle subscription.
Subscriptions may also be used as targets in assignment or deletion statements. In these cases, the interpreter will call the subscripted object's __setitem__() or __delitem__() special method, respectively, instead of __getitem__().
>>> colors = ['red', 'blue', 'green', 'black']
>>> colors[3] = 'white' # Setting item at index 3
>>> colors
['red', 'blue', 'green', 'white']
>>> del colors[3] # Deleting item at index 3
>>> colors
['red', 'blue', 'green']
All advanced forms of subscript documented in the following sections are also usable for assignment and deletion.
6.3.2.1. Slicings¶
A more advanced form of subscription, slicing, is commonly used to extract a portion of a sequence. In this form, the subscript is a slice: up to three expressions separated by colons.
Any of the expressions may be omitted, but a slice must contain at least one colon:\n>>> number_names = ['zero', 'one', 'two', 'three', 'four', 'five']\n>>> number_names[1:3]\n['one', 'two']\n>>> number_names[1:]\n['one', 'two', 'three', 'four', 'five']\n>>> number_names[:3]\n['zero', 'one', 'two']\n>>> number_names[:]\n['zero', 'one', 'two', 'three', 'four', 'five']\n>>> number_names[::2]\n['zero', 'two', 'four']\n>>> number_names[:-3]\n['zero', 'one', 'two']\n>>> del number_names[4:]\n>>> number_names\n['zero', 'one', 'two', 'three']\nWhen a slice is evaluated, the interpreter constructs a slice\nobject\nwhose start\n, stop\nand\nstep\nattributes, respectively, are the results of the\nexpressions between the colons.\nAny missing expression evaluates to None\n.\nThis slice\nobject is then passed to the __getitem__()\nor __class_getitem__()\nspecial method, as above.\n# continuing with the SubscriptionDemo instance defined above:\n>>> demo[2:3]\nsubscripted with: slice(2, 3, None)\n>>> demo[::'spam']\nsubscripted with: slice(None, None, 'spam')\n6.3.2.2. Comma-separated subscripts\u00b6\nThe subscript can also be given as two or more comma-separated expressions or slices:\n# continuing with the SubscriptionDemo instance defined above:\n>>> demo[1, 2, 3]\nsubscripted with: (1, 2, 3)\n>>> demo[1:2, 3]\nsubscripted with: (slice(1, 2, None), 3)\nThis form is commonly used with numerical libraries for slicing\nmulti-dimensional data.\nIn this case, the interpreter constructs a tuple\nof the results of the\nexpressions or slices, and passes this tuple to the __getitem__()\nor __class_getitem__()\nspecial method, as above.\nThe subscript may also be given as a single expression or slice followed by a comma, to specify a one-element tuple:\n>>> demo['spam',]\nsubscripted with: ('spam',)\n6.3.2.3. \u201cStarred\u201d subscriptions\u00b6\nAdded in version 3.11: Expressions in tuple_slices may be starred. 
See PEP 646.
The subscript can also contain a starred expression. In this case, the interpreter unpacks the result into a tuple, and passes this tuple to __getitem__() or __class_getitem__():
# continuing with the SubscriptionDemo instance defined above:
>>> demo[*range(10)]
subscripted with: (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
Starred expressions may be combined with comma-separated expressions and slices:
>>> demo['a', 'b', *range(3), 'c']
subscripted with: ('a', 'b', 0, 1, 2, 'c')
6.3.2.4. Formal subscription grammar¶
subscription: primary "[" subscript "]"
subscript: single_subscript | tuple_subscript
single_subscript: proper_slice | assignment_expression
proper_slice: [expression] ":" [expression] [":" [expression]]
tuple_subscript: ','.(single_subscript | starred_expression)+ [',']
Recall that the | operator denotes ordered choice. Specifically, in subscript, if both alternatives would match, the first (single_subscript) has priority.
6.3.3. Calls¶
A call calls a callable object (e.g., a function) with a possibly empty series of arguments:
call: primary "(" [argument_list [","] | comprehension] ")"
argument_list: positional_arguments ["," starred_and_keywords] ["," keywords_arguments]
| starred_and_keywords ["," keywords_arguments]
| keywords_arguments
positional_arguments: positional_item ("," positional_item)*
positional_item: assignment_expression | "*" expression
starred_and_keywords: ("*" expression | keyword_item) ("," "*" expression | "," keyword_item)*
keywords_arguments: (keyword_item | "**" expression) ("," keyword_item | "," "**" expression)*
keyword_item: identifier "=" expression
An optional trailing comma may be present after the positional and keyword arguments but does not affect the semantics.
The primary must evaluate to a callable object (user-defined functions, built-in functions, methods of built-in objects, class objects, methods of 
class\ninstances, and all objects having a __call__()\nmethod are callable). All\nargument expressions are evaluated before the call is attempted. Please refer\nto section Function definitions for the syntax of formal parameter lists.\nIf keyword arguments are present, they are first converted to positional\narguments, as follows. First, a list of unfilled slots is created for the\nformal parameters. If there are N positional arguments, they are placed in the\nfirst N slots. Next, for each keyword argument, the identifier is used to\ndetermine the corresponding slot (if the identifier is the same as the first\nformal parameter name, the first slot is used, and so on). If the slot is\nalready filled, a TypeError\nexception is raised. Otherwise, the\nargument is placed in the slot, filling it (even if the expression is\nNone\n, it fills the slot). When all arguments have been processed, the slots\nthat are still unfilled are filled with the corresponding default value from the\nfunction definition. (Default values are calculated, once, when the function is\ndefined; thus, a mutable object such as a list or dictionary used as default\nvalue will be shared by all calls that don\u2019t specify an argument value for the\ncorresponding slot; this should usually be avoided.) If there are any unfilled\nslots for which no default value is specified, a TypeError\nexception is\nraised. Otherwise, the list of filled slots is used as the argument list for\nthe call.\nCPython implementation detail: An implementation may provide built-in functions whose positional parameters\ndo not have names, even if they are \u2018named\u2019 for the purpose of documentation,\nand which therefore cannot be supplied by keyword. 
In CPython, this is the\ncase for functions implemented in C that use PyArg_ParseTuple()\nto\nparse their arguments.\nIf there are more positional arguments than there are formal parameter slots, a\nTypeError\nexception is raised, unless a formal parameter using the syntax\n*identifier\nis present; in this case, that formal parameter receives a tuple\ncontaining the excess positional arguments (or an empty tuple if there were no\nexcess positional arguments).\nIf any keyword argument does not correspond to a formal parameter name, a\nTypeError\nexception is raised, unless a formal parameter using the syntax\n**identifier\nis present; in this case, that formal parameter receives a\ndictionary containing the excess keyword arguments (using the keywords as keys\nand the argument values as corresponding values), or a (new) empty dictionary if\nthere were no excess keyword arguments.\nIf the syntax *expression\nappears in the function call, expression\nmust\nevaluate to an iterable. Elements from these iterables are\ntreated as if they were additional positional arguments. For the call\nf(x1, x2, *y, x3, x4)\n, if y evaluates to a sequence y1, \u2026, yM,\nthis is equivalent to a call with M+4 positional arguments x1, x2,\ny1, \u2026, yM, x3, x4.\nA consequence of this is that although the *expression\nsyntax may appear\nafter explicit keyword arguments, it is processed before the\nkeyword arguments (and any **expression\narguments \u2013 see below). So:\n>>> def f(a, b):\n... 
print(a, b)\n...\n>>> f(b=1, *(2,))\n2 1\n>>> f(a=1, *(2,))\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: f() got multiple values for keyword argument 'a'\n>>> f(1, *(2,))\n1 2\nIt is unusual for both keyword arguments and the *expression\nsyntax to be\nused in the same call, so in practice this confusion does not often arise.\nIf the syntax **expression\nappears in the function call, expression\nmust\nevaluate to a mapping, the contents of which are treated as\nadditional keyword arguments. If a parameter matching a key has already been\ngiven a value (by an explicit keyword argument, or from another unpacking),\na TypeError\nexception is raised.\nWhen **expression\nis used, each key in this mapping must be\na string.\nEach value from the mapping is assigned to the first formal parameter\neligible for keyword assignment whose name is equal to the key.\nA key need not be a Python identifier (e.g. \"max-temp \u00b0F\"\nis acceptable,\nalthough it will not match any formal parameter that could be declared).\nIf there is no match to a formal parameter\nthe key-value pair is collected by the **\nparameter, if there is one,\nor if there is not, a TypeError\nexception is raised.\nFormal parameters using the syntax *identifier\nor **identifier\ncannot be\nused as positional argument slots or as keyword argument names.\nChanged in version 3.5: Function calls accept any number of *\nand **\nunpackings,\npositional arguments may follow iterable unpackings (*\n),\nand keyword arguments may follow dictionary unpackings (**\n).\nOriginally proposed by PEP 448.\nA call always returns some value, possibly None\n, unless it raises an\nexception. How this value is computed depends on the type of the callable\nobject.\nIf it is\u2014\n- a user-defined function:\nThe code block for the function is executed, passing it the argument list. 
The first thing the code block will do is bind the formal parameters to the arguments; this is described in section Function definitions. When the code block executes a\nreturn\nstatement, this specifies the return value of the function call. If execution reaches the end of the code block without executing areturn\nstatement, the return value isNone\n.- a built-in function or method:\nThe result is up to the interpreter; see Built-in Functions for the descriptions of built-in functions and methods.\n- a class object:\nA new instance of that class is returned.\n- a class instance method:\nThe corresponding user-defined function is called, with an argument list that is one longer than the argument list of the call: the instance becomes the first argument.\n- a class instance:\nThe class must define a\n__call__()\nmethod; the effect is then the same as if that method was called.\n6.4. Await expression\u00b6\nSuspend the execution of coroutine on an awaitable object. Can only be used inside a coroutine function.\nawait_expr: \"await\" primary\nAdded in version 3.5.\n6.5. The power operator\u00b6\nThe power operator binds more tightly than unary operators on its left; it binds less tightly than unary operators on its right. The syntax is:\npower: (await_expr\n|primary\n) [\"**\"u_expr\n]\nThus, in an unparenthesized sequence of power and unary operators, the operators\nare evaluated from right to left (this does not constrain the evaluation order\nfor the operands): -1**2\nresults in -1\n.\nThe power operator has the same semantics as the built-in pow()\nfunction,\nwhen called with two arguments: it yields its left argument raised to the power\nof its right argument.\nNumeric arguments are first converted to a common type,\nand the result is of that type.\nFor int operands, the result has the same type as the operands unless the second\nargument is negative; in that case, all arguments are converted to float and a\nfloat result is delivered. 
For example, 10**2\nreturns 100\n, but\n10**-2\nreturns 0.01\n.\nRaising 0.0\nto a negative power results in a ZeroDivisionError\n.\nRaising a negative number to a fractional power results in a complex\nnumber. (In earlier versions it raised a ValueError\n.)\nThis operation can be customized using the special __pow__()\nand\n__rpow__()\nmethods.\n6.6. Unary arithmetic and bitwise operations\u00b6\nAll unary arithmetic and bitwise operations have the same priority:\nu_expr:power\n| \"-\"u_expr\n| \"+\"u_expr\n| \"~\"u_expr\nThe unary -\n(minus) operator yields the negation of its numeric argument; the\noperation can be overridden with the __neg__()\nspecial method.\nThe unary +\n(plus) operator yields its numeric argument unchanged; the\noperation can be overridden with the __pos__()\nspecial method.\nThe unary ~\n(invert) operator yields the bitwise inversion of its integer\nargument. The bitwise inversion of x\nis defined as -(x+1)\n. It only\napplies to integral numbers or to custom objects that override the\n__invert__()\nspecial method.\nIn all three cases, if the argument does not have the proper type, a\nTypeError\nexception is raised.\n6.7. Binary arithmetic operations\u00b6\nThe binary arithmetic operations have the conventional priority levels. Note that some of these operations also apply to certain non-numeric types. Apart from the power operator, there are only two levels, one for multiplicative operators and one for additive operators:\nm_expr:u_expr\n|m_expr\n\"*\"u_expr\n|m_expr\n\"@\"m_expr\n|m_expr\n\"//\"u_expr\n|m_expr\n\"/\"u_expr\n|m_expr\n\"%\"u_expr\na_expr:m_expr\n|a_expr\n\"+\"m_expr\n|a_expr\n\"-\"m_expr\nThe *\n(multiplication) operator yields the product of its arguments. The\narguments must either both be numbers, or one argument must be an integer and\nthe other must be a sequence. In the former case, the numbers are\nconverted to a common real type and then\nmultiplied together. 
In the latter case, sequence repetition is performed;\na negative repetition factor yields an empty sequence.\nThis operation can be customized using the special __mul__()\nand\n__rmul__()\nmethods.\nChanged in version 3.14: If only one operand is a complex number, the other operand is converted to a floating-point number.\nThe @\n(at) operator is intended to be used for matrix multiplication. No\nbuiltin Python types implement this operator.\nThis operation can be customized using the special __matmul__()\nand\n__rmatmul__()\nmethods.\nAdded in version 3.5.\nThe /\n(division) and //\n(floor division) operators yield the quotient of\ntheir arguments. The numeric arguments are first\nconverted to a common type.\nDivision of integers yields a float, while floor division of integers results in an\ninteger; the result is that of mathematical division with the \u2018floor\u2019 function\napplied to the result. Division by zero raises the ZeroDivisionError\nexception.\nThe division operation can be customized using the special __truediv__()\nand __rtruediv__()\nmethods.\nThe floor division operation can be customized using the special\n__floordiv__()\nand __rfloordiv__()\nmethods.\nThe %\n(modulo) operator yields the remainder from the division of the first\nargument by the second. The numeric arguments are first\nconverted to a common type.\nA zero right argument raises the ZeroDivisionError\nexception. The\narguments may be floating-point numbers, e.g., 3.14%0.7\nequals 0.34\n(since 3.14\nequals 4*0.7 + 0.34\n.) The modulo operator always yields a\nresult with the same sign as its second operand (or zero); the absolute value of\nthe result is strictly smaller than the absolute value of the second operand\n[1].\nThe floor division and modulo operators are connected by the following\nidentity: x == (x//y)*y + (x%y)\n. Floor division and modulo are also\nconnected with the built-in function divmod()\n: divmod(x, y) == (x//y,\nx%y)\n. 
[2].\nIn addition to performing the modulo operation on numbers, the %\noperator is\nalso overloaded by string objects to perform old-style string formatting (also\nknown as interpolation). The syntax for string formatting is described in the\nPython Library Reference, section printf-style String Formatting.\nThe modulo operation can be customized using the special __mod__()\nand __rmod__()\nmethods.\nThe floor division operator, the modulo operator, and the divmod()\nfunction are not defined for complex numbers. Instead, convert to a\nfloating-point number using the abs()\nfunction if appropriate.\nThe +\n(addition) operator yields the sum of its arguments. The arguments\nmust either both be numbers or both be sequences of the same type. In the\nformer case, the numbers are\nconverted to a common real type and then\nadded together.\nIn the latter case, the sequences are concatenated.\nThis operation can be customized using the special __add__()\nand\n__radd__()\nmethods.\nChanged in version 3.14: If only one operand is a complex number, the other operand is converted to a floating-point number.\nThe -\n(subtraction) operator yields the difference of its arguments.\nThe numeric arguments are first\nconverted to a common real type.\nThis operation can be customized using the special __sub__()\nand\n__rsub__()\nmethods.\nChanged in version 3.14: If only one operand is a complex number, the other operand is converted to a floating-point number.\n6.8. Shifting operations\u00b6\nThe shifting operations have lower priority than the arithmetic operations:\nshift_expr:a_expr\n|shift_expr\n(\"<<\" | \">>\")a_expr\nThese operators accept integers as arguments. 
They shift the first argument to the left or right by the number of bits given by the second argument.\nThe left shift operation can be customized using the special __lshift__()\nand __rlshift__()\nmethods.\nThe right shift operation can be customized using the special __rshift__()\nand __rrshift__()\nmethods.\nA right shift by n bits is defined as floor division by pow(2,n)\n. A left\nshift by n bits is defined as multiplication with pow(2,n)\n.\n6.9. Binary bitwise operations\u00b6\nEach of the three bitwise operations has a different priority level:\nand_expr:shift_expr\n|and_expr\n\"&\"shift_expr\nxor_expr:and_expr\n|xor_expr\n\"^\"and_expr\nor_expr:xor_expr\n|or_expr\n\"|\"xor_expr\nThe &\noperator yields the bitwise AND of its arguments, which must be\nintegers or one of them must be a custom object overriding __and__()\nor\n__rand__()\nspecial methods.\nThe ^\noperator yields the bitwise XOR (exclusive OR) of its arguments, which\nmust be integers or one of them must be a custom object overriding __xor__()\nor\n__rxor__()\nspecial methods.\nThe |\noperator yields the bitwise (inclusive) OR of its arguments, which\nmust be integers or one of them must be a custom object overriding __or__()\nor\n__ror__()\nspecial methods.\n6.10. Comparisons\u00b6\nUnlike C, all comparison operations in Python have the same priority, which is\nlower than that of any arithmetic, shifting or bitwise operation. Also unlike\nC, expressions like a < b < c\nhave the interpretation that is conventional\nin mathematics:\ncomparison:or_expr\n(comp_operator\nor_expr\n)* comp_operator: \"<\" | \">\" | \"==\" | \">=\" | \"<=\" | \"!=\" | \"is\" [\"not\"] | [\"not\"] \"in\"\nComparisons yield boolean values: True\nor False\n. Custom\nrich comparison methods may return non-boolean values. 
In this case\nPython will call bool()\non such value in boolean contexts.\nComparisons can be chained arbitrarily, e.g., x < y <= z\nis equivalent to\nx < y and y <= z\n, except that y\nis evaluated only once (but in both\ncases z\nis not evaluated at all when x < y\nis found to be false).\nFormally, if a, b, c, \u2026, y, z are expressions and op1, op2, \u2026,\nopN are comparison operators, then a op1 b op2 c ... y opN z\nis equivalent\nto a op1 b and b op2 c and ... y opN z\n, except that each expression is\nevaluated at most once.\nNote that a op1 b op2 c\ndoesn\u2019t imply any kind of comparison between a and\nc, so that, e.g., x < y > z\nis perfectly legal (though perhaps not\npretty).\n6.10.1. Value comparisons\u00b6\nThe operators <\n, >\n, ==\n, >=\n, <=\n, and !=\ncompare the\nvalues of two objects. The objects do not need to have the same type.\nChapter Objects, values and types states that objects have a value (in addition to type and identity). The value of an object is a rather abstract notion in Python: For example, there is no canonical access method for an object\u2019s value. Also, there is no requirement that the value of an object should be constructed in a particular way, e.g. comprised of all its data attributes. Comparison operators implement a particular notion of what the value of an object is. One can think of them as defining the value of an object indirectly, by means of their comparison implementation.\nBecause all types are (direct or indirect) subtypes of object\n, they\ninherit the default comparison behavior from object\n. Types can\ncustomize their comparison behavior by implementing\nrich comparison methods like __lt__()\n, described in\nBasic customization.\nThe default behavior for equality comparison (==\nand !=\n) is based on\nthe identity of the objects. Hence, equality comparison of instances with the\nsame identity results in equality, and equality comparison of instances with\ndifferent identities results in inequality. 
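
The identity-based default equality described above can be demonstrated with a small class (the class name and attribute are illustrative, not from the source; no `__eq__` is defined, so `object`'s default applies):

```python
class Opaque:
    """No __eq__ defined, so object's identity-based default applies."""
    def __init__(self, payload):
        self.payload = payload

a = Opaque(1)
b = Opaque(1)

# Same attribute values, but distinct identities -> unequal by default.
print(a == b)   # False
print(a == a)   # True: identical objects compare equal
print(a != b)   # True: the default __ne__ is the negation of __eq__
```

Defining `__eq__` (and usually `__hash__`) on the class is what replaces this identity-based default with value-based equality.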
A motivation for this default\nbehavior is the desire that all objects should be reflexive (i.e. x is y\nimplies x == y\n).\nA default order comparison (<\n, >\n, <=\n, and >=\n) is not provided;\nan attempt raises TypeError\n. A motivation for this default behavior is\nthe lack of a similar invariant as for equality.\nThe behavior of the default equality comparison, that instances with different identities are always unequal, may be in contrast to what types will need that have a sensible definition of object value and value-based equality. Such types will need to customize their comparison behavior, and in fact, a number of built-in types have done that.\nThe following list describes the comparison behavior of the most important built-in types.\nNumbers of built-in numeric types (Numeric Types \u2014 int, float, complex) and of the standard library types\nfractions.Fraction\nanddecimal.Decimal\ncan be compared within and across their types, with the restriction that complex numbers do not support order comparison. Within the limits of the types involved, they compare mathematically (algorithmically) correct without loss of precision.The not-a-number values\nfloat('NaN')\nanddecimal.Decimal('NaN')\nare special. Any ordered comparison of a number to a not-a-number value is false. A counter-intuitive implication is that not-a-number values are not equal to themselves. For example, ifx = float('NaN')\n,3 < x\n,x < 3\nandx == x\nare all false, whilex != x\nis true. This behavior is compliant with IEEE 754.None\nandNotImplemented\nare singletons. PEP 8 advises that comparisons for singletons should always be done withis\noris not\n, never the equality operators.Binary sequences (instances of\nbytes\norbytearray\n) can be compared within and across their types. 
They compare lexicographically using the numeric values of their elements.Strings (instances of\nstr\n) compare lexicographically using the numerical Unicode code points (the result of the built-in functionord()\n) of their characters. [3]Strings and binary sequences cannot be directly compared.\nSequences (instances of\ntuple\n,list\n, orrange\n) can be compared only within each of their types, with the restriction that ranges do not support order comparison. Equality comparison across these types results in inequality, and ordering comparison across these types raisesTypeError\n.Sequences compare lexicographically using comparison of corresponding elements. The built-in containers typically assume identical objects are equal to themselves. That lets them bypass equality tests for identical objects to improve performance and to maintain their internal invariants.\nLexicographical comparison between built-in collections works as follows:\nFor two collections to compare equal, they must be of the same type, have the same length, and each pair of corresponding elements must compare equal (for example,\n[1,2] == (1,2)\nis false because the type is not the same).Collections that support order comparison are ordered the same as their first unequal elements (for example,\n[1,2,x] <= [1,2,y]\nhas the same value asx <= y\n). If a corresponding element does not exist, the shorter collection is ordered first (for example,[1,2] < [1,2,3]\nis true).\nMappings (instances of\ndict\n) compare equal if and only if they have equal(key, value)\npairs. Equality comparison of the keys and values enforces reflexivity.Order comparisons (\n<\n,>\n,<=\n, and>=\n) raiseTypeError\n.Sets (instances of\nset\norfrozenset\n) can be compared within and across their types.They define order comparison operators to mean subset and superset tests. 
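
The subset/superset meaning of set order comparisons can be sketched as follows (variable names are illustrative):

```python
s, t = {1, 2}, {2, 3}

# Order comparisons on sets are subset / superset tests.
print({1} < s)         # True: proper subset
print(s <= {1, 2, 3})  # True: subset
print(s >= {2})        # True: superset

# {1, 2} and {2, 3} are incomparable: not equal, and neither
# is a subset of the other, so every order comparison is False.
print(s < t, s > t, s == t)  # False False False
```
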
Those relations do not define total orderings (for example, the two sets\n{1,2}\nand{2,3}\nare not equal, nor subsets of one another, nor supersets of one another). Accordingly, sets are not appropriate arguments for functions which depend on total ordering (for example,min()\n,max()\n, andsorted()\nproduce undefined results given a list of sets as inputs).Comparison of sets enforces reflexivity of its elements.\nMost other built-in types have no comparison methods implemented, so they inherit the default comparison behavior.\nUser-defined classes that customize their comparison behavior should follow some consistency rules, if possible:\nEquality comparison should be reflexive. In other words, identical objects should compare equal:\nx is y\nimpliesx == y\nComparison should be symmetric. In other words, the following expressions should have the same result:\nx == y\nandy == x\nx != y\nandy != x\nx < y\nandy > x\nx <= y\nandy >= x\nComparison should be transitive. The following (non-exhaustive) examples illustrate that:\nx > y and y > z\nimpliesx > z\nx < y and y <= z\nimpliesx < z\nInverse comparison should result in the boolean negation. In other words, the following expressions should have the same result:\nx == y\nandnot x != y\nx < y\nandnot x >= y\n(for total ordering)x > y\nandnot x <= y\n(for total ordering)The last two expressions apply to totally ordered collections (e.g. to sequences, but not to sets or mappings). See also the\ntotal_ordering()\ndecorator.The\nhash()\nresult should be consistent with equality. Objects that are equal should either have the same hash value, or be marked as unhashable.\nPython does not enforce these consistency rules. In fact, the not-a-number values are an example for not following these rules.\n6.10.2. Membership test operations\u00b6\nThe operators in\nand not in\ntest for membership. x in\ns\nevaluates to True\nif x is a member of s, and False\notherwise.\nx not in s\nreturns the negation of x in s\n. 
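
A few membership tests across the container types discussed here (values are illustrative):

```python
nums = [1, 2, 3]
print(2 in nums)      # True
print(5 not in nums)  # True

# For dictionaries, `in` tests keys, not values.
d = {'a': 1}
print('a' in d)  # True
print(1 in d)    # False: 1 is a value, not a key

# For strings, `in` is a substring test; the empty string
# is a substring of every string.
print('bc' in 'abc')  # True
print('' in 'abc')    # True
```
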
All built-in sequences and\nset types support this as well as dictionary, for which in\ntests\nwhether the dictionary has a given key. For container types such as list, tuple,\nset, frozenset, dict, or collections.deque, the expression x in y\nis equivalent\nto any(x is e or x == e for e in y)\n.\nFor the string and bytes types, x in y\nis True\nif and only if x is a\nsubstring of y. An equivalent test is y.find(x) != -1\n. Empty strings are\nalways considered to be a substring of any other string, so \"\" in \"abc\"\nwill\nreturn True\n.\nFor user-defined classes which define the __contains__()\nmethod, x in\ny\nreturns True\nif y.__contains__(x)\nreturns a true value, and\nFalse\notherwise.\nFor user-defined classes which do not define __contains__()\nbut do define\n__iter__()\n, x in y\nis True\nif some value z\n, for which the\nexpression x is z or x == z\nis true, is produced while iterating over y\n.\nIf an exception is raised during the iteration, it is as if in\nraised\nthat exception.\nLastly, the old-style iteration protocol is tried: if a class defines\n__getitem__()\n, x in y\nis True\nif and only if there is a non-negative\ninteger index i such that x is y[i] or x == y[i]\n, and no lower integer index\nraises the IndexError\nexception. (If any other exception is raised, it is as\nif in\nraised that exception).\nThe operator not in\nis defined to have the inverse truth value of\nin\n.\n6.10.3. Identity comparisons\u00b6\nThe operators is\nand is not\ntest for an object\u2019s identity: x\nis y\nis true if and only if x and y are the same object. An Object\u2019s identity\nis determined using the id()\nfunction. x is not y\nyields the inverse\ntruth value. [4]\n6.11. 
Boolean operations\u00b6\nor_test:and_test\n|or_test\n\"or\"and_test\nand_test:not_test\n|and_test\n\"and\"not_test\nnot_test:comparison\n| \"not\"not_test\nIn the context of Boolean operations, and also when expressions are used by\ncontrol flow statements, the following values are interpreted as false:\nFalse\n, None\n, numeric zero of all types, and empty strings and containers\n(including strings, tuples, lists, dictionaries, sets and frozensets). All\nother values are interpreted as true. User-defined objects can customize their\ntruth value by providing a __bool__()\nmethod.\nThe operator not\nyields True\nif its argument is false, False\notherwise.\nThe expression x and y\nfirst evaluates x; if x is false, its value is\nreturned; otherwise, y is evaluated and the resulting value is returned.\nThe expression x or y\nfirst evaluates x; if x is true, its value is\nreturned; otherwise, y is evaluated and the resulting value is returned.\nNote that neither and\nnor or\nrestrict the value and type\nthey return to False\nand True\n, but rather return the last evaluated\nargument. This is sometimes useful, e.g., if s\nis a string that should be\nreplaced by a default value if it is empty, the expression s or 'foo'\nyields\nthe desired value. Because not\nhas to create a new value, it\nreturns a boolean value regardless of the type of its argument\n(for example, not 'foo'\nproduces False\nrather than ''\n.)\n6.12. 
Assignment expressions\u00b6\nassignment_expression: [identifier\n\":=\"]expression\nAn assignment expression (sometimes also called a \u201cnamed expression\u201d or\n\u201cwalrus\u201d) assigns an expression\nto an\nidentifier\n, while also returning the value of the\nexpression\n.\nOne common use case is when handling matched regular expressions:\nif matching := pattern.search(data):\ndo_something(matching)\nOr, when processing a file stream in chunks:\nwhile chunk := file.read(9000):\nprocess(chunk)\nAssignment expressions must be surrounded by parentheses when\nused as expression statements and when used as sub-expressions in\nslicing, conditional, lambda,\nkeyword-argument, and comprehension-if expressions and\nin assert\n, with\n, and assignment\nstatements.\nIn all other places where they can be used, parentheses are not required,\nincluding in if\nand while\nstatements.\nAdded in version 3.8: See PEP 572 for more details about assignment expressions.\n6.13. Conditional expressions\u00b6\nconditional_expression:or_test\n[\"if\"or_test\n\"else\"expression\n] expression:conditional_expression\n|lambda_expr\nA conditional expression (sometimes called a \u201cternary operator\u201d) is an alternative to the if-else statement. As it is an expression, it returns a value and can appear as a sub-expression.\nThe expression x if C else y\nfirst evaluates the condition, C rather than x.\nIf C is true, x is evaluated and its value is returned; otherwise, y is\nevaluated and its value is returned.\nSee PEP 308 for more details about conditional expressions.\n6.14. Lambdas\u00b6\nlambda_expr: \"lambda\" [parameter_list\n] \":\"expression\nLambda expressions (sometimes called lambda forms) are used to create anonymous\nfunctions. The expression lambda parameters: expression\nyields a function\nobject. The unnamed object behaves like a function object defined with:\ndef (parameters):\nreturn expression\nSee section Function definitions for the syntax of parameter lists. 
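
The equivalence between a lambda expression and its `def` form can be sketched like this (function names are illustrative):

```python
# A lambda and its def equivalent behave the same when called.
square = lambda x: x * x

def square_def(x):
    return x * x

print(square(4), square_def(4))  # 16 16

# Lambdas are ordinary function objects; default parameter
# values work exactly as in a def statement.
scale = lambda x, factor=10: x * factor
print(scale(3))  # 30
```
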
Note that functions created with lambda expressions cannot contain statements or annotations.\n6.15. Expression lists\u00b6\nstarred_expression: \"*\"or_expr\n|expression\nflexible_expression:assignment_expression\n|starred_expression\nflexible_expression_list:flexible_expression\n(\",\"flexible_expression\n)* [\",\"] starred_expression_list:starred_expression\n(\",\"starred_expression\n)* [\",\"] expression_list:expression\n(\",\"expression\n)* [\",\"] yield_list:expression_list\n|starred_expression\n\",\" [starred_expression_list\n]\nExcept when part of a list or set display, an expression list containing at least one comma yields a tuple. The length of the tuple is the number of expressions in the list. The expressions are evaluated from left to right.\nAn asterisk *\ndenotes iterable unpacking. Its operand must be\nan iterable. The iterable is expanded into a sequence of items,\nwhich are included in the new tuple, list, or set, at the site of\nthe unpacking.\nAdded in version 3.5: Iterable unpacking in expression lists, originally proposed by PEP 448.\nAdded in version 3.11: Any item in an expression list may be starred. See PEP 646.\nA trailing comma is required only to create a one-item tuple,\nsuch as 1,\n; it is optional in all other cases.\nA single expression without a\ntrailing comma doesn\u2019t create a tuple, but rather yields the value of that\nexpression. (To create an empty tuple, use an empty pair of parentheses:\n()\n.)\n6.16. Evaluation order\u00b6\nPython evaluates expressions from left to right. Notice that while evaluating an assignment, the right-hand side is evaluated before the left-hand side.\nIn the following lines, expressions will be evaluated in the arithmetic order of their suffixes:\nexpr1, expr2, expr3, expr4\n(expr1, expr2, expr3, expr4)\n{expr1: expr2, expr3: expr4}\nexpr1 + expr2 * (expr3 - expr4)\nexpr1(expr2, expr3, *expr4, **expr5)\nexpr3, expr4 = expr1, expr2\n6.17. 
Operator precedence\u00b6\nThe following table summarizes the operator precedence in Python, from highest precedence (most binding) to lowest precedence (least binding). Operators in the same box have the same precedence. Unless the syntax is explicitly given, operators are binary. Operators in the same box group left to right (except for exponentiation and conditional expressions, which group from right to left).\nNote that comparisons, membership tests, and identity tests, all have the same precedence and have a left-to-right chaining feature as described in the Comparisons section.\nOperator | Description\n---|---\n(expressions...), [expressions...], {key: value...}, {expressions...} | Binding or parenthesized expression, list display, dictionary display, set display\nx[index], x[index:index], x(arguments...), x.attribute | Subscription (including slicing), call, attribute reference\nawait x | Await expression\n** | Exponentiation [5]\n+x, -x, ~x | Positive, negative, bitwise NOT\n*, @, /, //, % | Multiplication, matrix multiplication, division, floor division, remainder [6]\n+, - | Addition and subtraction\n<<, >> | Shifts\n& | Bitwise AND\n^ | Bitwise XOR\n| | Bitwise OR\nin, not in, is, is not, <, <=, >, >=, !=, == | Comparisons, including membership tests and identity tests\nnot x | Boolean NOT\nand | Boolean AND\nor | Boolean OR\nif \u2013 else | Conditional expression\nlambda | Lambda expression\n:= | Assignment expression\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 16880} +{"url": "https://docs.python.org/3/", "title": "Python 3.14.3 documentation", "content": "Python 3.14.3 documentation\nWelcome! 
This is the official documentation for Python 3.14.3.\nDocumentation sections:\n|\nWhat's new in Python 3.14?\nOr all \"What's new\" documents since Python 2.0\nTutorial\nStart here: a tour of Python's syntax and features\nLibrary reference\nStandard library and builtins\nLanguage reference\nSyntax and language elements\nPython setup and usage\nHow to install, configure, and use Python\nPython HOWTOs\nIn-depth topic manuals\n|\nInstalling Python modules\nThird-party modules and PyPI.org\nDistributing Python modules\nPublishing modules for use by other people\nExtending and embedding\nFor C/C++ programmers\nPython's C API\nC API reference\nFAQs\nFrequently asked questions (with answers!)\nDeprecations\nDeprecated functionality\n|\nIndices, glossary, and search:\n|\nGlobal module index\nAll modules and libraries\nGeneral index\nAll functions, classes, and terms\nGlossary\nTerms explained\n|\nSearch page\nSearch this documentation\nComplete table of contents\nLists all sections and subsections\n|\nProject information:", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 255} +{"url": "https://docs.python.org/3/c-api/extension-modules.html", "title": "Defining extension modules", "content": "Defining extension modules\u00b6\nA C extension for CPython is a shared library (for example, a .so\nfile\non Linux, .pyd\nDLL on Windows), which is loadable into the Python process\n(for example, it is compiled with compatible compiler settings), and which\nexports an initialization function.\nTo be importable by default (that is, by\nimportlib.machinery.ExtensionFileLoader\n),\nthe shared library must be available on sys.path\n,\nand must be named after the module name plus an extension listed in\nimportlib.machinery.EXTENSION_SUFFIXES\n.\nNote\nBuilding, packaging and distributing extension modules is best done with third-party tools, and is out of scope of this document. 
One suitable tool is Setuptools, whose documentation can be found at https://setuptools.pypa.io/en/latest/setuptools.html.\nNormally, the initialization function returns a module definition initialized using PyModuleDef_Init().\nThis allows splitting the creation process into several phases:\nBefore any substantial code is executed, Python can determine which capabilities the module supports, and it can adjust the environment or refuse loading an incompatible extension.\nBy default, Python itself creates the module object \u2013 that is, it does the equivalent of object.__new__() for classes. It also sets initial attributes like __package__ and __loader__.\nAfterwards, the module object is initialized using extension-specific code \u2013 the equivalent of __init__() on classes.\nThis is called multi-phase initialization to distinguish it from the legacy (but still supported) single-phase initialization scheme, where the initialization function returns a fully constructed module. See the single-phase-initialization section below for details.\nChanged in version 3.5: Added support for multi-phase initialization (PEP 489).\nMultiple module instances\u00b6\nBy default, extension modules are not singletons.\nFor example, if the sys.modules entry is removed and the module is re-imported, a new module object is created, and typically populated with fresh method and type objects.\nThe old module is subject to normal garbage collection.\nThis mirrors the behavior of pure-Python modules.\nAdditional module instances may be created in sub-interpreters or after Python runtime reinitialization (Py_Finalize() and Py_Initialize()).\nIn these cases, sharing Python objects between module instances would likely cause crashes or undefined behavior.\nTo avoid such issues, each instance of an extension module should be isolated: changes to one instance should not implicitly affect the others, and all state owned by the module, including references to Python objects, 
should be specific to a particular module instance. See Isolating Extension Modules for more details and a practical guide.\nA simpler way to avoid these issues is raising an error on repeated initialization.\nAll modules are expected to support sub-interpreters, or otherwise explicitly signal a lack of support.\nThis is usually achieved by isolation or blocking repeated initialization, as above.\nA module may also be limited to the main interpreter using the Py_mod_multiple_interpreters slot.\nInitialization function\u00b6\nThe initialization function defined by an extension module has the following signature:\nPyObject *PyInit_modulename(void)\nIts name should be PyInit_<name>, with <name> replaced by the name of the module.\nFor modules with ASCII-only names, the function must be named PyInit_<name>, with <name> replaced by the name of the module.\nWhen using Multi-phase initialization, non-ASCII module names are allowed. In this case, the initialization function name is PyInitU_<name>, with <name> encoded using Python\u2019s punycode encoding with hyphens replaced by underscores. In Python:\ndef initfunc_name(name):\n    try:\n        suffix = b'_' + name.encode('ascii')\n    except UnicodeEncodeError:\n        suffix = b'U_' + name.encode('punycode').replace(b'-', b'_')\n    return b'PyInit' + suffix\nIt is recommended to define the initialization function using a helper macro:\n-\nPyMODINIT_FUNC\u00b6\nDeclare an extension module initialization function. This macro:\nspecifies the PyObject* return type,\nadds any special linkage declarations required by the platform, and\nfor C++, declares the function as extern \"C\".\nFor example, a module called spam would be defined like this:\nstatic struct PyModuleDef spam_module = {\n    .m_base = PyModuleDef_HEAD_INIT,\n    .m_name = \"spam\",\n    ...\n};\nPyMODINIT_FUNC\nPyInit_spam(void)\n{\n    return PyModuleDef_Init(&spam_module);\n}\nIt is possible to export multiple modules from a single shared library by defining multiple initialization functions. 
However, importing them requires using symbolic links or a custom importer, because by default only the function corresponding to the filename is found. See the Multiple modules in one library section in PEP 489 for details.\nThe initialization function is typically the only non-static item defined in the module\u2019s C source.\nMulti-phase initialization\u00b6\nNormally, the initialization function (PyInit_modulename) returns a PyModuleDef instance with non-NULL m_slots.\nBefore it is returned, the PyModuleDef instance must be initialized using the following function:\n-\nPyObject *PyModuleDef_Init(PyModuleDef *def)\u00b6\n- Part of the Stable ABI since version 3.5.\nEnsure a module definition is a properly initialized Python object that correctly reports its type and reference count.\nReturn def cast to PyObject*, or NULL if an error occurred.\nCalling this function is required for Multi-phase initialization. It should not be used in other contexts.\nNote that Python assumes that PyModuleDef structures are statically allocated. This function may return either a new reference or a borrowed one; this reference must not be released.\nAdded in version 3.5.\nLegacy single-phase initialization\u00b6\nAttention\nSingle-phase initialization is a legacy mechanism to initialize extension modules, with known drawbacks and design flaws. 
Extension module authors are encouraged to use multi-phase initialization instead.\nIn single-phase initialization, the initialization function (PyInit_modulename) should create, populate and return a module object.\nThis is typically done using PyModule_Create() and functions like PyModule_AddObjectRef().\nSingle-phase initialization differs from the default in the following ways:\nSingle-phase modules are, or rather contain, \u201csingletons\u201d.\nWhen the module is first initialized, Python saves the contents of the module\u2019s __dict__ (that is, typically, the module\u2019s functions and types).\nFor subsequent imports, Python does not call the initialization function again. Instead, it creates a new module object with a new __dict__, and copies the saved contents to it. For example, given a single-phase module _testsinglephase [1] that defines a function sum and an exception class error:\n>>> import sys\n>>> import _testsinglephase as one\n>>> del sys.modules['_testsinglephase']\n>>> import _testsinglephase as two\n>>> one is two\nFalse\n>>> one.__dict__ is two.__dict__\nFalse\n>>> one.sum is two.sum\nTrue\n>>> one.error is two.error\nTrue\nThe exact behavior should be considered a CPython implementation detail.\nTo work around the fact that PyInit_modulename does not take a spec argument, some state of the import machinery is saved and applied to the first suitable module created during the PyInit_modulename call. 
Specifically, when a sub-module is imported, this mechanism prepends the parent package name to the name of the module.\nA single-phase PyInit_modulename function should create \u201cits\u201d module object as soon as possible, before any other module objects can be created.\nNon-ASCII module names (PyInitU_modulename) are not supported.\nSingle-phase modules support module lookup functions like PyState_FindModule().", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1915} +{"url": "https://docs.python.org/3/about.html", "title": "About this documentation", "content": "About this documentation\u00b6\nPython\u2019s documentation is generated from reStructuredText sources using Sphinx, a documentation generator originally created for Python and now maintained as an independent project.\nDevelopment of the documentation and its toolchain is an entirely volunteer effort, just like Python itself. If you want to contribute, please take a look at the Dealing with Bugs page for information on how to do so. New volunteers are always welcome!\nMany thanks go to:\nFred L. Drake, Jr., the creator of the original Python documentation toolset and author of much of the content;\nthe Docutils project for creating reStructuredText and the Docutils suite;\nFredrik Lundh for his Alternative Python Reference project from which Sphinx got many good ideas.\nContributors to the Python documentation\u00b6\nMany people have contributed to the Python language, the Python standard library, and the Python documentation. See Misc/ACKS in the Python source distribution for a partial list of contributors.\nIt is only with the input and contributions of the Python community that Python has such wonderful documentation \u2013 Thank You!", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 282} +{"url": "https://docs.python.org/3/extending/newtypes.html", "title": "Defining Extension Types: Assorted Topics", "content": "3. 
Defining Extension Types: Assorted Topics\u00b6\nThis section aims to give a quick fly-by on the various type methods you can implement and what they do.\nHere is the definition of PyTypeObject\n, with some fields only used in\ndebug builds omitted:\ntypedef struct _typeobject {\nPyObject_VAR_HEAD\nconst char *tp_name; /* For printing, in format \".\" */\nPy_ssize_t tp_basicsize, tp_itemsize; /* For allocation */\n/* Methods to implement standard operations */\ndestructor tp_dealloc;\nPy_ssize_t tp_vectorcall_offset;\ngetattrfunc tp_getattr;\nsetattrfunc tp_setattr;\nPyAsyncMethods *tp_as_async; /* formerly known as tp_compare (Python 2)\nor tp_reserved (Python 3) */\nreprfunc tp_repr;\n/* Method suites for standard classes */\nPyNumberMethods *tp_as_number;\nPySequenceMethods *tp_as_sequence;\nPyMappingMethods *tp_as_mapping;\n/* More standard operations (here for binary compatibility) */\nhashfunc tp_hash;\nternaryfunc tp_call;\nreprfunc tp_str;\ngetattrofunc tp_getattro;\nsetattrofunc tp_setattro;\n/* Functions to access object as input/output buffer */\nPyBufferProcs *tp_as_buffer;\n/* Flags to define presence of optional/expanded features */\nunsigned long tp_flags;\nconst char *tp_doc; /* Documentation string */\n/* Assigned meaning in release 2.0 */\n/* call function for all accessible objects */\ntraverseproc tp_traverse;\n/* delete references to contained objects */\ninquiry tp_clear;\n/* Assigned meaning in release 2.1 */\n/* rich comparisons */\nrichcmpfunc tp_richcompare;\n/* weak reference enabler */\nPy_ssize_t tp_weaklistoffset;\n/* Iterators */\ngetiterfunc tp_iter;\niternextfunc tp_iternext;\n/* Attribute descriptor and subclassing stuff */\nPyMethodDef *tp_methods;\nPyMemberDef *tp_members;\nPyGetSetDef *tp_getset;\n// Strong reference on a heap type, borrowed reference on a static type\nPyTypeObject *tp_base;\nPyObject *tp_dict;\ndescrgetfunc tp_descr_get;\ndescrsetfunc tp_descr_set;\nPy_ssize_t tp_dictoffset;\ninitproc tp_init;\nallocfunc 
tp_alloc;\nnewfunc tp_new;\nfreefunc tp_free; /* Low-level free-memory routine */\ninquiry tp_is_gc; /* For PyObject_IS_GC */\nPyObject *tp_bases;\nPyObject *tp_mro; /* method resolution order */\nPyObject *tp_cache; /* no longer used */\nvoid *tp_subclasses; /* for static builtin types this is an index */\nPyObject *tp_weaklist; /* not used for static builtin types */\ndestructor tp_del;\n/* Type attribute cache version tag. Added in version 2.6.\n* If zero, the cache is invalid and must be initialized.\n*/\nunsigned int tp_version_tag;\ndestructor tp_finalize;\nvectorcallfunc tp_vectorcall;\n/* bitset of which type-watchers care about this type */\nunsigned char tp_watched;\n/* Number of tp_version_tag values used.\n* Set to _Py_ATTR_CACHE_UNUSED if the attribute cache is\n* disabled for this type (e.g. due to custom MRO entries).\n* Otherwise, limited to MAX_VERSIONS_PER_CLASS (defined elsewhere).\n*/\nuint16_t tp_versions_used;\n} PyTypeObject;\nNow that\u2019s a lot of methods. Don\u2019t worry too much though \u2013 if you have a type you want to define, the chances are very good that you will only implement a handful of these.\nAs you probably expect by now, we\u2019re going to go over this and give more information about the various handlers. We won\u2019t go in the order they are defined in the structure, because there is a lot of historical baggage that impacts the ordering of the fields. It\u2019s often easiest to find an example that includes the fields you need and then change the values to suit your new type.\nconst char *tp_name; /* For printing */\nThe name of the type \u2013 as mentioned in the previous chapter, this will appear in various places, almost entirely for diagnostic purposes. Try to choose something that will be helpful in such a situation!\nPy_ssize_t tp_basicsize, tp_itemsize; /* For allocation */\nThese fields tell the runtime how much memory to allocate when new objects of\nthis type are created. 
Python has some built-in support for variable length\nstructures (think: strings, tuples) which is where the tp_itemsize\nfield\ncomes in. This will be dealt with later.\nconst char *tp_doc;\nHere you can put a string (or its address) that you want returned when the\nPython script references obj.__doc__\nto retrieve the doc string.\nNow we come to the basic type methods \u2013 the ones most extension types will implement.\n3.1. Finalization and De-allocation\u00b6\ndestructor tp_dealloc;\nThis function is called when the reference count of the instance of your type is reduced to zero and the Python interpreter wants to reclaim it. If your type has memory to free or other clean-up to perform, you can put it here. The object itself needs to be freed here as well. Here is an example of this function:\nstatic void\nnewdatatype_dealloc(PyObject *op)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nfree(self->obj_UnderlyingDatatypePtr);\nPy_TYPE(self)->tp_free(self);\n}\nIf your type supports garbage collection, the destructor should call\nPyObject_GC_UnTrack()\nbefore clearing any member fields:\nstatic void\nnewdatatype_dealloc(PyObject *op)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nPyObject_GC_UnTrack(op);\nPy_CLEAR(self->other_obj);\n...\nPy_TYPE(self)->tp_free(self);\n}\nOne important requirement of the deallocator function is that it leaves any\npending exceptions alone. This is important since deallocators are frequently\ncalled as the interpreter unwinds the Python stack; when the stack is unwound\ndue to an exception (rather than normal returns), nothing is done to protect the\ndeallocators from seeing that an exception has already been set. Any actions\nwhich a deallocator performs which may cause additional Python code to be\nexecuted may detect that an exception has been set. This can lead to misleading\nerrors from the interpreter. 
The proper way to protect against this is to save\na pending exception before performing the unsafe action, and restoring it when\ndone. This can be done using the PyErr_Fetch()\nand\nPyErr_Restore()\nfunctions:\nstatic void\nmy_dealloc(PyObject *obj)\n{\nMyObject *self = (MyObject *) obj;\nPyObject *cbresult;\nif (self->my_callback != NULL) {\nPyObject *err_type, *err_value, *err_traceback;\n/* This saves the current exception state */\nPyErr_Fetch(&err_type, &err_value, &err_traceback);\ncbresult = PyObject_CallNoArgs(self->my_callback);\nif (cbresult == NULL) {\nPyErr_WriteUnraisable(self->my_callback);\n}\nelse {\nPy_DECREF(cbresult);\n}\n/* This restores the saved exception state */\nPyErr_Restore(err_type, err_value, err_traceback);\nPy_DECREF(self->my_callback);\n}\nPy_TYPE(self)->tp_free(self);\n}\nNote\nThere are limitations to what you can safely do in a deallocator function.\nFirst, if your type supports garbage collection (using tp_traverse\nand/or tp_clear\n), some of the object\u2019s members can have been\ncleared or finalized by the time tp_dealloc\nis called. Second, in\ntp_dealloc\n, your object is in an unstable state: its reference\ncount is equal to zero. Any call to a non-trivial object or API (as in the\nexample above) might end up calling tp_dealloc\nagain, causing a\ndouble free and a crash.\nStarting with Python 3.4, it is recommended not to put any complex\nfinalization code in tp_dealloc\n, and instead use the new\ntp_finalize\ntype method.\nSee also\nPEP 442 explains the new finalization scheme.\n3.2. Object Presentation\u00b6\nIn Python, there are two ways to generate a textual representation of an object:\nthe repr()\nfunction, and the str()\nfunction. (The print()\nfunction just calls str()\n.) These handlers are both optional.\nreprfunc tp_repr;\nreprfunc tp_str;\nThe tp_repr\nhandler should return a string object containing a\nrepresentation of the instance for which it is called. 
Here is a simple\nexample:\nstatic PyObject *\nnewdatatype_repr(PyObject *op)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nreturn PyUnicode_FromFormat(\"Repr-ified_newdatatype{{size:%d}}\",\nself->obj_UnderlyingDatatypePtr->size);\n}\nIf no tp_repr\nhandler is specified, the interpreter will supply a\nrepresentation that uses the type\u2019s tp_name\nand a uniquely identifying\nvalue for the object.\nThe tp_str\nhandler is to str()\nwhat the tp_repr\nhandler\ndescribed above is to repr()\n; that is, it is called when Python code calls\nstr()\non an instance of your object. Its implementation is very similar\nto the tp_repr\nfunction, but the resulting string is intended for human\nconsumption. If tp_str\nis not specified, the tp_repr\nhandler is\nused instead.\nHere is a simple example:\nstatic PyObject *\nnewdatatype_str(PyObject *op)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nreturn PyUnicode_FromFormat(\"Stringified_newdatatype{{size:%d}}\",\nself->obj_UnderlyingDatatypePtr->size);\n}\n3.3. Attribute Management\u00b6\nFor every object which can support attributes, the corresponding type must\nprovide the functions that control how the attributes are resolved. There needs\nto be a function which can retrieve attributes (if any are defined), and another\nto set attributes (if setting attributes is allowed). Removing an attribute is\na special case, for which the new value passed to the handler is NULL\n.\nPython supports two pairs of attribute handlers; a type that supports attributes only needs to implement the functions for one pair. The difference is that one pair takes the name of the attribute as a char*, while the other accepts a PyObject*. Each type can use whichever pair makes more sense for the implementation\u2019s convenience.\ngetattrfunc tp_getattr; /* char * version */\nsetattrfunc tp_setattr;\n/* ... 
*/\ngetattrofunc tp_getattro; /* PyObject * version */\nsetattrofunc tp_setattro;\nIf accessing attributes of an object is always a simple operation (this will be explained shortly), there are generic implementations which can be used to provide the PyObject* version of the attribute management functions. The actual need for type-specific attribute handlers almost completely disappeared starting with Python 2.2, though there are many examples which have not been updated to use some of the new generic mechanism that is available.\n3.3.1. Generic Attribute Management\u00b6\nMost extension types only use simple attributes. So, what makes the attributes simple? There are only a couple of conditions that must be met:\nThe name of the attributes must be known when\nPyType_Ready()\nis called.No special processing is needed to record that an attribute was looked up or set, nor do actions need to be taken based on the value.\nNote that this list does not place any restrictions on the values of the attributes, when the values are computed, or how relevant data is stored.\nWhen PyType_Ready()\nis called, it uses three tables referenced by the\ntype object to create descriptors which are placed in the dictionary of the\ntype object. Each descriptor controls access to one attribute of the instance\nobject. Each of the tables is optional; if all three are NULL\n, instances of\nthe type will only have attributes that are inherited from their base type, and\nshould leave the tp_getattro\nand tp_setattro\nfields NULL\nas\nwell, allowing the base type to handle attributes.\nThe tables are declared as three fields of the type object:\nstruct PyMethodDef *tp_methods;\nstruct PyMemberDef *tp_members;\nstruct PyGetSetDef *tp_getset;\nIf tp_methods\nis not NULL\n, it must refer to an array of\nPyMethodDef\nstructures. 
Each entry in the table is an instance of this\nstructure:\ntypedef struct PyMethodDef {\nconst char *ml_name; /* method name */\nPyCFunction ml_meth; /* implementation function */\nint ml_flags; /* flags */\nconst char *ml_doc; /* docstring */\n} PyMethodDef;\nOne entry should be defined for each method provided by the type; no entries are\nneeded for methods inherited from a base type. One additional entry is needed\nat the end; it is a sentinel that marks the end of the array. The\nml_name\nfield of the sentinel must be NULL\n.\nThe second table is used to define attributes which map directly to data stored in the instance. A variety of primitive C types are supported, and access may be read-only or read-write. The structures in the table are defined as:\ntypedef struct PyMemberDef {\nconst char *name;\nint type;\nint offset;\nint flags;\nconst char *doc;\n} PyMemberDef;\nFor each entry in the table, a descriptor will be constructed and added to the\ntype which will be able to extract a value from the instance structure. The\ntype\nfield should contain a type code like Py_T_INT\nor\nPy_T_DOUBLE\n; the value will be used to determine how to\nconvert Python values to and from C values. The flags\nfield is used to\nstore flags which control how the attribute can be accessed: you can set it to\nPy_READONLY\nto prevent Python code from setting it.\nAn interesting advantage of using the tp_members\ntable to build\ndescriptors that are used at runtime is that any attribute defined this way can\nhave an associated doc string simply by providing the text in the table. An\napplication can use the introspection API to retrieve the descriptor from the\nclass object, and get the doc string using its __doc__\nattribute.\nAs with the tp_methods\ntable, a sentinel entry with a ml_name\nvalue\nof NULL\nis required.\n3.3.2. 
Type-specific Attribute Management\u00b6\nFor simplicity, only the char* version will be demonstrated here; the type of the name parameter is the only difference between the char* and PyObject* flavors of the interface. This example effectively does the same thing as the generic example above, but does not use the generic support added in Python 2.2. It explains how the handler functions are called, so that if you do need to extend their functionality, you\u2019ll understand what needs to be done.\nThe tp_getattr\nhandler is called when the object requires an attribute\nlook-up. It is called in the same situations where the __getattr__()\nmethod of a class would be called.\nHere is an example:\nstatic PyObject *\nnewdatatype_getattr(PyObject *op, char *name)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nif (strcmp(name, \"data\") == 0) {\nreturn PyLong_FromLong(self->data);\n}\nPyErr_Format(PyExc_AttributeError,\n\"'%.100s' object has no attribute '%.400s'\",\nPy_TYPE(self)->tp_name, name);\nreturn NULL;\n}\nThe tp_setattr\nhandler is called when the __setattr__()\nor\n__delattr__()\nmethod of a class instance would be called. When an\nattribute should be deleted, the third parameter will be NULL\n. Here is an\nexample that simply raises an exception; if this were really all you wanted, the\ntp_setattr\nhandler should be set to NULL\n.\nstatic int\nnewdatatype_setattr(PyObject *op, char *name, PyObject *v)\n{\nPyErr_Format(PyExc_RuntimeError, \"Read-only attribute: %s\", name);\nreturn -1;\n}\n3.4. Object Comparison\u00b6\nrichcmpfunc tp_richcompare;\nThe tp_richcompare\nhandler is called when comparisons are needed. It is\nanalogous to the rich comparison methods, like\n__lt__()\n, and also called by PyObject_RichCompare()\nand\nPyObject_RichCompareBool()\n.\nThis function is called with two Python objects and the operator as arguments,\nwhere the operator is one of Py_EQ\n, Py_NE\n, Py_LE\n, Py_GE\n,\nPy_LT\nor Py_GT\n. 
It should compare the two objects with respect to the\nspecified operator and return Py_True\nor Py_False\nif the comparison is\nsuccessful, Py_NotImplemented\nto indicate that comparison is not\nimplemented and the other object\u2019s comparison method should be tried, or NULL\nif an exception was set.\nHere is a sample implementation, for a datatype that is considered equal if the size of an internal pointer is equal:\nstatic PyObject *\nnewdatatype_richcmp(PyObject *lhs, PyObject *rhs, int op)\n{\nnewdatatypeobject *obj1 = (newdatatypeobject *) lhs;\nnewdatatypeobject *obj2 = (newdatatypeobject *) rhs;\nPyObject *result;\nint c, size1, size2;\n/* code to make sure that both arguments are of type\nnewdatatype omitted */\nsize1 = obj1->obj_UnderlyingDatatypePtr->size;\nsize2 = obj2->obj_UnderlyingDatatypePtr->size;\nswitch (op) {\ncase Py_LT: c = size1 < size2; break;\ncase Py_LE: c = size1 <= size2; break;\ncase Py_EQ: c = size1 == size2; break;\ncase Py_NE: c = size1 != size2; break;\ncase Py_GT: c = size1 > size2; break;\ncase Py_GE: c = size1 >= size2; break;\n}\nresult = c ? Py_True : Py_False;\nreturn Py_NewRef(result);\n}\n3.5. Abstract Protocol Support\u00b6\nPython supports a variety of abstract \u2018protocols;\u2019 the specific interfaces provided to use these interfaces are documented in Abstract Objects Layer.\nA number of these abstract interfaces were defined early in the development of\nthe Python implementation. In particular, the number, mapping, and sequence\nprotocols have been part of Python since the beginning. Other protocols have\nbeen added over time. For protocols which depend on several handler routines\nfrom the type implementation, the older protocols have been defined as optional\nblocks of handlers referenced by the type object. For newer protocols there are\nadditional slots in the main type object, with a flag bit being set to indicate\nthat the slots are present and should be checked by the interpreter. 
(The flag\nbit does not indicate that the slot values are non-NULL\n. The flag may be set\nto indicate the presence of a slot, but a slot may still be unfilled.)\nPyNumberMethods *tp_as_number;\nPySequenceMethods *tp_as_sequence;\nPyMappingMethods *tp_as_mapping;\nIf you wish your object to be able to act like a number, a sequence, or a\nmapping object, then you place the address of a structure that implements the C\ntype PyNumberMethods\n, PySequenceMethods\n, or\nPyMappingMethods\n, respectively. It is up to you to fill in this\nstructure with appropriate values. You can find examples of the use of each of\nthese in the Objects\ndirectory of the Python source distribution.\nhashfunc tp_hash;\nThis function, if you choose to provide it, should return a hash number for an instance of your data type. Here is a simple example:\nstatic Py_hash_t\nnewdatatype_hash(PyObject *op)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nPy_hash_t result;\nresult = self->some_size + 32767 * self->some_number;\nif (result == -1) {\nresult = -2;\n}\nreturn result;\n}\nPy_hash_t\nis a signed integer type with a platform-varying width.\nReturning -1\nfrom tp_hash\nindicates an error,\nwhich is why you should be careful to avoid returning it when hash computation\nis successful, as seen above.\nternaryfunc tp_call;\nThis function is called when an instance of your data type is \u201ccalled\u201d, for\nexample, if obj1\nis an instance of your data type and the Python script\ncontains obj1('hello')\n, the tp_call\nhandler is invoked.\nThis function takes three arguments:\nself is the instance of the data type which is the subject of the call. If the call is\nobj1('hello')\n, then self isobj1\n.args is a tuple containing the arguments to the call. You can use\nPyArg_ParseTuple()\nto extract the arguments.kwds is a dictionary of keyword arguments that were passed. If this is non-\nNULL\nand you support keyword arguments, usePyArg_ParseTupleAndKeywords()\nto extract the arguments. 
If you do not want to support keyword arguments and this is non-NULL\n, raise aTypeError\nwith a message saying that keyword arguments are not supported.\nHere is a toy tp_call\nimplementation:\nstatic PyObject *\nnewdatatype_call(PyObject *op, PyObject *args, PyObject *kwds)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nPyObject *result;\nconst char *arg1;\nconst char *arg2;\nconst char *arg3;\nif (!PyArg_ParseTuple(args, \"sss:call\", &arg1, &arg2, &arg3)) {\nreturn NULL;\n}\nresult = PyUnicode_FromFormat(\n\"Returning -- value: [%d] arg1: [%s] arg2: [%s] arg3: [%s]\\n\",\nself->obj_UnderlyingDatatypePtr->size,\narg1, arg2, arg3);\nreturn result;\n}\n/* Iterators */\ngetiterfunc tp_iter;\niternextfunc tp_iternext;\nThese functions provide support for the iterator protocol. Both handlers\ntake exactly one parameter, the instance for which they are being called,\nand return a new reference. In the case of an error, they should set an\nexception and return NULL\n. tp_iter\ncorresponds\nto the Python __iter__()\nmethod, while tp_iternext\ncorresponds to the Python __next__()\nmethod.\nAny iterable object must implement the tp_iter\nhandler, which must return an iterator object. Here the same guidelines\napply as for Python classes:\nFor collections (such as lists and tuples) which can support multiple independent iterators, a new iterator should be created and returned by each call to\ntp_iter\n.Objects which can only be iterated over once (usually due to side effects of iteration, such as file objects) can implement\ntp_iter\nby returning a new reference to themselves \u2013 and should also therefore implement thetp_iternext\nhandler.\nAny iterator object should implement both tp_iter\nand tp_iternext\n. An iterator\u2019s\ntp_iter\nhandler should return a new reference\nto the iterator. 
Its tp_iternext\nhandler should\nreturn a new reference to the next object in the iteration, if there is one.\nIf the iteration has reached the end, tp_iternext\nmay return NULL\nwithout setting an exception, or it may set\nStopIteration\nin addition to returning NULL\n; avoiding\nthe exception can yield slightly better performance. If an actual error\noccurs, tp_iternext\nshould always set an exception\nand return NULL\n.\n3.6. Weak Reference Support\u00b6\nOne of the goals of Python\u2019s weak reference implementation is to allow any type to participate in the weak reference mechanism without incurring the overhead on performance-critical objects (such as numbers).\nSee also\nDocumentation for the weakref\nmodule.\nFor an object to be weakly referenceable, the extension type must set the\nPy_TPFLAGS_MANAGED_WEAKREF\nbit of the tp_flags\nfield. The legacy tp_weaklistoffset\nfield should\nbe left as zero.\nConcretely, here is how the statically declared type object would look:\nstatic PyTypeObject TrivialType = {\nPyVarObject_HEAD_INIT(NULL, 0)\n/* ... other members omitted for brevity ... */\n.tp_flags = Py_TPFLAGS_MANAGED_WEAKREF | ...,\n};\nThe only further addition is that tp_dealloc\nneeds to clear any weak\nreferences (by calling PyObject_ClearWeakRefs()\n):\nstatic void\nTrivial_dealloc(PyObject *op)\n{\n/* Clear weakrefs first before calling any destructors */\nPyObject_ClearWeakRefs(op);\n/* ... remainder of destruction code omitted for brevity ... */\nPy_TYPE(op)->tp_free(op);\n}\n3.7. More Suggestions\u00b6\nIn order to learn how to implement any specific method for your new data type,\nget the CPython source code. Go to the Objects\ndirectory,\nthen search the C source files for tp_\nplus the function you want\n(for example, tp_richcompare\n). You will find examples of the function\nyou want to implement.\nWhen you need to verify that an object is a concrete instance of the type you\nare implementing, use the PyObject_TypeCheck()\nfunction. 
A sample of\nits use might be something like the following:\nif (!PyObject_TypeCheck(some_object, &MyType)) {\nPyErr_SetString(PyExc_TypeError, \"arg #1 not a mything\");\nreturn NULL;\n}\nSee also\n- Download CPython source releases.\n- The CPython project on GitHub, where the CPython source code is developed.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 5673} +{"url": "https://docs.python.org/3/extending/newtypes_tutorial.html", "title": "Defining Extension Types: Tutorial", "content": "2. Defining Extension Types: Tutorial\u00b6\nPython allows the writer of a C extension module to define new types that\ncan be manipulated from Python code, much like the built-in str\nand list\ntypes. The code for all extension types follows a\npattern, but there are some details that you need to understand before you\ncan get started. This document is a gentle introduction to the topic.\n2.1. The Basics\u00b6\nThe CPython runtime sees all Python objects as variables of type\nPyObject*, which serves as a \u201cbase type\u201d for all Python objects.\nThe PyObject\nstructure itself only contains the object\u2019s\nreference count and a pointer to the object\u2019s \u201ctype object\u201d.\nThis is where the action is; the type object determines which (C) functions\nget called by the interpreter when, for instance, an attribute gets looked up\non an object, a method called, or it is multiplied by another object. These\nC functions are called \u201ctype methods\u201d.\nSo, if you want to define a new extension type, you need to create a new type object.\nThis sort of thing can only be explained by example, so here\u2019s a minimal, but\ncomplete, module that defines a new type named Custom\ninside a C\nextension module custom\n:\nNote\nWhat we\u2019re showing here is the traditional way of defining static\nextension types. It should be adequate for most uses. 
The C API also\nallows defining heap-allocated extension types using the\nPyType_FromSpec()\nfunction, which isn\u2019t covered in this tutorial.\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\ntypedef struct {\nPyObject_HEAD\n/* Type-specific fields go here. */\n} CustomObject;\nstatic PyTypeObject CustomType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"custom.Custom\",\n.tp_doc = PyDoc_STR(\"Custom objects\"),\n.tp_basicsize = sizeof(CustomObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT,\n.tp_new = PyType_GenericNew,\n};\nstatic int\ncustom_module_exec(PyObject *m)\n{\nif (PyType_Ready(&CustomType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"Custom\", (PyObject *) &CustomType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot custom_module_slots[] = {\n{Py_mod_exec, custom_module_exec},\n// Just use this while using static types\n{Py_mod_multiple_interpreters, Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},\n{0, NULL}\n};\nstatic PyModuleDef custom_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"custom\",\n.m_doc = \"Example module that creates an extension type.\",\n.m_size = 0,\n.m_slots = custom_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_custom(void)\n{\nreturn PyModuleDef_Init(&custom_module);\n}\nNow that\u2019s quite a bit to take in at once, but hopefully bits will seem familiar from the previous chapter. 
This file defines three things:\nWhat a\nCustom\nobject contains: this is theCustomObject\nstruct, which is allocated once for eachCustom\ninstance.How the\nCustom\ntype behaves: this is theCustomType\nstruct, which defines a set of flags and function pointers that the interpreter inspects when specific operations are requested.How to define and execute the\ncustom\nmodule: this is thePyInit_custom\nfunction and the associatedcustom_module\nstruct for defining the module, and thecustom_module_exec\nfunction to set up a fresh module object.\nThe first bit is:\ntypedef struct {\nPyObject_HEAD\n} CustomObject;\nThis is what a Custom object will contain. PyObject_HEAD\nis mandatory\nat the start of each object struct and defines a field called ob_base\nof type PyObject\n, containing a pointer to a type object and a\nreference count (these can be accessed using the macros Py_TYPE\nand Py_REFCNT\nrespectively). The reason for the macro is to\nabstract away the layout and to enable additional fields in debug builds.\nNote\nThere is no semicolon above after the PyObject_HEAD\nmacro.\nBe wary of adding one by accident: some compilers will complain.\nOf course, objects generally store additional data besides the standard\nPyObject_HEAD\nboilerplate; for example, here is the definition for\nstandard Python floats:\ntypedef struct {\nPyObject_HEAD\ndouble ob_fval;\n} PyFloatObject;\nThe second bit is the definition of the type object.\nstatic PyTypeObject CustomType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"custom.Custom\",\n.tp_doc = PyDoc_STR(\"Custom objects\"),\n.tp_basicsize = sizeof(CustomObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT,\n.tp_new = PyType_GenericNew,\n};\nNote\nWe recommend using C99-style designated initializers as above, to\navoid listing all the PyTypeObject\nfields that you don\u2019t care\nabout and also to avoid caring about the fields\u2019 declaration order.\nThe actual definition of PyTypeObject\nin 
object.h\nhas\nmany more fields than the definition above. The\nremaining fields will be filled with zeros by the C compiler, and it\u2019s\ncommon practice to not specify them explicitly unless you need them.\nWe\u2019re going to pick it apart, one field at a time:\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\nThis line is mandatory boilerplate to initialize the ob_base\nfield mentioned above.\n.tp_name = \"custom.Custom\",\nThe name of our type. This will appear in the default textual representation of our objects and in some error messages, for example:\n>>> \"\" + custom.Custom()\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nTypeError: can only concatenate str (not \"custom.Custom\") to str\nNote that the name is a dotted name that includes both the module name and the\nname of the type within the module. The module in this case is custom\nand\nthe type is Custom\n, so we set the type name to custom.Custom\n.\nUsing the real dotted import path is important to make your type compatible\nwith the pydoc\nand pickle\nmodules.\n.tp_basicsize = sizeof(CustomObject),\n.tp_itemsize = 0,\nThis is so that Python knows how much memory to allocate when creating\nnew Custom\ninstances. tp_itemsize\nis\nonly used for variable-sized objects and should otherwise be zero.\nNote\nIf you want your type to be subclassable from Python, and your type has the same\ntp_basicsize\nas its base type, you may have problems with multiple\ninheritance. A Python subclass of your type will have to list your type first\nin its __bases__\n, or else it will not be able to call your type\u2019s\n__new__()\nmethod without getting an error. You can avoid this problem by\nensuring that your type has a larger value for tp_basicsize\nthan its\nbase type does. 
Most of the time, this will be true anyway, because either your\nbase type will be object\n, or else you will be adding data members to\nyour base type, and therefore increasing its size.\nWe set the class flags to Py_TPFLAGS_DEFAULT\n.\n.tp_flags = Py_TPFLAGS_DEFAULT,\nAll types should include this constant in their flags. It enables all of the members defined until at least Python 3.3. If you need further members, you will need to OR the corresponding flags.\nWe provide a doc string for the type in tp_doc\n.\n.tp_doc = PyDoc_STR(\"Custom objects\"),\nTo enable object creation, we have to provide a tp_new\nhandler. This is the equivalent of the Python method __new__()\n, but\nhas to be specified explicitly. In this case, we can just use the default\nimplementation provided by the API function PyType_GenericNew()\n.\n.tp_new = PyType_GenericNew,\nEverything else in the file should be familiar, except for some code in\ncustom_module_exec()\n:\nif (PyType_Ready(&CustomType) < 0) {\nreturn -1;\n}\nThis initializes the Custom\ntype, filling in a number of members\nto the appropriate default values, including ob_type\nthat we initially\nset to NULL\n.\nif (PyModule_AddObjectRef(m, \"Custom\", (PyObject *) &CustomType) < 0) {\nreturn -1;\n}\nThis adds the type to the module dictionary. This allows us to create\nCustom\ninstances by calling the Custom\nclass:\n>>> import custom\n>>> mycustom = custom.Custom()\nThat\u2019s it! 
All that remains is to build it; put the above code in a file called\ncustom.c\n,\n[build-system]\nrequires = [\"setuptools\"]\nbuild-backend = \"setuptools.build_meta\"\n[project]\nname = \"custom\"\nversion = \"1\"\nin a file called pyproject.toml\n, and\nfrom setuptools import Extension, setup\nsetup(ext_modules=[Extension(\"custom\", [\"custom.c\"])])\nin a file called setup.py\n; then typing\n$ python -m pip install .\nin a shell should produce a file custom.so\nin a subdirectory\nand install it; now fire up Python \u2014 you should be able to import custom\nand play around with Custom\nobjects.\nThat wasn\u2019t so hard, was it?\nOf course, the current Custom type is pretty uninteresting. It has no data and doesn\u2019t do anything. It can\u2019t even be subclassed.\n2.2. Adding data and methods to the Basic example\u00b6\nLet\u2019s extend the basic example to add some data and methods. Let\u2019s also make\nthe type usable as a base class. We\u2019ll create a new module, custom2\nthat\nadds these capabilities:\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\n#include <stddef.h> /* for offsetof() */\ntypedef struct {\nPyObject_HEAD\nPyObject *first; /* first name */\nPyObject *last; /* last name */\nint number;\n} CustomObject;\nstatic void\nCustom_dealloc(PyObject *op)\n{\nCustomObject *self = (CustomObject *) op;\nPy_XDECREF(self->first);\nPy_XDECREF(self->last);\nPy_TYPE(self)->tp_free(self);\n}\nstatic PyObject *\nCustom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)\n{\nCustomObject *self;\nself = (CustomObject *) type->tp_alloc(type, 0);\nif (self != NULL) {\nself->first = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->first == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->last = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->last == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->number = 0;\n}\nreturn (PyObject *) self;\n}\nstatic int\nCustom_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nCustomObject *self = (CustomObject *) op;\nstatic char 
*kwlist[] = {\"first\", \"last\", \"number\", NULL};\nPyObject *first = NULL, *last = NULL;\nif (!PyArg_ParseTupleAndKeywords(args, kwds, \"|OOi\", kwlist,\n&first, &last,\n&self->number))\nreturn -1;\nif (first) {\nPy_XSETREF(self->first, Py_NewRef(first));\n}\nif (last) {\nPy_XSETREF(self->last, Py_NewRef(last));\n}\nreturn 0;\n}\nstatic PyMemberDef Custom_members[] = {\n{\"first\", Py_T_OBJECT_EX, offsetof(CustomObject, first), 0,\n\"first name\"},\n{\"last\", Py_T_OBJECT_EX, offsetof(CustomObject, last), 0,\n\"last name\"},\n{\"number\", Py_T_INT, offsetof(CustomObject, number), 0,\n\"custom number\"},\n{NULL} /* Sentinel */\n};\nstatic PyObject *\nCustom_name(PyObject *op, PyObject *Py_UNUSED(dummy))\n{\nCustomObject *self = (CustomObject *) op;\nif (self->first == NULL) {\nPyErr_SetString(PyExc_AttributeError, \"first\");\nreturn NULL;\n}\nif (self->last == NULL) {\nPyErr_SetString(PyExc_AttributeError, \"last\");\nreturn NULL;\n}\nreturn PyUnicode_FromFormat(\"%S %S\", self->first, self->last);\n}\nstatic PyMethodDef Custom_methods[] = {\n{\"name\", Custom_name, METH_NOARGS,\n\"Return the name, combining the first and last name\"\n},\n{NULL} /* Sentinel */\n};\nstatic PyTypeObject CustomType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"custom2.Custom\",\n.tp_doc = PyDoc_STR(\"Custom objects\"),\n.tp_basicsize = sizeof(CustomObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,\n.tp_new = Custom_new,\n.tp_init = Custom_init,\n.tp_dealloc = Custom_dealloc,\n.tp_members = Custom_members,\n.tp_methods = Custom_methods,\n};\nstatic int\ncustom_module_exec(PyObject *m)\n{\nif (PyType_Ready(&CustomType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"Custom\", (PyObject *) &CustomType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot custom_module_slots[] = {\n{Py_mod_exec, custom_module_exec},\n{Py_mod_multiple_interpreters, Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},\n{0, NULL}\n};\nstatic 
PyModuleDef custom_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"custom2\",\n.m_doc = \"Example module that creates an extension type.\",\n.m_size = 0,\n.m_slots = custom_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_custom2(void)\n{\nreturn PyModuleDef_Init(&custom_module);\n}\nThis version of the module has a number of changes.\nThe Custom\ntype now has three data attributes in its C struct,\nfirst, last, and number. The first and last variables are Python\nstrings containing first and last names. The number attribute is a C integer.\nThe object structure is updated accordingly:\ntypedef struct {\nPyObject_HEAD\nPyObject *first; /* first name */\nPyObject *last; /* last name */\nint number;\n} CustomObject;\nBecause we now have data to manage, we have to be more careful about object allocation and deallocation. At a minimum, we need a deallocation method:\nstatic void\nCustom_dealloc(PyObject *op)\n{\nCustomObject *self = (CustomObject *) op;\nPy_XDECREF(self->first);\nPy_XDECREF(self->last);\nPy_TYPE(self)->tp_free(self);\n}\nwhich is assigned to the tp_dealloc\nmember:\n.tp_dealloc = Custom_dealloc,\nThis method first clears the reference counts of the two Python attributes.\nPy_XDECREF()\ncorrectly handles the case where its argument is\nNULL\n(which might happen here if tp_new\nfailed midway). It then\ncalls the tp_free\nmember of the object\u2019s type\n(computed by Py_TYPE(self)\n) to free the object\u2019s memory. 
Note that\nthe object\u2019s type might not be CustomType\n, because the object may\nbe an instance of a subclass.\nNote\nThe explicit cast to CustomObject *\nabove is needed because we defined\nCustom_dealloc\nto take a PyObject *\nargument, as the tp_dealloc\nfunction pointer expects to receive a PyObject *\nargument.\nBy assigning to the tp_dealloc\nslot of a type, we declare\nthat it can only be called with instances of our CustomObject\nclass, so the cast to (CustomObject *)\nis safe.\nThis is object-oriented polymorphism, in C!\nIn existing code, or in previous versions of this tutorial,\nyou might see similar functions take a pointer to the subtype\nobject structure (CustomObject*\n) directly, like this:\nCustom_dealloc(CustomObject *self)\n{\nPy_XDECREF(self->first);\nPy_XDECREF(self->last);\nPy_TYPE(self)->tp_free((PyObject *) self);\n}\n...\n.tp_dealloc = (destructor) Custom_dealloc,\nThis does the same thing on all architectures that CPython supports, but according to the C standard, it invokes undefined behavior.\nWe want to make sure that the first and last names are initialized to empty\nstrings, so we provide a tp_new\nimplementation:\nstatic PyObject *\nCustom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)\n{\nCustomObject *self;\nself = (CustomObject *) type->tp_alloc(type, 0);\nif (self != NULL) {\nself->first = PyUnicode_FromString(\"\");\nif (self->first == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->last = PyUnicode_FromString(\"\");\nif (self->last == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->number = 0;\n}\nreturn (PyObject *) self;\n}\nand install it in the tp_new\nmember:\n.tp_new = Custom_new,\nThe tp_new\nhandler is responsible for creating (as opposed to initializing)\nobjects of the type. It is exposed in Python as the __new__()\nmethod.\nIt is not required to define a tp_new\nmember, and indeed many extension\ntypes will simply reuse PyType_GenericNew()\nas done in the first\nversion of the Custom\ntype above. 
In this case, we use the tp_new\nhandler to initialize the first\nand last\nattributes to non-NULL\ndefault values.\ntp_new\nis passed the type being instantiated (not necessarily CustomType\n,\nif a subclass is instantiated) and any arguments passed when the type was\ncalled, and is expected to return the instance created. tp_new\nhandlers\nalways accept positional and keyword arguments, but they often ignore the\narguments, leaving the argument handling to initializer (a.k.a. tp_init\nin C or __init__\nin Python) methods.\nNote\ntp_new\nshouldn\u2019t call tp_init\nexplicitly, as the interpreter\nwill do it itself.\nThe tp_new\nimplementation calls the tp_alloc\nslot to allocate memory:\nself = (CustomObject *) type->tp_alloc(type, 0);\nSince memory allocation may fail, we must check the tp_alloc\nresult against NULL\nbefore proceeding.\nNote\nWe didn\u2019t fill the tp_alloc\nslot ourselves. Rather\nPyType_Ready()\nfills it for us by inheriting it from our base class,\nwhich is object\nby default. Most types use the default allocation\nstrategy.\nNote\nIf you are creating a co-operative tp_new\n(one\nthat calls a base type\u2019s tp_new\nor __new__()\n),\nyou must not try to determine what method to call using method resolution\norder at runtime. Always statically determine what type you are going to\ncall, and call its tp_new\ndirectly, or via\ntype->tp_base->tp_new\n. 
If you do not do this, Python subclasses of your\ntype that also inherit from other Python-defined classes may not work correctly.\n(Specifically, you may not be able to create instances of such subclasses\nwithout getting a TypeError\n.)\nWe also define an initialization function which accepts arguments to provide initial values for our instance:\nstatic int\nCustom_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nCustomObject *self = (CustomObject *) op;\nstatic char *kwlist[] = {\"first\", \"last\", \"number\", NULL};\nPyObject *first = NULL, *last = NULL, *tmp;\nif (!PyArg_ParseTupleAndKeywords(args, kwds, \"|OOi\", kwlist,\n&first, &last,\n&self->number))\nreturn -1;\nif (first) {\ntmp = self->first;\nPy_INCREF(first);\nself->first = first;\nPy_XDECREF(tmp);\n}\nif (last) {\ntmp = self->last;\nPy_INCREF(last);\nself->last = last;\nPy_XDECREF(tmp);\n}\nreturn 0;\n}\nby filling the tp_init\nslot.\n.tp_init = Custom_init,\nThe tp_init\nslot is exposed in Python as the\n__init__()\nmethod. It is used to initialize an object after it\u2019s\ncreated. Initializers always accept positional and keyword arguments,\nand they should return either 0\non success or -1\non error.\nUnlike the tp_new\nhandler, there is no guarantee that tp_init\nis called at all (for example, the pickle\nmodule by default\ndoesn\u2019t call __init__()\non unpickled instances). It can also be\ncalled multiple times. Anyone can call the __init__()\nmethod on\nour objects. For this reason, we have to be extra careful when assigning\nthe new attribute values. We might be tempted, for example to assign the\nfirst\nmember like this:\nif (first) {\nPy_XDECREF(self->first);\nPy_INCREF(first);\nself->first = first;\n}\nBut this would be risky. Our type doesn\u2019t restrict the type of the\nfirst\nmember, so it could be any kind of object. 
It could have a\ndestructor that causes code to be executed that tries to access the\nfirst\nmember; or that destructor could detach the\nthread state and let arbitrary code run in other\nthreads that accesses and modifies our object.\nTo be paranoid and protect ourselves against this possibility, we almost always reassign members before decrementing their reference counts. When don\u2019t we have to do this?\nwhen we absolutely know that the reference count is greater than 1;\nwhen we know that deallocation of the object [1] will neither detach the thread state nor cause any calls back into our type\u2019s code;\nwhen decrementing a reference count in a\ntp_dealloc\nhandler on a type which doesn\u2019t support cyclic garbage collection [2].\nWe want to expose our instance variables as attributes. There are a number of ways to do that. The simplest way is to define member definitions:\nstatic PyMemberDef Custom_members[] = {\n{\"first\", Py_T_OBJECT_EX, offsetof(CustomObject, first), 0,\n\"first name\"},\n{\"last\", Py_T_OBJECT_EX, offsetof(CustomObject, last), 0,\n\"last name\"},\n{\"number\", Py_T_INT, offsetof(CustomObject, number), 0,\n\"custom number\"},\n{NULL} /* Sentinel */\n};\nand put the definitions in the tp_members\nslot:\n.tp_members = Custom_members,\nEach member definition has a member name, type, offset, access flags and documentation string. See the Generic Attribute Management section below for details.\nA disadvantage of this approach is that it doesn\u2019t provide a way to restrict the\ntypes of objects that can be assigned to the Python attributes. We expect the\nfirst and last names to be strings, but any Python objects can be assigned.\nFurther, the attributes can be deleted, setting the C pointers to NULL\n. 
Even\nthough we can make sure the members are initialized to non-NULL\nvalues, the\nmembers can be set to NULL\nif the attributes are deleted.\nWe define a single method, Custom.name()\n, that outputs the objects name as the\nconcatenation of the first and last names.\nstatic PyObject *\nCustom_name(PyObject *op, PyObject *Py_UNUSED(dummy))\n{\nCustomObject *self = (CustomObject *) op;\nif (self->first == NULL) {\nPyErr_SetString(PyExc_AttributeError, \"first\");\nreturn NULL;\n}\nif (self->last == NULL) {\nPyErr_SetString(PyExc_AttributeError, \"last\");\nreturn NULL;\n}\nreturn PyUnicode_FromFormat(\"%S %S\", self->first, self->last);\n}\nThe method is implemented as a C function that takes a Custom\n(or\nCustom\nsubclass) instance as the first argument. Methods always take an\ninstance as the first argument. Methods often take positional and keyword\narguments as well, but in this case we don\u2019t take any and don\u2019t need to accept\na positional argument tuple or keyword argument dictionary. This method is\nequivalent to the Python method:\ndef name(self):\nreturn \"%s %s\" % (self.first, self.last)\nNote that we have to check for the possibility that our first\nand\nlast\nmembers are NULL\n. This is because they can be deleted, in which\ncase they are set to NULL\n. It would be better to prevent deletion of these\nattributes and to restrict the attribute values to be strings. We\u2019ll see how to\ndo that in the next section.\nNow that we\u2019ve defined the method, we need to create an array of method definitions:\nstatic PyMethodDef Custom_methods[] = {\n{\"name\", Custom_name, METH_NOARGS,\n\"Return the name, combining the first and last name\"\n},\n{NULL} /* Sentinel */\n};\n(note that we used the METH_NOARGS\nflag to indicate that the method\nis expecting no arguments other than self)\nand assign it to the tp_methods\nslot:\n.tp_methods = Custom_methods,\nFinally, we\u2019ll make our type usable as a base class for subclassing. 
We\u2019ve\nwritten our methods carefully so far so that they don\u2019t make any assumptions\nabout the type of the object being created or used, so all we need to do is\nto add the Py_TPFLAGS_BASETYPE\nto our class flag definition:\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,\nWe rename PyInit_custom()\nto PyInit_custom2()\n, update the\nmodule name in the PyModuleDef\nstruct, and update the full class\nname in the PyTypeObject\nstruct.\nFinally, we update our setup.py\nfile to include the new module,\nfrom setuptools import Extension, setup\nsetup(ext_modules=[\nExtension(\"custom\", [\"custom.c\"]),\nExtension(\"custom2\", [\"custom2.c\"]),\n])\nand then we re-install so that we can import custom2\n:\n$ python -m pip install .\n2.3. Providing finer control over data attributes\u00b6\nIn this section, we\u2019ll provide finer control over how the first\nand\nlast\nattributes are set in the Custom\nexample. In the previous\nversion of our module, the instance variables first\nand last\ncould be set to non-string values or even deleted. 
We want to make sure that\nthese attributes always contain strings.\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\n#include <stddef.h> /* for offsetof() */\ntypedef struct {\nPyObject_HEAD\nPyObject *first; /* first name */\nPyObject *last; /* last name */\nint number;\n} CustomObject;\nstatic void\nCustom_dealloc(PyObject *op)\n{\nCustomObject *self = (CustomObject *) op;\nPy_XDECREF(self->first);\nPy_XDECREF(self->last);\nPy_TYPE(self)->tp_free(self);\n}\nstatic PyObject *\nCustom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)\n{\nCustomObject *self;\nself = (CustomObject *) type->tp_alloc(type, 0);\nif (self != NULL) {\nself->first = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->first == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->last = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->last == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->number = 0;\n}\nreturn (PyObject *) self;\n}\nstatic int\nCustom_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nCustomObject *self = (CustomObject *) op;\nstatic char *kwlist[] = {\"first\", \"last\", \"number\", NULL};\nPyObject *first = NULL, *last = NULL;\nif (!PyArg_ParseTupleAndKeywords(args, kwds, \"|UUi\", kwlist,\n&first, &last,\n&self->number))\nreturn -1;\nif (first) {\nPy_SETREF(self->first, Py_NewRef(first));\n}\nif (last) {\nPy_SETREF(self->last, Py_NewRef(last));\n}\nreturn 0;\n}\nstatic PyMemberDef Custom_members[] = {\n{\"number\", Py_T_INT, offsetof(CustomObject, number), 0,\n\"custom number\"},\n{NULL} /* Sentinel */\n};\nstatic PyObject *\nCustom_getfirst(PyObject *op, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nreturn Py_NewRef(self->first);\n}\nstatic int\nCustom_setfirst(PyObject *op, PyObject *value, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nif (value == NULL) {\nPyErr_SetString(PyExc_TypeError, \"Cannot delete the first attribute\");\nreturn -1;\n}\nif (!PyUnicode_Check(value)) {\nPyErr_SetString(PyExc_TypeError,\n\"The first attribute value must be a 
string\");\nreturn -1;\n}\nPy_SETREF(self->first, Py_NewRef(value));\nreturn 0;\n}\nstatic PyObject *\nCustom_getlast(PyObject *op, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nreturn Py_NewRef(self->last);\n}\nstatic int\nCustom_setlast(PyObject *op, PyObject *value, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nif (value == NULL) {\nPyErr_SetString(PyExc_TypeError, \"Cannot delete the last attribute\");\nreturn -1;\n}\nif (!PyUnicode_Check(value)) {\nPyErr_SetString(PyExc_TypeError,\n\"The last attribute value must be a string\");\nreturn -1;\n}\nPy_SETREF(self->last, Py_NewRef(value));\nreturn 0;\n}\nstatic PyGetSetDef Custom_getsetters[] = {\n{\"first\", Custom_getfirst, Custom_setfirst,\n\"first name\", NULL},\n{\"last\", Custom_getlast, Custom_setlast,\n\"last name\", NULL},\n{NULL} /* Sentinel */\n};\nstatic PyObject *\nCustom_name(PyObject *op, PyObject *Py_UNUSED(dummy))\n{\nCustomObject *self = (CustomObject *) op;\nreturn PyUnicode_FromFormat(\"%S %S\", self->first, self->last);\n}\nstatic PyMethodDef Custom_methods[] = {\n{\"name\", Custom_name, METH_NOARGS,\n\"Return the name, combining the first and last name\"\n},\n{NULL} /* Sentinel */\n};\nstatic PyTypeObject CustomType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"custom3.Custom\",\n.tp_doc = PyDoc_STR(\"Custom objects\"),\n.tp_basicsize = sizeof(CustomObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,\n.tp_new = Custom_new,\n.tp_init = Custom_init,\n.tp_dealloc = Custom_dealloc,\n.tp_members = Custom_members,\n.tp_methods = Custom_methods,\n.tp_getset = Custom_getsetters,\n};\nstatic int\ncustom_module_exec(PyObject *m)\n{\nif (PyType_Ready(&CustomType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"Custom\", (PyObject *) &CustomType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot custom_module_slots[] = {\n{Py_mod_exec, custom_module_exec},\n{Py_mod_multiple_interpreters, 
Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},\n{0, NULL}\n};\nstatic PyModuleDef custom_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"custom3\",\n.m_doc = \"Example module that creates an extension type.\",\n.m_size = 0,\n.m_slots = custom_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_custom3(void)\n{\nreturn PyModuleDef_Init(&custom_module);\n}\nTo provide greater control, over the first\nand last\nattributes,\nwe\u2019ll use custom getter and setter functions. Here are the functions for\ngetting and setting the first\nattribute:\nstatic PyObject *\nCustom_getfirst(PyObject *op, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nPy_INCREF(self->first);\nreturn self->first;\n}\nstatic int\nCustom_setfirst(PyObject *op, PyObject *value, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nPyObject *tmp;\nif (value == NULL) {\nPyErr_SetString(PyExc_TypeError, \"Cannot delete the first attribute\");\nreturn -1;\n}\nif (!PyUnicode_Check(value)) {\nPyErr_SetString(PyExc_TypeError,\n\"The first attribute value must be a string\");\nreturn -1;\n}\ntmp = self->first;\nPy_INCREF(value);\nself->first = value;\nPy_DECREF(tmp);\nreturn 0;\n}\nThe getter function is passed a Custom\nobject and a \u201cclosure\u201d, which is\na void pointer. In this case, the closure is ignored. (The closure supports an\nadvanced usage in which definition data is passed to the getter and setter. This\ncould, for example, be used to allow a single set of getter and setter functions\nthat decide the attribute to get or set based on data in the closure.)\nThe setter function is passed the Custom\nobject, the new value, and the\nclosure. The new value may be NULL\n, in which case the attribute is being\ndeleted. 
In our setter, we raise an error if the attribute is deleted or if its\nnew value is not a string.\nWe create an array of PyGetSetDef\nstructures:\nstatic PyGetSetDef Custom_getsetters[] = {\n{\"first\", Custom_getfirst, Custom_setfirst,\n\"first name\", NULL},\n{\"last\", Custom_getlast, Custom_setlast,\n\"last name\", NULL},\n{NULL} /* Sentinel */\n};\nand register it in the tp_getset\nslot:\n.tp_getset = Custom_getsetters,\nThe last item in a PyGetSetDef\nstructure is the \u201cclosure\u201d mentioned\nabove. In this case, we aren\u2019t using a closure, so we just pass NULL\n.\nWe also remove the member definitions for these attributes:\nstatic PyMemberDef Custom_members[] = {\n{\"number\", Py_T_INT, offsetof(CustomObject, number), 0,\n\"custom number\"},\n{NULL} /* Sentinel */\n};\nWe also need to update the tp_init\nhandler to only\nallow strings [3] to be passed:\nstatic int\nCustom_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nCustomObject *self = (CustomObject *) op;\nstatic char *kwlist[] = {\"first\", \"last\", \"number\", NULL};\nPyObject *first = NULL, *last = NULL, *tmp;\nif (!PyArg_ParseTupleAndKeywords(args, kwds, \"|UUi\", kwlist,\n&first, &last,\n&self->number))\nreturn -1;\nif (first) {\ntmp = self->first;\nPy_INCREF(first);\nself->first = first;\nPy_DECREF(tmp);\n}\nif (last) {\ntmp = self->last;\nPy_INCREF(last);\nself->last = last;\nPy_DECREF(tmp);\n}\nreturn 0;\n}\nWith these changes, we can assure that the first\nand last\nmembers are\nnever NULL\nso we can remove checks for NULL\nvalues in almost all cases.\nThis means that most of the Py_XDECREF()\ncalls can be converted to\nPy_DECREF()\ncalls. 
The only place we can\u2019t change these calls is in\nthe tp_dealloc\nimplementation, where there is the possibility that the\ninitialization of these members failed in tp_new\n.\nWe also rename the module initialization function and module name in the\ninitialization function, as we did before, and we add an extra definition to the\nsetup.py\nfile.\n2.4. Supporting cyclic garbage collection\u00b6\nPython has a cyclic garbage collector (GC) that can identify unneeded objects even when their reference counts are not zero. This can happen when objects are involved in cycles. For example, consider:\n>>> l = []\n>>> l.append(l)\n>>> del l\nIn this example, we create a list that contains itself. When we delete it, it still has a reference from itself. Its reference count doesn\u2019t drop to zero. Fortunately, Python\u2019s cyclic garbage collector will eventually figure out that the list is garbage and free it.\nIn the second version of the Custom\nexample, we allowed any kind of\nobject to be stored in the first\nor last\nattributes [4].\nBesides, in the second and third versions, we allowed subclassing\nCustom\n, and subclasses may add arbitrary attributes. 
For any of\nthose two reasons, Custom\nobjects can participate in cycles:\n>>> import custom3\n>>> class Derived(custom3.Custom): pass\n...\n>>> n = Derived()\n>>> n.some_attribute = n\nTo allow a Custom\ninstance participating in a reference cycle to\nbe properly detected and collected by the cyclic GC, our Custom\ntype\nneeds to fill two additional slots and to enable a flag that enables these slots:\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\n#include <stddef.h> /* for offsetof() */\ntypedef struct {\nPyObject_HEAD\nPyObject *first; /* first name */\nPyObject *last; /* last name */\nint number;\n} CustomObject;\nstatic int\nCustom_traverse(PyObject *op, visitproc visit, void *arg)\n{\nCustomObject *self = (CustomObject *) op;\nPy_VISIT(self->first);\nPy_VISIT(self->last);\nreturn 0;\n}\nstatic int\nCustom_clear(PyObject *op)\n{\nCustomObject *self = (CustomObject *) op;\nPy_CLEAR(self->first);\nPy_CLEAR(self->last);\nreturn 0;\n}\nstatic void\nCustom_dealloc(PyObject *op)\n{\nPyObject_GC_UnTrack(op);\n(void)Custom_clear(op);\nPy_TYPE(op)->tp_free(op);\n}\nstatic PyObject *\nCustom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)\n{\nCustomObject *self;\nself = (CustomObject *) type->tp_alloc(type, 0);\nif (self != NULL) {\nself->first = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->first == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->last = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->last == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->number = 0;\n}\nreturn (PyObject *) self;\n}\nstatic int\nCustom_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nCustomObject *self = (CustomObject *) op;\nstatic char *kwlist[] = {\"first\", \"last\", \"number\", NULL};\nPyObject *first = NULL, *last = NULL;\nif (!PyArg_ParseTupleAndKeywords(args, kwds, \"|UUi\", kwlist,\n&first, &last,\n&self->number))\nreturn -1;\nif (first) {\nPy_SETREF(self->first, Py_NewRef(first));\n}\nif (last) {\nPy_SETREF(self->last, Py_NewRef(last));\n}\nreturn 0;\n}\nstatic
PyMemberDef Custom_members[] = {\n{\"number\", Py_T_INT, offsetof(CustomObject, number), 0,\n\"custom number\"},\n{NULL} /* Sentinel */\n};\nstatic PyObject *\nCustom_getfirst(PyObject *op, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nreturn Py_NewRef(self->first);\n}\nstatic int\nCustom_setfirst(PyObject *op, PyObject *value, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nif (value == NULL) {\nPyErr_SetString(PyExc_TypeError, \"Cannot delete the first attribute\");\nreturn -1;\n}\nif (!PyUnicode_Check(value)) {\nPyErr_SetString(PyExc_TypeError,\n\"The first attribute value must be a string\");\nreturn -1;\n}\nPy_XSETREF(self->first, Py_NewRef(value));\nreturn 0;\n}\nstatic PyObject *\nCustom_getlast(PyObject *op, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nreturn Py_NewRef(self->last);\n}\nstatic int\nCustom_setlast(PyObject *op, PyObject *value, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nif (value == NULL) {\nPyErr_SetString(PyExc_TypeError, \"Cannot delete the last attribute\");\nreturn -1;\n}\nif (!PyUnicode_Check(value)) {\nPyErr_SetString(PyExc_TypeError,\n\"The last attribute value must be a string\");\nreturn -1;\n}\nPy_XSETREF(self->last, Py_NewRef(value));\nreturn 0;\n}\nstatic PyGetSetDef Custom_getsetters[] = {\n{\"first\", Custom_getfirst, Custom_setfirst,\n\"first name\", NULL},\n{\"last\", Custom_getlast, Custom_setlast,\n\"last name\", NULL},\n{NULL} /* Sentinel */\n};\nstatic PyObject *\nCustom_name(PyObject *op, PyObject *Py_UNUSED(dummy))\n{\nCustomObject *self = (CustomObject *) op;\nreturn PyUnicode_FromFormat(\"%S %S\", self->first, self->last);\n}\nstatic PyMethodDef Custom_methods[] = {\n{\"name\", Custom_name, METH_NOARGS,\n\"Return the name, combining the first and last name\"\n},\n{NULL} /* Sentinel */\n};\nstatic PyTypeObject CustomType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"custom4.Custom\",\n.tp_doc = PyDoc_STR(\"Custom objects\"),\n.tp_basicsize 
= sizeof(CustomObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC,\n.tp_new = Custom_new,\n.tp_init = Custom_init,\n.tp_dealloc = Custom_dealloc,\n.tp_traverse = Custom_traverse,\n.tp_clear = Custom_clear,\n.tp_members = Custom_members,\n.tp_methods = Custom_methods,\n.tp_getset = Custom_getsetters,\n};\nstatic int\ncustom_module_exec(PyObject *m)\n{\nif (PyType_Ready(&CustomType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"Custom\", (PyObject *) &CustomType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot custom_module_slots[] = {\n{Py_mod_exec, custom_module_exec},\n{Py_mod_multiple_interpreters, Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},\n{0, NULL}\n};\nstatic PyModuleDef custom_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"custom4\",\n.m_doc = \"Example module that creates an extension type.\",\n.m_size = 0,\n.m_slots = custom_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_custom4(void)\n{\nreturn PyModuleDef_Init(&custom_module);\n}\nFirst, the traversal method lets the cyclic GC know about subobjects that could participate in cycles:\nstatic int\nCustom_traverse(PyObject *op, visitproc visit, void *arg)\n{\nCustomObject *self = (CustomObject *) op;\nint vret;\nif (self->first) {\nvret = visit(self->first, arg);\nif (vret != 0)\nreturn vret;\n}\nif (self->last) {\nvret = visit(self->last, arg);\nif (vret != 0)\nreturn vret;\n}\nreturn 0;\n}\nFor each subobject that can participate in cycles, we need to call the\nvisit()\nfunction, which is passed to the traversal method. The\nvisit()\nfunction takes as arguments the subobject and the extra argument\narg passed to the traversal method. It returns an integer value that must be\nreturned if it is non-zero.\nPython provides a Py_VISIT()\nmacro that automates calling visit\nfunctions. 
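The effect of a traversal method is observable from Python: gc.get_referents() invokes the type's tp_traverse slot and reports every object passed to visit(). A sketch using a pure-Python stand-in with __slots__, whose slot values are visited the same way as the first and last members (the Pair class is hypothetical, not part of the tutorial code):

```python
import gc

# Hypothetical stand-in for the Custom type: with __slots__, the slot
# values are exactly the subobjects tp_traverse passes to visit().
class Pair:
    __slots__ = ("first", "last")
    def __init__(self, first, last):
        self.first = first
        self.last = last

p = Pair(["jane"], ["doe"])
# gc.get_referents() calls tp_traverse and returns the visited objects.
refs = gc.get_referents(p)
print(any(r is p.first for r in refs))  # True: first is visited
print(any(r is p.last for r in refs))   # True: last is visited
```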
With Py_VISIT()\n, we can minimize the amount of boilerplate\nin Custom_traverse\n:\nstatic int\nCustom_traverse(PyObject *op, visitproc visit, void *arg)\n{\nCustomObject *self = (CustomObject *) op;\nPy_VISIT(self->first);\nPy_VISIT(self->last);\nreturn 0;\n}\nNote\nThe tp_traverse\nimplementation must name its\narguments exactly visit and arg in order to use Py_VISIT()\n.\nSecond, we need to provide a method for clearing any subobjects that can participate in cycles:\nstatic int\nCustom_clear(PyObject *op)\n{\nCustomObject *self = (CustomObject *) op;\nPy_CLEAR(self->first);\nPy_CLEAR(self->last);\nreturn 0;\n}\nNotice the use of the Py_CLEAR()\nmacro. It is the recommended and safe\nway to clear data attributes of arbitrary types while decrementing\ntheir reference counts. If you were to call Py_XDECREF()\ninstead\non the attribute before setting it to NULL\n, there is a possibility\nthat the attribute\u2019s destructor would call back into code that reads the\nattribute again (especially if there is a reference cycle).\nNote\nYou could emulate Py_CLEAR()\nby writing:\nPyObject *tmp;\ntmp = self->first;\nself->first = NULL;\nPy_XDECREF(tmp);\nNevertheless, it is much easier and less error-prone to always\nuse Py_CLEAR()\nwhen deleting an attribute. Don\u2019t\ntry to micro-optimize at the expense of robustness!\nThe deallocator Custom_dealloc\nmay call arbitrary code when clearing\nattributes. 
It means the circular GC can be triggered inside the function.\nSince the GC assumes reference count is not zero, we need to untrack the object\nfrom the GC by calling PyObject_GC_UnTrack()\nbefore clearing members.\nHere is our reimplemented deallocator using PyObject_GC_UnTrack()\nand Custom_clear\n:\nstatic void\nCustom_dealloc(PyObject *op)\n{\nPyObject_GC_UnTrack(op);\n(void)Custom_clear(op);\nPy_TYPE(op)->tp_free(op);\n}\nFinally, we add the Py_TPFLAGS_HAVE_GC\nflag to the class flags:\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC,\nThat\u2019s pretty much it. If we had written custom tp_alloc\nor\ntp_free\nhandlers, we\u2019d need to modify them for cyclic\ngarbage collection. Most extensions will use the versions automatically provided.\n2.5. Subclassing other types\u00b6\nIt is possible to create new extension types that are derived from existing\ntypes. It is easiest to inherit from the built in types, since an extension can\neasily use the PyTypeObject\nit needs. It can be difficult to share\nthese PyTypeObject\nstructures between extension modules.\nIn this example we will create a SubList\ntype that inherits from the\nbuilt-in list\ntype. 
The new type will be completely compatible with\nregular lists, but will have an additional increment()\nmethod that\nincreases an internal counter:\n>>> import sublist\n>>> s = sublist.SubList(range(3))\n>>> s.extend(s)\n>>> print(len(s))\n6\n>>> print(s.increment())\n1\n>>> print(s.increment())\n2\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\ntypedef struct {\nPyListObject list;\nint state;\n} SubListObject;\nstatic PyObject *\nSubList_increment(PyObject *op, PyObject *Py_UNUSED(dummy))\n{\nSubListObject *self = (SubListObject *) op;\nself->state++;\nreturn PyLong_FromLong(self->state);\n}\nstatic PyMethodDef SubList_methods[] = {\n{\"increment\", SubList_increment, METH_NOARGS,\nPyDoc_STR(\"increment state counter\")},\n{NULL},\n};\nstatic int\nSubList_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nSubListObject *self = (SubListObject *) op;\nif (PyList_Type.tp_init(op, args, kwds) < 0)\nreturn -1;\nself->state = 0;\nreturn 0;\n}\nstatic PyTypeObject SubListType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"sublist.SubList\",\n.tp_doc = PyDoc_STR(\"SubList objects\"),\n.tp_basicsize = sizeof(SubListObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,\n.tp_init = SubList_init,\n.tp_methods = SubList_methods,\n};\nstatic int\nsublist_module_exec(PyObject *m)\n{\nSubListType.tp_base = &PyList_Type;\nif (PyType_Ready(&SubListType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"SubList\", (PyObject *) &SubListType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot sublist_module_slots[] = {\n{Py_mod_exec, sublist_module_exec},\n{Py_mod_multiple_interpreters, Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},\n{0, NULL}\n};\nstatic PyModuleDef sublist_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"sublist\",\n.m_doc = \"Example module that creates an extension type.\",\n.m_size = 0,\n.m_slots = sublist_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_sublist(void)\n{\nreturn
PyModuleDef_Init(&sublist_module);\n}\nAs you can see, the source code closely resembles the Custom\nexamples in\nprevious sections. We will break down the main differences between them.\ntypedef struct {\nPyListObject list;\nint state;\n} SubListObject;\nThe primary difference for derived type objects is that the base type\u2019s\nobject structure must be the first value. The base type will already include\nthe PyObject_HEAD()\nat the beginning of its structure.\nWhen a Python object is a SubList\ninstance, its PyObject *\npointer\ncan be safely cast to both PyListObject *\nand SubListObject *\n:\nstatic int\nSubList_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nSubListObject *self = (SubListObject *) op;\nif (PyList_Type.tp_init(op, args, kwds) < 0)\nreturn -1;\nself->state = 0;\nreturn 0;\n}\nWe see above how to call through to the __init__()\nmethod of the base\ntype.\nThis pattern is important when writing a type with custom\ntp_new\nand tp_dealloc\nmembers. The tp_new\nhandler should not actually\ncreate the memory for the object with its tp_alloc\n,\nbut let the base class handle it by calling its own tp_new\n.\nThe PyTypeObject\nstruct supports a tp_base\nspecifying the type\u2019s concrete base class. Due to cross-platform compiler\nissues, you can\u2019t fill that field directly with a reference to\nPyList_Type\n; it should be done in the Py_mod_exec\nfunction:\nstatic int\nsublist_module_exec(PyObject *m)\n{\nSubListType.tp_base = &PyList_Type;\nif (PyType_Ready(&SubListType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"SubList\", (PyObject *) &SubListType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nBefore calling PyType_Ready()\n, the type structure must have the\ntp_base\nslot filled in. 
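Setting tp_base and delegating to the base tp_init correspond to ordinary subclassing in Python. A rough pure-Python equivalent of the SubList type (a sketch of the behavior, not the C implementation):

```python
# Inheriting from list plays the role of tp_base = &PyList_Type, and
# calling super().__init__() mirrors SubList_init delegating to
# PyList_Type.tp_init before initializing its own state.
class SubList(list):
    def __init__(self, *args, **kwds):
        super().__init__(*args, **kwds)  # delegate to the base initializer
        self.state = 0

    def increment(self):
        self.state += 1
        return self.state

s = SubList(range(3))
s.extend(s)               # fully list-compatible
print(len(s))             # 6
print(s.increment())      # 1
```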
When we are deriving an\nexisting type, it is not necessary to fill out the tp_alloc\nslot with PyType_GenericNew()\n\u2013 the allocation function from the base\ntype will be inherited.\nAfter that, calling PyType_Ready()\nand adding the type object to the\nmodule is the same as with the basic Custom\nexamples.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 10635} +{"url": "https://docs.python.org/3/c-api/mapping.html", "title": "Mapping Protocol", "content": "Mapping Protocol\u00b6\nSee also PyObject_GetItem()\n, PyObject_SetItem()\nand\nPyObject_DelItem()\n.\n-\nint PyMapping_Check(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif the object provides the mapping protocol or supports slicing, and0\notherwise. Note that it returns1\nfor Python classes with a__getitem__()\nmethod, since in general it is impossible to determine what type of keys the class supports. This function always succeeds.\n-\nPy_ssize_t PyMapping_Size(PyObject *o)\u00b6\n-\nPy_ssize_t PyMapping_Length(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturns the number of keys in object o on success, and\n-1\non failure. This is equivalent to the Python expressionlen(o)\n.\n-\nPyObject *PyMapping_GetItemString(PyObject *o, const char *key)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is the same as\nPyObject_GetItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nint PyMapping_GetOptionalItem(PyObject *obj, PyObject *key, PyObject **result)\u00b6\n- Part of the Stable ABI since version 3.13.\nVariant of\nPyObject_GetItem()\nwhich doesn\u2019t raiseKeyError\nif the key is not found.If the key is found, return\n1\nand set *result to a new strong reference to the corresponding value. If the key is not found, return0\nand set *result toNULL\n; theKeyError\nis silenced. 
If an error other thanKeyError\nis raised, return-1\nand set *result toNULL\n.Added in version 3.13.\n-\nint PyMapping_GetOptionalItemString(PyObject *obj, const char *key, PyObject **result)\u00b6\n- Part of the Stable ABI since version 3.13.\nThis is the same as\nPyMapping_GetOptionalItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Added in version 3.13.\n-\nint PyMapping_SetItemString(PyObject *o, const char *key, PyObject *v)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyObject_SetItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nint PyMapping_DelItem(PyObject *o, PyObject *key)\u00b6\nThis is an alias of\nPyObject_DelItem()\n.\n-\nint PyMapping_DelItemString(PyObject *o, const char *key)\u00b6\nThis is the same as\nPyObject_DelItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nint PyMapping_HasKeyWithError(PyObject *o, PyObject *key)\u00b6\n- Part of the Stable ABI since version 3.13.\nReturn\n1\nif the mapping object has the key key and0\notherwise. This is equivalent to the Python expressionkey in o\n. On failure, return-1\n.Added in version 3.13.\n-\nint PyMapping_HasKeyStringWithError(PyObject *o, const char *key)\u00b6\n- Part of the Stable ABI since version 3.13.\nThis is the same as\nPyMapping_HasKeyWithError()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Added in version 3.13.\n-\nint PyMapping_HasKey(PyObject *o, PyObject *key)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif the mapping object has the key key and0\notherwise. This is equivalent to the Python expressionkey in o\n. This function always succeeds.Note\nExceptions which occur when this calls the\n__getitem__()\nmethod are silently ignored. 
For proper error handling, usePyMapping_HasKeyWithError()\n,PyMapping_GetOptionalItem()\norPyObject_GetItem()\ninstead.\n-\nint PyMapping_HasKeyString(PyObject *o, const char *key)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyMapping_HasKey()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Note\nExceptions that occur when this calls the\n__getitem__()\nmethod or while creating the temporarystr\nobject are silently ignored. For proper error handling, usePyMapping_HasKeyStringWithError()\n,PyMapping_GetOptionalItemString()\norPyMapping_GetItemString()\ninstead.\n-\nPyObject *PyMapping_Keys(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nOn success, return a list of the keys in object o. On failure, return\nNULL\n.Changed in version 3.7: Previously, the function returned a list or a tuple.\n-\nPyObject *PyMapping_Values(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nOn success, return a list of the values in object o. On failure, return\nNULL\n.Changed in version 3.7: Previously, the function returned a list or a tuple.\n-\nPyObject *PyMapping_Items(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nOn success, return a list of the items in object o, where each item is a tuple containing a key-value pair. On failure, return\nNULL\n.Changed in version 3.7: Previously, the function returned a list or a tuple.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1124} +{"url": "https://docs.python.org/3/c-api/dict.html", "title": "Dictionary Objects", "content": "Dictionary Objects\u00b6\n-\nPyTypeObject PyDict_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python dictionary type. This is the same object asdict\nin the Python layer.\n-\nint PyDict_Check(PyObject *p)\u00b6\nReturn true if p is a dict object or an instance of a subtype of the dict type. 
This function always succeeds.\n-\nint PyDict_CheckExact(PyObject *p)\u00b6\nReturn true if p is a dict object, but not an instance of a subtype of the dict type. This function always succeeds.\n-\nPyObject *PyDict_New()\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new empty dictionary, or\nNULL\non failure.\n-\nPyObject *PyDictProxy_New(PyObject *mapping)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a\ntypes.MappingProxyType\nobject for a mapping which enforces read-only behavior. This is normally used to create a view to prevent modification of the dictionary for non-dynamic class types.\n-\nPyTypeObject PyDictProxy_Type\u00b6\n- Part of the Stable ABI.\nThe type object for mapping proxy objects created by\nPyDictProxy_New()\nand for the read-only__dict__\nattribute of many built-in types. APyDictProxy_Type\ninstance provides a dynamic, read-only view of an underlying dictionary: changes to the underlying dictionary are reflected in the proxy, but the proxy itself does not support mutation operations. This corresponds totypes.MappingProxyType\nin Python.\n-\nvoid PyDict_Clear(PyObject *p)\u00b6\n- Part of the Stable ABI.\nEmpty an existing dictionary of all key-value pairs.\n-\nint PyDict_Contains(PyObject *p, PyObject *key)\u00b6\n- Part of the Stable ABI.\nDetermine if dictionary p contains key. If an item in p matches key, return\n1\n, otherwise return0\n. On error, return-1\n. This is equivalent to the Python expressionkey in p\n.\n-\nint PyDict_ContainsString(PyObject *p, const char *key)\u00b6\nThis is the same as\nPyDict_Contains()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Added in version 3.13.\n-\nPyObject *PyDict_Copy(PyObject *p)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturn a new dictionary that contains the same key-value pairs as p.\n-\nint PyDict_SetItem(PyObject *p, PyObject *key, PyObject *val)\u00b6\n- Part of the Stable ABI.\nInsert val into the dictionary p with a key of key. key must be hashable; if it isn\u2019t,\nTypeError\nwill be raised. Return0\non success or-1\non failure. This function does not steal a reference to val.\n-\nint PyDict_SetItemString(PyObject *p, const char *key, PyObject *val)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyDict_SetItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nint PyDict_DelItem(PyObject *p, PyObject *key)\u00b6\n- Part of the Stable ABI.\nRemove the entry in dictionary p with key key. key must be hashable; if it isn\u2019t,\nTypeError\nis raised. If key is not in the dictionary,KeyError\nis raised. Return0\non success or-1\non failure.\n-\nint PyDict_DelItemString(PyObject *p, const char *key)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyDict_DelItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nint PyDict_GetItemRef(PyObject *p, PyObject *key, PyObject **result)\u00b6\n- Part of the Stable ABI since version 3.13.\nReturn a new strong reference to the object from dictionary p which has a key key:\nIf the key is present, set *result to a new strong reference to the value and return\n1\n.If the key is missing, set *result to\nNULL\nand return0\n.On error, raise an exception and return\n-1\n.\nAdded in version 3.13.\nSee also the\nPyObject_GetItem()\nfunction.\n-\nPyObject *PyDict_GetItem(PyObject *p, PyObject *key)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn a borrowed reference to the object from dictionary p which has a key key. 
Return\nNULL\nif the key key is missing without setting an exception.Note\nExceptions that occur while this calls\n__hash__()\nand__eq__()\nmethods are silently ignored. Prefer thePyDict_GetItemWithError()\nfunction instead.Changed in version 3.10: Calling this API without an attached thread state had been allowed for historical reason. It is no longer allowed.\n-\nPyObject *PyDict_GetItemWithError(PyObject *p, PyObject *key)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nVariant of\nPyDict_GetItem()\nthat does not suppress exceptions. ReturnNULL\nwith an exception set if an exception occurred. ReturnNULL\nwithout an exception set if the key wasn\u2019t present.\n-\nPyObject *PyDict_GetItemString(PyObject *p, const char *key)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nThis is the same as\nPyDict_GetItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Note\nExceptions that occur while this calls\n__hash__()\nand__eq__()\nmethods or while creating the temporarystr\nobject are silently ignored. Prefer using thePyDict_GetItemWithError()\nfunction with your ownPyUnicode_FromString()\nkey instead.\n-\nint PyDict_GetItemStringRef(PyObject *p, const char *key, PyObject **result)\u00b6\n- Part of the Stable ABI since version 3.13.\nSimilar to\nPyDict_GetItemRef()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Added in version 3.13.\n-\nPyObject *PyDict_SetDefault(PyObject *p, PyObject *key, PyObject *defaultobj)\u00b6\n- Return value: Borrowed reference.\nThis is the same as the Python-level\ndict.setdefault()\n. If present, it returns the value corresponding to key from the dictionary p. If the key is not in the dict, it is inserted with value defaultobj and defaultobj is returned. 
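The Python-level behavior that PyDict_SetDefault mirrors can be seen directly with dict.setdefault():

```python
# dict.setdefault() is the Python-level counterpart of PyDict_SetDefault.
d = {"a": 1}
print(d.setdefault("a", 99))  # 1: key present, existing value returned
print(d.setdefault("b", 2))   # 2: key missing, default inserted and returned
print(d)                      # {'a': 1, 'b': 2}
```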
This function evaluates the hash function of key only once, instead of evaluating it independently for the lookup and the insertion.Added in version 3.4.\n-\nint PyDict_SetDefaultRef(PyObject *p, PyObject *key, PyObject *default_value, PyObject **result)\u00b6\nInserts default_value into the dictionary p with a key of key if the key is not already present in the dictionary. If result is not\nNULL\n, then *result is set to a strong reference to either default_value, if the key was not present, or the existing value, if key was already present in the dictionary. Returns1\nif the key was present and default_value was not inserted, or0\nif the key was not present and default_value was inserted. On failure, returns-1\n, sets an exception, and sets*result\ntoNULL\n.For clarity: if you have a strong reference to default_value before calling this function, then after it returns, you hold a strong reference to both default_value and *result (if it\u2019s not\nNULL\n). These may refer to the same object: in that case you hold two separate references to it.Added in version 3.13.\n-\nint PyDict_Pop(PyObject *p, PyObject *key, PyObject **result)\u00b6\nRemove key from dictionary p and optionally return the removed value. Do not raise\nKeyError\nif the key is missing.If the key is present, set *result to a new reference to the removed value if result is not\nNULL\n, and return1\n.If the key is missing, set *result to\nNULL\nif result is notNULL\n, and return0\n.On error, raise an exception and return\n-1\n.\nSimilar to\ndict.pop()\n, but without the default value and not raisingKeyError\nif the key is missing.Added in version 3.13.\n-\nint PyDict_PopString(PyObject *p, const char *key, PyObject **result)\u00b6\nSimilar to\nPyDict_Pop()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Added in version 3.13.\n-\nPyObject *PyDict_Items(PyObject *p)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturn a\nPyListObject\ncontaining all the items from the dictionary.\n-\nPyObject *PyDict_Keys(PyObject *p)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a\nPyListObject\ncontaining all the keys from the dictionary.\n-\nPyObject *PyDict_Values(PyObject *p)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a\nPyListObject\ncontaining all the values from the dictionary p.\n-\nPy_ssize_t PyDict_Size(PyObject *p)\u00b6\n- Part of the Stable ABI.\nReturn the number of items in the dictionary. This is equivalent to\nlen(p)\non a dictionary.\n-\nPy_ssize_t PyDict_GET_SIZE(PyObject *p)\u00b6\nSimilar to\nPyDict_Size()\n, but without error checking.\n-\nint PyDict_Next(PyObject *p, Py_ssize_t *ppos, PyObject **pkey, PyObject **pvalue)\u00b6\n- Part of the Stable ABI.\nIterate over all key-value pairs in the dictionary p. The\nPy_ssize_t\nreferred to by ppos must be initialized to0\nprior to the first call to this function to start the iteration; the function returns true for each pair in the dictionary, and false once all pairs have been reported. The parameters pkey and pvalue should either point to PyObject* variables that will be filled in with each key and value, respectively, or may beNULL\n. Any references returned through them are borrowed. ppos should not be altered during iteration. Its value represents offsets within the internal dictionary structure, and since the structure is sparse, the offsets are not consecutive.For example:\nPyObject *key, *value; Py_ssize_t pos = 0; while (PyDict_Next(self->dict, &pos, &key, &value)) { /* do something interesting with the values... */ ... }\nThe dictionary p should not be mutated during iteration. It is safe to modify the values of the keys as you iterate over the dictionary, but only so long as the set of keys does not change. 
For example:\nPyObject *key, *value; Py_ssize_t pos = 0; while (PyDict_Next(self->dict, &pos, &key, &value)) { long i = PyLong_AsLong(value); if (i == -1 && PyErr_Occurred()) { return -1; } PyObject *o = PyLong_FromLong(i + 1); if (o == NULL) return -1; if (PyDict_SetItem(self->dict, key, o) < 0) { Py_DECREF(o); return -1; } Py_DECREF(o); }\nThe function is not thread-safe in the free-threaded build without external synchronization. You can use\nPy_BEGIN_CRITICAL_SECTION\nto lock the dictionary while iterating over it:Py_BEGIN_CRITICAL_SECTION(self->dict); while (PyDict_Next(self->dict, &pos, &key, &value)) { ... } Py_END_CRITICAL_SECTION();\nNote\nOn the free-threaded build, this function can be used safely inside a critical section. However, the references returned for pkey and pvalue are borrowed and are only valid while the critical section is held. If you need to use these objects outside the critical section or when the critical section can be suspended, create a strong reference (for example, using\nPy_NewRef()\n).\n-\nint PyDict_Merge(PyObject *a, PyObject *b, int override)\u00b6\n- Part of the Stable ABI.\nIterate over mapping object b adding key-value pairs to dictionary a. b may be a dictionary, or any object supporting\nPyMapping_Keys()\nandPyObject_GetItem()\n. If override is true, existing pairs in a will be replaced if a matching key is found in b, otherwise pairs will only be added if there is not a matching key in a. Return0\non success or-1\nif an exception was raised.\n-\nint PyDict_Update(PyObject *a, PyObject *b)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyDict_Merge(a, b, 1)\nin C, and is similar toa.update(b)\nin Python except thatPyDict_Update()\ndoesn\u2019t fall back to the iterating over a sequence of key value pairs if the second argument has no \u201ckeys\u201d attribute. 
Return0\non success or-1\nif an exception was raised.\n-\nint PyDict_MergeFromSeq2(PyObject *a, PyObject *seq2, int override)\u00b6\n- Part of the Stable ABI.\nUpdate or merge into dictionary a, from the key-value pairs in seq2. seq2 must be an iterable object producing iterable objects of length 2, viewed as key-value pairs. In case of duplicate keys, the last wins if override is true, else the first wins. Return\n0\non success or-1\nif an exception was raised. Equivalent Python (except for the return value):def PyDict_MergeFromSeq2(a, seq2, override): for key, value in seq2: if override or key not in a: a[key] = value\n-\nint PyDict_AddWatcher(PyDict_WatchCallback callback)\u00b6\nRegister callback as a dictionary watcher. Return a non-negative integer id which must be passed to future calls to\nPyDict_Watch()\n. In case of error (e.g. no more watcher IDs available), return-1\nand set an exception.Added in version 3.12.\n-\nint PyDict_ClearWatcher(int watcher_id)\u00b6\nClear watcher identified by watcher_id previously returned from\nPyDict_AddWatcher()\n. Return0\non success,-1\non error (e.g. if the given watcher_id was never registered.)Added in version 3.12.\n-\nint PyDict_Watch(int watcher_id, PyObject *dict)\u00b6\nMark dictionary dict as watched. The callback granted watcher_id by\nPyDict_AddWatcher()\nwill be called when dict is modified or deallocated. Return0\non success or-1\non error.Added in version 3.12.\n-\nint PyDict_Unwatch(int watcher_id, PyObject *dict)\u00b6\nMark dictionary dict as no longer watched. The callback granted watcher_id by\nPyDict_AddWatcher()\nwill no longer be called when dict is modified or deallocated. The dict must previously have been watched by this watcher. 
Return0\non success or-1\non error.Added in version 3.12.\n-\ntype PyDict_WatchEvent\u00b6\nEnumeration of possible dictionary watcher events:\nPyDict_EVENT_ADDED\n,PyDict_EVENT_MODIFIED\n,PyDict_EVENT_DELETED\n,PyDict_EVENT_CLONED\n,PyDict_EVENT_CLEARED\n, orPyDict_EVENT_DEALLOCATED\n.Added in version 3.12.\n-\ntypedef int (*PyDict_WatchCallback)(PyDict_WatchEvent event, PyObject *dict, PyObject *key, PyObject *new_value)\u00b6\nType of a dict watcher callback function.\nIf event is\nPyDict_EVENT_CLEARED\norPyDict_EVENT_DEALLOCATED\n, both key and new_value will beNULL\n. If event isPyDict_EVENT_ADDED\norPyDict_EVENT_MODIFIED\n, new_value will be the new value for key. If event isPyDict_EVENT_DELETED\n, key is being deleted from the dictionary and new_value will beNULL\n.PyDict_EVENT_CLONED\noccurs when dict was previously empty and another dict is merged into it. To maintain efficiency of this operation, per-keyPyDict_EVENT_ADDED\nevents are not issued in this case; instead a singlePyDict_EVENT_CLONED\nis issued, and key will be the source dictionary.The callback may inspect but must not modify dict; doing so could have unpredictable effects, including infinite recursion. Do not trigger Python code execution in the callback, as it could modify the dict as a side effect.\nIf event is\nPyDict_EVENT_DEALLOCATED\n, taking a new reference in the callback to the about-to-be-destroyed dictionary will resurrect it and prevent it from being freed at this time. When the resurrected object is destroyed later, any watcher callbacks active at that time will be called again.Callbacks occur before the notified modification to dict takes place, so the prior state of dict can be inspected.\nIf the callback sets an exception, it must return\n-1\n; this exception will be printed as an unraisable exception usingPyErr_WriteUnraisable()\n. Otherwise it should return0\n.There may already be a pending exception set on entry to the callback. 
In this case, the callback should return\n0\nwith the same exception still set. This means the callback may not call any other API that can set an exception unless it saves and clears the exception state first, and restores it before returning. Added in version 3.12.\nDictionary View Objects\u00b6\n-\nint PyDictViewSet_Check(PyObject *op)\u00b6\nReturn true if op is a view of a set inside a dictionary. This is currently equivalent to PyDictKeys_Check(op) || PyDictItems_Check(op). This function always succeeds.\n-\nPyTypeObject PyDictKeys_Type\u00b6\n- Part of the Stable ABI.\nType object for a view of dictionary keys. In Python, this is the type of the object returned by\ndict.keys()\n.\n-\nint PyDictKeys_Check(PyObject *op)\u00b6\nReturn true if op is an instance of a dictionary keys view. This function always succeeds.\n-\nPyTypeObject PyDictValues_Type\u00b6\n- Part of the Stable ABI.\nType object for a view of dictionary values. In Python, this is the type of the object returned by\ndict.values()\n.\n-\nint PyDictValues_Check(PyObject *op)\u00b6\nReturn true if op is an instance of a dictionary values view. This function always succeeds.\n-\nPyTypeObject PyDictItems_Type\u00b6\n- Part of the Stable ABI.\nType object for a view of dictionary items. In Python, this is the type of the object returned by\ndict.items()\n.\nOrdered Dictionaries\u00b6\nPython\u2019s C API provides an interface for collections.OrderedDict\nfrom C.\nSince Python 3.7, dictionaries are ordered by default, so there is usually\nlittle need for these functions; prefer PyDict*\nwhere possible.\n-\nPyTypeObject PyODict_Type\u00b6\nType object for ordered dictionaries. This is the same object as\ncollections.OrderedDict\nin the Python layer.\n-\nint PyODict_Check(PyObject *od)\u00b6\nReturn true if od is an ordered dictionary object or an instance of a subtype of the\nOrderedDict\ntype. 
This function always succeeds.\n-\nint PyODict_CheckExact(PyObject *od)\u00b6\nReturn true if od is an ordered dictionary object, but not an instance of a subtype of the\nOrderedDict\ntype. This function always succeeds.\n-\nPyTypeObject PyODictKeys_Type\u00b6\nAnalogous to\nPyDictKeys_Type\nfor ordered dictionaries.\n-\nPyTypeObject PyODictValues_Type\u00b6\nAnalogous to\nPyDictValues_Type\nfor ordered dictionaries.\n-\nPyTypeObject PyODictItems_Type\u00b6\nAnalogous to\nPyDictItems_Type\nfor ordered dictionaries.\n-\nPyObject *PyODict_New(void)\u00b6\nReturn a new empty ordered dictionary, or\nNULL\non failure. This is analogous to\nPyDict_New()\n.\n-\nint PyODict_SetItem(PyObject *od, PyObject *key, PyObject *value)\u00b6\nInsert value into the ordered dictionary od with a key of key. Return\n0\non success or -1\nwith an exception set on failure. This is analogous to\nPyDict_SetItem()\n.\n-\nint PyODict_DelItem(PyObject *od, PyObject *key)\u00b6\nRemove the entry in the ordered dictionary od with key key. Return\n0\non success or -1\nwith an exception set on failure. This is analogous to\nPyDict_DelItem()\n.\nThese are soft deprecated aliases to PyDict\nAPIs.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4384}
{"url": "https://docs.python.org/3/c-api/none.html", "title": "The None Object", "content": "The None\nObject\u00b6\nNote that the PyTypeObject\nfor None\nis not directly exposed in the\nPython/C API. Since None\nis a singleton, testing for object identity (using\n==\nin C) is sufficient. 
There is no PyNone_Check()\nfunction for the\nsame reason.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 60} +{"url": "https://docs.python.org/3/library/uu.html", "title": " \u2014 Encode and decode uuencode files", "content": "uu\n\u2014 Encode and decode uuencode files\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the uu\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 83} +{"url": "https://docs.python.org/3/library/telnetlib.html", "title": " \u2014 Telnet client", "content": "telnetlib\n\u2014 Telnet client\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nPossible replacements are third-party libraries from PyPI: telnetlib3 or Exscript. These are not supported or maintained by the Python core team.\nThe last version of Python that provided the telnetlib\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 118} +{"url": "https://docs.python.org/3/library/sunau.html", "title": " \u2014 Read and write Sun AU files", "content": "sunau\n\u2014 Read and write Sun AU files\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. 
The removal was decided in PEP 594.\nThe last version of Python that provided the sunau\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 83}
{"url": "https://docs.python.org/3/library/spwd.html", "title": "spwd \u2014 The shadow password database", "content": "spwd\n\u2014 The shadow password database\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nA possible replacement is the third-party library python-pam. This library is not supported or maintained by the Python core team.\nThe last version of Python that provided the spwd\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 116}
{"url": "https://docs.python.org/3/library/sndhdr.html", "title": "sndhdr \u2014 Determine type of sound file", "content": "sndhdr\n\u2014 Determine type of sound file\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nPossible replacements are third-party modules from PyPI: filetype, puremagic, or python-magic. These are not supported or maintained by the Python core team.\nThe last version of Python that provided the sndhdr\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 124}
{"url": "https://docs.python.org/3/library/smtpd.html", "title": "smtpd \u2014 SMTP Server", "content": "smtpd\n\u2014 SMTP Server\u00b6\nDeprecated since version 3.6, removed in version 3.12.\nThis module is no longer part of the Python standard library. It was removed in Python 3.12 after being deprecated in Python 3.6. 
The removal was decided in PEP 594.\nA possible replacement is the third-party aiosmtpd library. This library is not maintained or supported by the Python core team.\nThe last version of Python that provided the smtpd\nmodule was\nPython 3.11.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 111}
{"url": "https://docs.python.org/3/library/pipes.html", "title": "pipes \u2014 Interface to shell pipelines", "content": "pipes\n\u2014 Interface to shell pipelines\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nApplications should use the subprocess\nmodule instead.\nThe last version of Python that provided the pipes\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 97}
{"url": "https://docs.python.org/3/library/asyncio-future.html", "title": "Futures", "content": "Futures\u00b6\nSource code: Lib/asyncio/futures.py, Lib/asyncio/base_futures.py\nFuture objects are used to bridge low-level callback-based code with high-level async/await code.\nFuture Functions\u00b6\n- asyncio.isfuture(obj)\u00b6\nReturn\nTrue\nif obj is either of:\nan instance of\nasyncio.Future\n,\nan instance of\nasyncio.Task\n,\na Future-like object with a\n_asyncio_future_blocking\nattribute.\nAdded in version 3.5.\n- asyncio.ensure_future(obj, *, loop=None)\u00b6\nReturn:\nobj argument as is, if obj is a\nFuture\n, a\nTask\n, or a Future-like object (isfuture()\nis used for the test.)\na\nTask\nobject wrapping obj, if obj is a coroutine (iscoroutine()\nis used for the test); in this case the coroutine will be scheduled by\nensure_future()\n.\na\nTask\nobject that would await on obj, if obj is an awaitable (inspect.isawaitable()\nis used for the test.)\nIf obj is neither of the above a\nTypeError\nis raised. Important\nSave a reference to the 
result of this function, to avoid a task disappearing mid-execution.\nSee also the\ncreate_task()\nfunction which is the preferred way for creating new tasks or use\nasyncio.TaskGroup\nwhich keeps reference to the task internally. Changed in version 3.5.1: The function accepts any awaitable object.\nDeprecated since version 3.10: Deprecation warning is emitted if obj is not a Future-like object and loop is not specified and there is no running event loop.\n- asyncio.wrap_future(future, *, loop=None)\u00b6\nWrap a\nconcurrent.futures.Future\nobject in a\nasyncio.Future\nobject. Deprecated since version 3.10: Deprecation warning is emitted if future is not a Future-like object and loop is not specified and there is no running event loop.\nFuture Object\u00b6\n- class asyncio.Future(*, loop=None)\u00b6\nA Future represents an eventual result of an asynchronous operation. Not thread-safe.\nFuture is an awaitable object. Coroutines can await on Future objects until they either have a result or an exception set, or until they are cancelled. A Future can be awaited multiple times and the result is the same.\nTypically Futures are used to enable low-level callback-based code (e.g. in protocols implemented using asyncio transports) to interoperate with high-level async/await code.\nThe rule of thumb is to never expose Future objects in user-facing APIs, and the recommended way to create a Future object is to call\nloop.create_future()\n. 
This way alternative event loop implementations can inject their own optimized implementations of a Future object.Changed in version 3.7: Added support for the\ncontextvars\nmodule.Deprecated since version 3.10: Deprecation warning is emitted if loop is not specified and there is no running event loop.\n- result()\u00b6\nReturn the result of the Future.\nIf the Future is done and has a result set by the\nset_result()\nmethod, the result value is returned.If the Future is done and has an exception set by the\nset_exception()\nmethod, this method raises the exception.If the Future has been cancelled, this method raises a\nCancelledError\nexception.If the Future\u2019s result isn\u2019t yet available, this method raises an\nInvalidStateError\nexception.\n- set_result(result)\u00b6\nMark the Future as done and set its result.\nRaises an\nInvalidStateError\nerror if the Future is already done.\n- set_exception(exception)\u00b6\nMark the Future as done and set an exception.\nRaises an\nInvalidStateError\nerror if the Future is already done.\n- done()\u00b6\nReturn\nTrue\nif the Future is done.A Future is done if it was cancelled or if it has a result or an exception set with\nset_result()\norset_exception()\ncalls.\n- cancelled()\u00b6\nReturn\nTrue\nif the Future was cancelled.The method is usually used to check if a Future is not cancelled before setting a result or an exception for it:\nif not fut.cancelled(): fut.set_result(42)\n- add_done_callback(callback, *, context=None)\u00b6\nAdd a callback to be run when the Future is done.\nThe callback is called with the Future object as its only argument.\nIf the Future is already done when this method is called, the callback is scheduled with\nloop.call_soon()\n.An optional keyword-only context argument allows specifying a custom\ncontextvars.Context\nfor the callback to run in. 
The current context is used when no context is provided.functools.partial()\ncan be used to pass parameters to the callback, e.g.:# Call 'print(\"Future:\", fut)' when \"fut\" is done. fut.add_done_callback( functools.partial(print, \"Future:\"))\nChanged in version 3.7: The context keyword-only parameter was added. See PEP 567 for more details.\n- remove_done_callback(callback)\u00b6\nRemove callback from the callbacks list.\nReturns the number of callbacks removed, which is typically 1, unless a callback was added more than once.\n- cancel(msg=None)\u00b6\nCancel the Future and schedule callbacks.\nIf the Future is already done or cancelled, return\nFalse\n. Otherwise, change the Future\u2019s state to cancelled, schedule the callbacks, and returnTrue\n.Changed in version 3.9: Added the msg parameter.\n- exception()\u00b6\nReturn the exception that was set on this Future.\nThe exception (or\nNone\nif no exception was set) is returned only if the Future is done.If the Future has been cancelled, this method raises a\nCancelledError\nexception.If the Future isn\u2019t done yet, this method raises an\nInvalidStateError\nexception.\n- get_loop()\u00b6\nReturn the event loop the Future object is bound to.\nAdded in version 3.7.\nThis example creates a Future object, creates and schedules an asynchronous Task to set result for the Future, and waits until the Future has a result:\nasync def set_after(fut, delay, value):\n# Sleep for *delay* seconds.\nawait asyncio.sleep(delay)\n# Set *value* as a result of *fut* Future.\nfut.set_result(value)\nasync def main():\n# Get the current event loop.\nloop = asyncio.get_running_loop()\n# Create a new Future object.\nfut = loop.create_future()\n# Run \"set_after()\" coroutine in a parallel Task.\n# We are using the low-level \"loop.create_task()\" API here because\n# we already have a reference to the event loop at hand.\n# Otherwise we could have just used \"asyncio.create_task()\".\nloop.create_task(\nset_after(fut, 1, '... 
world'))\nprint('hello ...')\n# Wait until *fut* has a result (1 second) and print it.\nprint(await fut)\nasyncio.run(main())\nImportant\nThe Future object was designed to mimic\nconcurrent.futures.Future\n. Key differences include:\nunlike asyncio Futures,\nconcurrent.futures.Future\ninstances cannot be awaited.asyncio.Future.result()\nandasyncio.Future.exception()\ndo not accept the timeout argument.asyncio.Future.result()\nandasyncio.Future.exception()\nraise anInvalidStateError\nexception when the Future is not done.Callbacks registered with\nasyncio.Future.add_done_callback()\nare not called immediately. They are scheduled withloop.call_soon()\ninstead.asyncio Future is not compatible with the\nconcurrent.futures.wait()\nandconcurrent.futures.as_completed()\nfunctions.asyncio.Future.cancel()\naccepts an optionalmsg\nargument, butconcurrent.futures.Future.cancel()\ndoes not.", "code_snippets": [" ", " ", "\n ", "\n", "\n", "\n ", " ", "\n", " ", " ", " ", "\n ", "\n ", " ", "\n\n ", "\n ", "\n\n", " ", "\n ", "\n ", " ", " ", "\n\n ", "\n ", " ", " ", "\n\n ", "\n ", "\n ", "\n ", "\n ", "\n ", " ", " ", "\n\n ", "\n\n ", "\n ", " ", "\n\n", "\n"], "language": "Python", "source": "python.org", "token_count": 1714} +{"url": "https://docs.python.org/3/whatsnew/3.3.html", "title": "What\u2019s New In Python 3.3", "content": "What\u2019s New In Python 3.3\u00b6\nThis article explains the new features in Python 3.3, compared to 3.2. Python 3.3 was released on September 29, 2012. 
For full details, see the changelog.\nSee also\nPEP 398 - Python 3.3 Release Schedule\nSummary \u2013 Release highlights\u00b6\nNew syntax features:\nNew\nyield from\nexpression for generator delegation.\nThe\nu'unicode'\nsyntax is accepted again for str\nobjects.\nNew library modules:\nfaulthandler\n(helps debugging low-level crashes)\nipaddress\n(high-level objects representing IP addresses and masks)\nlzma\n(compress data using the XZ / LZMA algorithm)\nunittest.mock\n(replace parts of your system under test with mock objects)\nvenv\n(Python virtual environments, as in the popular\nvirtualenv\npackage)\nNew built-in features:\nReworked I/O exception hierarchy.\nImplementation improvements:\nRewritten import machinery based on\nimportlib\n.\nMore compact unicode strings.\nMore compact attribute dictionaries.\nSignificantly Improved Library Modules:\nC Accelerator for the decimal module.\nBetter unicode handling in the email module (provisional).\nSecurity improvements:\nHash randomization is switched on by default.\nPlease read on for a comprehensive list of user-facing changes.\nPEP 405: Virtual Environments\u00b6\nVirtual environments help create separate Python setups while sharing a\nsystem-wide base install, for ease of maintenance. Virtual environments\nhave their own set of private site packages (i.e. locally installed\nlibraries), and are optionally segregated from the system-wide site\npackages. Their concept and implementation are inspired by the popular\nvirtualenv\nthird-party package, but benefit from tighter integration\nwith the interpreter core.\nThis PEP adds the venv\nmodule for programmatic access, and the\npyvenv\nscript for command-line access and\nadministration. 
The Python interpreter checks for a pyvenv.cfg\nfile whose existence signals the base of a virtual environment\u2019s directory\ntree.\nSee also\n- PEP 405 - Python Virtual Environments\nPEP written by Carl Meyer; implementation by Carl Meyer and Vinay Sajip\nPEP 420: Implicit Namespace Packages\u00b6\nNative support for package directories that don\u2019t require __init__.py\nmarker files and can automatically span multiple path segments (inspired by\nvarious third party approaches to namespace packages, as described in\nPEP 420)\nSee also\n- PEP 420 - Implicit Namespace Packages\nPEP written by Eric V. Smith; implementation by Eric V. Smith and Barry Warsaw\nPEP 3118: New memoryview implementation and buffer protocol documentation\u00b6\nThe implementation of PEP 3118 has been significantly improved.\nThe new memoryview implementation comprehensively fixes all ownership and lifetime issues of dynamically allocated fields in the Py_buffer struct that led to multiple crash reports. Additionally, several functions that crashed or returned incorrect results for non-contiguous or multi-dimensional input have been fixed.\nThe memoryview object now has a PEP-3118 compliant getbufferproc() that checks the consumer\u2019s request type. Many new features have been added, most of them work in full generality for non-contiguous arrays and arrays with suboffsets.\nThe documentation has been updated, clearly spelling out responsibilities for both exporters and consumers. Buffer request flags are grouped into basic and compound flags. 
The memory layout of non-contiguous and multi-dimensional NumPy-style arrays is explained.\nFeatures\u00b6\nAll native single character format specifiers in struct module syntax (optionally prefixed with \u2018@\u2019) are now supported.\nWith some restrictions, the cast() method allows changing of format and shape of C-contiguous arrays.\nMulti-dimensional list representations are supported for any array type.\nMulti-dimensional comparisons are supported for any array type.\nOne-dimensional memoryviews of hashable (read-only) types with formats B, b or c are now hashable. (Contributed by Antoine Pitrou in bpo-13411.)\nArbitrary slicing of any 1-D arrays type is supported. For example, it is now possible to reverse a memoryview in O(1) by using a negative step.\nAPI changes\u00b6\nThe maximum number of dimensions is officially limited to 64.\nThe representation of empty shape, strides and suboffsets is now an empty tuple instead of\nNone\n.Accessing a memoryview element with format \u2018B\u2019 (unsigned bytes) now returns an integer (in accordance with the struct module syntax). For returning a bytes object the view must be cast to \u2018c\u2019 first.\nmemoryview comparisons now use the logical structure of the operands and compare all array elements by value. All format strings in struct module syntax are supported. Views with unrecognised format strings are still permitted, but will always compare as unequal, regardless of view contents.\nFor further changes see Build and C API Changes and Porting C code.\n(Contributed by Stefan Krah in bpo-10181.)\nSee also\nPEP 3118 - Revising the Buffer Protocol\nPEP 393: Flexible String Representation\u00b6\nThe Unicode string type is changed to support multiple internal representations, depending on the character with the largest Unicode ordinal (1, 2, or 4 bytes) in the represented string. This allows a space-efficient representation in common cases, but gives access to full UCS-4 on all systems. 
For compatibility with existing APIs, several representations may exist in parallel; over time, this compatibility should be phased out.\nOn the Python side, there should be no downside to this change.\nOn the C API side, PEP 393 is fully backward compatible. The legacy API should remain available at least five years. Applications using the legacy API will not fully benefit of the memory reduction, or - worse - may use a bit more memory, because Python may have to maintain two versions of each string (in the legacy format and in the new efficient storage).\nFunctionality\u00b6\nChanges introduced by PEP 393 are the following:\nPython now always supports the full range of Unicode code points, including non-BMP ones (i.e. from\nU+0000\nto U+10FFFF\n). The distinction between narrow and wide builds no longer exists and Python now behaves like a wide build, even under Windows.\nWith the death of narrow builds, the problems specific to narrow builds have also been fixed, for example:\nlen()\nnow always returns 1 for non-BMP characters, so len('\\U0010FFFF') == 1\n;\nsurrogate pairs are not recombined in string literals, so\n'\\uDBFF\\uDFFF' != '\\U0010FFFF'\n;\nindexing or slicing non-BMP characters returns the expected value, so\n'\\U0010FFFF'[0]\nnow returns '\\U0010FFFF'\nand not '\\uDBFF'\n;\nall other functions in the standard library now correctly handle non-BMP code points.\nThe value of\nsys.maxunicode\nis now always 1114111\n(0x10FFFF\nin hexadecimal). 
The PyUnicode_GetMax()\nfunction still returns either 0xFFFF\nor 0x10FFFF\nfor backward compatibility, and it should not be used with the new Unicode API (see bpo-13054).\nThe\n./configure\nflag --with-wide-unicode\nhas been removed.\nPerformance and resource usage\u00b6\nThe storage of Unicode strings now depends on the highest code point in the string:\npure ASCII and Latin1 strings (\nU+0000-U+00FF\n) use 1 byte per code point;\nBMP strings (\nU+0000-U+FFFF\n) use 2 bytes per code point;\nnon-BMP strings (\nU+10000-U+10FFFF\n) use 4 bytes per code point.\nThe net effect is that for most applications, memory usage of string storage should decrease significantly - especially compared to former wide unicode builds - as, in many cases, strings will be pure ASCII even in international contexts (because many strings store non-human language data, such as XML fragments, HTTP headers, JSON-encoded data, etc.). We also hope that it will, for the same reasons, increase CPU cache efficiency on non-trivial applications. The memory usage of Python 3.3 is two to three times smaller than Python 3.2, and a little bit better than Python 2.7, on a Django benchmark (see the PEP for details).\nSee also\n- PEP 393 - Flexible String Representation\nPEP written by Martin von L\u00f6wis; implementation by Torsten Becker and Martin von L\u00f6wis.\nPEP 397: Python Launcher for Windows\u00b6\nThe Python 3.3 Windows installer now includes a py\nlauncher application\nthat can be used to launch Python applications in a version independent\nfashion.\nThis launcher is invoked implicitly when double-clicking *.py\nfiles.\nIf only a single Python version is installed on the system, that version\nwill be used to run the file. If multiple versions are installed, the most\nrecent version is used by default, but this can be overridden by including\na Unix-style \u201cshebang line\u201d in the Python script.\nThe launcher can also be used explicitly from the command line as the py\napplication. 
Running py\nfollows the same version selection rules as\nimplicitly launching scripts, but a more specific version can be selected\nby passing appropriate arguments (such as -3\nto request Python 3 when\nPython 2 is also installed, or -2.6\nto specifically request an earlier\nPython version when a more recent version is installed).\nIn addition to the launcher, the Windows installer now includes an option to add the newly installed Python to the system PATH. (Contributed by Brian Curtin in bpo-3561.)\nSee also\n- PEP 397 - Python Launcher for Windows\nPEP written by Mark Hammond and Martin v. L\u00f6wis; implementation by Vinay Sajip.\nLauncher documentation: Python install manager\nInstaller PATH modification: Python install manager\nPEP 3151: Reworking the OS and IO exception hierarchy\u00b6\nThe hierarchy of exceptions raised by operating system errors is now both simplified and finer-grained.\nYou don\u2019t have to worry anymore about choosing the appropriate exception\ntype between OSError\n, IOError\n, EnvironmentError\n,\nWindowsError\n, mmap.error\n, socket.error\nor\nselect.error\n. All these exception types are now only one:\nOSError\n. The other names are kept as aliases for compatibility\nreasons.\nAlso, it is now easier to catch a specific error condition. Instead of\ninspecting the errno\nattribute (or args[0]\n) for a particular\nconstant from the errno\nmodule, you can catch the adequate\nOSError\nsubclass. The available subclasses are the following:\nAnd the ConnectionError\nitself has finer-grained subclasses:\nThanks to the new exceptions, common usages of the errno\ncan now be\navoided. 
For example, the following code written for Python 3.2:\nfrom errno import ENOENT, EACCES, EPERM\ntry:\nwith open(\"document.txt\") as f:\ncontent = f.read()\nexcept IOError as err:\nif err.errno == ENOENT:\nprint(\"document.txt file is missing\")\nelif err.errno in (EACCES, EPERM):\nprint(\"You are not allowed to read document.txt\")\nelse:\nraise\ncan now be written without the errno\nimport and without manual\ninspection of exception attributes:\ntry:\nwith open(\"document.txt\") as f:\ncontent = f.read()\nexcept FileNotFoundError:\nprint(\"document.txt file is missing\")\nexcept PermissionError:\nprint(\"You are not allowed to read document.txt\")\nSee also\n- PEP 3151 - Reworking the OS and IO Exception Hierarchy\nPEP written and implemented by Antoine Pitrou\nPEP 380: Syntax for Delegating to a Subgenerator\u00b6\nPEP 380 adds the yield from\nexpression, allowing a generator to\ndelegate\npart of its operations to another generator. This allows a section of code\ncontaining yield\nto be factored out and placed in another generator.\nAdditionally, the subgenerator is allowed to return with a value, and the\nvalue is made available to the delegating generator.\nWhile designed primarily for use in delegating to a subgenerator, the yield\nfrom\nexpression actually allows delegation to arbitrary subiterators.\nFor simple iterators, yield from iterable\nis essentially just a shortened\nform of for item in iterable: yield item\n:\n>>> def g(x):\n... yield from range(x, 0, -1)\n... yield from range(x)\n...\n>>> list(g(5))\n[5, 4, 3, 2, 1, 0, 1, 2, 3, 4]\nHowever, unlike an ordinary loop, yield from\nallows subgenerators to\nreceive sent and thrown values directly from the calling scope, and\nreturn a final value to the outer generator:\n>>> def accumulate():\n... tally = 0\n... while 1:\n... next = yield\n... if next is None:\n... return tally\n... tally += next\n...\n>>> def gather_tallies(tallies):\n... while 1:\n... tally = yield from accumulate()\n... 
tallies.append(tally)\n...\n>>> tallies = []\n>>> acc = gather_tallies(tallies)\n>>> next(acc) # Ensure the accumulator is ready to accept values\n>>> for i in range(4):\n... acc.send(i)\n...\n>>> acc.send(None) # Finish the first tally\n>>> for i in range(5):\n... acc.send(i)\n...\n>>> acc.send(None) # Finish the second tally\n>>> tallies\n[6, 10]\nThe main principle driving this change is to allow even generators that are\ndesigned to be used with the send\nand throw\nmethods to be split into\nmultiple subgenerators as easily as a single large function can be split into\nmultiple subfunctions.\nSee also\n- PEP 380 - Syntax for Delegating to a Subgenerator\nPEP written by Greg Ewing; implementation by Greg Ewing, integrated into 3.3 by Renaud Blanch, Ryan Kelly and Nick Coghlan; documentation by Zbigniew J\u0119drzejewski-Szmek and Nick Coghlan\nPEP 409: Suppressing exception context\u00b6\nPEP 409 introduces new syntax that allows the display of the chained exception context to be disabled. This allows cleaner error messages in applications that convert between exception types:\n>>> class D:\n... def __init__(self, extra):\n... self._extra_attributes = extra\n... def __getattr__(self, attr):\n... try:\n... return self._extra_attributes[attr]\n... except KeyError:\n... raise AttributeError(attr) from None\n...\n>>> D({}).x\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nFile \"<stdin>\", line 8, in __getattr__\nAttributeError: x\nWithout the from None\nsuffix to suppress the cause, the original\nexception would be displayed by default:\n>>> class C:\n... def __init__(self, extra):\n... self._extra_attributes = extra\n... def __getattr__(self, attr):\n... try:\n... return self._extra_attributes[attr]\n... except KeyError:\n... 
raise AttributeError(attr)\n...\n>>> C({}).x\nTraceback (most recent call last):\nFile \"<stdin>\", line 6, in __getattr__\nKeyError: 'x'\nDuring handling of the above exception, another exception occurred:\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nFile \"<stdin>\", line 8, in __getattr__\nAttributeError: x\nNo debugging capability is lost, as the original exception context remains available if needed (for example, if an intervening library has incorrectly suppressed valuable underlying details):\n>>> try:\n... D({}).x\n... except AttributeError as exc:\n... print(repr(exc.__context__))\n...\nKeyError('x',)\nSee also\n- PEP 409 - Suppressing exception context\nPEP written by Ethan Furman; implemented by Ethan Furman and Nick Coghlan.\nPEP 414: Explicit Unicode literals\u00b6\nTo ease the transition from Python 2 for Unicode aware Python applications\nthat make heavy use of Unicode literals, Python 3.3 once again supports the\n\u201cu\n\u201d prefix for string literals. This prefix has no semantic significance\nin Python 3, it is provided solely to reduce the number of purely mechanical\nchanges in migrating to Python 3, making it easier for developers to focus on\nthe more significant semantic changes (such as the stricter default\nseparation of binary and text data).\nSee also\n- PEP 414 - Explicit Unicode literals\nPEP written by Armin Ronacher.\nPEP 3155: Qualified name for classes and functions\u00b6\nFunctions and class objects have a new __qualname__\nattribute representing\nthe \u201cpath\u201d from the module top-level to their definition. For global functions\nand classes, this is the same as __name__\n.\nFor other functions and classes,\nit provides better information about where they were actually defined, and\nhow they might be accessible from the global scope.\nExample with (non-bound) methods:\n>>> class C:\n... def meth(self):\n... 
pass\n...\n>>> C.meth.__name__\n'meth'\n>>> C.meth.__qualname__\n'C.meth'\nExample with nested classes:\n>>> class C:\n... class D:\n... def meth(self):\n... pass\n...\n>>> C.D.__name__\n'D'\n>>> C.D.__qualname__\n'C.D'\n>>> C.D.meth.__name__\n'meth'\n>>> C.D.meth.__qualname__\n'C.D.meth'\nExample with nested functions:\n>>> def outer():\n... def inner():\n... pass\n... return inner\n...\n>>> outer().__name__\n'inner'\n>>> outer().__qualname__\n'outer.<locals>.inner'\nThe string representation of those objects is also changed to include the new, more precise information:\n>>> str(C.D)\n\"<class '__main__.C.D'>\"\n>>> str(C.D.meth)\n'<function C.D.meth at 0x...>'\nSee also\n- PEP 3155 - Qualified name for classes and functions\nPEP written and implemented by Antoine Pitrou.\nPEP 412: Key-Sharing Dictionary\u00b6\nDictionaries used for the storage of objects\u2019 attributes are now able to share part of their internal storage between each other (namely, the part which stores the keys and their respective hashes). This reduces the memory consumption of programs creating many instances of non-builtin types.\nSee also\n- PEP 412 - Key-Sharing Dictionary\nPEP written and implemented by Mark Shannon.\nPEP 362: Function Signature Object\u00b6\nA new function inspect.signature()\nmakes introspection of python\ncallables easy and straightforward. A broad range of callables is supported:\npython functions, decorated or not, classes, and functools.partial()\nobjects. 
New classes inspect.Signature, inspect.Parameter and inspect.BoundArguments hold information about the call signatures, such as annotations, default values, parameter kinds, and bound arguments, which considerably simplifies writing decorators and any code that validates or amends calling signatures or arguments.\nSee also\n- PEP 362 - Function Signature Object\nPEP written by Brett Cannon, Yury Selivanov, Larry Hastings, Jiwon Seo; implemented by Yury Selivanov.\nPEP 421: Adding sys.implementation\u00b6\nA new attribute on the sys module exposes details specific to the implementation of the currently running interpreter. The initial set of attributes on sys.implementation are name, version, hexversion, and cache_tag.\nThe intention of sys.implementation is to consolidate into one namespace the implementation-specific data used by the standard library. This allows different Python implementations to share a single standard library code base much more easily. In its initial state, sys.implementation holds only a small portion of the implementation-specific data. Over time that ratio will shift in order to make the standard library more portable.\nOne example of improved standard library portability is cache_tag. As of Python 3.3, sys.implementation.cache_tag is used by importlib to support PEP 3147 compliance. Any Python implementation that uses importlib for its built-in import system may use cache_tag to control the caching behavior for modules.\nSimpleNamespace\u00b6\nThe implementation of sys.implementation also introduces a new type to Python: types.SimpleNamespace. In contrast to a mapping-based namespace, like dict, SimpleNamespace is attribute-based, like object. However, unlike object, SimpleNamespace instances are writable. 
This means that you can add, remove, and modify the namespace\nthrough normal attribute access.\nSee also\n- PEP 421 - Adding sys.implementation\nPEP written and implemented by Eric Snow.\nUsing importlib as the Implementation of Import\u00b6\nbpo-2377 - Replace __import__ w/ importlib.__import__\nbpo-13959 - Re-implement parts of imp\nin pure Python\nbpo-14605 - Make import machinery explicit\nbpo-14646 - Require loaders set __loader__ and __package__\nThe __import__()\nfunction is now powered by importlib.__import__()\n.\nThis work leads to the completion of \u201cphase 2\u201d of PEP 302. There are\nmultiple benefits to this change. First, it has allowed for more of the\nmachinery powering import to be exposed instead of being implicit and hidden\nwithin the C code. It also provides a single implementation for all Python VMs\nsupporting Python 3.3 to use, helping to end any VM-specific deviations in\nimport semantics. And finally it eases the maintenance of import, allowing for\nfuture growth to occur.\nFor the common user, there should be no visible change in semantics. For those whose code currently manipulates import or calls import programmatically, the code changes that might possibly be required are covered in the Porting Python code section of this document.\nNew APIs\u00b6\nOne of the large benefits of this work is the exposure of what goes into\nmaking the import statement work. That means the various importers that were\nonce implicit are now fully exposed as part of the importlib\npackage.\nThe abstract base classes defined in importlib.abc\nhave been expanded\nto properly delineate between meta path finders\nand path entry finders by introducing\nimportlib.abc.MetaPathFinder\nand\nimportlib.abc.PathEntryFinder\n, respectively. 
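As a sketch of the newly exposed machinery (TracingFinder is an invented name, and the example uses the modern find_spec() hook rather than the find_module() method that Python 3.3 itself defined):

```python
import importlib
import importlib.abc
import sys

class TracingFinder(importlib.abc.MetaPathFinder):
    """Records every requested module name, then defers to the other finders."""
    def __init__(self):
        self.seen = []

    def find_spec(self, fullname, path, target=None):
        self.seen.append(fullname)
        return None  # decline, so the remaining finders on sys.meta_path run

tracer = TracingFinder()
sys.meta_path.insert(0, tracer)

sys.modules.pop("colorsys", None)   # force a real import, not a cache hit
importlib.import_module("colorsys")
print("colorsys" in tracer.seen)    # True
sys.meta_path.remove(tracer)
```

Because the finders now live on sys.meta_path rather than inside C code, instrumenting or reordering them is just list manipulation.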
The old ABC of importlib.abc.Finder is now only provided for backwards compatibility and does not enforce any method requirements.\nIn terms of finders, importlib.machinery.FileFinder exposes the mechanism used to search for source and bytecode files of a module. Previously this class was an implicit member of sys.path_hooks.\nFor loaders, the new abstract base class importlib.abc.FileLoader helps write a loader that uses the file system as the storage mechanism for a module\u2019s code. The loaders for source files (importlib.machinery.SourceFileLoader), sourceless bytecode files (importlib.machinery.SourcelessFileLoader), and extension modules (importlib.machinery.ExtensionFileLoader) are now available for direct use.\nImportError now has name and path attributes which are set when there is relevant data to provide. The message for failed imports will also provide the full name of the module now instead of just the tail end of the module\u2019s name.\nThe importlib.invalidate_caches() function will now call the method with the same name on all finders cached in sys.path_importer_cache to help clean up any stored state as necessary.\nVisible Changes\u00b6\nFor potential required changes to code, see the Porting Python code section.\nBeyond the expanse of what importlib now exposes, there are other visible changes to import. The biggest is that sys.meta_path and sys.path_hooks now store all of the meta path finders and path entry hooks used by import. Previously the finders were implicit and hidden within the C code of import instead of being directly exposed. This means that one can now easily remove or change the order of the various finders to fit one\u2019s needs.\nAnother change is that all modules have a __loader__ attribute, storing the loader used to create the module. 
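The name attribute added to ImportError can be observed directly (a minimal sketch; the module name below is deliberately fictitious):

```python
# Trigger a failed import and inspect the new metadata on the exception.
err = None
try:
    import module_that_does_not_exist_anywhere  # fictitious on purpose
except ImportError as exc:
    err = exc

print(err.name)  # 'module_that_does_not_exist_anywhere'
```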
PEP 302 has been updated to make this attribute mandatory for loaders to implement, so in the future once 3rd-party loaders have been updated people will be able to rely on the existence of the attribute. Until such time, though, import is setting the attribute on the module post-load.\nLoaders are also now expected to set the __package__ attribute from PEP 366. Once again, import itself is already setting this on all loaders from importlib and import itself is setting the attribute post-load.\nNone is now inserted into sys.path_importer_cache when no finder can be found on sys.path_hooks. Since imp.NullImporter is not directly exposed on sys.path_hooks it could no longer be relied upon to always be available to use as a value representing no finder found.\nAll other changes relate to semantic changes which should be taken into consideration when updating code for Python 3.3, and thus should be read about in the Porting Python code section of this document.\n(Implementation by Brett Cannon)\nOther Language Changes\u00b6\nSome smaller changes made to the core Python language are:\n- Added support for Unicode name aliases and named sequences. Both unicodedata.lookup() and '\\N{...}' now resolve name aliases, and unicodedata.lookup() resolves named sequences too. (Contributed by Ezio Melotti in bpo-12753.)\n- Unicode database updated to UCD version 6.1.0.\n- Equality comparisons on range() objects now return a result reflecting the equality of the underlying sequences generated by those range objects. (bpo-13201)\n- The count(), find(), rfind(), index() and rindex() methods of bytes and bytearray objects now accept an integer between 0 and 255 as their first argument. (Contributed by Petri Lehtinen in bpo-12170.)\n- The rjust(), ljust(), and center() methods of bytes and bytearray now accept a bytearray for the fill argument. (Contributed by Petri Lehtinen in bpo-12380.)\n- New methods have been added to list and bytearray: copy() and clear() (bpo-10516). 
Consequently, MutableSequence now also defines a clear() method (bpo-11388).\n- Raw bytes literals can now be written rb"..." as well as br"...". (Contributed by Antoine Pitrou in bpo-13748.)\n- dict.setdefault() now does only one lookup for the given key, making it atomic when used with built-in types. (Contributed by Filip Gruszczy\u0144ski in bpo-13521.)\n- The error messages produced when a function call does not match the function signature have been significantly improved. (Contributed by Benjamin Peterson.)\nA Finer-Grained Import Lock\u00b6\nPrevious versions of CPython have always relied on a global import lock. This led to unexpected annoyances, such as deadlocks when importing a module would trigger code execution in a different thread as a side-effect. Clumsy workarounds were sometimes employed, such as the PyImport_ImportModuleNoBlock() C API function.\nIn Python 3.3, importing a module takes a per-module lock. This correctly serializes importation of a given module from multiple threads (preventing the exposure of incompletely initialized modules), while eliminating the aforementioned annoyances.\n(Contributed by Antoine Pitrou in bpo-9260.)\nBuiltin functions and types\u00b6\n- open() gets a new opener parameter: the underlying file descriptor for the file object is then obtained by calling opener with (file, flags). It can be used, for example, to pass custom flags like os.O_CLOEXEC. The 'x' mode was added: open for exclusive creation, failing if the file already exists.\n- print(): added the flush keyword argument. If the flush keyword argument is true, the stream is forcibly flushed.\n- hash(): hash randomization is enabled by default, see object.__hash__() and PYTHONHASHSEED.\n- The str type gets a new casefold() method: return a casefolded copy of the string; casefolded strings may be used for caseless matching. 
For example, '\u00df'.casefold() returns 'ss'.\n- The sequence documentation has been substantially rewritten to better explain the binary/text sequence distinction and to provide specific documentation sections for the individual builtin sequence types (bpo-4966).\nNew Modules\u00b6\nfaulthandler\u00b6\nThis new debug module faulthandler contains functions to dump Python tracebacks explicitly, on a fault (a crash like a segmentation fault), after a timeout, or on a user signal. Call faulthandler.enable() to install fault handlers for the SIGSEGV, SIGFPE, SIGABRT, SIGBUS, and SIGILL signals. You can also enable them at startup by setting the PYTHONFAULTHANDLER environment variable or by using the -X faulthandler command line option.\nExample of a segmentation fault on Linux:\n$ python -q -X faulthandler\n>>> import ctypes\n>>> ctypes.string_at(0)\nFatal Python error: Segmentation fault\nCurrent thread 0x00007fb899f39700:\nFile "/home/python/cpython/Lib/ctypes/__init__.py", line 486 in string_at\nFile "<stdin>", line 1 in <module>\nSegmentation fault\nipaddress\u00b6\nThe new ipaddress module provides tools for creating and manipulating objects representing IPv4 and IPv6 addresses, networks and interfaces (i.e. an IP address associated with a specific IP subnet).\n(Contributed by Google and Peter Moody in PEP 3144.)\nlzma\u00b6\nThe newly added lzma module provides data compression and decompression using the LZMA algorithm, including support for the .xz and .lzma file formats.\n(Contributed by Nadeem Vawda and Per \u00d8yvind Karlsen in bpo-6715.)\nImproved Modules\u00b6\nabc\u00b6\nImproved support for abstract base classes containing descriptors composed with abstract methods. The recommended approach to declaring abstract descriptors is now to provide __isabstractmethod__ as a dynamically updated property. 
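A minimal sketch of the recommended pattern, stacking the built-in property on abc.abstractmethod() (Shape and Square are invented examples):

```python
import abc

class Shape(metaclass=abc.ABCMeta):
    @property
    @abc.abstractmethod
    def area(self):
        """Subclasses must provide a concrete area property."""

class Square(Shape):
    def __init__(self, side):
        self.side = side

    @property
    def area(self):
        return self.side * self.side

print(Square(3).area)   # 9
try:
    Shape()             # still abstract: cannot be instantiated
except TypeError as exc:
    print("as expected:", exc)
```

Because property now exposes __isabstractmethod__ dynamically, the stacked decorators behave exactly like the deprecated abc.abstractproperty.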
The built-in descriptors have been updated accordingly.\n- abc.abstractproperty has been deprecated, use property with abc.abstractmethod() instead.\n- abc.abstractclassmethod has been deprecated, use classmethod with abc.abstractmethod() instead.\n- abc.abstractstaticmethod has been deprecated, use staticmethod with abc.abstractmethod() instead.\n(Contributed by Darren Dale in bpo-11610.)\nabc.ABCMeta.register() now returns the registered subclass, which means it can now be used as a class decorator (bpo-10868).\narray\u00b6\nThe array module supports the long long type using the q and Q type codes.\n(Contributed by Oren Tirosh and Hirokazu Yamamoto in bpo-1172711.)\nbase64\u00b6\nASCII-only Unicode strings are now accepted by the decoding functions of the base64 modern interface. For example, base64.b64decode('YWJj') returns b'abc'. (Contributed by Catalin Iacob in bpo-13641.)\nbinascii\u00b6\nIn addition to the binary objects they normally accept, the a2b_ functions now all also accept ASCII-only strings as input. (Contributed by Antoine Pitrou in bpo-13637.)\nbz2\u00b6\nThe bz2 module has been rewritten from scratch. In the process, several new features have been added:\n- New bz2.open() function: open a bzip2-compressed file in binary or text mode.\n- bz2.BZ2File can now read from and write to arbitrary file-like objects, by means of its constructor\u2019s fileobj argument. (Contributed by Nadeem Vawda in bpo-5863.)\n- bz2.BZ2File and bz2.decompress() can now decompress multi-stream inputs (such as those produced by the pbzip2 tool). bz2.BZ2File can now also be used to create this type of file, using the 'a' (append) mode. (Contributed by Nir Aides in bpo-1625.)\n- bz2.BZ2File now implements all of the io.BufferedIOBase API, except for the detach() and truncate() methods.\ncodecs\u00b6\nThe mbcs codec has been rewritten to correctly handle the replace and ignore error handlers on all Windows versions. 
The mbcs codec now supports all error handlers, instead of only replace to encode and ignore to decode.\nA new Windows-only codec has been added: cp65001 (bpo-13216). It is the Windows code page 65001 (Windows UTF-8, CP_UTF8). For example, it is used by sys.stdout if the console output code page is set to cp65001 (e.g., using the chcp 65001 command).\nMultibyte CJK decoders now resynchronize faster. They only ignore the first byte of an invalid byte sequence. For example, b'\\xff\\n'.decode('gb2312', 'replace') now returns a \\n after the replacement character.\nIncremental CJK codec encoders are no longer reset at each call to their encode() methods. For example:\n>>> import codecs\n>>> encoder = codecs.getincrementalencoder('hz')('strict')\n>>> b''.join(encoder.encode(x) for x in '\u52ff\u65bd\u65bc\u4eba\u3002 Bye.')\nb'~{NpJ)l6HK!#~} Bye.'\nThis example gives b'~{Np~}~{J)~}~{l6~}~{HK~}~{!#~} Bye.' with older Python versions.\nThe unicode_internal codec has been deprecated.\ncollections\u00b6\nAddition of a new ChainMap class to allow treating a number of mappings as a single unit. (Written by Raymond Hettinger for bpo-11089, made public in bpo-11297.)\nThe abstract base classes have been moved to a new collections.abc module, to better differentiate between the abstract and the concrete collections classes. Aliases for the ABCs are still present in the collections module to preserve existing imports. (bpo-11085)\nThe Counter class now supports the unary + and - operators, as well as the in-place operators +=, -=, |=, and &=. (Contributed by Raymond Hettinger in bpo-13121.)\ncontextlib\u00b6\nExitStack now provides a solid foundation for programmatic manipulation of context managers and similar cleanup functionality. 
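A minimal sketch of what ExitStack enables, entering a number of context managers that is only known at runtime (the temporary files are just stand-in resources):

```python
import contextlib
import tempfile

# Enter a variable number of context managers; all of them are unwound
# in reverse order when the with-block exits.
opened = []
with contextlib.ExitStack() as stack:
    for _ in range(3):
        f = stack.enter_context(tempfile.TemporaryFile())
        opened.append(f)
    print(all(not f.closed for f in opened))  # True: all still open here
print(all(f.closed for f in opened))          # True: closed on exit
```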
Unlike the previous contextlib.nested API (which was deprecated and removed), the new API is designed to work correctly regardless of whether context managers acquire their resources in their __init__ method (for example, file objects) or in their __enter__ method (for example, synchronisation objects from the threading module).\ncrypt\u00b6\nAddition of salt and modular crypt format (hashing method) and the mksalt() function to the crypt module.\ncurses\u00b6\n- If the curses module is linked to the ncursesw library, use Unicode functions when Unicode strings or characters are passed (e.g. waddwstr()), and bytes functions otherwise (e.g. waddstr()).\n- Use the locale encoding instead of utf-8 to encode Unicode strings.\n- curses.window has a new curses.window.encoding attribute.\n- The curses.window class has a new get_wch() method to get a wide character.\n- The curses module has a new unget_wch() function to push a wide character so the next get_wch() will return it.\n(Contributed by I\u00f1igo Serna in bpo-6755.)\ndatetime\u00b6\n- Equality comparisons between naive and aware datetime instances now return False instead of raising TypeError (bpo-15006).\n- New datetime.datetime.timestamp() method: Return the POSIX timestamp corresponding to the datetime instance.\n- The datetime.datetime.strftime() method supports formatting years older than 1000.\n- The datetime.datetime.astimezone() method can now be called without arguments to convert a datetime instance to the system timezone.\ndecimal\u00b6\n- bpo-7652 - integrate fast native decimal arithmetic. C-module and libmpdec written by Stefan Krah.\nThe new C version of the decimal module integrates the high speed libmpdec library for arbitrary precision correctly rounded decimal floating-point arithmetic. libmpdec conforms to IBM\u2019s General Decimal Arithmetic Specification.\nPerformance gains range from 10x for database applications to 100x for numerically intensive applications. 
These numbers are expected gains for standard precisions used in decimal floating-point arithmetic. Since the precision is user configurable, the exact figures may vary. For example, in integer bignum arithmetic the differences can be significantly higher.\nThe following table is meant as an illustration. Benchmarks are available at https://www.bytereef.org/mpdecimal/quickstart.html.\nbenchmark | decimal.py | _decimal | speedup\npi | 42.02s | 0.345s | 120x\ntelco | 172.19s | 5.68s | 30x\npsycopg | 3.57s | 0.29s | 12x\nFeatures\u00b6\n- The FloatOperation signal optionally enables stricter semantics for mixing floats and Decimals.\n- If Python is compiled without threads, the C version automatically disables the expensive thread local context machinery. In this case, the variable HAVE_THREADS is set to False.\nAPI changes\u00b6\nThe C module has the following context limits, depending on the machine architecture:\n- In the context templates (DefaultContext, BasicContext and ExtendedContext) the magnitude of Emax and Emin has changed to 999999.\n- The Decimal constructor in decimal.py does not observe the context limits and converts values with arbitrary exponents or precision exactly. Since the C version has internal limits, the following scheme is used: if possible, values are converted exactly, otherwise InvalidOperation is raised and the result is NaN. In the latter case it is always possible to use create_decimal() in order to obtain a rounded or inexact value.\n- The power function in decimal.py is always correctly rounded. In the C version, it is defined in terms of the correctly rounded exp() and ln() functions, but the final result is only \u201calmost always correctly rounded\u201d.\n- In the C version, the context dictionary containing the signals is a MutableMapping. For speed reasons, flags and traps always refer to the same MutableMapping that the context was initialized with. 
If a new signal dictionary is assigned, flags and traps are updated with the new values, but they do not reference the RHS dictionary.\n- Pickling a Context produces a different output in order to have a common interchange format for the Python and C versions.\n- The order of arguments in the Context constructor has been changed to match the order displayed by repr().\n- The watchexp parameter in the quantize() method is deprecated.\nemail\u00b6\nPolicy Framework\u00b6\nThe email package now has a policy framework. A Policy is an object with several methods and properties that control how the email package behaves. The primary policy for Python 3.3 is the Compat32 policy, which provides backward compatibility with the email package in Python 3.2. A policy can be specified when an email message is parsed by a parser, or when a Message object is created, or when an email is serialized using a generator. Unless overridden, a policy passed to a parser is inherited by all the Message objects and sub-objects created by the parser. By default a generator will use the policy of the Message object it is serializing. The default policy is compat32.\nThe minimum set of controls implemented by all policy objects are:\nmax_line_length | The maximum length, excluding the linesep character(s), that individual lines may have when a Message is serialized. Defaults to 78.\nlinesep | The character used to separate individual lines when a Message is serialized. Defaults to \\n.\ncte_type | 7bit or 8bit. 8bit applies only to a Bytes generator, and means that non-ASCII may be used where allowed by the protocol (or where it exists in the original input).\nraise_on_defect | Causes a parser to raise an error when defects are encountered instead of adding them to the Message object\u2019s defects list.\nA new policy instance, with new settings, is created using the clone() method of policy objects. clone takes any of the above controls as keyword arguments. Any control not specified in the call retains its default value. Thus you can create a policy that uses \\r\\n linesep characters like this:\nmypolicy = compat32.clone(linesep='\\r\\n')\nPolicies can be used to make the generation of messages in the format needed by your application simpler. 
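A small sketch of the effect of such a cloned policy (smtp_policy is an invented name):

```python
from email.message import Message
from email.policy import compat32

# A compat32 variant that serializes with CRLF line endings,
# as the SMTP wire format expects.
smtp_policy = compat32.clone(linesep='\r\n')

msg = Message(policy=smtp_policy)
msg['Subject'] = 'Hello'
msg.set_payload('Hi there')
print('\r\n' in msg.as_string())  # True: serialized with CRLF line endings
```

The policy travels with the Message, so every later serialization of msg picks up the CRLF setting without repeating it.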
Instead of having to remember to specify linesep='\\r\\n' in all the places you call a generator, you can specify it once, when you set the policy used by the parser or the Message, whichever your program uses to create Message objects. On the other hand, if you need to generate messages in multiple forms, you can still specify the parameters in the appropriate generator call. Or you can have custom policy instances for your different cases, and pass those in when you create the generator.\nProvisional Policy with New Header API\u00b6\nWhile the policy framework is worthwhile all by itself, the main motivation for introducing it is to allow the creation of new policies that implement new features for the email package in a way that maintains backward compatibility for those who do not use the new policies. Because the new policies introduce a new API, we are releasing them in Python 3.3 as a provisional policy. Backwards incompatible changes (up to and including removal of the code) may occur if deemed necessary by the core developers.\nThe new policies are instances of EmailPolicy, and add the following additional controls:\nrefold_source | Controls whether or not headers parsed by a parser are refolded by the generator when the message is serialized.\nheader_factory | A callable that takes a header name and value and produces a custom header object.\nThe header_factory is the key to the new features provided by the new policies. When one of the new policies is used, any header retrieved from a Message object is an object produced by the header_factory, and any time you set a header on a Message it becomes an object produced by header_factory. All such header objects have a name attribute equal to the header name. Address and Date headers have additional attributes that give you access to the parsed data of the header. 
This means you can now do things like this:\n>>> m = Message(policy=SMTP)\n>>> m['To'] = '\u00c9ric <foo@example.com>'\n>>> m['to']\n'\u00c9ric <foo@example.com>'\n>>> m['to'].addresses\n(Address(display_name='\u00c9ric', username='foo', domain='example.com'),)\n>>> m['to'].addresses[0].username\n'foo'\n>>> m['to'].addresses[0].display_name\n'\u00c9ric'\n>>> m['Date'] = email.utils.localtime()\n>>> m['Date'].datetime\ndatetime.datetime(2012, 5, 25, 21, 39, 24, 465484, tzinfo=datetime.timezone(datetime.timedelta(-1, 72000), 'EDT'))\n>>> m['Date']\n'Fri, 25 May 2012 21:44:27 -0400'\n>>> print(m)\nTo: =?utf-8?q?=C3=89ric?= <foo@example.com>\nDate: Fri, 25 May 2012 21:44:27 -0400\nYou will note that the unicode display name is automatically encoded as utf-8 when the message is serialized, but that when the header is accessed directly, you get the unicode version. This eliminates any need to deal with the email.header decode_header() or make_header() functions.\nYou can also create addresses from parts:\n>>> m['cc'] = [Group('pals', [Address('Bob', 'bob', 'example.com'),\n... Address('Sally', 'sally', 'example.com')]),\n... 
Address('Bonzo', addr_spec='bonz@laugh.com')]\n>>> print(m)\nTo: =?utf-8?q?=C3=89ric?= <foo@example.com>\nDate: Fri, 25 May 2012 21:44:27 -0400\ncc: pals: Bob <bob@example.com>, Sally <sally@example.com>;, Bonzo <bonz@laugh.com>\nDecoding to unicode is done automatically:\n>>> m2 = message_from_string(str(m))\n>>> m2['to']\n'\u00c9ric <foo@example.com>'\nWhen you parse a message, you can use the addresses and groups attributes of the header objects to access the groups and individual addresses:\n>>> m2['cc'].addresses\n(Address(display_name='Bob', username='bob', domain='example.com'), Address(display_name='Sally', username='sally', domain='example.com'), Address(display_name='Bonzo', username='bonz', domain='laugh.com'))\n>>> m2['cc'].groups\n(Group(display_name='pals', addresses=(Address(display_name='Bob', username='bob', domain='example.com'), Address(display_name='Sally', username='sally', domain='example.com'))), Group(display_name=None, addresses=(Address(display_name='Bonzo', username='bonz', domain='laugh.com'),)))\nIn summary, if you use one of the new policies, header manipulation works the way it ought to: your application works with unicode strings, and the email package transparently encodes and decodes the unicode to and from the RFC standard Content Transfer Encodings.\nOther API Changes\u00b6\nNew BytesHeaderParser, added to the parser module to complement HeaderParser and complete the Bytes API.\nNew utility functions:\n- format_datetime(): given a datetime, produce a string formatted for use in an email header.\n- parsedate_to_datetime(): given a date string from an email header, convert it into an aware datetime, or a naive datetime if the offset is -0000.\n- localtime(): With no argument, returns the current local time as an aware datetime using the local timezone. 
Given an aware datetime, converts it into an aware datetime using the local timezone.\nftplib\u00b6\n- ftplib.FTP now accepts a source_address keyword argument to specify the (host, port) to use as the source address in the bind call when creating the outgoing socket. (Contributed by Giampaolo Rodol\u00e0 in bpo-8594.)\n- The FTP_TLS class now provides a new ccc() function to revert the control channel back to plaintext. This can be useful to take advantage of firewalls that know how to handle NAT with non-secure FTP without opening fixed ports. (Contributed by Giampaolo Rodol\u00e0 in bpo-12139.)\n- Added the ftplib.FTP.mlsd() method, which provides a parsable directory listing format and deprecates ftplib.FTP.nlst() and ftplib.FTP.dir(). (Contributed by Giampaolo Rodol\u00e0 in bpo-11072.)\nfunctools\u00b6\nThe functools.lru_cache() decorator now accepts a typed keyword argument (that defaults to False) to ensure that it caches values of different types that compare equal in separate cache slots. (Contributed by Raymond Hettinger in bpo-13227.)\ngc\u00b6\nIt is now possible to register callbacks invoked by the garbage collector before and after collection using the new callbacks list.\nhmac\u00b6\nA new compare_digest() function has been added to prevent side channel attacks on digests through timing analysis. (Contributed by Nick Coghlan and Christian Heimes in bpo-15061.)\nhttp\u00b6\nhttp.server.BaseHTTPRequestHandler now buffers the headers and writes them all at once when end_headers() is called. A new method flush_headers() can be used to directly manage when the accumulated headers are sent. (Contributed by Andrew Schaaf in bpo-3709.)\nhttp.server now produces valid HTML 4.01 strict output. (Contributed by Ezio Melotti in bpo-13295.)\nhttp.client.HTTPResponse now has a readinto() method, which means it can be used as an io.RawIOBase class. 
(Contributed by John Kuhn in bpo-13464.)\nhtml\u00b6\nhtml.parser.HTMLParser is now able to parse broken markup without raising errors; therefore the strict argument of the constructor and the HTMLParseError exception are now deprecated. The ability to parse broken markup is the result of a number of bug fixes that are also available on the latest bug fix releases of Python 2.7/3.2. (Contributed by Ezio Melotti in bpo-15114, and bpo-14538, bpo-13993, bpo-13960, bpo-13358, bpo-1745761, bpo-755670, bpo-13357, bpo-12629, bpo-1200313, bpo-670664, bpo-13273, bpo-12888, bpo-7311.)\nA new html5 dictionary that maps HTML5 named character references to the equivalent Unicode character(s) (e.g. html5['gt;'] == '>') has been added to the html.entities module. The dictionary is now also used by HTMLParser. (Contributed by Ezio Melotti in bpo-11113 and bpo-15156.)\nimaplib\u00b6\nThe IMAP4_SSL constructor now accepts an SSLContext parameter to control parameters of the secure channel. (Contributed by Sijin Joseph in bpo-8808.)\ninspect\u00b6\nA new getclosurevars() function has been added. This function reports the current binding of all names referenced from the function body and where those names were resolved, making it easier to verify correct internal state when testing code that relies on stateful closures. (Contributed by Meador Inge and Nick Coghlan in bpo-13062.)\nA new getgeneratorlocals() function has been added. This function reports the current binding of local variables in the generator\u2019s stack frame, making it easier to verify correct internal state when testing generators. (Contributed by Meador Inge in bpo-15153.)\nio\u00b6\nThe open() function has a new 'x' mode that can be used to exclusively create a new file, raising a FileExistsError if the file already exists. 
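A short sketch of the new mode (the file path is a throwaway temporary):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "fresh.txt")

with open(path, "x") as f:        # succeeds: the file does not exist yet
    f.write("created exclusively\n")

try:
    open(path, "x")               # a second exclusive create must fail
except FileExistsError:
    print("FileExistsError, as expected")
```

Compared to checking os.path.exists() first, 'x' mode is atomic, so no other process can slip in between the check and the create.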
It is based on the C11 \u2018x\u2019 mode to fopen().\n(Contributed by David Townshend in bpo-12760.)\nThe constructor of the TextIOWrapper class has a new write_through optional argument. If write_through is True, calls to write() are guaranteed not to be buffered: any data written on the TextIOWrapper object is immediately passed to its underlying binary buffer.\nitertools\u00b6\naccumulate() now takes an optional func argument for providing a user-supplied binary function.\nlogging\u00b6\nThe basicConfig() function now supports an optional handlers argument taking an iterable of handlers to be added to the root logger.\nA class-level attribute append_nul has been added to SysLogHandler to allow control of the appending of the NUL (\\000) byte to syslog records, since for some daemons it is required while for others it is passed through to the log.\nmath\u00b6\nThe math module has a new function, log2(), which returns the base-2 logarithm of x.\n(Written by Mark Dickinson in bpo-11888.)\nmmap\u00b6\nThe read() method is now more compatible with other file-like objects: if the argument is omitted or specified as None, it returns the bytes from the current file position to the end of the mapping. 
(Contributed by Petri Lehtinen in bpo-12021.)\nmultiprocessing\u00b6\nThe new multiprocessing.connection.wait() function allows polling multiple objects (such as connections, sockets and pipes) with a timeout. (Contributed by Richard Oudkerk in bpo-12328.)\nmultiprocessing.connection.Connection objects can now be transferred over multiprocessing connections. (Contributed by Richard Oudkerk in bpo-4892.)\nmultiprocessing.Process now accepts a daemon keyword argument to override the default behavior of inheriting the daemon flag from the parent process (bpo-6064).\nNew attribute multiprocessing.Process.sentinel allows a program to wait on multiple Process objects at one time using the appropriate OS primitives (for example, select on posix systems).\nNew methods multiprocessing.pool.Pool.starmap() and starmap_async() provide itertools.starmap() equivalents to the existing multiprocessing.pool.Pool.map() and map_async() functions. (Contributed by Hynek Schlawack in bpo-12708.)\nnntplib\u00b6\nThe nntplib.NNTP class now supports the context management protocol to unconditionally consume socket.error exceptions and to close the NNTP connection when done:\n>>> from nntplib import NNTP\n>>> with NNTP('news.gmane.org') as n:\n... n.group('gmane.comp.python.committers')\n...\n('211 1755 1 1755 gmane.comp.python.committers', 1755, 1, 1755, 'gmane.comp.python.committers')\n>>>\n(Contributed by Giampaolo Rodol\u00e0 in bpo-9795.)\nos\u00b6\n- The os module has a new pipe2() function that makes it possible to create a pipe with the O_CLOEXEC or O_NONBLOCK flags set atomically. This is especially useful to avoid race conditions in multi-threaded programs.\n- The os module has a new sendfile() function which provides an efficient \u201czero-copy\u201d way for copying data from one file (or socket) descriptor to another. 
The phrase \u201czero-copy\u201d refers to the fact that all of the copying of data between the two descriptors is done entirely by the kernel, with no copying of data into userspace buffers.\nsendfile()\ncan be used to efficiently copy data from a file on disk to a network socket, e.g. for downloading a file. (Patch submitted by Ross Lagerwall and Giampaolo Rodol\u00e0 in bpo-10882.)\nTo avoid race conditions like symlink attacks and issues with temporary files and directories, it is more reliable (and also faster) to manipulate file descriptors instead of file names. Python 3.3 enhances existing functions and introduces new functions to work on file descriptors (bpo-4761, bpo-10755 and bpo-14626).\nThe os\nmodule has a new fwalk()\nfunction similar to walk()\nexcept that it also yields file descriptors referring to the directories visited. This is especially useful to avoid symlink races.\nThe following functions get new optional dir_fd (paths relative to directory descriptors) and/or follow_symlinks (not following symlinks) arguments: access()\n, chflags()\n, chmod()\n, chown()\n, link()\n, lstat()\n, mkdir()\n, mkfifo()\n, mknod()\n, open()\n, readlink()\n, remove()\n, rename()\n, replace()\n, rmdir()\n, stat()\n, symlink()\n, unlink()\n, utime()\n. Platform support for using these parameters can be checked via the sets os.supports_dir_fd\nand os.supports_follow_symlinks\n.\nThe following functions now support a file descriptor for their path argument: chdir()\n, chmod()\n, chown()\n, execve()\n, listdir()\n, pathconf()\n, exists()\n, stat()\n, statvfs()\n, utime()\n. Platform support for this can be checked via the os.supports_fd\nset.\naccess()\naccepts an effective_ids\nkeyword argument to turn on using the effective uid/gid rather than the real uid/gid in the access check. Platform support for this can be checked via the supports_effective_ids\nset.\nThe os\nmodule has two new functions: getpriority()\nand setpriority()\n. 
They can be used to get or set process niceness/priority in a fashion similar to os.nice()\nbut extended to all processes instead of just the current one. (Patch submitted by Giampaolo Rodol\u00e0 in bpo-10784.)\nThe new os.replace()\nfunction allows cross-platform renaming of a file with overwriting the destination. With os.rename()\n, an existing destination file is overwritten under POSIX, but raises an error under Windows. (Contributed by Antoine Pitrou in bpo-8828.)\nThe stat family of functions (stat()\n, fstat()\n, and lstat()\n) now support reading a file\u2019s timestamps with nanosecond precision. Symmetrically, utime()\ncan now write file timestamps with nanosecond precision. (Contributed by Larry Hastings in bpo-14127.)\nThe new os.get_terminal_size()\nfunction queries the size of the terminal attached to a file descriptor. See also shutil.get_terminal_size()\n. (Contributed by Zbigniew J\u0119drzejewski-Szmek in bpo-13609.)\nNew functions to support Linux extended attributes (bpo-12720): getxattr()\n, listxattr()\n, removexattr()\n, setxattr()\n.\nNew interface to the scheduler. These functions control how a process is allocated CPU time by the operating system. 
New functions: sched_get_priority_max()\n, sched_get_priority_min()\n, sched_getaffinity()\n, sched_getparam()\n, sched_getscheduler()\n, sched_rr_get_interval()\n, sched_setaffinity()\n, sched_setparam()\n, sched_setscheduler()\n, sched_yield()\n.\nNew functions to control the file system:\nposix_fadvise()\n: Announces an intention to access data in a specific pattern, thus allowing the kernel to make optimizations.\nposix_fallocate()\n: Ensures that enough disk space is allocated for a file.\nsync()\n: Force write of everything to disk.\nAdditional new posix functions:\nlockf()\n: Apply, test or remove a POSIX lock on an open file descriptor.\npread()\n: Read from a file descriptor at an offset; the file offset remains unchanged.\npwrite()\n: Write to a file descriptor from an offset, leaving the file offset unchanged.\nreadv()\n: Read from a file descriptor into a number of writable buffers.\ntruncate()\n: Truncate the file corresponding to path, so that it is at most length bytes in size.\nwaitid()\n: Wait for the completion of one or more child processes.\nwritev()\n: Write the contents of buffers to a file descriptor, where buffers is an arbitrary sequence of buffers.\ngetgrouplist()\n(bpo-9344): Return the list of group ids that the specified user belongs to.\ntimes()\nand uname()\n: Return type changed from a tuple to a tuple-like object with named attributes.\nSome platforms now support additional constants for the lseek()\nfunction, such as os.SEEK_HOLE\nand os.SEEK_DATA\n.\nNew constants RTLD_LAZY\n, RTLD_NOW\n, RTLD_GLOBAL\n, RTLD_LOCAL\n, RTLD_NODELETE\n, RTLD_NOLOAD\n, and RTLD_DEEPBIND\nare available on platforms that support them. These are for use with the sys.setdlopenflags()\nfunction, and supersede the similar constants defined in ctypes\nand DLFCN\n. 
(Contributed by Victor Stinner in bpo-13226.)\nos.symlink()\nnow accepts (and ignores) the target_is_directory\nkeyword argument on non-Windows platforms, to ease cross-platform support.\npdb\u00b6\nTab-completion is now available not only for command names, but also their\narguments. For example, for the break\ncommand, function and file names\nare completed.\n(Contributed by Georg Brandl in bpo-14210.)\npickle\u00b6\npickle.Pickler\nobjects now have an optional\ndispatch_table\nattribute allowing per-pickler\nreduction functions to be set.\n(Contributed by Richard Oudkerk in bpo-14166.)\npydoc\u00b6\nThe Tk GUI and the serve()\nfunction have been removed from the\npydoc\nmodule: pydoc -g\nand serve()\nhad been deprecated\nin Python 3.2.\nre\u00b6\nstr\nregular expressions now support \u\nand \U\nescapes.\n(Contributed by Serhiy Storchaka in bpo-3665.)\nsched\u00b6\nrun()\nnow accepts a blocking parameter which, when set to false, makes the method execute the scheduled events due to expire soonest (if any) and then return immediately. This is useful in case you want to use the scheduler\nin non-blocking applications. (Contributed by Giampaolo Rodol\u00e0 in bpo-13449.)\nThe scheduler\nclass can now be safely used in multi-threaded environments. (Contributed by Josiah Carlson and Giampaolo Rodol\u00e0 in bpo-8684.)\nThe timefunc and delayfunc parameters of the scheduler\nclass constructor are now optional and default to time.time()\nand time.sleep()\nrespectively. (Contributed by Chris Clark in bpo-13245.)\nThe argument parameter of enter()\nand enterabs()\nis now optional. (Contributed by Chris Clark in bpo-13245.)\nenter()\nand enterabs()\nnow accept a kwargs parameter. 
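The sched changes above (optional constructor arguments, plus the kwargs parameter of enter()) can be exercised with a short sketch; this is an illustration, not from the original page:

```python
import sched

events = []

def record(tag):
    events.append(tag)

s = sched.scheduler()  # timefunc and delayfunc are now optional
# kwargs lets the scheduled callable receive keyword arguments.
s.enter(0.02, 1, record, kwargs={"tag": "second"})
s.enter(0.01, 1, record, argument=("first",))
s.run()  # blocks until both events have fired
```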
(Contributed by Chris Clark in bpo-13245.)\nselect\u00b6\nSolaris and derivative platforms have a new class select.devpoll\nfor high performance asynchronous sockets via /dev/poll\n.\n(Contributed by Jes\u00fas Cea Avi\u00f3n in bpo-6397.)\nshlex\u00b6\nThe previously undocumented helper function quote\nfrom the\npipes\nmodule has been moved to the shlex\nmodule and\ndocumented. quote()\nproperly escapes all characters in a string\nthat might be otherwise given special meaning by the shell.\nshutil\u00b6\nNew functions:\ndisk_usage()\n: provides total, used and free disk space statistics. (Contributed by Giampaolo Rodol\u00e0 in bpo-12442.)\nchown()\n: allows one to change the user and/or group of the given path, accepting user/group names and not only their numeric ids. (Contributed by Sandro Tosi in bpo-12191.)\nshutil.get_terminal_size()\n: returns the size of the terminal window to which the interpreter is attached. (Contributed by Zbigniew J\u0119drzejewski-Szmek in bpo-13609.)\ncopy2()\nand copystat()\nnow preserve file timestamps with nanosecond precision on platforms that support it. They also preserve file \u201cextended attributes\u201d on Linux. (Contributed by Larry Hastings in bpo-14127 and bpo-15238.)\nSeveral functions now take an optional symlinks\nargument: when that parameter is true, symlinks aren\u2019t dereferenced and the operation instead acts on the symlink itself (or creates one, if relevant). (Contributed by Hynek Schlawack in bpo-12715.)\nWhen copying files to a different file system, move()\nnow handles symlinks the way the posix mv\ncommand does, recreating the symlink rather than copying the target file contents. (Contributed by Jonathan Niehof in bpo-9993.)\nmove()\nnow also returns the dst\nargument as its result.\nrmtree()\nis now resistant to symlink attacks on platforms which support the new dir_fd\nparameter in os.open()\nand os.unlink()\n. 
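A quick sketch of the new disk_usage() helper described above (illustrative only):

```python
import shutil

# disk_usage() returns a named tuple with total, used and free bytes
# for the file system containing the given path.
usage = shutil.disk_usage(".")
```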
(Contributed by Martin von L\u00f6wis and Hynek Schlawack in bpo-4489.)\nsignal\u00b6\nThe signal\nmodule has new functions:\npthread_sigmask()\n: fetch and/or change the signal mask of the calling thread (Contributed by Jean-Paul Calderone in bpo-8407);\npthread_kill()\n: send a signal to a thread;\nsigpending()\n: examine pending signals;\nsigwait()\n: wait for a signal;\nsigwaitinfo()\n: wait for a signal, returning detailed information about it;\nsigtimedwait()\n: like sigwaitinfo()\nbut with a timeout.\nThe signal handler writes the signal number as a single byte instead of a nul byte into the wakeup file descriptor. So it is possible to wait for more than one signal and know which signals were raised.\nsignal.signal()\nand signal.siginterrupt()\nraise an OSError, instead of a RuntimeError: OSError has an errno attribute.\nsmtpd\u00b6\nThe smtpd\nmodule now supports RFC 5321 (extended SMTP) and RFC 1870\n(size extension). Per the standard, these extensions are enabled if and only\nif the client initiates the session with an EHLO\ncommand.\n(Initial EHLO\nsupport by Alberto Trevino. Size extension by Juhana\nJauhiainen. Substantial additional work on the patch contributed by Michele\nOrr\u00f9 and Dan Boswell. bpo-8739)\nsmtplib\u00b6\nThe SMTP\n, SMTP_SSL\n, and\nLMTP\nclasses now accept a source_address\nkeyword argument\nto specify the (host, port)\nto use as the source address in the bind call\nwhen creating the outgoing socket. (Contributed by Paulo Scardine in\nbpo-11281.)\nSMTP\nnow supports the context management protocol, allowing an\nSMTP\ninstance to be used in a with\nstatement. (Contributed\nby Giampaolo Rodol\u00e0 in bpo-11289.)\nThe SMTP_SSL\nconstructor and the starttls()\nmethod now accept an SSLContext parameter to control parameters of the secure\nchannel. 
(Contributed by Kasun Herath in bpo-8809.)\nsocket\u00b6\nThe socket\nclass now exposes additional methods to process ancillary data when supported by the underlying platform. (Contributed by David Watson in bpo-6560, based on an earlier patch by Heiko Wundram.)\nThe socket\nclass now supports the PF_CAN protocol family (https://en.wikipedia.org/wiki/Socketcan) on Linux (https://lwn.net/Articles/253425). (Contributed by Matthias Fuchs, updated by Tiago Gon\u00e7alves in bpo-10141.)\nThe socket\nclass now supports the PF_RDS protocol family (https://en.wikipedia.org/wiki/Reliable_Datagram_Sockets and https://oss.oracle.com/projects/rds).\nThe socket\nclass now supports the PF_SYSTEM\nprotocol family on OS X. (Contributed by Michael Goderbauer in bpo-13777.)\nNew function sethostname()\nallows the hostname to be set on Unix systems if the calling process has sufficient privileges. (Contributed by Ross Lagerwall in bpo-10866.)\nsocketserver\u00b6\nBaseServer\nnow has an overridable method\nservice_actions()\nthat is called by the\nserve_forever()\nmethod in the service loop.\nForkingMixIn\nnow uses this to clean up zombie\nchild processes. (Contributed by Justin Warkentin in bpo-11109.)\nsqlite3\u00b6\nNew sqlite3.Connection\nmethod\nset_trace_callback()\ncan be used to capture a trace of\nall sql commands processed by sqlite. (Contributed by Torsten Landschoff\nin bpo-11688.)\nssl\u00b6\nThe ssl\nmodule has two new random generation functions:\nRAND_bytes()\n: generate cryptographically strong pseudo-random bytes.\nRAND_pseudo_bytes()\n: generate pseudo-random bytes.\n(Contributed by Victor Stinner in bpo-12049.)\nThe ssl\nmodule now exposes a finer-grained exception hierarchy in order to make it easier to inspect the various kinds of errors. (Contributed by Antoine Pitrou in bpo-11183.)\nload_cert_chain()\nnow accepts a password argument to be used if the private key is encrypted. 
(Contributed by Adam Simpkins in bpo-12803.)\nDiffie-Hellman key exchange, both regular and Elliptic Curve-based, is now supported through the load_dh_params()\nand set_ecdh_curve()\nmethods. (Contributed by Antoine Pitrou in bpo-13626 and bpo-13627.)\nSSL sockets have a new get_channel_binding()\nmethod allowing the implementation of certain authentication mechanisms such as SCRAM-SHA-1-PLUS. (Contributed by Jacek Konieczny in bpo-12551.)\nYou can query the SSL compression algorithm used by an SSL socket, thanks to its new compression()\nmethod. The new attribute OP_NO_COMPRESSION\ncan be used to disable compression. (Contributed by Antoine Pitrou in bpo-13634.)\nSupport has been added for the Next Protocol Negotiation extension using the ssl.SSLContext.set_npn_protocols()\nmethod. (Contributed by Colin Marc in bpo-14204.)\nSSL errors can now be introspected more easily thanks to library\nand reason\nattributes. (Contributed by Antoine Pitrou in bpo-14837.)\nThe get_server_certificate()\nfunction now supports IPv6. (Contributed by Charles-Fran\u00e7ois Natali in bpo-11811.)\nNew attribute OP_CIPHER_SERVER_PREFERENCE\nallows setting SSLv3 server sockets to use the server\u2019s cipher ordering preference rather than the client\u2019s (bpo-13635).\nstat\u00b6\nThe undocumented tarfile.filemode function has been moved to\nstat.filemode()\n. It can be used to convert a file\u2019s mode to a string of\nthe form \u2018-rwxrwxrwx\u2019.\n(Contributed by Giampaolo Rodol\u00e0 in bpo-14807.)\nstruct\u00b6\nThe struct\nmodule now supports ssize_t\nand size_t\nvia the\nnew codes n\nand N\n, respectively. (Contributed by Antoine Pitrou\nin bpo-3163.)\nsubprocess\u00b6\nCommand strings can now be bytes objects on posix platforms. (Contributed by Victor Stinner in bpo-8513.)\nA new constant DEVNULL\nallows suppressing output in a\nplatform-independent fashion. 
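The DEVNULL constant mentioned above replaces the old idiom of opening os.devnull by hand; a minimal sketch:

```python
import subprocess
import sys

# Suppress a child process's output without opening os.devnull manually.
rc = subprocess.call(
    [sys.executable, "-c", "print('noisy output')"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
```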
(Contributed by Ross Lagerwall in\nbpo-5870.)\nsys\u00b6\nThe sys\nmodule has a new thread_info\nnamed\ntuple holding information about the thread implementation\n(bpo-11223).\ntarfile\u00b6\ntarfile\nnow supports lzma\nencoding via the lzma\nmodule.\n(Contributed by Lars Gust\u00e4bel in bpo-5689.)\ntempfile\u00b6\ntempfile.SpooledTemporaryFile\n's truncate()\nmethod now accepts\na size\nparameter. (Contributed by Ryan Kelly in bpo-9957.)\ntextwrap\u00b6\nThe textwrap\nmodule has a new indent()\nfunction that makes\nit straightforward to add a common prefix to selected lines in a block\nof text (bpo-13857).\nthreading\u00b6\nthreading.Condition\n, threading.Semaphore\n,\nthreading.BoundedSemaphore\n, threading.Event\n, and\nthreading.Timer\n, all of which used to be factory functions returning a\nclass instance, are now classes and may be subclassed. (Contributed by \u00c9ric\nAraujo in bpo-10968.)\nThe threading.Thread\nconstructor now accepts a daemon\nkeyword\nargument to override the default behavior of inheriting the daemon\nflag\nvalue from the parent thread (bpo-6064).\nThe formerly private function _thread.get_ident\nis now available as the\npublic function threading.get_ident()\n. This eliminates several cases of\ndirect access to the _thread\nmodule in the stdlib. Third party code that\nused _thread.get_ident\nshould likewise be changed to use the new public\ninterface.\ntime\u00b6\nPEP 418 added new functions to the time\nmodule:\nget_clock_info()\n: Get information on a clock.\nmonotonic()\n: Monotonic clock (cannot go backward), not affected by system clock updates.\nperf_counter()\n: Performance counter with the highest available resolution to measure a short duration.\nprocess_time()\n: Sum of the system and user CPU time of the current process.\nOther new functions:\nclock_getres()\n, clock_gettime()\nand clock_settime()\nfunctions with CLOCK_xxx\nconstants. 
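The PEP 418 clocks listed above can be sketched as follows (illustrative only):

```python
import time

# monotonic() cannot go backward, so it is safe for measuring intervals;
# perf_counter() offers the highest available resolution.
info = time.get_clock_info("monotonic")
start = time.perf_counter()
time.sleep(0.01)
elapsed = time.perf_counter() - start
```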
(Contributed by Victor Stinner in bpo-10278.)\nTo improve cross platform consistency, sleep()\nnow raises a\nValueError\nwhen passed a negative sleep value. Previously this was an\nerror on posix, but produced an infinite sleep on Windows.\ntypes\u00b6\nAdd a new types.MappingProxyType\nclass: Read-only proxy of a mapping.\n(bpo-14386)\nThe new functions types.new_class()\nand types.prepare_class()\nprovide support\nfor PEP 3115 compliant dynamic type creation. (bpo-14588)\nunittest\u00b6\nassertRaises()\n, assertRaisesRegex()\n, assertWarns()\n, and\nassertWarnsRegex()\nnow accept a keyword argument msg when used as\ncontext managers. (Contributed by Ezio Melotti and Winston Ewert in\nbpo-10775.)\nunittest.TestCase.run()\nnow returns the TestResult\nobject.\nurllib\u00b6\nThe Request\nclass, now accepts a method argument\nused by get_method()\nto determine what HTTP method\nshould be used. For example, this will send a 'HEAD'\nrequest:\n>>> urlopen(Request('https://www.python.org', method='HEAD'))\nwebbrowser\u00b6\nThe webbrowser\nmodule supports more \u201cbrowsers\u201d: Google Chrome (named\nchrome, chromium, chrome-browser or\nchromium-browser depending on the version and operating system),\nand the generic launchers xdg-open, from the FreeDesktop.org\nproject, and gvfs-open, which is the default URI handler for GNOME\n3. (The former contributed by Arnaud Calmettes in bpo-13620, the latter\nby Matthias Klose in bpo-14493.)\nxml.etree.ElementTree\u00b6\nThe xml.etree.ElementTree\nmodule now imports its C accelerator by\ndefault; there is no longer a need to explicitly import\nxml.etree.cElementTree\n(this module stays for backwards compatibility,\nbut is now deprecated). 
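The read-only MappingProxyType added in the types section above can be sketched as (illustrative only):

```python
import types

settings = {"debug": False}
view = types.MappingProxyType(settings)  # read-only, live view of the dict

settings["debug"] = True  # changes to the dict are visible through the proxy
try:
    view["debug"] = False  # but writes through the proxy are rejected
    mutated = True
except TypeError:
    mutated = False
```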
In addition, the iter\nfamily of methods of\nElement\nhas been optimized (rewritten in C).\nThe module\u2019s documentation has also been greatly improved with added examples\nand a more detailed reference.\nzlib\u00b6\nNew attribute zlib.Decompress.eof\nmakes it possible to distinguish\nbetween a properly formed compressed stream and an incomplete or truncated one.\n(Contributed by Nadeem Vawda in bpo-12646.)\nNew attribute zlib.ZLIB_RUNTIME_VERSION\nreports the version string of\nthe underlying zlib\nlibrary that is loaded at runtime. (Contributed by\nTorsten Landschoff in bpo-12306.)\nOptimizations\u00b6\nMajor performance enhancements have been added:\nThanks to PEP 393, some operations on Unicode strings have been optimized:\nthe memory footprint is divided by 2 to 4 depending on the text\nencode an ASCII string to UTF-8 doesn\u2019t need to encode characters anymore, the UTF-8 representation is shared with the ASCII representation\nthe UTF-8 encoder has been optimized\nrepeating a single ASCII letter and getting a substring of an ASCII string is 4 times faster\nUTF-8 is now 2x to 4x faster. 
UTF-16 encoding is now up to 10x faster.\n(Contributed by Serhiy Storchaka, bpo-14624, bpo-14738 and bpo-15026.)\nBuild and C API Changes\u00b6\nChanges to Python\u2019s build process and to the C API include:\nNew PEP 3118 related function:\nPEP 393 added new Unicode types, macros and functions:\nHigh-level API:\nLow-level API:\nPyASCIIObject\nandPyCompactUnicodeObject\nstructuresPyUnicode_DATA\n,PyUnicode_1BYTE_DATA\n,PyUnicode_2BYTE_DATA\n,PyUnicode_4BYTE_DATA\nPyUnicode_KIND\nwithPyUnicode_Kind\nenum:PyUnicode_WCHAR_KIND\n,PyUnicode_1BYTE_KIND\n,PyUnicode_2BYTE_KIND\n,PyUnicode_4BYTE_KIND\nPyArg_ParseTuple\nnow accepts abytearray\nfor thec\nformat (bpo-12380).\nDeprecated\u00b6\nUnsupported Operating Systems\u00b6\nOS/2 and VMS are no longer supported due to the lack of a maintainer.\nWindows 2000 and Windows platforms which set COMSPEC\nto command.com\nare no longer supported due to maintenance burden.\nOSF support, which was deprecated in 3.2, has been completely removed.\nDeprecated Python modules, functions and methods\u00b6\nPassing a non-empty string to\nobject.__format__()\nis deprecated, and will produce aTypeError\nin Python 3.4 (bpo-9856).The\nunicode_internal\ncodec has been deprecated because of the PEP 393, use UTF-8, UTF-16 (utf-16-le\norutf-16-be\n), or UTF-32 (utf-32-le\norutf-32-be\n)ftplib.FTP.nlst()\nandftplib.FTP.dir()\n: useftplib.FTP.mlsd()\nplatform.popen()\n: use thesubprocess\nmodule. Check especially the Replacing Older Functions with the subprocess Module section (bpo-11377).bpo-13374: The Windows bytes API has been deprecated in the\nos\nmodule. Use Unicode filenames, instead of bytes filenames, to not depend on the ANSI code page anymore and to support any filename.bpo-13988: The\nxml.etree.cElementTree\nmodule is deprecated. 
The accelerator is used automatically whenever available.The behaviour of\ntime.clock()\ndepends on the platform: use the newtime.perf_counter()\nortime.process_time()\nfunction instead, depending on your requirements, to have a well defined behaviour.The\nos.stat_float_times()\nfunction is deprecated.abc\nmodule:abc.abstractproperty\nhas been deprecated, useproperty\nwithabc.abstractmethod()\ninstead.abc.abstractclassmethod\nhas been deprecated, useclassmethod\nwithabc.abstractmethod()\ninstead.abc.abstractstaticmethod\nhas been deprecated, usestaticmethod\nwithabc.abstractmethod()\ninstead.\nimportlib\npackage:importlib.abc.SourceLoader.path_mtime()\nis now deprecated in favour ofimportlib.abc.SourceLoader.path_stats()\nas bytecode files now store both the modification time and size of the source file the bytecode file was compiled from.\nDeprecated functions and types of the C API\u00b6\nThe Py_UNICODE\nhas been deprecated by PEP 393 and will be\nremoved in Python 4. All functions using this type are deprecated:\nUnicode functions and methods using Py_UNICODE\nand\nPy_UNICODE* types:\nPyUnicode_FromUnicode\n: usePyUnicode_FromWideChar()\norPyUnicode_FromKindAndData()\nPyUnicode_AS_UNICODE\n,PyUnicode_AsUnicode()\n,PyUnicode_AsUnicodeAndSize()\n: usePyUnicode_AsWideCharString()\nPyUnicode_AS_DATA\n: usePyUnicode_DATA\nwithPyUnicode_READ\nandPyUnicode_WRITE\nPyUnicode_GET_SIZE\n,PyUnicode_GetSize()\n: usePyUnicode_GET_LENGTH\norPyUnicode_GetLength()\nPyUnicode_GET_DATA_SIZE\n: usePyUnicode_GET_LENGTH(str) * PyUnicode_KIND(str)\n(only work on ready strings)PyUnicode_AsUnicodeCopy()\n: usePyUnicode_AsUCS4Copy()\norPyUnicode_AsWideCharString()\nPyUnicode_GetMax()\nFunctions and macros manipulating Py_UNICODE* strings:\nPy_UNICODE_strlen()\n: usePyUnicode_GetLength()\norPyUnicode_GET_LENGTH\nPy_UNICODE_strcat()\n: usePyUnicode_CopyCharacters()\norPyUnicode_FromFormat()\nPy_UNICODE_strcpy()\n,Py_UNICODE_strncpy()\n,Py_UNICODE_COPY()\n: 
usePyUnicode_CopyCharacters()\norPyUnicode_Substring()\nPy_UNICODE_strcmp()\n: usePyUnicode_Compare()\nPy_UNICODE_strncmp()\n: usePyUnicode_Tailmatch()\nPy_UNICODE_strchr()\n,Py_UNICODE_strrchr()\n: usePyUnicode_FindChar()\nPy_UNICODE_FILL()\n: usePyUnicode_Fill()\nPy_UNICODE_MATCH\nEncoders:\nPyUnicode_Encode()\n: usePyUnicode_AsEncodedObject()\nPyUnicode_EncodeUTF7()\nPyUnicode_EncodeUTF8()\n: usePyUnicode_AsUTF8()\norPyUnicode_AsUTF8String()\nPyUnicode_EncodeUTF32()\nPyUnicode_EncodeUTF16()\nPyUnicode_EncodeUnicodeEscape()\nusePyUnicode_AsUnicodeEscapeString()\nPyUnicode_EncodeRawUnicodeEscape()\nusePyUnicode_AsRawUnicodeEscapeString()\nPyUnicode_EncodeLatin1()\n: usePyUnicode_AsLatin1String()\nPyUnicode_EncodeASCII()\n: usePyUnicode_AsASCIIString()\nPyUnicode_EncodeCharmap()\nPyUnicode_TranslateCharmap()\nPyUnicode_EncodeMBCS()\n: usePyUnicode_AsMBCSString()\norPyUnicode_EncodeCodePage()\n(withCP_ACP\ncode_page)PyUnicode_EncodeDecimal()\n,PyUnicode_TransformDecimalToASCII()\nDeprecated features\u00b6\nThe array\nmodule\u2019s 'u'\nformat code is now deprecated and will be\nremoved in Python 4 together with the rest of the (Py_UNICODE\n) API.\nPorting to Python 3.3\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nPorting Python code\u00b6\nHash randomization is enabled by default. Set the\nPYTHONHASHSEED\nenvironment variable to0\nto disable hash randomization. See also theobject.__hash__()\nmethod.bpo-12326: On Linux, sys.platform doesn\u2019t contain the major version anymore. It is now always \u2018linux\u2019, instead of \u2018linux2\u2019 or \u2018linux3\u2019 depending on the Linux version used to build Python. 
Replace sys.platform == \u2018linux2\u2019 with sys.platform.startswith(\u2018linux\u2019), or directly sys.platform == \u2018linux\u2019 if you don\u2019t need to support older Python versions.\nbpo-13847, bpo-14180:\ntime\nanddatetime\n:OverflowError\nis now raised instead ofValueError\nif a timestamp is out of range.OSError\nis now raised if C functionsgmtime()\norlocaltime()\nfailed.The default finders used by import now utilize a cache of what is contained within a specific directory. If you create a Python source file or sourceless bytecode file, make sure to call\nimportlib.invalidate_caches()\nto clear out the cache for the finders to notice the new file.ImportError\nnow uses the full name of the module that was attempted to be imported. Doctests that check ImportErrors\u2019 message will need to be updated to use the full name of the module instead of just the tail of the name.The index argument to\n__import__()\nnow defaults to 0 instead of -1 and no longer support negative values. It was an oversight when PEP 328 was implemented that the default value remained -1. If you need to continue to perform a relative import followed by an absolute import, then perform the relative import using an index of 1, followed by another import using an index of 0. It is preferred, though, that you useimportlib.import_module()\nrather than call__import__()\ndirectly.__import__()\nno longer allows one to use an index value other than 0 for top-level modules. E.g.__import__('sys', level=1)\nis now an error.Because\nsys.meta_path\nandsys.path_hooks\nnow have finders on them by default, you will most likely want to uselist.insert()\ninstead oflist.append()\nto add to those lists.Because\nNone\nis now inserted intosys.path_importer_cache\n, if you are clearing out entries in the dictionary of paths that do not have a finder, you will need to remove keys paired with values ofNone\nandimp.NullImporter\nto be backwards-compatible. 
This will lead to extra overhead on older versions of Python that re-insertNone\nintosys.path_importer_cache\nwhere it represents the use of implicit finders, but semantically it should not change anything.importlib.abc.Finder\nno longer specifies afind_module()\nabstract method that must be implemented. If you were relying on subclasses to implement that method, make sure to check for the method\u2019s existence first. You will probably want to check forfind_loader()\nfirst, though, in the case of working with path entry finders.pkgutil\nhas been converted to useimportlib\ninternally. This eliminates many edge cases where the old behaviour of the PEP 302 import emulation failed to match the behaviour of the real import system. The import emulation itself is still present, but is now deprecated. Thepkgutil.iter_importers()\nandpkgutil.walk_packages()\nfunctions special case the standard import hooks so they are still supported even though they do not provide the non-standarditer_modules()\nmethod.A longstanding RFC-compliance bug (bpo-1079) in the parsing done by\nemail.header.decode_header()\nhas been fixed. Code that uses the standard idiom to convert encoded headers into unicode (str(make_header(decode_header(h))\n) will see no change, but code that looks at the individual tuples returned by decode_header will see that whitespace that precedes or followsASCII\nsections is now included in theASCII\nsection. Code that builds headers usingmake_header\nshould also continue to work without change, sincemake_header\ncontinues to add whitespace betweenASCII\nand non-ASCII\nsections if it is not already present in the input strings.email.utils.formataddr()\nnow does the correct content transfer encoding when passed non-ASCII\ndisplay names. 
Any code that depended on the previous buggy behavior that preserved the non-ASCII\nunicode in the formatted output string will need to be changed (bpo-1690608).poplib.POP3.quit()\nmay now raise protocol errors like all otherpoplib\nmethods. Code that assumesquit\ndoes not raisepoplib.error_proto\nerrors may need to be changed if errors onquit\nare encountered by a particular application (bpo-11291).The\nstrict\nargument toemail.parser.Parser\n, deprecated since Python 2.4, has finally been removed.The deprecated method\nunittest.TestCase.assertSameElements\nhas been removed.The deprecated variable\ntime.accept2dyear\nhas been removed.The deprecated\nContext._clamp\nattribute has been removed from thedecimal\nmodule. It was previously replaced by the public attributeclamp\n. (See bpo-8540.)The undocumented internal helper class\nSSLFakeFile\nhas been removed fromsmtplib\n, since its functionality has long been provided directly bysocket.socket.makefile()\n.Passing a negative value to\ntime.sleep()\non Windows now raises an error instead of sleeping forever. It has always raised an error on posix.The\nast.__version__\nconstant has been removed. If you need to make decisions affected by the AST version, usesys.version_info\nto make the decision.Code that used to work around the fact that the\nthreading\nmodule used factory functions by subclassing the private classes will need to change to subclass the now-public classes.The undocumented debugging machinery in the threading module has been removed, simplifying the code. 
This should have no effect on production code, but is mentioned here in case any application debug frameworks were interacting with it (bpo-13550).\nPorting C code\u00b6\nIn the course of changes to the buffer API the undocumented\nsmalltable\nmember of thePy_buffer\nstructure has been removed and the layout of thePyMemoryViewObject\nhas changed.All extensions relying on the relevant parts in\nmemoryobject.h\norobject.h\nmust be rebuilt.Due to PEP 393, the\nPy_UNICODE\ntype and all functions using this type are deprecated (but will stay available for at least five years). If you were using low-level Unicode APIs to construct and access unicode objects and you want to benefit of the memory footprint reduction provided by PEP 393, you have to convert your code to the new Unicode API.However, if you only have been using high-level functions such as\nPyUnicode_Concat()\n,PyUnicode_Join()\norPyUnicode_FromFormat()\n, your code will automatically take advantage of the new unicode representations.PyImport_GetMagicNumber()\nnow returns-1\nupon failure.As a negative value for the level argument to\n__import__()\nis no longer valid, the same now holds forPyImport_ImportModuleLevel()\n. This also means that the value of level used byPyImport_ImportModuleEx()\nis now0\ninstead of-1\n.\nBuilding C extensions\u00b6\nThe range of possible file names for C extensions has been narrowed. Very rarely used spellings have been suppressed: under POSIX, files named\nxxxmodule.so\n,xxxmodule.abi3.so\nandxxxmodule.cpython-*.so\nare no longer recognized as implementing thexxx\nmodule. If you had been generating such files, you have to switch to the other spellings (i.e., remove themodule\nstring from the file names).(implemented in bpo-14040.)\nCommand Line Switch Changes\u00b6\nThe -Q command-line flag and related artifacts have been removed. 
Code checking sys.flags.division_warning will need updating.\n(bpo-10998, contributed by \u00c9ric Araujo.)\nWhen python is started with\n-S\n,import site\nwill no longer add site-specific paths to the module search paths. In previous versions, it did.(bpo-11591, contributed by Carl Meyer with editions by \u00c9ric Araujo.)", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 19663}
+{"url": "https://docs.python.org/3/search.html", "title": "Search", "content": "Please activate JavaScript to enable the search functionality.\nSearching for multiple words only shows matches that contain all words.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 33}
+{"url": "https://docs.python.org/3/c-api/utilities.html", "title": "Utilities", "content": "Utilities\u00b6\nThe functions in this chapter perform various utility tasks, ranging from helping C code be more portable across platforms, using Python modules from C, and parsing function arguments and constructing Python values from C values.\n- Operating System Utilities\n- System Functions\n- Process Control\n- Importing Modules\n- Data marshalling support\n- Parsing arguments and building values\n- String conversion and formatting\n- Character classification and conversion\n- PyHash API\n- Reflection\n- Codec registry and support functions\n- PyTime C API\n- Support for Perf Maps", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 143}
+{"url": "https://docs.python.org/3/c-api/abstract.html", "title": "Abstract Objects Layer", "content": "Abstract Objects Layer\u00b6\nThe functions in this chapter interact with Python objects regardless of their type, or with wide classes of object types (e.g. all numerical types, or all sequence types).
When used on object types for which they do not apply, they will raise a Python exception.\nIt is not possible to use these functions on objects that are not properly\ninitialized, such as a list object that has been created by PyList_New()\n,\nbut whose items have not been set to some non-NULL\nvalue yet.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 125} +{"url": "https://docs.python.org/3/extending/building.html", "title": "Building C and C++ Extensions", "content": "4. Building C and C++ Extensions\u00b6\nA C extension for CPython is a shared library (for example, a .so\nfile on\nLinux, .pyd\non Windows), which exports an initialization function.\nSee Defining extension modules for details.\n4.1. Building C and C++ Extensions with setuptools\u00b6\nBuilding, packaging and distributing extension modules is best done with third-party tools, and is out of scope of this document. One suitable tool is Setuptools, whose documentation can be found at https://setuptools.pypa.io/en/latest/setuptools.html.\nThe distutils\nmodule, which was included in the standard library\nuntil Python 3.12, is now maintained as part of Setuptools.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 162} +{"url": "https://docs.python.org/3/extending/extending.html", "title": "Extending Python with C or C++", "content": "1. Extending Python with C or C++\u00b6\nIt is quite easy to add new built-in modules to Python, if you know how to program in C. Such extension modules can do two things that can\u2019t be done directly in Python: they can implement new built-in object types, and they can call C library functions and system calls.\nTo support extensions, the Python API (Application Programmers Interface)\ndefines a set of functions, macros and variables that provide access to most\naspects of the Python run-time system. 
The Python API is incorporated in a C\nsource file by including the header \"Python.h\"\n.\nThe compilation of an extension module depends on its intended use as well as on your system setup; details are given in later chapters.\nNote\nThe C extension interface is specific to CPython, and extension modules do\nnot work on other Python implementations. In many cases, it is possible to\navoid writing C extensions and preserve portability to other implementations.\nFor example, if your use case is calling C library functions or system calls,\nyou should consider using the ctypes\nmodule or the cffi library rather than writing\ncustom C code.\nThese modules let you write Python code to interface with C code and are more\nportable between implementations of Python than writing and compiling a C\nextension module.\n1.1. A Simple Example\u00b6\nLet\u2019s create an extension module called spam\n(the favorite food of Monty\nPython fans\u2026) and let\u2019s say we want to create a Python interface to the C\nlibrary function system()\n[1]. This function takes a null-terminated\ncharacter string as argument and returns an integer. We want this function to\nbe callable from Python as follows:\n>>> import spam\n>>> status = spam.system(\"ls -l\")\nBegin by creating a file spammodule.c\n. 
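The ctypes route mentioned above can be sketched without writing any C at all. This is a rough illustration, not part of the tutorial's spam module: it assumes a POSIX system where the C runtime is reachable via ctypes.CDLL(None), and it calls the same system() function the spam example wraps.

```python
# Sketch: calling the C library function system() via ctypes instead of an
# extension module. Assumes a POSIX platform where CDLL(None) exposes libc.
import ctypes

libc = ctypes.CDLL(None)                     # handle to the loaded C runtime
libc.system.argtypes = [ctypes.c_char_p]     # declare the C signature
libc.system.restype = ctypes.c_int

status = libc.system(b"exit 0")              # run a command in a shell
print(status)                                # 0 when the command succeeds
```

The trade-off is the one the text describes: this stays portable across Python implementations, at the cost of declaring C signatures by hand rather than having the compiler check them.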
(Historically, if a module is\ncalled spam\n, the C file containing its implementation is called\nspammodule.c\n; if the module name is very long, like spammify\n, the\nmodule name can be just spammify.c\n.)\nThe first two lines of our file can be:\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\nwhich pulls in the Python API (you can add a comment describing the purpose of the module and a copyright notice if you like).\nNote\nSince Python may define some pre-processor definitions which affect the standard\nheaders on some systems, you must include Python.h\nbefore any standard\nheaders are included.\n#define PY_SSIZE_T_CLEAN\nwas used to indicate that Py_ssize_t\nshould be\nused in some APIs instead of int\n.\nIt is not necessary since Python 3.13, but we keep it here for backward compatibility.\nSee Strings and buffers for a description of this macro.\nAll user-visible symbols defined by Python.h\nhave a prefix of Py\nor\nPY\n, except those defined in standard header files.\nTip\nFor backward compatibility, Python.h\nincludes several standard header files.\nC extensions should include the standard headers that they use,\nand should not rely on these implicit includes.\nIf using the limited C API version 3.13 or newer, the implicit includes are:\n\n\n(on Windows)\n\n\n\n\n\n(if present)\nIf Py_LIMITED_API\nis not defined, or is set to version 3.12 or older,\nthe headers below are also included:\n\n\n(on POSIX)\nIf Py_LIMITED_API\nis not defined, or is set to version 3.10 or older,\nthe headers below are also included:\n\n\n\n\nThe next thing we add to our module file is the C function that will be called\nwhen the Python expression spam.system(string)\nis evaluated (we\u2019ll see\nshortly how it ends up being called):\nstatic PyObject *\nspam_system(PyObject *self, PyObject *args)\n{\nconst char *command;\nint sts;\nif (!PyArg_ParseTuple(args, \"s\", &command))\nreturn NULL;\nsts = system(command);\nreturn PyLong_FromLong(sts);\n}\nThere is a straightforward translation
from the argument list in Python (for\nexample, the single expression \"ls -l\"\n) to the arguments passed to the C\nfunction. The C function always has two arguments, conventionally named self\nand args.\nThe self argument points to the module object for module-level functions; for a method it would point to the object instance.\nThe args argument will be a pointer to a Python tuple object containing the\narguments. Each item of the tuple corresponds to an argument in the call\u2019s\nargument list. The arguments are Python objects \u2014 in order to do anything\nwith them in our C function we have to convert them to C values. The function\nPyArg_ParseTuple()\nin the Python API checks the argument types and\nconverts them to C values. It uses a template string to determine the required\ntypes of the arguments as well as the types of the C variables into which to\nstore the converted values. More about this later.\nPyArg_ParseTuple()\nreturns true (nonzero) if all arguments have the right\ntype and its components have been stored in the variables whose addresses are\npassed. It returns false (zero) if an invalid argument list was passed. In the\nlatter case it also raises an appropriate exception so the calling function can\nreturn NULL\nimmediately (as we saw in the example).\n1.2. Intermezzo: Errors and Exceptions\u00b6\nAn important convention throughout the Python interpreter is the following: when\na function fails, it should set an exception condition and return an error value\n(usually -1\nor a NULL\npointer). Exception information is stored in\nthree members of the interpreter\u2019s thread state. These are NULL\nif\nthere is no exception. Otherwise they are the C equivalents of the members\nof the Python tuple returned by sys.exc_info()\n. These are the\nexception type, exception instance, and a traceback object. 
It is important\nto know about them to understand how errors are passed around.\nThe Python API defines a number of functions to set various types of exceptions.\nThe most common one is PyErr_SetString()\n. Its arguments are an exception\nobject and a C string. The exception object is usually a predefined object like\nPyExc_ZeroDivisionError\n. The C string indicates the cause of the error\nand is converted to a Python string object and stored as the \u201cassociated value\u201d\nof the exception.\nAnother useful function is PyErr_SetFromErrno()\n, which only takes an\nexception argument and constructs the associated value by inspection of the\nglobal variable errno\n. The most general function is\nPyErr_SetObject()\n, which takes two object arguments, the exception and\nits associated value. You don\u2019t need to Py_INCREF()\nthe objects passed\nto any of these functions.\nYou can test non-destructively whether an exception has been set with\nPyErr_Occurred()\n. This returns the current exception object, or NULL\nif no exception has occurred. You normally don\u2019t need to call\nPyErr_Occurred()\nto see whether an error occurred in a function call,\nsince you should be able to tell from the return value.\nWhen a function f that calls another function g detects that the latter\nfails, f should itself return an error value (usually NULL\nor -1\n). It\nshould not call one of the PyErr_*\nfunctions \u2014 one has already\nbeen called by g. f\u2019s caller is then supposed to also return an error\nindication to its caller, again without calling PyErr_*\n, and so on\n\u2014 the most detailed cause of the error was already reported by the function\nthat first detected it. 
Once the error reaches the Python interpreter\u2019s main\nloop, this aborts the currently executing Python code and tries to find an\nexception handler specified by the Python programmer.\n(There are situations where a module can actually give a more detailed error\nmessage by calling another PyErr_*\nfunction, and in such cases it is\nfine to do so. As a general rule, however, this is not necessary, and can cause\ninformation about the cause of the error to be lost: most operations can fail\nfor a variety of reasons.)\nTo ignore an exception set by a function call that failed, the exception\ncondition must be cleared explicitly by calling PyErr_Clear()\n. The only\ntime C code should call PyErr_Clear()\nis if it doesn\u2019t want to pass the\nerror on to the interpreter but wants to handle it completely by itself\n(possibly by trying something else, or pretending nothing went wrong).\nEvery failing malloc()\ncall must be turned into an exception \u2014 the\ndirect caller of malloc()\n(or realloc()\n) must call\nPyErr_NoMemory()\nand return a failure indicator itself. All the\nobject-creating functions (for example, PyLong_FromLong()\n) already do\nthis, so this note is only relevant to those who call malloc()\ndirectly.\nAlso note that, with the important exception of PyArg_ParseTuple()\nand\nfriends, functions that return an integer status usually return a positive value\nor zero for success and -1\nfor failure, like Unix system calls.\nFinally, be careful to clean up garbage (by making Py_XDECREF()\nor\nPy_DECREF()\ncalls for objects you have already created) when you return\nan error indicator!\nThe choice of which exception to raise is entirely yours. There are predeclared\nC objects corresponding to all built-in Python exceptions, such as\nPyExc_ZeroDivisionError\n, which you can use directly. 
Of course, you\nshould choose exceptions wisely \u2014 don\u2019t use PyExc_TypeError\nto mean\nthat a file couldn\u2019t be opened (that should probably be PyExc_OSError\n).\nIf something\u2019s wrong with the argument list, the PyArg_ParseTuple()\nfunction usually raises PyExc_TypeError\n. If you have an argument whose\nvalue must be in a particular range or must satisfy other conditions,\nPyExc_ValueError\nis appropriate.\nYou can also define a new exception that is unique to your module. The simplest way to do this is to declare a static global object variable at the beginning of the file:\nstatic PyObject *SpamError = NULL;\nand initialize it by calling PyErr_NewException()\nin the module\u2019s\nPy_mod_exec\nfunction (spam_module_exec()\n):\nSpamError = PyErr_NewException(\"spam.error\", NULL, NULL);\nSince SpamError\nis a global variable, it will be overwritten every time\nthe module is reinitialized, when the Py_mod_exec\nfunction is called.\nFor now, let\u2019s avoid the issue: we will block repeated initialization by raising an\nImportError\n:\nstatic PyObject *SpamError = NULL;\nstatic int\nspam_module_exec(PyObject *m)\n{\nif (SpamError != NULL) {\nPyErr_SetString(PyExc_ImportError,\n\"cannot initialize spam module more than once\");\nreturn -1;\n}\nSpamError = PyErr_NewException(\"spam.error\", NULL, NULL);\nif (PyModule_AddObjectRef(m, \"SpamError\", SpamError) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot spam_module_slots[] = {\n{Py_mod_exec, spam_module_exec},\n{0, NULL}\n};\nstatic struct PyModuleDef spam_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"spam\",\n.m_size = 0, // non-negative\n.m_slots = spam_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_spam(void)\n{\nreturn PyModuleDef_Init(&spam_module);\n}\nNote that the Python name for the exception object is spam.error\n. 
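In pure Python terms, the class that PyErr_NewException("spam.error", NULL, NULL) creates can be approximated with the built-in type() constructor. This is only an illustrative analogue; the real object is created in C:

```python
# Approximation of PyErr_NewException("spam.error", NULL, NULL): a new
# exception class named "error", reported as living in module "spam",
# inheriting from Exception (the default base when NULL is passed).
SpamError = type("error", (Exception,), {"__module__": "spam"})

try:
    raise SpamError("System command failed")
except SpamError as exc:
    print(exc)          # the "associated value", as PyErr_SetString would set it
```

As in the C version, keeping a long-lived reference to the class (here, the SpamError name) is what prevents it from being collected while code that raises it is still running.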
The\nPyErr_NewException()\nfunction may create a class with the base class\nbeing Exception\n(unless another class is passed in instead of NULL\n),\ndescribed in Built-in Exceptions.\nNote also that the SpamError\nvariable retains a reference to the newly\ncreated exception class; this is intentional! Since the exception could be\nremoved from the module by external code, an owned reference to the class is\nneeded to ensure that it will not be discarded, causing SpamError\nto\nbecome a dangling pointer. Should it become a dangling pointer, C code which\nraises the exception could cause a core dump or other unintended side effects.\nFor now, the Py_DECREF()\ncall to remove this reference is missing.\nEven when the Python interpreter shuts down, the global SpamError\nvariable will not be garbage-collected. It will \u201cleak\u201d.\nWe did, however, ensure that this will happen at most once per process.\nWe discuss the use of PyMODINIT_FUNC\nas a function return type later in this\nsample.\nThe spam.error\nexception can be raised in your extension module using a\ncall to PyErr_SetString()\nas shown below:\nstatic PyObject *\nspam_system(PyObject *self, PyObject *args)\n{\nconst char *command;\nint sts;\nif (!PyArg_ParseTuple(args, \"s\", &command))\nreturn NULL;\nsts = system(command);\nif (sts < 0) {\nPyErr_SetString(SpamError, \"System command failed\");\nreturn NULL;\n}\nreturn PyLong_FromLong(sts);\n}\n1.3. Back to the Example\u00b6\nGoing back to our example function, you should now be able to understand this statement:\nif (!PyArg_ParseTuple(args, \"s\", &command))\nreturn NULL;\nIt returns NULL\n(the error indicator for functions returning object pointers)\nif an error is detected in the argument list, relying on the exception set by\nPyArg_ParseTuple()\n. Otherwise the string value of the argument has been\ncopied to the local variable command\n. 
This is a pointer assignment and\nyou are not supposed to modify the string to which it points (so in Standard C,\nthe variable command\nshould properly be declared as const char\n*command\n).\nThe next statement is a call to the Unix function system()\n, passing it\nthe string we just got from PyArg_ParseTuple()\n:\nsts = system(command);\nOur spam.system()\nfunction must return the value of sts\nas a\nPython object. This is done using the function PyLong_FromLong()\n.\nreturn PyLong_FromLong(sts);\nIn this case, it will return an integer object. (Yes, even integers are objects on the heap in Python!)\nIf you have a C function that returns no useful argument (a function returning\nvoid), the corresponding Python function must return None\n. You\nneed this idiom to do so (which is implemented by the Py_RETURN_NONE\nmacro):\nPy_INCREF(Py_None);\nreturn Py_None;\nPy_None\nis the C name for the special Python object None\n. It is a\ngenuine Python object rather than a NULL\npointer, which means \u201cerror\u201d in most\ncontexts, as we have seen.\n1.4. The Module\u2019s Method Table and Initialization Function\u00b6\nI promised to show how spam_system()\nis called from Python programs.\nFirst, we need to list its name and address in a \u201cmethod table\u201d:\nstatic PyMethodDef spam_methods[] = {\n...\n{\"system\", spam_system, METH_VARARGS,\n\"Execute a shell command.\"},\n...\n{NULL, NULL, 0, NULL} /* Sentinel */\n};\nNote the third entry (METH_VARARGS\n). This is a flag telling the interpreter\nthe calling convention to be used for the C function. 
It should normally always\nbe METH_VARARGS\nor METH_VARARGS | METH_KEYWORDS\n; a value of 0\nmeans\nthat an obsolete variant of PyArg_ParseTuple()\nis used.\nWhen using only METH_VARARGS\n, the function should expect the Python-level\nparameters to be passed in as a tuple acceptable for parsing via\nPyArg_ParseTuple()\n; more information on this function is provided below.\nThe METH_KEYWORDS\nbit may be set in the third field if keyword\narguments should be passed to the function. In this case, the C function should\naccept a third PyObject *\nparameter which will be a dictionary of keywords.\nUse PyArg_ParseTupleAndKeywords()\nto parse the arguments to such a\nfunction.\nThe method table must be referenced in the module definition structure:\nstatic struct PyModuleDef spam_module = {\n...\n.m_methods = spam_methods,\n...\n};\nThis structure, in turn, must be passed to the interpreter in the module\u2019s\ninitialization function. The initialization function must be named\nPyInit_name()\n, where name is the name of the module, and should be the\nonly non-static\nitem defined in the module file:\nPyMODINIT_FUNC\nPyInit_spam(void)\n{\nreturn PyModuleDef_Init(&spam_module);\n}\nNote that PyMODINIT_FUNC\ndeclares the function as PyObject *\nreturn type,\ndeclares any special linkage declarations required by the platform, and for C++\ndeclares the function as extern \"C\"\n.\nPyInit_spam()\nis called when each interpreter imports its module\nspam\nfor the first time. 
(See below for comments about embedding Python.)\nA pointer to the module definition must be returned via PyModuleDef_Init()\n,\nso that the import machinery can create the module and store it in sys.modules\n.\nWhen embedding Python, the PyInit_spam()\nfunction is not called\nautomatically unless there\u2019s an entry in the PyImport_Inittab\ntable.\nTo add the module to the initialization table, use PyImport_AppendInittab()\n,\noptionally followed by an import of the module:\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\nint\nmain(int argc, char *argv[])\n{\nPyStatus status;\nPyConfig config;\nPyConfig_InitPythonConfig(&config);\n/* Add a built-in module, before Py_Initialize */\nif (PyImport_AppendInittab(\"spam\", PyInit_spam) == -1) {\nfprintf(stderr, \"Error: could not extend in-built modules table\\n\");\nexit(1);\n}\n/* Pass argv[0] to the Python interpreter */\nstatus = PyConfig_SetBytesString(&config, &config.program_name, argv[0]);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\n/* Initialize the Python interpreter. Required.\nIf this step fails, it will be a fatal error. */\nstatus = Py_InitializeFromConfig(&config);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nPyConfig_Clear(&config);\n/* Optionally import the module; alternatively,\nimport can be deferred until the embedded script\nimports it. */\nPyObject *pmodule = PyImport_ImportModule(\"spam\");\nif (!pmodule) {\nPyErr_Print();\nfprintf(stderr, \"Error: could not import module 'spam'\\n\");\n}\n// ...
use Python C API here ...\nreturn 0;\nexception:\nPyConfig_Clear(&config);\nPy_ExitStatusException(status);\n}\nNote\nIf you declare a global variable or a local static one, the module may\nexperience unintended side-effects on re-initialisation, for example when\nremoving entries from sys.modules\nor importing compiled modules into\nmultiple interpreters within a process\n(or following a fork()\nwithout an intervening exec()\n).\nIf module state is not yet fully isolated,\nauthors should consider marking the module as having no support for subinterpreters\n(via Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED\n).\nA more substantial example module is included in the Python source distribution\nas Modules/xxlimited.c\n. This file may be used as a template or simply\nread as an example.\n1.5. Compilation and Linkage\u00b6\nThere are two more things to do before you can use your new extension: compiling and linking it with the Python system. If you use dynamic loading, the details may depend on the style of dynamic loading your system uses; see the chapters about building extension modules (chapter Building C and C++ Extensions) and additional information that pertains only to building on Windows (chapter Building C and C++ Extensions on Windows) for more information about this.\nIf you can\u2019t use dynamic loading, or if you want to make your module a permanent\npart of the Python interpreter, you will have to change the configuration setup\nand rebuild the interpreter. Luckily, this is very simple on Unix: just place\nyour file (spammodule.c\nfor example) in the Modules/\ndirectory\nof an unpacked source distribution, add a line to the file\nModules/Setup.local\ndescribing your file:\nspam spammodule.o\nand rebuild the interpreter by running make in the toplevel\ndirectory. You can also run make in the Modules/\nsubdirectory, but then you must first rebuild Makefile\nthere by running\n\u2018make Makefile\u2019. 
(This is necessary each time you change the\nSetup\nfile.)\nIf your module requires additional libraries to link with, these can be listed on the line in the configuration file as well, for instance:\nspam spammodule.o -lX11\n1.6. Calling Python Functions from C\u00b6\nSo far we have concentrated on making C functions callable from Python. The reverse is also useful: calling Python functions from C. This is especially the case for libraries that support so-called \u201ccallback\u201d functions. If a C interface makes use of callbacks, the equivalent Python often needs to provide a callback mechanism to the Python programmer; the implementation will require calling the Python callback functions from a C callback. Other uses are also imaginable.\nFortunately, the Python interpreter is easily called recursively, and there is a\nstandard interface to call a Python function. (I won\u2019t dwell on how to call the\nPython parser with a particular string as input \u2014 if you\u2019re interested, have a\nlook at the implementation of the -c\ncommand line option in\nModules/main.c\nfrom the Python source code.)\nCalling a Python function is easy. First, the Python program must somehow pass\nyou the Python function object. You should provide a function (or some other\ninterface) to do this. When this function is called, save a pointer to the\nPython function object (be careful to Py_INCREF()\nit!) in a global\nvariable \u2014 or wherever you see fit. 
For example, the following function might\nbe part of a module definition:\nstatic PyObject *my_callback = NULL;\nstatic PyObject *\nmy_set_callback(PyObject *dummy, PyObject *args)\n{\nPyObject *result = NULL;\nPyObject *temp;\nif (PyArg_ParseTuple(args, \"O:set_callback\", &temp)) {\nif (!PyCallable_Check(temp)) {\nPyErr_SetString(PyExc_TypeError, \"parameter must be callable\");\nreturn NULL;\n}\nPy_XINCREF(temp); /* Add a reference to new callback */\nPy_XDECREF(my_callback); /* Dispose of previous callback */\nmy_callback = temp; /* Remember new callback */\n/* Boilerplate to return \"None\" */\nPy_INCREF(Py_None);\nresult = Py_None;\n}\nreturn result;\n}\nThis function must be registered with the interpreter using the\nMETH_VARARGS\nflag; this is described in section The Module\u2019s Method Table and Initialization Function. The\nPyArg_ParseTuple()\nfunction and its arguments are documented in section\nExtracting Parameters in Extension Functions.\nThe macros Py_XINCREF()\nand Py_XDECREF()\nincrement/decrement the\nreference count of an object and are safe in the presence of NULL\npointers\n(but note that temp will not be NULL\nin this context). More info on them\nin section Reference Counts.\nLater, when it is time to call the function, you call the C function\nPyObject_CallObject()\n. This function has two arguments, both pointers to\narbitrary Python objects: the Python function, and the argument list. The\nargument list must always be a tuple object, whose length is the number of\narguments. To call the Python function with no arguments, pass in NULL\n, or\nan empty tuple; to call it with one argument, pass a singleton tuple.\nPy_BuildValue()\nreturns a tuple when its format string consists of zero\nor more format codes between parentheses. 
For example:\nint arg;\nPyObject *arglist;\nPyObject *result;\n...\narg = 123;\n...\n/* Time to call the callback */\narglist = Py_BuildValue(\"(i)\", arg);\nresult = PyObject_CallObject(my_callback, arglist);\nPy_DECREF(arglist);\nPyObject_CallObject()\nreturns a Python object pointer: this is the return\nvalue of the Python function. PyObject_CallObject()\nis\n\u201creference-count-neutral\u201d with respect to its arguments. In the example a new\ntuple was created to serve as the argument list, which is\nPy_DECREF()\n-ed immediately after the PyObject_CallObject()\ncall.\nThe return value of PyObject_CallObject()\nis \u201cnew\u201d: either it is a brand\nnew object, or it is an existing object whose reference count has been\nincremented. So, unless you want to save it in a global variable, you should\nsomehow Py_DECREF()\nthe result, even (especially!) if you are not\ninterested in its value.\nBefore you do this, however, it is important to check that the return value\nisn\u2019t NULL\n. If it is, the Python function terminated by raising an exception.\nIf the C code that called PyObject_CallObject()\nis called from Python, it\nshould now return an error indication to its Python caller, so the interpreter\ncan print a stack trace, or the calling Python code can handle the exception.\nIf this is not possible or desirable, the exception should be cleared by calling\nPyErr_Clear()\n. For example:\nif (result == NULL)\nreturn NULL; /* Pass error back */\n...use result...\nPy_DECREF(result);\nDepending on the desired interface to the Python callback function, you may also\nhave to provide an argument list to PyObject_CallObject()\n. In some cases\nthe argument list is also provided by the Python program, through the same\ninterface that specified the callback function. It can then be saved and used\nin the same manner as the function object. In other cases, you may have to\nconstruct a new tuple to pass as the argument list. 
The simplest way to do this\nis to call Py_BuildValue()\n. For example, if you want to pass an integral\nevent code, you might use the following code:\nPyObject *arglist;\n...\narglist = Py_BuildValue(\"(l)\", eventcode);\nresult = PyObject_CallObject(my_callback, arglist);\nPy_DECREF(arglist);\nif (result == NULL)\nreturn NULL; /* Pass error back */\n/* Here maybe use the result */\nPy_DECREF(result);\nNote the placement of Py_DECREF(arglist)\nimmediately after the call, before\nthe error check! Also note that strictly speaking this code is not complete:\nPy_BuildValue()\nmay run out of memory, and this should be checked.\nYou may also call a function with keyword arguments by using\nPyObject_Call()\n, which supports arguments and keyword arguments. As in\nthe above example, we use Py_BuildValue()\nto construct the dictionary.\nPyObject *dict;\n...\ndict = Py_BuildValue(\"{s:i}\", \"name\", val);\nresult = PyObject_Call(my_callback, NULL, dict);\nPy_DECREF(dict);\nif (result == NULL)\nreturn NULL; /* Pass error back */\n/* Here maybe use the result */\nPy_DECREF(result);\n1.7. Extracting Parameters in Extension Functions\u00b6\nThe PyArg_ParseTuple()\nfunction is declared as follows:\nint PyArg_ParseTuple(PyObject *arg, const char *format, ...);\nThe arg argument must be a tuple object containing an argument list passed from Python to a C function. The format argument must be a format string, whose syntax is explained in Parsing arguments and building values in the Python/C API Reference Manual. The remaining arguments must be addresses of variables whose type is determined by the format string.\nNote that while PyArg_ParseTuple()\nchecks that the Python arguments have\nthe required types, it cannot check the validity of the addresses of C variables\npassed to the call: if you make mistakes there, your code will probably crash or\nat least overwrite random bits in memory. 
So be careful!\nNote that any Python object references which are provided to the caller are borrowed references; do not decrement their reference count!\nSome example calls:\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\nint ok;\nint i, j;\nlong k, l;\nconst char *s;\nPy_ssize_t size;\nok = PyArg_ParseTuple(args, \"\"); /* No arguments */\n/* Python call: f() */\nok = PyArg_ParseTuple(args, \"s\", &s); /* A string */\n/* Possible Python call: f('whoops!') */\nok = PyArg_ParseTuple(args, \"lls\", &k, &l, &s); /* Two longs and a string */\n/* Possible Python call: f(1, 2, 'three') */\nok = PyArg_ParseTuple(args, \"(ii)s#\", &i, &j, &s, &size);\n/* A pair of ints and a string, whose size is also returned */\n/* Possible Python call: f((1, 2), 'three') */\n{\nconst char *file;\nconst char *mode = \"r\";\nint bufsize = 0;\nok = PyArg_ParseTuple(args, \"s|si\", &file, &mode, &bufsize);\n/* A string, and optionally another string and an integer */\n/* Possible Python calls:\nf('spam')\nf('spam', 'w')\nf('spam', 'wb', 100000) */\n}\n{\nint left, top, right, bottom, h, v;\nok = PyArg_ParseTuple(args, \"((ii)(ii))(ii)\",\n&left, &top, &right, &bottom, &h, &v);\n/* A rectangle and a point */\n/* Possible Python call:\nf(((0, 0), (400, 300)), (10, 10)) */\n}\n{\nPy_complex c;\nok = PyArg_ParseTuple(args, \"D:myfunction\", &c);\n/* a complex, also providing a function name for errors */\n/* Possible Python call: myfunction(1+2j) */\n}\n1.8. Keyword Parameters for Extension Functions\u00b6\nThe PyArg_ParseTupleAndKeywords()\nfunction is declared as follows:\nint PyArg_ParseTupleAndKeywords(PyObject *arg, PyObject *kwdict,\nconst char *format, char * const *kwlist, ...);\nThe arg and format parameters are identical to those of the\nPyArg_ParseTuple()\nfunction. The kwdict parameter is the dictionary of\nkeywords received as the third parameter from the Python runtime.
The kwlist\nparameter is a NULL\n-terminated list of strings which identify the parameters;\nthe names are matched with the type information from format from left to\nright. On success, PyArg_ParseTupleAndKeywords()\nreturns true, otherwise\nit returns false and raises an appropriate exception.\nNote\nNested tuples cannot be parsed when using keyword arguments! Keyword parameters\npassed in which are not present in the kwlist will cause TypeError\nto\nbe raised.\nHere is an example module which uses keywords, based on an example by Geoff Philbrick (philbrick@hks.com):\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\nstatic PyObject *\nkeywdarg_parrot(PyObject *self, PyObject *args, PyObject *keywds)\n{\nint voltage;\nconst char *state = \"a stiff\";\nconst char *action = \"voom\";\nconst char *type = \"Norwegian Blue\";\nstatic char *kwlist[] = {\"voltage\", \"state\", \"action\", \"type\", NULL};\nif (!PyArg_ParseTupleAndKeywords(args, keywds, \"i|sss\", kwlist,\n&voltage, &state, &action, &type))\nreturn NULL;\nprintf(\"-- This parrot wouldn't %s if you put %i Volts through it.\\n\",\naction, voltage);\nprintf(\"-- Lovely plumage, the %s -- It's %s!\\n\", type, state);\nPy_RETURN_NONE;\n}\nstatic PyMethodDef keywdarg_methods[] = {\n/* The cast of the function is necessary since PyCFunction values\n* only take two PyObject* parameters, and keywdarg_parrot() takes\n* three.\n*/\n{\"parrot\", (PyCFunction)(void(*)(void))keywdarg_parrot, METH_VARARGS | METH_KEYWORDS,\n\"Print a lovely skit to standard output.\"},\n{NULL, NULL, 0, NULL} /* sentinel */\n};\nstatic struct PyModuleDef keywdarg_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"keywdarg\",\n.m_size = 0,\n.m_methods = keywdarg_methods,\n};\nPyMODINIT_FUNC\nPyInit_keywdarg(void)\n{\nreturn PyModuleDef_Init(&keywdarg_module);\n}\n1.9. Building Arbitrary Values\u00b6\nThis function is the counterpart to PyArg_ParseTuple()\n.
It is declared\nas follows:\nPyObject *Py_BuildValue(const char *format, ...);\nIt recognizes a set of format units similar to the ones recognized by\nPyArg_ParseTuple()\n, but the arguments (which are input to the function,\nnot output) must not be pointers, just values. It returns a new Python object,\nsuitable for returning from a C function called from Python.\nOne difference with PyArg_ParseTuple()\n: while the latter requires its\nfirst argument to be a tuple (since Python argument lists are always represented\nas tuples internally), Py_BuildValue()\ndoes not always build a tuple. It\nbuilds a tuple only if its format string contains two or more format units. If\nthe format string is empty, it returns None\n; if it contains exactly one\nformat unit, it returns whatever object is described by that format unit. To\nforce it to return a tuple of size 0 or one, parenthesize the format string.\nExamples (to the left the call, to the right the resulting Python value):\nPy_BuildValue(\"\") None\nPy_BuildValue(\"i\", 123) 123\nPy_BuildValue(\"iii\", 123, 456, 789) (123, 456, 789)\nPy_BuildValue(\"s\", \"hello\") 'hello'\nPy_BuildValue(\"y\", \"hello\") b'hello'\nPy_BuildValue(\"ss\", \"hello\", \"world\") ('hello', 'world')\nPy_BuildValue(\"s#\", \"hello\", 4) 'hell'\nPy_BuildValue(\"y#\", \"hello\", 4) b'hell'\nPy_BuildValue(\"()\") ()\nPy_BuildValue(\"(i)\", 123) (123,)\nPy_BuildValue(\"(ii)\", 123, 456) (123, 456)\nPy_BuildValue(\"(i,i)\", 123, 456) (123, 456)\nPy_BuildValue(\"[i,i]\", 123, 456) [123, 456]\nPy_BuildValue(\"{s:i,s:i}\",\n\"abc\", 123, \"def\", 456) {'abc': 123, 'def': 456}\nPy_BuildValue(\"((ii)(ii)) (ii)\",\n1, 2, 3, 4, 5, 6) (((1, 2), (3, 4)), (5, 6))\n1.10. Reference Counts\u00b6\nIn languages like C or C++, the programmer is responsible for dynamic allocation\nand deallocation of memory on the heap. In C, this is done using the functions\nmalloc()\nand free()\n. 
In C++, the operators new\nand\ndelete\nare used with essentially the same meaning and we\u2019ll restrict\nthe following discussion to the C case.\nEvery block of memory allocated with malloc()\nshould eventually be\nreturned to the pool of available memory by exactly one call to free()\n.\nIt is important to call free()\nat the right time. If a block\u2019s address\nis forgotten but free()\nis not called for it, the memory it occupies\ncannot be reused until the program terminates. This is called a memory\nleak. On the other hand, if a program calls free()\nfor a block and then\ncontinues to use the block, it creates a conflict with reuse of the block\nthrough another malloc()\ncall. This is called using freed memory.\nIt has the same bad consequences as referencing uninitialized data \u2014 core\ndumps, wrong results, mysterious crashes.\nCommon causes of memory leaks are unusual paths through the code. For instance, a function may allocate a block of memory, do some calculation, and then free the block again. Now a change in the requirements for the function may add a test to the calculation that detects an error condition and can return prematurely from the function. It\u2019s easy to forget to free the allocated memory block when taking this premature exit, especially when it is added later to the code. Such leaks, once introduced, often go undetected for a long time: the error exit is taken only in a small fraction of all calls, and most modern machines have plenty of virtual memory, so the leak only becomes apparent in a long-running process that uses the leaking function frequently. Therefore, it\u2019s important to prevent leaks from happening by having a coding convention or strategy that minimizes this kind of errors.\nSince Python makes heavy use of malloc()\nand free()\n, it needs a\nstrategy to avoid memory leaks as well as the use of freed memory. The chosen\nmethod is called reference counting. 
The principle is simple: every\nobject contains a counter, which is incremented when a reference to the object\nis stored somewhere, and which is decremented when a reference to it is deleted.\nWhen the counter reaches zero, the last reference to the object has been deleted\nand the object is freed.\nAn alternative strategy is called automatic garbage collection.\n(Sometimes, reference counting is also referred to as a garbage collection\nstrategy, hence my use of \u201cautomatic\u201d to distinguish the two.) The big\nadvantage of automatic garbage collection is that the user doesn\u2019t need to call\nfree()\nexplicitly. (Another claimed advantage is an improvement in speed\nor memory usage \u2014 this is no hard fact however.) The disadvantage is that for\nC, there is no truly portable automatic garbage collector, while reference\ncounting can be implemented portably (as long as the functions malloc()\nand free()\nare available \u2014 which the C Standard guarantees). Maybe some\nday a sufficiently portable automatic garbage collector will be available for C.\nUntil then, we\u2019ll have to live with reference counts.\nWhile Python uses the traditional reference counting implementation, it also offers a cycle detector that works to detect reference cycles. This allows applications to not worry about creating direct or indirect circular references; these are the weakness of garbage collection implemented using only reference counting. Reference cycles consist of objects which contain (possibly indirect) references to themselves, so that each object in the cycle has a reference count which is non-zero. 
Typical reference counting implementations are not able to reclaim the memory belonging to any objects in a reference cycle, or referenced from the objects in the cycle, even though there are no further references to the cycle itself.\nThe cycle detector is able to detect garbage cycles and can reclaim them.\nThe gc\nmodule exposes a way to run the detector (the\ncollect()\nfunction), as well as configuration\ninterfaces and the ability to disable the detector at runtime.\n1.10.1. Reference Counting in Python\u00b6\nThere are two macros, Py_INCREF(x)\nand Py_DECREF(x)\n, which handle the\nincrementing and decrementing of the reference count. Py_DECREF()\nalso\nfrees the object when the count reaches zero. For flexibility, it doesn\u2019t call\nfree()\ndirectly \u2014 rather, it makes a call through a function pointer in\nthe object\u2019s type object. For this purpose (and others), every object\nalso contains a pointer to its type object.\nThe big question now remains: when to use Py_INCREF(x)\nand Py_DECREF(x)\n?\nLet\u2019s first introduce some terms. Nobody \u201cowns\u201d an object; however, you can\nown a reference to an object. An object\u2019s reference count is now defined\nas the number of owned references to it. The owner of a reference is\nresponsible for calling Py_DECREF()\nwhen the reference is no longer\nneeded. Ownership of a reference can be transferred. There are three ways to\ndispose of an owned reference: pass it on, store it, or call Py_DECREF()\n.\nForgetting to dispose of an owned reference creates a memory leak.\nIt is also possible to borrow [2] a reference to an object. The\nborrower of a reference should not call Py_DECREF()\n. 
The borrower must\nnot hold on to the object longer than the owner from which it was borrowed.\nUsing a borrowed reference after the owner has disposed of it risks using freed\nmemory and should be avoided completely [3].\nThe advantage of borrowing over owning a reference is that you don\u2019t need to take care of disposing of the reference on all possible paths through the code \u2014 in other words, with a borrowed reference you don\u2019t run the risk of leaking when a premature exit is taken. The disadvantage of borrowing over owning is that there are some subtle situations where in seemingly correct code a borrowed reference can be used after the owner from which it was borrowed has in fact disposed of it.\nA borrowed reference can be changed into an owned reference by calling\nPy_INCREF()\n. This does not affect the status of the owner from which the\nreference was borrowed \u2014 it creates a new owned reference, and gives full\nowner responsibilities (the new owner must dispose of the reference properly, as\nwell as the previous owner).\n1.10.2. Ownership Rules\u00b6\nWhenever an object reference is passed into or out of a function, it is part of the function\u2019s interface specification whether ownership is transferred with the reference or not.\nMost functions that return a reference to an object pass on ownership with the\nreference. In particular, all functions whose function it is to create a new\nobject, such as PyLong_FromLong()\nand Py_BuildValue()\n, pass\nownership to the receiver. Even if the object is not actually new, you still\nreceive ownership of a new reference to that object. For instance,\nPyLong_FromLong()\nmaintains a cache of popular values and can return a\nreference to a cached item.\nMany functions that extract objects from other objects also transfer ownership\nwith the reference, for instance PyObject_GetAttrString()\n. 
The picture\nis less clear, here, however, since a few common routines are exceptions:\nPyTuple_GetItem()\n, PyList_GetItem()\n, PyDict_GetItem()\n, and\nPyDict_GetItemString()\nall return references that you borrow from the\ntuple, list or dictionary.\nThe function PyImport_AddModule()\nalso returns a borrowed reference, even\nthough it may actually create the object it returns: this is possible because an\nowned reference to the object is stored in sys.modules\n.\nWhen you pass an object reference into another function, in general, the\nfunction borrows the reference from you \u2014 if it needs to store it, it will use\nPy_INCREF()\nto become an independent owner. There are exactly two\nimportant exceptions to this rule: PyTuple_SetItem()\nand\nPyList_SetItem()\n. These functions take over ownership of the item passed\nto them \u2014 even if they fail! (Note that PyDict_SetItem()\nand friends\ndon\u2019t take over ownership \u2014 they are \u201cnormal.\u201d)\nWhen a C function is called from Python, it borrows references to its arguments\nfrom the caller. The caller owns a reference to the object, so the borrowed\nreference\u2019s lifetime is guaranteed until the function returns. Only when such a\nborrowed reference must be stored or passed on, it must be turned into an owned\nreference by calling Py_INCREF()\n.\nThe object reference returned from a C function that is called from Python must be an owned reference \u2014 ownership is transferred from the function to its caller.\n1.10.3. Thin Ice\u00b6\nThere are a few situations where seemingly harmless use of a borrowed reference can lead to problems. These all have to do with implicit invocations of the interpreter, which can cause the owner of a reference to dispose of it.\nThe first and most important case to know about is using Py_DECREF()\non\nan unrelated object while borrowing a reference to a list item. 
For instance:\nvoid\nbug(PyObject *list)\n{\nPyObject *item = PyList_GetItem(list, 0);\nPyList_SetItem(list, 1, PyLong_FromLong(0L));\nPyObject_Print(item, stdout, 0); /* BUG! */\n}\nThis function first borrows a reference to list[0]\n, then replaces\nlist[1]\nwith the value 0\n, and finally prints the borrowed reference.\nLooks harmless, right? But it\u2019s not!\nLet\u2019s follow the control flow into PyList_SetItem()\n. The list owns\nreferences to all its items, so when item 1 is replaced, it has to dispose of\nthe original item 1. Now let\u2019s suppose the original item 1 was an instance of a\nuser-defined class, and let\u2019s further suppose that the class defined a\n__del__()\nmethod. If this class instance has a reference count of 1,\ndisposing of it will call its __del__()\nmethod. Internally,\nPyList_SetItem()\ncalls Py_DECREF()\non the replaced item,\nwhich invokes replaced item\u2019s corresponding\ntp_dealloc\nfunction. During\ndeallocation, tp_dealloc\ncalls\ntp_finalize\n, which is mapped to the\n__del__()\nmethod for class instances (see PEP 442). This entire\nsequence happens synchronously within the PyList_SetItem()\ncall.\nSince it is written in Python, the __del__()\nmethod can execute arbitrary\nPython code. Could it perhaps do something to invalidate the reference to\nitem\nin bug()\n? You bet! Assuming that the list passed into\nbug()\nis accessible to the __del__()\nmethod, it could execute a\nstatement to the effect of del list[0]\n, and assuming this was the last\nreference to that object, it would free the memory associated with it, thereby\ninvalidating item\n.\nThe solution, once you know the source of the problem, is easy: temporarily increment the reference count. The correct version of the function reads:\nvoid\nno_bug(PyObject *list)\n{\nPyObject *item = PyList_GetItem(list, 0);\nPy_INCREF(item);\nPyList_SetItem(list, 1, PyLong_FromLong(0L));\nPyObject_Print(item, stdout, 0);\nPy_DECREF(item);\n}\nThis is a true story. 
An older version of Python contained variants of this bug\nand someone spent a considerable amount of time in a C debugger to figure out\nwhy his __del__()\nmethods would fail\u2026\nThe second case of problems with a borrowed reference is a variant involving\nthreads. Normally, multiple threads in the Python interpreter can\u2019t get in each\nother\u2019s way, because there is a global lock\nprotecting Python\u2019s entire object space.\nHowever, it is possible to temporarily release this lock using the macro\nPy_BEGIN_ALLOW_THREADS\n, and to re-acquire it using\nPy_END_ALLOW_THREADS\n. This is common around blocking I/O calls, to\nlet other threads use the processor while waiting for the I/O to complete.\nObviously, the following function has the same problem as the previous one:\nvoid\nbug(PyObject *list)\n{\nPyObject *item = PyList_GetItem(list, 0);\nPy_BEGIN_ALLOW_THREADS\n...some blocking I/O call...\nPy_END_ALLOW_THREADS\nPyObject_Print(item, stdout, 0); /* BUG! */\n}\n1.10.4. NULL Pointers\u00b6\nIn general, functions that take object references as arguments do not expect you\nto pass them NULL\npointers, and will dump core (or cause later core dumps) if\nyou do so. Functions that return object references generally return NULL\nonly\nto indicate that an exception occurred. 
The reason for not testing for NULL\narguments is that functions often pass the objects they receive on to other\nfunctions \u2014 if each function were to test for NULL\n, there would be a lot of\nredundant tests and the code would run more slowly.\nIt is better to test for NULL\nonly at the \u201csource:\u201d when a pointer that may be\nNULL\nis received, for example, from malloc()\nor from a function that\nmay raise an exception.\nThe macros Py_INCREF()\nand Py_DECREF()\ndo not check for NULL\npointers \u2014 however, their variants Py_XINCREF()\nand Py_XDECREF()\ndo.\nThe macros for checking for a particular object type (Pytype_Check()\n) don\u2019t\ncheck for NULL\npointers \u2014 again, there is much code that calls several of\nthese in a row to test an object against various different expected types, and\nthis would generate redundant tests. There are no variants with NULL\nchecking.\nThe C function calling mechanism guarantees that the argument list passed to C\nfunctions (args\nin the examples) is never NULL\n\u2014 in fact it guarantees\nthat it is always a tuple [4].\nIt is a severe error to ever let a NULL\npointer \u201cescape\u201d to the Python user.\n1.11. Writing Extensions in C++\u00b6\nIt is possible to write extension modules in C++. Some restrictions apply. If\nthe main program (the Python interpreter) is compiled and linked by the C\ncompiler, global or static objects with constructors cannot be used. This is\nnot a problem if the main program is linked by the C++ compiler. Functions that\nwill be called by the Python interpreter (in particular, module initialization\nfunctions) have to be declared using extern \"C\"\n. It is unnecessary to\nenclose the Python header files in extern \"C\" {...}\n\u2014 they use this form\nalready if the symbol __cplusplus\nis defined (all recent C++ compilers\ndefine this symbol).\n1.12. 
Providing a C API for an Extension Module\u00b6\nMany extension modules just provide new functions and types to be used from Python, but sometimes the code in an extension module can be useful for other extension modules. For example, an extension module could implement a type \u201ccollection\u201d which works like lists without order. Just like the standard Python list type has a C API which permits extension modules to create and manipulate lists, this new collection type should have a set of C functions for direct manipulation from other extension modules.\nAt first sight this seems easy: just write the functions (without declaring them\nstatic\n, of course), provide an appropriate header file, and document\nthe C API. And in fact this would work if all extension modules were always\nlinked statically with the Python interpreter. When modules are used as shared\nlibraries, however, the symbols defined in one module may not be visible to\nanother module. The details of visibility depend on the operating system; some\nsystems use one global namespace for the Python interpreter and all extension\nmodules (Windows, for example), whereas others require an explicit list of\nimported symbols at module link time (AIX is one example), or offer a choice of\ndifferent strategies (most Unices). And even if symbols are globally visible,\nthe module whose functions one wishes to call might not have been loaded yet!\nPortability therefore requires not to make any assumptions about symbol\nvisibility. This means that all symbols in extension modules should be declared\nstatic\n, except for the module\u2019s initialization function, in order to\navoid name clashes with other extension modules (as discussed in section\nThe Module\u2019s Method Table and Initialization Function). 
And it means that symbols that should be accessible from\nother extension modules must be exported in a different way.\nPython provides a special mechanism to pass C-level information (pointers) from one extension module to another one: Capsules. A Capsule is a Python data type which stores a pointer (void*). Capsules can only be created and accessed via their C API, but they can be passed around like any other Python object. In particular, they can be assigned to a name in an extension module\u2019s namespace. Other extension modules can then import this module, retrieve the value of this name, and then retrieve the pointer from the Capsule.\nThere are many ways in which Capsules can be used to export the C API of an extension module. Each function could get its own Capsule, or all C API pointers could be stored in an array whose address is published in a Capsule. And the various tasks of storing and retrieving the pointers can be distributed in different ways between the module providing the code and the client modules.\nWhichever method you choose, it\u2019s important to name your Capsules properly.\nThe function PyCapsule_New()\ntakes a name parameter\n(const char*); you\u2019re permitted to pass in a NULL\nname, but\nwe strongly encourage you to specify a name. Properly named Capsules provide\na degree of runtime type-safety; there is no feasible way to tell one unnamed\nCapsule from another.\nIn particular, Capsules used to expose C APIs should be given a name following this convention:\nmodulename.attributename\nThe convenience function PyCapsule_Import()\nmakes it easy to\nload a C API provided via a Capsule, but only if the Capsule\u2019s name\nmatches this convention. This behavior gives C API users a high degree\nof certainty that the Capsule they load contains the correct C API.\nThe following example demonstrates an approach that puts most of the burden on the writer of the exporting module, which is appropriate for commonly used library modules. 
It stores all C API pointers (just one in the example!) in an array of void pointers which becomes the value of a Capsule. The header file corresponding to the module provides a macro that takes care of importing the module and retrieving its C API pointers; client modules only have to call this macro before accessing the C API.\nThe exporting module is a modification of the spam\nmodule from section\nA Simple Example. The function spam.system()\ndoes not call\nthe C library function system()\ndirectly, but a function\nPySpam_System()\n, which would of course do something more complicated in\nreality (such as adding \u201cspam\u201d to every command). This function\nPySpam_System()\nis also exported to other extension modules.\nThe function PySpam_System()\nis a plain C function, declared\nstatic\nlike everything else:\nstatic int\nPySpam_System(const char *command)\n{\nreturn system(command);\n}\nThe function spam_system()\nis modified in a trivial way:\nstatic PyObject *\nspam_system(PyObject *self, PyObject *args)\n{\nconst char *command;\nint sts;\nif (!PyArg_ParseTuple(args, \"s\", &command))\nreturn NULL;\nsts = PySpam_System(command);\nreturn PyLong_FromLong(sts);\n}\nIn the beginning of the module, right after the line\n#include <Python.h>\ntwo more lines must be added:\n#define SPAM_MODULE\n#include \"spammodule.h\"\nThe #define\nis used to tell the header file that it is being included in the\nexporting module, not a client module. 
Finally, the module\u2019s mod_exec\nfunction must take care of initializing the C API pointer array:\nstatic int\nspam_module_exec(PyObject *m)\n{\nstatic void *PySpam_API[PySpam_API_pointers];\nPyObject *c_api_object;\n/* Initialize the C API pointer array */\nPySpam_API[PySpam_System_NUM] = (void *)PySpam_System;\n/* Create a Capsule containing the API pointer array's address */\nc_api_object = PyCapsule_New((void *)PySpam_API, \"spam._C_API\", NULL);\nif (PyModule_Add(m, \"_C_API\", c_api_object) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nNote that PySpam_API\nis declared static\n; otherwise the pointer\narray would disappear when PyInit_spam()\nterminates!\nThe bulk of the work is in the header file spammodule.h\n, which looks\nlike this:\n#ifndef Py_SPAMMODULE_H\n#define Py_SPAMMODULE_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n/* Header file for spammodule */\n/* C API functions */\n#define PySpam_System_NUM 0\n#define PySpam_System_RETURN int\n#define PySpam_System_PROTO (const char *command)\n/* Total number of C API pointers */\n#define PySpam_API_pointers 1\n#ifdef SPAM_MODULE\n/* This section is used when compiling spammodule.c */\nstatic PySpam_System_RETURN PySpam_System PySpam_System_PROTO;\n#else\n/* This section is used in modules that use spammodule's API */\nstatic void **PySpam_API;\n#define PySpam_System \\\n(*(PySpam_System_RETURN (*)PySpam_System_PROTO) PySpam_API[PySpam_System_NUM])\n/* Return -1 on error, 0 on success.\n* PyCapsule_Import will set an exception if there's an error.\n*/\nstatic int\nimport_spam(void)\n{\nPySpam_API = (void **)PyCapsule_Import(\"spam._C_API\", 0);\nreturn (PySpam_API != NULL) ? 
0 : -1;\n}\n#endif\n#ifdef __cplusplus\n}\n#endif\n#endif /* !defined(Py_SPAMMODULE_H) */\nAll that a client module must do in order to have access to the function\nPySpam_System()\nis to call the function (or rather macro)\nimport_spam()\nin its mod_exec\nfunction:\nstatic int\nclient_module_exec(PyObject *m)\n{\nif (import_spam() < 0) {\nreturn -1;\n}\n/* additional initialization can happen here */\nreturn 0;\n}\nThe main disadvantage of this approach is that the file spammodule.h\nis\nrather complicated. However, the basic structure is the same for each function\nthat is exported, so it has to be learned only once.\nFinally it should be mentioned that Capsules offer additional functionality,\nwhich is especially useful for memory allocation and deallocation of the pointer\nstored in a Capsule. The details are described in the Python/C API Reference\nManual in the section Capsules and in the implementation of Capsules (files\nInclude/pycapsule.h\nand Objects/pycapsule.c\nin the Python source\ncode distribution).\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 13144} +{"url": "https://docs.python.org/3/distributing/index.html", "title": "Distributing Python Modules", "content": "Distributing Python Modules\u00b6\nNote\nInformation and guidance on distributing Python modules and packages has been moved to the Python Packaging User Guide, and the tutorial on packaging Python projects.\nNote\nInformation and guidance on distributing Python modules and packages has been moved to the Python Packaging User Guide, and the tutorial on packaging Python projects.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 93} +{"url": "https://docs.python.org/3/c-api/unicode.html", "title": "Unicode Objects and Codecs", "content": "Unicode Objects and Codecs\u00b6\nUnicode Objects\u00b6\nSince the implementation of PEP 393 in Python 3.3, Unicode objects internally use a variety of representations, in 
order to allow handling the complete range of Unicode characters while staying memory efficient. There are special cases for strings where all code points are below 128, 256, or 65536; otherwise, code points must be below 1114112 (which is the full Unicode range).\nUTF-8 representation is created on demand and cached in the Unicode object.\nNote\nThe Py_UNICODE\nrepresentation has been removed since Python 3.12\nwith deprecated APIs.\nSee PEP 623 for more information.\nUnicode Type\u00b6\nThese are the basic Unicode object types used for the Unicode implementation in Python:\n-\nPyTypeObject PyUnicode_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python Unicode type. It is exposed to Python code asstr\n.\n-\nPyTypeObject PyUnicodeIter_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python Unicode iterator type. It is used to iterate over Unicode string objects.\n-\ntype Py_UCS4\u00b6\n-\ntype Py_UCS2\u00b6\n-\ntype Py_UCS1\u00b6\n- Part of the Stable ABI.\nThese types are typedefs for unsigned integer types wide enough to contain characters of 32 bits, 16 bits and 8 bits, respectively. When dealing with single Unicode characters, use\nPy_UCS4\n.Added in version 3.3.\n-\ntype PyASCIIObject\u00b6\n-\ntype PyCompactUnicodeObject\u00b6\n-\ntype PyUnicodeObject\u00b6\nThese subtypes of\nPyObject\nrepresent a Python Unicode object. In almost all cases, they shouldn\u2019t be used directly, since all API functions that deal with Unicode objects take and returnPyObject\npointers.Added in version 3.3.\nThe structure of a particular object can be determined using the following macros. 
The macros cannot fail; their behavior is undefined if their argument is not a Python Unicode object.\n-\nPyUnicode_IS_COMPACT(o)\u00b6\nTrue if o uses the\nPyCompactUnicodeObject\nstructure.Added in version 3.3.\n-\nPyUnicode_IS_COMPACT_ASCII(o)\u00b6\nTrue if o uses the\nPyASCIIObject\nstructure.Added in version 3.3.\n-\nPyUnicode_IS_COMPACT(o)\u00b6\nThe following APIs are C macros and static inlined functions for fast checks and access to internal read-only data of Unicode objects:\n-\nint PyUnicode_Check(PyObject *obj)\u00b6\nReturn true if the object obj is a Unicode object or an instance of a Unicode subtype. This function always succeeds.\n-\nint PyUnicode_CheckExact(PyObject *obj)\u00b6\nReturn true if the object obj is a Unicode object, but not an instance of a subtype. This function always succeeds.\n-\nPy_ssize_t PyUnicode_GET_LENGTH(PyObject *unicode)\u00b6\nReturn the length of the Unicode string, in code points. unicode has to be a Unicode object in the \u201ccanonical\u201d representation (not checked).\nAdded in version 3.3.\n-\nPy_UCS1 *PyUnicode_1BYTE_DATA(PyObject *unicode)\u00b6\n-\nPy_UCS2 *PyUnicode_2BYTE_DATA(PyObject *unicode)\u00b6\n-\nPy_UCS4 *PyUnicode_4BYTE_DATA(PyObject *unicode)\u00b6\nReturn a pointer to the canonical representation cast to UCS1, UCS2 or UCS4 integer types for direct character access. No checks are performed if the canonical representation has the correct character size; use\nPyUnicode_KIND()\nto select the right function.Added in version 3.3.\n-\nPyUnicode_1BYTE_KIND\u00b6\n-\nPyUnicode_2BYTE_KIND\u00b6\n-\nPyUnicode_4BYTE_KIND\u00b6\nReturn values of the\nPyUnicode_KIND()\nmacro.Added in version 3.3.\nChanged in version 3.12:\nPyUnicode_WCHAR_KIND\nhas been removed.\n-\nint PyUnicode_KIND(PyObject *unicode)\u00b6\nReturn one of the PyUnicode kind constants (see above) that indicate how many bytes per character this Unicode object uses to store its data. 
unicode has to be a Unicode object in the \u201ccanonical\u201d representation (not checked).\nAdded in version 3.3.\n-\nvoid *PyUnicode_DATA(PyObject *unicode)\u00b6\nReturn a void pointer to the raw Unicode buffer. unicode has to be a Unicode object in the \u201ccanonical\u201d representation (not checked).\nAdded in version 3.3.\n-\nvoid PyUnicode_WRITE(int kind, void *data, Py_ssize_t index, Py_UCS4 value)\u00b6\nWrite the code point value to the given zero-based index in a string.\nThe kind value and data pointer must have been obtained from a string using\nPyUnicode_KIND()\nandPyUnicode_DATA()\nrespectively. You must hold a reference to that string while callingPyUnicode_WRITE()\n. All requirements ofPyUnicode_WriteChar()\nalso apply.The function performs no checks for any of its requirements, and is intended for usage in loops.\nAdded in version 3.3.\n-\nPy_UCS4 PyUnicode_READ(int kind, void *data, Py_ssize_t index)\u00b6\nRead a code point from a canonical representation data (as obtained with\nPyUnicode_DATA()\n). No checks or ready calls are performed.Added in version 3.3.\n-\nPy_UCS4 PyUnicode_READ_CHAR(PyObject *unicode, Py_ssize_t index)\u00b6\nRead a character from a Unicode object unicode, which must be in the \u201ccanonical\u201d representation. This is less efficient than\nPyUnicode_READ()\nif you do multiple consecutive reads.Added in version 3.3.\n-\nPy_UCS4 PyUnicode_MAX_CHAR_VALUE(PyObject *unicode)\u00b6\nReturn the maximum code point that is suitable for creating another string based on unicode, which must be in the \u201ccanonical\u201d representation. This is always an approximation but more efficient than iterating over the string.\nAdded in version 3.3.\n-\nint PyUnicode_IsIdentifier(PyObject *unicode)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif the string is a valid identifier according to the language definition, section Names (identifiers and keywords). 
Return0\notherwise.Changed in version 3.9: The function does not call\nPy_FatalError()\nanymore if the string is not ready.\n-\nunsigned int PyUnicode_IS_ASCII(PyObject *unicode)\u00b6\nReturn true if the string only contains ASCII characters. Equivalent to\nstr.isascii()\n.Added in version 3.2.\nUnicode Character Properties\u00b6\nUnicode provides many different character properties. The most often needed ones are available through these macros which are mapped to C functions depending on the Python configuration.\n-\nint Py_UNICODE_ISSPACE(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is a whitespace character.\n-\nint Py_UNICODE_ISUPPER(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is an uppercase character.\n-\nint Py_UNICODE_ISLINEBREAK(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is a linebreak character.\n-\nint Py_UNICODE_ISALPHA(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is an alphabetic character.\n-\nint Py_UNICODE_ISALNUM(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is an alphanumeric character.\n-\nint Py_UNICODE_ISPRINTABLE(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is a printable character, in the sense ofstr.isprintable()\n.\nThese APIs can be used for fast direct character conversions:\n-\nint Py_UNICODE_TODECIMAL(Py_UCS4 ch)\u00b6\nReturn the character ch converted to a decimal positive integer. Return\n-1\nif this is not possible. This function does not raise exceptions.\n-\nint Py_UNICODE_TODIGIT(Py_UCS4 ch)\u00b6\nReturn the character ch converted to a single digit integer. Return\n-1\nif this is not possible. This function does not raise exceptions.\n-\ndouble Py_UNICODE_TONUMERIC(Py_UCS4 ch)\u00b6\nReturn the character ch converted to a double. Return\n-1.0\nif this is not possible. 
This function does not raise exceptions.\nThese APIs can be used to work with surrogates:\n-\nint Py_UNICODE_IS_HIGH_SURROGATE(Py_UCS4 ch)\u00b6\nCheck if ch is a high surrogate (\n0xD800 <= ch <= 0xDBFF\n).\n-\nint Py_UNICODE_IS_LOW_SURROGATE(Py_UCS4 ch)\u00b6\nCheck if ch is a low surrogate (\n0xDC00 <= ch <= 0xDFFF\n).\n-\nPy_UCS4 Py_UNICODE_HIGH_SURROGATE(Py_UCS4 ch)\u00b6\nReturn the high UTF-16 surrogate (\n0xD800\nto0xDBFF\n) for a Unicode code point in the range[0x10000; 0x10FFFF]\n.\n-\nPy_UCS4 Py_UNICODE_LOW_SURROGATE(Py_UCS4 ch)\u00b6\nReturn the low UTF-16 surrogate (\n0xDC00\nto0xDFFF\n) for a Unicode code point in the range[0x10000; 0x10FFFF]\n.\n-\nPy_UCS4 Py_UNICODE_JOIN_SURROGATES(Py_UCS4 high, Py_UCS4 low)\u00b6\nJoin two surrogate code points and return a single\nPy_UCS4\nvalue. high and low are respectively the leading and trailing surrogates in a surrogate pair. high must be in the range[0xD800; 0xDBFF]\nand low must be in the range[0xDC00; 0xDFFF]\n.\nCreating and accessing Unicode strings\u00b6\nTo create Unicode objects and access their basic sequence properties, use these APIs:\n-\nPyObject *PyUnicode_New(Py_ssize_t size, Py_UCS4 maxchar)\u00b6\n- Return value: New reference.\nCreate a new Unicode object. maxchar should be the true maximum code point to be placed in the string. As an approximation, it can be rounded up to the nearest value in the sequence 127, 255, 65535, 1114111.\nOn error, set an exception and return\nNULL\n.After creation, the string can be filled by\nPyUnicode_WriteChar()\n,PyUnicode_CopyCharacters()\n,PyUnicode_Fill()\n,PyUnicode_WRITE()\nor similar. Since strings are supposed to be immutable, take care to not \u201cuse\u201d the result while it is being modified. 
In particular, before it\u2019s filled with its final contents, a string:\nmust not be hashed,\nmust not be converted to UTF-8, or another non-\u201ccanonical\u201d representation,\nmust not have its reference count changed,\nmust not be shared with code that might do one of the above.\nThis list is not exhaustive. Avoiding these uses is your responsibility; Python does not always check these requirements.\nTo avoid accidentally exposing a partially-written string object, prefer using the PyUnicodeWriter API, or one of the PyUnicode_From* functions below.\nAdded in version 3.3.\n-\nPyObject *PyUnicode_FromKindAndData(int kind, const void *buffer, Py_ssize_t size)\u00b6\n- Return value: New reference.\nCreate a new Unicode object with the given kind (possible values are PyUnicode_1BYTE_KIND etc., as returned by PyUnicode_KIND()). The buffer must point to an array of size units of 1, 2 or 4 bytes per character, as given by the kind.\nIf necessary, the input buffer is copied and transformed into the canonical representation. For example, if the buffer is a UCS4 string (PyUnicode_4BYTE_KIND) and it consists only of code points in the UCS1 range, it will be transformed into UCS1 (PyUnicode_1BYTE_KIND).\nAdded in version 3.3.\n-\nPyObject *PyUnicode_FromStringAndSize(const char *str, Py_ssize_t size)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode object from the char buffer str. The bytes will be interpreted as being UTF-8 encoded. The buffer is copied into the new object. The return value might be a shared object, i.e. modification of the data is not allowed.\nThis function raises SystemError when:\nsize < 0,\nstr is NULL and size > 0\nChanged in version 3.12: str == NULL with size > 0 is not allowed anymore.\n-\nPyObject *PyUnicode_FromString(const char *str)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nCreate a Unicode object from a UTF-8 encoded null-terminated char buffer str.\n-\nPyObject *PyUnicode_FromFormat(const char *format, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nTake a C\nprintf()\n-style format string and a variable number of arguments, calculate the size of the resulting Python Unicode string and return a string with the values formatted into it. The variable arguments must be C types and must correspond exactly to the format characters in the format ASCII-encoded string.A conversion specifier contains two or more characters and has the following components, which must occur in this order:\nThe\n'%'\ncharacter, which marks the start of the specifier.Conversion flags (optional), which affect the result of some conversion types.\nMinimum field width (optional). If specified as an\n'*'\n(asterisk), the actual width is given in the next argument, which must be of type int, and the object to convert comes after the minimum field width and optional precision.Precision (optional), given as a\n'.'\n(dot) followed by the precision. 
If specified as '*' (an asterisk), the actual precision is given in the next argument, which must be of type int, and the value to convert comes after the precision.\nLength modifier (optional).\nConversion type.\nThe conversion flag characters are:\nFlag\nMeaning\n0\nThe conversion will be zero padded for numeric values.\n-\nThe converted value is left adjusted (overrides the 0 flag if both are given).\nThe length modifiers for following integer conversions (d, i, o, u, x, or X) specify the type of the argument (int by default):\nModifier\nTypes\nl\nlong or unsigned long\nll\nlong long or unsigned long long\nj\nintmax_t or uintmax_t\nz\nsize_t or ssize_t\nt\nptrdiff_t\nThe length modifier l for following conversions s or V specifies that the type of the argument is const wchar_t*.\nThe conversion specifiers are:\nConversion Specifier\nType\nComment\n%\nn/a\nThe literal % character.\nd, i\nSpecified by the length modifier\nThe decimal representation of a signed C integer.\nu\nSpecified by the length modifier\nThe decimal representation of an unsigned C integer.\no\nSpecified by the length modifier\nThe octal representation of an unsigned C integer.\nx\nSpecified by the length modifier\nThe hexadecimal representation of an unsigned C integer (lowercase).\nX\nSpecified by the length modifier\nThe hexadecimal representation of an unsigned C integer (uppercase).\nc\nint\nA single character.\ns\nconst char* or const wchar_t*\nA null-terminated C character array.\np\nconst void*\nThe hex representation of a C pointer. 
Mostly equivalent to printf("%p") except that it is guaranteed to start with the literal 0x regardless of what the platform\u2019s printf yields.\nA\nThe result of calling ascii().\nU\nA Unicode object.\nV\nPyObject*, const char* or const wchar_t*\nA Unicode object (which may be NULL) and a null-terminated C character array as a second parameter (which will be used, if the first parameter is NULL).\nS\nThe result of calling PyObject_Str().\nR\nThe result of calling PyObject_Repr().\nT\nGet the fully qualified name of an object type; call PyType_GetFullyQualifiedName().\n#T\nSimilar to the T format, but use a colon (:) as separator between the module name and the qualified name.\nN\nGet the fully qualified name of a type; call PyType_GetFullyQualifiedName().\n#N\nSimilar to the N format, but use a colon (:) as separator between the module name and the qualified name.\nNote\nThe width formatter unit is number of characters rather than bytes. The precision formatter unit is number of bytes or wchar_t items (if the length modifier l is used) for "%s" and "%V" (if the PyObject* argument is NULL), and a number of characters for "%A", "%U", "%S", "%R" and "%V" (if the PyObject* argument is not NULL).\nNote\nUnlike C printf(), the 0 flag has effect even when a precision is given for integer conversions (d, i, u, o, x, or X).\nChanged in version 3.2: Support for "%lld" and "%llu" added.\nChanged in version 3.3: Support for "%li", "%lli" and "%zi" added.\nChanged in version 3.4: Support width and precision formatter for "%s", "%A", "%U", "%V", "%S", "%R" added.\nChanged in version 3.12: Support for conversion specifiers o and X. Support for length modifiers j and t. Length modifiers are now applied to all integer conversions. Length modifier l is now applied to conversion specifiers s and V. Support for variable width and precision *. 
Support for flag-\n.An unrecognized format character now sets a\nSystemError\n. In previous versions it caused all the rest of the format string to be copied as-is to the result string, and any extra arguments discarded.Changed in version 3.13: Support for\n%T\n,%#T\n,%N\nand%#N\nformats added.\n-\nPyObject *PyUnicode_FromFormatV(const char *format, va_list vargs)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nIdentical to\nPyUnicode_FromFormat()\nexcept that it takes exactly two arguments.\n-\nPyObject *PyUnicode_FromObject(PyObject *obj)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCopy an instance of a Unicode subtype to a new true Unicode object if necessary. If obj is already a true Unicode object (not a subtype), return a new strong reference to the object.\nObjects other than Unicode or its subtypes will cause a\nTypeError\n.\n-\nPyObject *PyUnicode_FromOrdinal(int ordinal)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode Object from the given Unicode code point ordinal.\nThe ordinal must be in\nrange(0x110000)\n. AValueError\nis raised in the case it is not.\n-\nPyObject *PyUnicode_FromEncodedObject(PyObject *obj, const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nDecode an encoded object obj to a Unicode object.\nbytes\n,bytearray\nand other bytes-like objects are decoded according to the given encoding and using the error handling defined by errors. Both can beNULL\nto have the interface use the default values (see Built-in Codecs for details).All other objects, including Unicode objects, cause a\nTypeError\nto be set.The API returns\nNULL\nif there was an error. The caller is responsible for decref\u2019ing the returned objects.\n-\nvoid PyUnicode_Append(PyObject **p_left, PyObject *right)\u00b6\n- Part of the Stable ABI.\nAppend the string right to the end of p_left. 
p_left must point to a strong reference to a Unicode object; PyUnicode_Append() releases (\u201csteals\u201d) this reference.\nOn error, set *p_left to NULL and set an exception.\nOn success, set *p_left to a new strong reference to the result.\n-\nvoid PyUnicode_AppendAndDel(PyObject **p_left, PyObject *right)\u00b6\n- Part of the Stable ABI.\nThe function is similar to PyUnicode_Append(), with the only difference being that it decrements the reference count of right by one.\n-\nPyObject *PyUnicode_BuildEncodingMap(PyObject *string)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a mapping suitable for decoding a custom single-byte encoding. Given a Unicode string string of up to 256 characters representing an encoding table, returns either a compact internal mapping object or a dictionary mapping character ordinals to byte values. Raises a TypeError and returns NULL on invalid input.\nAdded in version 3.2.\n-\nconst char *PyUnicode_GetDefaultEncoding(void)\u00b6\n- Part of the Stable ABI.\nReturn the name of the default string encoding, "utf-8". See sys.getdefaultencoding().\nThe returned string does not need to be freed, and is valid until interpreter shutdown.\n-\nPy_ssize_t PyUnicode_GetLength(PyObject *unicode)\u00b6\n- Part of the Stable ABI since version 3.7.\nReturn the length of the Unicode object, in code points.\nOn error, set an exception and return -1.\nAdded in version 3.3.\n-\nPy_ssize_t PyUnicode_CopyCharacters(PyObject *to, Py_ssize_t to_start, PyObject *from, Py_ssize_t from_start, Py_ssize_t how_many)\u00b6\nCopy characters from one Unicode object into another. This function performs character conversion when necessary and falls back to memcpy() if possible. Returns -1 and sets an exception on error, otherwise returns the number of copied characters.\nThe string must not have been \u201cused\u201d yet. 
See PyUnicode_New() for details.\nAdded in version 3.3.\n-\nint PyUnicode_Resize(PyObject **unicode, Py_ssize_t length)\u00b6\n- Part of the Stable ABI.\nResize a Unicode object *unicode to the new length in code points.\nTry to resize the string in place (which is usually faster than allocating a new string and copying characters), or create a new string.\n*unicode is modified to point to the new (resized) object and 0 is returned on success. Otherwise, -1 is returned and an exception is set, and *unicode is left untouched.\nThe function doesn\u2019t check string content; the result may not be a string in canonical representation.\n-\nPy_ssize_t PyUnicode_Fill(PyObject *unicode, Py_ssize_t start, Py_ssize_t length, Py_UCS4 fill_char)\u00b6\nFill a string with a character: write fill_char into unicode[start:start+length].\nFail if fill_char is bigger than the string maximum character, or if the string has more than 1 reference.\nThe string must not have been \u201cused\u201d yet. See PyUnicode_New() for details.\nReturn the number of written characters, or return -1 and raise an exception on error.\nAdded in version 3.3.\n-\nint PyUnicode_WriteChar(PyObject *unicode, Py_ssize_t index, Py_UCS4 character)\u00b6\n- Part of the Stable ABI since version 3.7.\nWrite a character to the string unicode at the zero-based index. Return 0 on success, -1 on error with an exception set.\nThis function checks that unicode is a Unicode object, that the index is not out of bounds, and that the object\u2019s reference count is one. See PyUnicode_WRITE() for a version that skips these checks, making them your responsibility.\nThe string must not have been \u201cused\u201d yet. See PyUnicode_New() for details.\nAdded in version 3.3.\n-\nPy_UCS4 PyUnicode_ReadChar(PyObject *unicode, Py_ssize_t index)\u00b6\n- Part of the Stable ABI since version 3.7.\nRead a character from a string. 
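Because PyUnicode_ReadChar() is part of the stable ABI, it can be exercised from Python itself through ctypes.pythonapi, without writing a C extension. This is a CPython-specific sketch (not part of this reference; error handling omitted):

```python
import ctypes

# Py_UCS4 PyUnicode_ReadChar(PyObject *unicode, Py_ssize_t index)
read_char = ctypes.pythonapi.PyUnicode_ReadChar
read_char.argtypes = [ctypes.py_object, ctypes.c_ssize_t]
read_char.restype = ctypes.c_uint32  # Py_UCS4 is a 32-bit code point

s = "caf\u00e9"
# The checked read returns a code point, not a 1-character string.
assert read_char(s, 0) == ord("c")
assert read_char(s, 3) == 0xE9  # U+00E9, LATIN SMALL LETTER E WITH ACUTE
```

ctypes.pythonapi is a PyDLL, so the GIL stays held across each call, which is what these C API functions require.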
This function checks that unicode is a Unicode object and the index is not out of bounds, in contrast to\nPyUnicode_READ_CHAR()\n, which performs no error checking.Return character on success,\n-1\non error with an exception set.Added in version 3.3.\n-\nPyObject *PyUnicode_Substring(PyObject *unicode, Py_ssize_t start, Py_ssize_t end)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nReturn a substring of unicode, from character index start (included) to character index end (excluded). Negative indices are not supported. On error, set an exception and return\nNULL\n.Added in version 3.3.\n-\nPy_UCS4 *PyUnicode_AsUCS4(PyObject *unicode, Py_UCS4 *buffer, Py_ssize_t buflen, int copy_null)\u00b6\n- Part of the Stable ABI since version 3.7.\nCopy the string unicode into a UCS4 buffer, including a null character, if copy_null is set. Returns\nNULL\nand sets an exception on error (in particular, aSystemError\nif buflen is smaller than the length of unicode). buffer is returned on success.Added in version 3.3.\n-\nPy_UCS4 *PyUnicode_AsUCS4Copy(PyObject *unicode)\u00b6\n- Part of the Stable ABI since version 3.7.\nCopy the string unicode into a new UCS4 buffer that is allocated using\nPyMem_Malloc()\n. If this fails,NULL\nis returned with aMemoryError\nset. The returned buffer always has an extra null code point appended.Added in version 3.3.\nLocale Encoding\u00b6\nThe current locale encoding can be used to decode text from the operating system.\n-\nPyObject *PyUnicode_DecodeLocaleAndSize(const char *str, Py_ssize_t length, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nDecode a string from UTF-8 on Android and VxWorks, or from the current locale encoding on other platforms. The supported error handlers are\n\"strict\"\nand\"surrogateescape\"\n(PEP 383). The decoder uses\"strict\"\nerror handler if errors isNULL\n. 
str must end with a null character but cannot contain embedded null characters.Use\nPyUnicode_DecodeFSDefaultAndSize()\nto decode a string from the filesystem encoding and error handler.This function ignores the Python UTF-8 Mode.\nSee also\nThe\nPy_DecodeLocale()\nfunction.Added in version 3.3.\nChanged in version 3.7: The function now also uses the current locale encoding for the\nsurrogateescape\nerror handler, except on Android. Previously,Py_DecodeLocale()\nwas used for thesurrogateescape\n, and the current locale encoding was used forstrict\n.\n-\nPyObject *PyUnicode_DecodeLocale(const char *str, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nSimilar to\nPyUnicode_DecodeLocaleAndSize()\n, but compute the string length usingstrlen()\n.Added in version 3.3.\n-\nPyObject *PyUnicode_EncodeLocale(PyObject *unicode, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nEncode a Unicode object to UTF-8 on Android and VxWorks, or to the current locale encoding on other platforms. The supported error handlers are\n\"strict\"\nand\"surrogateescape\"\n(PEP 383). The encoder uses\"strict\"\nerror handler if errors isNULL\n. Return abytes\nobject. unicode cannot contain embedded null characters.Use\nPyUnicode_EncodeFSDefault()\nto encode a string to the filesystem encoding and error handler.This function ignores the Python UTF-8 Mode.\nSee also\nThe\nPy_EncodeLocale()\nfunction.Added in version 3.3.\nChanged in version 3.7: The function now also uses the current locale encoding for the\nsurrogateescape\nerror handler, except on Android. 
Previously,Py_EncodeLocale()\nwas used for thesurrogateescape\n, and the current locale encoding was used forstrict\n.\nFile System Encoding\u00b6\nFunctions encoding to and decoding from the filesystem encoding and error handler (PEP 383 and PEP 529).\nTo encode file names to bytes\nduring argument parsing, the \"O&\"\nconverter should be used, passing PyUnicode_FSConverter()\nas the\nconversion function:\n-\nint PyUnicode_FSConverter(PyObject *obj, void *result)\u00b6\n- Part of the Stable ABI.\nPyArg_Parse* converter: encode\nstr\nobjects \u2013 obtained directly or through theos.PathLike\ninterface \u2013 tobytes\nusingPyUnicode_EncodeFSDefault()\n;bytes\nobjects are output as-is. result must be an address of a C variable of type PyObject* (or PyBytesObject*). On success, set the variable to a new strong reference to a bytes object which must be released when it is no longer used and return a non-zero value (Py_CLEANUP_SUPPORTED\n). Embedded null bytes are not allowed in the result. On failure, return0\nwith an exception set.If obj is\nNULL\n, the function releases a strong reference stored in the variable referred by result and returns1\n.Added in version 3.1.\nChanged in version 3.6: Accepts a path-like object.\nTo decode file names to str\nduring argument parsing, the \"O&\"\nconverter should be used, passing PyUnicode_FSDecoder()\nas the\nconversion function:\n-\nint PyUnicode_FSDecoder(PyObject *obj, void *result)\u00b6\n- Part of the Stable ABI.\nPyArg_Parse* converter: decode\nbytes\nobjects \u2013 obtained either directly or indirectly through theos.PathLike\ninterface \u2013 tostr\nusingPyUnicode_DecodeFSDefaultAndSize()\n;str\nobjects are output as-is. result must be an address of a C variable of type PyObject* (or PyUnicodeObject*). On success, set the variable to a new strong reference to a Unicode object which must be released when it is no longer used and return a non-zero value (Py_CLEANUP_SUPPORTED\n). 
Embedded null characters are not allowed in the result. On failure, return0\nwith an exception set.If obj is\nNULL\n, release the strong reference to the object referred to by result and return1\n.Added in version 3.2.\nChanged in version 3.6: Accepts a path-like object.\n-\nPyObject *PyUnicode_DecodeFSDefaultAndSize(const char *str, Py_ssize_t size)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nDecode a string from the filesystem encoding and error handler.\nIf you need to decode a string from the current locale encoding, use\nPyUnicode_DecodeLocaleAndSize()\n.See also\nThe\nPy_DecodeLocale()\nfunction.Changed in version 3.6: The filesystem error handler is now used.\n-\nPyObject *PyUnicode_DecodeFSDefault(const char *str)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nDecode a null-terminated string from the filesystem encoding and error handler.\nIf the string length is known, use\nPyUnicode_DecodeFSDefaultAndSize()\n.Changed in version 3.6: The filesystem error handler is now used.\n-\nPyObject *PyUnicode_EncodeFSDefault(PyObject *unicode)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEncode a Unicode object to the filesystem encoding and error handler, and return\nbytes\n. Note that the resultingbytes\nobject can contain null bytes.If you need to encode a string to the current locale encoding, use\nPyUnicode_EncodeLocale()\n.See also\nThe\nPy_EncodeLocale()\nfunction.Added in version 3.2.\nChanged in version 3.6: The filesystem error handler is now used.\nwchar_t Support\u00b6\nwchar_t\nsupport for platforms which support it:\n-\nPyObject *PyUnicode_FromWideChar(const wchar_t *wstr, Py_ssize_t size)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode object from the\nwchar_t\nbuffer wstr of the given size. Passing-1\nas the size indicates that the function must itself compute the length, usingwcslen()\n. 
Return NULL on failure.\n-\nPy_ssize_t PyUnicode_AsWideChar(PyObject *unicode, wchar_t *wstr, Py_ssize_t size)\u00b6\n- Part of the Stable ABI.\nCopy the Unicode object contents into the wchar_t buffer wstr. At most size wchar_t characters are copied (excluding a possibly trailing null termination character). Return the number of wchar_t characters copied or -1 in case of an error.\nWhen wstr is NULL, instead return the size that would be required to store all of unicode including a terminating null.\nNote that the resulting wchar_t* string may or may not be null-terminated. It is the responsibility of the caller to make sure that the wchar_t* string is null-terminated in case this is required by the application. Also, note that the wchar_t* string might contain null characters, which would cause the string to be truncated when used with most C functions.\n-\nwchar_t *PyUnicode_AsWideCharString(PyObject *unicode, Py_ssize_t *size)\u00b6\n- Part of the Stable ABI since version 3.7.\nConvert the Unicode object to a wide character string. The output string always ends with a null character. If size is not NULL, write the number of wide characters (excluding the trailing null termination character) into *size. Note that the resulting wchar_t string might contain null characters, which would cause the string to be truncated when used with most C functions. If size is NULL and the wchar_t* string contains null characters, a ValueError is raised.\nReturns a buffer allocated by PyMem_New (use PyMem_Free() to free it) on success. On error, returns NULL and *size is undefined. Raises a MemoryError if memory allocation fails.\nAdded in version 3.2.\nChanged in version 3.7: Raises a ValueError if size is NULL and the wchar_t* string contains null characters.\nBuilt-in Codecs\u00b6\nPython provides a set of built-in codecs which are written in C for speed. 
All of these codecs are directly usable via the following functions.\nMany of the following APIs take two arguments encoding and errors, and they\nhave the same semantics as the ones of the built-in str()\nstring object\nconstructor.\nSetting encoding to NULL\ncauses the default encoding to be used\nwhich is UTF-8. The file system calls should use\nPyUnicode_FSConverter()\nfor encoding file names. This uses the\nfilesystem encoding and error handler internally.\nError handling is set by errors which may also be set to NULL\nmeaning to use\nthe default handling defined for the codec. Default error handling for all\nbuilt-in codecs is \u201cstrict\u201d (ValueError\nis raised).\nThe codecs all use a similar interface. Only deviations from the following generic ones are documented for simplicity.\nGeneric Codecs\u00b6\nThe following macro is provided:\n-\nPy_UNICODE_REPLACEMENT_CHARACTER\u00b6\nThe Unicode code point\nU+FFFD\n(replacement character).This Unicode character is used as the replacement character during decoding if the errors argument is set to \u201creplace\u201d.\nThese are the generic codec APIs:\n-\nPyObject *PyUnicode_Decode(const char *str, Py_ssize_t size, const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode object by decoding size bytes of the encoded string str. encoding and errors have the same meaning as the parameters of the same name in the\nstr()\nbuilt-in function. The codec to be used is looked up using the Python codec registry. ReturnNULL\nif an exception was raised by the codec.\n-\nPyObject *PyUnicode_AsEncodedString(PyObject *unicode, const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEncode a Unicode object and return the result as Python bytes object. encoding and errors have the same meaning as the parameters of the same name in the Unicode\nencode()\nmethod. 
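Both generic entry points are exported in the stable ABI, so they can be tried out from Python via ctypes.pythonapi. A CPython-specific sketch (error handling omitted; ctypes propagates a pending exception when such a call returns NULL):

```python
import ctypes

# PyObject *PyUnicode_Decode(const char *str, Py_ssize_t size,
#                            const char *encoding, const char *errors)
decode = ctypes.pythonapi.PyUnicode_Decode
decode.argtypes = [ctypes.c_char_p, ctypes.c_ssize_t,
                   ctypes.c_char_p, ctypes.c_char_p]
decode.restype = ctypes.py_object  # new reference, owned by ctypes

raw = "\u00e9t\u00e9".encode("utf-8")  # b'\xc3\xa9t\xc3\xa9'
assert decode(raw, len(raw), b"utf-8", b"strict") == "\u00e9t\u00e9"

# PyObject *PyUnicode_AsEncodedString(PyObject *unicode,
#                                     const char *encoding, const char *errors)
encode = ctypes.pythonapi.PyUnicode_AsEncodedString
encode.argtypes = [ctypes.py_object, ctypes.c_char_p, ctypes.c_char_p]
encode.restype = ctypes.py_object

assert encode("\u00e9t\u00e9", b"latin-1", b"strict") == b"\xe9t\xe9"
```

The encoding and errors arguments are plain C strings here, matching the NULL-able const char* parameters described above.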
The codec to be used is looked up using the Python codec registry. ReturnNULL\nif an exception was raised by the codec.\nUTF-8 Codecs\u00b6\nThese are the UTF-8 codec APIs:\n-\nPyObject *PyUnicode_DecodeUTF8(const char *str, Py_ssize_t size, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode object by decoding size bytes of the UTF-8 encoded string str. Return\nNULL\nif an exception was raised by the codec.\n-\nPyObject *PyUnicode_DecodeUTF8Stateful(const char *str, Py_ssize_t size, const char *errors, Py_ssize_t *consumed)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nIf consumed is\nNULL\n, behave likePyUnicode_DecodeUTF8()\n. If consumed is notNULL\n, trailing incomplete UTF-8 byte sequences will not be treated as an error. Those bytes will not be decoded and the number of bytes that have been decoded will be stored in consumed.\n-\nPyObject *PyUnicode_AsUTF8String(PyObject *unicode)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEncode a Unicode object using UTF-8 and return the result as Python bytes object. Error handling is \u201cstrict\u201d. Return\nNULL\nif an exception was raised by the codec.The function fails if the string contains surrogate code points (\nU+D800\n-U+DFFF\n).\n-\nconst char *PyUnicode_AsUTF8AndSize(PyObject *unicode, Py_ssize_t *size)\u00b6\n- Part of the Stable ABI since version 3.10.\nReturn a pointer to the UTF-8 encoding of the Unicode object, and store the size of the encoded representation (in bytes) in size. The size argument can be\nNULL\n; in this case no size will be stored. 
The returned buffer always has an extra null byte appended (not included in size), regardless of whether there are any other null code points.\nOn error, set an exception, set size to -1 (if it\u2019s not NULL) and return NULL.\nThe function fails if the string contains surrogate code points (U+D800 - U+DFFF).\nThis caches the UTF-8 representation of the string in the Unicode object, and subsequent calls will return a pointer to the same buffer. The caller is not responsible for deallocating the buffer. The buffer is deallocated and pointers to it become invalid when the Unicode object is garbage collected.\nAdded in version 3.3.\nChanged in version 3.7: The return type is now const char * rather than char *.\nChanged in version 3.10: This function is a part of the limited API.\n-\nconst char *PyUnicode_AsUTF8(PyObject *unicode)\u00b6\nAs PyUnicode_AsUTF8AndSize(), but does not store the size.\nWarning\nThis function does not have any special behavior for null characters embedded within unicode. As a result, strings containing null characters will remain in the returned string, which some C functions might interpret as the end of the string, leading to truncation. If truncation is an issue, it is recommended to use PyUnicode_AsUTF8AndSize() instead.\nAdded in version 3.3.\nChanged in version 3.7: The return type is now const char * rather than char *.\nUTF-32 Codecs\u00b6\nThese are the UTF-32 codec APIs:\n-\nPyObject *PyUnicode_DecodeUTF32(const char *str, Py_ssize_t size, const char *errors, int *byteorder)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nDecode size bytes from a UTF-32 encoded buffer string and return the corresponding Unicode object. errors (if non-NULL) defines the error handling. 
It defaults to \u201cstrict\u201d.\nIf byteorder is non-NULL, the decoder starts decoding using the given byte order:\n*byteorder == -1: little endian\n*byteorder == 0: native order\n*byteorder == 1: big endian\nIf *byteorder is zero, and the first four bytes of the input data are a byte order mark (BOM), the decoder switches to this byte order and the BOM is not copied into the resulting Unicode string. If *byteorder is -1 or 1, any byte order mark is copied to the output.\nAfter completion, *byteorder is set to the current byte order at the end of input data.\nIf byteorder is NULL, the codec starts in native order mode.\nReturn NULL if an exception was raised by the codec.\n-\nPyObject *PyUnicode_DecodeUTF32Stateful(const char *str, Py_ssize_t size, const char *errors, int *byteorder, Py_ssize_t *consumed)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nIf consumed is NULL, behave like PyUnicode_DecodeUTF32(). If consumed is not NULL, PyUnicode_DecodeUTF32Stateful() will not treat trailing incomplete UTF-32 byte sequences (such as a number of bytes not divisible by four) as an error. Those bytes will not be decoded and the number of bytes that have been decoded will be stored in consumed.\n-\nPyObject *PyUnicode_AsUTF32String(PyObject *unicode)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a Python byte string using the UTF-32 encoding in native byte order. The string always starts with a BOM mark. Error handling is \u201cstrict\u201d. Return NULL if an exception was raised by the codec.\nUTF-16 Codecs\u00b6\nThese are the UTF-16 codec APIs:\n-\nPyObject *PyUnicode_DecodeUTF16(const char *str, Py_ssize_t size, const char *errors, int *byteorder)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nDecode size bytes from a UTF-16 encoded buffer string and return the corresponding Unicode object. errors (if non-NULL) defines the error handling. 
It defaults to \u201cstrict\u201d.\nIf byteorder is non-NULL, the decoder starts decoding using the given byte order:\n*byteorder == -1: little endian\n*byteorder == 0: native order\n*byteorder == 1: big endian\nIf *byteorder is zero, and the first two bytes of the input data are a byte order mark (BOM), the decoder switches to this byte order and the BOM is not copied into the resulting Unicode string. If *byteorder is -1 or 1, any byte order mark is copied to the output (where it will result in either a \ufeff or a \ufffe character).\nAfter completion, *byteorder is set to the current byte order at the end of input data.\nIf byteorder is NULL, the codec starts in native order mode.\nReturn NULL if an exception was raised by the codec.\n-\nPyObject *PyUnicode_DecodeUTF16Stateful(const char *str, Py_ssize_t size, const char *errors, int *byteorder, Py_ssize_t *consumed)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nIf consumed is NULL, behave like PyUnicode_DecodeUTF16(). If consumed is not NULL, PyUnicode_DecodeUTF16Stateful() will not treat trailing incomplete UTF-16 byte sequences (such as an odd number of bytes or a split surrogate pair) as an error. Those bytes will not be decoded and the number of bytes that have been decoded will be stored in consumed.\n-\nPyObject *PyUnicode_AsUTF16String(PyObject *unicode)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a Python byte string using the UTF-16 encoding in native byte order. The string always starts with a BOM mark. Error handling is \u201cstrict\u201d. Return NULL if an exception was raised by the codec.\nUTF-7 Codecs\u00b6\nThese are the UTF-7 codec APIs:\n-\nPyObject *PyUnicode_DecodeUTF7(const char *str, Py_ssize_t size, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode object by decoding size bytes of the UTF-7 encoded string str. 
Return\nNULL\nif an exception was raised by the codec.\n-\nPyObject *PyUnicode_DecodeUTF7Stateful(const char *str, Py_ssize_t size, const char *errors, Py_ssize_t *consumed)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nIf consumed is\nNULL\n, behave likePyUnicode_DecodeUTF7()\n. If consumed is notNULL\n, trailing incomplete UTF-7 base-64 sections will not be treated as an error. Those bytes will not be decoded and the number of bytes that have been decoded will be stored in consumed.\nUnicode-Escape Codecs\u00b6\nThese are the \u201cUnicode Escape\u201d codec APIs:\n-\nPyObject *PyUnicode_DecodeUnicodeEscape(const char *str, Py_ssize_t size, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode object by decoding size bytes of the Unicode-Escape encoded string str. Return\nNULL\nif an exception was raised by the codec.\n-\nPyObject *PyUnicode_AsUnicodeEscapeString(PyObject *unicode)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEncode a Unicode object using Unicode-Escape and return the result as a bytes object. Error handling is \u201cstrict\u201d. Return\nNULL\nif an exception was raised by the codec.\nRaw-Unicode-Escape Codecs\u00b6\nThese are the \u201cRaw Unicode Escape\u201d codec APIs:\n-\nPyObject *PyUnicode_DecodeRawUnicodeEscape(const char *str, Py_ssize_t size, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode object by decoding size bytes of the Raw-Unicode-Escape encoded string str. Return\nNULL\nif an exception was raised by the codec.\n-\nPyObject *PyUnicode_AsRawUnicodeEscapeString(PyObject *unicode)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEncode a Unicode object using Raw-Unicode-Escape and return the result as a bytes object. Error handling is \u201cstrict\u201d. 
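The Unicode-Escape and Raw-Unicode-Escape codecs above are also reachable from Python, which makes their difference easy to check. A small sketch (the sample string is arbitrary):

```python
s = "caf\u00e9"                                  # "café"
escaped = s.encode("unicode_escape")
assert escaped == b"caf\\xe9"                    # non-ASCII is escaped
assert escaped.decode("unicode_escape") == s     # lossless round trip

# Raw-Unicode-Escape leaves Latin-1-range characters as raw bytes instead:
assert s.encode("raw_unicode_escape") == b"caf\xe9"
```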
Return\nNULL\nif an exception was raised by the codec.\nLatin-1 Codecs\u00b6\nThese are the Latin-1 codec APIs: Latin-1 corresponds to the first 256 Unicode ordinals and only these are accepted by the codecs during encoding.\n-\nPyObject *PyUnicode_DecodeLatin1(const char *str, Py_ssize_t size, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode object by decoding size bytes of the Latin-1 encoded string str. Return\nNULL\nif an exception was raised by the codec.\n-\nPyObject *PyUnicode_AsLatin1String(PyObject *unicode)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEncode a Unicode object using Latin-1 and return the result as Python bytes object. Error handling is \u201cstrict\u201d. Return\nNULL\nif an exception was raised by the codec.\nASCII Codecs\u00b6\nThese are the ASCII codec APIs. Only 7-bit ASCII data is accepted. All other codes generate errors.\n-\nPyObject *PyUnicode_DecodeASCII(const char *str, Py_ssize_t size, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode object by decoding size bytes of the ASCII encoded string str. Return\nNULL\nif an exception was raised by the codec.\n-\nPyObject *PyUnicode_AsASCIIString(PyObject *unicode)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEncode a Unicode object using ASCII and return the result as Python bytes object. Error handling is \u201cstrict\u201d. Return\nNULL\nif an exception was raised by the codec.\nCharacter Map Codecs\u00b6\nThis codec is special in that it can be used to implement many different codecs\n(and this is in fact what was done to obtain most of the standard codecs\nincluded in the encodings\npackage). The codec uses mappings to encode and\ndecode characters. 
The mapping objects provided must support the\n__getitem__()\nmapping interface; dictionaries and sequences work well.\nThese are the mapping codec APIs:\n-\nPyObject *PyUnicode_DecodeCharmap(const char *str, Py_ssize_t length, PyObject *mapping, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode object by decoding length bytes of the encoded string str using the given mapping object. Return\nNULL\nif an exception was raised by the codec. If mapping is\nNULL\n, Latin-1 decoding will be applied. Else mapping must map byte ordinals (integers in the range from 0 to 255) to Unicode strings, integers (which are then interpreted as Unicode ordinals) or\nNone\n. Unmapped data bytes \u2013 ones which cause a\nLookupError\n, as well as ones which get mapped to\nNone\n,\n0xFFFE\nor\n'\\ufffe'\n\u2013 are treated as undefined mappings and cause an error.\n-\nPyObject *PyUnicode_AsCharmapString(PyObject *unicode, PyObject *mapping)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEncode a Unicode object using the given mapping object and return the result as a bytes object. Error handling is \u201cstrict\u201d. Return\nNULL\nif an exception was raised by the codec. The mapping object must map Unicode ordinal integers to bytes objects, integers in the range from 0 to 255, or\nNone\n. Unmapped character ordinals (ones which cause a\nLookupError\n) as well as those mapped to\nNone\nare treated as \u201cundefined mapping\u201d and cause an error.\nThe following codec API is special in that it maps Unicode to Unicode.\n-\nPyObject *PyUnicode_Translate(PyObject *unicode, PyObject *table, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nTranslate a string by applying a character mapping table to it and return the resulting Unicode object. 
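The charmap decoding rules above (strings, integer ordinals, or undefined mappings) are exposed at the Python level through codecs.charmap_decode, which makes them easy to exercise. A small sketch with an arbitrary example mapping:

```python
import codecs

# Byte ordinal -> str or int (Unicode ordinal); a missing key is an
# "undefined mapping" and causes an error, as described above.
mapping = {0x41: "x", 0x42: 0x00E9}          # 'A' -> "x", 'B' -> "é"
text, consumed = codecs.charmap_decode(b"AB", "strict", mapping)
assert text == "x\u00e9" and consumed == 2

raised = False
try:
    codecs.charmap_decode(b"C", "strict", mapping)   # 0x43 is unmapped
except UnicodeDecodeError:
    raised = True
assert raised
```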
Return\nNULL\nif an exception was raised by the codec.The mapping table must map Unicode ordinal integers to Unicode ordinal integers or\nNone\n(causing deletion of the character).Mapping tables need only provide the\n__getitem__()\ninterface; dictionaries and sequences work well. Unmapped character ordinals (ones which cause aLookupError\n) are left untouched and are copied as-is.errors has the usual meaning for codecs. It may be\nNULL\nwhich indicates to use the default error handling.\nMBCS codecs for Windows\u00b6\nThese are the MBCS codec APIs. They are currently only available on Windows and use the Win32 MBCS converters to implement the conversions. Note that MBCS (or DBCS) is a class of encodings, not just one. The target encoding is defined by the user settings on the machine running the codec.\n-\nPyObject *PyUnicode_DecodeMBCS(const char *str, Py_ssize_t size, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI on Windows since version 3.7.\nCreate a Unicode object by decoding size bytes of the MBCS encoded string str. Return\nNULL\nif an exception was raised by the codec.\n-\nPyObject *PyUnicode_DecodeMBCSStateful(const char *str, Py_ssize_t size, const char *errors, Py_ssize_t *consumed)\u00b6\n- Return value: New reference. Part of the Stable ABI on Windows since version 3.7.\nIf consumed is\nNULL\n, behave likePyUnicode_DecodeMBCS()\n. If consumed is notNULL\n,PyUnicode_DecodeMBCSStateful()\nwill not decode trailing lead byte and the number of bytes that have been decoded will be stored in consumed.\n-\nPyObject *PyUnicode_DecodeCodePageStateful(int code_page, const char *str, Py_ssize_t size, const char *errors, Py_ssize_t *consumed)\u00b6\n- Return value: New reference. Part of the Stable ABI on Windows since version 3.7.\nSimilar to\nPyUnicode_DecodeMBCSStateful()\n, except uses the code page specified by code_page.\n-\nPyObject *PyUnicode_AsMBCSString(PyObject *unicode)\u00b6\n- Return value: New reference. 
Part of the Stable ABI on Windows since version 3.7.\nEncode a Unicode object using MBCS and return the result as Python bytes object. Error handling is \u201cstrict\u201d. Return\nNULL\nif an exception was raised by the codec.\n-\nPyObject *PyUnicode_EncodeCodePage(int code_page, PyObject *unicode, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI on Windows since version 3.7.\nEncode the Unicode object using the specified code page and return a Python bytes object. Return\nNULL\nif an exception was raised by the codec. UseCP_ACP\ncode page to get the MBCS encoder.Added in version 3.3.\nMethods and Slot Functions\u00b6\nThe following APIs are capable of handling Unicode objects and strings on input (we refer to them as strings in the descriptions) and return Unicode objects or integers as appropriate.\nThey all return NULL\nor -1\nif an exception occurs.\n-\nPyObject *PyUnicode_Concat(PyObject *left, PyObject *right)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nConcat two strings giving a new Unicode string.\n-\nPyObject *PyUnicode_Split(PyObject *unicode, PyObject *sep, Py_ssize_t maxsplit)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nSplit a string giving a list of Unicode strings. If sep is\nNULL\n, splitting will be done at all whitespace substrings. Otherwise, splits occur at the given separator. At most maxsplit splits will be done. If negative, no limit is set. Separators are not included in the resulting list.On error, return\nNULL\nwith an exception set.Equivalent to\nstr.split()\n.\n-\nPyObject *PyUnicode_RSplit(PyObject *unicode, PyObject *sep, Py_ssize_t maxsplit)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nSimilar to\nPyUnicode_Split()\n, but splitting will be done beginning at the end of the string. On error, return\nNULL\nwith an exception set. Equivalent to\nstr.rsplit()\n.\n-\nPyObject *PyUnicode_Splitlines(PyObject *unicode, int keepends)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nSplit a Unicode string at line breaks, returning a list of Unicode strings. CRLF is considered to be one line break. If keepends is\n0\n, the line break characters are not included in the resulting strings.\n-\nPyObject *PyUnicode_Partition(PyObject *unicode, PyObject *sep)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nSplit a Unicode string at the first occurrence of sep, and return a 3-tuple containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return a 3-tuple containing the string itself, followed by two empty strings.\nsep must not be empty.\nOn error, return\nNULL\nwith an exception set. Equivalent to\nstr.partition()\n.\n-\nPyObject *PyUnicode_RPartition(PyObject *unicode, PyObject *sep)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nSimilar to\nPyUnicode_Partition()\n, but split a Unicode string at the last occurrence of sep. If the separator is not found, return a 3-tuple containing two empty strings, followed by the string itself.\nsep must not be empty.\nOn error, return\nNULL\nwith an exception set. Equivalent to\nstr.rpartition()\n.\n-\nPyObject *PyUnicode_Join(PyObject *separator, PyObject *seq)\u00b6\n- Return value: New reference. 
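Since these functions are documented as equivalent to the corresponding str methods, their edge cases can be checked directly from Python:

```python
# PyUnicode_Splitlines: CRLF counts as a single line break.
assert "a\r\nb\rc".splitlines() == ["a", "b", "c"]
assert "a\r\nb".splitlines(keepends=True) == ["a\r\n", "b"]

# PyUnicode_Partition / PyUnicode_RPartition: always a 3-tuple.
assert "key=value=x".partition("=") == ("key", "=", "value=x")
assert "key=value=x".rpartition("=") == ("key=value", "=", "x")
assert "nosep".partition("=") == ("nosep", "", "")   # sep not found
```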
Part of the Stable ABI.\nJoin a sequence of strings using the given separator and return the resulting Unicode string.\n-\nPy_ssize_t PyUnicode_Tailmatch(PyObject *unicode, PyObject *substr, Py_ssize_t start, Py_ssize_t end, int direction)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif substr matchesunicode[start:end]\nat the given tail end (direction ==-1\nmeans to do a prefix match, direction ==1\na suffix match),0\notherwise. Return-1\nif an error occurred.\n-\nPy_ssize_t PyUnicode_Find(PyObject *unicode, PyObject *substr, Py_ssize_t start, Py_ssize_t end, int direction)\u00b6\n- Part of the Stable ABI.\nReturn the first position of substr in\nunicode[start:end]\nusing the given direction (direction ==1\nmeans to do a forward search, direction ==-1\na backward search). The return value is the index of the first match; a value of-1\nindicates that no match was found, and-2\nindicates that an error occurred and an exception has been set.\n-\nPy_ssize_t PyUnicode_FindChar(PyObject *unicode, Py_UCS4 ch, Py_ssize_t start, Py_ssize_t end, int direction)\u00b6\n- Part of the Stable ABI since version 3.7.\nReturn the first position of the character ch in\nunicode[start:end]\nusing the given direction (direction ==1\nmeans to do a forward search, direction ==-1\na backward search). The return value is the index of the first match; a value of-1\nindicates that no match was found, and-2\nindicates that an error occurred and an exception has been set.Added in version 3.3.\nChanged in version 3.7: start and end are now adjusted to behave like\nunicode[start:end]\n.\n-\nPy_ssize_t PyUnicode_Count(PyObject *unicode, PyObject *substr, Py_ssize_t start, Py_ssize_t end)\u00b6\n- Part of the Stable ABI.\nReturn the number of non-overlapping occurrences of substr in\nunicode[start:end]\n. Return-1\nif an error occurred.\n-\nPyObject *PyUnicode_Replace(PyObject *unicode, PyObject *substr, PyObject *replstr, Py_ssize_t maxcount)\u00b6\n- Return value: New reference. 
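The direction conventions for PyUnicode_Tailmatch and PyUnicode_Find map onto the familiar str methods (startswith/endswith and find/rfind), so their behavior can be sketched from Python:

```python
s = "abcabc"
assert s.find("bc") == 1            # PyUnicode_Find, direction == 1 (forward)
assert s.rfind("bc") == 4           # PyUnicode_Find, direction == -1 (backward)
assert s.find("zz") == -1           # no match

assert s.startswith("abc")          # PyUnicode_Tailmatch, direction == -1
assert s.endswith("abc")            # PyUnicode_Tailmatch, direction == 1

assert s.count("abc") == 2          # PyUnicode_Count: non-overlapping
assert "aaaa".count("aa") == 2      # positions 0 and 2 only
```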
Part of the Stable ABI.\nReplace at most maxcount occurrences of substr in unicode with replstr and return the resulting Unicode object. maxcount ==\n-1\nmeans replace all occurrences.\n-\nint PyUnicode_Compare(PyObject *left, PyObject *right)\u00b6\n- Part of the Stable ABI.\nCompare two strings and return\n-1\n,0\n,1\nfor less than, equal, and greater than, respectively.This function returns\n-1\nupon failure, so one should callPyErr_Occurred()\nto check for errors.See also\nThe\nPyUnicode_Equal()\nfunction.\n-\nint PyUnicode_Equal(PyObject *a, PyObject *b)\u00b6\n- Part of the Stable ABI since version 3.14.\nTest if two strings are equal:\nReturn\n1\nif a is equal to b.Return\n0\nif a is not equal to b.Set a\nTypeError\nexception and return-1\nif a or b is not astr\nobject.\nThe function always succeeds if a and b are\nstr\nobjects.The function works for\nstr\nsubclasses, but does not honor custom__eq__()\nmethod.See also\nThe\nPyUnicode_Compare()\nfunction.Added in version 3.14.\n-\nint PyUnicode_EqualToUTF8AndSize(PyObject *unicode, const char *string, Py_ssize_t size)\u00b6\n- Part of the Stable ABI since version 3.13.\nCompare a Unicode object with a char buffer which is interpreted as being UTF-8 or ASCII encoded and return true (\n1\n) if they are equal, or false (0\n) otherwise. If the Unicode object contains surrogate code points (U+D800\n-U+DFFF\n) or the C string is not valid UTF-8, false (0\n) is returned.This function does not raise exceptions.\nAdded in version 3.13.\n-\nint PyUnicode_EqualToUTF8(PyObject *unicode, const char *string)\u00b6\n- Part of the Stable ABI since version 3.13.\nSimilar to\nPyUnicode_EqualToUTF8AndSize()\n, but compute string length usingstrlen()\n. 
If the Unicode object contains null characters, false (0\n) is returned.Added in version 3.13.\n-\nint PyUnicode_CompareWithASCIIString(PyObject *unicode, const char *string)\u00b6\n- Part of the Stable ABI.\nCompare a Unicode object, unicode, with string and return\n-1\n,0\n,1\nfor less than, equal, and greater than, respectively. It is best to pass only ASCII-encoded strings, but the function interprets the input string as ISO-8859-1 if it contains non-ASCII characters.This function does not raise exceptions.\n-\nPyObject *PyUnicode_RichCompare(PyObject *left, PyObject *right, int op)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nRich compare two Unicode strings and return one of the following:\nNULL\nin case an exception was raisedPy_NotImplemented\nin case the type combination is unknown\nPossible values for op are\nPy_GT\n,Py_GE\n,Py_EQ\n,Py_NE\n,Py_LT\n, andPy_LE\n.\n-\nPyObject *PyUnicode_Format(PyObject *format, PyObject *args)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new string object from format and args; this is analogous to\nformat % args\n.\n-\nint PyUnicode_Contains(PyObject *unicode, PyObject *substr)\u00b6\n- Part of the Stable ABI.\nCheck whether substr is contained in unicode and return true or false accordingly.\nsubstr has to coerce to a one element Unicode string.\n-1\nis returned if there was an error.\n-\nvoid PyUnicode_InternInPlace(PyObject **p_unicode)\u00b6\n- Part of the Stable ABI.\nIntern the argument *p_unicode in place. The argument must be the address of a pointer variable pointing to a Python Unicode string object. 
If there is an existing interned string that is the same as *p_unicode, it sets *p_unicode to it (releasing the reference to the old string object and creating a new strong reference to the interned string object), otherwise it leaves *p_unicode alone and interns it.\n(Clarification: even though there is a lot of talk about references, think of this function as reference-neutral. You must own the object you pass in; after the call you no longer own the passed-in reference, but you newly own the result.)\nThis function never raises an exception. On error, it leaves its argument unchanged without interning it.\nInstances of subclasses of\nstr\nmay not be interned, that is, PyUnicode_CheckExact(*p_unicode) must be true. If it is not, then \u2013 as with any other error \u2013 the argument is left unchanged.Note that interned strings are not \u201cimmortal\u201d. You must keep a reference to the result to benefit from interning.\n-\nPyObject *PyUnicode_InternFromString(const char *str)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nA combination of\nPyUnicode_FromString()\nandPyUnicode_InternInPlace()\n, meant for statically allocated strings.Return a new (\u201cowned\u201d) reference to either a new Unicode string object that has been interned, or an earlier interned string object with the same value.\nPython may keep a reference to the result, or make it immortal, preventing it from being garbage-collected promptly. For interning an unbounded number of different strings, such as ones coming from user input, prefer calling\nPyUnicode_FromString()\nandPyUnicode_InternInPlace()\ndirectly.\n-\nunsigned int PyUnicode_CHECK_INTERNED(PyObject *str)\u00b6\nReturn a non-zero value if str is interned, zero if not. The str argument must be a string; this is not checked. This function always succeeds.\nCPython implementation detail: A non-zero return value may carry additional information about how the string is interned. 
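The Python-level counterpart of PyUnicode_InternInPlace/PyUnicode_InternFromString is sys.intern, which illustrates the "one shared object per value" behavior described above:

```python
import sys

a = "".join(["interned", "-example"])   # built at run time, not interned
b = "interned-example"

# After interning, both names refer to the single interned string object.
assert sys.intern(a) is sys.intern(b)
```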
The meaning of such non-zero values, as well as each specific string\u2019s intern-related details, may change between CPython versions.\nPyUnicodeWriter\u00b6\nThe PyUnicodeWriter\nAPI can be used to create a Python str\nobject.\nAdded in version 3.14.\n-\ntype PyUnicodeWriter\u00b6\nA Unicode writer instance.\nThe instance must be destroyed by\nPyUnicodeWriter_Finish()\non success, orPyUnicodeWriter_Discard()\non error.\n-\nPyUnicodeWriter *PyUnicodeWriter_Create(Py_ssize_t length)\u00b6\nCreate a Unicode writer instance.\nlength must be greater than or equal to\n0\n.If length is greater than\n0\n, preallocate an internal buffer of length characters.Set an exception and return\nNULL\non error.\n-\nPyObject *PyUnicodeWriter_Finish(PyUnicodeWriter *writer)\u00b6\nReturn the final Python\nstr\nobject and destroy the writer instance.Set an exception and return\nNULL\non error.The writer instance is invalid after this call.\n-\nvoid PyUnicodeWriter_Discard(PyUnicodeWriter *writer)\u00b6\nDiscard the internal Unicode buffer and destroy the writer instance.\nIf writer is\nNULL\n, no operation is performed.The writer instance is invalid after this call.\n-\nint PyUnicodeWriter_WriteChar(PyUnicodeWriter *writer, Py_UCS4 ch)\u00b6\nWrite the single Unicode character ch into writer.\nOn success, return\n0\n. On error, set an exception, leave the writer unchanged, and return-1\n.\n-\nint PyUnicodeWriter_WriteUTF8(PyUnicodeWriter *writer, const char *str, Py_ssize_t size)\u00b6\nDecode the string str from UTF-8 in strict mode and write the output into writer.\nsize is the string length in bytes. If size is equal to\n-1\n, callstrlen(str)\nto get the string length.On success, return\n0\n. 
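The Create -> Write* -> Finish workflow above can be sketched with a Python-level analogue; io.StringIO is only an illustration of the accumulation pattern, not the C implementation:

```python
import io

w = io.StringIO()              # ~ PyUnicodeWriter_Create(0)
w.write("x")                   # ~ PyUnicodeWriter_WriteChar
w.write(str(42))               # ~ PyUnicodeWriter_WriteStr
w.write(repr([1, 2]))          # ~ PyUnicodeWriter_WriteRepr
result = w.getvalue()          # ~ PyUnicodeWriter_Finish
assert result == "x42[1, 2]"
```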
On error, set an exception, leave the writer unchanged, and return\n-1\n. See also\nPyUnicodeWriter_DecodeUTF8Stateful()\n.\n-\nint PyUnicodeWriter_WriteASCII(PyUnicodeWriter *writer, const char *str, Py_ssize_t size)\u00b6\nWrite the ASCII string str into writer.\nsize is the string length in bytes. If size is equal to\n-1\n, call\nstrlen(str)\nto get the string length. str must only contain ASCII characters. The behavior is undefined if str contains non-ASCII characters.\nOn success, return\n0\n. On error, set an exception, leave the writer unchanged, and return\n-1\n. Added in version 3.14.\n-\nint PyUnicodeWriter_WriteWideChar(PyUnicodeWriter *writer, const wchar_t *str, Py_ssize_t size)\u00b6\nWrite the wide string str into writer.\nsize is the number of wide characters. If size is equal to\n-1\n, call\nwcslen(str)\nto get the string length. On success, return\n0\n. On error, set an exception, leave the writer unchanged, and return\n-1\n.\n-\nint PyUnicodeWriter_WriteUCS4(PyUnicodeWriter *writer, Py_UCS4 *str, Py_ssize_t size)\u00b6\nWrite the UCS4 string str into writer.\nsize is the number of UCS4 characters.\nOn success, return\n0\n. On error, set an exception, leave the writer unchanged, and return\n-1\n.\n-\nint PyUnicodeWriter_WriteStr(PyUnicodeWriter *writer, PyObject *obj)\u00b6\nCall\nPyObject_Str()\non obj and write the output into writer. On success, return\n0\n. On error, set an exception, leave the writer unchanged, and return\n-1\n.\n-\nint PyUnicodeWriter_WriteRepr(PyUnicodeWriter *writer, PyObject *obj)\u00b6\nCall\nPyObject_Repr()\non obj and write the output into writer. On success, return\n0\n. On error, set an exception, leave the writer unchanged, and return\n-1\n.\n-\nint PyUnicodeWriter_WriteSubstring(PyUnicodeWriter *writer, PyObject *str, Py_ssize_t start, Py_ssize_t end)\u00b6\nWrite the substring\nstr[start:end]\ninto writer. str must be a Python\nstr\nobject. start must be greater than or equal to 0, and less than or equal to end. 
end must be less than or equal to str length.On success, return\n0\n. On error, set an exception, leave the writer unchanged, and return-1\n.\n-\nint PyUnicodeWriter_Format(PyUnicodeWriter *writer, const char *format, ...)\u00b6\nSimilar to\nPyUnicode_FromFormat()\n, but write the output directly into writer.On success, return\n0\n. On error, set an exception, leave the writer unchanged, and return-1\n.\n-\nint PyUnicodeWriter_DecodeUTF8Stateful(PyUnicodeWriter *writer, const char *string, Py_ssize_t length, const char *errors, Py_ssize_t *consumed)\u00b6\nDecode the string str from UTF-8 with errors error handler and write the output into writer.\nsize is the string length in bytes. If size is equal to\n-1\n, callstrlen(str)\nto get the string length.errors is an error handler name, such as\n\"replace\"\n. If errors isNULL\n, use the strict error handler.If consumed is not\nNULL\n, set *consumed to the number of decoded bytes on success. If consumed isNULL\n, treat trailing incomplete UTF-8 byte sequences as an error.On success, return\n0\n. On error, set an exception, leave the writer unchanged, and return-1\n.See also\nPyUnicodeWriter_WriteUTF8()\n.\nDeprecated API\u00b6\nThe following API is deprecated.\n-\ntype Py_UNICODE\u00b6\nThis is a typedef of\nwchar_t\n, which is a 16-bit type or 32-bit type depending on the platform. Please usewchar_t\ndirectly instead.Changed in version 3.3: In previous versions, this was a 16-bit type or a 32-bit type depending on whether you selected a \u201cnarrow\u201d or \u201cwide\u201d Unicode version of Python at build time.\nDeprecated since version 3.13, will be removed in version 3.15.\n-\nint PyUnicode_READY(PyObject *unicode)\u00b6\nDo nothing and return\n0\n. This API is kept only for backward compatibility, but there are no plans to remove it.Added in version 3.3.\nDeprecated since version 3.10: This API does nothing since Python 3.12. 
Previously, this needed to be called for each string created using the old API (\nPyUnicode_FromUnicode()\nor similar).\n-\nunsigned int PyUnicode_IS_READY(PyObject *unicode)\u00b6\nDo nothing and return\n1\n. This API is kept only for backward compatibility, but there are no plans to remove it.Added in version 3.3.\nDeprecated since version 3.14: This API does nothing since Python 3.12. Previously, this could be called to check if\nPyUnicode_READY()\nis necessary.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 15326} +{"url": "https://docs.python.org/3/c-api/concrete.html", "title": "Concrete Objects Layer", "content": "Concrete Objects Layer\u00b6\nThe functions in this chapter are specific to certain Python object types.\nPassing them an object of the wrong type is not a good idea; if you receive an\nobject from a Python program and you are not sure that it has the right type,\nyou must perform a type check first; for example, to check that an object is a\ndictionary, use PyDict_Check()\n. The chapter is structured like the\n\u201cfamily tree\u201d of Python object types.\nWarning\nWhile the functions described in this chapter carefully check the type of the\nobjects which are passed in, many of them do not check for NULL\nbeing passed\ninstead of a valid object. 
Allowing NULL\nto be passed in can cause memory\naccess violations and immediate termination of the interpreter.\nFundamental Objects\u00b6\nThis section describes Python type objects and the singleton object None\n.\nNumeric Objects\u00b6\nSequence Objects\u00b6\nGeneric operations on sequence objects were discussed in the previous chapter; this section deals with the specific kinds of sequence objects that are intrinsic to the Python language.\nContainer Objects\u00b6\nFunction Objects\u00b6\nOther Objects\u00b6\n- File Objects\n- Module Objects\n- Module definitions\n- Creating extension modules dynamically\n- Support functions\n- Iterator Objects\n- Descriptor Objects\n- Slice Objects\n- MemoryView objects\n- Pickle buffer objects\n- Weak Reference Objects\n- Capsules\n- Frame Objects\n- Generator Objects\n- Coroutine Objects\n- Context Variables Objects\n- Objects for Type Hinting", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 368} +{"url": "https://docs.python.org/3/c-api/set.html", "title": "Set Objects", "content": "Set Objects\u00b6\nThis section details the public API for set\nand frozenset\nobjects. Any functionality not listed below is best accessed using either\nthe abstract object protocol (including PyObject_CallMethod()\n,\nPyObject_RichCompareBool()\n, PyObject_Hash()\n,\nPyObject_Repr()\n, PyObject_IsTrue()\n, PyObject_Print()\n, and\nPyObject_GetIter()\n) or the abstract number protocol (including\nPyNumber_And()\n, PyNumber_Subtract()\n, PyNumber_Or()\n,\nPyNumber_Xor()\n, PyNumber_InPlaceAnd()\n,\nPyNumber_InPlaceSubtract()\n, PyNumber_InPlaceOr()\n, and\nPyNumber_InPlaceXor()\n).\n-\ntype PySetObject\u00b6\nThis subtype of\nPyObject\nis used to hold the internal data for bothset\nandfrozenset\nobjects. 
It is like aPyDictObject\nin that it is a fixed size for small sets (much like tuple storage) and will point to a separate, variable sized block of memory for medium and large sized sets (much like list storage). None of the fields of this structure should be considered public and all are subject to change. All access should be done through the documented API rather than by manipulating the values in the structure.\n-\nPyTypeObject PySet_Type\u00b6\n- Part of the Stable ABI.\nThis is an instance of\nPyTypeObject\nrepresenting the Pythonset\ntype.\n-\nPyTypeObject PyFrozenSet_Type\u00b6\n- Part of the Stable ABI.\nThis is an instance of\nPyTypeObject\nrepresenting the Pythonfrozenset\ntype.\nThe following type check macros work on pointers to any Python object. Likewise, the constructor functions work with any iterable Python object.\n-\nint PySet_Check(PyObject *p)\u00b6\nReturn true if p is a\nset\nobject or an instance of a subtype. This function always succeeds.\n-\nint PyFrozenSet_Check(PyObject *p)\u00b6\nReturn true if p is a\nfrozenset\nobject or an instance of a subtype. This function always succeeds.\n-\nint PyAnySet_Check(PyObject *p)\u00b6\nReturn true if p is a\nset\nobject, afrozenset\nobject, or an instance of a subtype. This function always succeeds.\n-\nint PySet_CheckExact(PyObject *p)\u00b6\nReturn true if p is a\nset\nobject but not an instance of a subtype. This function always succeeds.Added in version 3.10.\n-\nint PyAnySet_CheckExact(PyObject *p)\u00b6\nReturn true if p is a\nset\nobject or afrozenset\nobject but not an instance of a subtype. This function always succeeds.\n-\nint PyFrozenSet_CheckExact(PyObject *p)\u00b6\nReturn true if p is a\nfrozenset\nobject but not an instance of a subtype. This function always succeeds.\n-\nPyObject *PySet_New(PyObject *iterable)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nset\ncontaining objects returned by the iterable. 
The iterable may beNULL\nto create a new empty set. Return the new set on success orNULL\non failure. RaiseTypeError\nif iterable is not actually iterable. The constructor is also useful for copying a set (c=set(s)\n).\n-\nPyObject *PyFrozenSet_New(PyObject *iterable)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nfrozenset\ncontaining objects returned by the iterable. The iterable may beNULL\nto create a new empty frozenset. Return the new set on success orNULL\non failure. RaiseTypeError\nif iterable is not actually iterable.\nThe following functions and macros are available for instances of set\nor frozenset\nor instances of their subtypes.\n-\nPy_ssize_t PySet_Size(PyObject *anyset)\u00b6\n- Part of the Stable ABI.\nReturn the length of a\nset\norfrozenset\nobject. Equivalent tolen(anyset)\n. Raises aSystemError\nif anyset is not aset\n,frozenset\n, or an instance of a subtype.\n-\nPy_ssize_t PySet_GET_SIZE(PyObject *anyset)\u00b6\nMacro form of\nPySet_Size()\nwithout error checking.\n-\nint PySet_Contains(PyObject *anyset, PyObject *key)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif found,0\nif not found, and-1\nif an error is encountered. Unlike the Python__contains__()\nmethod, this function does not automatically convert unhashable sets into temporary frozensets. Raise aTypeError\nif the key is unhashable. RaiseSystemError\nif anyset is not aset\n,frozenset\n, or an instance of a subtype.\n-\nint PySet_Add(PyObject *set, PyObject *key)\u00b6\n- Part of the Stable ABI.\nAdd key to a\nset\ninstance. Also works withfrozenset\ninstances (likePyTuple_SetItem()\nit can be used to fill in the values of brand new frozensets before they are exposed to other code). Return0\non success or-1\non failure. Raise aTypeError\nif the key is unhashable. Raise aMemoryError\nif there is no room to grow. 
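The difference noted above between PySet_Contains() and the Python-level __contains__ (no automatic coercion of unhashable sets to temporary frozensets) can be illustrated from Python:

```python
s = {frozenset({1, 2})}

# Python's "in" coerces the unhashable set argument to a temporary frozenset:
assert {1, 2} in s

# PySet_Contains() skips that coercion, so an unhashable key raises TypeError,
# just as hashing the set directly does:
raised = False
try:
    hash({1, 2})
except TypeError:
    raised = True
assert raised
```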
Raise aSystemError\nif set is not an instance ofset\nor its subtype.\nThe following functions are available for instances of set\nor its\nsubtypes but not for instances of frozenset\nor its subtypes.\n-\nint PySet_Discard(PyObject *set, PyObject *key)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif found and removed,0\nif not found (no action taken), and-1\nif an error is encountered. Does not raiseKeyError\nfor missing keys. Raise aTypeError\nif the key is unhashable. Unlike the Pythondiscard()\nmethod, this function does not automatically convert unhashable sets into temporary frozensets. RaiseSystemError\nif set is not an instance ofset\nor its subtype.\n-\nPyObject *PySet_Pop(PyObject *set)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new reference to an arbitrary object in the set, and removes the object from the set. Return\nNULL\non failure. RaiseKeyError\nif the set is empty. Raise aSystemError\nif set is not an instance ofset\nor its subtype.\n-\nint PySet_Clear(PyObject *set)\u00b6\n- Part of the Stable ABI.\nEmpty an existing set of all elements. Return\n0\non success. Return-1\nand raiseSystemError\nif set is not an instance ofset\nor its subtype.\nDeprecated API\u00b6\n-\nPySet_MINSIZE\u00b6\nA soft deprecated constant representing the size of an internal preallocated table inside\nPySetObject\ninstances.This is documented solely for completeness, as there are no guarantees that a given version of CPython uses preallocated tables with a fixed size. 
In code that does not deal with unstable set internals,\nPySet_MINSIZE\ncan be replaced with a small constant like 8\n. If looking for the size of a set, use\nPySet_Size()\ninstead.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1479}
{"url": "https://docs.python.org/3/whatsnew/index.html", "title": "What\u2019s New in Python", "content": "What\u2019s New in Python\u00b6\nThe \u201cWhat\u2019s New in Python\u201d series of essays takes tours through the most important changes between major Python versions. They are a \u201cmust read\u201d for anyone wishing to stay up-to-date after a new release.\n- What\u2019s new in Python 3.14\n- What\u2019s New In Python 3.13\n- What\u2019s New In Python 3.12\n- What\u2019s New In Python 3.11\n- Summary \u2013 Release highlights\n- New Features\n- New Features Related to Type Hints\n- Other Language Changes\n- Other CPython Implementation Changes\n- New Modules\n- Improved Modules\n- Optimizations\n- Faster CPython\n- CPython bytecode changes\n- Deprecated\n- Pending Removal in Python 3.12\n- Removed\n- Porting to Python 3.11\n- Build Changes\n- C API Changes\n- Notable changes in 3.11.4\n- Notable changes in 3.11.5\n- What\u2019s New In Python 3.10\n- Summary \u2013 Release highlights\n- New Features\n- New Features Related to Type Hints\n- Other Language Changes\n- New Modules\n- Improved Modules\n- Optimizations\n- Deprecated\n- Removed\n- Porting to Python 3.10\n- CPython bytecode changes\n- Build Changes\n- C API Changes\n- Notable security feature in 3.10.7\n- Notable security feature in 3.10.8\n- Notable changes in 3.10.12\n- What\u2019s New In Python 3.9\n- Summary \u2013 Release highlights\n- You should check for DeprecationWarning in your code\n- New Features\n- Other Language Changes\n- New Modules\n- Improved Modules\n- Optimizations\n- Deprecated\n- Removed\n- Porting to Python 3.9\n- Build Changes\n- C API Changes\n- Notable changes in Python 3.9.1\n- Notable changes in Python 
3.9.2\n- Notable changes in Python 3.9.3\n- Notable changes in Python 3.9.5\n- Notable security feature in 3.9.14\n- Notable changes in 3.9.17\n- What\u2019s New In Python 3.8\n- Summary \u2013 Release highlights\n- New Features\n- Other Language Changes\n- New Modules\n- Improved Modules\n- Optimizations\n- Build and C API Changes\n- Deprecated\n- API and Feature Removals\n- Porting to Python 3.8\n- Notable changes in Python 3.8.1\n- Notable changes in Python 3.8.2\n- Notable changes in Python 3.8.3\n- Notable changes in Python 3.8.8\n- Notable changes in Python 3.8.9\n- Notable changes in Python 3.8.10\n- Notable changes in Python 3.8.12\n- Notable security feature in 3.8.14\n- Notable changes in 3.8.17\n- What\u2019s New In Python 3.7\n- Summary \u2013 Release Highlights\n- New Features\n- Other Language Changes\n- New Modules\n- Improved Modules\n- C API Changes\n- Build Changes\n- Optimizations\n- Other CPython Implementation Changes\n- Deprecated Python Behavior\n- Deprecated Python modules, functions and methods\n- Deprecated functions and types of the C API\n- Platform Support Removals\n- API and Feature Removals\n- Module Removals\n- Windows-only Changes\n- Porting to Python 3.7\n- Notable changes in Python 3.7.1\n- Notable changes in Python 3.7.2\n- Notable changes in Python 3.7.6\n- Notable changes in Python 3.7.10\n- Notable changes in Python 3.7.11\n- Notable security feature in 3.7.14\n- What\u2019s New In Python 3.6\n- Summary \u2013 Release highlights\n- New Features\n- Other Language Changes\n- New Modules\n- Improved Modules\n- Optimizations\n- Build and C API Changes\n- Other Improvements\n- Deprecated\n- Removed\n- Porting to Python 3.6\n- Notable changes in Python 3.6.2\n- Notable changes in Python 3.6.4\n- Notable changes in Python 3.6.5\n- Notable changes in Python 3.6.7\n- Notable changes in Python 3.6.10\n- Notable changes in Python 3.6.13\n- Notable changes in Python 3.6.14\n- What\u2019s New In Python 
3.5\n- What\u2019s New In Python 3.4\n- What\u2019s New In Python 3.3\n- Summary \u2013 Release highlights\n- PEP 405: Virtual Environments\n- PEP 420: Implicit Namespace Packages\n- PEP 3118: New memoryview implementation and buffer protocol documentation\n- PEP 393: Flexible String Representation\n- PEP 397: Python Launcher for Windows\n- PEP 3151: Reworking the OS and IO exception hierarchy\n- PEP 380: Syntax for Delegating to a Subgenerator\n- PEP 409: Suppressing exception context\n- PEP 414: Explicit Unicode literals\n- PEP 3155: Qualified name for classes and functions\n- PEP 412: Key-Sharing Dictionary\n- PEP 362: Function Signature Object\n- PEP 421: Adding sys.implementation\n- Using importlib as the Implementation of Import\n- Other Language Changes\n- A Finer-Grained Import Lock\n- Builtin functions and types\n- New Modules\n- Improved Modules\n- Optimizations\n- Build and C API Changes\n- Deprecated\n- Porting to Python 3.3\n- What\u2019s New In Python 3.2\n- PEP 384: Defining a Stable ABI\n- PEP 389: Argparse Command Line Parsing Module\n- PEP 391: Dictionary Based Configuration for Logging\n- PEP 3148: The\nconcurrent.futures\nmodule - PEP 3147: PYC Repository Directories\n- PEP 3149: ABI Version Tagged .so Files\n- PEP 3333: Python Web Server Gateway Interface v1.0.1\n- Other Language Changes\n- New, Improved, and Deprecated Modules\n- Multi-threading\n- Optimizations\n- Unicode\n- Codecs\n- Documentation\n- IDLE\n- Code Repository\n- Build and C API Changes\n- Porting to Python 3.2\n- What\u2019s New In Python 3.1\n- What\u2019s New In Python 3.0\n- What\u2019s New in Python 2.7\n- The Future for Python 2.x\n- Changes to the Handling of Deprecation Warnings\n- Python 3.1 Features\n- PEP 372: Adding an Ordered Dictionary to collections\n- PEP 378: Format Specifier for Thousands Separator\n- PEP 389: The argparse Module for Parsing Command Lines\n- PEP 391: Dictionary-Based Configuration For Logging\n- PEP 3106: Dictionary Views\n- PEP 3137: The 
memoryview Object\n- Other Language Changes\n- New and Improved Modules\n- Build and C API Changes\n- Other Changes and Fixes\n- Porting to Python 2.7\n- New Features Added to Python 2.7 Maintenance Releases\n- Acknowledgements\n- What\u2019s New in Python 2.6\n- Python 3.0\n- Changes to the Development Process\n- PEP 343: The \u2018with\u2019 statement\n- PEP 366: Explicit Relative Imports From a Main Module\n- PEP 370: Per-user\nsite-packages\nDirectory - PEP 371: The\nmultiprocessing\nPackage - PEP 3101: Advanced String Formatting\n- PEP 3105:\nprint\nAs a Function - PEP 3110: Exception-Handling Changes\n- PEP 3112: Byte Literals\n- PEP 3116: New I/O Library\n- PEP 3118: Revised Buffer Protocol\n- PEP 3119: Abstract Base Classes\n- PEP 3127: Integer Literal Support and Syntax\n- PEP 3129: Class Decorators\n- PEP 3141: A Type Hierarchy for Numbers\n- Other Language Changes\n- New and Improved Modules\n- Deprecations and Removals\n- Build and C API Changes\n- Porting to Python 2.6\n- Acknowledgements\n- What\u2019s New in Python 2.5\n- PEP 308: Conditional Expressions\n- PEP 309: Partial Function Application\n- PEP 314: Metadata for Python Software Packages v1.1\n- PEP 328: Absolute and Relative Imports\n- PEP 338: Executing Modules as Scripts\n- PEP 341: Unified try/except/finally\n- PEP 342: New Generator Features\n- PEP 343: The \u2018with\u2019 statement\n- PEP 352: Exceptions as New-Style Classes\n- PEP 353: Using ssize_t as the index type\n- PEP 357: The \u2018__index__\u2019 method\n- Other Language Changes\n- New, Improved, and Removed Modules\n- Build and C API Changes\n- Porting to Python 2.5\n- Acknowledgements\n- What\u2019s New in Python 2.4\n- PEP 218: Built-In Set Objects\n- PEP 237: Unifying Long Integers and Integers\n- PEP 289: Generator Expressions\n- PEP 292: Simpler String Substitutions\n- PEP 318: Decorators for Functions and Methods\n- PEP 322: Reverse Iteration\n- PEP 324: New subprocess Module\n- PEP 327: Decimal Data Type\n- PEP 328: 
Multi-line Imports\n- PEP 331: Locale-Independent Float/String Conversions\n- Other Language Changes\n- New, Improved, and Deprecated Modules\n- Build and C API Changes\n- Porting to Python 2.4\n- Acknowledgements\n- What\u2019s New in Python 2.3\n- PEP 218: A Standard Set Datatype\n- PEP 255: Simple Generators\n- PEP 263: Source Code Encodings\n- PEP 273: Importing Modules from ZIP Archives\n- PEP 277: Unicode file name support for Windows NT\n- PEP 278: Universal Newline Support\n- PEP 279: enumerate()\n- PEP 282: The logging Package\n- PEP 285: A Boolean Type\n- PEP 293: Codec Error Handling Callbacks\n- PEP 301: Package Index and Metadata for Distutils\n- PEP 302: New Import Hooks\n- PEP 305: Comma-separated Files\n- PEP 307: Pickle Enhancements\n- Extended Slices\n- Other Language Changes\n- New, Improved, and Deprecated Modules\n- Pymalloc: A Specialized Object Allocator\n- Build and C API Changes\n- Other Changes and Fixes\n- Porting to Python 2.3\n- Acknowledgements\n- What\u2019s New in Python 2.2\n- Introduction\n- PEPs 252 and 253: Type and Class Changes\n- PEP 234: Iterators\n- PEP 255: Simple Generators\n- PEP 237: Unifying Long Integers and Integers\n- PEP 238: Changing the Division Operator\n- Unicode Changes\n- PEP 227: Nested Scopes\n- New and Improved Modules\n- Interpreter Changes and Fixes\n- Other Changes and Fixes\n- Acknowledgements\n- What\u2019s New in Python 2.1\n- Introduction\n- PEP 227: Nested Scopes\n- PEP 236: __future__ Directives\n- PEP 207: Rich Comparisons\n- PEP 230: Warning Framework\n- PEP 229: New Build System\n- PEP 205: Weak References\n- PEP 232: Function Attributes\n- PEP 235: Importing Modules on Case-Insensitive Platforms\n- PEP 217: Interactive Display Hook\n- PEP 208: New Coercion Model\n- PEP 241: Metadata in Python Packages\n- New and Improved Modules\n- Other Changes and Fixes\n- Acknowledgements\n- What\u2019s New in Python 2.0\n- Introduction\n- What About Python 1.6?\n- New Development Process\n- Unicode\n- List 
Comprehensions\n- Augmented Assignment\n- String Methods\n- Garbage Collection of Cycles\n- Other Core Changes\n- Porting to 2.0\n- Extending/Embedding Changes\n- Distutils: Making Modules Easy to Install\n- XML Modules\n- Module changes\n- New modules\n- IDLE Improvements\n- Deleted and Deprecated Modules\n- Acknowledgements\nThe \u201cChangelog\u201d is an HTML version of the file built from the contents of the Misc/NEWS.d directory tree, which contains all nontrivial changes to Python for the current version.\n- Changelog\n- Python next\n- Python 3.14.3 final\n- Python 3.14.2 final\n- Python 3.14.1 final\n- Python 3.14.0 final\n- Python 3.14.0 release candidate 3\n- Python 3.14.0 release candidate 2\n- Python 3.14.0 release candidate 1\n- Python 3.14.0 beta 4\n- Python 3.14.0 beta 3\n- Python 3.14.0 beta 2\n- Python 3.14.0 beta 1\n- Python 3.14.0 alpha 7\n- Python 3.14.0 alpha 6\n- Python 3.14.0 alpha 5\n- Python 3.14.0 alpha 4\n- Python 3.14.0 alpha 3\n- Python 3.14.0 alpha 2\n- Python 3.14.0 alpha 1\n- Python 3.13.0 beta 1\n- Python 3.13.0 alpha 6\n- Python 3.13.0 alpha 5\n- Python 3.13.0 alpha 4\n- Python 3.13.0 alpha 3\n- Python 3.13.0 alpha 2\n- Python 3.13.0 alpha 1\n- Python 3.12.0 beta 1\n- Python 3.12.0 alpha 7\n- Python 3.12.0 alpha 6\n- Python 3.12.0 alpha 5\n- Python 3.12.0 alpha 4\n- Python 3.12.0 alpha 3\n- Python 3.12.0 alpha 2\n- Python 3.12.0 alpha 1\n- Python 3.11.0 beta 1\n- Python 3.11.0 alpha 7\n- Python 3.11.0 alpha 6\n- Python 3.11.0 alpha 5\n- Python 3.11.0 alpha 4\n- Python 3.11.0 alpha 3\n- Python 3.11.0 alpha 2\n- Python 3.11.0 alpha 1\n- Python 3.10.0 beta 1\n- Python 3.10.0 alpha 7\n- Python 3.10.0 alpha 6\n- Python 3.10.0 alpha 5\n- Python 3.10.0 alpha 4\n- Python 3.10.0 alpha 3\n- Python 3.10.0 alpha 2\n- Python 3.10.0 alpha 1\n- Python 3.9.0 beta 1\n- Python 3.9.0 alpha 6\n- Python 3.9.0 alpha 5\n- Python 3.9.0 alpha 4\n- Python 3.9.0 alpha 3\n- Python 3.9.0 alpha 2\n- Python 3.9.0 alpha 1\n- Python 3.8.0 beta 1\n- Python 3.8.0 
alpha 4\n- Python 3.8.0 alpha 3\n- Python 3.8.0 alpha 2\n- Python 3.8.0 alpha 1\n- Python 3.7.0 final\n- Python 3.7.0 release candidate 1\n- Python 3.7.0 beta 5\n- Python 3.7.0 beta 4\n- Python 3.7.0 beta 3\n- Python 3.7.0 beta 2\n- Python 3.7.0 beta 1\n- Python 3.7.0 alpha 4\n- Python 3.7.0 alpha 3\n- Python 3.7.0 alpha 2\n- Python 3.7.0 alpha 1\n- Python 3.6.6 final\n- Python 3.6.6 release candidate 1\n- Python 3.6.5 final\n- Python 3.6.5 release candidate 1\n- Python 3.6.4 final\n- Python 3.6.4 release candidate 1\n- Python 3.6.3 final\n- Python 3.6.3 release candidate 1\n- Python 3.6.2 final\n- Python 3.6.2 release candidate 2\n- Python 3.6.2 release candidate 1\n- Python 3.6.1 final\n- Python 3.6.1 release candidate 1\n- Python 3.6.0 final\n- Python 3.6.0 release candidate 2\n- Python 3.6.0 release candidate 1\n- Python 3.6.0 beta 4\n- Python 3.6.0 beta 3\n- Python 3.6.0 beta 2\n- Python 3.6.0 beta 1\n- Python 3.6.0 alpha 4\n- Python 3.6.0 alpha 3\n- Python 3.6.0 alpha 2\n- Python 3.6.0 alpha 1\n- Python 3.5.5 final\n- Python 3.5.5 release candidate 1\n- Python 3.5.4 final\n- Python 3.5.4 release candidate 1\n- Python 3.5.3 final\n- Python 3.5.3 release candidate 1\n- Python 3.5.2 final\n- Python 3.5.2 release candidate 1\n- Python 3.5.1 final\n- Python 3.5.1 release candidate 1\n- Python 3.5.0 final\n- Python 3.5.0 release candidate 4\n- Python 3.5.0 release candidate 3\n- Python 3.5.0 release candidate 2\n- Python 3.5.0 release candidate 1\n- Python 3.5.0 beta 4\n- Python 3.5.0 beta 3\n- Python 3.5.0 beta 2\n- Python 3.5.0 beta 1\n- Python 3.5.0 alpha 4\n- Python 3.5.0 alpha 3\n- Python 3.5.0 alpha 2\n- Python 3.5.0 alpha 1", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3149} +{"url": "https://docs.python.org/3/reference/introduction.html", "title": "Introduction", "content": "1. Introduction\u00b6\nThis reference manual describes the Python programming language. 
It is not intended as a tutorial.\nWhile I am trying to be as precise as possible, I chose to use English rather than formal specifications for everything except syntax and lexical analysis. This should make the document more understandable to the average reader, but will leave room for ambiguities. Consequently, if you were coming from Mars and tried to re-implement Python from this document alone, you might have to guess things and in fact you would probably end up implementing quite a different language. On the other hand, if you are using Python and wonder what the precise rules about a particular area of the language are, you should definitely be able to find them here. If you would like to see a more formal definition of the language, maybe you could volunteer your time \u2014 or invent a cloning machine :-).\nIt is dangerous to add too many implementation details to a language reference document \u2014 the implementation may change, and other implementations of the same language may work differently. On the other hand, CPython is the one Python implementation in widespread use (although alternate implementations continue to gain support), and its particular quirks are sometimes worth being mentioned, especially where the implementation imposes additional limitations. Therefore, you\u2019ll find short \u201cimplementation notes\u201d sprinkled throughout the text.\nEvery Python implementation comes with a number of built-in and standard modules. These are documented in The Python Standard Library. A few built-in modules are mentioned when they interact in a significant way with the language definition.\n1.1. Alternate Implementations\u00b6\nThough there is one Python implementation which is by far the most popular, there are some alternate implementations which are of particular interest to different audiences.\nKnown implementations include:\n- CPython\nThis is the original and most-maintained implementation of Python, written in C. 
New language features generally appear here first.\n- Jython\nPython implemented in Java. This implementation can be used as a scripting language for Java applications, or can be used to create applications using the Java class libraries. It is also often used to create tests for Java libraries. More information can be found at the Jython website.\n- Python for .NET\nThis implementation actually uses the CPython implementation, but is a managed .NET application and makes .NET libraries available. It was created by Brian Lloyd. For more information, see the Python for .NET home page.\n- IronPython\nAn alternate Python for .NET. Unlike Python.NET, this is a complete Python implementation that generates IL, and compiles Python code directly to .NET assemblies. It was created by Jim Hugunin, the original creator of Jython. For more information, see the IronPython website.\n- PyPy\nAn implementation of Python written completely in Python. It supports several advanced features not found in other implementations like stackless support and a Just in Time compiler. One of the goals of the project is to encourage experimentation with the language itself by making it easier to modify the interpreter (since it is written in Python). Additional information is available on the PyPy project\u2019s home page.\nEach of these implementations varies in some way from the language as documented in this manual, or introduces specific information beyond what\u2019s covered in the standard Python documentation. Please refer to the implementation-specific documentation to determine what else you need to know about the specific implementation you\u2019re using.\n1.2. Notation\u00b6\nThe descriptions of lexical analysis and syntax use a grammar notation that is a mixture of EBNF and PEG. 
For example:\nname: letter\n(letter\n| digit\n| "_")*\nletter: "a"..."z" | "A"..."Z"\ndigit: "0"..."9"\nIn this example, the first line says that a name\nis a letter\nfollowed\nby a sequence of zero or more letter\ns, digit\ns, and underscores.\nA letter\nin turn is any of the single characters 'a'\nthrough\n'z'\nand 'A'\nthrough 'Z'\n; a digit\nis a single character from '0'\nto '9'\n.\nEach rule begins with a name (which identifies the rule that\u2019s being defined)\nfollowed by a colon, :\n.\nThe definition to the right of the colon uses the following syntax elements:\nname\n: A name refers to another rule. Where possible, it is a link to the rule\u2019s definition.\nTOKEN\n: An uppercase name refers to a token. For the purposes of grammar definitions, tokens are the same as rules.\n"text"\n, 'text'\n: Text in single or double quotes must match literally (without the quotes). The type of quote is chosen according to the meaning of text\n:\n'if'\n: A name in single quotes denotes a keyword.\n"case"\n: A name in double quotes denotes a soft-keyword.\n'@'\n: A non-letter symbol in single quotes denotes an OP\ntoken, that is, a delimiter or operator.\ne1 e2\n: Items separated only by whitespace denote a sequence. Here, e1\nmust be followed by e2\n.\ne1 | e2\n: A vertical bar is used to separate alternatives. It denotes PEG\u2019s \u201cordered choice\u201d: if e1\nmatches, e2\nis not considered. In traditional PEG grammars, this is written as a slash, /\n, rather than a vertical bar. See PEP 617 for more background and details.\ne*\n: A star means zero or more repetitions of the preceding item.\ne+\n: Likewise, a plus means one or more repetitions.\n[e]\n: A phrase enclosed in square brackets means zero or one occurrences. 
In other words, the enclosed phrase is optional.\ne?\n: A question mark has exactly the same meaning as square brackets: the preceding item is optional.\n(e)\n: Parentheses are used for grouping.\nThe following notation is only used in lexical definitions.\n"a"..."z"\n: Two literal characters separated by three dots mean a choice of any single character in the given (inclusive) range of ASCII characters.\n<...>\n: A phrase between angular brackets gives an informal description of the matched symbol (for example,\n), or an abbreviation that is defined in nearby text (for example,\n).\nSome definitions also use lookaheads, which indicate that an element must (or must not) match at a given position, but without consuming any input:\n&e\n: a positive lookahead (that is, e\nis required to match)\n!e\n: a negative lookahead (that is, e\nis required not to match)\nThe unary operators (*\n, +\n, ?\n) bind as tightly as possible;\nthe vertical bar (|\n) binds most loosely.\nWhite space is only meaningful to separate tokens.\nRules are normally contained on a single line, but rules that are too long may be wrapped:\nliteral: stringliteral | bytesliteral | integer | floatnumber | imagnumber\nAlternatively, rules may be formatted with the first line ending at the colon, and each alternative beginning with a vertical bar on a new line. For example:\nliteral:\n| stringliteral\n| bytesliteral\n| integer\n| floatnumber\n| imagnumber\nThis does not mean that there is an empty first alternative.\n1.2.1. Lexical and Syntactic definitions\u00b6\nThere is some difference between lexical and syntactic analysis: the lexical analyzer operates on the individual characters of the input source, while the parser (syntactic analyzer) operates on the stream of tokens generated by the lexical analysis. 
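As an informal illustration (my own sketch, not part of the reference), the simplified name rule from the notation example earlier can be approximated with a regular expression over the same ASCII ranges:

```python
import re

# name: letter (letter | digit | "_")*  -- the simplified example rule.
# This is NOT Python's real identifier grammar, which also allows a
# leading underscore and non-ASCII letters.
NAME = re.compile(r"[a-zA-Z][a-zA-Z0-9_]*\Z")

assert NAME.match("spam_1")     # a letter, then letters/digits/underscores
assert not NAME.match("1spam")  # must start with a letter
assert not NAME.match("_x")     # the example rule has no leading underscore
```

Note that a regex cannot express PEG's ordered choice in general; it happens to suffice for this one rule.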
However, in some cases the exact boundary between the two phases is a CPython implementation detail.\nThe practical difference between the two is that in lexical definitions,\nall whitespace is significant.\nThe lexical analyzer discards all whitespace that is not\nconverted to tokens like token.INDENT\nor NEWLINE\n.\nSyntactic definitions then use these tokens, rather than source characters.\nThis documentation uses the same BNF grammar for both styles of definitions. All uses of BNF in the next chapter (Lexical analysis) are lexical definitions; uses in subsequent chapters are syntactic definitions.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1954}
{"url": "https://docs.python.org/3/whatsnew/2.0.html", "title": "What\u2019s New in Python 2.0", "content": "What\u2019s New in Python 2.0\u00b6\n- Author:\nA.M. Kuchling and Moshe Zadka\nIntroduction\u00b6\nA new release of Python, version 2.0, was released on October 16, 2000. This article covers the exciting new features in 2.0, highlights some other useful changes, and points out a few incompatible changes that may require rewriting code.\nPython\u2019s development never completely stops between releases, and a steady flow of bug fixes and improvements is always being submitted. A host of minor fixes, a few optimizations, additional docstrings, and better error messages went into 2.0; to list them all would be impossible, but they\u2019re certainly significant. Consult the publicly available CVS logs if you want to see the full list. This progress is due to the fact that the five developers working for PythonLabs are now getting paid to spend their days fixing bugs, and also due to the improved communication resulting from moving to SourceForge.\nWhat About Python 1.6?\u00b6\nPython 1.6 can be thought of as the Contractual Obligations Python release. 
After the core development team left CNRI in May 2000, CNRI requested that a 1.6 release be created, containing all the work on Python that had been performed at CNRI. Python 1.6 therefore represents the state of the CVS tree as of May 2000, with the most significant new feature being Unicode support. Development continued after May, of course, so the 1.6 tree received a few fixes to ensure that it\u2019s forward-compatible with Python 2.0. 1.6 is therefore part of Python\u2019s evolution, and not a side branch.\nSo, should you take much interest in Python 1.6? Probably not. The 1.6final and 2.0beta1 releases were made on the same day (September 5, 2000), the plan being to finalize Python 2.0 within a month or so. If you have applications to maintain, there seems little point in breaking things by moving to 1.6, fixing them, and then having another round of breakage within a month by moving to 2.0; you\u2019re better off just going straight to 2.0. Most of the really interesting features described in this document are only in 2.0, because a lot of work was done between May and September.\nNew Development Process\u00b6\nThe most important change in Python 2.0 may not be to the code at all, but to how Python is developed: in May 2000 the Python developers began using the tools made available by SourceForge for storing source code, tracking bug reports, and managing the queue of patch submissions. To report bugs or submit patches for Python 2.0, use the bug tracking and patch manager tools available from Python\u2019s project page, located at https://sourceforge.net/projects/python/.\nThe most important of the services now hosted at SourceForge is the Python CVS tree, the version-controlled repository containing the source code for Python. Previously, there were roughly 7 or so people who had write access to the CVS tree, and all patches had to be inspected and checked in by one of the people on this short list. Obviously, this wasn\u2019t very scalable. 
By moving the CVS tree to SourceForge, it became possible to grant write access to more people; as of September 2000 there were 27 people able to check in changes, a fourfold increase. This makes possible large-scale changes that wouldn\u2019t be attempted if they\u2019d have to be filtered through the small group of core developers. For example, one day Peter Schneider-Kamp took it into his head to drop K&R C compatibility and convert the C source for Python to ANSI C. After getting approval on the python-dev mailing list, he launched into a flurry of checkins that lasted about a week, other developers joined in to help, and the job was done. If there were only 5 people with write access, probably that task would have been viewed as \u201cnice, but not worth the time and effort needed\u201d and it would never have gotten done.\nThe shift to using SourceForge\u2019s services has resulted in a remarkable increase in the speed of development. Patches now get submitted, commented on, revised by people other than the original submitter, and bounced back and forth between people until the patch is deemed worth checking in. Bugs are tracked in one central location and can be assigned to a specific person for fixing, and we can count the number of open bugs to measure progress. This didn\u2019t come without a cost: developers now have more e-mail to deal with, more mailing lists to follow, and special tools had to be written for the new environment. For example, SourceForge sends default patch and bug notification e-mail messages that are completely unhelpful, so Ka-Ping Yee wrote an HTML screen-scraper that sends more useful messages.\nThe ease of adding code caused a few initial growing pains, such as code being checked in before it was ready or without getting clear agreement from the developer group. The approval process that has emerged is somewhat similar to that used by the Apache group. 
Developers can vote +1, +0, -0, or -1 on a patch; +1 and -1 denote acceptance or rejection, while +0 and -0 mean the developer is mostly indifferent to the change, though with a slight positive or negative slant. The most significant change from the Apache model is that the voting is essentially advisory, letting Guido van Rossum, who has Benevolent Dictator For Life status, know what the general opinion is. He can still ignore the result of a vote, and approve or reject a change even if the community disagrees with him.\nProducing an actual patch is the last step in adding a new feature, and is usually easy compared to the earlier task of coming up with a good design. Discussions of new features can often explode into lengthy mailing list threads, making the discussion hard to follow, and no one can read every posting to python-dev. Therefore, a relatively formal process has been set up to write Python Enhancement Proposals (PEPs), modelled on the internet RFC process. PEPs are draft documents that describe a proposed new feature, and are continually revised until the community reaches a consensus, either accepting or rejecting the proposal. Quoting from the introduction to PEP 1, \u201cPEP Purpose and Guidelines\u201d:\nPEP stands for Python Enhancement Proposal. A PEP is a design document providing information to the Python community, or describing a new feature for Python. The PEP should provide a concise technical specification of the feature and a rationale for the feature.\nWe intend PEPs to be the primary mechanisms for proposing new features, for collecting community input on an issue, and for documenting the design decisions that have gone into Python. The PEP author is responsible for building consensus within the community and documenting dissenting opinions.\nRead the rest of PEP 1 for the details of the PEP editorial process, style, and format. 
PEPs are kept in the Python CVS tree on SourceForge, though they\u2019re not part of the Python 2.0 distribution, and are also available in HTML form from https://peps.python.org/. As of September 2000, there are 25 PEPs, ranging from PEP 201, \u201cLockstep Iteration\u201d, to PEP 225, \u201cElementwise/Objectwise Operators\u201d.\nUnicode\u00b6\nThe largest new feature in Python 2.0 is a new fundamental data type: Unicode strings. Unicode uses 16-bit numbers to represent characters instead of the 8-bit number used by ASCII, meaning that 65,536 distinct characters can be supported.\nThe final interface for Unicode support was arrived at through countless often-stormy discussions on the python-dev mailing list, and mostly implemented by Marc-Andr\u00e9 Lemburg, based on a Unicode string type implementation by Fredrik Lundh. A detailed explanation of the interface was written up as PEP 100, \u201cPython Unicode Integration\u201d. This article will simply cover the most significant points about the Unicode interfaces.\nIn Python source code, Unicode strings are written as u\"string\"\n. Arbitrary\nUnicode characters can be written using a new escape sequence, \\uHHHH\n, where\nHHHH is a 4-digit hexadecimal number from 0000 to FFFF. The existing\n\\xHH\nescape sequence can also be used, and octal escapes can be used for\ncharacters up to U+01FF, which is represented by \\777\n.\nUnicode strings, just like regular strings, are an immutable sequence type.\nThey can be indexed and sliced, but not modified in place. Unicode strings have\nan encode( [encoding] )\nmethod that returns an 8-bit string in the desired\nencoding. Encodings are named by strings, such as 'ascii'\n, 'utf-8'\n,\n'iso-8859-1'\n, or whatever. 
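In current Python 3, where every string is Unicode and the u prefix is optional, the escape and encode machinery described here looks like this (a modern sketch, not 2.0 syntax):

```python
# \uHHHH escapes and str.encode() carry over to Python 3 essentially
# unchanged; encodings are still named by strings such as 'utf-8'.
s = "\u0660abc"                      # ARABIC-INDIC DIGIT ZERO, then ASCII
assert len(s) == 4                   # one character per escape sequence
assert s.encode("utf-8") == b"\xd9\xa0abc"
assert s.encode("utf-8").decode("utf-8") == s
```

The big difference from 2.0 is that there is no separate 8-bit string type any more: encode() always produces bytes, and the implicit ASCII coercion described below is gone.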
A codec API is defined for implementing and\nregistering new encodings that are then available throughout a Python program.\nIf an encoding isn\u2019t specified, the default encoding is usually 7-bit ASCII,\nthough it can be changed for your Python installation by calling the\nsys.setdefaultencoding(encoding)\nfunction in a customized version of\nsite.py\n.\nCombining 8-bit and Unicode strings always coerces to Unicode, using the default\nASCII encoding; the result of 'a' + u'bc'\nis u'abc'\n.\nNew built-in functions have been added, and existing built-ins modified to support Unicode:\nunichr(ch)\nreturns a Unicode string 1 character long, containing the character ch.\nord(u)\n, where u is a 1-character regular or Unicode string, returns the number of the character as an integer.\nunicode(string [, encoding] [, errors] )\ncreates a Unicode string from an 8-bit string.\nencoding\nis a string naming the encoding to use. The errors\nparameter specifies the treatment of characters that are invalid for the current encoding; passing 'strict'\nas the value causes an exception to be raised on any encoding error, while 'ignore'\ncauses errors to be silently ignored and 'replace'\nuses U+FFFD, the official replacement character, in case of any problems.\nThe\nexec\nstatement, and various built-ins such as eval()\n, getattr()\n, and setattr()\nwill also accept Unicode strings as well as regular strings. (It\u2019s possible that the process of fixing this missed some built-ins; if you find a built-in function that accepts strings but doesn\u2019t accept Unicode strings at all, please report it as a bug.)\nA new module, unicodedata\n, provides an interface to Unicode character\nproperties. For example, unicodedata.category(u'A')\nreturns the 2-character\nstring \u2018Lu\u2019, the \u2018L\u2019 denoting it\u2019s a letter, and \u2018u\u2019 meaning that it\u2019s\nuppercase. 
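The 'strict'/'ignore'/'replace' policies and the unicodedata lookups work the same way in Python 3, with modern spelling (unicode() and unichr() are gone; bytes.decode() takes their place). A small sketch:

```python
import unicodedata

data = b"caf\xe9"                    # Latin-1 bytes; 0xE9 is invalid as ASCII
try:
    data.decode("ascii")             # 'strict' is the default: raises
except UnicodeDecodeError:
    pass
assert data.decode("ascii", "ignore") == "caf"
assert data.decode("ascii", "replace") == "caf\ufffd"   # U+FFFD replacement

assert unicodedata.category("A") == "Lu"   # Letter, uppercase
```

(When encoding rather than decoding, Python 3's 'replace' handler substitutes '?' instead of U+FFFD, since U+FFFD may not exist in the target encoding.)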
unicodedata.bidirectional(u'\\u0660') returns \u2018AN\u2019, meaning that U+0660 is an Arabic number.\nThe codecs module contains functions to look up existing encodings and register new ones. Unless you want to implement a new encoding, you\u2019ll most often use the codecs.lookup(encoding) function, which returns a 4-element tuple: (encode_func, decode_func, stream_reader, stream_writer).\nencode_func is a function that takes a Unicode string, and returns a 2-tuple (string, length). string is an 8-bit string containing a portion (perhaps all) of the Unicode string converted into the given encoding, and length tells you how much of the Unicode string was converted.\ndecode_func is the opposite of encode_func, taking an 8-bit string and returning a 2-tuple (ustring, length), consisting of the resulting Unicode string ustring and the integer length telling how much of the 8-bit string was consumed.\nstream_reader is a class that supports decoding input from a stream. stream_reader(file_obj) returns an object that supports the read(), readline(), and readlines() methods. These methods will all translate from the given encoding and return Unicode strings.\nstream_writer, similarly, is a class that supports encoding output to a stream. stream_writer(file_obj) returns an object that supports the write() and writelines() methods.
These methods expect Unicode strings, translating them to the given encoding on output.\nFor example, the following code writes a Unicode string into a file, encoding it as UTF-8:\nimport codecs\nunistr = u'\\u0660\\u2000ab ...'\n(UTF8_encode, UTF8_decode,\nUTF8_streamreader, UTF8_streamwriter) = codecs.lookup('UTF-8')\noutput = UTF8_streamwriter( open( '/tmp/output', 'wb') )\noutput.write( unistr )\noutput.close()\nThe following code would then read UTF-8 input from the file:\ninput = UTF8_streamreader( open( '/tmp/output', 'rb') )\nprint repr(input.read())\ninput.close()\nUnicode-aware regular expressions are available through the re\nmodule,\nwhich has a new underlying implementation called SRE written by Fredrik Lundh of\nSecret Labs AB.\nA -U\ncommand line option was added which causes the Python compiler to\ninterpret all string literals as Unicode string literals. This is intended to be\nused in testing and future-proofing your Python code, since some future version\nof Python may drop support for 8-bit strings and provide only Unicode strings.\nList Comprehensions\u00b6\nLists are a workhorse data type in Python, and many programs manipulate a list at some point. Two common operations on lists are to loop over them, and either pick out the elements that meet a certain criterion, or apply some function to each element. For example, given a list of strings, you might want to pull out all the strings containing a given substring, or strip off trailing whitespace from each line.\nThe existing map()\nand filter()\nfunctions can be used for this\npurpose, but they require a function as one of their arguments. This is fine if\nthere\u2019s an existing built-in function that can be passed directly, but if there\nisn\u2019t, you have to create a little function to do the required work, and\nPython\u2019s scoping rules make the result ugly if the little function needs\nadditional information. 
Take the first example in the previous paragraph, finding all the strings in the list containing a given substring. You could write the following to do it:\n# Given the list L, make a list of all strings\n# containing the substring S.\nsublist = filter(lambda s, substring=S:\n                     string.find(s, substring) != -1,\n                 L)\nBecause of Python\u2019s scoping rules, a default argument is used so that the anonymous function created by the lambda expression knows what substring is being searched for. List comprehensions make this cleaner:\nsublist = [ s for s in L if string.find(s, S) != -1 ]\nList comprehensions have the form:\n[ expression for expr1 in sequence1\n             for expr2 in sequence2 ...\n             for exprN in sequenceN\n             if condition ]\nThe for\u2026in clauses contain the sequences to be iterated over. The sequences do not have to be the same length, because they are not iterated over in parallel, but from left to right; this is explained more clearly in the following paragraphs. The elements of the generated list will be the successive values of expression. The final if clause is optional; if present, expression is only evaluated and added to the result if condition is true.\nTo make the semantics very clear, a list comprehension is equivalent to the following Python code:\nfor expr1 in sequence1:\n    for expr2 in sequence2:\n        ...\n        for exprN in sequenceN:\n            if (condition):\n                # Append the value of\n                # the expression to the\n                # resulting list.\nThis means that when there are multiple for\u2026in clauses, the length of the resulting list will be equal to the product of the lengths of all the sequences. If you have two lists of length 3, the output list is 9 elements long:\n>>> seq1 = 'abc'\n>>> seq2 = (1,2,3)\n>>> [ (x,y) for x in seq1 for y in seq2]\n[('a', 1), ('a', 2), ('a', 3), ('b', 1), ('b', 2), ('b', 3), ('c', 1),\n('c', 2), ('c', 3)]\nTo avoid introducing an ambiguity into Python\u2019s grammar, if expression is creating a tuple, it must be surrounded with parentheses.
The first list comprehension below is a syntax error, while the second one is correct:\n# Syntax error\n[ x,y for x in seq1 for y in seq2]\n# Correct\n[ (x,y) for x in seq1 for y in seq2]\nThe idea of list comprehensions originally comes from the functional programming language Haskell (https://www.haskell.org). Greg Ewing argued most effectively for adding them to Python and wrote the initial list comprehension patch, which was then discussed for a seemingly endless time on the python-dev mailing list and kept up-to-date by Skip Montanaro.\nAugmented Assignment\u00b6\nAugmented assignment operators, another long-requested feature, have been added to Python 2.0. Augmented assignment operators include +=, -=, *=, and so forth. For example, the statement a += 2 increments the value of the variable a by 2, equivalent to the slightly lengthier a = a + 2.\nThe full list of supported assignment operators is +=, -=, *=, /=, %=, **=, &=, |=, ^=, >>=, and <<=. Python classes can override the augmented assignment operators by defining methods named __iadd__(), __isub__(), etc. For example, the following Number class stores a number and supports using += to create a new instance with an incremented value.\nclass Number:\n    def __init__(self, value):\n        self.value = value\n    def __iadd__(self, increment):\n        return Number(self.value + increment)\n\nn = Number(5)\nn += 3\nprint n.value\nThe __iadd__() special method is called with the value of the increment, and should return a new instance with an appropriately modified value; this return value is bound as the new value of the variable on the left-hand side.\nAugmented assignment operators were first introduced in the C programming language, and most C-derived languages, such as awk, C++, Java, Perl, and PHP also support them.
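The Number class above still works in modern Python 3 (only the print statement needs updating); a runnable sketch that also shows the rebinding behaviour:

```python
class Number:
    def __init__(self, value):
        self.value = value

    def __iadd__(self, increment):
        # Return a brand-new instance; += rebinds the name on the
        # left-hand side to whatever __iadd__ returns.
        return Number(self.value + increment)

n = Number(5)
old = n
n += 3
assert n.value == 8
assert old.value == 5   # the original instance is unchanged
assert n is not old     # += produced a new object
```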
The augmented assignment patch was implemented by Thomas Wouters.\nString Methods\u00b6\nUntil now string-manipulation functionality was in the string\nmodule,\nwhich was usually a front-end for the strop\nmodule written in C. The\naddition of Unicode posed a difficulty for the strop\nmodule, because the\nfunctions would all need to be rewritten in order to accept either 8-bit or\nUnicode strings. For functions such as string.replace()\n, which takes 3\nstring arguments, that means eight possible permutations, and correspondingly\ncomplicated code.\nInstead, Python 2.0 pushes the problem onto the string type, making string manipulation functionality available through methods on both 8-bit strings and Unicode strings.\n>>> 'andrew'.capitalize()\n'Andrew'\n>>> 'hostname'.replace('os', 'linux')\n'hlinuxtname'\n>>> 'moshe'.find('sh')\n2\nOne thing that hasn\u2019t changed, a noteworthy April Fools\u2019 joke notwithstanding, is that Python strings are immutable. Thus, the string methods return new strings, and do not modify the string on which they operate.\nThe old string\nmodule is still around for backwards compatibility, but it\nmostly acts as a front-end to the new string methods.\nTwo methods which have no parallel in pre-2.0 versions, although they did exist\nin JPython for quite some time, are startswith()\nand endswith()\n.\ns.startswith(t)\nis equivalent to s[:len(t)] == t\n, while\ns.endswith(t)\nis equivalent to s[-len(t):] == t\n.\nOne other method which deserves special mention is join()\n. The\njoin()\nmethod of a string receives one parameter, a sequence of strings,\nand is equivalent to the string.join()\nfunction from the old string\nmodule, with the arguments reversed. In other words, s.join(seq)\nis\nequivalent to the old string.join(seq, s)\n.\nGarbage Collection of Cycles\u00b6\nThe C implementation of Python uses reference counting to implement garbage collection. 
Every Python object maintains a count of the number of references pointing to itself, and adjusts the count as references are created or destroyed. Once the reference count reaches zero, the object is no longer accessible, since you need to have a reference to an object to access it, and if the count is zero, no references exist any longer.\nReference counting has some pleasant properties: it\u2019s easy to understand and implement, and the resulting implementation is portable, fairly fast, and reacts well with other libraries that implement their own memory handling schemes. The major problem with reference counting is that it sometimes doesn\u2019t realise that objects are no longer accessible, resulting in a memory leak. This happens when there are cycles of references.\nConsider the simplest possible cycle, a class instance which has a reference to itself:\ninstance = SomeClass()\ninstance.myself = instance\nAfter the above two lines of code have been executed, the reference count of\ninstance\nis 2; one reference is from the variable named 'instance'\n, and\nthe other is from the myself\nattribute of the instance.\nIf the next line of code is del instance\n, what happens? The reference count\nof instance\nis decreased by 1, so it has a reference count of 1; the\nreference in the myself\nattribute still exists. Yet the instance is no\nlonger accessible through Python code, and it could be deleted. Several objects\ncan participate in a cycle if they have references to each other, causing all of\nthe objects to be leaked.\nPython 2.0 fixes this problem by periodically executing a cycle detection\nalgorithm which looks for inaccessible cycles and deletes the objects involved.\nA new gc\nmodule provides functions to perform a garbage collection,\nobtain debugging statistics, and tuning the collector\u2019s parameters.\nRunning the cycle detection algorithm takes some time, and therefore will result\nin some additional overhead. 
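The gc module described above is still how you exercise the collector by hand today; a small demonstration, in modern Python 3 syntax, of a cycle being reclaimed:

```python
import gc

class SomeClass:
    pass

instance = SomeClass()
instance.myself = instance   # the cycle: instance references itself
del instance                 # now unreachable, but its refcount is still nonzero

collected = gc.collect()     # run the cycle detection algorithm manually
assert collected >= 1        # the cyclic garbage was found and freed
```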
It is hoped that after we\u2019ve gotten experience\nwith the cycle collection from using 2.0, Python 2.1 will be able to minimize\nthe overhead with careful tuning. It\u2019s not yet obvious how much performance is\nlost, because benchmarking this is tricky and depends crucially on how often the\nprogram creates and destroys objects. The detection of cycles can be disabled\nwhen Python is compiled, if you can\u2019t afford even a tiny speed penalty or\nsuspect that the cycle collection is buggy, by specifying the\n--without-cycle-gc\nswitch when running the configure\nscript.\nSeveral people tackled this problem and contributed to a solution. An early implementation of the cycle detection approach was written by Toby Kelsey. The current algorithm was suggested by Eric Tiedemann during a visit to CNRI, and Guido van Rossum and Neil Schemenauer wrote two different implementations, which were later integrated by Neil. Lots of other people offered suggestions along the way; the March 2000 archives of the python-dev mailing list contain most of the relevant discussion, especially in the threads titled \u201cReference cycle collection for Python\u201d and \u201cFinalization again\u201d.\nOther Core Changes\u00b6\nVarious minor changes have been made to Python\u2019s syntax and built-in functions. None of the changes are very far-reaching, but they\u2019re handy conveniences.\nMinor Language Changes\u00b6\nA new syntax makes it more convenient to call a given function with a tuple of\narguments and/or a dictionary of keyword arguments. In Python 1.5 and earlier,\nyou\u2019d use the apply()\nbuilt-in function: apply(f, args, kw)\ncalls the\nfunction f()\nwith the argument tuple args and the keyword arguments in\nthe dictionary kw. apply()\nis the same in 2.0, but thanks to a patch\nfrom Greg Ewing, f(*args, **kw)\nis a shorter and clearer way to achieve the\nsame effect. 
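The new calling syntax can be sketched like this (modern Python 3, where apply() itself is long gone):

```python
def f(*args, **kw):
    # args collects positional arguments into a tuple,
    # kw collects keyword arguments into a dict
    return args, kw

params = (1, 2)
options = {"mode": "fast"}
# Equivalent to the old apply(f, params, options):
assert f(*params, **options) == ((1, 2), {"mode": "fast"})
```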
This syntax is symmetrical with the syntax for defining functions:\ndef f(*args, **kw):\n    # args is a tuple of positional args,\n    # kw is a dictionary of keyword args\n    ...\nThe print statement can now have its output directed to a file-like object by following the print with >> file, similar to the redirection operator in Unix shells. Previously you\u2019d either have to use the write() method of the file-like object, which lacks the convenience and simplicity of print, or you could assign a new value to sys.stdout and then restore the old value. For sending output to standard error, it\u2019s much easier to write this:\nprint >> sys.stderr, \"Warning: action field not supplied\"\nModules can now be renamed on importing them, using the syntax import module as name or from module import name as othername. The patch was submitted by Thomas Wouters.\nA new format style is available when using the % operator; \u2018%r\u2019 will insert the repr() of its argument. This was also added from symmetry considerations, this time for symmetry with the existing \u2018%s\u2019 format style, which inserts the str() of its argument. For example, '%r %s' % ('abc', 'abc') returns a string containing 'abc' abc.\nPreviously there was no way to implement a class that overrode Python\u2019s built-in in operator and implemented a custom version. obj in seq returns true if obj is present in the sequence seq; Python computes this by simply trying every index of the sequence until either obj is found or an IndexError is encountered. Moshe Zadka contributed a patch which adds a __contains__() magic method for providing a custom implementation for in. Additionally, new built-in objects written in C can define what in means for them via a new slot in the sequence protocol.\nEarlier versions of Python used a recursive algorithm for deleting objects.
Deeply nested data structures could cause the interpreter to fill up the C stack and crash; Christian Tismer rewrote the deletion logic to fix this problem. On a related note, comparing recursive objects recursed infinitely and crashed; Jeremy Hylton rewrote the code to no longer crash, producing a useful result instead. For example, after this code:\na = []\nb = []\na.append(a)\nb.append(b)\nThe comparison a==b returns true, because the two recursive data structures are isomorphic. See the thread \u201ctrashcan and PR#7\u201d in the April 2000 archives of the python-dev mailing list for the discussion leading up to this implementation, and some useful relevant links. Note that comparisons can now also raise exceptions. In earlier versions of Python, a comparison operation such as cmp(a,b) would always produce an answer, even if a user-defined __cmp__() method encountered an error, since the resulting exception would simply be silently swallowed.\nWork has been done on porting Python to 64-bit Windows on the Itanium processor, mostly by Trent Mick of ActiveState. (Confusingly, sys.platform is still 'win32' on Win64 because it seems that for ease of porting, MS Visual C++ treats code as 32 bit on Itanium.) PythonWin also supports Windows CE; see the Python CE page at https://pythonce.sourceforge.net/ for more information.\nAnother new platform is Darwin/MacOS X; initial support for it is in Python 2.0. Dynamic loading works, if you specify \u201cconfigure --with-dyld --with-suffix=.x\u201d. Consult the README in the Python source distribution for more instructions.\nAn attempt has been made to alleviate one of Python\u2019s warts, the often-confusing NameError exception when code refers to a local variable before the variable has been assigned a value.
For example, the following code raises an exception on the print statement in both 1.5.2 and 2.0; in 1.5.2 a NameError exception is raised, while 2.0 raises a new UnboundLocalError exception. UnboundLocalError is a subclass of NameError, so any existing code that expects NameError to be raised should still work.\ndef f():\n    print \"i=\",i\n    i = i + 1\nf()\nTwo new exceptions, TabError and IndentationError, have been introduced. They\u2019re both subclasses of SyntaxError, and are raised when Python code is found to be improperly indented.\nChanges to Built-in Functions\u00b6\nA new built-in, zip(seq1, seq2, ...), has been added. zip() returns a list of tuples where each tuple contains the i-th element from each of the argument sequences. The difference between zip() and map(None, seq1, seq2) is that map() pads the sequences with None if the sequences aren\u2019t all of the same length, while zip() truncates the returned list to the length of the shortest argument sequence.\nThe int() and long() functions now accept an optional \u201cbase\u201d parameter when the first argument is a string. int('123', 10) returns 123, while int('123', 16) returns 291. int(123, 16) raises a TypeError exception with the message \u201ccan\u2019t convert non-string with explicit base\u201d.\nA new variable holding more detailed version information has been added to the sys module. sys.version_info is a tuple (major, minor, micro, level, serial). For example, in a hypothetical 2.0.1beta1, sys.version_info would be (2, 0, 1, 'beta', 1). level is a string such as \"alpha\", \"beta\", or \"final\" for a final release.\nDictionaries have an odd new method, setdefault(key, default), which behaves similarly to the existing get() method. However, if the key is missing, setdefault() both returns the value of default as get() would do, and also inserts it into the dictionary as the value for key.
Thus, the following lines of code:\nif dict.has_key(key):\n    return dict[key]\nelse:\n    dict[key] = []\n    return dict[key]\ncan be reduced to a single return dict.setdefault(key, []) statement.\nThe interpreter sets a maximum recursion depth in order to catch runaway recursion before filling the C stack and causing a core dump or GPF. Previously this limit was fixed when you compiled Python, but in 2.0 the maximum recursion depth can be read and modified using sys.getrecursionlimit() and sys.setrecursionlimit(). The default value is 1000, and a rough maximum value for a given platform can be found by running a new script, Misc/find_recursionlimit.py.\nPorting to 2.0\u00b6\nNew Python releases try hard to be compatible with previous releases, and the record has been pretty good. However, some changes are considered useful enough, usually because they fix initial design decisions that turned out to be actively mistaken, that breaking backward compatibility can\u2019t always be avoided. This section lists the changes in Python 2.0 that may cause old Python code to break.\nThe change which will probably break the most code is tightening up the arguments accepted by some methods. Some methods would take multiple arguments and treat them as a tuple, particularly various list methods such as append() and insert().\nIn earlier versions of Python, if L is a list, L.append( 1,2 ) appends the tuple (1,2) to the list. In Python 2.0 this causes a TypeError exception to be raised, with the message: \u2018append requires exactly 1 argument; 2 given\u2019.
The fix is to simply add an\nextra set of parentheses to pass both values as a tuple: L.append( (1,2) )\n.\nThe earlier versions of these methods were more forgiving because they used an\nold function in Python\u2019s C interface to parse their arguments; 2.0 modernizes\nthem to use PyArg_ParseTuple()\n, the current argument parsing function,\nwhich provides more helpful error messages and treats multi-argument calls as\nerrors. If you absolutely must use 2.0 but can\u2019t fix your code, you can edit\nObjects/listobject.c\nand define the preprocessor symbol\nNO_STRICT_LIST_APPEND\nto preserve the old behaviour; this isn\u2019t recommended.\nSome of the functions in the socket\nmodule are still forgiving in this\nway. For example, socket.connect( ('hostname', 25) )\nis the correct\nform, passing a tuple representing an IP address, but socket.connect('hostname', 25)\nalso works. socket.connect_ex\nand socket.bind\nare similarly easy-going. 2.0alpha1 tightened these functions up, but because\nthe documentation actually used the erroneous multiple argument form, many\npeople wrote code which would break with the stricter checking. GvR backed out\nthe changes in the face of public reaction, so for the socket\nmodule, the\ndocumentation was fixed and the multiple argument form is simply marked as\ndeprecated; it will be tightened up again in a future Python version.\nThe \\x\nescape in string literals now takes exactly 2 hex digits. Previously\nit would consume all the hex digits following the \u2018x\u2019 and take the lowest 8 bits\nof the result, so \\x123456\nwas equivalent to \\x56\n.\nThe AttributeError\nand NameError\nexceptions have a more friendly\nerror message, whose text will be something like 'Spam' instance has no\nattribute 'eggs'\nor name 'eggs' is not defined\n. 
Previously the error\nmessage was just the missing attribute name eggs\n, and code written to take\nadvantage of this fact will break in 2.0.\nSome work has been done to make integers and long integers a bit more\ninterchangeable. In 1.5.2, large-file support was added for Solaris, to allow\nreading files larger than 2 GiB; this made the tell()\nmethod of file\nobjects return a long integer instead of a regular integer. Some code would\nsubtract two file offsets and attempt to use the result to multiply a sequence\nor slice a string, but this raised a TypeError\n. In 2.0, long integers\ncan be used to multiply or slice a sequence, and it\u2019ll behave as you\u2019d\nintuitively expect it to; 3L * 'abc'\nproduces \u2018abcabcabc\u2019, and\n(0,1,2,3)[2L:4L]\nproduces (2,3). Long integers can also be used in various\ncontexts where previously only integers were accepted, such as in the\nseek()\nmethod of file objects, and in the formats supported by the %\noperator (%d\n, %i\n, %x\n, etc.). For example, \"%d\" % 2L**64\nwill\nproduce the string 18446744073709551616\n.\nThe subtlest long integer change of all is that the str()\nof a long\ninteger no longer has a trailing \u2018L\u2019 character, though repr()\nstill\nincludes it. The \u2018L\u2019 annoyed many people who wanted to print long integers that\nlooked just like regular integers, since they had to go out of their way to chop\noff the character. This is no longer a problem in 2.0, but code which does\nstr(longval)[:-1]\nand assumes the \u2018L\u2019 is there, will now lose the final\ndigit.\nTaking the repr()\nof a float now uses a different formatting precision\nthan str()\n. repr()\nuses %.17g\nformat string for C\u2019s\nsprintf()\n, while str()\nuses %.12g\nas before. The effect is that\nrepr()\nmay occasionally show more decimal places than str()\n, for\ncertain numbers. 
For example, the number 8.1 can\u2019t be represented exactly in\nbinary, so repr(8.1)\nis '8.0999999999999996'\n, while str(8.1) is\n'8.1'\n.\nThe -X\ncommand-line option, which turned all standard exceptions into\nstrings instead of classes, has been removed; the standard exceptions will now\nalways be classes. The exceptions\nmodule containing the standard\nexceptions was translated from Python to a built-in C module, written by Barry\nWarsaw and Fredrik Lundh.\nExtending/Embedding Changes\u00b6\nSome of the changes are under the covers, and will only be apparent to people writing C extension modules or embedding a Python interpreter in a larger application. If you aren\u2019t dealing with Python\u2019s C API, you can safely skip this section.\nThe version number of the Python C API was incremented, so C extensions compiled for 1.5.2 must be recompiled in order to work with 2.0. On Windows, it\u2019s not possible for Python 2.0 to import a third party extension built for Python 1.5.x due to how Windows DLLs work, so Python will raise an exception and the import will fail.\nUsers of Jim Fulton\u2019s ExtensionClass module will be pleased to find out that\nhooks have been added so that ExtensionClasses are now supported by\nisinstance()\nand issubclass()\n. This means you no longer have to\nremember to write code such as if type(obj) == myExtensionClass\n, but can use\nthe more natural if isinstance(obj, myExtensionClass)\n.\nThe Python/importdl.c\nfile, which was a mass of #ifdefs to support\ndynamic loading on many different platforms, was cleaned up and reorganised by\nGreg Stein. importdl.c\nis now quite small, and platform-specific code\nhas been moved into a bunch of Python/dynload_*.c\nfiles. 
Another\ncleanup: there were also a number of my*.h\nfiles in the Include/\ndirectory that held various portability hacks; they\u2019ve been merged into a single\nfile, Include/pyport.h\n.\nVladimir Marangozov\u2019s long-awaited malloc restructuring was completed, to make\nit easy to have the Python interpreter use a custom allocator instead of C\u2019s\nstandard malloc()\n. For documentation, read the comments in\nInclude/pymem.h\nand Include/objimpl.h\n. For the lengthy\ndiscussions during which the interface was hammered out, see the web archives of\nthe \u2018patches\u2019 and \u2018python-dev\u2019 lists at python.org.\nRecent versions of the GUSI development environment for MacOS support POSIX\nthreads. Therefore, Python\u2019s POSIX threading support now works on the\nMacintosh. Threading support using the user-space GNU pth\nlibrary was also\ncontributed.\nThreading support on Windows was enhanced, too. Windows supports thread locks that use kernel objects only in case of contention; in the common case when there\u2019s no contention, they use simpler functions which are an order of magnitude faster. A threaded version of Python 1.5.2 on NT is twice as slow as an unthreaded version; with the 2.0 changes, the difference is only 10%. These improvements were contributed by Yakov Markovitch.\nPython 2.0\u2019s source now uses only ANSI C prototypes, so compiling Python now requires an ANSI C compiler, and can no longer be done using a compiler that only supports K&R C.\nPreviously the Python virtual machine used 16-bit numbers in its bytecode,\nlimiting the size of source files. In particular, this affected the maximum\nsize of literal lists and dictionaries in Python source; occasionally people who\nare generating Python code would run into this limit. 
A patch by Charles G. Waldman raises the limit from 2**16 to 2**32.\nThree new convenience functions intended for adding constants to a module\u2019s dictionary at module initialization time were added: PyModule_AddObject(), PyModule_AddIntConstant(), and PyModule_AddStringConstant(). Each of these functions takes a module object, a null-terminated C string containing the name to be added, and a third argument for the value to be assigned to the name. This third argument is, respectively, a Python object, a C long, or a C string.\nA wrapper API was added for Unix-style signal handlers. PyOS_getsig() gets a signal handler and PyOS_setsig() will set a new handler.\nDistutils: Making Modules Easy to Install\u00b6\nBefore Python 2.0, installing modules was a tedious affair \u2013 there was no way to figure out automatically where Python is installed, or what compiler options to use for extension modules. Software authors had to go through an arduous ritual of editing Makefiles and configuration files, which only really work on Unix and leave Windows and MacOS unsupported. Python users faced wildly differing installation instructions which varied between different extension packages, which made administering a Python installation something of a chore.\nThe SIG for distribution utilities, shepherded by Greg Ward, has created the Distutils, a system to make package installation much easier. They form the distutils package, a new part of Python\u2019s standard library. In the best case, installing a Python module from source will require the same steps: first you simply unpack the tarball or zip archive, and then run \u201cpython setup.py install\u201d. The platform will be automatically detected, the compiler will be recognized, C extension modules will be compiled, and the distribution installed into the proper directory.
Optional command-line arguments provide more control over the installation process, and the distutils package offers many places to override defaults \u2013 separating the build from the install, building or installing in non-default directories, and more.\nIn order to use the Distutils, you need to write a setup.py script. For the simple case, when the software contains only .py files, a minimal setup.py can be just a few lines long:\nfrom distutils.core import setup\nsetup(name = \"foo\", version = \"1.0\",\n      py_modules = [\"module1\", \"module2\"])\nThe setup.py file isn\u2019t much more complicated if the software consists of a few packages:\nfrom distutils.core import setup\nsetup(name = \"foo\", version = \"1.0\",\n      packages = [\"package\", \"package.subpackage\"])\nA C extension can be the most complicated case; here\u2019s an example taken from the PyXML package:\nfrom distutils.core import setup, Extension\nexpat_extension = Extension('xml.parsers.pyexpat',\n    define_macros = [('XML_NS', None)],\n    include_dirs = ['extensions/expat/xmltok',\n                    'extensions/expat/xmlparse'],\n    sources = ['extensions/pyexpat.c',\n               'extensions/expat/xmltok/xmltok.c',\n               'extensions/expat/xmltok/xmlrole.c'])\nsetup(name = \"PyXML\", version = \"0.5.4\",\n      ext_modules = [expat_extension])\nThe Distutils can also take care of creating source and binary distributions. The \u201csdist\u201d command, run by \u201cpython setup.py sdist\u201d, builds a source distribution such as foo-1.0.tar.gz. Adding new commands isn\u2019t difficult; \u201cbdist_rpm\u201d and \u201cbdist_wininst\u201d commands have already been contributed to create an RPM distribution and a Windows installer for the software, respectively.
Commands to create other distribution formats such as\nDebian packages and Solaris .pkg\nfiles are in various stages of\ndevelopment.\nAll this is documented in a new manual, Distributing Python Modules, that joins the basic set of Python documentation.\nXML Modules\u00b6\nPython 1.5.2 included a simple XML parser in the form of the xmllib\nmodule, contributed by Sjoerd Mullender. Since 1.5.2\u2019s release, two different\ninterfaces for processing XML have become common: SAX2 (version 2 of the Simple\nAPI for XML) provides an event-driven interface with some similarities to\nxmllib\n, and the DOM (Document Object Model) provides a tree-based\ninterface, transforming an XML document into a tree of nodes that can be\ntraversed and modified. Python 2.0 includes a SAX2 interface and a stripped-down\nDOM interface as part of the xml\npackage. Here we will give a brief\noverview of these new interfaces; consult the Python documentation or the source\ncode for complete details. The Python XML SIG is also working on improved\ndocumentation.\nSAX2 Support\u00b6\nSAX defines an event-driven interface for parsing XML. To use SAX, you must\nwrite a SAX handler class. Handler classes inherit from various classes\nprovided by SAX, and override various methods that will then be called by the\nXML parser. For example, the startElement()\nand endElement()\nmethods are called for every starting and end tag encountered by the parser, the\ncharacters()\nmethod is called for every chunk of character data, and so\nforth.\nThe advantage of the event-driven approach is that the whole document doesn\u2019t have to be resident in memory at any one time, which matters if you are processing really huge documents. 
However, writing the SAX handler class can get very complicated if you\u2019re trying to modify the document structure in some elaborate way.\nFor example, this little program defines a handler that prints a message\nfor every starting and ending tag, and then parses the file hamlet.xml\nusing it:\nfrom xml import sax\n\nclass SimpleHandler(sax.ContentHandler):\n    def startElement(self, name, attrs):\n        print 'Start of element:', name, attrs.keys()\n\n    def endElement(self, name):\n        print 'End of element:', name\n\n# Create a parser object\nparser = sax.make_parser()\n# Tell it what handler to use\nhandler = SimpleHandler()\nparser.setContentHandler(handler)\n# Parse a file!\nparser.parse('hamlet.xml')\nFor more information, consult the Python documentation, or the XML HOWTO at https://pyxml.sourceforge.net/topics/howto/xml-howto.html.\nDOM Support\u00b6\nThe Document Object Model is a tree-based representation for an XML document. A\ntop-level Document\ninstance is the root of the tree, and has a single\nchild which is the top-level Element\ninstance. This Element\nhas children nodes representing character data and any sub-elements, which may\nhave further children of their own, and so forth. Using the DOM you can\ntraverse the resulting tree any way you like, access element and attribute\nvalues, insert and delete nodes, and convert the tree back into XML.\nThe DOM is useful for modifying XML documents, because you can create a DOM\ntree, modify it by adding new nodes or rearranging subtrees, and then produce a\nnew XML document as output. You can also construct a DOM tree manually and\nconvert it to XML, which can be a more flexible way of producing XML output than\nsimply writing \u2026 to a file.\nThe DOM implementation included with Python lives in the xml.dom.minidom\nmodule. It\u2019s a lightweight implementation of the Level 1 DOM with support for\nXML namespaces. 
The parse()\nand parseString()\nconvenience\nfunctions are provided for generating a DOM tree:\nfrom xml.dom import minidom\ndoc = minidom.parse('hamlet.xml')\ndoc\nis a Document\ninstance. Document\n, like all the other\nDOM classes such as Element\nand Text\n, is a subclass of the\nNode\nbase class. All the nodes in a DOM tree therefore support certain\ncommon methods, such as toxml()\nwhich returns a string containing the XML\nrepresentation of the node and its children. Each class also has special\nmethods of its own; for example, Element\nand Document\ninstances have a method to find all child elements with a given tag name.\nContinuing from the previous 2-line example:\nperslist = doc.getElementsByTagName('PERSONA')\nprint perslist[0].toxml()\nprint perslist[1].toxml()\nFor the Hamlet XML file, the above few lines output:\n<PERSONA>CLAUDIUS, king of Denmark. </PERSONA>\n<PERSONA>HAMLET, son to the late, and nephew to the present king. </PERSONA>\nThe root element of the document is available as doc.documentElement\n, and\nits children can be easily modified by deleting, adding, or removing nodes:\nroot = doc.documentElement\n# Remove the first child\nroot.removeChild(root.childNodes[0])\n# Move the new first child to the end\nroot.appendChild(root.childNodes[0])\n# Insert the new first child (originally,\n# the third child) before the 20th child.\nroot.insertBefore(root.childNodes[0], root.childNodes[20])\nAgain, I will refer you to the Python documentation for a complete listing of\nthe different Node\nclasses and their various methods.\nRelationship to PyXML\u00b6\nThe XML Special Interest Group has been working on XML-related Python code for a\nwhile. Its code distribution, called PyXML, is available from the SIG\u2019s web\npages at https://www.python.org/community/sigs/current/xml-sig. The PyXML distribution also used\nthe package name xml\n. 
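As an aside, the same minidom workflow runs essentially unchanged on modern Python 3 apart from the print function. A self-contained sketch, using an invented inline document in place of hamlet.xml:

```python
from xml.dom import minidom

# Inline stand-in for hamlet.xml (invented sample data).
doc = minidom.parseString(
    "<PLAY>"
    "<PERSONA>CLAUDIUS, king of Denmark.</PERSONA>"
    "<PERSONA>HAMLET, son to the late, and nephew to the present king.</PERSONA>"
    "</PLAY>"
)

perslist = doc.getElementsByTagName("PERSONA")
print(perslist[0].toxml())  # <PERSONA>CLAUDIUS, king of Denmark.</PERSONA>

# Move the first child of the root element to the end, then serialize.
root = doc.documentElement
root.appendChild(root.removeChild(root.childNodes[0]))
print(root.toxml())
```

After the move, the CLAUDIUS element is the last child of the root, and `toxml()` reflects the rearranged tree.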
If you\u2019ve written programs that used PyXML, you\u2019re\nprobably wondering about its compatibility with the 2.0 xml\npackage.\nThe answer is that Python 2.0\u2019s xml\npackage isn\u2019t compatible with PyXML,\nbut can be made compatible by installing a recent version of PyXML. Many\napplications can get by with the XML support that is included with Python 2.0,\nbut more complicated applications will require the full PyXML package to be\ninstalled. When installed, PyXML versions 0.6.0 or greater will replace the\nxml\npackage shipped with Python, and will be a strict superset of the\nstandard package, adding a bunch of additional features. Some of the additional\nfeatures in PyXML include:\n4DOM, a full DOM implementation from FourThought, Inc.\nThe xmlproc validating parser, written by Lars Marius Garshol.\nThe sgmlop\nparser accelerator module, written by Fredrik Lundh.\nModule changes\u00b6\nLots of improvements and bugfixes were made to Python\u2019s extensive standard\nlibrary; some of the affected modules include readline\n,\nConfigParser\n, cgi\n, calendar\n, posix\n,\nxmllib\n, aifc\n, chunk\n, wave\n, random\n, shelve\n,\nand nntplib\n. Consult the CVS logs for the exact patch-by-patch details.\nBrian Gallew contributed OpenSSL support for the socket\nmodule. OpenSSL\nis an implementation of the Secure Socket Layer, which encrypts the data being\nsent over a socket. When compiling Python, you can edit Modules/Setup\nto include SSL support, which adds an additional function to the socket\nmodule: socket.ssl(socket, keyfile, certfile)\n, which takes a socket\nobject and returns an SSL socket. 
The httplib\nand urllib\nmodules\nwere also changed to support https://\nURLs, though no one has implemented\nFTP or SMTP over SSL.\nThe httplib\nmodule has been rewritten by Greg Stein to support HTTP/1.1.\nBackward compatibility with the 1.5 version of httplib\nis provided,\nthough using HTTP/1.1 features such as pipelining will require rewriting code to\nuse a different set of interfaces.\nThe Tkinter\nmodule now supports Tcl/Tk version 8.1, 8.2, or 8.3, and\nsupport for the older 7.x versions has been dropped. The Tkinter module now\nsupports displaying Unicode strings in Tk widgets. Also, Fredrik Lundh\ncontributed an optimization which makes operations like create_line\nand\ncreate_polygon\nmuch faster, especially when using lots of coordinates.\nThe curses\nmodule has been greatly extended, starting from Oliver\nAndrich\u2019s enhanced version, to provide many additional functions from ncurses\nand SYSV curses, such as colour, alternative character set support, pads, and\nmouse support. This means the module is no longer compatible with operating\nsystems that only have BSD curses, but there don\u2019t seem to be any currently\nmaintained OSes that fall into this category.\nAs mentioned in the earlier discussion of 2.0\u2019s Unicode support, the underlying\nimplementation of the regular expressions provided by the re\nmodule has\nbeen changed. SRE, a new regular expression engine written by Fredrik Lundh and\npartially funded by Hewlett Packard, supports matching against both 8-bit\nstrings and Unicode strings.\nNew modules\u00b6\nA number of new modules were added. We\u2019ll simply list them with brief descriptions; consult the 2.0 documentation for the details of a particular module.\natexit\n: For registering functions to be called before the Python interpreter exits. Code that currently sets sys.exitfunc\ndirectly should be changed to use the atexit\nmodule instead, importing atexit\nand calling atexit.register()\nwith the function to be called on exit. 
(Contributed by Skip Montanaro.)\ncodecs\n, encodings\n, unicodedata\n: Added as part of the new Unicode support.\nfilecmp\n: Supersedes the old cmp\n, cmpcache\nand dircmp\nmodules, which have now become deprecated. (Contributed by Gordon MacMillan and Moshe Zadka.)\ngettext\n: This module provides internationalization (I18N) and localization (L10N) support for Python programs by providing an interface to the GNU gettext message catalog library. (Integrated by Barry Warsaw, from separate contributions by Martin von L\u00f6wis, Peter Funk, and James Henstridge.)\nlinuxaudiodev\n: Support for the /dev/audio\ndevice on Linux, a twin to the existing sunaudiodev\nmodule. (Contributed by Peter Bosch, with fixes by Jeremy Hylton.)\nmmap\n: An interface to memory-mapped files on both Windows and Unix. A file\u2019s contents can be mapped directly into memory, at which point it behaves like a mutable string, so its contents can be read and modified. They can even be passed to functions that expect ordinary strings, such as the re\nmodule. (Contributed by Sam Rushing, with some extensions by A.M. Kuchling.)\npyexpat\n: An interface to the Expat XML parser. (Contributed by Paul Prescod.)\nrobotparser\n: Parse a robots.txt\nfile, which is used for writing web spiders that politely avoid certain areas of a web site. The parser accepts the contents of a robots.txt\nfile, builds a set of rules from it, and can then answer questions about the fetchability of a given URL. (Contributed by Skip Montanaro.)\ntabnanny\n: A module/script to check Python source code for ambiguous indentation. (Contributed by Tim Peters.)\nUserString\n: A base class useful for deriving objects that behave like strings.\nwebbrowser\n: A module that provides a platform independent way to launch a web browser on a specific URL. For each platform, various browsers are tried in a specific order. The user can alter which browser is launched by setting the BROWSER environment variable. (Originally inspired by Eric S. 
Raymond\u2019s patch to urllib\nwhich added similar functionality, but the final module comes from code originally implemented by Fred Drake as Tools/idle/BrowserControl.py\n, and adapted for the standard library by Fred.)\n_winreg\n: An interface to the Windows registry. _winreg\nis an adaptation of functions that have been part of PythonWin since 1995, but has now been added to the core distribution, and enhanced to support Unicode. _winreg\nwas written by Bill Tutt and Mark Hammond.\nzipfile\n: A module for reading and writing ZIP-format archives. These are archives produced by PKZIP on DOS/Windows or zip on Unix, not to be confused with gzip-format files (which are supported by the gzip\nmodule). (Contributed by James C. Ahlstrom.)\nimputil\n: A module that provides a simpler way for writing customized import hooks, in comparison to the existing ihooks\nmodule. (Implemented by Greg Stein, with much discussion on python-dev along the way.)\nIDLE Improvements\u00b6\nIDLE is the official Python cross-platform IDE, written using Tkinter. Python 2.0 includes IDLE 0.6, which adds a number of new features and improvements. A partial list:\nUI improvements and optimizations, especially in the area of syntax highlighting and auto-indentation.\nThe class browser now shows more information, such as the top level functions in a module.\nTab width is now a user settable option. 
When opening an existing Python file, IDLE automatically detects the indentation conventions, and adapts.\nThere is now support for calling browsers on various platforms, used to open the Python documentation in a browser.\nIDLE now has a command line, which is largely similar to the vanilla Python interpreter.\nCall tips were added in many places.\nIDLE can now be installed as a package.\nIn the editor window, there is now a line/column bar at the bottom.\nThree new keystroke commands: Check module (Alt-F5), Import module (F5) and Run script (Ctrl-F5).\nDeleted and Deprecated Modules\u00b6\nA few modules have been dropped because they\u2019re obsolete, or because there are\nnow better ways to do the same thing. The stdwin\nmodule is gone; it was\nfor a platform-independent windowing toolkit that\u2019s no longer developed.\nA number of modules have been moved to the lib-old\nsubdirectory:\ncmp\n, cmpcache\n, dircmp\n, dump\n, find\n,\ngrep\n, packmail\n, poly\n, util\n, whatsound\n,\nzmod\n. 
If you have code which relies on a module that\u2019s been moved to\nlib-old\n, you can simply add that directory to sys.path\nto get them\nback, but you\u2019re encouraged to update any code that uses these modules.\nAcknowledgements\u00b6\nThe authors would like to thank the following people for offering suggestions on various drafts of this article: David Bolen, Mark Hammond, Gregg Hauser, Jeremy Hylton, Fredrik Lundh, Detlef Lannert, Aahz Maruch, Skip Montanaro, Vladimir Marangozov, Tobias Polzin, Guido van Rossum, Neil Schemenauer, and Russ Schmidt.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 13399}
{"url": "https://docs.python.org/3/reference/import.html", "title": "The import system", "content": "5. The import system\u00b6\nPython code in one module gains access to the code in another module\nby the process of importing it. The import\nstatement is\nthe most common way of invoking the import machinery, but it is not the only\nway. Functions such as importlib.import_module()\nand built-in\n__import__()\ncan also be used to invoke the import machinery.\nThe import\nstatement combines two operations; it searches for the\nnamed module, then it binds the results of that search to a name in the local\nscope. The search operation of the import\nstatement is defined as\na call to the __import__()\nfunction, with the appropriate arguments.\nThe return value of __import__()\nis used to perform the name\nbinding operation of the import\nstatement. See the\nimport\nstatement for the exact details of that name binding\noperation.\nA direct call to __import__()\nperforms only the module search and, if\nfound, the module creation operation. While certain side-effects may occur,\nsuch as the importing of parent packages, and the updating of various caches\n(including sys.modules\n), only the import\nstatement performs\na name binding operation.\nWhen an import\nstatement is executed, the standard builtin\n__import__()\nfunction is called. 
Other mechanisms for invoking the\nimport system (such as importlib.import_module()\n) may choose to bypass\n__import__()\nand use their own solutions to implement import semantics.\nWhen a module is first imported, Python searches for the module and if found,\nit creates a module object [1], initializing it. If the named module\ncannot be found, a ModuleNotFoundError\nis raised. Python implements various\nstrategies to search for the named module when the import machinery is\ninvoked. These strategies can be modified and extended by using various hooks\ndescribed in the sections below.\nChanged in version 3.3: The import system has been updated to fully implement the second phase\nof PEP 302. There is no longer any implicit import machinery - the full\nimport system is exposed through sys.meta_path\n. In addition,\nnative namespace package support has been implemented (see PEP 420).\n5.1. importlib\n\u00b6\nThe importlib\nmodule provides a rich API for interacting with the\nimport system. For example importlib.import_module()\nprovides a\nrecommended, simpler API than built-in __import__()\nfor invoking the\nimport machinery. Refer to the importlib\nlibrary documentation for\nadditional detail.\n5.2. Packages\u00b6\nPython has only one type of module object, and all modules are of this type, regardless of whether the module is implemented in Python, C, or something else. To help organize modules and provide a naming hierarchy, Python has a concept of packages.\nYou can think of packages as the directories on a file system and modules as files within directories, but don\u2019t take this analogy too literally since packages and modules need not originate from the file system. For the purposes of this documentation, we\u2019ll use this convenient analogy of directories and files. 
Like file system directories, packages are organized hierarchically, and packages may themselves contain subpackages, as well as regular modules.\nIt\u2019s important to keep in mind that all packages are modules, but not all\nmodules are packages. Or put another way, packages are just a special kind of\nmodule. Specifically, any module that contains a __path__\nattribute is\nconsidered a package.\nAll modules have a name. Subpackage names are separated from their parent\npackage name by a dot, akin to Python\u2019s standard attribute access syntax. Thus\nyou might have a package called email\n, which in turn has a subpackage\ncalled email.mime\nand a module within that subpackage called\nemail.mime.text\n.\n5.2.1. Regular packages\u00b6\nPython defines two types of packages, regular packages and namespace packages. Regular\npackages are traditional packages as they existed in Python 3.2 and earlier.\nA regular package is typically implemented as a directory containing an\n__init__.py\nfile. When a regular package is imported, this\n__init__.py\nfile is implicitly executed, and the objects it defines are\nbound to names in the package\u2019s namespace. The __init__.py\nfile can\ncontain the same Python code that any other module can contain, and Python\nwill add some additional attributes to the module when it is imported.\nFor example, the following file system layout defines a top level parent\npackage with three subpackages:\nparent/\n__init__.py\none/\n__init__.py\ntwo/\n__init__.py\nthree/\n__init__.py\nImporting parent.one\nwill implicitly execute parent/__init__.py\nand\nparent/one/__init__.py\n. Subsequent imports of parent.two\nor\nparent.three\nwill execute parent/two/__init__.py\nand\nparent/three/__init__.py\nrespectively.\n5.2.2. Namespace packages\u00b6\nA namespace package is a composite of various portions, where each portion contributes a subpackage to the parent package. Portions may reside in different locations on the file system. 
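The rule that a package is simply a module carrying a __path__ attribute can be checked directly on the standard library's own email package:

```python
import email
import email.mime.text

assert hasattr(email, "__path__")                # a package
assert hasattr(email.mime, "__path__")           # a subpackage is a package too
assert not hasattr(email.mime.text, "__path__")  # a plain module is not
```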
Portions may also be found in zip files, on the network, or anywhere else that Python searches during import. Namespace packages may or may not correspond directly to objects on the file system; they may be virtual modules that have no concrete representation.\nNamespace packages do not use an ordinary list for their __path__\nattribute. They instead use a custom iterable type which will automatically\nperform a new search for package portions on the next import attempt within\nthat package if the path of their parent package (or sys.path\nfor a\ntop level package) changes.\nWith namespace packages, there is no parent/__init__.py\nfile. In fact,\nthere may be multiple parent\ndirectories found during import search, where\neach one is provided by a different portion. Thus parent/one\nmay not be\nphysically located next to parent/two\n. In this case, Python will create a\nnamespace package for the top-level parent\npackage whenever it or one of\nits subpackages is imported.\nSee also PEP 420 for the namespace package specification.\n5.3. Searching\u00b6\nTo begin the search, Python needs the fully qualified\nname of the module (or package, but for the purposes of this discussion, the\ndifference is immaterial) being imported. This name may come from various\narguments to the import\nstatement, or from the parameters to the\nimportlib.import_module()\nor __import__()\nfunctions.\nThis name will be used in various phases of the import search, and it may be\nthe dotted path to a submodule, e.g. foo.bar.baz\n. In this case, Python\nfirst tries to import foo\n, then foo.bar\n, and finally foo.bar.baz\n.\nIf any of the intermediate imports fail, a ModuleNotFoundError\nis raised.\n5.3.1. The module cache\u00b6\nThe first place checked during import search is sys.modules\n. This\nmapping serves as a cache of all modules that have been previously imported,\nincluding the intermediate paths. 
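This caching of intermediate packages can be observed with any dotted import from the standard library (the email package is used here only as a convenient example):

```python
import sys
import email.mime.text

# The dotted import caches every intermediate package as well.
for name in ("email", "email.mime", "email.mime.text"):
    assert name in sys.modules
    assert sys.modules[name] is not None
```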
So if foo.bar.baz\nwas previously\nimported, sys.modules\nwill contain entries for foo\n, foo.bar\n,\nand foo.bar.baz\n. Each key will have as its value the corresponding module\nobject.\nDuring import, the module name is looked up in sys.modules\nand if\npresent, the associated value is the module satisfying the import, and the\nprocess completes. However, if the value is None\n, then a\nModuleNotFoundError\nis raised. If the module name is missing, Python will\ncontinue searching for the module.\nsys.modules\nis writable. Deleting a key may not destroy the\nassociated module (as other modules may hold references to it),\nbut it will invalidate the cache entry for the named module, causing\nPython to search anew for the named module upon its next\nimport. The key can also be assigned to None\n, forcing the next import\nof the module to result in a ModuleNotFoundError\n.\nBeware though, as if you keep a reference to the module object,\ninvalidate its cache entry in sys.modules\n, and then re-import the\nnamed module, the two module objects will not be the same. By contrast,\nimportlib.reload()\nwill reuse the same module object, and simply\nreinitialise the module contents by rerunning the module\u2019s code.\n5.3.2. Finders and loaders\u00b6\nIf the named module is not found in sys.modules\n, then Python\u2019s import\nprotocol is invoked to find and load the module. This protocol consists of\ntwo conceptual objects, finders and loaders.\nA finder\u2019s job is to determine whether it can find the named module using\nwhatever strategy it knows about. Objects that implement both of these\ninterfaces are referred to as importers - they return\nthemselves when they find that they can load the requested module.\nPython includes a number of default finders and importers. The first one knows how to locate built-in modules, and the second knows how to locate frozen modules. A third default finder searches an import path for modules. 
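The difference described above between importlib.reload() and invalidating the cache entry can be demonstrated directly (json is an arbitrary pure-Python stdlib module):

```python
import importlib
import json
import sys

original = json

# reload() reuses the existing module object, rerunning its code in place.
assert importlib.reload(json) is original

# Dropping the cache entry and importing again builds a brand-new object.
del sys.modules["json"]
import json as json2
assert json2 is not original
```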
The import path is a list of locations that may name file system paths or zip files. It can also be extended to search for any locatable resource, such as those identified by URLs.\nThe import machinery is extensible, so new finders can be added to extend the range and scope of module searching.\nFinders do not actually load modules. If they can find the named module, they return a module spec, an encapsulation of the module\u2019s import-related information, which the import machinery then uses when loading the module.\nThe following sections describe the protocol for finders and loaders in more detail, including how you can create and register new ones to extend the import machinery.\nChanged in version 3.4: In previous versions of Python, finders returned loaders directly, whereas now they return module specs which contain loaders. Loaders are still used during import but have fewer responsibilities.\n5.3.3. Import hooks\u00b6\nThe import machinery is designed to be extensible; the primary mechanism for this are the import hooks. There are two types of import hooks: meta hooks and import path hooks.\nMeta hooks are called at the start of import processing, before any other\nimport processing has occurred, other than sys.modules\ncache look up.\nThis allows meta hooks to override sys.path\nprocessing, frozen\nmodules, or even built-in modules. Meta hooks are registered by adding new\nfinder objects to sys.meta_path\n, as described below.\nImport path hooks are called as part of sys.path\n(or\npackage.__path__\n) processing, at the point where their associated path\nitem is encountered. Import path hooks are registered by adding new callables\nto sys.path_hooks\nas described below.\n5.3.4. The meta path\u00b6\nWhen the named module is not found in sys.modules\n, Python next\nsearches sys.meta_path\n, which contains a list of meta path finder\nobjects. These finders are queried in order to see if they know how to handle\nthe named module. 
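A module spec can be requested without loading anything, via importlib.util.find_spec(); this runs the finder machinery and returns the spec the import system would use:

```python
import importlib.util

# Ask the finders for a spec without importing the module.
spec = importlib.util.find_spec("json")
assert spec is not None
assert spec.name == "json"
print(spec.loader)  # the loader the import machinery would hand the module to
```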
Meta path finders must implement a method called\nfind_spec()\nwhich takes three arguments:\na name, an import path, and (optionally) a target module. The meta path\nfinder can use any strategy it wants to determine whether it can handle\nthe named module or not.\nIf the meta path finder knows how to handle the named module, it returns a\nspec object. If it cannot handle the named module, it returns None\n. If\nsys.meta_path\nprocessing reaches the end of its list without returning\na spec, then a ModuleNotFoundError\nis raised. Any other exceptions\nraised are simply propagated up, aborting the import process.\nThe find_spec()\nmethod of meta path\nfinders is called with two or three arguments. The first is the fully\nqualified name of the module being imported, for example foo.bar.baz\n.\nThe second argument is the path entries to use for the module search. For\ntop-level modules, the second argument is None\n, but for submodules or\nsubpackages, the second argument is the value of the parent package\u2019s\n__path__\nattribute. If the appropriate __path__\nattribute cannot\nbe accessed, a ModuleNotFoundError\nis raised. The third argument\nis an existing module object that will be the target of loading later.\nThe import system passes in a target module only during reload.\nThe meta path may be traversed multiple times for a single import request.\nFor example, assuming none of the modules involved has already been cached,\nimporting foo.bar.baz\nwill first perform a top level import, calling\nmpf.find_spec(\"foo\", None, None)\non each meta path finder (mpf\n). After\nfoo\nhas been imported, foo.bar\nwill be imported by traversing the\nmeta path a second time, calling\nmpf.find_spec(\"foo.bar\", foo.__path__, None)\n. Once foo.bar\nhas been\nimported, the final traversal will call\nmpf.find_spec(\"foo.bar.baz\", foo.bar.__path__, None)\n.\nSome meta path finders only support top level imports. 
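A do-nothing meta path finder shows the find_spec() protocol in action: this logger records every module name the import system asks about, then declines by returning None so the remaining finders take over (the module name colorsys is an arbitrary choice):

```python
import importlib
import importlib.abc
import sys

class ImportLogger(importlib.abc.MetaPathFinder):
    """Records each fully qualified name searched, then defers."""
    def __init__(self):
        self.seen = []

    def find_spec(self, fullname, path, target=None):
        self.seen.append(fullname)
        return None  # decline: let the remaining finders handle it

logger = ImportLogger()
sys.meta_path.insert(0, logger)
try:
    sys.modules.pop("colorsys", None)   # make sure the cache is cold
    importlib.import_module("colorsys")
finally:
    sys.meta_path.remove(logger)

assert "colorsys" in logger.seen
```

Note that the finder is only consulted because the sys.modules cache was cleared first; a cached module never reaches the meta path.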
These importers will\nalways return None\nwhen anything other than None\nis passed as the\nsecond argument.\nPython\u2019s default sys.meta_path\nhas three meta path finders, one that\nknows how to import built-in modules, one that knows how to import frozen\nmodules, and one that knows how to import modules from an import path\n(i.e. the path based finder).\nChanged in version 3.4: The find_spec()\nmethod of meta path\nfinders replaced find_module()\n, which\nis now deprecated. While it will continue to work without change, the\nimport machinery will try it only if the finder does not implement\nfind_spec()\n.\nChanged in version 3.10: Use of find_module()\nby the import system\nnow raises ImportWarning\n.\nChanged in version 3.12: find_module()\nhas been removed.\nUse find_spec()\ninstead.\n5.4. Loading\u00b6\nIf and when a module spec is found, the import machinery will use it (and the loader it contains) when loading the module. Here is an approximation of what happens during the loading portion of import:\nmodule = None\nif spec.loader is not None and hasattr(spec.loader, 'create_module'):\n    # It is assumed 'exec_module' will also be defined on the loader.\n    module = spec.loader.create_module(spec)\nif module is None:\n    module = ModuleType(spec.name)\n# The import-related module attributes get set here:\n_init_module_attrs(spec, module)\n\nif spec.loader is None:\n    # unsupported\n    raise ImportError\nif spec.origin is None and spec.submodule_search_locations is not None:\n    # namespace package\n    sys.modules[spec.name] = module\nelif not hasattr(spec.loader, 'exec_module'):\n    module = spec.loader.load_module(spec.name)\nelse:\n    sys.modules[spec.name] = module\n    try:\n        spec.loader.exec_module(module)\n    except BaseException:\n        try:\n            del sys.modules[spec.name]\n        except KeyError:\n            pass\n        raise\nreturn sys.modules[spec.name]\nNote the following details:\nIf there is an existing module object with the given name in\nsys.modules\n, import will have already returned it.\nThe module will 
exist in\nsys.modules\nbefore the loader executes the module code. This is crucial because the module code may (directly or indirectly) import itself; adding it to sys.modules\nbeforehand prevents unbounded recursion in the worst case and multiple loading in the best.\nIf loading fails, the failing module \u2013 and only the failing module \u2013 gets removed from\nsys.modules\n. Any module already in the sys.modules\ncache, and any module that was successfully loaded as a side-effect, must remain in the cache. This contrasts with reloading where even the failing module is left in sys.modules\n.\nAfter the module is created but before execution, the import machinery sets the import-related module attributes (\u201c_init_module_attrs\u201d in the pseudo-code example above), as summarized in a later section.\nModule execution is the key moment of loading in which the module\u2019s namespace gets populated. Execution is entirely delegated to the loader, which gets to decide what gets populated and how.\nThe module created during loading and passed to exec_module() may not be the one returned at the end of import [2].\nChanged in version 3.4: The import system has taken over the boilerplate responsibilities of\nloaders. These were previously performed by the\nimportlib.abc.Loader.load_module()\nmethod.\n5.4.1. Loaders\u00b6\nModule loaders provide the critical function of loading: module execution.\nThe import machinery calls the importlib.abc.Loader.exec_module()\nmethod with a single argument, the module object to execute. 
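The loading sequence above can be driven by hand with helpers from importlib.util, mirroring the same steps: find a spec, create the module, cache it, and only then execute it (colorsys is an arbitrary stdlib module):

```python
import importlib.util
import sys

spec = importlib.util.find_spec("colorsys")
module = importlib.util.module_from_spec(spec)
sys.modules[spec.name] = module      # cached *before* execution
spec.loader.exec_module(module)

import colorsys
assert colorsys is module            # the import statement now hits the cache
```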
Any value\nreturned from exec_module()\nis ignored.\nLoaders must satisfy the following requirements:\nIf the module is a Python module (as opposed to a built-in module or a dynamically loaded extension), the loader should execute the module\u2019s code in the module\u2019s global name space (module.__dict__\n).\nIf the loader cannot execute the module, it should raise an\nImportError\n, although any other exception raised during exec_module()\nwill be propagated.\nIn many cases, the finder and loader can be the same object; in such cases the\nfind_spec()\nmethod would just return a\nspec with the loader set to self\n.\nModule loaders may opt in to creating the module object during loading\nby implementing a create_module()\nmethod.\nIt takes one argument, the module spec, and returns the new module object\nto use during loading. create_module()\ndoes not need to set any attributes\non the module object. If the method returns None\n, the\nimport machinery will create the new module itself.\nAdded in version 3.4: The create_module()\nmethod of loaders.\nChanged in version 3.4: The load_module()\nmethod was replaced by\nexec_module()\nand the import\nmachinery assumed all the boilerplate responsibilities of loading.\nFor compatibility with existing loaders, the import machinery will use\nthe load_module()\nmethod of loaders if it exists and the loader does\nnot also implement exec_module()\n. However, load_module()\nhas been\ndeprecated and loaders should implement exec_module()\ninstead.\nThe load_module()\nmethod must implement all the boilerplate loading\nfunctionality described above in addition to executing the module. All\nthe same constraints apply, with some additional clarification:\nIf there is an existing module object with the given name in\nsys.modules\n, the loader must use that existing module. (Otherwise, importlib.reload()\nwill not work correctly.) 
If the named module does not exist in\nsys.modules\n, the loader must create a new module object and add it to\nsys.modules\n.\nThe module must exist in\nsys.modules\nbefore the loader executes the module code, to prevent unbounded recursion or multiple loading.\nIf loading fails, the loader must remove any modules it has inserted into\nsys.modules\n, but it must remove only the failing module(s), and only if the loader itself has loaded the module(s) explicitly.\nChanged in version 3.5: A DeprecationWarning\nis raised when exec_module()\nis defined but\ncreate_module()\nis not.\nChanged in version 3.6: An ImportError\nis raised when exec_module()\nis defined but\ncreate_module()\nis not.\nChanged in version 3.10: Use of load_module()\nwill raise ImportWarning\n.\n5.4.2. Submodules\u00b6\nWhen a submodule is loaded using any mechanism (e.g. importlib\nAPIs, the\nimport\nor import-from\nstatements, or built-in __import__()\n) a\nbinding is placed in the parent module\u2019s namespace to the submodule object.\nFor example, if package spam\nhas a submodule foo\n, after importing\nspam.foo\n, spam\nwill have an attribute foo\nwhich is bound to the\nsubmodule. Let\u2019s say you have the following directory structure:\nspam/\n__init__.py\nfoo.py\nand spam/__init__.py\nhas the following line in it:\nfrom .foo import Foo\nthen executing the following puts name bindings for foo\nand Foo\nin the\nspam\nmodule:\n>>> import spam\n>>> spam.foo\n<module 'spam.foo' from '.../spam/foo.py'>\n>>> spam.Foo\n<class 'spam.foo.Foo'>\nGiven Python\u2019s familiar name binding rules this might seem surprising, but\nit\u2019s actually a fundamental feature of the import system. The invariant\nholding is that if you have sys.modules['spam']\nand\nsys.modules['spam.foo']\n(as you would after the above import), the latter\nmust appear as the foo\nattribute of the former.\n5.4.3. Module specs\u00b6\nThe import machinery uses a variety of information about each module during import, especially before loading. Most of the information is common to all modules.
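As an illustration, this per-module import information can be inspected at runtime through importlib; json and email are stdlib modules used purely for demonstration:

```python
import importlib.util
import json

# Every loaded module exposes its spec as __spec__.
spec = json.__spec__
print(spec.name)    # json
print(spec.origin)  # filesystem path of json/__init__.py
print(spec.loader)  # the loader the import machinery used

# find_spec() performs only the finder step, returning the spec a
# fresh import would use, without executing the module.
mail_spec = importlib.util.find_spec("email")
print(mail_spec.submodule_search_locations)  # the package's search locations
```

Because email is a package, its spec carries submodule_search_locations; for a plain module that attribute is None.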
The purpose of a module\u2019s spec is to encapsulate this import-related information on a per-module basis.\nUsing a spec during import allows state to be transferred between import system components, e.g. between the finder that creates the module spec and the loader that executes it. Most importantly, it allows the import machinery to perform the boilerplate operations of loading, whereas without a module spec the loader had that responsibility.\nThe module\u2019s spec is exposed as module.__spec__\n. Setting\n__spec__\nappropriately applies equally to\nmodules initialized during interpreter startup.\nThe one exception is __main__\n, where __spec__\nis\nset to None in some cases.\nSee ModuleSpec\nfor details on the contents of\nthe module spec.\nAdded in version 3.4.\n5.4.4. __path__ attributes on modules\u00b6\nThe __path__\nattribute should be a (possibly empty)\nsequence of strings enumerating the locations where the package\u2019s\nsubmodules will be found. By definition, if a module has a __path__\nattribute, it is a package.\nA package\u2019s __path__\nattribute is used during imports of its\nsubpackages.\nWithin the import machinery, it functions much the same as sys.path\n,\ni.e. providing a list of locations to search for modules during import.\nHowever, __path__\nis typically much more constrained than\nsys.path\n.\nThe same rules used for sys.path\nalso apply to a package\u2019s\n__path__\n. sys.path_hooks\n(described below) are\nconsulted when traversing a package\u2019s __path__\n.\nA package\u2019s __init__.py\nfile may set or alter the package\u2019s\n__path__\nattribute, and this was typically the way namespace packages were implemented\nprior to PEP 420. With the adoption of PEP 420, namespace packages no\nlonger need to supply __init__.py\nfiles containing only __path__\nmanipulation code; the import machinery automatically sets __path__\ncorrectly for the namespace package.\n5.4.5. 
Module reprs\u00b6\nBy default, all modules have a usable repr; however, depending on the attributes set above, and in the module\u2019s spec, you can more explicitly control the repr of module objects.\nIf the module has a spec (__spec__\n), the import machinery will try\nto generate a repr from it. If that fails or there is no spec, the import\nsystem will craft a default repr using whatever information is available\non the module. It will try to use the module.__name__\n,\nmodule.__file__\n, and module.__loader__\nas input into the repr,\nwith defaults for whatever information is missing.\nHere are the exact rules used:\nIf the module has a\n__spec__\nattribute, the information in the spec is used to generate the repr. The \u201cname\u201d, \u201cloader\u201d, \u201corigin\u201d, and \u201chas_location\u201d attributes are consulted.\nIf the module has a\n__file__\nattribute, this is used as part of the module\u2019s repr.\nIf the module has no\n__file__\nbut does have a\n__loader__\nthat is not\nNone\n, then the loader\u2019s repr is used as part of the module\u2019s repr.\nOtherwise, just use the module\u2019s\n__name__\nin the repr.\nChanged in version 3.12: Use of module_repr()\n, having been deprecated since Python 3.4, was\nremoved in Python 3.12 and is no longer called during the resolution of a\nmodule\u2019s repr.\n5.4.6. Cached bytecode invalidation\u00b6\nBefore Python loads cached bytecode from a .pyc\nfile, it checks whether the\ncache is up-to-date with the source .py\nfile. By default, Python does this\nby storing the source\u2019s last-modified timestamp and size in the cache file when\nwriting it. At runtime, the import system then validates the cache file by\nchecking the stored metadata in the cache file against the source\u2019s\nmetadata.\nPython also supports \u201chash-based\u201d cache files, which store a hash of the source\nfile\u2019s contents rather than its metadata. There are two variants of hash-based\n.pyc\nfiles: checked and unchecked.
For checked hash-based .pyc\nfiles,\nPython validates the cache file by hashing the source file and comparing the\nresulting hash with the hash in the cache file. If a checked hash-based cache\nfile is found to be invalid, Python regenerates it and writes a new checked\nhash-based cache file. For unchecked hash-based .pyc\nfiles, Python simply\nassumes the cache file is valid if it exists. Hash-based .pyc\nfiles\nvalidation behavior may be overridden with the --check-hash-based-pycs\nflag.\nChanged in version 3.7: Added hash-based .pyc\nfiles. Previously, Python only supported\ntimestamp-based invalidation of bytecode caches.\n5.5. The Path Based Finder\u00b6\nAs mentioned previously, Python comes with several default meta path finders.\nOne of these, called the path based finder\n(PathFinder\n), searches an import path,\nwhich contains a list of path entries. Each path\nentry names a location to search for modules.\nThe path based finder itself doesn\u2019t know how to import anything. Instead, it traverses the individual path entries, associating each of them with a path entry finder that knows how to handle that particular kind of path.\nThe default set of path entry finders implement all the semantics for finding\nmodules on the file system, handling special file types such as Python source\ncode (.py\nfiles), Python byte code (.pyc\nfiles) and\nshared libraries (e.g. .so\nfiles). When supported by the zipimport\nmodule in the standard library, the default path entry finders also handle\nloading all of these file types (other than shared libraries) from zipfiles.\nPath entries need not be limited to file system locations. They can refer to URLs, database queries, or any other location that can be specified as a string.\nThe path based finder provides additional hooks and protocols so that you can extend and customize the types of searchable path entries. 
For example, if you wanted to support path entries as network URLs, you could write a hook that implements HTTP semantics to find modules on the web. This hook (a callable) would return a path entry finder supporting the protocol described below, which is then used to get a loader for the module from the web.\nA word of warning: this section and the previous both use the term finder,\ndistinguishing between them by using the terms meta path finder and\npath entry finder. These two types of finders are very similar,\nsupport similar protocols, and function in similar ways during the import\nprocess, but it\u2019s important to keep in mind that they are subtly different.\nIn particular, meta path finders operate at the beginning of the import\nprocess, as keyed off the sys.meta_path\ntraversal.\nBy contrast, path entry finders are in a sense an implementation detail\nof the path based finder, and in fact, if the path based finder were to be\nremoved from sys.meta_path\n, none of the path entry finder semantics\nwould be invoked.\n5.5.1. Path entry finders\u00b6\nThe path based finder is responsible for finding and loading Python modules and packages whose location is specified with a string path entry. Most path entries name locations in the file system, but they need not be limited to this.\nAs a meta path finder, the path based finder implements the\nfind_spec()\nprotocol previously\ndescribed; however, it exposes additional hooks that can be used to\ncustomize how modules are found and loaded from the import path.\nThree variables are used by the path based finder, sys.path\n,\nsys.path_hooks\nand sys.path_importer_cache\n. The __path__\nattributes on package objects are also used. These provide additional ways\nthat the import machinery can be customized.\nsys.path\ncontains a list of strings providing search locations for\nmodules and packages.
It is initialized from the PYTHONPATH\nenvironment variable and various other installation- and\nimplementation-specific defaults. Entries in sys.path\ncan name\ndirectories on the file system, zip files, and potentially other \u201clocations\u201d\n(see the site\nmodule) that should be searched for modules, such as\nURLs, or database queries. Only strings should be present on\nsys.path\n; all other data types are ignored.\nThe path based finder is a meta path finder, so the import\nmachinery begins the import path search by calling the path\nbased finder\u2019s find_spec()\nmethod as\ndescribed previously. When the path\nargument to\nfind_spec()\nis given, it will be a\nlist of string paths to traverse - typically a package\u2019s __path__\nattribute for an import within that package. If the path\nargument is\nNone\n, this indicates a top level import and sys.path\nis used.\nThe path based finder iterates over every entry in the search path, and\nfor each of these, looks for an appropriate path entry finder\n(PathEntryFinder\n) for the\npath entry. Because this can be an expensive operation (e.g. there may be\nstat()\ncall overheads for this search), the path based finder maintains\na cache mapping path entries to path entry finders. This cache is maintained\nin sys.path_importer_cache\n(despite the name, this cache actually\nstores finder objects rather than being limited to importer objects).\nIn this way, the expensive search for a particular path entry\nlocation\u2019s path entry finder need only be done once. User code is\nfree to remove cache entries from sys.path_importer_cache\nforcing\nthe path based finder to perform the path entry search again.\nIf the path entry is not present in the cache, the path based finder iterates\nover every callable in sys.path_hooks\n. Each of the path entry\nhooks in this list is called with a single argument, the\npath entry to be searched. 
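A hook of this shape might look like the following sketch; the ".demo" suffix and all names are invented markers, and the finder is deliberately inert:

```python
import sys

class DemoPathEntryFinder:
    """Toy path entry finder. A real one would implement find_spec()
    to locate modules under its path entry."""

    def __init__(self, entry):
        self.entry = entry

    def find_spec(self, fullname, target=None):
        return None  # this illustrative finder never finds anything

def demo_hook(entry):
    # A hook receives one argument: the path entry to be searched.
    if not entry.endswith(".demo"):
        # Declining with ImportError lets the path based finder move
        # on to the next hook in sys.path_hooks.
        raise ImportError(f"{entry!r} is not a .demo entry")
    return DemoPathEntryFinder(entry)

# Hooks are consulted in order; clearing the importer cache forces
# already-seen path entries to be re-searched with the new hook.
sys.path_hooks.insert(0, demo_hook)
sys.path_importer_cache.clear()
```

Installing the hook first in sys.path_hooks means it is consulted before the default filesystem and zipimport hooks, but since it declines everything except ".demo" entries, normal imports are unaffected.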
This callable may either return a path\nentry finder that can handle the path entry, or it may raise\nImportError\n. An ImportError\nis used by the path based finder to\nsignal that the hook cannot find a path entry finder\nfor that path entry. The\nexception is ignored and import path iteration continues. The hook\nshould expect either a string or bytes object; the encoding of bytes objects\nis up to the hook (e.g. it may be a file system encoding, UTF-8, or something\nelse), and if the hook cannot decode the argument, it should raise\nImportError\n.\nIf sys.path_hooks\niteration ends with no path entry finder\nbeing returned, then the path based finder\u2019s\nfind_spec()\nmethod will store None\nin sys.path_importer_cache\n(to indicate that there is no finder for\nthis path entry) and return None\n, indicating that this\nmeta path finder could not find the module.\nIf a path entry finder is returned by one of the path entry\nhook callables on sys.path_hooks\n, then the following protocol is used\nto ask the finder for a module spec, which is then used when loading the\nmodule.\nThe current working directory \u2013 denoted by an empty string \u2013 is handled\nslightly differently from other entries on sys.path\n. First, if the\ncurrent working directory cannot be determined or is found not to exist, no\nvalue is stored in sys.path_importer_cache\n. Second, the value for the\ncurrent working directory is looked up fresh for each module lookup. Third,\nthe path used for sys.path_importer_cache\nand returned by\nimportlib.machinery.PathFinder.find_spec()\nwill be the actual current\nworking directory and not the empty string.\n5.5.2. 
Path entry finder protocol\u00b6\nIn order to support imports of modules and initialized packages and also to\ncontribute portions to namespace packages, path entry finders must implement\nthe find_spec()\nmethod.\nfind_spec()\ntakes two arguments: the\nfully qualified name of the module being imported, and the (optional) target\nmodule. find_spec()\nreturns a fully populated spec for the module.\nThis spec will always have \u201cloader\u201d set (with one exception).\nTo indicate to the import machinery that the spec represents a namespace\nportion, the path entry finder sets submodule_search_locations\nto\na list containing the portion.\nChanged in version 3.4: find_spec()\nreplaced\nfind_loader()\nand\nfind_module()\n, both of which\nare now deprecated, but will be used if find_spec()\nis not defined.\nOlder path entry finders may implement one of these two deprecated methods\ninstead of find_spec()\n. The methods are still respected for the\nsake of backward compatibility. However, if find_spec()\nis\nimplemented on the path entry finder, the legacy methods are ignored.\nfind_loader()\ntakes one argument, the\nfully qualified name of the module being imported. find_loader()\nreturns a 2-tuple where the first item is the loader and the second item\nis a namespace portion.\nFor backwards compatibility with other implementations of the import\nprotocol, many path entry finders also support the same,\ntraditional find_module()\nmethod that meta path finders support.\nHowever path entry finder find_module()\nmethods are never called\nwith a path\nargument (they are expected to record the appropriate\npath information from the initial call to the path hook).\nThe find_module()\nmethod on path entry finders is deprecated,\nas it does not allow the path entry finder to contribute portions to\nnamespace packages. 
If both find_loader()\nand find_module()\nexist on a path entry finder, the import system will always call\nfind_loader()\nin preference to find_module()\n.\nChanged in version 3.10: Calls to find_module()\nand\nfind_loader()\nby the import\nsystem will raise ImportWarning\n.\nChanged in version 3.12: find_module()\nand find_loader()\nhave been removed.\n5.6. Replacing the standard import system\u00b6\nThe most reliable mechanism for replacing the entire import system is to\ndelete the default contents of sys.meta_path\n, replacing them\nentirely with a custom meta path hook.\nIf it is acceptable to only alter the behaviour of import statements\nwithout affecting other APIs that access the import system, then replacing\nthe builtin __import__()\nfunction may be sufficient.\nTo selectively prevent the import of some modules from a hook early on the\nmeta path (rather than disabling the standard import system entirely),\nit is sufficient to raise ModuleNotFoundError\ndirectly from\nfind_spec()\ninstead of returning\nNone\n. The latter indicates that the meta path search should continue,\nwhile raising an exception terminates it immediately.\n5.7. Package Relative Imports\u00b6\nRelative imports use leading dots. A single leading dot indicates a relative import, starting with the current package. Two or more leading dots indicate a relative import to the parent(s) of the current package, one level per dot after the first. For example, given the following package layout:\npackage/\n__init__.py\nsubpackage1/\n__init__.py\nmoduleX.py\nmoduleY.py\nsubpackage2/\n__init__.py\nmoduleZ.py\nmoduleA.py\nIn either subpackage1/moduleX.py\nor subpackage1/__init__.py\n,\nthe following are valid relative imports:\nfrom .moduleY import spam\nfrom .moduleY import spam as ham\nfrom . 
import moduleY\nfrom ..subpackage1 import moduleY\nfrom ..subpackage2.moduleZ import eggs\nfrom ..moduleA import foo\nAbsolute imports may use either the import <>\nor from <> import <>\nsyntax, but relative imports may only use the second form; the reason\nfor this is that:\nimport XXX.YYY.ZZZ\nshould expose XXX.YYY.ZZZ\nas a usable expression, but .moduleY is\nnot a valid expression.\n5.8. Special considerations for __main__\u00b6\nThe __main__\nmodule is a special case relative to Python\u2019s import\nsystem. As noted elsewhere, the __main__\nmodule\nis directly initialized at interpreter startup, much like sys\nand\nbuiltins\n. However, unlike those two, it doesn\u2019t strictly\nqualify as a built-in module. This is because the manner in which\n__main__\nis initialized depends on the flags and other options with\nwhich the interpreter is invoked.\n5.8.1. __main__.__spec__\u00b6\nDepending on how __main__\nis initialized, __main__.__spec__\ngets set appropriately or to None\n.\nWhen Python is started with the -m\noption, __spec__\nis set\nto the module spec of the corresponding module or package. __spec__\nis\nalso populated when the __main__\nmodule is loaded as part of executing a\ndirectory, zipfile or other sys.path\nentry.\nIn the remaining cases\n__main__.__spec__\nis set to None\n, as the code used to populate the\n__main__\ndoes not correspond directly with an importable module:\ninteractive prompt\n-c\noption\nrunning from stdin\nrunning directly from a source or bytecode file\nNote that __main__.__spec__\nis always None\nin the last case,\neven if the file could technically be imported directly as a module\ninstead. Use the -m\nswitch if valid module metadata is desired\nin __main__\n.\nNote also that even when __main__\ncorresponds with an importable module\nand __main__.__spec__\nis set accordingly, they\u2019re still considered\ndistinct modules.
This is due to the fact that blocks guarded by\nif __name__ == \"__main__\":\nchecks only execute when the module is used\nto populate the __main__\nnamespace, and not during normal import.\n5.9. References\u00b6\nThe import machinery has evolved considerably since Python\u2019s early days. The original specification for packages is still available to read, although some details have changed since the writing of that document.\nThe original specification for sys.meta_path\nwas PEP 302, with\nsubsequent extension in PEP 420.\nPEP 420 introduced namespace packages for\nPython 3.3. PEP 420 also introduced the find_loader()\nprotocol as an\nalternative to find_module()\n.\nPEP 366 describes the addition of the __package__\nattribute for\nexplicit relative imports in main modules.\nPEP 328 introduced absolute and explicit relative imports and initially\nproposed __name__\nfor semantics PEP 366 would eventually specify for\n__package__\n.\nPEP 338 defines executing modules as scripts.\nPEP 451 adds the encapsulation of per-module import state in spec objects. It also off-loads most of the boilerplate responsibilities of loaders back onto the import machinery. 
These changes allow the deprecation of several APIs in the import system and also addition of new methods to finders and loaders.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 9279}
{"url": "https://docs.python.org/3/reference/executionmodel.html", "title": "Execution model", "content": "4. Execution model\u00b6\n4.1. Structure of a program\u00b6\nA Python program is constructed from code blocks.\nA block is a piece of Python program text that is executed as a unit.\nThe following are blocks: a module, a function body, and a class definition.\nEach command typed interactively is a block. A script file (a file given as\nstandard input to the interpreter or specified as a command line argument to the\ninterpreter) is a code block. A script command (a command specified on the\ninterpreter command line with the -c\noption) is a code block.\nA module run as a top level script (as module __main__\n) from the command\nline using a -m\nargument is also a code block. The string\nargument passed to the built-in functions eval()\nand exec()\nis a\ncode block.\nA code block is executed in an execution frame. A frame contains some administrative information (used for debugging) and determines where and how execution continues after the code block\u2019s execution has completed.\n4.2. 
Naming and binding\u00b6\n4.2.1. Binding of names\u00b6\nNames refer to objects. Names are introduced by name binding operations.\nThe following constructs bind names:\nformal parameters to functions,\nclass definitions,\nfunction definitions,\nassignment expressions,\ntargets that are identifiers if occurring in an assignment:\nimport\nstatements.\ntype\nstatements.\nThe import\nstatement of the form from ... import *\nbinds all\nnames defined in the imported module, except those beginning with an underscore.\nThis form may only be used at the module level.\nA target occurring in a del\nstatement is also considered bound for\nthis purpose (though the actual semantics are to unbind the name).\nEach assignment or import statement occurs within a block defined by a class or function definition or at the module level (the top-level code block).\nIf a name is bound in a block, it is a local variable of that block, unless\ndeclared as nonlocal\nor global\n. If a name is bound at\nthe module level, it is a global variable. (The variables of the module code\nblock are local and global.) If a variable is used in a code block but not\ndefined there, it is a free variable.\nEach occurrence of a name in the program text refers to the binding of that name established by the following name resolution rules.\n4.2.2. Resolution of names\u00b6\nA scope defines the visibility of a name within a block. If a local variable is defined in a block, its scope includes that block. If the definition occurs in a function block, the scope extends to any blocks contained within the defining one, unless a contained block introduces a different binding for the name.\nWhen a name is used in a code block, it is resolved using the nearest enclosing scope.
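Nearest-enclosing-scope resolution can be seen with a nested function; the names here are illustrative:

```python
x = "global"

def outer():
    x = "enclosing"
    def inner():
        # 'x' is a free variable in inner(); it resolves to the
        # nearest enclosing scope that binds it, which is outer()'s
        # scope, not the module-level binding.
        return x
    return inner()

print(outer())  # enclosing
print(x)        # global
```

Removing the assignment in outer() would make inner()'s use of x resolve to the module-level (global) binding instead.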
The set of all such scopes visible to a code block is called the block\u2019s environment.\nWhen a name is not found at all, a NameError\nexception is raised.\nIf the current scope is a function scope, and the name refers to a local\nvariable that has not yet been bound to a value at the point where the name is\nused, an UnboundLocalError\nexception is raised.\nUnboundLocalError\nis a subclass of NameError\n.\nIf a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block. This can lead to errors when a name is used within a block before it is bound. This rule is subtle. Python lacks declarations and allows name binding operations to occur anywhere within a code block. The local variables of a code block can be determined by scanning the entire text of the block for name binding operations. See the FAQ entry on UnboundLocalError for examples.\nIf the global\nstatement occurs within a block, all uses of the names\nspecified in the statement refer to the bindings of those names in the top-level\nnamespace. Names are resolved in the top-level namespace by searching the\nglobal namespace, i.e. the namespace of the module containing the code block,\nand the builtins namespace, the namespace of the module builtins\n. The\nglobal namespace is searched first. If the names are not found there, the\nbuiltins namespace is searched next. If the names are also not found in the\nbuiltins namespace, new variables are created in the global namespace.\nThe global statement must precede all uses of the listed names.\nThe global\nstatement has the same scope as a name binding operation\nin the same block. 
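Both effects described above, the local-if-bound-anywhere rule and the global statement, can be demonstrated briefly (names are illustrative):

```python
count = 10

def broken():
    # 'count' is assigned later in this block, so every use of it in
    # this block refers to the (not yet bound) local variable.
    try:
        print(count)
    except UnboundLocalError:
        print("count is unbound here")
    count = 20

def fixed():
    global count   # uses of 'count' now refer to the module-level binding
    print(count)   # 10
    count = 20

broken()      # prints "count is unbound here"
fixed()
print(count)  # 20
```

Note that broken() fails even though the module-level count exists: the mere presence of the assignment makes count local throughout the block.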
If the nearest enclosing scope for a free variable contains\na global statement, the free variable is treated as a global.\nThe nonlocal\nstatement causes corresponding names to refer\nto previously bound variables in the nearest enclosing function scope.\nSyntaxError\nis raised at compile time if the given name does not\nexist in any enclosing function scope. Type parameters\ncannot be rebound with the nonlocal\nstatement.\nThe namespace for a module is automatically created the first time a module is\nimported. The main module for a script is always called __main__\n.\nClass definition blocks and arguments to exec()\nand eval()\nare\nspecial in the context of name resolution.\nA class definition is an executable statement that may use and define names.\nThese references follow the normal rules for name resolution with an exception\nthat unbound local variables are looked up in the global namespace.\nThe namespace of the class definition becomes the attribute dictionary of\nthe class. The scope of names defined in a class block is limited to the\nclass block; it does not extend to the code blocks of methods. This includes\ncomprehensions and generator expressions, but it does not include\nannotation scopes,\nwhich have access to their enclosing class scopes.\nThis means that the following will fail:\nclass A:\na = 42\nb = list(a + i for i in range(10))\nHowever, the following will succeed:\nclass A:\ntype Alias = Nested\nclass Nested: pass\nprint(A.Alias.__value__) # <class 'A.Nested'>\n4.2.3. Annotation scopes\u00b6\nAnnotations, type parameter lists\nand type\nstatements\nintroduce annotation scopes, which behave mostly like function scopes,\nbut with some exceptions discussed below.\nAnnotation scopes are used in the following contexts:\nType parameter lists for generic type aliases.\nType parameter lists for generic functions.
A generic function\u2019s annotations are executed within the annotation scope, but its defaults and decorators are not.\nType parameter lists for generic classes. A generic class\u2019s base classes and keyword arguments are executed within the annotation scope, but its decorators are not.\nThe bounds, constraints, and default values for type parameters (lazily evaluated).\nThe value of type aliases (lazily evaluated).\nAnnotation scopes differ from function scopes in the following ways:\nAnnotation scopes have access to their enclosing class namespace. If an annotation scope is immediately within a class scope, or within another annotation scope that is immediately within a class scope, the code in the annotation scope can use names defined in the class scope as if it were executed directly within the class body. This contrasts with regular functions defined within classes, which cannot access names defined in the class scope.\nExpressions in annotation scopes cannot contain\nyield\n,\nyield from\n,\nawait\n, or\n:=\nexpressions. (These expressions are allowed in other scopes contained within the annotation scope.)\nNames defined in annotation scopes cannot be rebound with\nnonlocal\nstatements in inner scopes. This includes only type parameters, as no other syntactic elements that can appear within annotation scopes can introduce new names.\nWhile annotation scopes have an internal name, that name is not reflected in the qualified name of objects defined within the scope. Instead, the\n__qualname__\nof such objects is as if the object were defined in the enclosing scope.\nAdded in version 3.12: Annotation scopes were introduced in Python 3.12 as part of PEP 695.\nChanged in version 3.13: Annotation scopes are also used for type parameter defaults, as introduced by PEP 696.\n4.2.4. Lazy evaluation\u00b6\nMost annotation scopes are lazily evaluated.
This includes annotations,\nthe values of type aliases created through the type\nstatement, and\nthe bounds, constraints, and default values of type\nvariables created through the type parameter syntax.\nThis means that they are not evaluated when the type alias or type variable is\ncreated, or when the object carrying annotations is created. Instead, they\nare only evaluated when necessary, for example when the __value__\nattribute on a type alias is accessed.\nExample:\n>>> type Alias = 1/0\n>>> Alias.__value__\nTraceback (most recent call last):\n...\nZeroDivisionError: division by zero\n>>> def func[T: 1/0](): pass\n>>> T = func.__type_params__[0]\n>>> T.__bound__\nTraceback (most recent call last):\n...\nZeroDivisionError: division by zero\nHere the exception is raised only when the __value__\nattribute\nof the type alias or the __bound__\nattribute of the type variable\nis accessed.\nThis behavior is primarily useful for references to types that have not yet been defined when the type alias or type variable is created. For example, lazy evaluation enables creation of mutually recursive type aliases:\nfrom typing import Literal\ntype SimpleExpr = int | Parenthesized\ntype Parenthesized = tuple[Literal[\"(\"], Expr, Literal[\")\"]]\ntype Expr = SimpleExpr | tuple[SimpleExpr, Literal[\"+\", \"-\"], Expr]\nLazily evaluated values are evaluated in annotation scope, which means that names that appear inside the lazily evaluated value are looked up as if they were used in the immediately enclosing scope.\nAdded in version 3.12.\n4.2.5. Builtins and restricted execution\u00b6\nCPython implementation detail: Users should not touch __builtins__\n; it is strictly an implementation\ndetail. 
Users wanting to override values in the builtins namespace should\nimport\nthe builtins\nmodule and modify its\nattributes appropriately.\nThe builtins namespace associated with the execution of a code block\nis actually found by looking up the name __builtins__\nin its\nglobal namespace; this should be a dictionary or a module (in the\nlatter case the module\u2019s dictionary is used). By default, when in the\n__main__\nmodule, __builtins__\nis the built-in module\nbuiltins\n; when in any other module, __builtins__\nis an\nalias for the dictionary of the builtins\nmodule itself.\n4.2.6. Interaction with dynamic features\u00b6\nName resolution of free variables occurs at runtime, not at compile time. This means that the following code will print 42:\ni = 10\ndef f():\nprint(i)\ni = 42\nf()\nThe eval()\nand exec()\nfunctions do not have access to the full\nenvironment for resolving names. Names may be resolved in the local and global\nnamespaces of the caller. Free variables are not resolved in the nearest\nenclosing namespace, but in the global namespace. [1] The exec()\nand\neval()\nfunctions have optional arguments to override the global and local\nnamespace. If only one namespace is specified, it is used for both.\n4.3. Exceptions\u00b6\nExceptions are a means of breaking out of the normal flow of control of a code block in order to handle errors or other exceptional conditions. An exception is raised at the point where the error is detected; it may be handled by the surrounding code block or by any code block that directly or indirectly invoked the code block where the error occurred.\nThe Python interpreter raises an exception when it detects a run-time error\n(such as division by zero). A Python program can also explicitly raise an\nexception with the raise\nstatement. Exception handlers are specified\nwith the try\n\u2026 except\nstatement. 
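For instance, a minimal sketch using division by zero as the run-time error; the function and messages are illustrative:

```python
def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError as exc:
        # The handler receives the exception instance, which carries
        # information about the exceptional condition.
        print(f"handled: {exc}")
        return None
    finally:
        # Cleanup code here runs whether or not an exception
        # occurred; it does not itself handle the exception.
        print("cleanup")

print(divide(6, 3))   # cleanup, then 2.0
print(divide(1, 0))   # handled: division by zero, cleanup, then None
```

An exception not matched by any except clause would propagate out of divide() after the finally block ran, illustrating the termination model: the handler resumes execution at an outer level rather than retrying the failed operation.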
The finally\nclause of such a statement can be used to specify cleanup code which does not\nhandle the exception, but is executed whether an exception occurred or not in\nthe preceding code.\nPython uses the \u201ctermination\u201d model of error handling: an exception handler can find out what happened and continue execution at an outer level, but it cannot repair the cause of the error and retry the failing operation (except by re-entering the offending piece of code from the top).\nWhen an exception is not handled at all, the interpreter terminates execution of\nthe program, or returns to its interactive main loop. In either case, it prints\na stack traceback, except when the exception is SystemExit\n.\nExceptions are identified by class instances. The except\nclause is\nselected depending on the class of the instance: it must reference the class of\nthe instance or a non-virtual base class thereof.\nThe instance can be received by the handler and can carry additional information\nabout the exceptional condition.\nNote\nException messages are not part of the Python API. Their contents may change from one version of Python to the next without warning and should not be relied on by code which will run under multiple versions of the interpreter.\nSee also the description of the try\nstatement in section The try statement\nand raise\nstatement in section The raise statement.\n4.4. Runtime Components\u00b6\n4.4.1. General Computing Model\u00b6\nPython\u2019s execution model does not operate in a vacuum. It runs on a host machine and through that host\u2019s runtime environment, including its operating system (OS), if there is one. When a program runs, the conceptual layers of how it runs on the host look something like this:\nhost machineprocess (global resources)thread (runs machine code)\nEach process represents a program running on the host. Think of each process itself as the data part of its program. 
Think of the process\u2019 threads as the execution part of the program. This distinction will be important to understand the conceptual Python runtime.\nThe process, as the data part, is the execution context in which the program runs. It mostly consists of the set of resources assigned to the program by the host, including memory, signals, file handles, sockets, and environment variables.\nProcesses are isolated and independent from one another. (The same is true for hosts.) The host manages the process\u2019 access to its assigned resources, in addition to coordinating between processes.\nEach thread represents the actual execution of the program\u2019s machine code, running relative to the resources assigned to the program\u2019s process. It\u2019s strictly up to the host how and when that execution takes place.\nFrom the point of view of Python, a program always starts with exactly one thread. However, the program may grow to run in multiple simultaneous threads. Not all hosts support multiple threads per process, but most do. Unlike processes, threads in a process are not isolated and independent from one another. Specifically, all threads in a process share all of the process\u2019 resources.\nThe fundamental point of threads is that each one does run independently, at the same time as the others. That may be only conceptually at the same time (\u201cconcurrently\u201d) or physically (\u201cin parallel\u201d). Either way, the threads effectively run at a non-synchronized rate.\nNote\nThat non-synchronized rate means none of the process\u2019 memory is guaranteed to stay consistent for the code running in any given thread. Thus multi-threaded programs must take care to coordinate access to intentionally shared resources. Likewise, they must take care to be absolutely diligent about not accessing any other resources in multiple threads; otherwise two threads running at the same time might accidentally interfere with each other\u2019s use of some shared data. 
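The coordination this note calls for is usually done with a lock; a minimal sketch using the threading module, where the shared counter stands in for any intentionally shared resource:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:  # serialize access to the intentionally shared counter
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)
```

Without the lock, two threads could interleave the read-modify-write of `counter` and lose updates; with it, the final value is deterministic.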
All this is true for both Python programs and the Python runtime.\nThe cost of this broad, unstructured requirement is the tradeoff for the kind of raw concurrency that threads provide. The alternative to the required discipline generally means dealing with non-deterministic bugs and data corruption.\n4.4.2. Python Runtime Model\u00b6\nThe same conceptual layers apply to each Python program, with some extra data layers specific to Python:\nhost machineprocess (global resources)Python global runtime (state)Python interpreter (state)thread (runs Python bytecode and \u201cC-API\u201d)Python thread state\nAt the conceptual level: when a Python program starts, it looks exactly like that diagram, with one of each. The runtime may grow to include multiple interpreters, and each interpreter may grow to include multiple thread states.\nNote\nA Python implementation won\u2019t necessarily implement the runtime\nlayers distinctly or even concretely. The only exception is places\nwhere distinct layers are directly specified or exposed to users,\nlike through the threading\nmodule.\nNote\nThe initial interpreter is typically called the \u201cmain\u201d interpreter. Some Python implementations, like CPython, assign special roles to the main interpreter.\nLikewise, the host thread where the runtime was initialized is known as the \u201cmain\u201d thread. It may be different from the process\u2019 initial thread, though they are often the same. In some cases \u201cmain thread\u201d may be even more specific and refer to the initial thread state. A Python runtime might assign specific responsibilities to the main thread, such as handling signals.\nAs a whole, the Python runtime consists of the global runtime state, interpreters, and thread states. The runtime ensures all that state stays consistent over its lifetime, particularly when used with multiple host threads.\nThe global runtime, at the conceptual level, is just a set of interpreters. 
While those interpreters are otherwise isolated and independent from one another, they may share some data or other resources. The runtime is responsible for managing these global resources safely. The actual nature and management of these resources is implementation-specific. Ultimately, the external utility of the global runtime is limited to managing interpreters.\nIn contrast, an \u201cinterpreter\u201d is conceptually what we would normally think of as the (full-featured) \u201cPython runtime\u201d. When machine code executing in a host thread interacts with the Python runtime, it calls into Python in the context of a specific interpreter.\nNote\nThe term \u201cinterpreter\u201d here is not the same as the \u201cbytecode interpreter\u201d, which is what regularly runs in threads, executing compiled Python code.\nIn an ideal world, \u201cPython runtime\u201d would refer to what we currently call \u201cinterpreter\u201d. However, it\u2019s been called \u201cinterpreter\u201d at least since introduced in 1997 (CPython:a027efa5b).\nEach interpreter completely encapsulates all of the non-process-global,\nnon-thread-specific state needed for the Python runtime to work.\nNotably, the interpreter\u2019s state persists between uses. It includes\nfundamental data like sys.modules\n. The runtime ensures\nmultiple threads using the same interpreter will safely\nshare it between them.\nA Python implementation may support using multiple interpreters at the\nsame time in the same process. They are independent and isolated from\none another. For example, each interpreter has its own\nsys.modules\n.\nFor thread-specific runtime state, each interpreter has a set of thread states, which it manages, in the same way the global runtime contains a set of interpreters. It can have thread states for as many host threads as it needs. 
It may even have multiple thread states for the same host thread, though that isn\u2019t as common.\nEach thread state, conceptually, has all the thread-specific runtime data an interpreter needs to operate in one host thread. The thread state includes the current raised exception and the thread\u2019s Python call stack. It may include other thread-specific resources.\nNote\nThe term \u201cPython thread\u201d can sometimes refer to a thread state, but\nnormally it means a thread created using the threading\nmodule.\nEach thread state, over its lifetime, is always tied to exactly one interpreter and exactly one host thread. It will only ever be used in that thread and with that interpreter.\nMultiple thread states may be tied to the same host thread, whether for different interpreters or even the same interpreter. However, for any given host thread, only one of the thread states tied to it can be used by the thread at a time.\nThread states are isolated and independent from one another and don\u2019t share any data, except for possibly sharing an interpreter and objects or other resources belonging to that interpreter.\nOnce a program is running, new Python threads can be created using the\nthreading\nmodule (on platforms and Python implementations that\nsupport threads). Additional processes can be created using the\nos\n, subprocess\n, and multiprocessing\nmodules.\nInterpreters can be created and used with the\ninterpreters\nmodule. 
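As a sketch of the mechanisms just listed, a new thread and an additional process can be created like this (the printed strings are illustrative):

```python
import subprocess
import sys
import threading

# A new Python thread, created with the threading module.
results = []
t = threading.Thread(target=lambda: results.append("from a thread"))
t.start()
t.join()

# An additional process, created with the subprocess module.
proc = subprocess.run(
    [sys.executable, "-c", "print('from a child process')"],
    capture_output=True, text=True,
)
print(results[0])
print(proc.stdout.strip())
```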
Coroutines (async) can\nbe run using asyncio\nin each interpreter, typically only\nin a single thread (often the main thread).\nFootnotes", "code_snippets": ["\n ", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", "\n ", " ", " ", " ", "\n ", " ", "\n\n", " ", "\n", " ", "\n\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", "\n", "\n ", "\n", " ", " ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 5120} +{"url": "https://docs.python.org/3/library/ossaudiodev.html", "title": " \u2014 Access to OSS-compatible audio devices", "content": "ossaudiodev\n\u2014 Access to OSS-compatible audio devices\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the ossaudiodev\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 89} +{"url": "https://docs.python.org/3/library/nntplib.html", "title": " \u2014 NNTP protocol client", "content": "nntplib\n\u2014 NNTP protocol client\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. 
The removal was decided in PEP 594.\nThe last version of Python that provided the nntplib\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 83}
+{"url": "https://docs.python.org/3/library/asyncio-exceptions.html", "title": "Exceptions", "content": "Exceptions\u00b6\nSource code: Lib/asyncio/exceptions.py\n- exception asyncio.TimeoutError\u00b6\nA deprecated alias of\nTimeoutError\n, raised when the operation has exceeded the given deadline. Changed in version 3.11: This class was made an alias of\nTimeoutError\n.\n- exception asyncio.CancelledError\u00b6\nThe operation has been cancelled.\nThis exception can be caught to perform custom operations when asyncio Tasks are cancelled. In almost all situations the exception must be re-raised.\nChanged in version 3.8:\nCancelledError\nis now a subclass of BaseException\nrather than Exception\n.\n- exception asyncio.InvalidStateError\u00b6\nInvalid internal state of\nTask\nor Future\n. Can be raised in situations like setting a result value for a Future object that already has a result value set.\n- exception asyncio.SendfileNotAvailableError\u00b6\nThe \u201csendfile\u201d syscall is not available for the given socket or file type.\nA subclass of\nRuntimeError\n.\n- exception asyncio.IncompleteReadError\u00b6\nThe requested read operation did not complete fully.\nRaised by the asyncio stream APIs.\nThis exception is a subclass of\nEOFError\n.\n- exception asyncio.LimitOverrunError\u00b6\nReached the buffer size limit while looking for a separator.\nRaised by the asyncio stream APIs.\n- consumed\u00b6\nThe total number of bytes to be consumed.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 318}
+{"url": "https://docs.python.org/3/c-api/memoryview.html", "title": "MemoryView objects", "content": "MemoryView objects\u00b6\nA memoryview\nobject exposes the C level buffer interface as a Python object which can then be passed around 
like\nany other object.\n-\nPyTypeObject PyMemoryView_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python memoryview type. This is the same object asmemoryview\nin the Python layer.\n-\nPyObject *PyMemoryView_FromObject(PyObject *obj)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a memoryview object from an object that provides the buffer interface. If obj supports writable buffer exports, the memoryview object will be read/write, otherwise it may be either read-only or read/write at the discretion of the exporter.\n-\nPyBUF_READ\u00b6\n- Part of the Stable ABI since version 3.11.\nFlag to request a readonly buffer.\n-\nPyBUF_WRITE\u00b6\n- Part of the Stable ABI since version 3.11.\nFlag to request a writable buffer.\n-\nPyObject *PyMemoryView_FromMemory(char *mem, Py_ssize_t size, int flags)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nCreate a memoryview object using mem as the underlying buffer. flags can be one of\nPyBUF_READ\norPyBUF_WRITE\n.Added in version 3.3.\n-\nPyObject *PyMemoryView_FromBuffer(const Py_buffer *view)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.11.\nCreate a memoryview object wrapping the given buffer structure view. For simple byte buffers,\nPyMemoryView_FromMemory()\nis the preferred function.\n-\nPyObject *PyMemoryView_GetContiguous(PyObject *obj, int buffertype, char order)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a memoryview object to a contiguous chunk of memory (in either \u2018C\u2019 or \u2018F\u2019ortran order) from an object that defines the buffer interface. If memory is contiguous, the memoryview object points to the original memory. Otherwise, a copy is made and the memoryview points to a new bytes object.\nbuffertype can be one of\nPyBUF_READ\norPyBUF_WRITE\n.\n-\nint PyMemoryView_Check(PyObject *obj)\u00b6\nReturn true if the object obj is a memoryview object. 
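The read/write distinction these flags control is observable from the Python layer; a small sketch of memoryview semantics (not of the C API itself):

```python
data = bytearray(b"abcdef")

view = memoryview(data)   # a bytearray allows writable buffer exports
view[0] = ord("x")        # writes go through to the underlying buffer
print(bytes(data))        # the bytearray now starts with b'x'

ro = memoryview(b"abcdef")  # a bytes object only offers a read-only export
print(ro.readonly)

# The view above is contiguous, so it points at the original memory
# and no copy is needed to expose it.
print(view.contiguous)
```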
It is not currently allowed to create subclasses of\nmemoryview\n. This function always succeeds.\n-\nPy_buffer *PyMemoryView_GET_BUFFER(PyObject *mview)\u00b6\nReturn a pointer to the memoryview\u2019s private copy of the exporter\u2019s buffer. mview must be a memoryview instance; this macro doesn\u2019t check its type, you must do it yourself or you will risk crashes.\n-\nPyObject *PyMemoryView_GET_BASE(PyObject *mview)\u00b6\nReturn either a pointer to the exporting object that the memoryview is based on or\nNULL\nif the memoryview has been created by one of the functionsPyMemoryView_FromMemory()\norPyMemoryView_FromBuffer()\n. mview must be a memoryview instance.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 666} +{"url": "https://docs.python.org/3/library/email.utils.html", "title": ": Miscellaneous utilities", "content": "email.utils\n: Miscellaneous utilities\u00b6\nSource code: Lib/email/utils.py\nThere are a couple of useful utilities provided in the email.utils\nmodule:\n- email.utils.localtime(dt=None)\u00b6\nReturn local time as an aware datetime object. If called without arguments, return current time. Otherwise dt argument should be a\ndatetime\ninstance, and it is converted to the local time zone according to the system time zone database. If dt is naive (that is,dt.tzinfo\nisNone\n), it is assumed to be in local time.Added in version 3.3.\nDeprecated since version 3.12, removed in version 3.14: The isdst parameter.\n- email.utils.make_msgid(idstring=None, domain=None)\u00b6\nReturns a string suitable for an RFC 2822-compliant Message-ID header. Optional idstring if given, is a string used to strengthen the uniqueness of the message id. Optional domain if given provides the portion of the msgid after the \u2018@\u2019. The default is the local hostname. 
It is not normally necessary to override this default, but may be useful in certain cases, such as constructing a distributed system that uses a consistent domain name across multiple hosts.\nChanged in version 3.2: Added the domain keyword.\nThe remaining functions are part of the legacy (Compat32\n) email API. There\nis no need to directly use these with the new API, since the parsing and\nformatting they provide is done automatically by the header parsing machinery\nof the new API.\n- email.utils.quote(str)\u00b6\nReturn a new string with backslashes in str replaced by two backslashes, and double quotes replaced by backslash-double quote.\n- email.utils.unquote(str)\u00b6\nReturn a new string which is an unquoted version of str. If str ends and begins with double quotes, they are stripped off. Likewise if str ends and begins with angle brackets, they are stripped off.\n- email.utils.parseaddr(address, *, strict=True)\u00b6\nParse address \u2013 which should be the value of some address-containing field such as To or Cc \u2013 into its constituent realname and email address parts. Returns a tuple of that information, unless the parse fails, in which case a 2-tuple of\n('', '')\nis returned. If strict is true, use a strict parser which rejects malformed inputs.\nChanged in version 3.13: Add strict optional parameter and reject malformed inputs by default.\n- email.utils.formataddr(pair, charset='utf-8')\u00b6\nThe inverse of\nparseaddr()\n, this takes a 2-tuple of the form (realname, email_address)\nand returns the string value suitable for a To or Cc header. If the first element of pair is false, then the second element is returned unmodified. Optional charset is the character set that will be used in the RFC 2047 encoding of the\nrealname\nif the realname\ncontains non-ASCII characters. Can be an instance of str\nor a Charset\n. 
Defaults to utf-8\n. Changed in version 3.3: Added the charset option.\n- email.utils.getaddresses(fieldvalues, *, strict=True)\u00b6\nThis method returns a list of 2-tuples of the form returned by\nparseaddr()\n. fieldvalues is a sequence of header field values as might be returned by Message.get_all\n. If strict is true, use a strict parser which rejects malformed inputs.\nHere\u2019s a simple example that gets all the recipients of a message:\nfrom email.utils import getaddresses\ntos = msg.get_all('to', [])\nccs = msg.get_all('cc', [])\nresent_tos = msg.get_all('resent-to', [])\nresent_ccs = msg.get_all('resent-cc', [])\nall_recipients = getaddresses(tos + ccs + resent_tos + resent_ccs)\nChanged in version 3.13: Add strict optional parameter and reject malformed inputs by default.\n- email.utils.parsedate(date)\u00b6\nAttempts to parse a date according to the rules in RFC 2822. However, some mailers don\u2019t follow that format as specified, so\nparsedate()\ntries to guess correctly in such cases. date is a string containing an RFC 2822 date, such as \"Mon, 20 Nov 1995 19:12:08 -0500\"\n. If it succeeds in parsing the date, parsedate()\nreturns a 9-tuple that can be passed directly to time.mktime()\n; otherwise None\nwill be returned. Note that indexes 6, 7, and 8 of the result tuple are not usable.\n- email.utils.parsedate_tz(date)\u00b6\nPerforms the same function as\nparsedate()\n, but returns either None\nor a 10-tuple; the first 9 elements make up a tuple that can be passed directly to time.mktime()\n, and the tenth is the offset of the date\u2019s timezone from UTC (which is the official term for Greenwich Mean Time) [1]. If the input string has no timezone, the last element of the tuple returned is 0\n, which represents UTC. Note that indexes 6, 7, and 8 of the result tuple are not usable.\n- email.utils.parsedate_to_datetime(date)\u00b6\nThe inverse of\nformat_datetime()\n. 
Performs the same function as parsedate()\n, but on success returns a datetime\n; otherwise ValueError\nis raised if date contains an invalid value such as an hour greater than 23 or a timezone offset not between -24 and 24 hours. If the input date has a timezone of -0000\n, the datetime\nwill be a naive datetime\n, and if the date is conforming to the RFCs it will represent a time in UTC but with no indication of the actual source timezone of the message the date comes from. If the input date has any other valid timezone offset, the datetime\nwill be an aware datetime\nwith the corresponding timezone\ntzinfo\n. Added in version 3.3.\n- email.utils.mktime_tz(tuple)\u00b6\nTurn a 10-tuple as returned by\nparsedate_tz()\ninto a UTC timestamp (seconds since the Epoch). If the timezone item in the tuple is None\n, assume local time.\n- email.utils.formatdate(timeval=None, localtime=False, usegmt=False)\u00b6\nReturns a date string as per RFC 2822, e.g.:\nFri, 09 Nov 2001 01:08:47 -0000\nOptional timeval if given is a floating-point time value as accepted by\ntime.gmtime()\nand time.localtime()\n, otherwise the current time is used. Optional localtime is a flag that when\nTrue\n, interprets timeval, and returns a date relative to the local timezone instead of UTC, properly taking daylight savings time into account. The default is False\n, meaning UTC is used. Optional usegmt is a flag that when\nTrue\n, outputs a date string with the timezone as an ascii string GMT\n, rather than a numeric -0000\n. This is needed for some protocols (such as HTTP). This only applies when localtime is False\n. The default is False\n.\n- email.utils.format_datetime(dt, usegmt=False)\u00b6\nLike\nformatdate\n, but the input is a datetime\ninstance. If it is a naive datetime, it is assumed to be \u201cUTC with no information about the source timezone\u201d, and the conventional -0000\nis used for the timezone. If it is an aware datetime\n, then the numeric timezone offset is used. 
If it is an aware timezone with offset zero, then usegmt may be set to True\n, in which case the string GMT\nis used instead of the numeric timezone offset. This provides a way to generate standards conformant HTTP date headers. Added in version 3.3.\n- email.utils.encode_rfc2231(s, charset=None, language=None)\u00b6\nEncode the string s according to RFC 2231. Optional charset and language, if given, are the character set name and language name to use. If neither is given, s is returned as-is. If charset is given but language is not, the string is encoded using the empty string for language.\n- email.utils.collapse_rfc2231_value(value, errors='replace', fallback_charset='us-ascii')\u00b6\nWhen a header parameter is encoded in RFC 2231 format,\nMessage.get_param\nmay return a 3-tuple containing the character set, language, and value. collapse_rfc2231_value()\nturns this into a unicode string. Optional errors is passed to the errors argument of str\n\u2019s encode()\nmethod; it defaults to 'replace'\n. Optional fallback_charset specifies the character set to use if the one in the RFC 2231 header is not known by Python; it defaults to 'us-ascii'\n. For convenience, if the value passed to\ncollapse_rfc2231_value()\nis not a tuple, it should be a string and it is returned unquoted.\n- email.utils.decode_params(params)\u00b6\nDecode parameters list according to RFC 2231. 
params is a sequence of 2-tuples containing elements of the form\n(content-type, string-value)\n.\nFootnotes", "code_snippets": [" ", "\n\n", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 1977} +{"url": "https://docs.python.org/3/c-api/perfmaps.html", "title": "Support for Perf Maps", "content": "Support for Perf Maps\u00b6\nOn supported platforms (as of this writing, only Linux), the runtime can take\nadvantage of perf map files to make Python functions visible to an external\nprofiling tool (such as perf).\nA running process may create a file in the /tmp\ndirectory, which contains entries\nthat can map a section of executable code to a name. This interface is described in the\ndocumentation of the Linux Perf tool.\nIn Python, these helper APIs can be used by libraries and features that rely on generating machine code on the fly.\nNote that holding an attached thread state is not required for these APIs.\n-\nint PyUnstable_PerfMapState_Init(void)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nOpen the\n/tmp/perf-$pid.map\nfile, unless it\u2019s already opened, and create a lock to ensure thread-safe writes to the file (provided the writes are done throughPyUnstable_WritePerfMapEntry()\n). Normally, there\u2019s no need to call this explicitly; just usePyUnstable_WritePerfMapEntry()\nand it will initialize the state on first call.Returns\n0\non success,-1\non failure to create/open the perf map file, or-2\non failure to create a lock. Checkerrno\nfor more information about the cause of a failure.\n-\nint PyUnstable_WritePerfMapEntry(const void *code_addr, unsigned int code_size, const char *entry_name)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nWrite one single entry to the\n/tmp/perf-$pid.map\nfile. 
This function is thread safe. Here is what an example entry looks like:\n# address size name\n7f3529fcf759 b py::bar:/run/t.py\nWill call\nPyUnstable_PerfMapState_Init()\nbefore writing the entry, if the perf map file is not already opened. Returns 0\non success, or the same error codes as PyUnstable_PerfMapState_Init()\non failure.\n-\nvoid PyUnstable_PerfMapState_Fini(void)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nClose the perf map file opened by\nPyUnstable_PerfMapState_Init()\n. This is called by the runtime itself during interpreter shut-down. In general, there shouldn\u2019t be a reason to explicitly call this, except to handle specific scenarios such as forking.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 536}
+{"url": "https://docs.python.org/3/c-api/time.html", "title": "PyTime C API", "content": "PyTime C API\u00b6\nAdded in version 3.13.\nThe clock C API provides access to system clocks.\nIt is similar to the Python time\nmodule.\nFor C API related to the datetime\nmodule, see DateTime Objects.\nTypes\u00b6\n-\ntype PyTime_t\u00b6\nA timestamp or duration in nanoseconds, represented as a signed 64-bit integer.\nThe reference point for timestamps depends on the clock used. For example,\nPyTime_Time()\nreturns timestamps relative to the UNIX epoch.\nThe supported range is around [-292.3 years; +292.3 years]. Using the Unix epoch (January 1st, 1970) as reference, the supported date range is around [1677-09-21; 2262-04-11]. The exact limits are exposed as constants:\nClock Functions\u00b6\nThe following functions take a pointer to a PyTime_t that they set to the value of a particular clock. 
Details of each clock are given in the documentation of the corresponding Python function.\nThe functions return 0\non success, or -1\n(with an exception set)\non failure.\nOn integer overflow, they set the PyExc_OverflowError\nexception and\nset *result\nto the value clamped to the [PyTime_MIN; PyTime_MAX]\nrange.\n(On current systems, integer overflows are likely caused by misconfigured\nsystem time.)\nAs with any other C API (unless otherwise specified), the functions must be called with an attached thread state.\n-\nint PyTime_Monotonic(PyTime_t *result)\u00b6\nRead the monotonic clock. See\ntime.monotonic()\nfor important details on this clock.\n-\nint PyTime_PerfCounter(PyTime_t *result)\u00b6\nRead the performance counter. See\ntime.perf_counter()\nfor important details on this clock.\n-\nint PyTime_Time(PyTime_t *result)\u00b6\nRead the \u201cwall clock\u201d time. See\ntime.time()\nfor important details on this clock.\nRaw Clock Functions\u00b6\nSimilar to clock functions, but don\u2019t set an exception on error and don\u2019t require the caller to have an attached thread state.\nOn success, the functions return 0\n.\nOn failure, they set *result\nto 0\nand return -1\n, without setting\nan exception. To get the cause of the error, attach a thread state,\nand call the regular (non-Raw\n) function. 
Note that the regular function may succeed after\nthe Raw\none failed.\n-\nint PyTime_MonotonicRaw(PyTime_t *result)\u00b6\nSimilar to\nPyTime_Monotonic()\n, but don\u2019t set an exception on error and don\u2019t require an attached thread state.\n-\nint PyTime_PerfCounterRaw(PyTime_t *result)\u00b6\nSimilar to\nPyTime_PerfCounter()\n, but don\u2019t set an exception on error and don\u2019t require an attached thread state.\n-\nint PyTime_TimeRaw(PyTime_t *result)\u00b6\nSimilar to\nPyTime_Time()\n, but don\u2019t set an exception on error and don\u2019t require an attached thread state.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 637} +{"url": "https://docs.python.org/3/extending/windows.html", "title": "Building C and C++ Extensions on Windows", "content": "5. Building C and C++ Extensions on Windows\u00b6\nThis chapter briefly explains how to create a Windows extension module for Python using Microsoft Visual C++, and follows with more detailed background information on how it works. The explanatory material is useful for both the Windows programmer learning to build Python extensions and the Unix programmer interested in producing software which can be successfully built on both Unix and Windows.\nModule authors are encouraged to use the distutils approach for building extension modules, instead of the one described in this section. You will still need the C compiler that was used to build Python; typically Microsoft Visual C++.\nNote\nThis chapter mentions a number of filenames that include an encoded Python\nversion number. These filenames are represented with the version number shown\nas XY\n; in practice, 'X'\nwill be the major version number and 'Y'\nwill be the minor version number of the Python release you\u2019re working with. For\nexample, if you are using Python 2.2.1, XY\nwill actually be 22\n.\n5.1. 
A Cookbook Approach\u00b6\nThere are two approaches to building extension modules on Windows, just as there\nare on Unix: use the setuptools\npackage to control the build process, or\ndo things manually. The setuptools approach works well for most extensions;\ndocumentation on using setuptools\nto build and package extension modules\nis available in Building C and C++ Extensions with setuptools. If you find you really need to do\nthings manually, it may be instructive to study the project file for the\nwinsound standard library module.\n5.2. Differences Between Unix and Windows\u00b6\nUnix and Windows use completely different paradigms for run-time loading of code. Before you try to build a module that can be dynamically loaded, be aware of how your system works.\nIn Unix, a shared object (.so\n) file contains code to be used by the\nprogram, and also the names of functions and data that it expects to find in the\nprogram. When the file is joined to the program, all references to those\nfunctions and data in the file\u2019s code are changed to point to the actual\nlocations in the program where the functions and data are placed in memory.\nThis is basically a link operation.\nIn Windows, a dynamic-link library (.dll\n) file has no dangling\nreferences. Instead, an access to functions or data goes through a lookup\ntable. So the DLL code does not have to be fixed up at runtime to refer to the\nprogram\u2019s memory; instead, the code already uses the DLL\u2019s lookup table, and the\nlookup table is modified at runtime to point to the functions and data.\nIn Unix, there is only one type of library file (.a\n) which contains code\nfrom several object files (.o\n). During the link step to create a shared\nobject file (.so\n), the linker may find that it doesn\u2019t know where an\nidentifier is defined. 
The linker will look for it in the object files in the\nlibraries; if it finds it, it will include all the code from that object file.\nIn Windows, there are two types of library, a static library and an import\nlibrary (both called .lib\n). A static library is like a Unix .a\nfile; it contains code to be included as necessary. An import library is\nbasically used only to reassure the linker that a certain identifier is legal,\nand will be present in the program when the DLL is loaded. So the linker uses\nthe information from the import library to build the lookup table for using\nidentifiers that are not included in the DLL. When an application or a DLL is\nlinked, an import library may be generated, which will need to be used for all\nfuture DLLs that depend on the symbols in the application or DLL.\nSuppose you are building two dynamic-load modules, B and C, which should share\nanother block of code A. On Unix, you would not pass A.a\nto the\nlinker for B.so\nand C.so\n; that would cause it to be included\ntwice, so that B and C would each have their own copy. In Windows, building\nA.dll\nwill also build A.lib\n. You do pass A.lib\nto the\nlinker for B and C. A.lib\ndoes not contain code; it just contains\ninformation which will be used at runtime to access A\u2019s code.\nIn Windows, using an import library is sort of like using import spam\n; it\ngives you access to spam\u2019s names, but does not create a separate copy. On Unix,\nlinking with a library is more like from spam import *\n; it does create a\nseparate copy.\n-\nPy_NO_LINK_LIB\u00b6\nTurn off the implicit,\n#pragma\n-based linkage with the Python library, performed inside CPython header files.Added in version 3.14.\n5.3. Using DLLs in Practice\u00b6\nWindows Python is built in Microsoft Visual C++; using other compilers may or may not work. 
The rest of this section is MSVC++ specific.\nWhen creating DLLs in Windows, you can use the CPython library in two ways:\nBy default, inclusion of\nPC/pyconfig.h\ndirectly or viaPython.h\ntriggers an implicit, configure-aware link with the library. The header file choosespythonXY_d.lib\nfor Debug,pythonXY.lib\nfor Release, andpythonX.lib\nfor Release with the Limited API enabled.To build two DLLs, spam and ni (which uses C functions found in spam), you could use these commands:\ncl /LD /I/python/include spam.c cl /LD /I/python/include ni.c spam.lib\nThe first command created three files:\nspam.obj\n,spam.dll\nandspam.lib\n.Spam.dll\ndoes not contain any Python functions (such asPyArg_ParseTuple()\n), but it does know how to find the Python code thanks to the implicitly linkedpythonXY.lib\n.The second command created\nni.dll\n(and.obj\nand.lib\n), which knows how to find the necessary functions from spam, and also from the Python executable.Manually by defining\nPy_NO_LINK_LIB\nmacro before includingPython.h\n. You must passpythonXY.lib\nto the linker.To build two DLLs, spam and ni (which uses C functions found in spam), you could use these commands:\ncl /LD /DPy_NO_LINK_LIB /I/python/include spam.c ../libs/pythonXY.lib cl /LD /DPy_NO_LINK_LIB /I/python/include ni.c spam.lib ../libs/pythonXY.lib\nThe first command created three files:\nspam.obj\n,spam.dll\nandspam.lib\n.Spam.dll\ndoes not contain any Python functions (such asPyArg_ParseTuple()\n), but it does know how to find the Python code thanks topythonXY.lib\n.The second command created\nni.dll\n(and.obj\nand.lib\n), which knows how to find the necessary functions from spam, and also from the Python executable.\nNot every identifier is exported to the lookup table. 
If you want any other\nmodules (including Python) to be able to see your identifiers, you have to say\n_declspec(dllexport)\n, as in void _declspec(dllexport) initspam(void)\nor\nPyObject _declspec(dllexport) *NiGetSpamData(void)\n.\nDeveloper Studio will throw in a lot of import libraries that you do not really\nneed, adding about 100K to your executable. To get rid of them, use the Project\nSettings dialog, Link tab, to specify ignore default libraries. Add the\ncorrect msvcrtxx.lib\nto the list of libraries.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1705} +{"url": "https://docs.python.org/3/c-api/capsule.html", "title": "Capsules", "content": "Capsules\u00b6\nRefer to Providing a C API for an Extension Module for more information on using these objects.\nAdded in version 3.1.\n-\ntype PyCapsule\u00b6\nThis subtype of\nPyObject\nrepresents an opaque value, useful for C extension modules which need to pass an opaque value (as a void* pointer) through Python code to other C code. It is often used to make a C function pointer defined in one module available to other modules, so the regular import mechanism can be used to access C APIs defined in dynamically loaded modules.\n-\nPyTypeObject PyCapsule_Type\u00b6\n- Part of the Stable ABI.\nThe type object corresponding to capsule objects. This is the same object as\ntypes.CapsuleType\nin the Python layer.\n-\ntype PyCapsule_Destructor\u00b6\n- Part of the Stable ABI.\nThe type of a destructor callback for a capsule. Defined as:\ntypedef void (*PyCapsule_Destructor)(PyObject *);\nSee\nPyCapsule_New()\nfor the semantics of PyCapsule_Destructor callbacks.\n-\nint PyCapsule_CheckExact(PyObject *p)\u00b6\nReturn true if its argument is a\nPyCapsule\n. This function always succeeds.\n-\nPyObject *PyCapsule_New(void *pointer, const char *name, PyCapsule_Destructor destructor)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nCreate a\nPyCapsule\nencapsulating the pointer. The pointer argument may not beNULL\n.On failure, set an exception and return\nNULL\n.The name string may either be\nNULL\nor a pointer to a valid C string. If non-NULL\n, this string must outlive the capsule. (Though it is permitted to free it inside the destructor.)If the destructor argument is not\nNULL\n, it will be called with the capsule as its argument when it is destroyed.If this capsule will be stored as an attribute of a module, the name should be specified as\nmodulename.attributename\n. This will enable other modules to import the capsule usingPyCapsule_Import()\n.\n-\nvoid *PyCapsule_GetPointer(PyObject *capsule, const char *name)\u00b6\n- Part of the Stable ABI.\nRetrieve the pointer stored in the capsule. On failure, set an exception and return\nNULL\n.The name parameter must compare exactly to the name stored in the capsule. If the name stored in the capsule is\nNULL\n, the name passed in must also beNULL\n. Python uses the C functionstrcmp()\nto compare capsule names.\n-\nPyCapsule_Destructor PyCapsule_GetDestructor(PyObject *capsule)\u00b6\n- Part of the Stable ABI.\nReturn the current destructor stored in the capsule. On failure, set an exception and return\nNULL\n.It is legal for a capsule to have a\nNULL\ndestructor. This makes aNULL\nreturn code somewhat ambiguous; usePyCapsule_IsValid()\norPyErr_Occurred()\nto disambiguate.\n-\nvoid *PyCapsule_GetContext(PyObject *capsule)\u00b6\n- Part of the Stable ABI.\nReturn the current context stored in the capsule. On failure, set an exception and return\nNULL\n.It is legal for a capsule to have a\nNULL\ncontext. This makes aNULL\nreturn code somewhat ambiguous; usePyCapsule_IsValid()\norPyErr_Occurred()\nto disambiguate.\n-\nconst char *PyCapsule_GetName(PyObject *capsule)\u00b6\n- Part of the Stable ABI.\nReturn the current name stored in the capsule. 
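The accessor functions above can be observed from Python code: the datetime module publishes its C API as the capsule attribute datetime.datetime_CAPI, whose stored name follows the modulename.attributename convention described for PyCapsule_New(). A minimal sketch:

```python
# Inspecting a real capsule from Python. The datetime module stores its
# C API in the capsule attribute datetime.datetime_CAPI, whose internal
# name is "datetime.datetime_CAPI" so PyCapsule_Import() can locate it.
import datetime

cap = datetime.datetime_CAPI
print(type(cap).__name__)  # PyCapsule
print(repr(cap))           # the repr includes the stored capsule name
```

The pointer inside the capsule is opaque to Python; only C code that knows the matching name can retrieve it with PyCapsule_GetPointer().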
On failure, set an exception and return\nNULL\n.It is legal for a capsule to have a\nNULL\nname. This makes aNULL\nreturn code somewhat ambiguous; usePyCapsule_IsValid()\norPyErr_Occurred()\nto disambiguate.\n-\nvoid *PyCapsule_Import(const char *name, int no_block)\u00b6\n- Part of the Stable ABI.\nImport a pointer to a C object from a capsule attribute in a module. The name parameter should specify the full name to the attribute, as in\nmodule.attribute\n. The name stored in the capsule must match this string exactly.This function splits name on the\n.\ncharacter, and imports the first element. It then processes further elements using attribute lookups.Return the capsule\u2019s internal pointer on success. On failure, set an exception and return\nNULL\n.Note\nIf name points to an attribute of some submodule or subpackage, this submodule or subpackage must be previously imported using other means (for example, by using\nPyImport_ImportModule()\n) for the attribute lookups to succeed.Changed in version 3.3: no_block has no effect anymore.\n-\nint PyCapsule_IsValid(PyObject *capsule, const char *name)\u00b6\n- Part of the Stable ABI.\nDetermines whether or not capsule is a valid capsule. A valid capsule is non-\nNULL\n, passesPyCapsule_CheckExact()\n, has a non-NULL\npointer stored in it, and its internal name matches the name parameter. (SeePyCapsule_GetPointer()\nfor information on how capsule names are compared.)In other words, if\nPyCapsule_IsValid()\nreturns a true value, calls to any of the accessors (any function starting withPyCapsule_Get\n) are guaranteed to succeed.Return a nonzero value if the object is valid and matches the name passed in. Return\n0\notherwise. This function will not fail.\n-\nint PyCapsule_SetContext(PyObject *capsule, void *context)\u00b6\n- Part of the Stable ABI.\nSet the context pointer inside capsule to context.\nReturn\n0\non success. 
Return nonzero and set an exception on failure.\n-\nint PyCapsule_SetDestructor(PyObject *capsule, PyCapsule_Destructor destructor)\u00b6\n- Part of the Stable ABI.\nSet the destructor inside capsule to destructor.\nReturn\n0\non success. Return nonzero and set an exception on failure.\n-\nint PyCapsule_SetName(PyObject *capsule, const char *name)\u00b6\n- Part of the Stable ABI.\nSet the name inside capsule to name. If non-\nNULL\n, the name must outlive the capsule. If the previous name stored in the capsule was notNULL\n, no attempt is made to free it.Return\n0\non success. Return nonzero and set an exception on failure.\n-\nint PyCapsule_SetPointer(PyObject *capsule, void *pointer)\u00b6\n- Part of the Stable ABI.\nSet the void pointer inside capsule to pointer. The pointer may not be\nNULL\n.Return\n0\non success. Return nonzero and set an exception on failure.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1426} +{"url": "https://docs.python.org/3/c-api/codec.html", "title": "Codec registry and support functions", "content": "Codec registry and support functions\u00b6\n-\nint PyCodec_Register(PyObject *search_function)\u00b6\n- Part of the Stable ABI.\nRegister a new codec search function.\nAs a side effect, this tries to load the\nencodings\npackage, if not yet done, to make sure that it is always first in the list of search functions.\n-\nint PyCodec_Unregister(PyObject *search_function)\u00b6\n- Part of the Stable ABI since version 3.10.\nUnregister a codec search function and clear the registry\u2019s cache. If the search function is not registered, do nothing. Return 0 on success. Raise an exception and return -1 on error.\nAdded in version 3.10.\n-\nint PyCodec_KnownEncoding(const char *encoding)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nor0\ndepending on whether there is a registered codec for the given encoding. 
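At the Python level the same registry can be probed through codecs.lookup(), which raises LookupError for an unknown encoding; the helper below is an illustrative rough analogue of PyCodec_KnownEncoding(), not part of any API:

```python
# Rough Python-level analogue of PyCodec_KnownEncoding(): probe the
# codec registry via codecs.lookup(), which raises LookupError on a miss.
import codecs

def known_encoding(name):
    try:
        codecs.lookup(name)
        return True
    except LookupError:
        return False

print(known_encoding("utf-8"))          # True
print(known_encoding("no-such-codec"))  # False
```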
This function always succeeds.\n-\nPyObject *PyCodec_Encode(PyObject *object, const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGeneric codec based encoding API.\nobject is passed through the encoder function found for the given encoding using the error handling method defined by errors. errors may be\nNULL\nto use the default method defined for the codec. Raises aLookupError\nif no encoder can be found.\n-\nPyObject *PyCodec_Decode(PyObject *object, const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGeneric codec based decoding API.\nobject is passed through the decoder function found for the given encoding using the error handling method defined by errors. errors may be\nNULL\nto use the default method defined for the codec. Raises aLookupError\nif no decoder can be found.\nCodec lookup API\u00b6\nIn the following functions, the encoding string is looked up converted to all\nlower-case characters, which makes encodings looked up through this mechanism\neffectively case-insensitive. If no codec is found, a KeyError\nis set\nand NULL\nreturned.\n-\nPyObject *PyCodec_Encoder(const char *encoding)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet an encoder function for the given encoding.\n-\nPyObject *PyCodec_Decoder(const char *encoding)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet a decoder function for the given encoding.\n-\nPyObject *PyCodec_IncrementalEncoder(const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet an\nIncrementalEncoder\nobject for the given encoding.\n-\nPyObject *PyCodec_IncrementalDecoder(const char *encoding, const char *errors)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nGet an\nIncrementalDecoder\nobject for the given encoding.\n-\nPyObject *PyCodec_StreamReader(const char *encoding, PyObject *stream, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet a\nStreamReader\nfactory function for the given encoding.\n-\nPyObject *PyCodec_StreamWriter(const char *encoding, PyObject *stream, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet a\nStreamWriter\nfactory function for the given encoding.\nRegistry API for Unicode encoding error handlers\u00b6\n-\nint PyCodec_RegisterError(const char *name, PyObject *error)\u00b6\n- Part of the Stable ABI.\nRegister the error handling callback function error under the given name. This callback function will be called by a codec when it encounters unencodable characters/undecodable bytes and name is specified as the error parameter in the call to the encode/decode function.\nThe callback gets a single argument, an instance of\nUnicodeEncodeError\n,UnicodeDecodeError\norUnicodeTranslateError\nthat holds information about the problematic sequence of characters or bytes and their offset in the original string (see Unicode Exception Objects for functions to extract this information). The callback must either raise the given exception, or return a two-item tuple containing the replacement for the problematic sequence, and an integer giving the offset in the original string at which encoding/decoding should be resumed.Return\n0\non success,-1\non error.\n-\nPyObject *PyCodec_LookupError(const char *name)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nLookup the error handling callback function registered under name. As a special case\nNULL\ncan be passed, in which case the error handling callback for \u201cstrict\u201d will be returned.\n-\nPyObject *PyCodec_StrictErrors(PyObject *exc)\u00b6\n- Return value: Always NULL. 
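The callback contract just described mirrors the Python-level codecs.register_error(); a sketch using a hypothetical handler name "mark":

```python
# Registering a Unicode error handler, mirroring PyCodec_RegisterError().
# The handler receives the exception instance and must return a two-item
# tuple: (replacement, offset at which encoding should resume).
import codecs

def mark_unencodable(exc):
    # Replace the unencodable run and resume right after it.
    return ("?", exc.end)

codecs.register_error("mark", mark_unencodable)
print("héllo".encode("ascii", errors="mark"))  # b'h?llo'
```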
Part of the Stable ABI.\nRaise exc as an exception.\n-\nPyObject *PyCodec_IgnoreErrors(PyObject *exc)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nIgnore the unicode error, skipping the faulty input.\n-\nPyObject *PyCodec_ReplaceErrors(PyObject *exc)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReplace the unicode encode error with\n?\norU+FFFD\n.\n-\nPyObject *PyCodec_XMLCharRefReplaceErrors(PyObject *exc)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReplace the unicode encode error with XML character references.\n-\nPyObject *PyCodec_BackslashReplaceErrors(PyObject *exc)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReplace the unicode encode error with backslash escapes (\n\\x\n,\\u\nand\\U\n).\n-\nPyObject *PyCodec_NameReplaceErrors(PyObject *exc)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nReplace the unicode encode error with\n\\N{...}\nescapes.Added in version 3.5.\nCodec utility variables\u00b6\n-\nconst char *Py_hexdigits\u00b6\nA string constant containing the lowercase hexadecimal digits:\n\"0123456789abcdef\"\n.Added in version 3.3.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1364} +{"url": "https://docs.python.org/3/c-api/bool.html", "title": "Boolean Objects", "content": "Boolean Objects\u00b6\nBooleans in Python are implemented as a subclass of integers. There are only\ntwo booleans, Py_False\nand Py_True\n. As such, the normal\ncreation and deletion functions don\u2019t apply to booleans. The following macros\nare available, however.\n-\nPyTypeObject PyBool_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python boolean type; it is the same object asbool\nin the Python layer.\n-\nint PyBool_Check(PyObject *o)\u00b6\nReturn true if o is of type\nPyBool_Type\n. This function always succeeds.\n-\nPyObject *PyBool_FromLong(long v)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturn\nPy_True\norPy_False\n, depending on the truth value of v.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 171} +{"url": "https://docs.python.org/3/c-api/curses.html", "title": "Curses C API", "content": "Curses C API\u00b6\ncurses\nexposes a small C interface for extension modules.\nConsumers must include the header file py_curses.h\n(which is not\nincluded by default by Python.h\n) and import_curses()\nmust\nbe invoked, usually as part of the module initialisation function, to populate\nPyCurses_API\n.\nWarning\nNeither the C API nor the pure Python curses\nmodule are compatible\nwith subinterpreters.\n-\nimport_curses()\u00b6\nImport the curses C API. The macro does not need a semi-colon to be called.\nOn success, populate the\nPyCurses_API\npointer.On failure, set\nPyCurses_API\nto NULL and set an exception. The caller must check if an error occurred viaPyErr_Occurred()\n:import_curses(); // semi-colon is optional but recommended if (PyErr_Occurred()) { /* cleanup */ }\n-\nvoid **PyCurses_API\u00b6\nDynamically allocated object containing the curses C API. This variable is only available once\nimport_curses\nsucceeds.PyCurses_API[0]\ncorresponds toPyCursesWindow_Type\n.PyCurses_API[1]\n,PyCurses_API[2]\n, andPyCurses_API[3]\nare pointers to predicate functions of typeint (*)(void)\n.When called, these predicates return whether\ncurses.setupterm()\n,curses.initscr()\n, andcurses.start_color()\nhave been called respectively.See also the convenience macros\nPyCursesSetupTermCalled\n,PyCursesInitialised\n, andPyCursesInitialisedColor\n.Note\nThe number of entries in this structure is subject to changes. Consider using\nPyCurses_API_pointers\nto check if new fields are available or not.\n-\nPyCurses_API_pointers\u00b6\nThe number of accessible fields (\n4\n) inPyCurses_API\n. 
This number is incremented whenever new fields are added.\n-\nPyTypeObject PyCursesWindow_Type\u00b6\nThe heap type corresponding to\ncurses.window\n.\n-\nint PyCursesWindow_Check(PyObject *op)\u00b6\nReturn true if op is a\ncurses.window\ninstance, false otherwise.\nThe following macros are convenience macros expanding into C statements.\nIn particular, they can only be used as macro;\nor macro\n, but not\nmacro()\nor macro();\n.\n-\nPyCursesSetupTermCalled\u00b6\nMacro checking if\ncurses.setupterm()\nhas been called.The macro expansion is roughly equivalent to:\n{ typedef int (*predicate_t)(void); predicate_t was_setupterm_called = (predicate_t)PyCurses_API[1]; if (!was_setupterm_called()) { return NULL; } }\n-\nPyCursesInitialised\u00b6\nMacro checking if\ncurses.initscr()\nhas been called.The macro expansion is roughly equivalent to:\n{ typedef int (*predicate_t)(void); predicate_t was_initscr_called = (predicate_t)PyCurses_API[2]; if (!was_initscr_called()) { return NULL; } }\n-\nPyCursesInitialisedColor\u00b6\nMacro checking if\ncurses.start_color()\nhas been called.The macro expansion is roughly equivalent to:\n{ typedef int (*predicate_t)(void); predicate_t was_start_color_called = (predicate_t)PyCurses_API[3]; if (!was_start_color_called()) { return NULL; } }\nInternal data\u00b6\nThe following objects are exposed by the C API but should be considered internal-only.\n-\nPyCurses_CAPSULE_NAME\u00b6\nName of the curses capsule to pass to\nPyCapsule_Import()\n.Internal usage only. Use\nimport_curses\ninstead.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 748} +{"url": "https://docs.python.org/3/c-api/typehints.html", "title": "Objects for Type Hinting", "content": "Objects for Type Hinting\u00b6\nVarious built-in types for type hinting are provided. Currently,\ntwo types exist \u2013 GenericAlias and\nUnion. 
Only GenericAlias\nis exposed to C.\n-\nPyObject *Py_GenericAlias(PyObject *origin, PyObject *args)\u00b6\n- Part of the Stable ABI since version 3.9.\nCreate a GenericAlias object. Equivalent to calling the Python class\ntypes.GenericAlias\n. The origin and args arguments set theGenericAlias\n\u2018s__origin__\nand__args__\nattributes respectively. origin should be a PyTypeObject*, and args can be a PyTupleObject* or anyPyObject*\n. If args passed is not a tuple, a 1-tuple is automatically constructed and__args__\nis set to(args,)\n. Minimal checking is done for the arguments, so the function will succeed even if origin is not a type. TheGenericAlias\n\u2018s__parameters__\nattribute is constructed lazily from__args__\n. On failure, an exception is raised andNULL\nis returned.Here\u2019s an example of how to make an extension type generic:\n... static PyMethodDef my_obj_methods[] = { // Other methods. ... {\"__class_getitem__\", Py_GenericAlias, METH_O|METH_CLASS, \"See PEP 585\"} ... }\nSee also\nThe data model method\n__class_getitem__()\n.Added in version 3.9.\n-\nPyTypeObject Py_GenericAliasType\u00b6\n- Part of the Stable ABI since version 3.9.\nThe C type of the object returned by\nPy_GenericAlias()\n. Equivalent totypes.GenericAlias\nin Python.Added in version 3.9.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 342} +{"url": "https://docs.python.org/3/whatsnew/2.1.html", "title": "What\u2019s New in Python 2.1", "content": "What\u2019s New in Python 2.1\u00b6\n- Author:\nA.M. Kuchling\nIntroduction\u00b6\nThis article explains the new features in Python 2.1. While there aren\u2019t as many changes in 2.1 as there were in Python 2.0, there are still some pleasant surprises in store. 2.1 is the first release to be steered through the use of Python Enhancement Proposals, or PEPs, so most of the sizable changes have accompanying PEPs that provide more complete documentation and a design rationale for the change. 
This article doesn\u2019t attempt to document the new features completely, but simply provides an overview of the new features for Python programmers. Refer to the Python 2.1 documentation, or to the specific PEP, for more details about any new feature that particularly interests you.\nOne recent goal of the Python development team has been to accelerate the pace of new releases, with a new release coming every 6 to 9 months. 2.1 is the first release to come out at this faster pace, with the first alpha appearing in January, 3 months after the final version of 2.0 was released.\nThe final release of Python 2.1 was made on April 17, 2001.\nPEP 227: Nested Scopes\u00b6\nThe largest change in Python 2.1 is to Python\u2019s scoping rules. In Python 2.0, at any given time there are at most three namespaces used to look up variable names: local, module-level, and the built-in namespace. This often surprised people because it didn\u2019t match their intuitive expectations. For example, a nested recursive function definition doesn\u2019t work:\ndef f():\n...\ndef g(value):\n...\nreturn g(value-1) + 1\n...\nThe function g()\nwill always raise a NameError\nexception, because\nthe binding of the name g\nisn\u2019t in either its local namespace or in the\nmodule-level namespace. This isn\u2019t much of a problem in practice (how often do\nyou recursively define interior functions like this?), but this also made using\nthe lambda\nexpression clumsier, and this was a problem in practice.\nIn code which uses lambda\nyou can often find local variables being\ncopied by passing them as the default values of arguments.\ndef find(self, name):\n\"Return list of any entries equal to 'name'\"\nL = filter(lambda x, name=name: x == name,\nself.list_attribute)\nreturn L\nThe readability of Python code written in a strongly functional style suffers greatly as a result.\nThe most significant change to Python 2.1 is that static scoping has been added\nto the language to fix this problem. 
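In today's Python the same patterns work without the default-argument trick, since inner functions resolve free names in the enclosing scope; a minimal sketch:

```python
# With nested scopes, the lambda sees `name` from the enclosing function,
# so the old name=name default-argument copy is unnecessary.
def find(entries, name):
    return list(filter(lambda x: x == name, entries))

# A nested recursive definition also works: g resolves in the enclosing scope.
def depth(value):
    def g(v):
        return 0 if v <= 0 else g(v - 1) + 1
    return g(value)

print(find(["spam", "eggs", "spam"], "spam"))  # ['spam', 'spam']
print(depth(3))                                # 3
```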
As a first effect, the name=name\ndefault argument is now unnecessary in the above example. Put simply, when a\ngiven variable name is not assigned a value within a function (by an assignment,\nor the def\n, class\n, or import\nstatements),\nreferences to the variable will be looked up in the local namespace of the\nenclosing scope. A more detailed explanation of the rules, and a dissection of\nthe implementation, can be found in the PEP.\nThis change may cause some compatibility problems for code where the same variable name is used both at the module level and as a local variable within a function that contains further function definitions. This seems rather unlikely though, since such code would have been pretty confusing to read in the first place.\nOne side effect of the change is that the from module import *\nand\nexec\nstatements have been made illegal inside a function scope under\ncertain conditions. The Python reference manual has said all along that from\nmodule import *\nis only legal at the top level of a module, but the CPython\ninterpreter has never enforced this before. As part of the implementation of\nnested scopes, the compiler which turns Python source into bytecodes has to\ngenerate different code to access variables in a containing scope. from\nmodule import *\nand exec\nmake it impossible for the compiler to\nfigure this out, because they add names to the local namespace that are\nunknowable at compile time. 
Therefore, if a function contains function\ndefinitions or lambda\nexpressions with free variables, the compiler\nwill flag this by raising a SyntaxError\nexception.\nTo make the preceding explanation a bit clearer, here\u2019s an example:\nx = 1\ndef f():\n# The next line is a syntax error\nexec 'x=2'\ndef g():\nreturn x\nLine 4 containing the exec\nstatement is a syntax error, since\nexec\nwould define a new local variable named x\nwhose value should\nbe accessed by g()\n.\nThis shouldn\u2019t be much of a limitation, since exec\nis rarely used in\nmost Python code (and when it is used, it\u2019s often a sign of a poor design\nanyway).\nCompatibility concerns have led to nested scopes being introduced gradually; in Python 2.1, they aren\u2019t enabled by default, but can be turned on within a module by using a future statement as described in PEP 236. (See the following section for further discussion of PEP 236.) In Python 2.2, nested scopes will become the default and there will be no way to turn them off, but users will have had all of 2.1\u2019s lifetime to fix any breakage resulting from their introduction.\nSee also\n- PEP 227 - Statically Nested Scopes\nWritten and implemented by Jeremy Hylton.\nPEP 236: __future__ Directives\u00b6\nThe reaction to nested scopes was widespread concern about the dangers of breaking code with the 2.1 release, and it was strong enough to make the Pythoneers take a more conservative approach. This approach consists of introducing a convention for enabling optional functionality in release N that will become compulsory in release N+1.\nThe syntax uses a from...import\nstatement using the reserved module name\n__future__\n. Nested scopes can be enabled by the following statement:\nfrom __future__ import nested_scopes\nWhile it looks like a normal import\nstatement, it\u2019s not; there are\nstrict rules on where such a future statement can be put. 
They can only be at\nthe top of a module, and must precede any Python code or regular\nimport\nstatements. This is because such statements can affect how\nthe Python bytecode compiler parses code and generates bytecode, so they must\nprecede any statement that will result in bytecodes being produced.\nSee also\n- PEP 236 - Back to the\n__future__\nWritten by Tim Peters, and primarily implemented by Jeremy Hylton.\nPEP 207: Rich Comparisons\u00b6\nIn earlier versions, Python\u2019s support for implementing comparisons on user-defined\nclasses and extension types was quite simple. Classes could implement a\n__cmp__()\nmethod that was given two instances of a class, and could only\nreturn 0 if they were equal or +1 or -1 if they weren\u2019t; the method couldn\u2019t\nraise an exception or return anything other than a Boolean value. Users of\nNumeric Python often found this model too weak and restrictive, because in the\nnumber-crunching programs that numeric Python is used for, it would be more\nuseful to be able to perform elementwise comparisons of two matrices, returning\na matrix containing the results of a given comparison for each element. If the\ntwo matrices are of different sizes, then the compare has to be able to raise an\nexception to signal the error.\nIn Python 2.1, rich comparisons were added in order to support this need.\nPython classes can now individually overload each of the <\n, <=\n, >\n,\n>=\n, ==\n, and !=\noperations. The new magic method names are:\nOperation |\nMethod name |\n|---|---|\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\n|\n(The magic methods are named after the corresponding Fortran operators .LT.\n.\n.LE.\n, &c. Numeric programmers are almost certainly quite familiar with\nthese names and will find them easy to remember.)\nEach of these magic methods is of the form method(self, other)\n, where\nself\nwill be the object on the left-hand side of the operator, while\nother\nwill be the object on the right-hand side. 
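A small sketch of individually overloaded comparisons (the `Version` class is illustrative, not part of any library):

```python
# Overloading individual rich comparisons: each method receives self
# (the left-hand operand) and other (the right-hand operand) and may
# return any object, not just a Boolean.
class Version:
    def __init__(self, parts):
        self.parts = parts

    def __eq__(self, other):
        return self.parts == other.parts

    def __lt__(self, other):
        return self.parts < other.parts

print(Version((2, 1)) < Version((2, 2)))   # True  -> calls __lt__
print(Version((2, 1)) == Version((2, 1)))  # True  -> calls __eq__
```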
For example, the\nexpression A < B\nwill cause A.__lt__(B)\nto be called.\nEach of these magic methods can return anything at all: a Boolean, a matrix, a list, or any other Python object. Alternatively they can raise an exception if the comparison is impossible, inconsistent, or otherwise meaningless.\nThe built-in cmp(A,B)\nfunction can use the rich comparison machinery,\nand now accepts an optional argument specifying which comparison operation to\nuse; this is given as one of the strings \"<\"\n, \"<=\"\n, \">\"\n, \">=\"\n,\n\"==\"\n, or \"!=\"\n. If called without the optional third argument,\ncmp()\nwill only return -1, 0, or +1 as in previous versions of Python;\notherwise it will call the appropriate method and can return any Python object.\nThere are also corresponding changes of interest to C programmers; there\u2019s a new\nslot tp_richcmp\nin type objects and an API for performing a given rich\ncomparison. I won\u2019t cover the C API here, but will refer you to PEP 207, or to\n2.1\u2019s C API documentation, for the full list of related functions.\nSee also\n- PEP 207 - Rich Comparisons\nWritten by Guido van Rossum, heavily based on earlier work by David Ascher, and implemented by Guido van Rossum.\nPEP 230: Warning Framework\u00b6\nOver its 10 years of existence, Python has accumulated a certain number of obsolete modules and features along the way. It\u2019s difficult to know when a feature is safe to remove, since there\u2019s no way of knowing how much code uses it \u2014 perhaps no programs depend on the feature, or perhaps many do. To enable removing old features in a more structured way, a warning framework was added. When the Python developers want to get rid of a feature, it will first trigger a warning in the next version of Python. The following Python version can then drop the feature, and users will have had a full release cycle to remove uses of the old feature.\nPython 2.1 adds the warning framework to be used in this scheme. 
It adds a warnings module that provides functions to issue warnings, and to filter out warnings that you don't want to be displayed. Third-party modules can also use this framework to deprecate old features that they no longer wish to support.
For example, in Python 2.1 the regex module is deprecated, so importing it causes a warning to be printed:

>>> import regex
__main__:1: DeprecationWarning: the regex module
is deprecated; please use the re module
>>>

Warnings can be issued by calling the warnings.warn() function:

warnings.warn("feature X no longer supported")

The first parameter is the warning message; an additional optional parameter can be used to specify a particular warning category.
Filters can be added to disable certain warnings; a regular expression pattern can be applied to the message or to the module name in order to suppress a warning. For example, you may have a program that uses the regex module and not want to spare the time to convert it to use the re module right now. The warning can be suppressed by calling

import warnings
warnings.filterwarnings(action='ignore',
                        message='.*regex module is deprecated',
                        category=DeprecationWarning,
                        module='__main__')

This adds a filter that will apply only to warnings of the class DeprecationWarning triggered in the __main__ module, applies a regular expression to match only the message about the regex module being deprecated, and will cause such warnings to be ignored.
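The filtering machinery described above still works the same way in today's warnings module. A small self-contained sketch (the old_api function is invented for illustration): a deprecated function issues a DeprecationWarning, and a message-pattern filter suppresses it. warnings.catch_warnings(record=True) is used here only so the effect is observable in one script.

```python
import warnings

def old_api():
    # A hypothetical deprecated function that warns its callers.
    warnings.warn("old_api is deprecated; use new_api", DeprecationWarning)
    return 42

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Suppress only DeprecationWarnings whose message matches the pattern.
    warnings.filterwarnings("ignore",
                            message=".*deprecated.*",
                            category=DeprecationWarning)
    result = old_api()

print(result)        # the function still runs normally
print(len(caught))   # the warning itself was filtered out
```

Filters are consulted most-recently-added first, so the "ignore" entry takes precedence over the catch-all "always" entry.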
Warnings can also\nbe printed only once, printed every time the offending code is executed, or\nturned into exceptions that will cause the program to stop (unless the\nexceptions are caught in the usual way, of course).\nFunctions were also added to Python\u2019s C API for issuing warnings; refer to PEP 230 or to Python\u2019s API documentation for the details.\nSee also\n- PEP 5 - Guidelines for Language Evolution\nWritten by Paul Prescod, to specify procedures to be followed when removing old features from Python. The policy described in this PEP hasn\u2019t been officially adopted, but the eventual policy probably won\u2019t be too different from Prescod\u2019s proposal.\n- PEP 230 - Warning Framework\nWritten and implemented by Guido van Rossum.\nPEP 229: New Build System\u00b6\nWhen compiling Python, the user had to go in and edit the Modules/Setup\nfile in order to enable various additional modules; the default set is\nrelatively small and limited to modules that compile on most Unix platforms.\nThis means that on Unix platforms with many more features, most notably Linux,\nPython installations often don\u2019t contain all useful modules they could.\nPython 2.0 added the Distutils, a set of modules for distributing and installing extensions. In Python 2.1, the Distutils are used to compile much of the standard library of extension modules, autodetecting which ones are supported on the current machine. It\u2019s hoped that this will make Python installations easier and more featureful.\nInstead of having to edit the Modules/Setup\nfile in order to enable\nmodules, a setup.py\nscript in the top directory of the Python source\ndistribution is run at build time, and attempts to discover which modules can be\nenabled by examining the modules and header files on the system. If a module is\nconfigured in Modules/Setup\n, the setup.py\nscript won\u2019t attempt\nto compile that module and will defer to the Modules/Setup\nfile\u2019s\ncontents. 
This provides a way to specify any strange command-line flags or libraries that are required for a specific platform.
In another far-reaching change to the build mechanism, Neil Schemenauer restructured things so Python now uses a single non-recursive makefile, instead of makefiles in the top directory and in each of the Python/, Parser/, Objects/, and Modules/ subdirectories. This makes building Python faster and also makes hacking the Makefiles clearer and simpler.
See also
- PEP 229 - Using Distutils to Build Python
Written and implemented by A.M. Kuchling.
PEP 205: Weak References¶
Weak references, available through the weakref module, are a minor but useful new data type in the Python programmer's toolbox.
Storing a reference to an object (say, in a dictionary or a list) has the side effect of keeping that object alive forever. There are a few specific cases where this behaviour is undesirable, object caches being the most common one, and another being circular references in data structures such as trees.
For example, consider a memoizing function that caches the results of another function f(x) by storing the function's argument and its result in a dictionary:

_cache = {}
def memoize(x):
    if _cache.has_key(x):
        return _cache[x]

    retval = f(x)

    # Cache the returned object
    _cache[x] = retval

    return retval

This version works for simple things such as integers, but it has a side effect; the _cache dictionary holds a reference to the return values, so they'll never be deallocated until the Python process exits and cleans up. This isn't very noticeable for integers, but if f() returns an object, or a data structure that takes up a lot of memory, this can be a problem.
Weak references provide a way to implement a cache that won't keep objects alive beyond their time.
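The core behaviour is easy to see in today's Python (the Node class here is invented for illustration): while a strong reference exists, calling the weak reference returns the object; once the last strong reference is dropped, it returns None.

```python
import gc
import weakref

class Node:
    """Hypothetical object used to demonstrate weak references."""
    def __init__(self, name):
        self.name = name

obj = Node("example")
wr = weakref.ref(obj)      # create a weak reference to obj

alive = wr() is obj        # calling the reference returns the referent
del obj                    # drop the last strong reference...
gc.collect()               # ...and make sure the object is really collected
dead = wr() is None        # the weak reference is now cleared

print(alive, dead)
```

Under CPython's reference counting the object goes away as soon as the last strong reference does; the gc.collect() call just makes the example robust on other implementations.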
If an object is only accessible through weak references, the object will be deallocated and the weak references will now indicate that the object it referred to no longer exists. A weak reference to an object obj is created by calling wr = weakref.ref(obj). The object being referred to is returned by calling the weak reference as if it were a function: wr(). It will return the referenced object, or None if the object no longer exists.
This makes it possible to write a memoize() function whose cache doesn't keep objects alive, by storing weak references in the cache.

_cache = {}
def memoize(x):
    if _cache.has_key(x):
        obj = _cache[x]()
        # If weak reference object still exists,
        # return it
        if obj is not None: return obj

    retval = f(x)

    # Cache a weak reference
    _cache[x] = weakref.ref(retval)

    return retval

The weakref module also allows creating proxy objects which behave like weak references (an object referenced only by proxy objects is deallocated), but instead of requiring an explicit call to retrieve the object, the proxy transparently forwards all operations to the object as long as the object still exists. If the object is deallocated, attempting to use a proxy will cause a weakref.ReferenceError exception to be raised.

proxy = weakref.proxy(obj)
proxy.attr   # Equivalent to obj.attr
proxy.meth() # Equivalent to obj.meth()
del obj
proxy.attr   # raises weakref.ReferenceError
For example, in the Zope web application server,\nfunctions are marked as safe for public access by having a docstring, and in\nJohn Aycock\u2019s SPARK parsing framework, docstrings hold parts of the BNF grammar\nto be parsed. This overloading is unfortunate, since docstrings are really\nintended to hold a function\u2019s documentation; for example, it means you can\u2019t\nproperly document functions intended for private use in Zope.\nArbitrary attributes can now be set and retrieved on functions using the regular Python syntax:\ndef f(): pass\nf.publish = 1\nf.secure = 1\nf.grammar = \"A ::= B (C D)*\"\nThe dictionary containing attributes can be accessed as the function\u2019s\n__dict__\n. Unlike the __dict__\nattribute of class instances, in\nfunctions you can actually assign a new dictionary to __dict__\n, though\nthe new value is restricted to a regular Python dictionary; you can\u2019t be\ntricky and set it to a UserDict\ninstance, or any other random object\nthat behaves like a mapping.\nSee also\n- PEP 232 - Function Attributes\nWritten and implemented by Barry Warsaw.\nPEP 235: Importing Modules on Case-Insensitive Platforms\u00b6\nSome operating systems have filesystems that are case-insensitive, MacOS and\nWindows being the primary examples; on these systems, it\u2019s impossible to\ndistinguish the filenames FILE.PY\nand file.py\n, even though they do store\nthe file\u2019s name in its original case (they\u2019re case-preserving, too).\nIn Python 2.1, the import\nstatement will work to simulate case-sensitivity\non case-insensitive platforms. 
Python will now search for the first case-sensitive match by default, raising an ImportError if no such file is found, so import file will not import a module named FILE.PY. Case-insensitive matching can be requested by setting the PYTHONCASEOK environment variable before starting the Python interpreter.
PEP 217: Interactive Display Hook¶
When using the Python interpreter interactively, the output of commands is displayed using the built-in repr() function. In Python 2.1, the variable sys.displayhook can be set to a callable object which will be called instead of repr(). For example, you can set it to a special pretty-printing function:

>>> # Create a recursive data structure
... L = [1,2,3]
>>> L.append(L)
>>> L # Show Python's default output
[1, 2, 3, [...]]
>>> # Use pprint.pprint() as the display function
... import sys, pprint
>>> sys.displayhook = pprint.pprint
>>> L
[1, 2, 3, <Recursion on list with id=...>]
>>>

See also
- PEP 217 - Display Hook for Interactive Use
Written and implemented by Moshe Zadka.
PEP 208: New Coercion Model¶
How numeric coercion is done at the C level was significantly modified.
This will only affect the authors of C extensions to Python, allowing them more flexibility in writing extension types that support numeric operations.\nExtension types can now set the type flag Py_TPFLAGS_CHECKTYPES\nin their\nPyTypeObject\nstructure to indicate that they support the new coercion model.\nIn such extension types, the numeric slot functions can no longer assume that\nthey\u2019ll be passed two arguments of the same type; instead they may be passed two\narguments of differing types, and can then perform their own internal coercion.\nIf the slot function is passed a type it can\u2019t handle, it can indicate the\nfailure by returning a reference to the Py_NotImplemented\nsingleton value.\nThe numeric functions of the other type will then be tried, and perhaps they can\nhandle the operation; if the other type also returns Py_NotImplemented\n, then\na TypeError\nwill be raised. Numeric methods written in Python can also\nreturn Py_NotImplemented\n, causing the interpreter to act as if the method\ndid not exist (perhaps raising a TypeError\n, perhaps trying another\nobject\u2019s numeric methods).\nSee also\n- PEP 208 - Reworking the Coercion Model\nWritten and implemented by Neil Schemenauer, heavily based upon earlier work by Marc-Andr\u00e9 Lemburg. Read this to understand the fine points of how numeric operations will now be processed at the C level.\nPEP 241: Metadata in Python Packages\u00b6\nA common complaint from Python users is that there\u2019s no single catalog of all\nthe Python modules in existence. T. 
Middleton's Vaults of Parnassus at www.vex.net/parnassus/ (retired in February 2009, available in the Internet Archive Wayback Machine) was the largest catalog of Python modules, but registering software at the Vaults is optional, and many people did not bother.
As a first small step toward fixing the problem, Python software packaged using the Distutils sdist command will include a file named PKG-INFO containing information about the package such as its name, version, and author (metadata, in cataloguing terminology). PEP 241 contains the full list of fields that can be present in the PKG-INFO file. As people began to package their software using Python 2.1, more and more packages will include metadata, making it possible to build automated cataloguing systems and experiment with them. With the resulting experience, perhaps it'll be possible to design a really good catalog and then build support for it into Python 2.2. For example, the Distutils sdist and bdist_* commands could support an upload option that would automatically upload your package to a catalog server.
You can start creating packages containing PKG-INFO even if you're not using Python 2.1, since a new release of the Distutils will be made for users of earlier Python versions. Version 1.0.2 of the Distutils includes the changes described in PEP 241, as well as various bugfixes and enhancements. It will be available from the Distutils SIG at https://www.python.org/community/sigs/current/distutils-sig/.
New and Improved Modules¶
Ka-Ping Yee contributed two new modules: inspect.py, a module for getting information about live Python code, and pydoc.py, a module for interactively converting docstrings to HTML or text. As a bonus, Tools/scripts/pydoc, which is now automatically installed, uses pydoc.py to display documentation given a Python module, package, or class name.
For example, pydoc xml.dom displays the following:

Python Library Documentation: package xml.dom in xml

NAME
    xml.dom - W3C Document Object Model implementation for Python.

FILE
    /usr/local/lib/python2.1/xml/dom/__init__.pyc

DESCRIPTION
    The Python mapping of the Document Object Model is documented in the
    Python Library Reference in the section on the xml.dom package.

    This package contains the following modules:
    ...

pydoc also includes a Tk-based interactive help browser. pydoc quickly becomes addictive; try it out!
Two different modules for unit testing were added to the standard library. The doctest module, contributed by Tim Peters, provides a testing framework based on running embedded examples in docstrings and comparing the results against the expected output. PyUnit, contributed by Steve Purcell, is a unit testing framework inspired by JUnit, which was in turn an adaptation of Kent Beck's Smalltalk testing framework. See https://pyunit.sourceforge.net/ for more information about PyUnit.
The difflib module contains a class, SequenceMatcher, which compares two sequences and computes the changes required to transform one sequence into the other. For example, this module can be used to write a tool similar to the Unix diff program, and in fact the sample program Tools/scripts/ndiff.py demonstrates how to write such a script.
curses.panel, a wrapper for the panel library, part of ncurses and of SYSV curses, was contributed by Thomas Gellekum. The panel library provides windows with the additional feature of depth. Windows can be moved higher or lower in the depth ordering, and the panel library figures out where panels overlap and which sections are visible.
The PyXML package has gone through a few releases since Python 2.0, and Python 2.1 includes an updated version of the xml package.
Some of the noteworthy changes include support for Expat 1.2 and later versions, the ability for Expat parsers to handle files in any encoding supported by Python, and various bugfixes for SAX, DOM, and the minidom module.
Ping also contributed another hook for handling uncaught exceptions. sys.excepthook can be set to a callable object. When an exception isn't caught by any try…except blocks, the exception will be passed to sys.excepthook, which can then do whatever it likes. At the Ninth Python Conference, Ping demonstrated an application for this hook: printing an extended traceback that not only lists the stack frames, but also lists the function arguments and the local variables for each frame.
Various functions in the time module, such as asctime() and localtime(), require a floating-point argument containing the time in seconds since the epoch. The most common use of these functions is to work with the current time, so the floating-point argument has been made optional; when a value isn't provided, the current time will be used. For example, log file entries usually need a string containing the current time; in Python 2.1, time.asctime() can be used, instead of the lengthier time.asctime(time.localtime(time.time())) that was previously required. This change was proposed and implemented by Thomas Wouters.
The ftplib module now defaults to retrieving files in passive mode, because passive mode is more likely to work from behind a firewall. This request came from the Debian bug tracking system, since other Debian packages use ftplib to retrieve files and then don't work from behind a firewall.
It\u2019s deemed unlikely that this will cause problems for anyone, because Netscape defaults to passive mode and few people complain, but if passive mode is unsuitable for your application or network setup, callset_pasv(0)\non FTP objects to disable passive mode.Support for raw socket access has been added to the\nsocket\nmodule, contributed by Grant Edwards.The\npstats\nmodule now contains a simple interactive statistics browser for displaying timing profiles for Python programs, invoked when the module is run as a script. Contributed by Eric S. Raymond.A new implementation-dependent function,\nsys._getframe([depth])\n, has been added to return a given frame object from the current call stack.sys._getframe()\nreturns the frame at the top of the call stack; if the optional integer argument depth is supplied, the function returns the frame that is depth calls below the top of the stack. For example,sys._getframe(1)\nreturns the caller\u2019s frame object.This function is only present in CPython, not in Jython or the .NET implementation. Use it for debugging, and resist the temptation to put it into production code.\nOther Changes and Fixes\u00b6\nThere were relatively few smaller changes made in Python 2.1 due to the shorter release cycle. A search through the CVS change logs turns up 117 patches applied, and 136 bugs fixed; both figures are likely to be underestimates. Some of the more notable changes are:\nA specialized object allocator is now optionally available, that should be faster than the system\nmalloc()\nand have less memory overhead. The allocator uses C\u2019smalloc()\nfunction to get large pools of memory, and then fulfills smaller memory requests from these pools. 
It can be enabled by providing the --with-pymalloc option to the configure script; see Objects/obmalloc.c for the implementation details.
Authors of C extension modules should test their code with the object allocator enabled, because some incorrect code may break, causing core dumps at runtime. There are a bunch of memory allocation functions in Python's C API that have previously been just aliases for the C library's malloc() and free(), meaning that if you accidentally called mismatched functions, the error wouldn't be noticeable. When the object allocator is enabled, these functions aren't aliases of malloc() and free() any more, and calling the wrong function to free memory will get you a core dump. For example, if memory was allocated using PyMem_New, it has to be freed using PyMem_Del(), not free(). A few modules included with Python fell afoul of this and had to be fixed; doubtless there are more third-party modules that will have the same problem. The object allocator was contributed by Vladimir Marangozov.
The speed of line-oriented file I/O has been improved because people often complain about its lack of speed, and because it's often been used as a naïve benchmark. The readline() method of file objects has therefore been rewritten to be much faster. The exact amount of the speedup will vary from platform to platform depending on how slow the C library's getc() was, but is around 66%, and potentially much faster on some particular operating systems. Tim Peters did much of the benchmarking and coding for this change, motivated by a discussion in comp.lang.python.
A new module and method for file objects was also added, contributed by Jeff Epler.
The new method, xreadlines(), is similar to the existing xrange() built-in. xreadlines() returns an opaque sequence object that only supports being iterated over, reading a line on every iteration but not reading the entire file into memory as the existing readlines() method does. You'd use it like this:

for line in sys.stdin.xreadlines():
    # ... do something for each line ...
    ...

For a fuller discussion of the line I/O changes, see the python-dev summary for January 1–15, 2001 at https://mail.python.org/pipermail/python-dev/2001-January/.
A new method, popitem(), was added to dictionaries to enable destructively iterating through the contents of a dictionary; this can be faster for large dictionaries because there's no need to construct a list containing all the keys or values. D.popitem() removes a random (key, value) pair from the dictionary D and returns it as a 2-tuple. This was implemented mostly by Tim Peters and Guido van Rossum, after a suggestion and preliminary patch by Moshe Zadka.
Modules can now control which names are imported when from module import * is used, by defining an __all__ attribute containing a list of names that will be imported. One common complaint is that if the module imports other modules such as sys or string, from module import * will add them to the importing module's namespace. To fix this, simply list the public names in __all__:

# List public names
__all__ = ['Database', 'open']

A stricter version of this patch was first suggested and implemented by Ben Wolfson, but after some python-dev discussion, a weaker final version was checked in.
Applying repr() to strings previously used octal escapes for non-printable characters; for example, a newline was '\012'. This was a vestigial trace of Python's C ancestry, but today octal is of very little practical use.
Ka-Ping Yee suggested using hex escapes instead of octal ones, and using the \n, \t, \r escapes for the appropriate characters, and implemented this new formatting.
Syntax errors detected at compile-time can now raise exceptions containing the filename and line number of the error, a pleasant side effect of the compiler reorganization done by Jeremy Hylton.
C extensions which import other modules have been changed to use PyImport_ImportModule(), which means that they will use any import hooks that have been installed. This is also encouraged for third-party extensions that need to import some other module from C code.
The size of the Unicode character database was shrunk by another 340K thanks to Fredrik Lundh.
Some new ports were contributed: MacOS X (by Steven Majewski), Cygwin (by Jason Tishler), RISCOS (by Dietmar Schwertberger), and Unixware 7 (by Billy G. Allie).
And there's the usual list of minor bugfixes, minor memory leaks, docstring edits, and other tweaks, too lengthy to be worth itemizing; see the CVS logs for the full details if you want them.
Acknowledgements¶
The author would like to thank the following people for offering suggestions on various drafts of this article: Graeme Cross, David Goodger, Jay Graves, Michael Hudson, Marc-André Lemburg, Fredrik Lundh, Neil Schemenauer, Thomas Wouters.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 8079}
{"url": "https://docs.python.org/3/whatsnew/2.2.html", "title": "What's New in Python 2.2", "content": "What's New in Python 2.2¶
- Author:
A.M. Kuchling
Introduction¶
This article explains the new features in Python 2.2.2, released on October 14, 2002. Python 2.2.2 is a bugfix release of Python 2.2, originally released on December 21, 2001.
Python 2.2 can be thought of as the "cleanup release". There are some features such as generators and iterators that are completely new, but most of the changes, significant and far-reaching though they may be, are aimed at cleaning up irregularities and dark corners of the language design.
This article doesn't attempt to provide a complete specification of the new features, but instead provides a convenient overview. For full details, you should refer to the documentation for Python 2.2, such as the Python Library Reference and the Python Reference Manual. If you want to understand the complete implementation and design rationale for a change, refer to the PEP for a particular new feature.
PEPs 252 and 253: Type and Class Changes¶
The largest and most far-reaching changes in Python 2.2 are to Python's model of objects and classes. The changes should be backward compatible, so it's likely that your code will continue to run unchanged, but the changes provide some amazing new capabilities.
Before beginning this, the longest and most complicated section of this article, I\u2019ll provide an overview of the changes and offer some comments.\nA long time ago I wrote a web page listing flaws in Python\u2019s design. One of the\nmost significant flaws was that it\u2019s impossible to subclass Python types\nimplemented in C. In particular, it\u2019s not possible to subclass built-in types,\nso you can\u2019t just subclass, say, lists in order to add a single useful method to\nthem. The UserList\nmodule provides a class that supports all of the\nmethods of lists and that can be subclassed further, but there\u2019s lots of C code\nthat expects a regular Python list and won\u2019t accept a UserList\ninstance.\nPython 2.2 fixes this, and in the process adds some exciting new capabilities. A brief summary:\nYou can subclass built-in types such as lists and even integers, and your subclasses should work in every place that requires the original type.\nIt\u2019s now possible to define static and class methods, in addition to the instance methods available in previous versions of Python.\nIt\u2019s also possible to automatically call methods on accessing or setting an instance attribute by using a new mechanism called properties. Many uses of\n__getattr__()\ncan be rewritten to use properties instead, making the resulting code simpler and faster. As a small side benefit, attributes can now have docstrings, too.The list of legal attributes for an instance can be limited to a particular set using slots, making it possible to safeguard against typos and perhaps make more optimizations possible in future versions of Python.\nSome users have voiced concern about all these changes. Sure, they say, the new features are neat and lend themselves to all sorts of tricks that weren\u2019t possible in previous versions of Python, but they also make the language more complicated. 
Some people have said that they've always recommended Python for its simplicity, and feel that its simplicity is being lost.
Personally, I think there's no need to worry. Many of the new features are quite esoteric, and you can write a lot of Python code without ever needing to be aware of them. Writing a simple class is no more difficult than it ever was, so you don't need to bother learning or teaching them unless they're actually needed. Some very complicated tasks that were previously only possible from C will now be possible in pure Python, and to my mind that's all for the better.
I'm not going to attempt to cover every single corner case and small change that were required to make the new features work. Instead this section will paint only the broad strokes. See section Related Links for further sources of information about Python 2.2's new object model.
Old and New Classes¶
First, you should know that Python 2.2 really has two kinds of classes: classic or old-style classes, and new-style classes. The old-style class model is exactly the same as the class model in earlier versions of Python. All the new features described in this section apply only to new-style classes. This divergence isn't intended to last forever; eventually old-style classes will be dropped, possibly in Python 3.0.
So how do you define a new-style class? You do it by subclassing an existing new-style class. Most of Python's built-in types, such as integers, lists, dictionaries, and even files, are new-style classes now. A new-style class named object, the base class for all built-in types, has also been added so if no built-in type is suitable, you can just subclass object:

class C(object):
    def __init__ (self):
        ...
    ...

This means that class statements that don't have any base classes are always classic classes in Python 2.2.
(Actually you can also change this by setting a module-level variable named __metaclass__ — see PEP 253 for the details — but it's easier to just subclass object.)
The type objects for the built-in types are available as built-ins, named using a clever trick. Python has always had built-in functions named int(), float(), and str(). In 2.2, they aren't functions any more, but type objects that behave as factories when called.

>>> int
<type 'int'>
>>> int('123')
123

To make the set of types complete, new type objects such as dict() and file() have been added. Here's a more interesting example, adding a lock() method to file objects:

class LockableFile(file):
    def lock (self, operation, length=0, start=0, whence=0):
        import fcntl
        return fcntl.lockf(self.fileno(), operation,
                           length, start, whence)

The now-obsolete posixfile module contained a class that emulated all of a file object's methods and also added a lock() method, but this class couldn't be passed to internal functions that expected a built-in file, something which is possible with our new LockableFile.
Descriptors¶
In previous versions of Python, there was no consistent way to discover what attributes and methods were supported by an object. There were some informal conventions, such as defining __members__ and __methods__ attributes that were lists of names, but often the author of an extension type or a class wouldn't bother to define them. You could fall back on inspecting the __dict__ of an object, but when class inheritance or an arbitrary __getattr__() hook were in use this could still be inaccurate.
The one big idea underlying the new class model is that an API for describing the attributes of an object using descriptors has been formalized. Descriptors specify the value of an attribute, stating whether it's a method or a field.
With the descriptor API, static methods and class methods become possible, as well as more exotic constructs.
Attribute descriptors are objects that live inside class objects, and have a few attributes of their own:
- __name__ is the attribute's name.
- __doc__ is the attribute's docstring.
- __get__(object) is a method that retrieves the attribute value from object.
- __set__(object, value) sets the attribute on object to value.
- __delete__(object, value) deletes the value attribute of object.
For example, when you write obj.x, the steps that Python actually performs are:

descriptor = obj.__class__.x
descriptor.__get__(obj)

For methods, descriptor.__get__ returns a temporary object that's callable, and wraps up the instance and the method to be called on it. This is also why static methods and class methods are now possible; they have descriptors that wrap up just the method, or the method and the class. As a brief explanation of these new kinds of methods, static methods aren't passed the instance, and therefore resemble regular functions. Class methods are passed the class of the object, but not the object itself. Static and class methods are defined like this:

class C(object):
    def f(arg1, arg2):
        ...
    f = staticmethod(f)

    def g(cls, arg1, arg2):
        ...
    g = classmethod(g)

The staticmethod() function takes the function f(), and returns it wrapped up in a descriptor so it can be stored in the class object. You might expect there to be special syntax for creating such methods (def static f, defstatic f(), or something like that) but no such syntax has been defined yet; that's been left for future versions of Python.
More new features, such as slots and properties, are also implemented as new kinds of descriptors, and it's not difficult to write a descriptor class that does something novel.
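A small working sketch of a custom descriptor, written in today's spelling of the protocol (the method signatures now take an extra owner-class argument, and __set_name__ dates from Python 3.6; the Positive and Account names are invented here): a data descriptor that validates values on assignment.

```python
class Positive:
    """A data descriptor that rejects non-positive values on assignment."""

    def __set_name__(self, owner, name):
        # Remember where to stash the real value on each instance.
        self.storage = "_" + name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self            # accessed on the class itself
        return getattr(obj, self.storage)

    def __set__(self, obj, value):
        if value <= 0:
            raise ValueError("must be positive")
        setattr(obj, self.storage, value)


class Account:
    balance = Positive()           # the descriptor lives in the class


a = Account()
a.balance = 10                     # goes through Positive.__set__
print(a.balance)                   # goes through Positive.__get__
```

As with eiffelmethod() below, someone using Account never needs to know a descriptor is involved; a.balance looks like a plain attribute.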
For example, it would be possible to write a descriptor class that made it possible to write Eiffel-style preconditions and postconditions for a method. A class that used this feature might be defined like this:
from eiffel import eiffelmethod

class C(object):
    def f(self, arg1, arg2):
        # The actual function
        ...
    def pre_f(self):
        # Check preconditions
        ...
    def post_f(self):
        # Check postconditions
        ...
    f = eiffelmethod(f, pre_f, post_f)
Note that a person using the new eiffelmethod() doesn’t have to understand anything about descriptors. This is why I think the new features don’t increase the basic complexity of the language. There will be a few wizards who need to know about it in order to write eiffelmethod() or the ZODB or whatever, but most users will just write code on top of the resulting libraries and ignore the implementation details.
Multiple Inheritance: The Diamond Rule¶
Multiple inheritance has also been made more useful through changing the rules under which names are resolved. Consider this set of classes (diagram taken from PEP 253 by Guido van Rossum):
      class A:
        ^ ^  def save(self): ...
       /   \
      /     \
     /       \
    /         \
class B     class C:
    ^         ^  def save(self): ...
     \       /
      \     /
       \   /
        \ /
      class D
The lookup rule for classic classes is simple but not very smart; the base classes are searched depth-first, going from left to right. A reference to D.save() will search the classes D, B, and then A, where save() would be found and returned. C.save() would never be found at all. This is bad, because if C’s save() method is saving some internal state specific to C, not calling it will result in that state never getting saved.
New-style classes follow a different algorithm that’s a bit more complicated to explain, but does the right thing in this situation.
(Note that Python 2.3 changes this algorithm to one that produces the same results in most cases, but produces more useful results for really complicated inheritance graphs.)
- List all the base classes, following the classic lookup rule and include a class multiple times if it’s visited repeatedly. In the above example, the list of visited classes is [D, B, A, C, A].
- Scan the list for duplicated classes. If any are found, remove all but one occurrence, leaving the last one in the list. In the above example, the list becomes [D, B, C, A] after dropping duplicates.
Following this rule, referring to D.save() will return C.save(), which is the behaviour we’re after. This lookup rule is the same as the one followed by Common Lisp. A new built-in function, super(), provides a way to get at a class’s superclasses without having to reimplement Python’s algorithm. The most commonly used form will be super(class, obj), which returns a bound superclass object (not the actual class object). This form will be used in methods to call a method in the superclass; for example, D’s save() method would look like this:
class D(B, C):
    def save(self):
        # Call superclass .save()
        super(D, self).save()
        # Save D's private information here
        ...
super() can also return unbound superclass objects when called as super(class) or super(class1, class2), but this probably won’t often be useful.
Attribute Access¶
A fair number of sophisticated Python classes define hooks for attribute access using __getattr__(); most commonly this is done for convenience, to make code more readable by automatically mapping an attribute access such as obj.parent into a method call such as obj.get_parent. Python 2.2 adds some new ways of controlling attribute access.
First, __getattr__(attr_name) is still supported by new-style classes, and nothing about it has changed.
As before, it will be called when an attempt is made to access obj.foo and no attribute named foo is found in the instance’s dictionary.
New-style classes also support a new method, __getattribute__(attr_name). The difference between the two methods is that __getattribute__() is always called whenever any attribute is accessed, while the old __getattr__() is only called if foo isn’t found in the instance’s dictionary.
However, Python 2.2’s support for properties will often be a simpler way to trap attribute references. Writing a __getattr__() method is complicated because, to avoid recursion, you can’t use regular attribute accesses inside it, and instead have to mess around with the contents of __dict__. __getattr__() methods also end up being called by Python when it checks for other methods such as __repr__() or __coerce__(), and so have to be written with this in mind. Finally, calling a function on every attribute access results in a sizable performance loss.
property is a new built-in type that packages up three functions that get, set, or delete an attribute, and a docstring. For example, if you want to define a size attribute that’s computed, but also settable, you could write:
class C(object):
    def get_size(self):
        result = ... computation ...
        return result
    def set_size(self, size):
        ... compute something based on the size
        and set internal state appropriately ...
    # Define a property. The 'delete this attribute'
    # method is defined as None, so the attribute
    # can't be deleted.
    size = property(get_size, set_size,
                    None,
                    "Storage size of this instance")
That is certainly clearer and easier to write than a pair of __getattr__()/__setattr__() methods that check for the size attribute and handle it specially while retrieving all other attributes from the instance’s __dict__.
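In modern Python the same idea is usually written with the property decorator, which was added later. A small runnable sketch, using an illustrative Square class that is not from this document:

```python
class Square:
    """Computed, settable 'area' attribute backed by a side length."""
    def __init__(self, side):
        self.side = side

    @property
    def area(self):
        # Getter: recomputed on every access.
        return self.side * self.side

    @area.setter
    def area(self, value):
        # Setter: update internal state from the new area.
        self.side = value ** 0.5

s = Square(3)
print(s.area)   # prints 9
s.area = 16
print(s.side)   # prints 4.0
```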
Accesses to size are also the only ones which have to perform the work of calling a function, so references to other attributes run at their usual speed.
Finally, it’s possible to constrain the list of attributes that can be referenced on an object using the new __slots__ class attribute. Python objects are usually very dynamic; at any time it’s possible to define a new attribute on an instance by just doing obj.new_attr=1. A new-style class can define a class attribute named __slots__ to limit the legal attributes to a particular set of names. An example will make this clear:
>>> class C(object):
...     __slots__ = ('template', 'name')
...
>>> obj = C()
>>> print obj.template
None
>>> obj.template = 'Test'
>>> print obj.template
Test
>>> obj.newattr = None
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: 'C' object has no attribute 'newattr'
Note how you get an AttributeError on the attempt to assign to an attribute not listed in __slots__.
PEP 234: Iterators¶
Another significant addition to 2.2 is an iteration interface at both the C and Python levels. Objects can define how they can be looped over by callers.
In Python versions up to 2.1, the usual way to make for item in obj work is to define a __getitem__() method that looks something like this:
def __getitem__(self, index):
    return <next item>
__getitem__() is more properly used to define an indexing operation on an object so that you can write obj[5] to retrieve the sixth element. It’s a bit misleading when you’re using this only to support for loops.
Consider some file-like object that wants to be looped over; the index parameter is essentially meaningless, as the class probably assumes that a series of __getitem__() calls will be made with index incrementing by one each time.
In other words, the presence of the __getitem__() method doesn’t mean that using file[5] to randomly access the sixth element will work, though it really should.
In Python 2.2, iteration can be implemented separately, and __getitem__() methods can be limited to classes that really do support random access. The basic idea of iterators is simple. A new built-in function, iter(obj) or iter(C, sentinel), is used to get an iterator. iter(obj) returns an iterator for the object obj, while iter(C, sentinel) returns an iterator that will invoke the callable object C until it returns sentinel to signal that the iterator is done.
Python classes can define an __iter__() method, which should create and return a new iterator for the object; if the object is its own iterator, this method can just return self. In particular, iterators will usually be their own iterators. Extension types implemented in C can implement a tp_iter function in order to return an iterator, and extension types that want to behave as iterators can define a tp_iternext function.
So, after all this, what do iterators actually do? They have one required method, next(), which takes no arguments and returns the next value. When there are no more values to be returned, calling next() should raise the StopIteration exception.
>>> L = [1,2,3]
>>> i = iter(L)
>>> print i
<iterator object at 0x...>
>>> i.next()
1
>>> i.next()
2
>>> i.next()
3
>>> i.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
StopIteration
>>>
In 2.2, Python’s for statement no longer expects a sequence; it expects something for which iter() will return an iterator. For backward compatibility and convenience, an iterator is automatically constructed for sequences that don’t implement __iter__() or a tp_iter slot, so for i in [1,2,3] will still work. Wherever the Python interpreter loops over a sequence, it’s been changed to use the iterator protocol.
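A class implementing the protocol directly might look like this. The sketch uses modern Python spelling, where next() was later renamed __next__(); the Countdown example is illustrative, not from this document:

```python
class Countdown:
    """Iterates from n down to 1, then stops."""
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        # The object is its own iterator, so just return self.
        return self

    def __next__(self):          # spelled next() in Python 2.2
        if self.n <= 0:
            raise StopIteration  # signals that the iterator is exhausted
        self.n -= 1
        return self.n + 1

print(list(Countdown(3)))        # prints [3, 2, 1]
```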
This means you can do things like this:
>>> L = [1,2,3]
>>> i = iter(L)
>>> a,b,c = i
>>> a,b,c
(1, 2, 3)
Iterator support has been added to some of Python’s basic types. Calling iter() on a dictionary will return an iterator which loops over its keys:
>>> m = {'Jan': 1, 'Feb': 2, 'Mar': 3, 'Apr': 4, 'May': 5, 'Jun': 6,
...      'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12}
>>> for key in m: print key, m[key]
...
Mar 3
Feb 2
Aug 8
Sep 9
May 5
Jun 6
Jul 7
Jan 1
Apr 4
Nov 11
Dec 12
Oct 10
That’s just the default behaviour. If you want to iterate over keys, values, or key/value pairs, you can explicitly call the iterkeys(), itervalues(), or iteritems() methods to get an appropriate iterator.
In a minor related change, the in operator now works on dictionaries, so key in dict is now equivalent to dict.has_key(key).
Files also provide an iterator, which calls the readline() method until there are no more lines in the file. This means you can now read each line of a file using code like this:
for line in file:
    # do something for each line
    ...
Note that you can only go forward in an iterator; there’s no way to get the previous element, reset the iterator, or make a copy of it. An iterator object could provide such additional capabilities, but the iterator protocol only requires a next() method.
See also
- PEP 234 - Iterators
Written by Ka-Ping Yee and GvR; implemented by the Python Labs crew, mostly by GvR and Tim Peters.
PEP 255: Simple Generators¶
Generators are another new feature, one that interacts with the introduction of iterators.
You’re doubtless familiar with how function calls work in Python or C. When you call a function, it gets a private namespace where its local variables are created. When the function reaches a return statement, the local variables are destroyed and the resulting value is returned to the caller.
A later call to the same function will get a fresh new set of local variables.
But, what if the local variables weren’t thrown away on exiting a function? What if you could later resume the function where it left off? This is what generators provide; they can be thought of as resumable functions.
Here’s the simplest example of a generator function:
def generate_ints(N):
    for i in range(N):
        yield i
A new keyword, yield, was introduced for generators. Any function containing a yield statement is a generator function; this is detected by Python’s bytecode compiler which compiles the function specially as a result. Because a new keyword was introduced, generators must be explicitly enabled in a module by including a from __future__ import generators statement near the top of the module’s source code. In Python 2.3 this statement will become unnecessary.
When you call a generator function, it doesn’t return a single value; instead it returns a generator object that supports the iterator protocol. On executing the yield statement, the generator outputs the value of i, similar to a return statement. The big difference between yield and a return statement is that on reaching a yield the generator’s state of execution is suspended and local variables are preserved. On the next call to the generator’s next() method, the function will resume executing immediately after the yield statement.
(For complicated reasons, the yield statement isn’t allowed inside the try block of a try…finally statement; read PEP 255 for a full explanation of the interaction between yield and exceptions.)
Here’s a sample usage of the generate_ints() generator:
>>> gen = generate_ints(3)
>>> gen
<generator object at 0x...>
>>> gen.next()
0
>>> gen.next()
1
>>> gen.next()
2
>>> gen.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 2, in generate_ints
StopIteration
You could equally write for i in generate_ints(5), or a,b,c = generate_ints(3).
Inside a generator function, the return statement can only be used without a value, and signals the end of the procession of values; afterwards the generator cannot return any further values. return with a value, such as return 5, is a syntax error inside a generator function. The end of the generator’s results can also be indicated by raising StopIteration manually, or by just letting the flow of execution fall off the bottom of the function.
You could achieve the effect of generators manually by writing your own class and storing all the local variables of the generator as instance variables. For example, returning a list of integers could be done by setting self.count to 0, and having the next() method increment self.count and return it. However, for a moderately complicated generator, writing a corresponding class would be much messier. Lib/test/test_generators.py contains a number of more interesting examples.
The simplest one implements an in-order traversal of a tree using generators recursively.
# A recursive generator that generates Tree leaves in in-order.
def inorder(t):
    if t:
        for x in inorder(t.left):
            yield x
        yield t.label
        for x in inorder(t.right):
            yield x
Two other examples in Lib/test/test_generators.py produce solutions for the N-Queens problem (placing N queens on an NxN chess board so that no queen threatens another) and the Knight’s Tour (a route that takes a knight to every square of an NxN chessboard without visiting any square twice).
The idea of generators comes from other programming languages, especially Icon (https://www2.cs.arizona.edu/icon/), where the idea of generators is central. In Icon, every expression and function call behaves like a generator. One example from “An Overview of the Icon Programming Language” at https://www2.cs.arizona.edu/icon/docs/ipd266.htm gives an idea of what this looks like:
sentence := "Store it in the neighboring harbor"
if (i := find("or", sentence)) > 5 then write(i)
In Icon the find() function returns the indexes at which the substring “or” is found: 3, 23, 33. In the if statement, i is first assigned a value of 3, but 3 is less than 5, so the comparison fails, and Icon retries it with the second value of 23. 23 is greater than 5, so the comparison now succeeds, and the code prints the value 23 to the screen.
Python doesn’t go nearly as far as Icon in adopting generators as a central concept. Generators are considered a new part of the core Python language, but learning or using them isn’t compulsory; if they don’t solve any problems that you have, feel free to ignore them.
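The traversal can be exercised with a tiny hand-built tree. The Tree class below is an assumption for illustration (the real one lives in test_generators.py), and the code keeps the explicit inner loops since Python 2.2 had no yield from:

```python
class Tree:
    """Minimal binary tree node (illustrative, not from test_generators.py)."""
    def __init__(self, label, left=None, right=None):
        self.label = label
        self.left = left
        self.right = right

def inorder(t):
    # Recursively yield the left subtree, then the node, then the right subtree.
    if t:
        for x in inorder(t.left):
            yield x
        yield t.label
        for x in inorder(t.right):
            yield x

#       2
#      / \
#     1   3
t = Tree(2, Tree(1), Tree(3))
print(list(inorder(t)))   # prints [1, 2, 3]
```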
One novel feature of Python\u2019s interface as compared to Icon\u2019s is that a generator\u2019s state is represented as a concrete object (the iterator) that can be passed around to other functions or stored in a data structure.\nSee also\n- PEP 255 - Simple Generators\nWritten by Neil Schemenauer, Tim Peters, Magnus Lie Hetland. Implemented mostly by Neil Schemenauer and Tim Peters, with other fixes from the Python Labs crew.\nPEP 237: Unifying Long Integers and Integers\u00b6\nIn recent versions, the distinction between regular integers, which are 32-bit\nvalues on most machines, and long integers, which can be of arbitrary size, was\nbecoming an annoyance. For example, on platforms that support files larger than\n2**32\nbytes, the tell()\nmethod of file objects has to return a long\ninteger. However, there were various bits of Python that expected plain integers\nand would raise an error if a long integer was provided instead. For example,\nin Python 1.5, only regular integers could be used as a slice index, and\n'abc'[1L:]\nwould raise a TypeError\nexception with the message \u2018slice\nindex must be int\u2019.\nPython 2.2 will shift values from short to long integers as required. The \u2018L\u2019\nsuffix is no longer needed to indicate a long integer literal, as now the\ncompiler will choose the appropriate type. (Using the \u2018L\u2019 suffix will be\ndiscouraged in future 2.x versions of Python, triggering a warning in Python\n2.4, and probably dropped in Python 3.0.) Many operations that used to raise an\nOverflowError\nwill now return a long integer as their result. For\nexample:\n>>> 1234567890123\n1234567890123L\n>>> 2 ** 64\n18446744073709551616L\nIn most cases, integers and long integers will now be treated identically. You\ncan still distinguish them with the type()\nbuilt-in function, but that\u2019s\nrarely needed.\nSee also\n- PEP 237 - Unifying Long Integers and Integers\nWritten by Moshe Zadka and Guido van Rossum. 
Implemented mostly by Guido van Rossum.
PEP 238: Changing the Division Operator¶
The most controversial change in Python 2.2 heralds the start of an effort to fix an old design flaw that’s been in Python from the beginning. Currently Python’s division operator, /, behaves like C’s division operator when presented with two integer arguments: it returns an integer result that’s truncated down when there would be a fractional part. For example, 3/2 is 1, not 1.5, and (-1)/2 is -1, not -0.5. This means that the results of division can vary unexpectedly depending on the type of the two operands; and because Python is dynamically typed, it can be difficult to determine the possible types of the operands.
(The controversy is over whether this is really a design flaw, and whether it’s worth breaking existing code to fix this. It’s caused endless discussions on python-dev, and in July 2001 erupted into a storm of acidly sarcastic postings on comp.lang.python. I won’t argue for either side here and will stick to describing what’s implemented in 2.2. Read PEP 238 for a summary of arguments and counter-arguments.)
Because this change might break code, it’s being introduced very gradually. Python 2.2 begins the transition, but the switch won’t be complete until Python 3.0.
First, I’ll borrow some terminology from PEP 238. “True division” is the division that most non-programmers are familiar with: 3/2 is 1.5, 1/4 is 0.25, and so forth. “Floor division” is what Python’s / operator currently does when given integer operands; the result is the floor of the value returned by true division.
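As a quick illustration in modern Python, where this transition is long complete and / always means true division:

```python
# True division keeps the fractional part; floor division
# rounds toward negative infinity (not toward zero).
print(7 / 2)       # prints 3.5
print(7 // 2)      # prints 3
print(-1 // 2)     # prints -1
print(1.0 // 2.0)  # prints 0.0
```

Note that -1 // 2 is -1, not 0: the result is floored, matching the (-1)/2 example above.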
\u201cClassic division\u201d is the current mixed behaviour of /\n; it\nreturns the result of floor division when the operands are integers, and returns\nthe result of true division when one of the operands is a floating-point number.\nHere are the changes 2.2 introduces:\nA new operator,\n//\n, is the floor division operator. (Yes, we know it looks like C++\u2019s comment symbol.)//\nalways performs floor division no matter what the types of its operands are, so1 // 2\nis 0 and1.0 // 2.0\nis also 0.0.//\nis always available in Python 2.2; you don\u2019t need to enable it using a__future__\nstatement.By including a\nfrom __future__ import division\nin a module, the/\noperator will be changed to return the result of true division, so1/2\nis 0.5. Without the__future__\nstatement,/\nstill means classic division. The default meaning of/\nwill not change until Python 3.0.Classes can define methods called\n__truediv__()\nand__floordiv__()\nto overload the two division operators. At the C level, there are also slots in thePyNumberMethods\nstructure so extension types can define the two operators.Python 2.2 supports some command-line arguments for testing whether code will work with the changed division semantics. Running python with\n-Q warn\nwill cause a warning to be issued whenever division is applied to two integers. You can use this to find code that\u2019s affected by the change and fix it. By default, Python 2.2 will simply perform classic division without a warning; the warning will be turned on by default in Python 2.3.\nSee also\n- PEP 238 - Changing the Division Operator\nWritten by Moshe Zadka and Guido van Rossum. Implemented by Guido van Rossum..\nUnicode Changes\u00b6\nPython\u2019s Unicode support has been enhanced a bit in 2.2. Unicode strings are\nusually stored as UCS-2, as 16-bit unsigned integers. 
Python 2.2 can also be compiled to use UCS-4, 32-bit unsigned integers, as its internal encoding by supplying --enable-unicode=ucs4 to the configure script. (It’s also possible to specify --disable-unicode to completely disable Unicode support.)
When built to use UCS-4 (a “wide Python”), the interpreter can natively handle Unicode characters from U+000000 to U+110000, so the range of legal values for the unichr() function is expanded accordingly. Using an interpreter compiled to use UCS-2 (a “narrow Python”), values greater than 65535 will still cause unichr() to raise a ValueError exception. This is all described in PEP 261, “Support for ‘wide’ Unicode characters”; consult it for further details.
Another change is simpler to explain. Since their introduction, Unicode strings have supported an encode() method to convert the string to a selected encoding such as UTF-8 or Latin-1. A symmetric decode([encoding]) method has been added to 8-bit strings (though not to Unicode strings) in 2.2. decode() assumes that the string is in the specified encoding and decodes it, returning whatever is returned by the codec.
Using this new feature, codecs have been added for tasks not directly related to Unicode. For example, codecs have been added for uu-encoding, MIME’s base64 encoding, and compression with the zlib module:
>>> s = """Here is a lengthy piece of redundant, overly verbose,
... and repetitive text.
... 
\"\"\"\n>>> data = s.encode('zlib')\n>>> data\n'x\\x9c\\r\\xc9\\xc1\\r\\x80 \\x10\\x04\\xc0?Ul...'\n>>> data.decode('zlib')\n'Here is a lengthy piece of redundant, overly verbose,\\nand repetitive text.\\n'\n>>> print s.encode('uu')\nbegin 666 \nM2&5R92!I=F5R8F]S92P*86YD(')E<&5T:71I=F4@=&5X=\"X*\nend\n>>> \"sheesh\".encode('rot-13')\n'furrfu'\nTo convert a class instance to Unicode, a __unicode__()\nmethod can be\ndefined by a class, analogous to __str__()\n.\nencode()\n, decode()\n, and __unicode__()\nwere implemented by\nMarc-Andr\u00e9 Lemburg. The changes to support using UCS-4 internally were\nimplemented by Fredrik Lundh and Martin von L\u00f6wis.\nSee also\n- PEP 261 - Support for \u2018wide\u2019 Unicode characters\nWritten by Paul Prescod.\nPEP 227: Nested Scopes\u00b6\nIn Python 2.1, statically nested scopes were added as an optional feature, to be\nenabled by a from __future__ import nested_scopes\ndirective. In 2.2 nested\nscopes no longer need to be specially enabled, and are now always present. The\nrest of this section is a copy of the description of nested scopes from my\n\u201cWhat\u2019s New in Python 2.1\u201d document; if you read it when 2.1 came out, you can\nskip the rest of this section.\nThe largest change introduced in Python 2.1, and made complete in 2.2, is to Python\u2019s scoping rules. In Python 2.0, at any given time there are at most three namespaces used to look up variable names: local, module-level, and the built-in namespace. This often surprised people because it didn\u2019t match their intuitive expectations. For example, a nested recursive function definition doesn\u2019t work:\ndef f():\n...\ndef g(value):\n...\nreturn g(value-1) + 1\n...\nThe function g()\nwill always raise a NameError\nexception, because\nthe binding of the name g\nisn\u2019t in either its local namespace or in the\nmodule-level namespace. 
This isn\u2019t much of a problem in practice (how often do\nyou recursively define interior functions like this?), but this also made using\nthe lambda\nexpression clumsier, and this was a problem in practice.\nIn code which uses lambda\nyou can often find local variables being\ncopied by passing them as the default values of arguments.\ndef find(self, name):\n\"Return list of any entries equal to 'name'\"\nL = filter(lambda x, name=name: x == name,\nself.list_attribute)\nreturn L\nThe readability of Python code written in a strongly functional style suffers greatly as a result.\nThe most significant change to Python 2.2 is that static scoping has been added\nto the language to fix this problem. As a first effect, the name=name\ndefault argument is now unnecessary in the above example. Put simply, when a\ngiven variable name is not assigned a value within a function (by an assignment,\nor the def\n, class\n, or import\nstatements),\nreferences to the variable will be looked up in the local namespace of the\nenclosing scope. A more detailed explanation of the rules, and a dissection of\nthe implementation, can be found in the PEP.\nThis change may cause some compatibility problems for code where the same variable name is used both at the module level and as a local variable within a function that contains further function definitions. This seems rather unlikely though, since such code would have been pretty confusing to read in the first place.\nOne side effect of the change is that the from module import *\nand\nexec\nstatements have been made illegal inside a function scope under\ncertain conditions. The Python reference manual has said all along that from\nmodule import *\nis only legal at the top level of a module, but the CPython\ninterpreter has never enforced this before. As part of the implementation of\nnested scopes, the compiler which turns Python source into bytecodes has to\ngenerate different code to access variables in a containing scope. 
from module import * and exec make it impossible for the compiler to figure this out, because they add names to the local namespace that are unknowable at compile time. Therefore, if a function contains function definitions or lambda expressions with free variables, the compiler will flag this by raising a SyntaxError exception.
To make the preceding explanation a bit clearer, here’s an example:
x = 1
def f():
    # The next line is a syntax error
    exec 'x=2'
    def g():
        return x
Line 4 containing the exec statement is a syntax error, since exec would define a new local variable named x whose value should be accessed by g().
This shouldn’t be much of a limitation, since exec is rarely used in most Python code (and when it is used, it’s often a sign of a poor design anyway).
See also
- PEP 227 - Statically Nested Scopes
Written and implemented by Jeremy Hylton.
New and Improved Modules¶
- The xmlrpclib module was contributed to the standard library by Fredrik Lundh, providing support for writing XML-RPC clients. XML-RPC is a simple remote procedure call protocol built on top of HTTP and XML. For example, the following snippet retrieves a list of RSS channels from the O’Reilly Network, and then lists the recent headlines for one channel:
import xmlrpclib
s = xmlrpclib.Server(
    'http://www.oreillynet.com/meerkat/xml-rpc/server.php')
channels = s.meerkat.getChannels()
# channels is a list of dictionaries, like this:
# [{'id': 4, 'title': 'Freshmeat Daily News'}
#  {'id': 190, 'title': '32Bits Online'},
#  {'id': 4549, 'title': '3DGamers'}, ... ]

# Get the items for one channel
items = s.meerkat.getItems( {'channel': 4} )
# 'items' is another list of dictionaries, like this:
# [{'link': 'http://freshmeat.net/releases/52719/',
#  'description': 'A utility which converts HTML to XSL FO.',
#  'title': 'html2fo 0.3 (Default)'}, ... ]
The SimpleXMLRPCServer module makes it easy to create straightforward XML-RPC servers.
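A matching server-side sketch, written with the modern module names (xmlrpc.server and xmlrpc.client; in 2.2 these were the SimpleXMLRPCServer and xmlrpclib modules) and a hypothetical add function:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Bind to an ephemeral port and expose one function named 'add'.
server = SimpleXMLRPCServer(('127.0.0.1', 0), logRequests=False)
server.register_function(lambda x, y: x + y, 'add')
port = server.server_address[1]

# Serve exactly one request in a background thread.
t = threading.Thread(target=server.handle_request)
t.start()

# Call the function remotely over HTTP, as xmlrpclib.Server did in 2.2.
proxy = ServerProxy('http://127.0.0.1:%d' % port)
result = proxy.add(2, 3)
t.join()
server.server_close()
print(result)   # prints 5
```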
See http://xmlrpc.scripting.com/ for more information about XML-RPC.
- The new hmac module implements the HMAC algorithm described by RFC 2104. (Contributed by Gerhard Häring.)
- Several functions that originally returned lengthy tuples now return pseudo-sequences that still behave like tuples but also have mnemonic attributes such as st_mtime or tm_year. The enhanced functions include stat(), fstat(), statvfs(), and fstatvfs() in the os module, and localtime(), gmtime(), and strptime() in the time module.
For example, to obtain a file’s size using the old tuples, you’d end up writing something like file_size = os.stat(filename)[stat.ST_SIZE], but now this can be written more clearly as file_size = os.stat(filename).st_size.
The original patch for this feature was contributed by Nick Mathewson.
- The Python profiler has been extensively reworked and various errors in its output have been corrected. (Contributed by Fred L. Drake, Jr. and Tim Peters.)
- The socket module can be compiled to support IPv6; specify the --enable-ipv6 option to Python’s configure script. (Contributed by Jun-ichiro “itojun” Hagino.)
- Two new format characters were added to the struct module for 64-bit integers on platforms that support the C long long type. q is for a signed 64-bit integer, and Q is for an unsigned one. The value is returned in Python’s long integer type. (Contributed by Tim Peters.)
- In the interpreter’s interactive mode, there’s a new built-in function help() that uses the pydoc module introduced in Python 2.1 to provide interactive help. help(object) displays any available help text about object. help() with no argument puts you in an online help utility, where you can enter the names of functions, classes, or modules to read their help text.
(Contributed by Guido van Rossum, using Ka-Ping Yee’s pydoc module.)
- Various bugfixes and performance improvements have been made to the SRE engine underlying the re module. For example, the re.sub() and re.split() functions have been rewritten in C. Another contributed patch speeds up certain Unicode character ranges by a factor of two, and there is a new finditer() method that returns an iterator over all the non-overlapping matches in a given string. (SRE is maintained by Fredrik Lundh. The BIGCHARSET patch was contributed by Martin von Löwis.)
- The smtplib module now supports RFC 2487, “Secure SMTP over TLS”, so it’s now possible to encrypt the SMTP traffic between a Python program and the mail transport agent being handed a message. smtplib also supports SMTP authentication. (Contributed by Gerhard Häring.)
- The imaplib module, maintained by Piers Lauder, has support for several new extensions: the NAMESPACE extension defined in RFC 2342, SORT, GETACL and SETACL. (Contributed by Anthony Baxter and Michel Pelletier.)
- The rfc822 module’s parsing of email addresses is now compliant with RFC 2822, an update to RFC 822. (The module’s name is not going to be changed to rfc2822.) A new package, email, has also been added for parsing and generating e-mail messages. (Contributed by Barry Warsaw, and arising out of his work on Mailman.)
- The difflib module now contains a new Differ class for producing human-readable lists of changes (a “delta”) between two sequences of lines of text. There are also two generator functions, ndiff() and restore(), which respectively return a delta from two sequences, or one of the original sequences from a delta. (Grunt work contributed by David Goodger, from ndiff.py code by Tim Peters who then did the generatorization.)
- New constants ascii_letters, ascii_lowercase, and ascii_uppercase were added to the string module.
There were several modules in the standard library that used string.letters to mean the ranges A-Za-z, but that assumption is incorrect when locales are in use, because string.letters varies depending on the set of legal characters defined by the current locale. The buggy modules have all been fixed to use ascii_letters instead. (Reported by an unknown person; fixed by Fred L. Drake, Jr.)\nThe mimetypes module now makes it easier to use alternative MIME-type databases by the addition of a MimeTypes class, which takes a list of filenames to be parsed. (Contributed by Fred L. Drake, Jr.)\nA Timer class was added to the threading module that allows scheduling an activity to happen at some future time. (Contributed by Itamar Shtull-Trauring.)\nInterpreter Changes and Fixes\u00b6\nSome of the changes only affect people who deal with the Python interpreter at the C level because they\u2019re writing Python extension modules, embedding the interpreter, or just hacking on the interpreter itself. If you only write Python code, none of the changes described here will affect you very much.\nProfiling and tracing functions can now be implemented in C, which can operate at much higher speeds than Python-based functions and should reduce the overhead of profiling and tracing. This will be of interest to authors of development environments for Python. Two new C functions were added to Python\u2019s API, PyEval_SetProfile() and PyEval_SetTrace(). The existing sys.setprofile() and sys.settrace() functions still exist, and have simply been changed to use the new C-level interface. (Contributed by Fred L. Drake, Jr.)\nAnother low-level API, primarily of interest to implementers of Python debuggers and development tools, was added. PyInterpreterState_Head() and PyInterpreterState_Next() let a caller walk through all the existing interpreter objects; PyInterpreterState_ThreadHead() and PyThreadState_Next() allow looping over all the thread states for a given interpreter. 
(Contributed by David Beazley.)\nThe C-level interface to the garbage collector has been changed to make it easier to write extension types that support garbage collection and to debug misuses of the functions. Various functions have slightly different semantics, so a bunch of functions had to be renamed. Extensions that use the old API will still compile but will not participate in garbage collection, so updating them for 2.2 should be considered fairly high priority.\nTo upgrade an extension module to the new API, perform the following steps:\n- Rename Py_TPFLAGS_GC to Py_TPFLAGS_HAVE_GC.\n- Use PyObject_GC_New() or PyObject_GC_NewVar() to allocate objects, and PyObject_GC_Del() to deallocate them.\n- Rename PyObject_GC_Init() to PyObject_GC_Track() and PyObject_GC_Fini() to PyObject_GC_UnTrack().\n- Remove PyGC_HEAD_SIZE from object size calculations.\n- Remove calls to PyObject_AS_GC() and PyObject_FROM_GC().\nA new et format sequence was added to PyArg_ParseTuple(); et takes both a parameter and an encoding name, and converts the parameter to the given encoding if the parameter turns out to be a Unicode string, or leaves it alone if it\u2019s an 8-bit string, assuming it to already be in the desired encoding. This differs from the es format character, which assumes that 8-bit strings are in Python\u2019s default ASCII encoding and converts them to the specified new encoding. (Contributed by M.-A. Lemburg, and used for the MBCS support on Windows described in the following section.)\nA different argument parsing function, PyArg_UnpackTuple(), has been added that\u2019s simpler and presumably faster. 
Instead of specifying a format string, the caller simply gives the minimum and maximum number of arguments expected, and a set of pointers to PyObject* variables that will be filled in with argument values.\nTwo new flags, METH_NOARGS and METH_O, are available in method definition tables to simplify implementation of methods with no arguments or a single untyped argument. Calling such methods is more efficient than calling a corresponding method that uses METH_VARARGS. Also, the old METH_OLDARGS style of writing C methods is now officially deprecated.\nTwo new wrapper functions, PyOS_snprintf() and PyOS_vsnprintf(), were added to provide cross-platform implementations for the relatively new snprintf() and vsnprintf() C lib APIs. In contrast to the standard sprintf() and vsprintf() functions, the Python versions check the bounds of the buffer used to protect against buffer overruns. (Contributed by M.-A. Lemburg.)\nThe _PyTuple_Resize() function has lost an unused parameter, so now it takes 2 parameters instead of 3. The third argument was never used, and can simply be discarded when porting code from earlier versions to Python 2.2.\nOther Changes and Fixes\u00b6\nAs usual there were a bunch of other improvements and bugfixes scattered throughout the source tree. A search through the CVS change logs finds there were 527 patches applied and 683 bugs fixed between Python 2.1 and 2.2; 2.2.1 applied 139 patches and fixed 143 bugs; 2.2.2 applied 106 patches and fixed 82 bugs. These figures are likely to be underestimates.\nSome of the more notable changes are:\nThe code for the MacOS port for Python, maintained by Jack Jansen, is now kept in the main Python CVS tree, and many changes have been made to support MacOS X.\nThe most significant change is the ability to build Python as a framework, enabled by supplying the --enable-framework option to the configure script when compiling Python. 
According to Jack Jansen, \u201cThis installs a self-contained Python installation plus the OS X framework \u201cglue\u201d into /Library/Frameworks/Python.framework (or another location of choice). For now there is little immediate added benefit to this (actually, there is the disadvantage that you have to change your PATH to be able to find Python), but it is the basis for creating a full-blown Python application, porting the MacPython IDE, possibly using Python as a standard OSA scripting language and much more.\u201d\nMost of the MacPython toolbox modules, which interface to MacOS APIs such as windowing, QuickTime, scripting, etc. have been ported to OS X, but they\u2019ve been left commented out in setup.py. People who want to experiment with these modules can uncomment them manually.\nKeyword arguments passed to built-in functions that don\u2019t take them now cause a TypeError exception to be raised, with the message \u201cfunction takes no keyword arguments\u201d.\nWeak references, added in Python 2.1 as an extension module, are now part of the core because they\u2019re used in the implementation of new-style classes. The ReferenceError exception has therefore moved from the weakref module to become a built-in exception.\nA new script, Tools/scripts/cleanfuture.py by Tim Peters, automatically removes obsolete __future__ statements from Python source code.\nAn additional flags argument has been added to the built-in function compile(), so the behaviour of __future__ statements can now be correctly observed in simulated shells, such as those presented by IDLE and other development environments. This is described in PEP 264. (Contributed by Michael Hudson.)\nThe new license introduced with Python 1.6 wasn\u2019t GPL-compatible. This is fixed by some minor textual changes to the 2.2 license, so it\u2019s now legal to embed Python inside a GPLed program again. 
Note that Python itself is not GPLed, but instead is under a license that\u2019s essentially equivalent to the BSD license, same as it always was. The license changes were also applied to the Python 2.0.1 and 2.1.1 releases.\nWhen presented with a Unicode filename on Windows, Python will now convert it to an MBCS encoded string, as used by the Microsoft file APIs. As MBCS is explicitly used by the file APIs, Python\u2019s choice of ASCII as the default encoding turns out to be an annoyance. On Unix, the locale\u2019s character set is used if locale.nl_langinfo(CODESET) is available. (Windows support was contributed by Mark Hammond with assistance from Marc-Andr\u00e9 Lemburg. Unix support was added by Martin von L\u00f6wis.)\nLarge file support is now enabled on Windows. (Contributed by Tim Peters.)\nThe Tools/scripts/ftpmirror.py script now parses a .netrc file, if you have one. (Contributed by Mike Romberg.)\nSome features of the object returned by the xrange() function are now deprecated, and trigger warnings when they\u2019re accessed; they\u2019ll disappear in Python 2.3. xrange objects tried to pretend they were full sequence types by supporting slicing, sequence multiplication, and the in operator, but these features were rarely used and therefore buggy. The tolist() method and the start, stop, and step attributes are also being deprecated. At the C level, the fourth argument to the PyRange_New() function, repeat, has also been deprecated.\nThere were a bunch of patches to the dictionary implementation, mostly to fix potential core dumps if a dictionary contains objects that sneakily changed their hash value, or mutated the dictionary they were contained in. 
For a while python-dev fell into a gentle rhythm of Michael Hudson finding a case that dumped core, Tim Peters fixing the bug, Michael finding another case, and round and round it went.\nOn Windows, Python can now be compiled with Borland C thanks to a number of patches contributed by Stephen Hansen, though the result isn\u2019t fully functional yet. (But this is progress\u2026)\nAnother Windows enhancement: Wise Solutions generously offered PythonLabs use of their InstallerMaster 8.1 system. Earlier PythonLabs Windows installers used Wise 5.0a, which was beginning to show its age. (Packaged up by Tim Peters.)\nFiles ending in .pyw can now be imported on Windows. .pyw is a Windows-only thing, used to indicate that a script needs to be run using PYTHONW.EXE instead of PYTHON.EXE in order to prevent a DOS console from popping up to display the output. This patch makes it possible to import such scripts, in case they\u2019re also usable as modules. (Implemented by David Bolen.)\nOn platforms where Python uses the C dlopen() function to load extension modules, it\u2019s now possible to set the flags used by dlopen() using the sys.getdlopenflags() and sys.setdlopenflags() functions. (Contributed by Bram Stolk.)\nThe pow() built-in function no longer supports 3 arguments when floating-point numbers are supplied. pow(x, y, z) returns (x**y) % z, but this is never useful for floating-point numbers, and the final result varies unpredictably depending on the platform. A call such as pow(2.0, 8.0, 7.0) will now raise a TypeError exception.\nAcknowledgements\u00b6\nThe author would like to thank the following people for offering suggestions, corrections and assistance with various drafts of this article: Fred Bremmer, Keith Briggs, Andrew Dalke, Fred L. 
Drake, Jr., Carel Fellinger, David Goodger, Mark Hammond, Stephen Hansen, Michael Hudson, Jack Jansen, Marc-Andr\u00e9 Lemburg, Martin von L\u00f6wis, Fredrik Lundh, Michael McLay, Nick Mathewson, Paul Moore, Gustavo Niemeyer, Don O\u2019Donnell, Joonas Paalasma, Tim Peters, Jens Quade, Tom Reinhardt, Neil Schemenauer, Guido van Rossum, Greg Ward, Edward Welbourne.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 12920}
+{"url": "https://docs.python.org/3/reference/toplevel_components.html", "title": "Top-level components", "content": "9. Top-level components\u00b6\nThe Python interpreter can get its input from a number of sources: from a script passed to it as standard input or as program argument, typed in interactively, from a module source file, etc. This chapter gives the syntax used in these cases.\n9.1. Complete Python programs\u00b6\nWhile a language specification need not prescribe how the language interpreter\nis invoked, it is useful to have a notion of a complete Python program. A\ncomplete Python program is executed in a minimally initialized environment: all\nbuilt-in and standard modules are available, but none have been initialized,\nexcept for sys\n(various system services), builtins\n(built-in\nfunctions, exceptions and None\n) and __main__\n. The latter is used to\nprovide the local and global namespace for execution of the complete program.\nThe syntax for a complete Python program is that for file input, described in the next section.\nThe interpreter may also be invoked in interactive mode; in this case, it does\nnot read and execute a complete program but reads and executes one statement\n(possibly compound) at a time. 
The initial environment is identical to that of\na complete program; each statement is executed in the namespace of\n__main__\n.\nA complete program can be passed to the interpreter\nin three forms: with the -c string command line option, as a file\npassed as the first command line argument, or as standard input. If the file\nor standard input is a tty device, the interpreter enters interactive mode;\notherwise, it executes the file as a complete program.\n9.2. File input\u00b6\nAll input read from non-interactive files has the same form:\nfile_input ::= (NEWLINE | statement)* ENDMARKER\nThis syntax is used in the following situations:\nwhen parsing a complete Python program (from a file or from a string);\nwhen parsing a module;\nwhen parsing a string passed to the exec() function.\n9.3. Interactive input\u00b6\nInput in interactive mode is parsed using the following grammar:\ninteractive_input ::= [stmt_list] NEWLINE | compound_stmt NEWLINE | ENDMARKER\nNote that a (top-level) compound statement must be followed by a blank line in interactive mode; this is needed to help the parser detect the end of the input.\n9.4. Expression input\u00b6\neval() is used for expression input. It ignores leading whitespace. The\nstring argument to eval() must have the following form:\neval_input ::= expression_list NEWLINE* ENDMARKER", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 598}
+{"url": "https://docs.python.org/3/library/email.mime.html", "title": "email.mime: Creating email and MIME objects from scratch", "content": "email.mime: Creating email and MIME objects from scratch\u00b6\nSource code: Lib/email/mime/\nThis module is part of the legacy (Compat32) email API. 
Its functionality\nis partially replaced by the contentmanager in the new API, but\nin certain applications these classes may still be useful, even in non-legacy\ncode.\nOrdinarily, you get a message object structure by passing a file or some text to\na parser, which parses the text and returns the root message object. However\nyou can also build a complete message structure from scratch, or even individual\nMessage objects by hand. In fact, you can also take an\nexisting structure and add new Message objects, move them\naround, etc. This makes a very convenient interface for slicing-and-dicing MIME\nmessages.\nYou can create a new object structure by creating Message\ninstances, adding attachments and all the appropriate headers manually. For MIME\nmessages though, the email package provides some convenient subclasses to\nmake things easier.\nHere are the classes:\n- class email.mime.base.MIMEBase(_maintype, _subtype, *, policy=compat32, **_params)\u00b6\nModule:\nemail.mime.base\nThis is the base class for all the MIME-specific subclasses of\nMessage. Ordinarily you won\u2019t create instances specifically of MIMEBase, although you could. MIMEBase is provided primarily as a convenient base class for more specific MIME-aware subclasses.\n_maintype is the Content-Type major type (e.g. text or image), and _subtype is the Content-Type minor type (e.g. plain or gif). _params is a parameter key/value dictionary and is passed directly to Message.add_header.\nIf policy is specified (it defaults to the compat32 policy), it will be passed to Message.\nThe MIMEBase class always adds a Content-Type header (based on _maintype, _subtype, and _params), and a MIME-Version header (always set to 1.0).\nChanged in version 3.6: Added policy keyword-only parameter.\n- class email.mime.nonmultipart.MIMENonMultipart\u00b6\nModule:\nemail.mime.nonmultipart\nA subclass of MIMEBase, this is an intermediate base class for MIME messages that are not multipart. 
The primary purpose of this class is to prevent the use of the attach() method, which only makes sense for multipart messages. If attach() is called, a MultipartConversionError exception is raised.\n- class email.mime.multipart.MIMEMultipart(_subtype='mixed', boundary=None, _subparts=None, *, policy=compat32, **_params)\u00b6\nModule:\nemail.mime.multipart\nA subclass of MIMEBase, this is an intermediate base class for MIME messages that are multipart. Optional _subtype defaults to mixed, but can be used to specify the subtype of the message. A Content-Type header of multipart/_subtype will be added to the message object. A MIME-Version header will also be added.\nOptional boundary is the multipart boundary string. When None (the default), the boundary is calculated when needed (for example, when the message is serialized).\n_subparts is a sequence of initial subparts for the payload. It must be possible to convert this sequence to a list. You can always attach new subparts to the message by using the Message.attach method.\nOptional policy argument defaults to compat32.\nAdditional parameters for the Content-Type header are taken from the keyword arguments, or passed into the _params argument, which is a keyword dictionary.\nChanged in version 3.6: Added policy keyword-only parameter.\n- class email.mime.application.MIMEApplication(_data, _subtype='octet-stream', _encoder=email.encoders.encode_base64, *, policy=compat32, **_params)\u00b6\nModule:\nemail.mime.application\nA subclass of MIMENonMultipart, the MIMEApplication class is used to represent MIME message objects of major type application. _data contains the bytes for the raw application data. Optional _subtype specifies the MIME subtype and defaults to octet-stream.\nOptional _encoder is a callable (i.e. function) which will perform the actual encoding of the data for transport. This callable takes one argument, which is the MIMEApplication instance. 
It should use get_payload() and set_payload() to change the payload to encoded form. It should also add any Content-Transfer-Encoding or other headers to the message object as necessary. The default encoding is base64. See the email.encoders module for a list of the built-in encoders.\nOptional policy argument defaults to compat32.\n_params are passed straight through to the base class constructor.\nChanged in version 3.6: Added policy keyword-only parameter.\n- class email.mime.audio.MIMEAudio(_audiodata, _subtype=None, _encoder=email.encoders.encode_base64, *, policy=compat32, **_params)\u00b6\nModule:\nemail.mime.audio\nA subclass of MIMENonMultipart, the MIMEAudio class is used to create MIME message objects of major type audio. _audiodata contains the bytes for the raw audio data. If this data can be decoded as au, wav, aiff, or aifc, then the subtype will be automatically included in the Content-Type header. Otherwise you can explicitly specify the audio subtype via the _subtype argument. If the minor type could not be guessed and _subtype was not given, then TypeError is raised.\nOptional _encoder is a callable (i.e. function) which will perform the actual encoding of the audio data for transport. This callable takes one argument, which is the MIMEAudio instance. It should use get_payload() and set_payload() to change the payload to encoded form. It should also add any Content-Transfer-Encoding or other headers to the message object as necessary. The default encoding is base64. 
See the email.encoders module for a list of the built-in encoders.\nOptional policy argument defaults to compat32.\n_params are passed straight through to the base class constructor.\nChanged in version 3.6: Added policy keyword-only parameter.\n- class email.mime.image.MIMEImage(_imagedata, _subtype=None, _encoder=email.encoders.encode_base64, *, policy=compat32, **_params)\u00b6\nModule:\nemail.mime.image\nA subclass of MIMENonMultipart, the MIMEImage class is used to create MIME message objects of major type image. _imagedata contains the bytes for the raw image data. If this data type can be detected (jpeg, png, gif, tiff, rgb, pbm, pgm, ppm, rast, xbm, bmp, webp, and exr attempted), then the subtype will be automatically included in the Content-Type header. Otherwise you can explicitly specify the image subtype via the _subtype argument. If the minor type could not be guessed and _subtype was not given, then TypeError is raised.\nOptional _encoder is a callable (i.e. function) which will perform the actual encoding of the image data for transport. This callable takes one argument, which is the MIMEImage instance. It should use get_payload() and set_payload() to change the payload to encoded form. It should also add any Content-Transfer-Encoding or other headers to the message object as necessary. The default encoding is base64. See the email.encoders module for a list of the built-in encoders.\nOptional policy argument defaults to compat32.\n_params are passed straight through to the MIMEBase constructor.\nChanged in version 3.6: Added policy keyword-only parameter.\n- class email.mime.message.MIMEMessage(_msg, _subtype='rfc822', *, policy=compat32)\u00b6\nModule:\nemail.mime.message\nA subclass of MIMENonMultipart, the MIMEMessage class is used to create MIME objects of main type message. 
_msg is used as the payload, and must be an instance of class Message (or a subclass thereof), otherwise a TypeError is raised.\nOptional _subtype sets the subtype of the message; it defaults to rfc822.\nOptional policy argument defaults to compat32.\nChanged in version 3.6: Added policy keyword-only parameter.\n- class email.mime.text.MIMEText(_text, _subtype='plain', _charset=None, *, policy=compat32)\u00b6\nModule:\nemail.mime.text\nA subclass of MIMENonMultipart, the MIMEText class is used to create MIME objects of major type text. _text is the string for the payload. _subtype is the minor type and defaults to plain. _charset is the character set of the text and is passed as an argument to the MIMENonMultipart constructor; it defaults to us-ascii if the string contains only ascii code points, and utf-8 otherwise. The _charset parameter accepts either a string or a Charset instance.\nUnless the _charset argument is explicitly set to None, the MIMEText object created will have both a Content-Type header with a charset parameter, and a Content-Transfer-Encoding header. This means that a subsequent set_payload call will not result in an encoded payload, even if a charset is passed in the set_payload command. You can \u201creset\u201d this behavior by deleting the Content-Transfer-Encoding header, after which a set_payload call will automatically encode the new payload (and add a new Content-Transfer-Encoding header).\nOptional policy argument defaults to compat32.\nChanged in version 3.5: _charset also accepts Charset instances.\nChanged in version 3.6: Added policy keyword-only parameter.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2208}
+{"url": "https://docs.python.org/3/whatsnew/2.3.html", "title": "What\u2019s New in Python 2.3", "content": "What\u2019s New in Python 2.3\u00b6\n- Author:\nA.M. Kuchling\nThis article explains the new features in Python 2.3. 
Python 2.3 was released on July 29, 2003.\nThe main themes for Python 2.3 are polishing some of the features added in 2.2,\nadding various small but useful enhancements to the core language, and expanding\nthe standard library. The new object model introduced in the previous version\nhas benefited from 18 months of bugfixes and from optimization efforts that have\nimproved the performance of new-style classes. A few new built-in functions\nhave been added such as sum()\nand enumerate()\n. The in\noperator can now be used for substring searches (e.g. \"ab\" in \"abc\"\nreturns\nTrue\n).\nSome of the many new library features include Boolean, set, heap, and date/time data types, the ability to import modules from ZIP-format archives, metadata support for the long-awaited Python catalog, an updated version of IDLE, and modules for logging messages, wrapping text, parsing CSV files, processing command-line options, using BerkeleyDB databases\u2026 the list of new and enhanced modules is lengthy.\nThis article doesn\u2019t attempt to provide a complete specification of the new features, but instead provides a convenient overview. For full details, you should refer to the documentation for Python 2.3, such as the Python Library Reference and the Python Reference Manual. If you want to understand the complete implementation and design rationale, refer to the PEP for a particular new feature.\nPEP 218: A Standard Set Datatype\u00b6\nThe new sets\nmodule contains an implementation of a set datatype. The\nSet\nclass is for mutable sets, sets that can have members added and\nremoved. 
The ImmutableSet\nclass is for sets that can\u2019t be modified,\nand instances of ImmutableSet\ncan therefore be used as dictionary keys.\nSets are built on top of dictionaries, so the elements within a set must be\nhashable.\nHere\u2019s a simple example:\n>>> import sets\n>>> S = sets.Set([1,2,3])\n>>> S\nSet([1, 2, 3])\n>>> 1 in S\nTrue\n>>> 0 in S\nFalse\n>>> S.add(5)\n>>> S.remove(3)\n>>> S\nSet([1, 2, 5])\n>>>\nThe union and intersection of sets can be computed with the union()\nand\nintersection()\nmethods; an alternative notation uses the bitwise operators\n&\nand |\n. Mutable sets also have in-place versions of these methods,\nunion_update()\nand intersection_update()\n.\n>>> S1 = sets.Set([1,2,3])\n>>> S2 = sets.Set([4,5,6])\n>>> S1.union(S2)\nSet([1, 2, 3, 4, 5, 6])\n>>> S1 | S2 # Alternative notation\nSet([1, 2, 3, 4, 5, 6])\n>>> S1.intersection(S2)\nSet([])\n>>> S1 & S2 # Alternative notation\nSet([])\n>>> S1.union_update(S2)\n>>> S1\nSet([1, 2, 3, 4, 5, 6])\n>>>\nIt\u2019s also possible to take the symmetric difference of two sets. This is the\nset of all elements in the union that aren\u2019t in the intersection. Another way\nof putting it is that the symmetric difference contains all elements that are in\nexactly one set. Again, there\u2019s an alternative notation (^\n), and an\nin-place version with the ungainly name symmetric_difference_update()\n.\n>>> S1 = sets.Set([1,2,3,4])\n>>> S2 = sets.Set([3,4,5,6])\n>>> S1.symmetric_difference(S2)\nSet([1, 2, 5, 6])\n>>> S1 ^ S2\nSet([1, 2, 5, 6])\n>>>\nThere are also issubset()\nand issuperset()\nmethods for checking\nwhether one set is a subset or superset of another:\n>>> S1 = sets.Set([1,2,3])\n>>> S2 = sets.Set([2,3])\n>>> S2.issubset(S1)\nTrue\n>>> S1.issubset(S2)\nFalse\n>>> S1.issuperset(S2)\nTrue\n>>>\nSee also\n- PEP 218 - Adding a Built-In Set Object Type\nPEP written by Greg V. Wilson. Implemented by Greg V. 
Wilson, Alex Martelli, and GvR.\nPEP 255: Simple Generators\u00b6\nIn Python 2.2, generators were added as an optional feature, to be enabled by a\nfrom __future__ import generators\ndirective. In 2.3 generators no longer\nneed to be specially enabled, and are now always present; this means that\nyield\nis now always a keyword. The rest of this section is a copy of\nthe description of generators from the \u201cWhat\u2019s New in Python 2.2\u201d document; if\nyou read it back when Python 2.2 came out, you can skip the rest of this\nsection.\nYou\u2019re doubtless familiar with how function calls work in Python or C. When you\ncall a function, it gets a private namespace where its local variables are\ncreated. When the function reaches a return\nstatement, the local\nvariables are destroyed and the resulting value is returned to the caller. A\nlater call to the same function will get a fresh new set of local variables.\nBut, what if the local variables weren\u2019t thrown away on exiting a function?\nWhat if you could later resume the function where it left off? This is what\ngenerators provide; they can be thought of as resumable functions.\nHere\u2019s the simplest example of a generator function:\ndef generate_ints(N):\nfor i in range(N):\nyield i\nA new keyword, yield\n, was introduced for generators. Any function\ncontaining a yield\nstatement is a generator function; this is\ndetected by Python\u2019s bytecode compiler which compiles the function specially as\na result.\nWhen you call a generator function, it doesn\u2019t return a single value; instead it\nreturns a generator object that supports the iterator protocol. On executing\nthe yield\nstatement, the generator outputs the value of i\n,\nsimilar to a return\nstatement. The big difference between\nyield\nand a return\nstatement is that on reaching a\nyield\nthe generator\u2019s state of execution is suspended and local\nvariables are preserved. 
On the next call to the generator\u2019s .next()\nmethod, the function will resume executing immediately after the\nyield\nstatement. (For complicated reasons, the yield\nstatement isn\u2019t allowed inside the try\nblock of a\ntry\n\u2026finally\nstatement; read PEP 255 for a full\nexplanation of the interaction between yield\nand exceptions.)\nHere\u2019s a sample usage of the generate_ints()\ngenerator:\n>>> gen = generate_ints(3)\n>>> gen\n\n>>> gen.next()\n0\n>>> gen.next()\n1\n>>> gen.next()\n2\n>>> gen.next()\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in ?\nFile \"<stdin>\", line 2, in generate_ints\nStopIteration\nYou could equally write for i in generate_ints(5)\n, or a,b,c =\ngenerate_ints(3)\n.\nInside a generator function, the return\nstatement can only be used\nwithout a value, and signals the end of the procession of values; afterwards the\ngenerator cannot return any further values. return\nwith a value, such\nas return 5\n, is a syntax error inside a generator function. The end of the\ngenerator\u2019s results can also be indicated by raising StopIteration\nmanually, or by just letting the flow of execution fall off the bottom of the\nfunction.\nYou could achieve the effect of generators manually by writing your own class\nand storing all the local variables of the generator as instance variables. For\nexample, returning a list of integers could be done by setting self.count\nto\n0, and having the next()\nmethod increment self.count\nand return it.\nHowever, for a moderately complicated generator, writing a corresponding class\nwould be much messier. Lib/test/test_generators.py\ncontains a number of\nmore interesting examples. 
The simplest one implements an in-order traversal of a tree using generators recursively.
# A recursive generator that generates Tree leaves in in-order.
def inorder(t):
    if t:
        for x in inorder(t.left):
            yield x
        yield t.label
        for x in inorder(t.right):
            yield x
Two other examples in Lib/test/test_generators.py produce solutions for the N-Queens problem (placing N queens on an N×N chess board so that no queen threatens another) and the Knight’s Tour (a route that takes a knight to every square of an N×N chessboard without visiting any square twice).
The idea of generators comes from other programming languages, especially Icon (https://www2.cs.arizona.edu/icon/), where the idea of generators is central. In Icon, every expression and function call behaves like a generator. One example from “An Overview of the Icon Programming Language” at https://www2.cs.arizona.edu/icon/docs/ipd266.htm gives an idea of what this looks like:
sentence := "Store it in the neighboring harbor"
if (i := find("or", sentence)) > 5 then write(i)
In Icon the find() function returns the indexes at which the substring “or” is found: 3, 23, 33. In the if statement, i is first assigned a value of 3, but 3 is less than 5, so the comparison fails, and Icon retries it with the second value of 23. 23 is greater than 5, so the comparison now succeeds, and the code prints the value 23 to the screen.
Python doesn’t go nearly as far as Icon in adopting generators as a central concept. Generators are considered part of the core Python language, but learning or using them isn’t compulsory; if they don’t solve any problems that you have, feel free to ignore them.
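Returning to the tree-traversal example above, a runnable modern sketch: the minimal Tree node class here is our own stand-in (the real class from test_generators.py isn't shown), and Python 3's yield from replaces the explicit re-yield loops:

```python
class Tree:
    # Minimal illustrative node type; not the one from test_generators.py.
    def __init__(self, label, left=None, right=None):
        self.label = label
        self.left = left
        self.right = right

def inorder(t):
    # Each recursive call is itself a generator; 'yield from' drains it,
    # re-yielding its values one at a time.
    if t:
        yield from inorder(t.left)
        yield t.label
        yield from inorder(t.right)

root = Tree(2, Tree(1), Tree(3))
print(list(inorder(root)))  # [1, 2, 3]
```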
One novel feature of Python\u2019s interface as compared to Icon\u2019s is that a generator\u2019s state is represented as a concrete object (the iterator) that can be passed around to other functions or stored in a data structure.\nSee also\n- PEP 255 - Simple Generators\nWritten by Neil Schemenauer, Tim Peters, Magnus Lie Hetland. Implemented mostly by Neil Schemenauer and Tim Peters, with other fixes from the Python Labs crew.\nPEP 263: Source Code Encodings\u00b6\nPython source files can now be declared as being in different character set encodings. Encodings are declared by including a specially formatted comment in the first or second line of the source file. For example, a UTF-8 file can be declared with:\n#!/usr/bin/env python\n# -*- coding: UTF-8 -*-\nWithout such an encoding declaration, the default encoding used is 7-bit ASCII.\nExecuting or importing modules that contain string literals with 8-bit\ncharacters and have no encoding declaration will result in a\nDeprecationWarning\nbeing signalled by Python 2.3; in 2.4 this will be a\nsyntax error.\nThe encoding declaration only affects Unicode string literals, which will be converted to Unicode using the specified encoding. Note that Python identifiers are still restricted to ASCII characters, so you can\u2019t have variable names that use characters outside of the usual alphanumerics.\nSee also\n- PEP 263 - Defining Python Source Code Encodings\nWritten by Marc-Andr\u00e9 Lemburg and Martin von L\u00f6wis; implemented by Suzuki Hisao and Martin von L\u00f6wis.\nPEP 273: Importing Modules from ZIP Archives\u00b6\nThe new zipimport\nmodule adds support for importing modules from a\nZIP-format archive. 
You don\u2019t need to import the module explicitly; it will be\nautomatically imported if a ZIP archive\u2019s filename is added to sys.path\n.\nFor example:\namk@nyman:~/src/python$ unzip -l /tmp/example.zip\nArchive: /tmp/example.zip\nLength Date Time Name\n-------- ---- ---- ----\n8467 11-26-02 22:30 jwzthreading.py\n-------- -------\n8467 1 file\namk@nyman:~/src/python$ ./python\nPython 2.3 (#1, Aug 1 2003, 19:54:32)\n>>> import sys\n>>> sys.path.insert(0, '/tmp/example.zip') # Add .zip file to front of path\n>>> import jwzthreading\n>>> jwzthreading.__file__\n'/tmp/example.zip/jwzthreading.py'\n>>>\nAn entry in sys.path\ncan now be the filename of a ZIP archive. The ZIP\narchive can contain any kind of files, but only files named *.py\n,\n*.pyc\n, or *.pyo\ncan be imported. If an archive only contains\n*.py\nfiles, Python will not attempt to modify the archive by adding the\ncorresponding *.pyc\nfile, meaning that if a ZIP archive doesn\u2019t contain\n*.pyc\nfiles, importing may be rather slow.\nA path within the archive can also be specified to only import from a\nsubdirectory; for example, the path /tmp/example.zip/lib/\nwould only\nimport from the lib/\nsubdirectory within the archive.\nSee also\n- PEP 273 - Import Modules from Zip Archives\nWritten by James C. Ahlstrom, who also provided an implementation. Python 2.3 follows the specification in PEP 273, but uses an implementation written by Just van Rossum that uses the import hooks described in PEP 302. See section PEP 302: New Import Hooks for a description of the new import hooks.\nPEP 277: Unicode file name support for Windows NT\u00b6\nOn Windows NT, 2000, and XP, the system stores file names as Unicode strings. 
Traditionally, Python has represented file names as byte strings, which is inadequate because it renders some file names inaccessible.\nPython now allows using arbitrary Unicode strings (within the limitations of the\nfile system) for all functions that expect file names, most notably the\nopen()\nbuilt-in function. If a Unicode string is passed to\nos.listdir()\n, Python now returns a list of Unicode strings. A new\nfunction, os.getcwdu()\n, returns the current directory as a Unicode string.\nByte strings still work as file names, and on Windows Python will transparently\nconvert them to Unicode using the mbcs\nencoding.\nOther systems also allow Unicode strings as file names but convert them to byte\nstrings before passing them to the system, which can cause a UnicodeError\nto be raised. Applications can test whether arbitrary Unicode strings are\nsupported as file names by checking os.path.supports_unicode_filenames\n,\na Boolean value.\nUnder MacOS, os.listdir()\nmay now return Unicode filenames.\nSee also\n- PEP 277 - Unicode file name support for Windows NT\nWritten by Neil Hodgson; implemented by Neil Hodgson, Martin von L\u00f6wis, and Mark Hammond.\nPEP 278: Universal Newline Support\u00b6\nThe three major operating systems used today are Microsoft Windows, Apple\u2019s Macintosh OS, and the various Unix derivatives. A minor irritation of cross-platform work is that these three platforms all use different characters to mark the ends of lines in text files. Unix uses the linefeed (ASCII character 10), MacOS uses the carriage return (ASCII character 13), and Windows uses a two-character sequence of a carriage return plus a newline.\nPython\u2019s file objects can now support end of line conventions other than the\none followed by the platform on which Python is running. Opening a file with\nthe mode 'U'\nor 'rU'\nwill open a file for reading in universal\nnewlines mode. 
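In Python 3 this translation became the default for files opened in text mode (the 'U' flag is gone); a small sketch of the modern behavior, using a temporary file of our own:

```python
import os
import tempfile

# Write the three historical line-ending conventions as raw bytes,
# then read them back in text mode.
path = os.path.join(tempfile.mkdtemp(), "mixed.txt")
with open(path, "wb") as f:
    f.write(b"unix\nwindows\r\nmac\r")

with open(path) as f:  # text mode; newline=None enables universal newlines
    text = f.read()

print(text.splitlines())  # ['unix', 'windows', 'mac']
```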
All three line ending conventions will be translated to a '\n' in the strings returned by the various file methods such as read() and readline().
Universal newline support is also used when importing modules and when executing a file with the execfile() function. This means that Python modules can be shared between all three operating systems without needing to convert the line-endings.
This feature can be disabled when compiling Python by specifying the --without-universal-newlines switch when running Python’s configure script.
See also
- PEP 278 - Universal Newline Support
Written and implemented by Jack Jansen.
PEP 279: enumerate()¶
A new built-in function, enumerate(), will make certain loops a bit clearer. enumerate(thing), where thing is either an iterator or a sequence, returns an iterator that will return (0, thing[0]), (1, thing[1]), (2, thing[2]), and so forth.
A common idiom to change every element of a list looks like this:
for i in range(len(L)):
    item = L[i]
    # ... compute some result based on item ...
    L[i] = result
This can be rewritten using enumerate() as:
for i, item in enumerate(L):
    # ... compute some result based on item ...
    L[i] = result
See also
- PEP 279 - The enumerate() built-in function
Written and implemented by Raymond D. Hettinger.
PEP 282: The logging Package¶
A standard package for writing logs, logging, has been added to Python 2.3. It provides a powerful and flexible mechanism for generating logging output which can then be filtered and processed in various ways. A configuration file written in a standard format can be used to control the logging behavior of a program. Python includes handlers that will write log records to standard error or to a file or socket, send them to the system log, or even e-mail them to a particular address; of course, it’s also possible to write your own handler classes.
The Logger class is the primary class.
Most application code will deal\nwith one or more Logger\nobjects, each one used by a particular\nsubsystem of the application. Each Logger\nis identified by a name, and\nnames are organized into a hierarchy using .\nas the component separator.\nFor example, you might have Logger\ninstances named server\n,\nserver.auth\nand server.network\n. The latter two instances are below\nserver\nin the hierarchy. This means that if you turn up the verbosity for\nserver\nor direct server\nmessages to a different handler, the changes\nwill also apply to records logged to server.auth\nand server.network\n.\nThere\u2019s also a root Logger\nthat\u2019s the parent of all other loggers.\nFor simple uses, the logging\npackage contains some convenience functions\nthat always use the root log:\nimport logging\nlogging.debug('Debugging information')\nlogging.info('Informational message')\nlogging.warning('Warning:config file %s not found', 'server.conf')\nlogging.error('Error occurred')\nlogging.critical('Critical error -- shutting down')\nThis produces the following output:\nWARNING:root:Warning:config file server.conf not found\nERROR:root:Error occurred\nCRITICAL:root:Critical error -- shutting down\nIn the default configuration, informational and debugging messages are\nsuppressed and the output is sent to standard error. You can enable the display\nof informational and debugging messages by calling the setLevel()\nmethod\non the root logger.\nNotice the warning()\ncall\u2019s use of string formatting operators; all of the\nfunctions for logging messages take the arguments (msg, arg1, arg2, ...)\nand\nlog the string resulting from msg % (arg1, arg2, ...)\n.\nThere\u2019s also an exception()\nfunction that records the most recent\ntraceback. 
Any of the other functions will also record the traceback if you specify a true value for the keyword argument exc_info.
def f():
    try: 1/0
    except: logging.exception('Problem recorded')

f()
This produces the following output:
ERROR:root:Problem recorded
Traceback (most recent call last):
  File "t.py", line 6, in f
    1/0
ZeroDivisionError: integer division or modulo by zero
Slightly more advanced programs will use a logger other than the root logger. The getLogger(name) function is used to get a particular log, creating it if it doesn’t exist yet. getLogger(None) returns the root logger.
log = logging.getLogger('server')
...
log.info('Listening on port %i', port)
...
log.critical('Disk full')
...
Log records are usually propagated up the hierarchy, so a message logged to server.auth is also seen by server and root, but a Logger can prevent this by setting its propagate attribute to False.
There are more classes provided by the logging package that can be customized. When a Logger instance is told to log a message, it creates a LogRecord instance that is sent to any number of different Handler instances. Loggers and handlers can also have an attached list of filters, and each filter can cause the LogRecord to be ignored or can modify the record before passing it along. When they’re finally output, LogRecord instances are converted to text by a Formatter class. All of these classes can be replaced by your own specially written classes.
With all of these features the logging package should provide enough flexibility for even the most complicated applications. This is only an incomplete overview of its features, so please see the package’s reference documentation for all of the details. Reading PEP 282 will also be helpful.
See also
- PEP 282 - A Logging System
Written by Vinay Sajip and Trent Mick; implemented by Vinay Sajip.
PEP 285: A Boolean Type¶
A Boolean type was added to Python 2.3.
Two new constants were added to the\n__builtin__\nmodule, True\nand False\n. (True\nand\nFalse\nconstants were added to the built-ins in Python 2.2.1, but the\n2.2.1 versions are simply set to integer values of 1 and 0 and aren\u2019t a\ndifferent type.)\nThe type object for this new type is named bool\n; the constructor for it\ntakes any Python value and converts it to True\nor False\n.\n>>> bool(1)\nTrue\n>>> bool(0)\nFalse\n>>> bool([])\nFalse\n>>> bool( (1,) )\nTrue\nMost of the standard library modules and built-in functions have been changed to return Booleans.\n>>> obj = []\n>>> hasattr(obj, 'append')\nTrue\n>>> isinstance(obj, list)\nTrue\n>>> isinstance(obj, tuple)\nFalse\nPython\u2019s Booleans were added with the primary goal of making code clearer. For\nexample, if you\u2019re reading a function and encounter the statement return 1\n,\nyou might wonder whether the 1\nrepresents a Boolean truth value, an index,\nor a coefficient that multiplies some other quantity. If the statement is\nreturn True\n, however, the meaning of the return value is quite clear.\nPython\u2019s Booleans were not added for the sake of strict type-checking. A very\nstrict language such as Pascal would also prevent you performing arithmetic with\nBooleans, and would require that the expression in an if\nstatement\nalways evaluate to a Boolean result. Python is not this strict and never will\nbe, as PEP 285 explicitly says. This means you can still use any expression\nin an if\nstatement, even ones that evaluate to a list or tuple or\nsome random object. 
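These claims are easy to check in any modern interpreter; a short sketch (the specific values are our own):

```python
# True and False are real bool objects, but bool is an int subclass,
# so they still participate in arithmetic and counting.
assert isinstance(True, bool) and isinstance(True, int)
assert bool([]) is False and bool((1,)) is True

# A common consequence: booleans can be summed to count matches.
values = [0, 3, "", "x", None]
truthy = sum(bool(v) for v in values)
print(truthy)  # 2
```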
The Boolean type is a subclass of the int\nclass so\nthat arithmetic using a Boolean still works.\n>>> True + 1\n2\n>>> False + 1\n1\n>>> False * 75\n0\n>>> True * 75\n75\nTo sum up True\nand False\nin a sentence: they\u2019re alternative\nways to spell the integer values 1 and 0, with the single difference that\nstr()\nand repr()\nreturn the strings 'True'\nand 'False'\ninstead of '1'\nand '0'\n.\nSee also\n- PEP 285 - Adding a bool type\nWritten and implemented by GvR.\nPEP 293: Codec Error Handling Callbacks\u00b6\nWhen encoding a Unicode string into a byte string, unencodable characters may be\nencountered. So far, Python has allowed specifying the error processing as\neither \u201cstrict\u201d (raising UnicodeError\n), \u201cignore\u201d (skipping the\ncharacter), or \u201creplace\u201d (using a question mark in the output string), with\n\u201cstrict\u201d being the default behavior. It may be desirable to specify alternative\nprocessing of such errors, such as inserting an XML character reference or HTML\nentity reference into the converted string.\nPython now has a flexible framework to add different processing strategies. New\nerror handlers can be added with codecs.register_error()\n, and codecs then\ncan access the error handler with codecs.lookup_error()\n. An equivalent C\nAPI has been added for codecs written in C. The error handler gets the necessary\nstate information such as the string being converted, the position in the string\nwhere the error was detected, and the target encoding. 
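This callback framework survives unchanged in Python 3; a sketch registering a custom handler with codecs.register_error (the handler name 'underscore' is our own choice, not from the PEP):

```python
import codecs

def underscore(err):
    # err is a UnicodeEncodeError; return the replacement text and the
    # position at which encoding should resume.
    return ("_", err.end)

codecs.register_error("underscore", underscore)

s = "caf\u00e9"
print(s.encode("ascii", "underscore"))         # b'caf_'
print(s.encode("ascii", "xmlcharrefreplace"))  # b'caf&#233;'
```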
The handler can then either raise an exception or return a replacement string.
Two additional error handlers have been implemented using this framework: “backslashreplace” uses Python backslash quoting to represent unencodable characters and “xmlcharrefreplace” emits XML character references.
See also
- PEP 293 - Codec Error Handling Callbacks
Written and implemented by Walter Dörwald.
PEP 301: Package Index and Metadata for Distutils¶
Support for the long-requested Python catalog makes its first appearance in 2.3. The heart of the catalog is the new Distutils register command. Running python setup.py register will collect the metadata describing a package, such as its name, version, maintainer, description, &c., and send it to a central catalog server. The resulting catalog is available from https://pypi.org.
To make the catalog a bit more useful, a new optional classifiers keyword argument has been added to the Distutils setup() function. A list of Trove-style strings can be supplied to help classify the software.
Here’s an example setup.py with classifiers, written to be compatible with older versions of the Distutils:
from distutils import core
kw = {'name': "Quixote",
      'version': "0.5.1",
      'description': "A highly Pythonic Web application framework",
      # ...
     }
if (hasattr(core, 'setup_keywords') and
    'classifiers' in core.setup_keywords):
    kw['classifiers'] = \
        ['Topic :: Internet :: WWW/HTTP :: Dynamic Content',
         'Environment :: No Input/Output (Daemon)',
         'Intended Audience :: Developers']
core.setup(**kw)
The full list of classifiers can be obtained by running python setup.py register --list-classifiers.
See also
- PEP 301 - Package Index and Metadata for Distutils
Written and implemented by Richard Jones.
PEP 302: New Import Hooks¶
While it’s been possible to write custom import hooks ever since the ihooks module was introduced in Python 1.3, no one has ever been
really happy with it because writing new import hooks is difficult and messy. There have been various proposed alternatives such as the imputil and iu modules, but none of them has ever gained much acceptance, and none of them were easily usable from C code.
PEP 302 borrows ideas from its predecessors, especially from Gordon McMillan’s iu module. Three new items are added to the sys module:
- sys.path_hooks is a list of callable objects; most often they’ll be classes. Each callable takes a string containing a path and either returns an importer object that will handle imports from this path or raises an ImportError exception if it can’t handle this path.
- sys.path_importer_cache caches importer objects for each path, so sys.path_hooks will only need to be traversed once for each path.
- sys.meta_path is a list of importer objects that will be traversed before sys.path is checked. This list is initially empty, but user code can add objects to it. Additional built-in and frozen modules can be imported by an object added to this list.
Importer objects must have a single method, find_module(fullname, path=None). fullname will be a module or package name, e.g. string or distutils.core. find_module() must return a loader object that has a single method, load_module(fullname), that creates and returns the corresponding module object.
Pseudo-code for Python’s new import logic, therefore, looks something like this (simplified a bit; see PEP 302 for the full details):
for mp in sys.meta_path:
    loader = mp(fullname)
    if loader is not None:
        <module> = loader.load_module(fullname)

for path in sys.path:
    for hook in sys.path_hooks:
        try:
            importer = hook(path)
        except ImportError:
            # ImportError, so try the other path hooks
            pass
        else:
            loader = importer.find_module(fullname)
            <module> = loader.load_module(fullname)

# Not found!
raise ImportError
See also
- PEP 302 - New Import Hooks
Written by Just van Rossum and Paul Moore.
Implemented by Just van Rossum.
PEP 305: Comma-separated Files¶
Comma-separated files are a format frequently used for exporting data from databases and spreadsheets. Python 2.3 adds a parser for comma-separated files.
Comma-separated format is deceptively simple at first glance:
Costs,150,200,3.95
Read a line and call line.split(','): what could be simpler? But toss in string data that can contain commas, and things get more complicated:
"Costs",150,200,3.95,"Includes taxes, shipping, and sundry items"
A big ugly regular expression can parse this, but using the new csv package is much simpler:
import csv

input = open('datafile', 'rb')
reader = csv.reader(input)
for line in reader:
    print line
The reader() function takes a number of different options. The field separator isn’t limited to the comma and can be changed to any character, and so can the quoting and line-ending characters.
Different dialects of comma-separated files can be defined and registered; currently there are two dialects, both used by Microsoft Excel. A separate csv.writer class will generate comma-separated files from a succession of tuples or lists, quoting strings that contain the delimiter.
See also
- PEP 305 - CSV File API
Written and implemented by Kevin Altis, Dave Cole, Andrew McNamara, Skip Montanaro, Cliff Wells.
PEP 307: Pickle Enhancements¶
The pickle and cPickle modules received some attention during the 2.3 development cycle. In 2.2, new-style classes could be pickled without difficulty, but they weren’t pickled very compactly; PEP 307 quotes a trivial example where a new-style class results in a pickled string three times longer than that for a classic class.
The solution was to invent a new pickle protocol. The pickle.dumps() function has supported a text-or-binary flag for a long time.
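A modern Python 3 rendering of the csv example above: files are now opened in text mode with newline='' rather than 'rb', and io.StringIO stands in for the data file here:

```python
import csv
import io

# A quoted field containing the delimiter, which naive split(',') mishandles.
data = '"Costs",150,200,3.95,"Includes taxes, shipping, and sundry items"\n'

# io.StringIO stands in for open('datafile', newline='').
row = next(csv.reader(io.StringIO(data)))
print(row)
```

The quoted field comes back as a single element, commas and all.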
In 2.3, this\nflag is redefined from a Boolean to an integer: 0 is the old text-mode pickle\nformat, 1 is the old binary format, and now 2 is a new 2.3-specific format. A\nnew constant, pickle.HIGHEST_PROTOCOL\n, can be used to select the\nfanciest protocol available.\nUnpickling is no longer considered a safe operation. 2.2\u2019s pickle\nprovided hooks for trying to prevent unsafe classes from being unpickled\n(specifically, a __safe_for_unpickling__\nattribute), but none of this\ncode was ever audited and therefore it\u2019s all been ripped out in 2.3. You should\nnot unpickle untrusted data in any version of Python.\nTo reduce the pickling overhead for new-style classes, a new interface for\ncustomizing pickling was added using three special methods:\n__getstate__()\n, __setstate__()\n, and __getnewargs__()\n. Consult\nPEP 307 for the full semantics of these methods.\nAs a way to compress pickles yet further, it\u2019s now possible to use integer codes instead of long strings to identify pickled classes. The Python Software Foundation will maintain a list of standardized codes; there\u2019s also a range of codes for private use. Currently no codes have been specified.\nSee also\n- PEP 307 - Extensions to the pickle protocol\nWritten and implemented by Guido van Rossum and Tim Peters.\nExtended Slices\u00b6\nEver since Python 1.4, the slicing syntax has supported an optional third \u201cstep\u201d\nor \u201cstride\u201d argument. For example, these are all legal Python syntax:\nL[1:10:2]\n, L[:-1:1]\n, L[::-1]\n. This was added to Python at the\nrequest of the developers of Numerical Python, which uses the third argument\nextensively. 
However, Python\u2019s built-in list, tuple, and string sequence types\nhave never supported this feature, raising a TypeError\nif you tried it.\nMichael Hudson contributed a patch to fix this shortcoming.\nFor example, you can now easily extract the elements of a list that have even indexes:\n>>> L = range(10)\n>>> L[::2]\n[0, 2, 4, 6, 8]\nNegative values also work to make a copy of the same list in reverse order:\n>>> L[::-1]\n[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]\nThis also works for tuples, arrays, and strings:\n>>> s='abcd'\n>>> s[::2]\n'ac'\n>>> s[::-1]\n'dcba'\nIf you have a mutable sequence such as a list or an array you can assign to or delete an extended slice, but there are some differences between assignment to extended and regular slices. Assignment to a regular slice can be used to change the length of the sequence:\n>>> a = range(3)\n>>> a\n[0, 1, 2]\n>>> a[1:3] = [4, 5, 6]\n>>> a\n[0, 4, 5, 6]\nExtended slices aren\u2019t this flexible. When assigning to an extended slice, the list on the right hand side of the statement must contain the same number of items as the slice it is replacing:\n>>> a = range(4)\n>>> a\n[0, 1, 2, 3]\n>>> a[::2]\n[0, 2]\n>>> a[::2] = [0, -1]\n>>> a\n[0, 1, -1, 3]\n>>> a[::2] = [0,1,2]\nTraceback (most recent call last):\nFile \"\", line 1, in ?\nValueError: attempt to assign sequence of size 3 to extended slice of size 2\nDeletion is more straightforward:\n>>> a = range(4)\n>>> a\n[0, 1, 2, 3]\n>>> a[::2]\n[0, 2]\n>>> del a[::2]\n>>> a\n[1, 3]\nOne can also now pass slice objects to the __getitem__()\nmethods of the\nbuilt-in sequences:\n>>> range(10).__getitem__(slice(0, 5, 2))\n[0, 2, 4]\nOr use slice objects directly in subscripts:\n>>> range(10)[slice(0, 5, 2)]\n[0, 2, 4]\nTo simplify implementing sequences that support extended slicing, slice objects\nnow have a method indices(length)\nwhich, given the length of a sequence,\nreturns a (start, stop, step)\ntuple that can be passed directly to\nrange()\n. 
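For instance, in any modern interpreter:

```python
# slice.indices(length) resolves omitted and out-of-bounds values into a
# concrete (start, stop, step) triple, exactly as real slicing would.
s = slice(None, None, 2)
print(s.indices(10))                 # (0, 10, 2)
print(slice(-3, None).indices(10))   # (7, 10, 1)
print(slice(0, 100).indices(10))     # (0, 10, 1) -- stop is clamped

items = list(range(10))
assert items[s] == [items[i] for i in range(*s.indices(len(items)))]
```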
indices() handles omitted and out-of-bounds indices in a manner consistent with regular slices (and this innocuous phrase hides a welter of confusing details!). The method is intended to be used like this:
class FakeSeq:
    ...
    def calc_item(self, i):
        ...
    def __getitem__(self, item):
        if isinstance(item, slice):
            indices = item.indices(len(self))
            return FakeSeq([self.calc_item(i) for i in range(*indices)])
        else:
            return self.calc_item(item)
From this example you can also see that the built-in slice object is now the type object for the slice type, and is no longer a function. This is consistent with Python 2.2, where int, str, etc., underwent the same change.
Other Language Changes¶
Here are all of the changes that Python 2.3 makes to the core Python language.
- The yield statement is now always a keyword, as described in section PEP 255: Simple Generators of this document.
- A new built-in function enumerate() was added, as described in section PEP 279: enumerate() of this document.
- Two new constants, True and False, were added along with the built-in bool type, as described in section PEP 285: A Boolean Type of this document.
- The int() type constructor will now return a long integer instead of raising an OverflowError when a string or floating-point number is too large to fit into an integer. This can lead to the paradoxical result that isinstance(int(expression), int) is false, but that seems unlikely to cause problems in practice.
- Built-in types now support the extended slicing syntax, as described in section Extended Slices of this document.
- A new built-in function, sum(iterable, start=0), adds up the numeric items in the iterable object and returns their sum. sum() only accepts numbers, meaning that you can’t use it to concatenate a bunch of strings. (Contributed by Alex Martelli.)
- list.insert(pos, value) used to insert value at the front of the list when pos was negative.
The behaviour has now been changed to be consistent with slice indexing, so when pos is -1 the value will be inserted before the last element, and so forth.
- list.index(value), which searches for value within the list and returns its index, now takes optional start and stop arguments to limit the search to only part of the list.
- Dictionaries have a new method, pop(key[, default]), that returns the value corresponding to key and removes that key/value pair from the dictionary. If the requested key isn’t present in the dictionary, default is returned if it’s specified and KeyError raised if it isn’t.
>>> d = {1: 2}
>>> d
{1: 2}
>>> d.pop(4)
Traceback (most recent call last):
  File "stdin", line 1, in ?
KeyError: 4
>>> d.pop(1)
2
>>> d.pop(1)
Traceback (most recent call last):
  File "stdin", line 1, in ?
KeyError: 'pop(): dictionary is empty'
>>> d
{}
>>>
- There’s also a new class method, dict.fromkeys(iterable, value), that creates a dictionary with keys taken from the supplied iterator iterable and all values set to value, defaulting to None. (Patches contributed by Raymond Hettinger.)
- Also, the dict() constructor now accepts keyword arguments to simplify creating small dictionaries:
>>> dict(red=1, blue=2, green=3, black=4)
{'blue': 2, 'black': 4, 'green': 3, 'red': 1}
(Contributed by Just van Rossum.)
- The assert statement no longer checks the __debug__ flag, so you can no longer disable assertions by assigning to __debug__. Running Python with the -O switch will still generate code that doesn’t execute any assertions.
- Most type objects are now callable, so you can use them to create new objects such as functions, classes, and modules. (This means that the new module can be deprecated in a future Python version, because you can now use the type objects available in the types module.)
For example, you can create a new module object with the following code:
>>> import types
>>> m = types.ModuleType('abc', 'docstring')
>>> m
<module 'abc' (built-in)>
>>> m.__doc__
'docstring'
- A new warning, PendingDeprecationWarning, was added to indicate features which are in the process of being deprecated. The warning will not be printed by default. To check for use of features that will be deprecated in the future, supply -Walways::PendingDeprecationWarning:: on the command line or use warnings.filterwarnings().
- The process of deprecating string-based exceptions, as in raise "Error occurred", has begun. Raising a string will now trigger PendingDeprecationWarning.
- Using None as a variable name will now result in a SyntaxWarning warning. In a future version of Python, None may finally become a keyword.
- The xreadlines() method of file objects, introduced in Python 2.1, is no longer necessary because files now behave as their own iterator. xreadlines() was originally introduced as a faster way to loop over all the lines in a file, but now you can simply write for line in file_obj. File objects also have a new read-only encoding attribute that gives the encoding used by the file; Unicode strings written to the file will be automatically converted to bytes using the given encoding.
- The method resolution order used by new-style classes has changed, though you’ll only notice the difference if you have a really complicated inheritance hierarchy. Classic classes are unaffected by this change. Python 2.2 originally used a topological sort of a class’s ancestors, but 2.3 now uses the C3 algorithm as described in the paper “A Monotonic Superclass Linearization for Dylan”. To understand the motivation for this change, read Michele Simionato’s article The Python 2.3 Method Resolution Order, or read the thread on python-dev starting with the message at https://mail.python.org/pipermail/python-dev/2002-October/029035.html.
Samuele Pedroni first pointed out the problem and also implemented the fix by coding the C3 algorithm.
- Python runs multithreaded programs by switching between threads after executing N bytecodes. The default value for N has been increased from 10 to 100 bytecodes, speeding up single-threaded applications by reducing the switching overhead. Some multithreaded applications may suffer slower response time, but that’s easily fixed by setting the limit back to a lower number using sys.setcheckinterval(N). The limit can be retrieved with the new sys.getcheckinterval() function.
- One minor but far-reaching change is that the names of extension types defined by the modules included with Python now contain the module and a '.' in front of the type name. For example, in Python 2.2, if you created a socket and printed its __class__, you’d get this output:
>>> s = socket.socket()
>>> s.__class__
In 2.3, you get this:
>>> s.__class__
- One of the noted incompatibilities between old- and new-style classes has been removed: you can now assign to the __name__ and __bases__ attributes of new-style classes. There are some restrictions on what can be assigned to __bases__ along the lines of those relating to assigning to an instance’s __class__ attribute.
String Changes¶
- The in operator now works differently for strings. Previously, when evaluating X in Y where X and Y are strings, X could only be a single character. That’s now changed; X can be a string of any length, and X in Y will return True if X is a substring of Y. If X is the empty string, the result is always True.
>>> 'ab' in 'abcd'
True
>>> 'ad' in 'abcd'
False
>>> '' in 'abcd'
True
Note that this doesn’t tell you where the substring starts; if you need that information, use the find() string method.
- The strip(), lstrip(), and rstrip() string methods now have an optional argument for specifying the characters to strip.
The default is still to remove all whitespace characters:

>>> '    abc '.strip()
'abc'
>>> '><><abc<><><>'.strip('<>')
'abc'
>>> '><><abc<><><>\n'.strip('<>')
'abc<><><>\n'
>>> u'\u4000\u4001abc\u4000'.strip(u'\u4000')
u'\u4001abc'

(Suggested by Simon Brunning and implemented by Walter Dörwald.)
The startswith() and endswith() string methods now accept negative numbers for the start and end parameters.
Another new string method is zfill(), originally a function in the string module. zfill() pads a numeric string with zeros on the left until it's the specified width. Note that the % operator is still more flexible and powerful than zfill().

>>> '45'.zfill(4)
'0045'
>>> '12345'.zfill(4)
'12345'
>>> 'goofy'.zfill(6)
'0goofy'

(Contributed by Walter Dörwald.)
A new type object, basestring, has been added. Both 8-bit strings and Unicode strings inherit from this type, so isinstance(obj, basestring) will return True for either kind of string. It's a completely abstract type, so you can't create basestring instances.
Interned strings are no longer immortal and will now be garbage-collected in the usual way when the only reference to them is from the internal dictionary of interned strings. (Implemented by Oren Tirosh.)

Optimizations¶

The creation of new-style class instances has been made much faster; they're now faster than classic classes!
The sort() method of list objects has been extensively rewritten by Tim Peters, and the implementation is significantly faster.
Multiplication of large long integers is now much faster thanks to an implementation of Karatsuba multiplication, an algorithm that scales better than the O(n**2) required for the grade-school multiplication algorithm. (Original patch by Christopher A. Craig, and significantly reworked by Tim Peters.)
The SET_LINENO opcode is now gone. This may provide a small speed increase, depending on your compiler's idiosyncrasies.
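The string-method changes described above (substring tests with in, strip() with an explicit character set, and zfill()) all survive unchanged into modern Python and can be checked directly; a small sketch:

```python
# Substring test with the in operator (any length, not just one char).
assert 'ab' in 'abcd'
assert 'ad' not in 'abcd'
assert '' in 'abcd'          # the empty string is always found

# strip() with an explicit set of characters to remove; only the
# ends are stripped, not the middle.
assert '><><abc<><><>'.strip('<>') == 'abc'

# zfill() pads a numeric string with zeros on the left.
assert '45'.zfill(4) == '0045'
assert '12345'.zfill(4) == '12345'   # already wide enough

print('all string-method checks passed')
```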
See section Other Changes and Fixes for a longer explanation. (Removed by Michael Hudson.)xrange()\nobjects now have their own iterator, makingfor i in xrange(n)\nslightly faster thanfor i in range(n)\n. (Patch by Raymond Hettinger.)A number of small rearrangements have been made in various hotspots to improve performance, such as inlining a function or removing some code. (Implemented mostly by GvR, but lots of people have contributed single changes.)\nThe net result of the 2.3 optimizations is that Python 2.3 runs the pystone benchmark around 25% faster than Python 2.2.\nNew, Improved, and Deprecated Modules\u00b6\nAs usual, Python\u2019s standard library received a number of enhancements and bug\nfixes. Here\u2019s a partial list of the most notable changes, sorted alphabetically\nby module name. Consult the Misc/NEWS\nfile in the source tree for a more\ncomplete list of changes, or look through the CVS logs for all the details.\nThe\narray\nmodule now supports arrays of Unicode characters using the'u'\nformat character. Arrays also now support using the+=\nassignment operator to add another array\u2019s contents, and the*=\nassignment operator to repeat an array. (Contributed by Jason Orendorff.)The\nbsddb\nmodule has been replaced by version 4.1.6 of the PyBSDDB package, providing a more complete interface to the transactional features of the BerkeleyDB library.The old version of the module has been renamed to\nbsddb185\nand is no longer built automatically; you\u2019ll have to editModules/Setup\nto enable it. Note that the newbsddb\npackage is intended to be compatible with the old module, so be sure to file bugs if you discover any incompatibilities. When upgrading to Python 2.3, if the new interpreter is compiled with a new version of the underlying BerkeleyDB library, you will almost certainly have to convert your database files to the new version. 
You can do this fairly easily with the new scripts db2pickle.py and pickle2db.py, which you will find in the distribution's Tools/scripts directory. If you've already been using the PyBSDDB package and importing it as bsddb3, you will have to change your import statements to import it as bsddb.
The new bz2 module is an interface to the bz2 data compression library. bz2-compressed data is usually smaller than corresponding zlib-compressed data. (Contributed by Gustavo Niemeyer.)
A set of standard date/time types has been added in the new datetime module. See the following section for more details.
The Distutils Extension class now supports an extra constructor argument named depends for listing additional source files that an extension depends on. This lets Distutils recompile the module if any of the dependency files are modified. For example, if sampmodule.c includes the header file sample.h, you would create the Extension object like this:

ext = Extension("samp",
                sources=["sampmodule.c"],
                depends=["sample.h"])

Modifying sample.h would then cause the module to be recompiled. (Contributed by Jeremy Hylton.)
Other minor changes to Distutils: it now checks for the CC, CFLAGS, CPP, LDFLAGS, and CPPFLAGS environment variables, using them to override the settings in Python's configuration (contributed by Robert Weber).
Previously the doctest module would only search the docstrings of public methods and functions for test cases; it now examines private ones as well. The DocTestSuite() function creates a unittest.TestSuite object from a set of doctest tests.
The new gc.get_referents(object) function returns a list of all the objects referenced by object.
The getopt module gained a new function, gnu_getopt(), that supports the same arguments as the existing getopt() function but uses GNU-style scanning mode.
The existinggetopt()\nstops processing options as soon as a non-option argument is encountered, but in GNU-style mode processing continues, meaning that options and arguments can be mixed. For example:>>> getopt.getopt(['-f', 'filename', 'output', '-v'], 'f:v') ([('-f', 'filename')], ['output', '-v']) >>> getopt.gnu_getopt(['-f', 'filename', 'output', '-v'], 'f:v') ([('-f', 'filename'), ('-v', '')], ['output'])\n(Contributed by Peter \u00c5strand.)\nThe\ngrp\n,pwd\n, andresource\nmodules now return enhanced tuples:>>> import grp >>> g = grp.getgrnam('amk') >>> g.gr_name, g.gr_gid ('amk', 500)\nThe\ngzip\nmodule can now handle files exceeding 2 GiB.The new\nheapq\nmodule contains an implementation of a heap queue algorithm. A heap is an array-like data structure that keeps items in a partially sorted order such that, for every index k,heap[k] <= heap[2*k+1]\nandheap[k] <= heap[2*k+2]\n. This makes it quick to remove the smallest item, and inserting a new item while maintaining the heap property is O(log n). (See https://xlinux.nist.gov/dads//HTML/priorityque.html for more information about the priority queue data structure.)The\nheapq\nmodule providesheappush()\nandheappop()\nfunctions for adding and removing items while maintaining the heap property on top of some other mutable Python sequence type. Here\u2019s an example that uses a Python list:>>> import heapq >>> heap = [] >>> for item in [3, 7, 5, 11, 1]: ... heapq.heappush(heap, item) ... >>> heap [1, 3, 5, 11, 7] >>> heapq.heappop(heap) 1 >>> heapq.heappop(heap) 3 >>> heap [5, 7, 11]\n(Contributed by Kevin O\u2019Connor.)\nThe IDLE integrated development environment has been updated using the code from the IDLEfork project (https://idlefork.sourceforge.net). The most notable feature is that the code being developed is now executed in a subprocess, meaning that there\u2019s no longer any need for manual\nreload()\noperations. 
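The heap invariant quoted above (heap[k] <= heap[2*k+1] and heap[k] <= heap[2*k+2]) can be verified directly on the plain list that heapq maintains; a small sketch, which still works identically in modern Python:

```python
import heapq

heap = []
for item in [3, 7, 5, 11, 1]:
    heapq.heappush(heap, item)

# Check the partial ordering for every parent index k.
n = len(heap)
for k in range(n):
    for child in (2 * k + 1, 2 * k + 2):
        if child < n:
            assert heap[k] <= heap[child]

# Popping always yields the smallest remaining item, and the
# invariant is restored afterwards.
assert heapq.heappop(heap) == 1
```

Note that the list is only partially sorted: heap[0] is guaranteed to be the minimum, but the rest of the list need not be in order.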
IDLE's core code has been incorporated into the standard library as the idlelib package.
The imaplib module now supports IMAP over SSL. (Contributed by Piers Lauder and Tino Lange.)
The itertools module contains a number of useful functions for use with iterators, inspired by various functions provided by the ML and Haskell languages. For example, itertools.ifilter(predicate, iterator) returns all elements in the iterator for which the function predicate() returns True, and itertools.repeat(obj, N) returns obj N times. There are a number of other functions in the module; see the package's reference documentation for details. (Contributed by Raymond Hettinger.)
Two new functions in the math module, degrees(rads) and radians(degs), convert between radians and degrees. Other functions in the math module such as math.sin() and math.cos() have always required input values measured in radians. Also, an optional base argument was added to math.log() to make it easier to compute logarithms for bases other than e and 10. (Contributed by Raymond Hettinger.)
Several new POSIX functions (getpgid(), killpg(), lchown(), loadavg(), major(), makedev(), minor(), and mknod()) were added to the posix module that underlies the os module. (Contributed by Gustavo Niemeyer, Geert Jansen, and Denis S. Otkidach.)
In the os module, the *stat() family of functions can now report fractions of a second in a timestamp. Such time stamps are represented as floats, similar to the value returned by time.time().
During testing, it was found that some applications will break if time stamps are floats. For compatibility, when using the tuple interface of the stat_result, time stamps will be represented as integers.
When using named fields (a feature first introduced in Python 2.2), time stamps are still represented as integers, unlessos.stat_float_times()\nis invoked to enable float return values:>>> os.stat(\"/tmp\").st_mtime 1034791200 >>> os.stat_float_times(True) >>> os.stat(\"/tmp\").st_mtime 1034791200.6335014\nIn Python 2.4, the default will change to always returning floats.\nApplication developers should enable this feature only if all their libraries work properly when confronted with floating-point time stamps, or if they use the tuple API. If used, the feature should be activated on an application level instead of trying to enable it on a per-use basis.\nThe\noptparse\nmodule contains a new parser for command-line arguments that can convert option values to a particular Python type and will automatically generate a usage message. See the following section for more details.The old and never-documented\nlinuxaudiodev\nmodule has been deprecated, and a new version namedossaudiodev\nhas been added. The module was renamed because the OSS sound drivers can be used on platforms other than Linux, and the interface has also been tidied and brought up to date in various ways. (Contributed by Greg Ward and Nicholas FitzRoy-Dale.)The new\nplatform\nmodule contains a number of functions that try to determine various properties of the platform you\u2019re running on. There are functions for getting the architecture, CPU type, the Windows OS version, and even the Linux distribution version. (Contributed by Marc-Andr\u00e9 Lemburg.)The parser objects provided by the\npyexpat\nmodule can now optionally buffer character data, resulting in fewer calls to your character data handler and therefore faster performance. Setting the parser object\u2019sbuffer_text\nattribute toTrue\nwill enable buffering.The\nsample(population, k)\nfunction was added to therandom\nmodule. 
population is a sequence or xrange object containing the elements of a population, and sample() chooses k elements from the population without replacing chosen elements. k can be any value up to len(population). For example:

>>> days = ['Mo', 'Tu', 'We', 'Th', 'Fr', 'St', 'Sn']
>>> random.sample(days, 3)      # Choose 3 elements
['St', 'Sn', 'Th']
>>> random.sample(days, 7)      # Choose 7 elements
['Tu', 'Th', 'Mo', 'We', 'St', 'Fr', 'Sn']
>>> random.sample(days, 7)      # Choose 7 again
['We', 'Mo', 'Sn', 'Fr', 'Tu', 'St', 'Th']
>>> random.sample(days, 8)      # Can't choose eight
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "random.py", line 414, in sample
    raise ValueError, "sample larger than population"
ValueError: sample larger than population
>>> random.sample(xrange(1,10000,2), 10)   # Choose ten odd nos. under 10000
[3407, 3805, 1505, 7023, 2401, 2267, 9733, 3151, 8083, 9195]

The random module now uses a new algorithm, the Mersenne Twister, implemented in C. It's faster and more extensively studied than the previous algorithm.
(All changes contributed by Raymond Hettinger.)
The readline module also gained a number of new functions: get_history_item(), get_current_history_length(), and redisplay().
The rexec and Bastion modules have been declared dead, and attempts to import them will fail with a RuntimeError. New-style classes provide new ways to break out of the restricted execution environment provided by rexec, and no one has the interest or the time to fix them. If you have applications using rexec, rewrite them to use something else.
(Sticking with Python 2.2 or 2.1 will not make your applications any safer because there are known bugs in the rexec module in those versions. To repeat: if you're using rexec, stop using it immediately.)
The rotor module has been deprecated because the algorithm it uses for encryption is not believed to be secure.
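Because the Mersenne Twister is deterministic for a given seed, its output is reproducible, something the interactive sample() transcripts above obscure; a quick sketch using today's random module:

```python
import random

random.seed(1234)
first = [random.randrange(100) for _ in range(5)]

random.seed(1234)          # reseeding replays the same stream
second = [random.randrange(100) for _ in range(5)]
assert first == second

# sample() still chooses without replacement: all picks are
# distinct members of the population.
picked = random.sample(range(1, 10000, 2), 10)
assert len(set(picked)) == 10
assert all(n % 2 == 1 for n in picked)
```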
If you need encryption, use one of the several AES Python modules that are available separately.
The shutil module gained a move(src, dest) function that recursively moves a file or directory to a new location.
Support for more advanced POSIX signal handling was added to the signal module but then removed again, as it proved impossible to make it work reliably across platforms.
The socket module now supports timeouts. You can call the settimeout(t) method on a socket object to set a timeout of t seconds. Subsequent socket operations that take longer than t seconds to complete will abort and raise a socket.timeout exception.
The original timeout implementation was by Tim O'Malley. Michael Gilfix integrated it into the Python socket module and shepherded it through a lengthy review. After the code was checked in, Guido van Rossum rewrote parts of it. (This is a good example of a collaborative development process in action.)
On Windows, the socket module now ships with Secure Sockets Layer (SSL) support.
The value of the C PYTHON_API_VERSION macro is now exposed at the Python level as sys.api_version. The current exception can be cleared by calling the new sys.exc_clear() function.
The new tarfile module allows reading from and writing to tar-format archive files. (Contributed by Lars Gustäbel.)
The new textwrap module contains functions for wrapping strings containing paragraphs of text. The wrap(text, width) function takes a string and returns a list containing the text split into lines of no more than the chosen width. The fill(text, width) function returns a single string, reformatted to fit into lines no longer than the chosen width. (As you can guess, fill() is built on top of wrap().) For example:

>>> import textwrap
>>> paragraph = "Not a whit, we defy augury: ... more text ..."
>>> textwrap.wrap(paragraph, 60)
["Not a whit, we defy augury: there's a special providence in",
 "the fall of a sparrow. If it be now, 'tis not to come; if it",
 ...]
>>> print textwrap.fill(paragraph, 35) Not a whit, we defy augury: there's a special providence in the fall of a sparrow. If it be now, 'tis not to come; if it be not to come, it will be now; if it be not now, yet it will come: the readiness is all. >>>\nThe module also contains a\nTextWrapper\nclass that actually implements the text wrapping strategy. Both theTextWrapper\nclass and thewrap()\nandfill()\nfunctions support a number of additional keyword arguments for fine-tuning the formatting; consult the module\u2019s documentation for details. (Contributed by Greg Ward.)The\nthread\nandthreading\nmodules now have companion modules,dummy_thread\nanddummy_threading\n, that provide a do-nothing implementation of thethread\nmodule\u2019s interface for platforms where threads are not supported. The intention is to simplify thread-aware modules (ones that don\u2019t rely on threads to run) by putting the following code at the top:try: import threading as _threading except ImportError: import dummy_threading as _threading\nIn this example,\n_threading\nis used as the module name to make it clear that the module being used is not necessarily the actualthreading\nmodule. Code can call functions and use classes in_threading\nwhether or not threads are supported, avoiding anif\nstatement and making the code slightly clearer. This module will not magically make multithreaded code run without threads; code that waits for another thread to return or to do something will simply hang forever.The\ntime\nmodule\u2019sstrptime()\nfunction has long been an annoyance because it uses the platform C library\u2019sstrptime()\nimplementation, and different platforms sometimes have odd bugs. Brett Cannon contributed a portable implementation that\u2019s written in pure Python and should behave identically on all platforms.The new\ntimeit\nmodule helps measure how long snippets of Python code take to execute. 
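The portable strptime() just described is still the implementation behind time.strptime() today, so parsing behaves the same on every platform; a quick check:

```python
import time

# Parse a date string; the pure-Python implementation gives the
# same result regardless of the platform C library.
t = time.strptime("30 Nov 02", "%d %b %y")
assert (t.tm_year, t.tm_mon, t.tm_mday) == (2002, 11, 30)
print(t.tm_year, t.tm_mon, t.tm_mday)
```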
Thetimeit.py\nfile can be run directly from the command line, or the module\u2019sTimer\nclass can be imported and used directly. Here\u2019s a short example that figures out whether it\u2019s faster to convert an 8-bit string to Unicode by appending an empty Unicode string to it or by using theunicode()\nfunction:import timeit timer1 = timeit.Timer('unicode(\"abc\")') timer2 = timeit.Timer('\"abc\" + u\"\"') # Run three trials print timer1.repeat(repeat=3, number=100000) print timer2.repeat(repeat=3, number=100000) # On my laptop this outputs: # [0.36831796169281006, 0.37441694736480713, 0.35304892063140869] # [0.17574405670166016, 0.18193507194519043, 0.17565798759460449]\nThe\nTix\nmodule has received various bug fixes and updates for the current version of the Tix package.The\nTkinter\nmodule now works with a thread-enabled version of Tcl. Tcl\u2019s threading model requires that widgets only be accessed from the thread in which they\u2019re created; accesses from another thread can cause Tcl to panic. For certain Tcl interfaces,Tkinter\nwill now automatically avoid this when a widget is accessed from a different thread by marshalling a command, passing it to the correct thread, and waiting for the results. Other interfaces can\u2019t be handled automatically butTkinter\nwill now raise an exception on such an access so that you can at least find out about the problem. See https://mail.python.org/pipermail/python-dev/2002-December/031107.html for a more detailed explanation of this change. (Implemented by Martin von L\u00f6wis.)Calling Tcl methods through\n_tkinter\nno longer returns only strings. Instead, if Tcl returns other objects those objects are converted to their Python equivalent, if one exists, or wrapped with a_tkinter.Tcl_Obj\nobject if no Python equivalent exists. 
This behavior can be controlled through thewantobjects()\nmethod oftkapp\nobjects.When using\n_tkinter\nthrough theTkinter\nmodule (as most Tkinter applications will), this feature is always activated. It should not cause compatibility problems, since Tkinter would always convert string results to Python types where possible.If any incompatibilities are found, the old behavior can be restored by setting the\nwantobjects\nvariable in theTkinter\nmodule to false before creating the firsttkapp\nobject.import Tkinter Tkinter.wantobjects = 0\nAny breakage caused by this change should be reported as a bug.\nThe\nUserDict\nmodule has a newDictMixin\nclass which defines all dictionary methods for classes that already have a minimum mapping interface. This greatly simplifies writing classes that need to be substitutable for dictionaries, such as the classes in theshelve\nmodule.Adding the mix-in as a superclass provides the full dictionary interface whenever the class defines\n__getitem__()\n,__setitem__()\n,__delitem__()\n, andkeys()\n. For example:>>> import UserDict >>> class SeqDict(UserDict.DictMixin): ... \"\"\"Dictionary lookalike implemented with lists.\"\"\" ... def __init__(self): ... self.keylist = [] ... self.valuelist = [] ... def __getitem__(self, key): ... try: ... i = self.keylist.index(key) ... except ValueError: ... raise KeyError ... return self.valuelist[i] ... def __setitem__(self, key, value): ... try: ... i = self.keylist.index(key) ... self.valuelist[i] = value ... except ValueError: ... self.keylist.append(key) ... self.valuelist.append(value) ... def __delitem__(self, key): ... try: ... i = self.keylist.index(key) ... except ValueError: ... raise KeyError ... self.keylist.pop(i) ... self.valuelist.pop(i) ... def keys(self): ... return list(self.keylist) ... 
>>> s = SeqDict()
>>> dir(s)      # See that other dictionary methods are implemented
['__cmp__', '__contains__', '__delitem__', '__doc__', '__getitem__',
 '__init__', '__iter__', '__len__', '__module__', '__repr__',
 '__setitem__', 'clear', 'get', 'has_key', 'items', 'iteritems',
 'iterkeys', 'itervalues', 'keylist', 'keys', 'pop', 'popitem',
 'setdefault', 'update', 'valuelist', 'values']

(Contributed by Raymond Hettinger.)
The DOM implementation in xml.dom.minidom can now generate XML output in a particular encoding by providing an optional encoding argument to the toxml() and toprettyxml() methods of DOM nodes.
The xmlrpclib module now supports an XML-RPC extension for handling nil data values such as Python's None. Nil values are always supported on unmarshalling an XML-RPC response. To generate requests containing None, you must supply a true value for the allow_none parameter when creating a Marshaller instance.
The new DocXMLRPCServer module allows writing self-documenting XML-RPC servers. Run it in demo mode (as a program) to see it in action. Pointing the web browser to the RPC server produces pydoc-style documentation; pointing xmlrpclib to the server allows invoking the actual methods. (Contributed by Brian Quinlan.)
Support for internationalized domain names (RFCs 3454, 3490, 3491, and 3492) has been added. The "idna" encoding can be used to convert between a Unicode domain name and the ASCII-compatible encoding (ACE) of that name.

>>> u"www.Alliancefrançaise.nu".encode("idna")
'www.xn--alliancefranaise-npb.nu'

The socket module has also been extended to transparently convert Unicode hostnames to the ACE version before passing them to the C library.
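The "idna" codec is still available in modern Python. ASCII names pass through unchanged, while non-ASCII labels round-trip via the xn-- ACE form; a quick check:

```python
# ASCII names pass through the "idna" codec unchanged.
assert 'www.python.org'.encode('idna') == b'www.python.org'

# Non-ASCII labels are converted to the ACE (xn--) form and
# can be decoded back to the original Unicode name.
ace = 'bücher.example'.encode('idna')
assert ace.startswith(b'xn--')
assert ace.decode('idna') == 'bücher.example'
```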
Modules that deal with hostnames (such as httplib and ftplib) also support Unicode host names; httplib also sends HTTP Host headers using the ACE version of the domain name. urllib supports Unicode URLs with non-ASCII host names as long as the path part of the URL is ASCII only.
To implement this change, the stringprep module, the mkstringprep tool, and the punycode encoding have been added.

Date/Time Type¶

Date and time types suitable for expressing timestamps were added as the datetime module. The types don't support different calendars or many fancy features, and just stick to the basics of representing time.
The three primary types are: date, representing a day, month, and year; time, consisting of hour, minute, and second; and datetime, which contains all the attributes of both date and time.
There's also a timedelta class representing differences between two points in time, and time zone logic is implemented by classes inheriting from the abstract tzinfo class.
You can create instances of date and time by either supplying keyword arguments to the appropriate constructor, e.g. datetime.date(year=1972, month=10, day=15), or by using one of a number of class methods. For example, the today() class method returns the current local date.
Once created, instances of the date/time classes are all immutable.
There are a number of methods for producing formatted strings from objects:\n>>> import datetime\n>>> now = datetime.datetime.now()\n>>> now.isoformat()\n'2002-12-30T21:27:03.994956'\n>>> now.ctime() # Only available on date, datetime\n'Mon Dec 30 21:27:03 2002'\n>>> now.strftime('%Y %d %b')\n'2002 30 Dec'\nThe replace()\nmethod allows modifying one or more fields of a\ndate\nor datetime\ninstance, returning a new instance:\n>>> d = datetime.datetime.now()\n>>> d\ndatetime.datetime(2002, 12, 30, 22, 15, 38, 827738)\n>>> d.replace(year=2001, hour = 12)\ndatetime.datetime(2001, 12, 30, 12, 15, 38, 827738)\n>>>\nInstances can be compared, hashed, and converted to strings (the result is the\nsame as that of isoformat()\n). date\nand datetime\ninstances can be subtracted from each other, and added to timedelta\ninstances. The largest missing feature is that there\u2019s no standard library\nsupport for parsing strings and getting back a date\nor\ndatetime\n.\nFor more information, refer to the module\u2019s reference documentation. (Contributed by Tim Peters.)\nThe optparse Module\u00b6\nThe getopt\nmodule provides simple parsing of command-line arguments. 
The new optparse module (originally named Optik) provides more elaborate command-line parsing that follows the Unix conventions, automatically creates the output for --help, and can perform different actions for different options.
You start by creating an instance of OptionParser and telling it what your program's options are.

import sys
from optparse import OptionParser

op = OptionParser()
op.add_option('-i', '--input',
              action='store', type='string', dest='input',
              help='set input filename')
op.add_option('-l', '--length',
              action='store', type='int', dest='length',
              help='set maximum length of output')

Parsing a command line is then done by calling the parse_args() method.

options, args = op.parse_args(sys.argv[1:])
print options
print args

This returns an object containing all of the option values, and a list of strings containing the remaining arguments.
Invoking the script with the various arguments now works as you'd expect it to. Note that the length argument is automatically converted to an integer.

$ ./python opt.py -i data arg1
<Values at 0x...: {'input': 'data', 'length': None}>
['arg1']
$ ./python opt.py --input=data --length=4
<Values at 0x...: {'input': 'data', 'length': 4}>
[]
$

The help message is automatically generated for you:

$ ./python opt.py --help
usage: opt.py [options]

options:
  -h, --help            show this help message and exit
  -iINPUT, --input=INPUT
                        set input filename
  -lLENGTH, --length=LENGTH
                        set maximum length of output
$

See the module's documentation for more details.
Optik was written by Greg Ward, with suggestions from the readers of the Getopt SIG.

Pymalloc: A Specialized Object Allocator¶

Pymalloc, a specialized object allocator written by Vladimir Marangozov, was a feature added to Python 2.1. Pymalloc is intended to be faster than the system malloc() and to have less memory overhead for allocation patterns typical of Python programs.
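The optparse walkthrough above can be reproduced programmatically by passing an argument list to parse_args() instead of sys.argv; the module still ships with modern Python (though argparse has since superseded it). A condensed, testable sketch:

```python
from optparse import OptionParser

op = OptionParser()
op.add_option('-i', '--input', action='store', type='string',
              dest='input', help='set input filename')
op.add_option('-l', '--length', action='store', type='int',
              dest='length', help='set maximum length of output')

# Parse an explicit argument list rather than sys.argv.
options, args = op.parse_args(['-i', 'data', 'arg1'])
assert options.input == 'data'
assert options.length is None     # -l was not given
assert args == ['arg1']           # leftover positional arguments

# Type conversion happens automatically for type='int'.
options, args = op.parse_args(['--input=data', '--length=4'])
assert options.length == 4
assert args == []
```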
The allocator uses C\u2019s malloc()\nfunction to get large\npools of memory and then fulfills smaller memory requests from these pools.\nIn 2.1 and 2.2, pymalloc was an experimental feature and wasn\u2019t enabled by\ndefault; you had to explicitly enable it when compiling Python by providing the\n--with-pymalloc\noption to the configure script. In 2.3,\npymalloc has had further enhancements and is now enabled by default; you\u2019ll have\nto supply --without-pymalloc\nto disable it.\nThis change is transparent to code written in Python; however, pymalloc may expose bugs in C extensions. Authors of C extension modules should test their code with pymalloc enabled, because some incorrect code may cause core dumps at runtime.\nThere\u2019s one particularly common error that causes problems. There are a number\nof memory allocation functions in Python\u2019s C API that have previously just been\naliases for the C library\u2019s malloc()\nand free()\n, meaning that if\nyou accidentally called mismatched functions the error wouldn\u2019t be noticeable.\nWhen the object allocator is enabled, these functions aren\u2019t aliases of\nmalloc()\nand free()\nany more, and calling the wrong function to\nfree memory may get you a core dump. For example, if memory was allocated using\nPyObject_Malloc()\n, it has to be freed using PyObject_Free()\n, not\nfree()\n. A few modules included with Python fell afoul of this and had to\nbe fixed; doubtless there are more third-party modules that will have the same\nproblem.\nAs part of this change, the confusing multiple interfaces for allocating memory have been consolidated down into two API families. Memory allocated with one family must not be manipulated with functions from the other family. 
There is one family for allocating chunks of memory and another family of functions specifically for allocating Python objects.\nTo allocate and free an undistinguished chunk of memory use the \u201craw memory\u201d family:\nPyMem_Malloc()\n,PyMem_Realloc()\n, andPyMem_Free()\n.The \u201cobject memory\u201d family is the interface to the pymalloc facility described above and is biased towards a large number of \u201csmall\u201d allocations:\nPyObject_Malloc()\n,PyObject_Realloc()\n, andPyObject_Free()\n.To allocate and free Python objects, use the \u201cobject\u201d family\nPyObject_New\n,PyObject_NewVar\n, andPyObject_Del()\n.\nThanks to lots of work by Tim Peters, pymalloc in 2.3 also provides debugging\nfeatures to catch memory overwrites and doubled frees in both extension modules\nand in the interpreter itself. To enable this support, compile a debugging\nversion of the Python interpreter by running configure with\n--with-pydebug\n.\nTo aid extension writers, a header file Misc/pymemcompat.h\nis\ndistributed with the source to Python 2.3 that allows Python extensions to use\nthe 2.3 interfaces to memory allocation while compiling against any version of\nPython since 1.5.2. You would copy the file from Python\u2019s source distribution\nand bundle it with the source of your extension.\nSee also\n- https://hg.python.org/cpython/file/default/Objects/obmalloc.c\nFor the full details of the pymalloc implementation, see the comments at the top of the file\nObjects/obmalloc.c\nin the Python source code. The above link points to the file within the python.org SVN browser.\nBuild and C API Changes\u00b6\nChanges to Python\u2019s build process and to the C API include:\nThe cycle detection implementation used by the garbage collection has proven to be stable, so it\u2019s now been made mandatory. 
You can no longer compile Python without it, and the\n--with-cycle-gc\nswitch to configure has been removed.Python can now optionally be built as a shared library (\nlibpython2.3.so\n) by supplying--enable-shared\nwhen running Python\u2019s configure script. (Contributed by Ondrej Palkovsky.)The\nDL_EXPORT\nandDL_IMPORT\nmacros are now deprecated. Initialization functions for Python extension modules should now be declared using the new macroPyMODINIT_FUNC\n, while the Python core will generally use thePyAPI_FUNC\nandPyAPI_DATA\nmacros.The interpreter can be compiled without any docstrings for the built-in functions and modules by supplying\n--without-doc-strings\nto the configure script. This makes the Python executable about 10% smaller, but will also mean that you can\u2019t get help for Python\u2019s built-ins. (Contributed by Gustavo Niemeyer.)The\nPyArg_NoArgs()\nmacro is now deprecated, and code that uses it should be changed. For Python 2.2 and later, the method definition table can specify theMETH_NOARGS\nflag, signalling that there are no arguments, and the argument checking can then be removed. If compatibility with pre-2.2 versions of Python is important, the code could usePyArg_ParseTuple(args, \"\")\ninstead, but this will be slower than usingMETH_NOARGS\n.PyArg_ParseTuple()\naccepts new format characters for various sizes of unsigned integers:B\nfor unsigned char,H\nfor unsigned short int,I\nfor unsigned int, andK\nfor unsigned long long.A new function,\nPyObject_DelItemString(mapping, char *key)\nwas added as shorthand forPyObject_DelItem(mapping, PyString_New(key))\n.File objects now manage their internal string buffer differently, increasing it exponentially when needed. 
This results in the benchmark tests in Lib/test/test_bufio.py speeding up considerably (from 57 seconds to 1.7 seconds, according to one measurement).\nIt\u2019s now possible to define class and static methods for a C extension type by setting either the METH_CLASS or METH_STATIC flags in a method\u2019s PyMethodDef structure.\nPython now includes a copy of the Expat XML parser\u2019s source code, removing any dependence on a system version or local installation of Expat.\nIf you dynamically allocate type objects in your extension, you should be aware of a change in the rules relating to the __module__ and __name__ attributes. In summary, you will want to ensure the type\u2019s dictionary contains a '__module__' key; making the module name the part of the type name leading up to the final period will no longer have the desired effect. For more detail, read the API reference documentation or the source.\nPort-Specific Changes\u00b6\nSupport for a port to IBM\u2019s OS/2 using the EMX runtime environment was merged into the main Python source tree. EMX is a POSIX emulation layer over the OS/2 system APIs. The Python port for EMX tries to support all the POSIX-like capability exposed by the EMX runtime, and mostly succeeds; fork() and fcntl() are restricted by the limitations of the underlying emulation layer. The standard OS/2 port, which uses IBM\u2019s Visual Age compiler, also gained support for case-sensitive import semantics as part of the integration of the EMX port into CVS. (Contributed by Andrew MacIntyre.)\nOn MacOS, most toolbox modules have been weaklinked to improve backward compatibility. This means that modules will no longer fail to load if a single routine is missing on the current OS version. Instead calling the missing routine will raise an exception. (Contributed by Jack Jansen.)\nThe RPM spec files, found in the Misc/RPM/ directory in the Python source distribution, were updated for 2.3. 
(Contributed by Sean Reifschneider.)\nOther new platforms now supported by Python include AtheOS (http://www.atheos.cx/), GNU/Hurd, and OpenVMS.\nOther Changes and Fixes\u00b6\nAs usual, there were a bunch of other improvements and bugfixes scattered throughout the source tree. A search through the CVS change logs finds there were 523 patches applied and 514 bugs fixed between Python 2.2 and 2.3. Both figures are likely to be underestimates.\nSome of the more notable changes are:\nIf the PYTHONINSPECT environment variable is set, the Python interpreter will enter the interactive prompt after running a Python program, as if Python had been invoked with the -i option. The environment variable can be set before running the Python interpreter, or it can be set by the Python program as part of its execution.\nThe regrtest.py script now provides a way to allow \u201call resources except foo.\u201d A resource name passed to the -u option can now be prefixed with a hyphen ('-') to mean \u201cremove this resource.\u201d For example, the option \u2018-uall,-bsddb\u2019 could be used to enable the use of all resources except bsddb.\nThe tools used to build the documentation now work under Cygwin as well as Unix.\nThe SET_LINENO opcode has been removed. Back in the mists of time, this opcode was needed to produce line numbers in tracebacks and support trace functions (for, e.g., pdb). Since Python 1.5, the line numbers in tracebacks have been computed using a different mechanism that works with \u201cpython -O\u201d. For Python 2.3 Michael Hudson implemented a similar scheme to determine when to call the trace function, removing the need for SET_LINENO entirely.\nIt would be difficult to detect any resulting difference from Python code, apart from a slight speed up when Python is run without -O.\nC extensions that access the f_lineno field of frame objects should instead call PyCode_Addr2Line(f->f_code, f->f_lasti). 
This will have the added effect of making the code work as desired under \u201cpython -O\u201d in earlier versions of Python.\nA nifty new feature is that trace functions can now assign to the f_lineno attribute of frame objects, changing the line that will be executed next. A jump command has been added to the pdb debugger taking advantage of this new feature. (Implemented by Richie Hindle.)\nPorting to Python 2.3\u00b6\nThis section lists previously described changes that may require changes to your code:\nyield is now always a keyword; if it\u2019s used as a variable name in your code, a different name must be chosen.\nFor strings X and Y, X in Y now works if X is more than one character long.\nThe int() type constructor will now return a long integer instead of raising an OverflowError when a string or floating-point number is too large to fit into an integer.\nIf you have Unicode strings that contain 8-bit characters, you must declare the file\u2019s encoding (UTF-8, Latin-1, or whatever) by adding a comment to the top of the file. See section PEP 263: Source Code Encodings for more information.\nCalling Tcl methods through _tkinter no longer returns only strings. Instead, if Tcl returns other objects those objects are converted to their Python equivalent, if one exists, or wrapped with a _tkinter.Tcl_Obj object if no Python equivalent exists.\nLarge octal and hex literals such as 0xffffffff now trigger a FutureWarning. Currently they\u2019re stored as 32-bit numbers and result in a negative value, but in Python 2.4 they\u2019ll become positive long integers.\nThere are a few ways to fix this warning. If you really need a positive number, just add an L to the end of the literal. If you\u2019re trying to get a 32-bit integer with low bits set and have previously used an expression such as ~(1 << 31), it\u2019s probably clearest to start with all bits set and clear the desired upper bits. 
For example, to clear just the top bit (bit 31), you could write 0xffffffffL & ~(1L << 31).\nYou can no longer disable assertions by assigning to __debug__.\nThe Distutils setup() function has gained various new keyword arguments such as depends. Old versions of the Distutils will abort if passed unknown keywords. A solution is to check for the presence of the new get_distutil_options() function in your setup.py and only use the new keywords with a version of the Distutils that supports them:\nfrom distutils import core\nkw = {'sources': 'foo.c', ...}\nif hasattr(core, 'get_distutil_options'):\n    kw['depends'] = ['foo.h']\next = Extension(**kw)\nUsing None as a variable name will now result in a SyntaxWarning warning.\nNames of extension types defined by the modules included with Python now contain the module and a '.' in front of the type name.\nAcknowledgements\u00b6\nThe author would like to thank the following people for offering suggestions, corrections and assistance with various drafts of this article: Jeff Bauer, Simon Brunning, Brett Cannon, Michael Chermside, Andrew Dalke, Scott David Daniels, Fred L. 
Drake, Jr., David Fraser, Kelly Gerber, Raymond Hettinger, Michael Hudson, Chris Lambert, Detlef Lannert, Martin von L\u00f6wis, Andrew MacIntyre, Lalo Martins, Chad Netzer, Gustavo Niemeyer, Neal Norwitz, Hans Nowak, Chris Reedy, Francesco Ricciardi, Vinay Sajip, Neil Schemenauer, Roman Suzi, Jason Tishler, Just van Rossum.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 19560}
{"url": "https://docs.python.org/3/whatsnew/2.4.html", "title": "What\u2019s New in Python 2.4", "content": "What\u2019s New in Python 2.4\u00b6\n- Author:\nA.M. Kuchling\nThis article explains the new features in Python 2.4.1, released on March 30, 2005.\nPython 2.4 is a medium-sized release. It doesn\u2019t introduce as many changes as the radical Python 2.2, but introduces more features than the conservative 2.3 release. The most significant new language features are function decorators and generator expressions; most other changes are to the standard library.\nAccording to the CVS change logs, there were 481 patches applied and 502 bugs fixed between Python 2.3 and 2.4. Both figures are likely to be underestimates.\nThis article doesn\u2019t attempt to provide a complete specification of every single new feature, but instead provides a brief introduction to each feature. For full details, you should refer to the documentation for Python 2.4, such as the Python Library Reference and the Python Reference Manual. 
Often you will be referred to the PEP for a particular new feature for explanations of the implementation and design rationale.\nPEP 218: Built-In Set Objects\u00b6\nPython 2.3 introduced the sets\nmodule. C implementations of set data\ntypes have now been added to the Python core as two new built-in types,\nset(iterable)\nand frozenset(iterable)\n. They provide high speed\noperations for membership testing, for eliminating duplicates from sequences,\nand for mathematical operations like unions, intersections, differences, and\nsymmetric differences.\n>>> a = set('abracadabra') # form a set from a string\n>>> 'z' in a # fast membership testing\nFalse\n>>> a # unique letters in a\nset(['a', 'r', 'b', 'c', 'd'])\n>>> ''.join(a) # convert back into a string\n'arbcd'\n>>> b = set('alacazam') # form a second set\n>>> a - b # letters in a but not in b\nset(['r', 'd', 'b'])\n>>> a | b # letters in either a or b\nset(['a', 'c', 'r', 'd', 'b', 'm', 'z', 'l'])\n>>> a & b # letters in both a and b\nset(['a', 'c'])\n>>> a ^ b # letters in a or b but not both\nset(['r', 'd', 'b', 'm', 'z', 'l'])\n>>> a.add('z') # add a new element\n>>> a.update('wxy') # add multiple new elements\n>>> a\nset(['a', 'c', 'b', 'd', 'r', 'w', 'y', 'x', 'z'])\n>>> a.remove('x') # take one element out\n>>> a\nset(['a', 'c', 'b', 'd', 'r', 'w', 'y', 'z'])\nThe frozenset()\ntype is an immutable version of set()\n. Since it is\nimmutable and hashable, it may be used as a dictionary key or as a member of\nanother set.\nThe sets\nmodule remains in the standard library, and may be useful if you\nwish to subclass the Set\nor ImmutableSet\nclasses. There are\ncurrently no plans to deprecate the module.\nSee also\n- PEP 218 - Adding a Built-In Set Object Type\nOriginally proposed by Greg Wilson and ultimately implemented by Raymond Hettinger.\nPEP 237: Unifying Long Integers and Integers\u00b6\nThe lengthy transition process for this PEP, begun in Python 2.2, takes another\nstep forward in Python 2.4. 
In 2.3, certain integer operations that would behave differently after int/long unification triggered FutureWarning warnings and returned values limited to 32 or 64 bits (depending on your platform). In 2.4, these expressions no longer produce a warning and instead produce a different result that\u2019s usually a long integer.\nThe problematic expressions are primarily left shifts and lengthy hexadecimal and octal constants. For example, 2 << 32 results in a warning in 2.3, evaluating to 0 on 32-bit platforms. In Python 2.4, this expression now returns the correct answer, 8589934592.\nSee also\n- PEP 237 - Unifying Long Integers and Integers\nOriginal PEP written by Moshe Zadka and GvR. The changes for 2.4 were implemented by Kalle Svensson.\nPEP 289: Generator Expressions\u00b6\nThe iterator feature introduced in Python 2.2 and the itertools module make it easier to write programs that loop through large data sets without having the entire data set in memory at one time. List comprehensions don\u2019t fit into this picture very well because they produce a Python list object containing all of the items. This unavoidably pulls all of the objects into memory, which can be a problem if your data set is very large. When trying to write a functionally styled program, it would be natural to write something like:\nlinks = [link for link in get_all_links() if not link.followed]\nfor link in links:\n    ...\ninstead of\nfor link in get_all_links():\n    if link.followed:\n        continue\n    ...\nThe first form is more concise and perhaps more readable, but if you\u2019re dealing with a large number of link objects you\u2019d have to write the second form to avoid having all link objects in memory at the same time.\nGenerator expressions work similarly to list comprehensions but don\u2019t materialize the entire list; instead they create a generator that will return elements one by one. 
The above example could be written as:\nlinks = (link for link in get_all_links() if not link.followed)\nfor link in links:\n...\nGenerator expressions always have to be written inside parentheses, as in the above example. The parentheses signalling a function call also count, so if you want to create an iterator that will be immediately passed to a function you could write:\nprint sum(obj.count for obj in list_all_objects())\nGenerator expressions differ from list comprehensions in various small ways. Most notably, the loop variable (obj in the above example) is not accessible outside of the generator expression. List comprehensions leave the variable assigned to its last value; future versions of Python will change this, making list comprehensions match generator expressions in this respect.\nSee also\n- PEP 289 - Generator Expressions\nProposed by Raymond Hettinger and implemented by Jiwon Seo with early efforts steered by Hye-Shik Chang.\nPEP 292: Simpler String Substitutions\u00b6\nSome new classes in the standard library provide an alternative mechanism for substituting variables into strings; this style of substitution may be better for applications where untrained users need to edit templates.\nThe usual way of substituting variables by name is the %\noperator:\n>>> '%(page)i: %(title)s' % {'page':2, 'title': 'The Best of Times'}\n'2: The Best of Times'\nWhen writing the template string, it can be easy to forget the i\nor s\nafter the closing parenthesis. This isn\u2019t a big problem if the template is in a\nPython module, because you run the code, get an \u201cUnsupported format character\u201d\nValueError\n, and fix the problem. However, consider an application such\nas Mailman where template strings or translations are being edited by users who\naren\u2019t aware of the Python language. 
The format string\u2019s syntax is complicated to explain to such users, and if they make a mistake, it\u2019s difficult to provide helpful feedback to them.\nPEP 292 adds a Template class to the string module that uses $ to indicate a substitution:\n>>> import string\n>>> t = string.Template('$page: $title')\n>>> t.substitute({'page':2, 'title': 'The Best of Times'})\n'2: The Best of Times'\nIf a key is missing from the dictionary, the substitute() method will raise a KeyError. There\u2019s also a safe_substitute() method that ignores missing keys:\n>>> t = string.Template('$page: $title')\n>>> t.safe_substitute({'page':3})\n'3: $title'\nSee also\n- PEP 292 - Simpler String Substitutions\nWritten and implemented by Barry Warsaw.\nPEP 318: Decorators for Functions and Methods\u00b6\nPython 2.2 extended Python\u2019s object model by adding static methods and class methods, but it didn\u2019t extend Python\u2019s syntax to provide any new way of defining static or class methods. Instead, you had to write a def statement in the usual way, and pass the resulting method to a staticmethod() or classmethod() function that would wrap up the function as a method of the new type. Your code would look like this:\nclass C:\n    def meth (cls):\n        ...\n    meth = classmethod(meth)   # Rebind name to wrapped-up class method\nIf the method was very long, it would be easy to miss or forget the classmethod() invocation after the function body.\nThe intention was always to add some syntax to make such definitions more readable, but at the time of 2.2\u2019s release a good syntax was not obvious. Today a good syntax still isn\u2019t obvious but users are asking for easier access to the feature; a new syntactic feature has been added to meet this need.\nThe new feature is called \u201cfunction decorators\u201d. 
The name comes from the idea that classmethod(), staticmethod(), and friends are storing additional information on a function object; they\u2019re decorating functions with more details.\nThe notation borrows from Java and uses the '@' character as an indicator. Using the new syntax, the example above would be written:\nclass C:\n    @classmethod\n    def meth (cls):\n        ...\nThe @classmethod is shorthand for the meth=classmethod(meth) assignment. More generally, if you have the following:\n@A\n@B\n@C\ndef f ():\n    ...\nIt\u2019s equivalent to the following pre-decorator code:\ndef f(): ...\nf = A(B(C(f)))\nDecorators must come on the line before a function definition, one decorator per line, and can\u2019t be on the same line as the def statement, meaning that @A def f(): ... is illegal. You can only decorate function definitions, either at the module level or inside a class; you can\u2019t decorate class definitions.\nA decorator is just a function that takes the function to be decorated as an argument and returns either the same function or some new object. The return value of the decorator need not be callable (though it typically is), unless further decorators will be applied to the result. It\u2019s easy to write your own decorators. The following simple example just sets an attribute on the function object:\n>>> def deco(func):\n...     func.attr = 'decorated'\n...     return func\n...\n>>> @deco\n... def f(): pass\n...\n>>> f\n\n>>> f.attr\n'decorated'\n>>>\nAs a slightly more realistic example, the following decorator checks that the supplied argument is an integer:\ndef require_int (func):\n    def wrapper (arg):\n        assert isinstance(arg, int)\n        return func(arg)\n    return wrapper\n@require_int\ndef p1 (arg):\n    print arg\n@require_int\ndef p2(arg):\n    print arg*2\nAn example in PEP 318 contains a fancier version of this idea that lets you both specify the required type and check the returned type.\nDecorator functions can take arguments. 
If arguments are supplied, your\ndecorator function is called with only those arguments and must return a new\ndecorator function; this function must take a single function and return a\nfunction, as previously described. In other words, @A @B @C(args)\nbecomes:\ndef f(): ...\n_deco = C(args)\nf = A(B(_deco(f)))\nGetting this right can be slightly brain-bending, but it\u2019s not too difficult.\nA small related change makes the func_name\nattribute of functions\nwritable. This attribute is used to display function names in tracebacks, so\ndecorators should change the name of any new function that\u2019s constructed and\nreturned.\nSee also\n- PEP 318 - Decorators for Functions, Methods and Classes\nWritten by Kevin D. Smith, Jim Jewett, and Skip Montanaro. Several people wrote patches implementing function decorators, but the one that was actually checked in was patch #979728, written by Mark Russell.\n- https://wiki.python.org/moin/PythonDecoratorLibrary\nThis Wiki page contains several examples of decorators.\nPEP 322: Reverse Iteration\u00b6\nA new built-in function, reversed(seq)\n, takes a sequence and returns an\niterator that loops over the elements of the sequence in reverse order.\n>>> for i in reversed(xrange(1,4)):\n... print i\n...\n3\n2\n1\nCompared to extended slicing, such as range(1,4)[::-1]\n, reversed()\nis\neasier to read, runs faster, and uses substantially less memory.\nNote that reversed()\nonly accepts sequences, not arbitrary iterators. If\nyou want to reverse an iterator, first convert it to a list with list()\n.\n>>> input = open('/etc/passwd', 'r')\n>>> for line in reversed(list(input)):\n... 
print line\n...\nroot:*:0:0:System Administrator:/var/root:/bin/tcsh\n...\nSee also\n- PEP 322 - Reverse Iteration\nWritten and implemented by Raymond Hettinger.\nPEP 324: New subprocess Module\u00b6\nThe standard library provides a number of ways to execute a subprocess, offering\ndifferent features and different levels of complexity.\nos.system(command)\nis easy to use, but slow (it runs a shell process\nwhich executes the command) and dangerous (you have to be careful about escaping\nthe shell\u2019s metacharacters). The popen2\nmodule offers classes that can\ncapture standard output and standard error from the subprocess, but the naming\nis confusing. The subprocess\nmodule cleans this up, providing a unified\ninterface that offers all the features you might need.\nInstead of popen2\n\u2019s collection of classes, subprocess\ncontains a\nsingle class called subprocess.Popen\nwhose constructor supports a number of\ndifferent keyword arguments.\nclass Popen(args, bufsize=0, executable=None,\nstdin=None, stdout=None, stderr=None,\npreexec_fn=None, close_fds=False, shell=False,\ncwd=None, env=None, universal_newlines=False,\nstartupinfo=None, creationflags=0):\nargs is commonly a sequence of strings that will be the arguments to the\nprogram executed as the subprocess. (If the shell argument is true, args\ncan be a string which will then be passed on to the shell for interpretation,\njust as os.system()\ndoes.)\nstdin, stdout, and stderr specify what the subprocess\u2019s input, output, and\nerror streams will be. 
You can provide a file object or a file descriptor, or you can use the constant subprocess.PIPE to create a pipe between the subprocess and the parent.\nThe constructor has a number of handy options:\nclose_fds requests that all file descriptors be closed before running the subprocess.\ncwd specifies the working directory in which the subprocess will be executed (defaulting to whatever the parent\u2019s working directory is).\nenv is a dictionary specifying environment variables.\npreexec_fn is a function that gets called before the child is started.\nuniversal_newlines opens the child\u2019s input and output using Python\u2019s universal newlines feature.\nOnce you\u2019ve created the Popen instance, you can call its wait() method to pause until the subprocess has exited, poll() to check if it\u2019s exited without pausing, or communicate(data) to send the string data to the subprocess\u2019s standard input. communicate(data) then reads any data that the subprocess has sent to its standard output or standard error, returning a tuple (stdout_data, stderr_data).\ncall() is a shortcut that passes its arguments along to the Popen constructor, waits for the command to complete, and returns the status code of the subprocess. It can serve as a safer analog to os.system():\nsts = subprocess.call(['dpkg', '-i', '/tmp/new-package.deb'])\nif sts == 0:\n    # Success\n    ...\nelse:\n    # dpkg returned an error\n    ...\nThe command is invoked without use of the shell. If you really do want to use the shell, you can add shell=True as a keyword argument and provide a string instead of a sequence:\nsts = subprocess.call('dpkg -i /tmp/new-package.deb', shell=True)\nThe PEP takes various examples of shell and Python code and shows how they\u2019d be translated into Python code that uses subprocess. 
Reading this section of the PEP is highly recommended.\nSee also\n- PEP 324 - subprocess - New process module\nWritten and implemented by Peter \u00c5strand, with assistance from Fredrik Lundh and others.\nPEP 327: Decimal Data Type\u00b6\nPython has always supported floating-point (FP) numbers, based on the underlying C double type, as a data type. However, while most programming languages provide a floating-point type, many people (even programmers) are unaware that floating-point numbers don\u2019t represent certain decimal fractions accurately. The new Decimal type can represent these fractions accurately, up to a user-specified precision limit.\nWhy is Decimal needed?\u00b6\nThe limitations arise from the representation used for floating-point numbers. FP numbers are made up of three components:\nThe sign, which is positive or negative.\nThe mantissa, which is a single-digit binary number followed by a fractional part. For example, 1.01 in base-2 notation is 1 + 0/2 + 1/4, or 1.25 in decimal notation.\nThe exponent, which tells where the decimal point is located in the number represented.\nFor example, the number 1.25 has positive sign, a mantissa value of 1.01 (in binary), and an exponent of 0 (the decimal point doesn\u2019t need to be shifted). The number 5 has the same sign and mantissa, but the exponent is 2 because the mantissa is multiplied by 4 (2 to the power of the exponent 2); 1.25 * 4 equals 5.\nModern systems usually provide floating-point support that conforms to a standard called IEEE 754. C\u2019s double type is usually implemented as a 64-bit IEEE 754 number, which uses 52 bits of space for the mantissa. This means that numbers can only be specified to 52 bits of precision. If you\u2019re trying to represent numbers whose expansion repeats endlessly, the expansion is cut off after 52 bits. 
Unfortunately, most software needs to produce output in\nbase 10, and common fractions in base 10 are often repeating decimals in binary.\nFor example, 1.1 decimal is binary 1.0001100110011 ...\n; .1 = 1/16 + 1/32 +\n1/256 plus an infinite number of additional terms. IEEE 754 has to chop off\nthat infinitely repeated decimal after 52 digits, so the representation is\nslightly inaccurate.\nSometimes you can see this inaccuracy when the number is printed:\n>>> 1.1\n1.1000000000000001\nThe inaccuracy isn\u2019t always visible when you print the number because the FP-to-decimal-string conversion is provided by the C library, and most C libraries try to produce sensible output. Even if it\u2019s not displayed, however, the inaccuracy is still there and subsequent operations can magnify the error.\nFor many applications this doesn\u2019t matter. If I\u2019m plotting points and displaying them on my monitor, the difference between 1.1 and 1.1000000000000001 is too small to be visible. Reports often limit output to a certain number of decimal places, and if you round the number to two or three or even eight decimal places, the error is never apparent. However, for applications where it does matter, it\u2019s a lot of work to implement your own custom arithmetic routines.\nHence, the Decimal\ntype was created.\nThe Decimal\ntype\u00b6\nA new module, decimal\n, was added to Python\u2019s standard library. It\ncontains two classes, Decimal\nand Context\n. Decimal\ninstances represent numbers, and Context\ninstances are used to wrap up\nvarious settings such as the precision and default rounding mode.\nDecimal\ninstances are immutable, like regular Python integers and FP\nnumbers; once it\u2019s been created, you can\u2019t change the value an instance\nrepresents. 
Decimal\ninstances can be created from integers or\nstrings:\n>>> import decimal\n>>> decimal.Decimal(1972)\nDecimal(\"1972\")\n>>> decimal.Decimal(\"1.1\")\nDecimal(\"1.1\")\nYou can also provide tuples containing the sign, the mantissa represented as a tuple of decimal digits, and the exponent:\n>>> decimal.Decimal((1, (1, 4, 7, 5), -2))\nDecimal(\"-14.75\")\nCautionary note: the sign bit is a Boolean value, so 0 is positive and 1 is negative.\nConverting from floating-point numbers poses a bit of a problem: should the FP\nnumber representing 1.1 turn into the decimal number for exactly 1.1, or for 1.1\nplus whatever inaccuracies are introduced? The decision was to dodge the issue\nand leave such a conversion out of the API. Instead, you should convert the\nfloating-point number into a string using the desired precision and pass the\nstring to the Decimal\nconstructor:\n>>> f = 1.1\n>>> decimal.Decimal(str(f))\nDecimal(\"1.1\")\n>>> decimal.Decimal('%.12f' % f)\nDecimal(\"1.100000000000\")\nOnce you have Decimal\ninstances, you can perform the usual mathematical\noperations on them. 
One limitation: exponentiation requires an integer exponent:\n>>> a = decimal.Decimal('35.72')\n>>> b = decimal.Decimal('1.73')\n>>> a+b\nDecimal(\"37.45\")\n>>> a-b\nDecimal(\"33.99\")\n>>> a*b\nDecimal(\"61.7956\")\n>>> a/b\nDecimal(\"20.64739884393063583815028902\")\n>>> a ** 2\nDecimal(\"1275.9184\")\n>>> a**b\nTraceback (most recent call last):\n...\ndecimal.InvalidOperation: x ** (non-integer)\nYou can combine Decimal instances with integers, but not with floating-point numbers:\n>>> a + 4\nDecimal(\"39.72\")\n>>> a + 4.5\nTraceback (most recent call last):\n...\nTypeError: You can interact Decimal only with int, long or Decimal data types.\n>>>\nDecimal numbers can be used with the math and cmath modules, but note that they\u2019ll be immediately converted to floating-point numbers before the operation is performed, resulting in a possible loss of precision and accuracy. You\u2019ll also get back a regular floating-point number and not a Decimal.\n>>> import math, cmath\n>>> d = decimal.Decimal('123456789012.345')\n>>> math.sqrt(d)\n351364.18288201344\n>>> cmath.sqrt(-d)\n351364.18288201344j\nDecimal instances have a sqrt() method that returns a Decimal, but if you need other things such as trigonometric functions you\u2019ll have to implement them.\n>>> d.sqrt()\nDecimal(\"351364.1828820134592177245001\")\nThe Context type\u00b6\nInstances of the Context class encapsulate several settings for decimal operations:\nprec is the precision: the number of significant digits kept in results, not the number of places after the decimal point.\nrounding specifies the rounding mode. The decimal module has constants for the various possibilities: ROUND_DOWN, ROUND_CEILING, ROUND_HALF_EVEN, and various others.\ntraps is a dictionary specifying what happens on encountering certain error conditions: either an exception is raised or a value is returned.
Some examples of error conditions are division by zero, loss of precision, and overflow.\nThere\u2019s a thread-local default context available by calling getcontext()\n;\nyou can change the properties of this context to alter the default precision,\nrounding, or trap handling. The following example shows the effect of changing\nthe precision of the default context:\n>>> decimal.getcontext().prec\n28\n>>> decimal.Decimal(1) / decimal.Decimal(7)\nDecimal(\"0.1428571428571428571428571429\")\n>>> decimal.getcontext().prec = 9\n>>> decimal.Decimal(1) / decimal.Decimal(7)\nDecimal(\"0.142857143\")\nThe default action for error conditions is selectable; the module can either return a special value such as infinity or not-a-number, or exceptions can be raised:\n>>> decimal.Decimal(1) / decimal.Decimal(0)\nTraceback (most recent call last):\n...\ndecimal.DivisionByZero: x / 0\n>>> decimal.getcontext().traps[decimal.DivisionByZero] = False\n>>> decimal.Decimal(1) / decimal.Decimal(0)\nDecimal(\"Infinity\")\n>>>\nThe Context\ninstance also has various methods for formatting numbers\nsuch as to_eng_string()\nand to_sci_string()\n.\nFor more information, see the documentation for the decimal\nmodule, which\nincludes a quick-start tutorial and a reference.\nSee also\n- PEP 327 - Decimal Data Type\nWritten by Facundo Batista and implemented by Facundo Batista, Eric Price, Raymond Hettinger, Aahz, and Tim Peters.\n- http://www.lahey.com/float.htm\nThe article uses Fortran code to illustrate many of the problems that floating-point inaccuracy can cause.\n- https://speleotrove.com/decimal/\nA description of a decimal-based representation. This representation is being proposed as a standard, and underlies the new Python decimal type. Much of this material was written by Mike Cowlishaw, designer of the Rexx language.\nPEP 328: Multi-line Imports\u00b6\nOne language change is a small syntactic tweak aimed at making it easier to\nimport many names from a module. 
In a from module import names statement, names is a sequence of names separated by commas. If the sequence is very long, you can either write multiple imports from the same module, or you can use backslashes to escape the line endings like this:\nfrom SimpleXMLRPCServer import SimpleXMLRPCServer,\\\nSimpleXMLRPCRequestHandler,\\\nCGIXMLRPCRequestHandler,\\\nresolve_dotted_attribute\nThe syntactic change in Python 2.4 simply allows putting the names within parentheses. Python ignores newlines within a parenthesized expression, so the backslashes are no longer needed:\nfrom SimpleXMLRPCServer import (SimpleXMLRPCServer,\nSimpleXMLRPCRequestHandler,\nCGIXMLRPCRequestHandler,\nresolve_dotted_attribute)\nThe PEP also proposes that all import statements be absolute imports, with a leading . character to indicate a relative import. This part of the PEP was not implemented for Python 2.4, but was completed for Python 2.5.\nSee also\n- PEP 328 - Imports: Multi-Line and Absolute/Relative\nWritten by Aahz. Multi-line imports were implemented by Dima Dorfman.\nPEP 331: Locale-Independent Float/String Conversions\u00b6\nThe locale module lets Python software select various conversions and display conventions that are localized to a particular country or language. However, the module was careful not to change the numeric locale because various functions in Python\u2019s implementation required that the numeric locale remain set to the 'C' locale. Often this was because the code was using the C library\u2019s atof() function.\nNot setting the numeric locale caused trouble for extensions that used third-party C libraries, however, because they wouldn\u2019t have the correct locale set.
The motivating example was GTK+, whose user interface widgets weren\u2019t displaying numbers in the current locale.\nThe solution described in the PEP is to add three new functions to the Python API that perform ASCII-only conversions, ignoring the locale setting:\nPyOS_ascii_strtod(str, ptr)\nandPyOS_ascii_atof(str, ptr)\nboth convert a string to a C double.PyOS_ascii_formatd(buffer, buf_len, format, d)\nconverts a double to an ASCII string.\nThe code for these functions came from the GLib library\n(https://developer-old.gnome.org/glib/2.26/), whose developers kindly\nrelicensed the relevant functions and donated them to the Python Software\nFoundation. The locale\nmodule can now change the numeric locale,\nletting extensions such as GTK+ produce the correct results.\nSee also\n- PEP 331 - Locale-Independent Float/String Conversions\nWritten by Christian R. Reis, and implemented by Gustavo Carneiro.\nOther Language Changes\u00b6\nHere are all of the changes that Python 2.4 makes to the core Python language.\nDecorators for functions and methods were added (PEP 318).\nBuilt-in\nset()\nandfrozenset()\ntypes were added (PEP 218). Other new built-ins include thereversed(seq)\nfunction (PEP 322).Generator expressions were added (PEP 289).\nCertain numeric expressions no longer return values restricted to 32 or 64 bits (PEP 237).\nYou can now put parentheses around the list of names in a\nfrom module import names\nstatement (PEP 328).The\ndict.update()\nmethod now accepts the same argument forms as thedict\nconstructor. This includes any mapping, any iterable of key/value pairs, and keyword arguments. (Contributed by Raymond Hettinger.)The string methods\nljust()\n,rjust()\n, andcenter()\nnow take an optional argument for specifying a fill character other than a space. (Contributed by Raymond Hettinger.)Strings also gained an\nrsplit()\nmethod that works like thesplit()\nmethod but splits from the end of the string. 
(Contributed by Sean Reifschneider.)\n>>> 'www.python.org'.split('.', 1)\n['www', 'python.org']\n>>> 'www.python.org'.rsplit('.', 1)\n['www.python', 'org']\nThree keyword parameters, cmp, key, and reverse, were added to the sort() method of lists. These parameters make some common usages of sort() simpler. All of these parameters are optional.\nFor the cmp parameter, the value should be a comparison function that takes two parameters and returns -1, 0, or +1 depending on how the parameters compare. This function will then be used to sort the list. Previously this was the only parameter that could be provided to sort().\nkey should be a single-parameter function that takes a list element and returns a comparison key for the element. The list is then sorted using the comparison keys. The following example sorts a list case-insensitively:\n>>> L = ['A', 'b', 'c', 'D']\n>>> L.sort()   # Case-sensitive sort\n>>> L\n['A', 'D', 'b', 'c']\n>>> # Using 'key' parameter to sort list\n>>> L.sort(key=lambda x: x.lower())\n>>> L\n['A', 'b', 'c', 'D']\n>>> # Old-fashioned way\n>>> L.sort(cmp=lambda x,y: cmp(x.lower(), y.lower()))\n>>> L\n['A', 'b', 'c', 'D']\nThe last example, which uses the cmp parameter, is the old way to perform a case-insensitive sort. It works, but it\u2019s slower than using the key parameter: key calls the lower() method once for each element in the list, while cmp calls it twice for each comparison, so using key saves on invocations of the lower() method.\nFor simple key functions and comparison functions, it is often possible to avoid a lambda expression by using an unbound method instead. For example, the above case-insensitive sort is best written as:\n>>> L.sort(key=str.lower)\n>>> L\n['A', 'b', 'c', 'D']\nFinally, the reverse parameter takes a Boolean value. If the value is true, the list will be sorted into reverse order. Instead of L.sort(); L.reverse(), you can now write L.sort(reverse=True).\nThe results of sorting are now guaranteed to be stable.
This means that two entries with equal keys will be returned in the same order as they were input. For example, you can sort a list of people by name, and then sort the list by age, resulting in a list sorted by age where people with the same age are in name-sorted order.\n(All changes to sort() contributed by Raymond Hettinger.)\nThere is a new built-in function sorted(iterable) that works like the in-place list.sort() method but can be used in expressions. The differences are:\nthe input may be any iterable;\na newly formed copy is sorted, leaving the original intact; and\nthe expression returns the new sorted copy.\n>>> L = [9,7,8,3,2,4,1,6,5]\n>>> [10+i for i in sorted(L)]   # usable in a list comprehension\n[11, 12, 13, 14, 15, 16, 17, 18, 19]\n>>> L                           # original is left unchanged\n[9,7,8,3,2,4,1,6,5]\n>>> sorted('Monty Python')      # any iterable may be an input\n[' ', 'M', 'P', 'h', 'n', 'n', 'o', 'o', 't', 't', 'y', 'y']\n>>> # List the contents of a dict sorted by key values\n>>> colormap = dict(red=1, blue=2, green=3, black=4, yellow=5)\n>>> for k, v in sorted(colormap.iteritems()):\n...     print k, v\n...\nblack 4\nblue 2\ngreen 3\nred 1\nyellow 5\n(Contributed by Raymond Hettinger.)\nInteger operations will no longer trigger an OverflowWarning. The OverflowWarning warning will disappear in Python 2.5.\nThe interpreter gained a new switch, -m, that takes a name, searches for the corresponding module on sys.path, and runs the module as a script. For example, you can now run the Python profiler with python -m profile. (Contributed by Nick Coghlan.)\nThe eval(expr, globals, locals) and execfile(filename, globals, locals) functions and the exec statement now accept any mapping type for the locals parameter. Previously this had to be a regular Python dictionary. (Contributed by Raymond Hettinger.)\nThe zip() built-in function and itertools.izip() now return an empty list if called with no arguments. Previously they raised a TypeError exception.
This makes them more suitable for use with variable length argument lists:\n>>> def transpose(array):\n...     return zip(*array)\n...\n>>> transpose([(1,2,3), (4,5,6)])\n[(1, 4), (2, 5), (3, 6)]\n>>> transpose([])\n[]\n(Contributed by Raymond Hettinger.)\nEncountering a failure while importing a module no longer leaves a partially initialized module object in sys.modules. The incomplete module object left behind would fool further imports of the same module into succeeding, leading to confusing errors. (Fixed by Tim Peters.)\nNone is now a constant; code that binds a new value to the name None is now a syntax error. (Contributed by Raymond Hettinger.)\nOptimizations\u00b6\nThe inner loops for list and tuple slicing were optimized and now run about one-third faster. The inner loops for dictionaries were also optimized, resulting in performance boosts for keys(), values(), items(), iterkeys(), itervalues(), and iteritems(). (Contributed by Raymond Hettinger.)\nThe machinery for growing and shrinking lists was optimized for speed and for space efficiency. Appending and popping from lists now runs faster due to more efficient code paths and less frequent use of the underlying system realloc(). List comprehensions also benefit. list.extend() was also optimized and no longer converts its argument into a temporary list before extending the base list. (Contributed by Raymond Hettinger.)\nlist(), tuple(), map(), filter(), and zip() now run several times faster with non-sequence arguments that supply a __len__() method. (Contributed by Raymond Hettinger.)\nThe methods list.__getitem__(), dict.__getitem__(), and dict.__contains__() are now implemented as method_descriptor objects rather than wrapper_descriptor objects. This form of access doubles their performance and makes them more suitable for use as arguments to functionals: map(mydict.__getitem__, keylist).
(Contributed by Raymond Hettinger.)Added a new opcode,\nLIST_APPEND\n, that simplifies the generated bytecode for list comprehensions and speeds them up by about a third. (Contributed by Raymond Hettinger.)The peephole bytecode optimizer has been improved to produce shorter, faster bytecode; remarkably, the resulting bytecode is more readable. (Enhanced by Raymond Hettinger.)\nString concatenations in statements of the form\ns = s + \"abc\"\nands += \"abc\"\nare now performed more efficiently in certain circumstances. This optimization won\u2019t be present in other Python implementations such as Jython, so you shouldn\u2019t rely on it; using thejoin()\nmethod of strings is still recommended when you want to efficiently glue a large number of strings together. (Contributed by Armin Rigo.)\nThe net result of the 2.4 optimizations is that Python 2.4 runs the pystone benchmark around 5% faster than Python 2.3 and 35% faster than Python 2.2. (pystone is not a particularly good benchmark, but it\u2019s the most commonly used measurement of Python\u2019s performance. Your own applications may show greater or smaller benefits from Python 2.4.)\nNew, Improved, and Deprecated Modules\u00b6\nAs usual, Python\u2019s standard library received a number of enhancements and bug\nfixes. Here\u2019s a partial list of the most notable changes, sorted alphabetically\nby module name. Consult the Misc/NEWS\nfile in the source tree for a more\ncomplete list of changes, or look through the CVS logs for all the details.\nThe\nasyncore\nmodule\u2019sloop()\nfunction now has a count parameter that lets you perform a limited number of passes through the polling loop. The default is still to loop forever.The\nbase64\nmodule now has more complete RFC 3548 support for Base64, Base32, and Base16 encoding and decoding, including optional case folding and optional alternative alphabets. 
(Contributed by Barry Warsaw.)\nThe bisect module now has an underlying C implementation for improved performance. (Contributed by Dmitry Vasiliev.)\nThe CJKCodecs collection of East Asian codecs, maintained by Hye-Shik Chang, was integrated into 2.4. The new encodings are:\nChinese (PRC): gb2312, gbk, gb18030, big5hkscs, hz\nChinese (ROC): big5, cp950\nJapanese: cp932, euc-jis-2004, euc-jp, euc-jisx0213, iso-2022-jp, iso-2022-jp-1, iso-2022-jp-2, iso-2022-jp-3, iso-2022-jp-ext, iso-2022-jp-2004, shift-jis, shift-jisx0213, shift-jis-2004\nKorean: cp949, euc-kr, johab, iso-2022-kr\nSome other new encodings were added: HP Roman8, ISO_8859-11, ISO_8859-16, PCTP-154, and TIS-620.\nThe UTF-8 and UTF-16 codecs now cope better with receiving partial input. Previously the StreamReader class would try to read more data, making it impossible to resume decoding from the stream. The read() method will now return as much data as it can and future calls will resume decoding where previous ones left off. (Implemented by Walter D\u00f6rwald.)\nThere is a new collections module for various specialized collection datatypes. Currently it contains just one type, deque, a double-ended queue that supports efficiently adding and removing elements from either end:\n>>> from collections import deque\n>>> d = deque('ghi')   # make a new deque with three items\n>>> d.append('j')      # add a new entry to the right side\n>>> d.appendleft('f')  # add a new entry to the left side\n>>> d                  # show the representation of the deque\ndeque(['f', 'g', 'h', 'i', 'j'])\n>>> d.pop()            # return and remove the rightmost item\n'j'\n>>> d.popleft()        # return and remove the leftmost item\n'f'\n>>> list(d)            # list the contents of the deque\n['g', 'h', 'i']\n>>> 'h' in d           # search the deque\nTrue\nSeveral modules, such as the Queue and threading modules, now take advantage of collections.deque for improved performance. (Contributed by Raymond Hettinger.)\nThe ConfigParser classes have been enhanced slightly.
Theread()\nmethod now returns a list of the files that were successfully parsed, and theset()\nmethod raisesTypeError\nif passed a value argument that isn\u2019t a string. (Contributed by John Belmonte and David Goodger.)The\ncurses\nmodule now supports the ncurses extensionuse_default_colors()\n. On platforms where the terminal supports transparency, this makes it possible to use a transparent background. (Contributed by J\u00f6rg Lehmann.)The\ndifflib\nmodule now includes anHtmlDiff\nclass that creates an HTML table showing a side by side comparison of two versions of a text. (Contributed by Dan Gass.)The\nemail\npackage was updated to version 3.0, which dropped various deprecated APIs and removes support for Python versions earlier than 2.3. The 3.0 version of the package uses a new incremental parser for MIME messages, available in theemail.FeedParser\nmodule. The new parser doesn\u2019t require reading the entire message into memory, and doesn\u2019t raise exceptions if a message is malformed; instead it records any problems in thedefect\nattribute of the message. (Developed by Anthony Baxter, Barry Warsaw, Thomas Wouters, and others.)The\nheapq\nmodule has been converted to C. The resulting tenfold improvement in speed makes the module suitable for handling high volumes of data. In addition, the module has two new functionsnlargest()\nandnsmallest()\nthat use heaps to find the N largest or smallest values in a dataset without the expense of a full sort. (Contributed by Raymond Hettinger.)The\nhttplib\nmodule now contains constants for HTTP status codes defined in various HTTP-related RFC documents. Constants have names such asOK\n,CREATED\n,CONTINUE\n, andMOVED_PERMANENTLY\n; use pydoc to get a full list. 
(Contributed by Andrew Eland.)\nThe imaplib module now supports IMAP\u2019s THREAD command (contributed by Yves Dionne) and new deleteacl() and myrights() methods (contributed by Arnaud Mazin).\nThe itertools module gained a groupby(iterable[, func]) function. iterable is something that can be iterated over to return a stream of elements, and the optional func parameter is a function that takes an element and returns a key value; if omitted, the key is simply the element itself. groupby() then groups the elements into subsequences which have matching values of the key, and returns a series of 2-tuples containing the key value and an iterator over the subsequence.\nHere\u2019s an example to make this clearer. The key function simply returns whether a number is even or odd, so the result of groupby() is to return consecutive runs of odd or even numbers.\n>>> import itertools\n>>> L = [2, 4, 6, 7, 8, 9, 11, 12, 14]\n>>> for key_val, it in itertools.groupby(L, lambda x: x % 2):\n...     print key_val, list(it)\n...\n0 [2, 4, 6]\n1 [7]\n0 [8]\n1 [9, 11]\n0 [12, 14]\n>>>\ngroupby() is typically used with sorted input. The logic for groupby() is similar to the Unix uniq filter, which makes it handy for eliminating, counting, or identifying duplicate elements:\n>>> word = 'abracadabra'\n>>> letters = sorted(word)   # Turn string into a sorted list of letters\n>>> letters\n['a', 'a', 'a', 'a', 'a', 'b', 'b', 'c', 'd', 'r', 'r']\n>>> for k, g in itertools.groupby(letters):\n...     print k, list(g)\n...\na ['a', 'a', 'a', 'a', 'a']\nb ['b', 'b']\nc ['c']\nd ['d']\nr ['r', 'r']\n>>> # List unique letters\n>>> [k for k, g in itertools.groupby(letters)]\n['a', 'b', 'c', 'd', 'r']\n>>> # Count letter occurrences\n>>> [(k, len(list(g))) for k, g in itertools.groupby(letters)]\n[('a', 5), ('b', 2), ('c', 1), ('d', 1), ('r', 2)]\n(Contributed by Hye-Shik Chang.)\nitertools also gained a function named tee(iterator, N) that returns N independent iterators that replicate iterator.
If N is omitted, the default is 2.\n>>> L = [1,2,3]\n>>> i1, i2 = itertools.tee(L)\n>>> i1, i2\n(<itertools.tee object at 0x...>, <itertools.tee object at 0x...>)\n>>> list(i1)   # Run the first iterator to exhaustion\n[1, 2, 3]\n>>> list(i2)   # Run the second iterator to exhaustion\n[1, 2, 3]\nNote that tee() has to keep copies of the values returned by the iterator; in the worst case, it may need to keep all of them. This should therefore be used carefully if the leading iterator can run far ahead of the trailing iterator in a long stream of inputs. If the separation is large, then you might as well use list() instead. When the iterators track closely with one another, tee() is ideal. Possible applications include bookmarking, windowing, or lookahead iterators. (Contributed by Raymond Hettinger.)\nA number of functions were added to the locale module, such as bind_textdomain_codeset() to specify a particular encoding and a family of l*gettext() functions that return messages in the chosen encoding. (Contributed by Gustavo Niemeyer.)\nSome keyword arguments were added to the logging package\u2019s basicConfig() function to simplify log configuration. The default behavior is to log messages to standard error, but various keyword arguments can be specified to log to a particular file, change the logging format, or set the logging level. For example:\nimport logging\nlogging.basicConfig(filename='/var/log/application.log',\n                    level=0,  # Log all messages\n                    format='%(levelname)s:%(process)d:%(thread)d:%(message)s')\nOther additions to the logging package include a log(level, msg) convenience method, as well as a TimedRotatingFileHandler class that rotates its log files at a timed interval. The module already had RotatingFileHandler, which rotated logs once the file exceeded a certain size. Both classes derive from a new BaseRotatingHandler class that can be used to implement other rotating handlers.\n(Changes implemented by Vinay Sajip.)\nThe marshal module now shares interned strings on unpacking a data structure.
This may shrink the size of certain pickle strings, but the primary effect is to make .pyc files significantly smaller. (Contributed by Martin von L\u00f6wis.)\nThe nntplib module\u2019s NNTP class gained description() and descriptions() methods to retrieve newsgroup descriptions for a single group or for a range of groups. (Contributed by J\u00fcrgen A. Erhard.)\nTwo new functions were added to the operator module, attrgetter(attr) and itemgetter(index). Both functions return callables that take a single argument and return the corresponding attribute or item; these callables make excellent data extractors when used with map() or sorted(). For example:\n>>> import operator\n>>> L = [('c', 2), ('d', 1), ('a', 4), ('b', 3)]\n>>> map(operator.itemgetter(0), L)\n['c', 'd', 'a', 'b']\n>>> map(operator.itemgetter(1), L)\n[2, 1, 4, 3]\n>>> sorted(L, key=operator.itemgetter(1))   # Sort list by second tuple item\n[('d', 1), ('c', 2), ('b', 3), ('a', 4)]\n(Contributed by Raymond Hettinger.)\nThe optparse module was updated in various ways. The module now passes its messages through gettext.gettext(), making it possible to internationalize Optik\u2019s help and error messages. Help messages for options can now include the string '%default', which will be replaced by the option\u2019s default value. (Contributed by Greg Ward.)\nThe long-term plan is to deprecate the rfc822 module in some future Python release in favor of the email package. To this end, the email.Utils.formatdate function has been changed to make it usable as a replacement for rfc822.formatdate(). You may want to write new e-mail processing code with this in mind. (Change implemented by Anthony Baxter.)\nA new urandom(n) function was added to the os module, returning a string containing n bytes of random data. This function provides access to platform-specific sources of randomness such as /dev/urandom on Linux or the Windows CryptoAPI.
(Contributed by Trevor Perrin.)\nAnother new function: os.path.lexists(path) returns true if the file specified by path exists, whether or not it\u2019s a symbolic link. This differs from the existing os.path.exists(path) function, which returns false if path is a symlink that points to a destination that doesn\u2019t exist. (Contributed by Beni Cherniavsky.)\nA new getsid() function was added to the posix module that underlies the os module. (Contributed by J. Raynor.)\nThe poplib module now supports POP over SSL. (Contributed by Hector Urtubia.)\nThe profile module can now profile C extension functions. (Contributed by Nick Bastin.)\nThe random module has a new method called getrandbits(N) that returns a long integer N bits in length. The existing randrange() method now uses getrandbits() where appropriate, making generation of arbitrarily large random numbers more efficient. (Contributed by Raymond Hettinger.)\nThe regular expression language accepted by the re module was extended with simple conditional expressions, written as (?(group)A|B). group is either a numeric group ID or a group name defined with (?P<name>...) earlier in the expression. If the specified group matched, the regular expression pattern A will be tested against the string; if the group didn\u2019t match, the pattern B will be used instead. (Contributed by Gustavo Niemeyer.)\nThe re module is also no longer recursive, thanks to a massive amount of work by Gustavo Niemeyer. In a recursive regular expression engine, certain patterns result in a large amount of C stack space being consumed, and it was possible to overflow the stack. For example, if you matched a 30000-byte string of a characters against the expression (a|b)+, one stack frame was consumed per character. Python 2.3 tried to check for stack overflow and raise a RuntimeError exception, but certain patterns could sidestep the checking and if you were unlucky Python could segfault.
Python 2.4\u2019s regular expression engine can match this pattern without problems.\nThe signal module now performs tighter error-checking on the parameters to the signal.signal() function. For example, you can\u2019t set a handler on the SIGKILL signal; previous versions of Python would quietly accept this, but 2.4 will raise a RuntimeError exception.\nTwo new functions were added to the socket module. socketpair() returns a pair of connected sockets and getservbyport(port) looks up the service name for a given port number. (Contributed by Dave Cole and Barry Warsaw.)\nThe sys.exitfunc() function has been deprecated. Code should be using the existing atexit module, which correctly handles calling multiple exit functions. Eventually sys.exitfunc() will become a purely internal interface, accessed only by atexit.\nThe tarfile module now generates GNU-format tar files by default. (Contributed by Lars Gust\u00e4bel.)\nThe threading module now has an elegantly simple way to support thread-local data. The module contains a local class whose attribute values are local to different threads.\nimport threading\ndata = threading.local()\ndata.number = 42\ndata.url = ('www.python.org', 80)\nOther threads can assign and retrieve their own values for the number and url attributes. You can subclass local to initialize attributes or to add methods. (Contributed by Jim Fulton.)\nThe timeit module now automatically disables periodic garbage collection during the timing loop. This change makes consecutive timings more comparable. (Contributed by Raymond Hettinger.)\nThe weakref module now supports a wider variety of objects including Python functions, class instances, sets, frozensets, deques, arrays, files, sockets, and regular expression pattern objects. (Contributed by Raymond Hettinger.)\nThe xmlrpclib module now supports a multi-call extension for transmitting multiple XML-RPC calls in a single HTTP operation.
(Contributed by Brian Quinlan.)\nThe mpz, rotor, and xreadlines modules have been removed.\ndoctest\u00b6\nThe doctest module underwent considerable refactoring thanks to Edward Loper and Tim Peters. Testing can still be as simple as running doctest.testmod(), but the refactorings allow customizing the module\u2019s operation in various ways.\nThe new DocTestFinder class extracts the tests from a given object\u2019s docstrings:\ndef f(x, y):\n    \"\"\"\n    >>> f(2,2)\n    4\n    >>> f(3,2)\n    6\n    \"\"\"\n    return x*y\n\nfinder = doctest.DocTestFinder()\n# Get list of DocTest instances\ntests = finder.find(f)\nThe new DocTestRunner class then runs individual tests and can produce a summary of the results:\nrunner = doctest.DocTestRunner()\nfor t in tests:\n    tried, failed = runner.run(t)\nrunner.summarize(verbose=1)\nThe above example produces the following output:\n1 items passed all tests:\n2 tests in f\n2 tests in 1 items.\n2 passed and 0 failed.\nTest passed.\nDocTestRunner uses an instance of the OutputChecker class to compare the expected output with the actual output. This class takes a number of different flags that customize its behaviour; ambitious users can also write a completely new subclass of OutputChecker.\nThe default output checker provides a number of handy features. For example, with the doctest.ELLIPSIS option flag, an ellipsis (...) in the expected output matches any substring, making it easier to accommodate outputs that vary in minor ways:\ndef o(n):\n    \"\"\"\n    >>> o(1)\n    <__main__.C instance at 0x...>\n    >>>\n    \"\"\"\nAnother special string, <BLANKLINE>, matches a blank line:\ndef p(n):\n    \"\"\"\n    >>> p(1)\n    <BLANKLINE>\n    >>>\n    \"\"\"\nAnother new capability is producing a diff-style display of the output by specifying the doctest.REPORT_UDIFF (unified diffs), doctest.REPORT_CDIFF (context diffs), or doctest.REPORT_NDIFF (delta-style) option flags.
For example:\ndef g (n):\n\"\"\">>> g(4)\nhere\nis\na\nlengthy\n>>>\"\"\"\nL = 'here is a rather lengthy list of words'.split()\nfor word in L[:n]:\nprint word\nRunning the above function\u2019s tests with doctest.REPORT_UDIFF\nspecified,\nyou get the following output:\n**********************************************************************\nFile \"t.py\", line 15, in g\nFailed example:\ng(4)\nDifferences (unified diff with -expected +actual):\n@@ -2,3 +2,3 @@\nis\na\n-lengthy\n+rather\n**********************************************************************\nBuild and C API Changes\u00b6\nSome of the changes to Python\u2019s build process and to the C API are:\nThree new convenience macros were added for common return values from extension functions:\nPy_RETURN_NONE\n,Py_RETURN_TRUE\n, andPy_RETURN_FALSE\n. (Contributed by Brett Cannon.)Another new macro,\nPy_CLEAR\n, decreases the reference count of obj and sets obj to the null pointer. (Contributed by Jim Fulton.)A new function,\nPyTuple_Pack(N, obj1, obj2, ..., objN)\n, constructs tuples from a variable length argument list of Python objects. (Contributed by Raymond Hettinger.)A new function,\nPyDict_Contains(d, k)\n, implements fast dictionary lookups without masking exceptions raised during the look-up process. (Contributed by Raymond Hettinger.)The Py_IS_NAN(X) macro returns 1 if its float or double argument X is a NaN. (Contributed by Tim Peters.)\nC code can avoid unnecessary locking by using the new\nPyEval_ThreadsInitialized()\nfunction to tell if any thread operations have been performed. If this function returns false, no lock operations are needed. (Contributed by Nick Coghlan.)A new function,\nPyArg_VaParseTupleAndKeywords()\n, is the same asPyArg_ParseTupleAndKeywords()\nbut takes ava_list\ninstead of a number of arguments. (Contributed by Greg Chapman.)A new method flag,\nMETH_COEXIST\n, allows a function defined in slots to co-exist with aPyCFunction\nhaving the same name. 
This can halve the access time for a method such asset.__contains__()\n. (Contributed by Raymond Hettinger.)Python can now be built with additional profiling for the interpreter itself, intended as an aid to people developing the Python core. Providing\n--enable-profiling\nto the configure script will let you profile the interpreter with gprof, and providing the--with-tsc\nswitch enables profiling using the Pentium\u2019s Time-Stamp-Counter register. Note that the--with-tsc\nswitch is slightly misnamed, because the profiling feature also works on the PowerPC platform, though that processor architecture doesn\u2019t call that register \u201cthe TSC register\u201d. (Contributed by Jeremy Hylton.)The\ntracebackobject\ntype has been renamed toPyTracebackObject\n.\nPort-Specific Changes\u00b6\nThe Windows port now builds under MSVC++ 7.1 as well as version 6. (Contributed by Martin von L\u00f6wis.)\nPorting to Python 2.4\u00b6\nThis section lists previously described changes that may require changes to your code:\nLeft shifts and hexadecimal/octal constants that are too large no longer trigger a\nFutureWarning\nand return a value limited to 32 or 64 bits; instead they return a long integer.Integer operations will no longer trigger an\nOverflowWarning\n. TheOverflowWarning\nwarning will disappear in Python 2.5.The\nzip()\nbuilt-in function anditertools.izip()\nnow return an empty list instead of raising aTypeError\nexception if called with no arguments.You can no longer compare the\ndate\nanddatetime\ninstances provided by thedatetime\nmodule. Two instances of different classes will now always be unequal, and relative comparisons (<\n,>\n) will raise aTypeError\n.dircache.listdir()\nnow passes exceptions to the caller instead of returning empty lists.LexicalHandler.startDTD()\nused to receive the public and system IDs in the wrong order. 
This has been corrected; applications relying on the wrong order need to be fixed.fcntl.ioctl()\nnow warns if the mutate argument is omitted and relevant.The\ntarfile\nmodule now generates GNU-format tar files by default.Encountering a failure while importing a module no longer leaves a partially initialized module object in\nsys.modules\n.None\nis now a constant; code that binds a new value to the nameNone\nis now a syntax error.The\nsignals.signal()\nfunction now raises aRuntimeError\nexception for certain illegal values; previously these errors would pass silently. For example, you can no longer set a handler on theSIGKILL\nsignal.\nAcknowledgements\u00b6\nThe author would like to thank the following people for offering suggestions, corrections and assistance with various drafts of this article: Koray Can, Hye-Shik Chang, Michael Dyck, Raymond Hettinger, Brian Hurt, Hamish Lawson, Fredrik Lundh, Sean Reifschneider, Sadruddin Rejeb.", "code_snippets": [
"\n"], "language": "Python", "source": "python.org", "token_count": 13504}
+{"url": "https://docs.python.org/3/library/asyncio-stream.html", "title": "Streams", "content": "Streams\u00b6\nSource code: Lib/asyncio/streams.py\nStreams are high-level async/await-ready primitives to work with network connections. Streams allow sending and receiving data without using callbacks or low-level protocols and transports.\nHere is an example of a TCP echo client written using asyncio streams:\nimport asyncio\nasync def tcp_echo_client(message):\nreader, writer = await asyncio.open_connection(\n'127.0.0.1', 8888)\nprint(f'Send: {message!r}')\nwriter.write(message.encode())\nawait writer.drain()\ndata = await reader.read(100)\nprint(f'Received: {data.decode()!r}')\nprint('Close the connection')\nwriter.close()\nawait writer.wait_closed()\nasyncio.run(tcp_echo_client('Hello World!'))\nSee also the Examples section below.\nStream Functions\nThe following top-level asyncio functions can be used to create and work with streams:\n- async asyncio.open_connection(host=None, port=None, *, limit=None, ssl=None, family=0, proto=0, flags=0, sock=None, local_addr=None, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, happy_eyeballs_delay=None, interleave=None)\u00b6\nEstablish a network connection and return a pair of\n(reader, 
writer)\nobjects.The returned reader and writer objects are instances of\nStreamReader\nandStreamWriter\nclasses.limit determines the buffer size limit used by the returned\nStreamReader\ninstance. By default the limit is set to 64 KiB.The rest of the arguments are passed directly to\nloop.create_connection()\n.Note\nThe sock argument transfers ownership of the socket to the\nStreamWriter\ncreated. To close the socket, call itsclose()\nmethod.Changed in version 3.7: Added the ssl_handshake_timeout parameter.\nChanged in version 3.8: Added the happy_eyeballs_delay and interleave parameters.\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\n- async asyncio.start_server(client_connected_cb, host=None, port=None, *, limit=None, family=socket.AF_UNSPEC, flags=socket.AI_PASSIVE, sock=None, backlog=100, ssl=None, reuse_address=None, reuse_port=None, keep_alive=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, start_serving=True)\u00b6\nStart a socket server.\nThe client_connected_cb callback is called whenever a new client connection is established. It receives a\n(reader, writer)\npair as two arguments, instances of theStreamReader\nandStreamWriter\nclasses.client_connected_cb can be a plain callable or a coroutine function; if it is a coroutine function, it will be automatically scheduled as a\nTask\n.limit determines the buffer size limit used by the returned\nStreamReader\ninstance. By default the limit is set to 64 KiB.The rest of the arguments are passed directly to\nloop.create_server()\n.Note\nThe sock argument transfers ownership of the socket to the server created. 
To close the socket, call the server\u2019s\nclose()\nmethod.Changed in version 3.7: Added the ssl_handshake_timeout and start_serving parameters.\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\nChanged in version 3.13: Added the keep_alive parameter.\nUnix Sockets\n- async asyncio.open_unix_connection(path=None, *, limit=None, ssl=None, sock=None, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None)\u00b6\nEstablish a Unix socket connection and return a pair of\n(reader, writer)\n.Similar to\nopen_connection()\nbut operates on Unix sockets.See also the documentation of\nloop.create_unix_connection()\n.Note\nThe sock argument transfers ownership of the socket to the\nStreamWriter\ncreated. To close the socket, call itsclose()\nmethod.Availability: Unix.\nChanged in version 3.7: Added the ssl_handshake_timeout parameter. The path parameter can now be a path-like object\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\n- async asyncio.start_unix_server(client_connected_cb, path=None, *, limit=None, sock=None, backlog=100, ssl=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, start_serving=True, cleanup_socket=True)\u00b6\nStart a Unix socket server.\nSimilar to\nstart_server()\nbut works with Unix sockets.If cleanup_socket is true then the Unix socket will automatically be removed from the filesystem when the server is closed, unless the socket has been replaced after the server has been created.\nSee also the documentation of\nloop.create_unix_server()\n.Note\nThe sock argument transfers ownership of the socket to the server created. To close the socket, call the server\u2019s\nclose()\nmethod.Availability: Unix.\nChanged in version 3.7: Added the ssl_handshake_timeout and start_serving parameters. 
The path parameter can now be a path-like object.\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\nChanged in version 3.13: Added the cleanup_socket parameter.\nStreamReader\u00b6\n- class asyncio.StreamReader\u00b6\nRepresents a reader object that provides APIs to read data from the IO stream. As an asynchronous iterable, the object supports the\nasync for\nstatement.It is not recommended to instantiate StreamReader objects directly; use\nopen_connection()\nandstart_server()\ninstead.- feed_eof()\u00b6\nAcknowledge the EOF.\n- async read(n=-1)\u00b6\nRead up to n bytes from the stream.\nIf n is not provided or set to\n-1\n, read until EOF, then return all readbytes\n. If EOF was received and the internal buffer is empty, return an emptybytes\nobject.If n is\n0\n, return an emptybytes\nobject immediately.If n is positive, return at most n available\nbytes\nas soon as at least 1 byte is available in the internal buffer. If EOF is received before any byte is read, return an emptybytes\nobject.\n- async readline()\u00b6\nRead one line, where \u201cline\u201d is a sequence of bytes ending with\n\\n\n.If EOF is received and\n\\n\nwas not found, the method returns partially read data.If EOF is received and the internal buffer is empty, return an empty\nbytes\nobject.\n- async readexactly(n)\u00b6\nRead exactly n bytes.\nRaise an\nIncompleteReadError\nif EOF is reached before n can be read. Use theIncompleteReadError.partial\nattribute to get the partially read data.\n- async readuntil(separator=b'\\n')\u00b6\nRead data from the stream until separator is found.\nOn success, the data and separator will be removed from the internal buffer (consumed). 
Returned data will include the separator at the end.\nIf the amount of data read exceeds the configured stream limit, a\nLimitOverrunError\nexception is raised, and the data is left in the internal buffer and can be read again.If EOF is reached before the complete separator is found, an\nIncompleteReadError\nexception is raised, and the internal buffer is reset. TheIncompleteReadError.partial\nattribute may contain a portion of the separator.The separator may also be a tuple of separators. In this case the return value will be the shortest possible that has any separator as the suffix. For the purposes of\nLimitOverrunError\n, the shortest possible separator is considered to be the one that matched.Added in version 3.5.2.\nChanged in version 3.13: The separator parameter may now be a\ntuple\nof separators.\n- at_eof()\u00b6\nReturn\nTrue\nif the buffer is empty andfeed_eof()\nwas called.\nStreamWriter\u00b6\n- class asyncio.StreamWriter\u00b6\nRepresents a writer object that provides APIs to write data to the IO stream.\nIt is not recommended to instantiate StreamWriter objects directly; use\nopen_connection()\nandstart_server()\ninstead.- write(data)\u00b6\nThe method attempts to write the data to the underlying socket immediately. If that fails, the data is queued in an internal write buffer until it can be sent.\nThe data buffer should be a bytes, bytearray, or C-contiguous one-dimensional memoryview object.\nThe method should be used along with the\ndrain()\nmethod:stream.write(data) await stream.drain()\n- writelines(data)\u00b6\nThe method writes a list (or any iterable) of bytes to the underlying socket immediately. 
If that fails, the data is queued in an internal write buffer until it can be sent.\nThe method should be used along with the\ndrain()\nmethod:stream.writelines(lines) await stream.drain()\n- close()\u00b6\nThe method closes the stream and the underlying socket.\nThe method should be used, though not mandatory, along with the\nwait_closed()\nmethod:stream.close() await stream.wait_closed()\n- can_write_eof()\u00b6\nReturn\nTrue\nif the underlying transport supports thewrite_eof()\nmethod,False\notherwise.\n- write_eof()\u00b6\nClose the write end of the stream after the buffered write data is flushed.\n- transport\u00b6\nReturn the underlying asyncio transport.\n- get_extra_info(name, default=None)\u00b6\nAccess optional transport information; see\nBaseTransport.get_extra_info()\nfor details.\n- async drain()\u00b6\nWait until it is appropriate to resume writing to the stream. Example:\nwriter.write(data) await writer.drain()\nThis is a flow control method that interacts with the underlying IO write buffer. When the size of the buffer reaches the high watermark, drain() blocks until the size of the buffer is drained down to the low watermark and writing can be resumed. 
When there is nothing to wait for, the\ndrain()\nreturns immediately.\n- async start_tls(sslcontext, *, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None)\u00b6\nUpgrade an existing stream-based connection to TLS.\nParameters:\nsslcontext: a configured instance of\nSSLContext\n.server_hostname: sets or overrides the host name that the target server\u2019s certificate will be matched against.\nssl_handshake_timeout is the time in seconds to wait for the TLS handshake to complete before aborting the connection.\n60.0\nseconds ifNone\n(default).ssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection.\n30.0\nseconds ifNone\n(default).\nAdded in version 3.11.\nChanged in version 3.12: Added the ssl_shutdown_timeout parameter.\n- is_closing()\u00b6\nReturn\nTrue\nif the stream is closed or in the process of being closed.Added in version 3.7.\nExamples\u00b6\nTCP echo client using streams\u00b6\nTCP echo client using the asyncio.open_connection()\nfunction:\nimport asyncio\nasync def tcp_echo_client(message):\nreader, writer = await asyncio.open_connection(\n'127.0.0.1', 8888)\nprint(f'Send: {message!r}')\nwriter.write(message.encode())\nawait writer.drain()\ndata = await reader.read(100)\nprint(f'Received: {data.decode()!r}')\nprint('Close the connection')\nwriter.close()\nawait writer.wait_closed()\nasyncio.run(tcp_echo_client('Hello World!'))\nSee also\nThe TCP echo client protocol\nexample uses the low-level loop.create_connection()\nmethod.\nTCP echo server using streams\u00b6\nTCP echo server using the asyncio.start_server()\nfunction:\nimport asyncio\nasync def handle_echo(reader, writer):\ndata = await reader.read(100)\nmessage = data.decode()\naddr = writer.get_extra_info('peername')\nprint(f\"Received {message!r} from {addr!r}\")\nprint(f\"Send: {message!r}\")\nwriter.write(data)\nawait writer.drain()\nprint(\"Close the connection\")\nwriter.close()\nawait 
writer.wait_closed()\nasync def main():\nserver = await asyncio.start_server(\nhandle_echo, '127.0.0.1', 8888)\naddrs = ', '.join(str(sock.getsockname()) for sock in server.sockets)\nprint(f'Serving on {addrs}')\nasync with server:\nawait server.serve_forever()\nasyncio.run(main())\nSee also\nThe TCP echo server protocol\nexample uses the loop.create_server()\nmethod.\nGet HTTP headers\u00b6\nSimple example querying HTTP headers of the URL passed on the command line:\nimport asyncio\nimport urllib.parse\nimport sys\nasync def print_http_headers(url):\nurl = urllib.parse.urlsplit(url)\nif url.scheme == 'https':\nreader, writer = await asyncio.open_connection(\nurl.hostname, 443, ssl=True)\nelse:\nreader, writer = await asyncio.open_connection(\nurl.hostname, 80)\nquery = (\nf\"HEAD {url.path or '/'} HTTP/1.0\\r\\n\"\nf\"Host: {url.hostname}\\r\\n\"\nf\"\\r\\n\"\n)\nwriter.write(query.encode('latin-1'))\nwhile True:\nline = await reader.readline()\nif not line:\nbreak\nline = line.decode('latin1').rstrip()\nif line:\nprint(f'HTTP header> {line}')\n# Ignore the body, close the socket\nwriter.close()\nawait writer.wait_closed()\nurl = sys.argv[1]\nasyncio.run(print_http_headers(url))\nUsage:\npython example.py http://example.com/path/page.html\nor with HTTPS:\npython example.py https://example.com/path/page.html\nRegister an open socket to wait for data using streams\u00b6\nCoroutine waiting until a socket receives data using the\nopen_connection()\nfunction:\nimport asyncio\nimport socket\nasync def wait_for_data():\n# Get a reference to the current event loop because\n# we want to access low-level APIs.\nloop = asyncio.get_running_loop()\n# Create a pair of connected sockets.\nrsock, wsock = socket.socketpair()\n# Register the open socket to wait for data.\nreader, writer = await asyncio.open_connection(sock=rsock)\n# Simulate the reception of data from the network\nloop.call_soon(wsock.send, 'abc'.encode())\n# Wait for data\ndata = await reader.read(100)\n# Got 
data, we are done: close the socket\nprint(\"Received:\", data.decode())\nwriter.close()\nawait writer.wait_closed()\n# Close the second socket\nwsock.close()\nasyncio.run(wait_for_data())\nSee also\nThe register an open socket to wait for data using a protocol example uses a low-level protocol and\nthe loop.create_connection()\nmethod.\nThe watch a file descriptor for read events example uses the low-level\nloop.add_reader()\nmethod to watch a file descriptor.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3318}
+{"url": "https://docs.python.org/3/library/asyncio-graph.html", "title": "Call Graph Introspection", "content": "Call Graph Introspection\u00b6\nSource 
code: Lib/asyncio/graph.py\nasyncio has powerful runtime call graph introspection utilities to trace the entire call graph of a running coroutine or task, or a suspended future. These utilities and the underlying machinery can be used from within a Python program or by external profilers and debuggers.\nAdded in version 3.14.\n- asyncio.print_call_graph(future=None, /, *, file=None, depth=1, limit=None)\u00b6\nPrint the async call graph for the current task or the provided\nTask\norFuture\n.This function prints entries starting from the top frame and going down towards the invocation point.\nThe function receives an optional future argument. If not passed, the current running task will be used.\nIf the function is called on the current task, the optional keyword-only depth argument can be used to skip the specified number of frames from top of the stack.\nIf the optional keyword-only limit argument is provided, each call stack in the resulting graph is truncated to include at most\nabs(limit)\nentries. If limit is positive, the entries left are the closest to the invocation point. If limit is negative, the topmost entries are left. If limit is omitted orNone\n, all entries are present. If limit is0\n, the call stack is not printed at all, only \u201cawaited by\u201d information is printed.If file is omitted or\nNone\n, the function will print tosys.stdout\n.Example:\nThe following Python code:\nimport asyncio async def test(): asyncio.print_call_graph() async def main(): async with asyncio.TaskGroup() as g: g.create_task(test(), name='test') asyncio.run(main())\nwill print:\n* Task(name='test', id=0x1039f0fe0) + Call stack: | File 't2.py', line 4, in async test() + Awaited by: * Task(name='Task-1', id=0x103a5e060) + Call stack: | File 'taskgroups.py', line 107, in async TaskGroup.__aexit__() | File 't2.py', line 7, in async main()\n- asyncio.format_call_graph(future=None, /, *, depth=1, limit=None)\u00b6\nLike\nprint_call_graph()\n, but returns a string. 
If future isNone\nand there\u2019s no current task, the function returns an empty string.\n- asyncio.capture_call_graph(future=None, /, *, depth=1, limit=None)\u00b6\nCapture the async call graph for the current task or the provided\nTask\norFuture\n.The function receives an optional future argument. If not passed, the current running task will be used. If there\u2019s no current task, the function returns\nNone\n.If the function is called on the current task, the optional keyword-only depth argument can be used to skip the specified number of frames from top of the stack.\nReturns a\nFutureCallGraph\ndata class object:FutureCallGraph(future, call_stack, awaited_by)\nFrameCallGraphEntry(frame)\nWhere frame is a frame object of a regular Python function in the call stack.\nLow level utility functions\u00b6\nTo introspect an async call graph asyncio requires cooperation from\ncontrol flow structures, such as shield()\nor TaskGroup\n.\nAny time an intermediate Future\nobject with low-level APIs like\nFuture.add_done_callback()\nis\ninvolved, the following two functions should be used to inform asyncio\nabout how exactly such intermediate future objects are connected with\nthe tasks they wrap or control.\n- asyncio.future_add_to_awaited_by(future, waiter, /)\u00b6\nRecord that future is awaited on by waiter.\nBoth future and waiter must be instances of\nFuture\norTask\nor their subclasses, otherwise the call would have no effect.A call to\nfuture_add_to_awaited_by()\nmust be followed by an eventual call to thefuture_discard_from_awaited_by()\nfunction with the same arguments.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 879}
+{"url": "https://docs.python.org/3/library/email.charset.html", "title": "email.charset: Representing character sets", 
"content": "email.charset\n: Representing character sets\u00b6\nSource code: Lib/email/charset.py\nThis module is part of the legacy (Compat32\n) email API. In the new\nAPI only the aliases table is used.\nThe remaining text in this section is the original documentation of the module.\nThis module provides a class Charset\nfor representing character sets\nand character set conversions in email messages, as well as a character set\nregistry and several convenience methods for manipulating this registry.\nInstances of Charset\nare used in several other modules within the\nemail\npackage.\nImport this class from the email.charset\nmodule.\n- class email.charset.Charset(input_charset=DEFAULT_CHARSET)\u00b6\nMap character sets to their email properties.\nThis class provides information about the requirements imposed on email for a specific character set. It also provides convenience routines for converting between character sets, given the availability of the applicable codecs. Given a character set, it will do its best to provide information on how to use that character set in an email message in an RFC-compliant way.\nCertain character sets must be encoded with quoted-printable or base64 when used in email headers or bodies. Certain character sets must be converted outright, and are not allowed in email.\nOptional input_charset is as described below; it is always coerced to lower case. After being alias normalized it is also used as a lookup into the registry of character sets to find out the header encoding, body encoding, and output conversion codec to be used for the character set. For example, if input_charset is\niso-8859-1\n, then headers and bodies will be encoded using quoted-printable and no output conversion codec is necessary. 
If input_charset iseuc-jp\n, then headers will be encoded with base64, bodies will not be encoded, but output text will be converted from theeuc-jp\ncharacter set to theiso-2022-jp\ncharacter set.Charset\ninstances have the following data attributes:- input_charset\u00b6\nThe initial character set specified. Common aliases are converted to their official email names (e.g.\nlatin_1\nis converted toiso-8859-1\n). Defaults to 7-bitus-ascii\n.\n- header_encoding\u00b6\nIf the character set must be encoded before it can be used in an email header, this attribute will be set to\ncharset.QP\n(for quoted-printable),charset.BASE64\n(for base64 encoding), orcharset.SHORTEST\nfor the shortest of QP or BASE64 encoding. Otherwise, it will beNone\n.\n- body_encoding\u00b6\nSame as header_encoding, but describes the encoding for the mail message\u2019s body, which indeed may be different than the header encoding.\ncharset.SHORTEST\nis not allowed for body_encoding.\n- output_charset\u00b6\nSome character sets must be converted before they can be used in email headers or bodies. If the input_charset is one of them, this attribute will contain the name of the character set output will be converted to. Otherwise, it will be\nNone\n.\n- input_codec\u00b6\nThe name of the Python codec used to convert the input_charset to Unicode. If no conversion codec is necessary, this attribute will be\nNone\n.\n- output_codec\u00b6\nThe name of the Python codec used to convert Unicode to the output_charset. If no conversion codec is necessary, this attribute will have the same value as the input_codec.\nCharset\ninstances also have the following methods:- get_body_encoding()\u00b6\nReturn the content transfer encoding used for body encoding.\nThis is either the string\nquoted-printable\norbase64\ndepending on the encoding used, or it is a function, in which case you should call the function with a single argument, the Message object being encoded. 
The function should then set the Content-Transfer-Encoding header itself to whatever is appropriate.Returns the string\nquoted-printable\nif body_encoding isQP\n, returns the stringbase64\nif body_encoding isBASE64\n, and returns the string7bit\notherwise.\n- get_output_charset()\u00b6\nReturn the output character set.\nThis is the output_charset attribute if that is not\nNone\n, otherwise it is input_charset.\n- header_encode(string)\u00b6\nHeader-encode the string string.\nThe type of encoding (base64 or quoted-printable) will be based on the header_encoding attribute.\n- header_encode_lines(string, maxlengths)\u00b6\nHeader-encode a string by converting it first to bytes.\nThis is similar to\nheader_encode()\nexcept that the string is fit into maximum line lengths as given by the argument maxlengths, which must be an iterator: each element returned from this iterator will provide the next maximum line length.\n- body_encode(string)\u00b6\nBody-encode the string string.\nThe type of encoding (base64 or quoted-printable) will be based on the body_encoding attribute.\nThe\nCharset\nclass also provides a number of methods to support standard operations and built-in functions.- __str__()\u00b6\nReturns input_charset as a string coerced to lower case.\n__repr__()\nis an alias for__str__()\n.\nThe email.charset\nmodule also provides the following functions for adding\nnew entries to the global character set, alias, and codec registries:\n- email.charset.add_charset(charset, header_enc=None, body_enc=None, output_charset=None)\u00b6\nAdd character properties to the global registry.\ncharset is the input character set, and must be the canonical name of a character set.\nOptional header_enc and body_enc is either\ncharset.QP\nfor quoted-printable,charset.BASE64\nfor base64 encoding,charset.SHORTEST\nfor the shortest of quoted-printable or base64 encoding, orNone\nfor no encoding.SHORTEST\nis only valid for header_enc. 
The default isNone\nfor no encoding.Optional output_charset is the character set that the output should be in. Conversions will proceed from input charset, to Unicode, to the output charset when the method\nCharset.convert()\nis called. The default is to output in the same character set as the input.Both input_charset and output_charset must have Unicode codec entries in the module\u2019s character set-to-codec mapping; use\nadd_codec()\nto add codecs the module does not know about. See thecodecs\nmodule\u2019s documentation for more information.The global character set registry is kept in the module global dictionary\nCHARSETS\n.\n- email.charset.add_alias(alias, canonical)\u00b6\nAdd a character set alias. alias is the alias name, e.g.\nlatin-1\n. canonical is the character set\u2019s canonical name, e.g.iso-8859-1\n.The global charset alias registry is kept in the module global dictionary\nALIASES\n.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1586} +{"url": "https://docs.python.org/3/library/email.iterators.html", "title": ": Iterators", "content": "email.iterators\n: Iterators\u00b6\nSource code: Lib/email/iterators.py\nIterating over a message object tree is fairly easy with the\nMessage.walk\nmethod. The\nemail.iterators\nmodule provides some useful higher level iterations over\nmessage object trees.\n- email.iterators.body_line_iterator(msg, decode=False)\u00b6\nThis iterates over all the payloads in all the subparts of msg, returning the string payloads line-by-line. It skips over all the subpart headers, and it skips over any subpart with a payload that isn\u2019t a Python string. 
This is somewhat equivalent to reading the flat text representation of the message from a file using\nreadline()\n, skipping over all the intervening headers.Optional decode is passed through to\nMessage.get_payload\n.\n- email.iterators.typed_subpart_iterator(msg, maintype='text', subtype=None)\u00b6\nThis iterates over all the subparts of msg, returning only those subparts that match the MIME type specified by maintype and subtype.\nNote that subtype is optional; if omitted, then subpart MIME type matching is done only with the main type. maintype is optional too; it defaults to text.\nThus, by default\ntyped_subpart_iterator()\nreturns each subpart that has a MIME type of text/*.\nThe following function has been added as a useful debugging tool. It should not be considered part of the supported public interface for the package.\n- email.iterators._structure(msg, fp=None, level=0, include_default=False)\u00b6\nPrints an indented representation of the content types of the message object structure. For example:\n>>> msg = email.message_from_file(somefile) >>> _structure(msg) multipart/mixed text/plain text/plain multipart/digest message/rfc822 text/plain message/rfc822 text/plain message/rfc822 text/plain message/rfc822 text/plain message/rfc822 text/plain text/plain\nOptional fp is a file-like object to print the output to. It must be suitable for Python\u2019s\nprint()\nfunction. level is used internally. include_default, if true, prints the default type as well.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 495} +{"url": "https://docs.python.org/3/library/email.encoders.html", "title": ": Encoders", "content": "email.encoders\n: Encoders\u00b6\nSource code: Lib/email/encoders.py\nThis module is part of the legacy (Compat32\n) email API. In the\nnew API the functionality is provided by the cte parameter of\nthe set_content()\nmethod.\nThis module is deprecated in Python 3. 
The functions provided here\nshould not be called explicitly since the MIMEText\nclass sets the content type and CTE header using the _subtype and _charset\nvalues passed during the instantiation of that class.\nThe remaining text in this section is the original documentation of the module.\nWhen creating Message\nobjects from scratch, you often\nneed to encode the payloads for transport through compliant mail servers. This\nis especially true for image/* and text/* type messages\ncontaining binary data.\nThe email\npackage provides some convenient encoders in its\nencoders\nmodule. These encoders are actually used by the\nMIMEAudio\nand MIMEImage\nclass constructors to provide default encodings. All encoder functions take\nexactly one argument, the message object to encode. They usually extract the\npayload, encode it, and reset the payload to this newly encoded value. They\nshould also set the Content-Transfer-Encoding header as appropriate.\nNote that these functions are not meaningful for a multipart message. They\nmust be applied to individual subparts instead, and will raise a\nTypeError\nif passed a message whose type is multipart.\nHere are the encoding functions provided:\n- email.encoders.encode_quopri(msg)\u00b6\nEncodes the payload into quoted-printable form and sets the Content-Transfer-Encoding header to\nquoted-printable\n[1]. This is a good encoding to use when most of your payload is normal printable data, but contains a few unprintable characters.\n- email.encoders.encode_base64(msg)\u00b6\nEncodes the payload into base64 form and sets the Content-Transfer-Encoding header to\nbase64\n. This is a good encoding to use when most of your payload is unprintable data since it is a more compact form than quoted-printable. 
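As a hedged sketch of applying these encoders by hand, here a legacy part is built directly from MIMEBase (the payload bytes are arbitrary sample data; normally the MIME class constructors call the encoders for you):

```python
from email import encoders
from email.mime.base import MIMEBase

# A legacy (Compat32) part with a raw binary payload.
part = MIMEBase("application", "octet-stream")
part.set_payload(b"\x00\x01\xffbinary data")
encoders.encode_base64(part)
print(part["Content-Transfer-Encoding"])   # base64

# For plain ASCII text, encode_7or8bit leaves the payload alone
# and only labels it.
txt = MIMEBase("text", "plain")
txt.set_payload("hello")
encoders.encode_7or8bit(txt)
print(txt["Content-Transfer-Encoding"])    # 7bit
```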
The drawback of base64 encoding is that it renders the text non-human readable.\n- email.encoders.encode_7or8bit(msg)\u00b6\nThis doesn\u2019t actually modify the message\u2019s payload, but it does set the Content-Transfer-Encoding header to either\n7bit\nor8bit\nas appropriate, based on the payload data.\n- email.encoders.encode_noop(msg)\u00b6\nThis does nothing; it doesn\u2019t even set the Content-Transfer-Encoding header.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 596} +{"url": "https://docs.python.org/3/extending/embedding.html", "title": "Embedding Python in Another Application", "content": "1. Embedding Python in Another Application\u00b6\nThe previous chapters discussed how to extend Python, that is, how to extend the functionality of Python by attaching a library of C functions to it. It is also possible to do it the other way around: enrich your C/C++ application by embedding Python in it. Embedding provides your application with the ability to implement some of the functionality of your application in Python rather than C or C++. This can be used for many purposes; one example would be to allow users to tailor the application to their needs by writing some scripts in Python. You can also use it yourself if some of the functionality can be written in Python more easily.\nEmbedding Python is similar to extending it, but not quite. The difference is that when you extend Python, the main program of the application is still the Python interpreter, while if you embed Python, the main program may have nothing to do with Python \u2014 instead, some parts of the application occasionally call the Python interpreter to run some Python code.\nSo if you are embedding Python, you are providing your own main program. One of\nthe things this main program has to do is initialize the Python interpreter. At\nthe very least, you have to call the function Py_Initialize()\n. 
There are\noptional calls to pass command line arguments to Python. Then later you can\ncall the interpreter from any part of the application.\nThere are several different ways to call the interpreter: you can pass a string\ncontaining Python statements to PyRun_SimpleString()\n, or you can pass a\nstdio file pointer and a file name (for identification in error messages only)\nto PyRun_SimpleFile()\n. You can also call the lower-level operations\ndescribed in the previous chapters to construct and use Python objects.\nSee also\n- Python/C API Reference Manual\nThe details of Python\u2019s C interface are given in this manual. A great deal of necessary information can be found here.\n1.1. Very High Level Embedding\u00b6\nThe simplest form of embedding Python is the use of the very high level interface. This interface is intended to execute a Python script without needing to interact with the application directly. This can for example be used to perform some operation on a file.\n#define PY_SSIZE_T_CLEAN\n#include \nint\nmain(int argc, char *argv[])\n{\nPyStatus status;\nPyConfig config;\nPyConfig_InitPythonConfig(&config);\n/* optional but recommended */\nstatus = PyConfig_SetBytesString(&config, &config.program_name, argv[0]);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nstatus = Py_InitializeFromConfig(&config);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nPyConfig_Clear(&config);\nPyRun_SimpleString(\"from time import time,ctime\\n\"\n\"print('Today is', ctime(time()))\\n\");\nif (Py_FinalizeEx() < 0) {\nexit(120);\n}\nreturn 0;\nexception:\nPyConfig_Clear(&config);\nPy_ExitStatusException(status);\n}\nNote\n#define PY_SSIZE_T_CLEAN\nwas used to indicate that Py_ssize_t\nshould be\nused in some APIs instead of int\n.\nIt is not necessary since Python 3.13, but we keep it here for backward compatibility.\nSee Strings and buffers for a description of this macro.\nSetting PyConfig.program_name\nshould be called 
before\nPy_InitializeFromConfig()\nto inform the interpreter about paths to Python run-time\nlibraries. Next, the Python interpreter is initialized with\nPy_Initialize()\n, followed by the execution of a hard-coded Python script\nthat prints the date and time. Afterwards, the Py_FinalizeEx()\ncall shuts\nthe interpreter down, followed by the end of the program. In a real program,\nyou may want to get the Python script from another source, perhaps a text-editor\nroutine, a file, or a database. Getting the Python code from a file can better\nbe done by using the PyRun_SimpleFile()\nfunction, which saves you the\ntrouble of allocating memory space and loading the file contents.\n1.2. Beyond Very High Level Embedding: An overview\u00b6\nThe high level interface gives you the ability to execute arbitrary pieces of Python code from your application, but exchanging data values is quite cumbersome to say the least. If you want that, you should use lower level calls. At the cost of having to write more C code, you can achieve almost anything.\nIt should be noted that extending Python and embedding Python is quite the same activity, despite the different intent. Most topics discussed in the previous chapters are still valid. To show this, consider what the extension code from Python to C really does:\nConvert data values from Python to C,\nPerform a function call to a C routine using the converted values, and\nConvert the data values from the call from C to Python.\nWhen embedding Python, the interface code does:\nConvert data values from C to Python,\nPerform a function call to a Python interface routine using the converted values, and\nConvert the data values from the call from Python to C.\nAs you can see, the data conversion steps are simply swapped to accommodate the different direction of the cross-language transfer. The only difference is the routine that you call between both data conversions. 
When extending, you call a C routine, when embedding, you call a Python routine.\nThis chapter will not discuss how to convert data from Python to C and vice versa. Also, proper use of references and dealing with errors is assumed to be understood. Since these aspects do not differ from extending the interpreter, you can refer to earlier chapters for the required information.\n1.3. Pure Embedding\u00b6\nThe first program aims to execute a function in a Python script. Like in the section about the very high level interface, the Python interpreter does not directly interact with the application (but that will change in the next section).\nThe code to run a function defined in a Python script is:\n#define PY_SSIZE_T_CLEAN\n#include \nint\nmain(int argc, char *argv[])\n{\nPyObject *pName, *pModule, *pFunc;\nPyObject *pArgs, *pValue;\nint i;\nif (argc < 3) {\nfprintf(stderr,\"Usage: call pythonfile funcname [args]\\n\");\nreturn 1;\n}\nPy_Initialize();\npName = PyUnicode_DecodeFSDefault(argv[1]);\n/* Error checking of pName left out */\npModule = PyImport_Import(pName);\nPy_DECREF(pName);\nif (pModule != NULL) {\npFunc = PyObject_GetAttrString(pModule, argv[2]);\n/* pFunc is a new reference */\nif (pFunc && PyCallable_Check(pFunc)) {\npArgs = PyTuple_New(argc - 3);\nfor (i = 0; i < argc - 3; ++i) {\npValue = PyLong_FromLong(atoi(argv[i + 3]));\nif (!pValue) {\nPy_DECREF(pArgs);\nPy_DECREF(pModule);\nfprintf(stderr, \"Cannot convert argument\\n\");\nreturn 1;\n}\n/* pValue reference stolen here: */\nPyTuple_SetItem(pArgs, i, pValue);\n}\npValue = PyObject_CallObject(pFunc, pArgs);\nPy_DECREF(pArgs);\nif (pValue != NULL) {\nprintf(\"Result of call: %ld\\n\", PyLong_AsLong(pValue));\nPy_DECREF(pValue);\n}\nelse {\nPy_DECREF(pFunc);\nPy_DECREF(pModule);\nPyErr_Print();\nfprintf(stderr,\"Call failed\\n\");\nreturn 1;\n}\n}\nelse {\nif (PyErr_Occurred())\nPyErr_Print();\nfprintf(stderr, \"Cannot find function \\\"%s\\\"\\n\", 
argv[2]);\n}\nPy_XDECREF(pFunc);\nPy_DECREF(pModule);\n}\nelse {\nPyErr_Print();\nfprintf(stderr, \"Failed to load \\\"%s\\\"\\n\", argv[1]);\nreturn 1;\n}\nif (Py_FinalizeEx() < 0) {\nreturn 120;\n}\nreturn 0;\n}\nThis code loads a Python script using argv[1]\n, and calls the function named\nin argv[2]\n. Its integer arguments are the other values of the argv\narray. If you compile and link this program (let\u2019s call\nthe finished executable call), and use it to execute a Python\nscript, such as:\ndef multiply(a,b):\nprint(\"Will compute\", a, \"times\", b)\nc = 0\nfor i in range(0, a):\nc = c + b\nreturn c\nthen the result should be:\n$ call multiply multiply 3 2\nWill compute 3 times 2\nResult of call: 6\nAlthough the program is quite large for its functionality, most of the code is for data conversion between Python and C, and for error reporting. The interesting part with respect to embedding Python starts with\nPy_Initialize();\npName = PyUnicode_DecodeFSDefault(argv[1]);\n/* Error checking of pName left out */\npModule = PyImport_Import(pName);\nAfter initializing the interpreter, the script is loaded using\nPyImport_Import()\n. This routine needs a Python string as its argument,\nwhich is constructed using the PyUnicode_DecodeFSDefault()\ndata\nconversion routine.\npFunc = PyObject_GetAttrString(pModule, argv[2]);\n/* pFunc is a new reference */\nif (pFunc && PyCallable_Check(pFunc)) {\n...\n}\nPy_XDECREF(pFunc);\nOnce the script is loaded, the name we\u2019re looking for is retrieved using\nPyObject_GetAttrString()\n. If the name exists, and the object returned is\ncallable, you can safely assume that it is a function. The program then\nproceeds by constructing a tuple of arguments as normal. The call to the Python\nfunction is then made with:\npValue = PyObject_CallObject(pFunc, pArgs);\nUpon return of the function, pValue\nis either NULL\nor it contains a\nreference to the return value of the function. 
Be sure to release the reference\nafter examining the value.\n1.4. Extending Embedded Python\u00b6\nUntil now, the embedded Python interpreter had no access to functionality from the application itself. The Python API allows this by extending the embedded interpreter. That is, the embedded interpreter gets extended with routines provided by the application. While it sounds complex, it is not so bad. Simply forget for a while that the application starts the Python interpreter. Instead, consider the application to be a set of subroutines, and write some glue code that gives Python access to those routines, just like you would write a normal Python extension. For example:\nstatic int numargs=0;\n/* Return the number of arguments of the application command line */\nstatic PyObject*\nemb_numargs(PyObject *self, PyObject *args)\n{\nif(!PyArg_ParseTuple(args, \":numargs\"))\nreturn NULL;\nreturn PyLong_FromLong(numargs);\n}\nstatic PyMethodDef emb_module_methods[] = {\n{\"numargs\", emb_numargs, METH_VARARGS,\n\"Return the number of arguments received by the process.\"},\n{NULL, NULL, 0, NULL}\n};\nstatic struct PyModuleDef emb_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"emb\",\n.m_size = 0,\n.m_methods = emb_module_methods,\n};\nstatic PyObject*\nPyInit_emb(void)\n{\nreturn PyModuleDef_Init(&emb_module);\n}\nInsert the above code just above the main()\nfunction. Also, insert the\nfollowing two statements before the call to Py_Initialize()\n:\nnumargs = argc;\nPyImport_AppendInittab(\"emb\", &PyInit_emb);\nThese two lines initialize the numargs\nvariable, and make the\nemb.numargs()\nfunction accessible to the embedded Python interpreter.\nWith these extensions, the Python script can do things like\nimport emb\nprint(\"Number of arguments\", emb.numargs())\nIn a real application, the methods will expose an API of the application to Python.\n1.5. 
Embedding Python in C++\u00b6\nIt is also possible to embed Python in a C++ program; precisely how this is done will depend on the details of the C++ system used; in general you will need to write the main program in C++, and use the C++ compiler to compile and link your program. There is no need to recompile Python itself using C++.\n1.6. Compiling and Linking under Unix-like systems\u00b6\nIt is not necessarily trivial to find the right flags to pass to your\ncompiler (and linker) in order to embed the Python interpreter into your\napplication, particularly because Python needs to load library modules\nimplemented as C dynamic extensions (.so\nfiles) linked against\nit.\nTo find out the required compiler and linker flags, you can execute the\npythonX.Y-config\nscript which is generated as part of the\ninstallation process (a python3-config\nscript may also be\navailable). This script has several options, of which the following will\nbe directly useful to you:\npythonX.Y-config --cflags\nwill give you the recommended flags when compiling:$ /opt/bin/python3.11-config --cflags -I/opt/include/python3.11 -I/opt/include/python3.11 -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall\npythonX.Y-config --ldflags --embed\nwill give you the recommended flags when linking:$ /opt/bin/python3.11-config --ldflags --embed -L/opt/lib/python3.11/config-3.11-x86_64-linux-gnu -L/opt/lib -lpython3.11 -lpthread -ldl -lutil -lm\nNote\nTo avoid confusion between several Python installations (and especially\nbetween the system Python and your own compiled Python), it is recommended\nthat you use the absolute path to pythonX.Y-config\n, as in the above\nexample.\nIf this procedure doesn\u2019t work for you (it is not guaranteed to work for\nall Unix-like platforms; however, we welcome bug reports)\nyou will have to read your system\u2019s documentation about dynamic linking and/or\nexamine Python\u2019s Makefile\n(use sysconfig.get_makefile_filename()\nto find its location) and 
compilation\noptions. In this case, the sysconfig\nmodule is a useful tool to\nprogrammatically extract the configuration values that you will want to\ncombine together. For example:\n>>> import sysconfig\n>>> sysconfig.get_config_var('LIBS')\n'-lpthread -ldl -lutil'\n>>> sysconfig.get_config_var('LINKFORSHARED')\n'-Xlinker -export-dynamic'", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3228} +{"url": "https://docs.python.org/3/library/email.message.html", "title": ": Representing an email message", "content": "email.message\n: Representing an email message\u00b6\nSource code: Lib/email/message.py\nAdded in version 3.6: [1]\nThe central class in the email\npackage is the EmailMessage\nclass, imported from the email.message\nmodule. It is the base class for\nthe email\nobject model. EmailMessage\nprovides the core\nfunctionality for setting and querying header fields, for accessing message\nbodies, and for creating or modifying structured messages.\nAn email message consists of headers and a payload (which is also referred to as the content). Headers are RFC 5322 or RFC 6532 style field names and values, where the field name and value are separated by a colon. The colon is not part of either the field name or the field value. The payload may be a simple text message, or a binary object, or a structured sequence of sub-messages each with their own set of headers and their own payload. The latter type of payload is indicated by the message having a MIME type such as multipart/* or message/rfc822.\nThe conceptual model provided by an EmailMessage\nobject is that of an\nordered dictionary of headers coupled with a payload that represents the\nRFC 5322 body of the message, which might be a list of sub-EmailMessage\nobjects. 
In addition to the normal dictionary methods for accessing the header\nnames and values, there are methods for accessing specialized information from\nthe headers (for example the MIME content type), for operating on the payload,\nfor generating a serialized version of the message, and for recursively walking\nover the object tree.\nThe EmailMessage\ndictionary-like interface is indexed by the header\nnames, which must be ASCII values. The values of the dictionary are strings\nwith some extra methods. Headers are stored and returned in case-preserving\nform, but field names are matched case-insensitively. The keys are ordered,\nbut unlike a real dict, there can be duplicates. Additional methods are\nprovided for working with headers that have duplicate keys.\nThe payload is either a string or bytes object, in the case of simple message\nobjects, or a list of EmailMessage\nobjects, for MIME container\ndocuments such as multipart/* and message/rfc822\nmessage objects.\n- class email.message.EmailMessage(policy=default)\u00b6\nIf policy is specified use the rules it specifies to update and serialize the representation of the message. If policy is not set, use the\ndefault\npolicy, which follows the rules of the email RFCs except for line endings (instead of the RFC mandated\\r\\n\n, it uses the Python standard\\n\nline endings). For more information see thepolicy\ndocumentation. [2]- as_string(unixfrom=False, maxheaderlen=None, policy=None)\u00b6\nReturn the entire message flattened as a string. When optional unixfrom is true, the envelope header is included in the returned string. unixfrom defaults to\nFalse\n. For backward compatibility with the baseMessage\nclass maxheaderlen is accepted, but defaults toNone\n, which means that by default the line length is controlled by themax_line_length\nof the policy. The policy argument may be used to override the default policy obtained from the message instance. 
This can be used to control some of the formatting produced by the method, since the specified policy will be passed to theGenerator\n.Flattening the message may trigger changes to the\nEmailMessage\nif defaults need to be filled in to complete the transformation to a string (for example, MIME boundaries may be generated or modified).Note that this method is provided as a convenience and may not be the most useful way to serialize messages in your application, especially if you are dealing with multiple messages. See\nemail.generator.Generator\nfor a more flexible API for serializing messages. Note also that this method is restricted to producing messages serialized as \u201c7 bit clean\u201d whenutf8\nisFalse\n, which is the default.Changed in version 3.6: the default behavior when maxheaderlen is not specified was changed from defaulting to 0 to defaulting to the value of max_line_length from the policy.\n- __str__()\u00b6\nEquivalent to\nas_string(policy=self.policy.clone(utf8=True))\n. Allowsstr(msg)\nto produce a string containing the serialized message in a readable format.Changed in version 3.4: the method was changed to use\nutf8=True\n, thus producing an RFC 6531-like message representation, instead of being a direct alias foras_string()\n.\n- as_bytes(unixfrom=False, policy=None)\u00b6\nReturn the entire message flattened as a bytes object. When optional unixfrom is true, the envelope header is included in the returned string. unixfrom defaults to\nFalse\n. The policy argument may be used to override the default policy obtained from the message instance. 
This can be used to control some of the formatting produced by the method, since the specified policy will be passed to the BytesGenerator.\nFlattening the message may trigger changes to the EmailMessage if defaults need to be filled in to complete the transformation to a string (for example, MIME boundaries may be generated or modified).\nNote that this method is provided as a convenience and may not be the most useful way to serialize messages in your application, especially if you are dealing with multiple messages. See email.generator.BytesGenerator for a more flexible API for serializing messages.\n- __bytes__()\u00b6\nEquivalent to as_bytes(). Allows bytes(msg) to produce a bytes object containing the serialized message.\n- is_multipart()\u00b6\nReturn True if the message\u2019s payload is a list of sub-EmailMessage objects, otherwise return False. When is_multipart() returns False, the payload should be a string object (which might be a CTE encoded binary payload). Note that is_multipart() returning True does not necessarily mean that \u201cmsg.get_content_maintype() == \u2018multipart\u2019\u201d will return True. For example, is_multipart will return True when the EmailMessage is of type message/rfc822.\n- set_unixfrom(unixfrom)\u00b6\nSet the message\u2019s envelope header to unixfrom, which should be a string. (See mboxMessage for a brief description of this header.)\n- get_unixfrom()\u00b6\nReturn the message\u2019s envelope header. Defaults to None if the envelope header was never set.\nThe following methods implement the mapping-like interface for accessing the message\u2019s headers. Note that there are some semantic differences between these methods and a normal mapping (i.e. dictionary) interface. For example, in a dictionary there are no duplicate keys, but here there may be duplicate message headers.
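A minimal sketch of building and flattening a simple message with the methods described above (the addresses are placeholders, not part of the documented API):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"        # placeholder addresses
msg["To"] = "recipient@example.com"
msg["Subject"] = "hello"
msg.set_content("body text\n")

print(msg.is_multipart())                 # False: a single text/plain part
text = msg.as_string()                    # flattened with \n line endings
data = bytes(msg)                         # same as msg.as_bytes()
print("Subject: hello" in text)           # True
```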
Also, in dictionaries there is no guaranteed order to the keys returned by keys(), but in an EmailMessage object, headers are always returned in the order they appeared in the original message, or in which they were added to the message later. Any header deleted and then re-added is always appended to the end of the header list.\nThese semantic differences are intentional and are biased toward convenience in the most common use cases.\nNote that in all cases, any envelope header present in the message is not included in the mapping interface.\n- __len__()\u00b6\nReturn the total number of headers, including duplicates.\n- __contains__(name)\u00b6\nReturn True if the message object has a field named name. Matching is done without regard to case and name does not include the trailing colon. Used for the in operator. For example:\nif 'message-id' in myMessage: print('Message-ID:', myMessage['message-id'])\n- __getitem__(name)\u00b6\nReturn the value of the named header field. name does not include the colon field separator. If the header is missing, None is returned; a KeyError is never raised.\nNote that if the named field appears more than once in the message\u2019s headers, exactly which of those field values will be returned is undefined. Use the get_all() method to get the values of all the extant headers named name.\nUsing the standard (non-compat32) policies, the returned value is an instance of a subclass of email.headerregistry.BaseHeader.\n- __setitem__(name, val)\u00b6\nAdd a header to the message with field name name and value val. The field is appended to the end of the message\u2019s existing headers.\nNote that this does not overwrite or delete any existing header with the same name.
If you want to ensure that the new header is the only one present in the message with field name name, delete the field first, e.g.:\ndel msg['subject'] msg['subject'] = 'Python roolz!'\nIf the\npolicy\ndefines certain headers to be unique (as the standard policies do), this method may raise aValueError\nwhen an attempt is made to assign a value to such a header when one already exists. This behavior is intentional for consistency\u2019s sake, but do not depend on it as we may choose to make such assignments do an automatic deletion of the existing header in the future.\n- __delitem__(name)\u00b6\nDelete all occurrences of the field with name name from the message\u2019s headers. No exception is raised if the named field isn\u2019t present in the headers.\n- keys()\u00b6\nReturn a list of all the message\u2019s header field names.\n- values()\u00b6\nReturn a list of all the message\u2019s field values.\n- items()\u00b6\nReturn a list of 2-tuples containing all the message\u2019s field headers and values.\n- get(name, failobj=None)\u00b6\nReturn the value of the named header field. This is identical to\n__getitem__()\nexcept that optional failobj is returned if the named header is missing (failobj defaults toNone\n).\nHere are some additional useful header related methods:\n- get_all(name, failobj=None)\u00b6\nReturn a list of all the values for the field named name. If there are no such named headers in the message, failobj is returned (defaults to\nNone\n).\n- add_header(_name, _value, **_params)\u00b6\nExtended header setting. This method is similar to\n__setitem__()\nexcept that additional header parameters can be provided as keyword arguments. _name is the header field to add and _value is the primary value for the header.For each item in the keyword argument dictionary _params, the key is taken as the parameter name, with underscores converted to dashes (since dashes are illegal in Python identifiers). 
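For instance, a parameterized header added this way can be read back with the get_filename() accessor described later in this section (the attachment content and the bud.gif name mirror the docs\u2019 own example):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg.set_content("report data")
msg.add_header("Content-Disposition", "attachment", filename="bud.gif")

print(msg.get_content_disposition())   # attachment
print(msg.get_filename())              # bud.gif
```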
Normally, the parameter will be added as\nkey=\"value\"\nunless the value isNone\n, in which case only the key will be added.If the value contains non-ASCII characters, the charset and language may be explicitly controlled by specifying the value as a three tuple in the format\n(CHARSET, LANGUAGE, VALUE)\n, whereCHARSET\nis a string naming the charset to be used to encode the value,LANGUAGE\ncan usually be set toNone\nor the empty string (see RFC 2231 for other possibilities), andVALUE\nis the string value containing non-ASCII code points. If a three tuple is not passed and the value contains non-ASCII characters, it is automatically encoded in RFC 2231 format using aCHARSET\nofutf-8\nand aLANGUAGE\nofNone\n.Here is an example:\nmsg.add_header('Content-Disposition', 'attachment', filename='bud.gif')\nThis will add a header that looks like\nContent-Disposition: attachment; filename=\"bud.gif\"\nAn example of the extended interface with non-ASCII characters:\nmsg.add_header('Content-Disposition', 'attachment', filename=('iso-8859-1', '', 'Fu\u00dfballer.ppt'))\n- replace_header(_name, _value)\u00b6\nReplace a header. Replace the first header found in the message that matches _name, retaining header order and field name case of the original header. If no matching header is found, raise a\nKeyError\n.\n- get_content_type()\u00b6\nReturn the message\u2019s content type, coerced to lower case of the form maintype/subtype. If there is no Content-Type header in the message return the value returned by\nget_default_type()\n. If the Content-Type header is invalid, returntext/plain\n.(According to RFC 2045, messages always have a default type,\nget_content_type()\nwill always return a value. RFC 2045 defines a message\u2019s default type to be text/plain unless it appears inside a multipart/digest container, in which case it would be message/rfc822. 
If the Content-Type header has an invalid type specification, RFC 2045 mandates that the default type be text/plain.)\n- get_content_maintype()\u00b6\nReturn the message\u2019s main content type. This is the maintype part of the string returned by\nget_content_type()\n.\n- get_content_subtype()\u00b6\nReturn the message\u2019s sub-content type. This is the subtype part of the string returned by\nget_content_type()\n.\n- get_default_type()\u00b6\nReturn the default content type. Most messages have a default content type of text/plain, except for messages that are subparts of multipart/digest containers. Such subparts have a default content type of message/rfc822.\n- set_default_type(ctype)\u00b6\nSet the default content type. ctype should either be text/plain or message/rfc822, although this is not enforced. The default content type is not stored in the Content-Type header, so it only affects the return value of the\nget_content_type\nmethods when no Content-Type header is present in the message.\n- set_param(param, value, header='Content-Type', requote=True, charset=None, language='', replace=False)\u00b6\nSet a parameter in the Content-Type header. If the parameter already exists in the header, replace its value with value. When header is\nContent-Type\n(the default) and the header does not yet exist in the message, add it, set its value to text/plain, and append the new parameter value. Optional header specifies an alternative header to Content-Type.If the value contains non-ASCII characters, the charset and language may be explicitly specified using the optional charset and language parameters. Optional language specifies the RFC 2231 language, defaulting to the empty string. Both charset and language should be strings. The default is to use the\nutf8\ncharset andNone\nfor the language.If replace is\nFalse\n(the default) the header is moved to the end of the list of headers. 
If replace isTrue\n, the header will be updated in place.Use of the requote parameter with\nEmailMessage\nobjects is deprecated.Note that existing parameter values of headers may be accessed through the\nparams\nattribute of the header value (for example,msg['Content-Type'].params['charset']\n).Changed in version 3.4:\nreplace\nkeyword was added.\n- del_param(param, header='content-type', requote=True)\u00b6\nRemove the given parameter completely from the Content-Type header. The header will be re-written in place without the parameter or its value. Optional header specifies an alternative to Content-Type.\nUse of the requote parameter with\nEmailMessage\nobjects is deprecated.\n- get_filename(failobj=None)\u00b6\nReturn the value of the\nfilename\nparameter of the Content-Disposition header of the message. If the header does not have afilename\nparameter, this method falls back to looking for thename\nparameter on the Content-Type header. If neither is found, or the header is missing, then failobj is returned. The returned string will always be unquoted as peremail.utils.unquote()\n.\n- get_boundary(failobj=None)\u00b6\nReturn the value of the\nboundary\nparameter of the Content-Type header of the message, or failobj if either the header is missing, or has noboundary\nparameter. The returned string will always be unquoted as peremail.utils.unquote()\n.\n- set_boundary(boundary)\u00b6\nSet the\nboundary\nparameter of the Content-Type header to boundary.set_boundary()\nwill always quote boundary if necessary. AHeaderParseError\nis raised if the message object has no Content-Type header.Note that using this method is subtly different from deleting the old Content-Type header and adding a new one with the new boundary via\nadd_header()\n, becauseset_boundary()\npreserves the order of the Content-Type header in the list of headers.\n- get_content_charset(failobj=None)\u00b6\nReturn the\ncharset\nparameter of the Content-Type header, coerced to lower case. 
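The Content-Type accessors can be sketched together on a small message (the non-ASCII body text is arbitrary sample data):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg.set_content("h\u00e9llo", charset="utf-8")

print(msg.get_content_type())         # text/plain
print(msg.get_content_maintype())     # text
print(msg.get_content_subtype())      # plain
print(msg.get_content_charset())      # utf-8
print(msg.get_content_disposition())  # None: no Content-Disposition header
```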
If there is no Content-Type header, or if that header has no charset\nparameter, failobj is returned.\n- get_charsets(failobj=None)\u00b6\nReturn a list containing the character set names in the message. If the message is a multipart, then the list will contain one element for each subpart in the payload, otherwise, it will be a list of length 1.\nEach item in the list will be a string which is the value of the\ncharset\nparameter in the Content-Type header for the represented subpart. If the subpart has no Content-Type header, no charset\nparameter, or is not of the text main MIME type, then that item in the returned list will be failobj.\n- is_attachment()\u00b6\nReturn\nTrue\nif there is a Content-Disposition header and its (case insensitive) value is attachment\n, False\notherwise. Changed in version 3.4.2: is_attachment is now a method instead of a property, for consistency with\nis_multipart()\n.\n- get_content_disposition()\u00b6\nReturn the lowercased value (without parameters) of the message\u2019s Content-Disposition header if it has one, or\nNone\n. The possible values for this method are inline, attachment or None\nif the message follows RFC 2183. Added in version 3.5.\nThe following methods relate to interrogating and manipulating the content (payload) of the message.\n- walk()\u00b6\nThe\nwalk()\nmethod is an all-purpose generator which can be used to iterate over all the parts and subparts of a message object tree, in depth-first traversal order. You will typically use walk()\nas the iterator in a for\nloop; each iteration returns the next subpart. Here\u2019s an example that prints the MIME type of every part of a multipart message structure:\n>>> for part in msg.walk():\n... print(part.get_content_type())\nmultipart/report\ntext/plain\nmessage/delivery-status\ntext/plain\ntext/plain\nmessage/rfc822\ntext/plain\nwalk\niterates over the subparts of any part where is_multipart()\nreturns True\n, even though msg.get_content_maintype() == 'multipart'\nmay return False\n. 
We can see this in our example by making use of the _structure\ndebug helper function:\n>>> from email.iterators import _structure\n>>> for part in msg.walk():\n... print(part.get_content_maintype() == 'multipart',\n... part.is_multipart())\nTrue True\nFalse False\nFalse True\nFalse False\nFalse False\nFalse True\nFalse False\n>>> _structure(msg)\nmultipart/report\ntext/plain\nmessage/delivery-status\ntext/plain\ntext/plain\nmessage/rfc822\ntext/plain\nHere the\nmessage\nparts are not multiparts\n, but they do contain subparts. is_multipart()\nreturns True\nand walk\ndescends into the subparts.\n- get_body(preferencelist=('related', 'html', 'plain'))\u00b6\nReturn the MIME part that is the best candidate to be the \u201cbody\u201d of the message.\npreferencelist must be a sequence of strings from the set\nrelated\n, html\n, and plain\n, and indicates the order of preference for the content type of the part returned. Start looking for candidate matches with the object on which the\nget_body\nmethod is called. If\nrelated\nis not included in preferencelist, consider the root part (or subpart of the root part) of any related encountered as a candidate if the (sub-)part matches a preference. When encountering a\nmultipart/related\n, check the start\nparameter and if a part with a matching Content-ID is found, consider only it when looking for candidate matches. Otherwise consider only the first (default root) part of the multipart/related\n. If a part has a Content-Disposition header, only consider the part a candidate match if the value of the header is\ninline\n. If none of the candidates matches any of the preferences in preferencelist, return\nNone\n. Notes: (1) For most applications the only preferencelist combinations that really make sense are\n('plain',)\n, ('html', 'plain')\n, and the default ('related', 'html', 'plain')\n. 
(2) Because matching starts with the object on which get_body\nis called, calling get_body\non a multipart/related\nwill return the object itself unless preferencelist has a non-default value. (3) Messages (or message parts) that do not specify a Content-Type or whose Content-Type header is invalid will be treated as if they are of type text/plain\n, which may occasionally cause get_body\nto return unexpected results.\n- iter_attachments()\u00b6\nReturn an iterator over all of the immediate sub-parts of the message that are not candidate \u201cbody\u201d parts. That is, skip the first occurrence of each of\ntext/plain\n, text/html\n, multipart/related\n, or multipart/alternative\n(unless they are explicitly marked as attachments via Content-Disposition: attachment), and return all remaining parts. When applied directly to a multipart/related\n, return an iterator over all the related parts except the root part (ie: the part pointed to by the start\nparameter, or the first part if there is no start\nparameter or the start\nparameter doesn\u2019t match the Content-ID of any of the parts). When applied directly to a multipart/alternative\nor a non-multipart\n, return an empty iterator.\n- iter_parts()\u00b6\nReturn an iterator over all of the immediate sub-parts of the message, which will be empty for a non-\nmultipart\n. (See also walk()\n.)\n- get_content(*args, content_manager=None, **kw)\u00b6\nCall the\nget_content()\nmethod of the content_manager, passing self as the message object, and passing along any other arguments or keywords as additional arguments. If content_manager is not specified, use the content_manager\nspecified by the current policy\n.\n- set_content(*args, content_manager=None, **kw)\u00b6\nCall the\nset_content()\nmethod of the content_manager, passing self as the message object, and passing along any other arguments or keywords as additional arguments. 
If content_manager is not specified, use the content_manager\nspecified by the current policy\n.\n- make_related(boundary=None)\u00b6\nConvert a non-\nmultipart\nmessage into a multipart/related\nmessage, moving any existing Content- headers and payload into a (new) first part of the multipart\n. If boundary is specified, use it as the boundary string in the multipart, otherwise leave the boundary to be automatically created when it is needed (for example, when the message is serialized).\n- make_alternative(boundary=None)\u00b6\nConvert a non-\nmultipart\nor a multipart/related\ninto a multipart/alternative\n, moving any existing Content- headers and payload into a (new) first part of the multipart\n. If boundary is specified, use it as the boundary string in the multipart, otherwise leave the boundary to be automatically created when it is needed (for example, when the message is serialized).\n- make_mixed(boundary=None)\u00b6\nConvert a non-\nmultipart\n, a multipart/related\n, or a multipart/alternative\ninto a multipart/mixed\n, moving any existing Content- headers and payload into a (new) first part of the multipart\n. If boundary is specified, use it as the boundary string in the multipart, otherwise leave the boundary to be automatically created when it is needed (for example, when the message is serialized).\n- add_related(*args, content_manager=None, **kw)\u00b6\nIf the message is a\nmultipart/related\n, create a new message object, pass all of the arguments to its set_content()\nmethod, and attach()\nit to the multipart\n. If the message is a non-multipart\n, call make_related()\nand then proceed as above. If the message is any other type of multipart\n, raise a TypeError\n. If content_manager is not specified, use the content_manager\nspecified by the current policy\n. 
If the added part has no Content-Disposition header, add one with the value inline\n.\n- add_alternative(*args, content_manager=None, **kw)\u00b6\nIf the message is a\nmultipart/alternative\n, create a new message object, pass all of the arguments to its set_content()\nmethod, and attach()\nit to the multipart\n. If the message is a non-multipart\nor multipart/related\n, call make_alternative()\nand then proceed as above. If the message is any other type of multipart\n, raise a TypeError\n. If content_manager is not specified, use the content_manager\nspecified by the current policy\n.\n- add_attachment(*args, content_manager=None, **kw)\u00b6\nIf the message is a\nmultipart/mixed\n, create a new message object, pass all of the arguments to its set_content()\nmethod, and attach()\nit to the multipart\n. If the message is a non-multipart\n, multipart/related\n, or multipart/alternative\n, call make_mixed()\nand then proceed as above. If content_manager is not specified, use the content_manager\nspecified by the current policy\n. If the added part has no Content-Disposition header, add one with the value attachment\n. This method can be used both for explicit attachments (Content-Disposition: attachment) and inline\nattachments (Content-Disposition: inline), by passing appropriate options to the content_manager\n.\n- clear()\u00b6\nRemove the payload and all of the headers.\n- clear_content()\u00b6\nRemove the payload and all of the Content- headers, leaving all other headers intact and in their original order.\nEmailMessage\nobjects have the following instance attributes:\n- preamble\u00b6\nThe format of a MIME document allows for some text between the blank line following the headers, and the first multipart boundary string. Normally, this text is never visible in a MIME-aware mail reader because it falls outside the standard MIME armor. 
However, when viewing the raw text of the message, or when viewing the message in a non-MIME aware reader, this text can become visible.\nThe preamble attribute contains this leading extra-armor text for MIME documents. When the\nParser\ndiscovers some text after the headers but before the first boundary string, it assigns this text to the message\u2019s preamble attribute. When the Generator\nis writing out the plain text representation of a MIME message, and it finds the message has a preamble attribute, it will write this text in the area between the headers and the first boundary. See email.parser\nand email.generator\nfor details. Note that if the message object has no preamble, the preamble attribute will be\nNone\n.\n- epilogue\u00b6\nThe epilogue attribute acts the same way as the preamble attribute, except that it contains text that appears between the last boundary and the end of the message. As with the\npreamble\n, if there is no epilog text this attribute will be None\n.\n- defects\u00b6\nThe defects attribute contains a list of all the problems found when parsing this message. See\nemail.errors\nfor a detailed description of the possible parsing defects.\n- class email.message.MIMEPart(policy=default)\u00b6\nThis class represents a subpart of a MIME message. It is identical to\nEmailMessage\n, except that no MIME-Version headers are added when set_content()\nis called, since sub-parts do not need their own MIME-Version headers.\nFootnotes", "code_snippets": [" ", " ", " ", "\n ", " ", "\n", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", "\n ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 6487}
{"url": "https://docs.python.org/3/library/asyncio-queue.html", "title": "Queues", "content": "Queues\u00b6\nSource code: Lib/asyncio/queues.py\nasyncio queues are designed to be similar to classes of the\nqueue\nmodule. 
Although asyncio queues are not thread-safe,\nthey are designed to be used specifically in async/await code.\nNote that methods of asyncio queues don\u2019t have a timeout parameter;\nuse the asyncio.wait_for()\nfunction to do queue operations with a\ntimeout.\nSee also the Examples section below.\nQueue\u00b6\n- class asyncio.Queue(maxsize=0)\u00b6\nA first in, first out (FIFO) queue.\nIf maxsize is less than or equal to zero, the queue size is infinite. If it is an integer greater than\n0\n, then await put()\nblocks when the queue reaches maxsize until an item is removed by get()\n. Unlike the standard library threading\nqueue\n, the size of the queue is always known and can be returned by calling the qsize()\nmethod. Changed in version 3.10: Removed the loop parameter.\nThis class is not thread safe.\n- maxsize\u00b6\nNumber of items allowed in the queue.\n- empty()\u00b6\nReturn\nTrue\nif the queue is empty, False\notherwise.\n- full()\u00b6\nReturn\nTrue\nif there are maxsize\nitems in the queue. If the queue was initialized with\nmaxsize=0\n(the default), then full()\nnever returns True\n.\n- async get()\u00b6\nRemove and return an item from the queue. If queue is empty, wait until an item is available.\nRaises\nQueueShutDown\nif the queue has been shut down and is empty, or if the queue has been shut down immediately.\n- get_nowait()\u00b6\nReturn an item if one is immediately available, else raise\nQueueEmpty\n.\n- async join()\u00b6\nBlock until all items in the queue have been received and processed.\nThe count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer coroutine calls\ntask_done()\nto indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join()\nunblocks.\n- async put(item)\u00b6\nPut an item into the queue. 
If the queue is full, wait until a free slot is available before adding the item.\nRaises\nQueueShutDown\nif the queue has been shut down.\n- put_nowait(item)\u00b6\nPut an item into the queue without blocking.\nIf no free slot is immediately available, raise\nQueueFull\n.\n- qsize()\u00b6\nReturn the number of items in the queue.\n- shutdown(immediate=False)\u00b6\nPut a\nQueue\ninstance into a shutdown mode. The queue can no longer grow. Future calls to\nput()\nraise QueueShutDown\n. Currently blocked callers of put()\nwill be unblocked and will raise QueueShutDown\nin the formerly awaiting task. If immediate is false (the default), the queue can be wound down normally with\nget()\ncalls to extract tasks that have already been loaded. And if\ntask_done()\nis called for each remaining task, a pending join()\nwill be unblocked normally. Once the queue is empty, future calls to\nget()\nwill raise QueueShutDown\n. If immediate is true, the queue is terminated immediately. The queue is drained to be completely empty and the count of unfinished tasks is reduced by the number of tasks drained. If unfinished tasks is zero, callers of\njoin()\nare unblocked. Also, blocked callers of get()\nare unblocked and will raise QueueShutDown\nbecause the queue is empty. Use caution when using\njoin()\nwith immediate set to true. This unblocks the join even when no work has been done on the tasks, violating the usual invariant for joining a queue. Added in version 3.13.\n- task_done()\u00b6\nIndicate that a formerly enqueued work item is complete.\nUsed by queue consumers. 
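The task_done()/join() handshake described here can be sketched in a few lines (a minimal illustration, not taken from the page itself):

```python
import asyncio

async def main():
    queue = asyncio.Queue()
    for n in (1, 2, 3):
        queue.put_nowait(n)

    processed = []

    async def consumer():
        while True:
            item = await queue.get()
            processed.append(item)   # stand-in for real work on the item
            queue.task_done()        # exactly one task_done() per get()

    task = asyncio.create_task(consumer())
    await queue.join()               # returns once every item is marked done
    task.cancel()
    return processed

print(asyncio.run(main()))  # [1, 2, 3]
```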
For each\nget()\nused to fetch a work item, a subsequent call to task_done()\ntells the queue that the processing on the work item is complete. If a\njoin()\nis currently blocking, it will resume when all items have been processed (meaning that a task_done()\ncall was received for every item that had been put()\ninto the queue). Raises\nValueError\nif called more times than there were items placed in the queue.\nPriority Queue\u00b6\nLIFO Queue\u00b6\nExceptions\u00b6\n- exception asyncio.QueueEmpty\u00b6\nThis exception is raised when the\nget_nowait()\nmethod is called on an empty queue.\n- exception asyncio.QueueFull\u00b6\nException raised when the\nput_nowait()\nmethod is called on a queue that has reached its maxsize.\nExamples\u00b6\nQueues can be used to distribute workload between several concurrent tasks:\nimport asyncio\nimport random\nimport time\nasync def worker(name, queue):\nwhile True:\n# Get a \"work item\" out of the queue.\nsleep_for = await queue.get()\n# Sleep for the \"sleep_for\" seconds.\nawait asyncio.sleep(sleep_for)\n# Notify the queue that the \"work item\" has been processed.\nqueue.task_done()\nprint(f'{name} has slept for {sleep_for:.2f} seconds')\nasync def main():\n# Create a queue that we will use to store our \"workload\".\nqueue = asyncio.Queue()\n# Generate random timings and put them into the queue.\ntotal_sleep_time = 0\nfor _ in range(20):\nsleep_for = random.uniform(0.05, 1.0)\ntotal_sleep_time += sleep_for\nqueue.put_nowait(sleep_for)\n# Create three worker tasks to process the queue concurrently.\ntasks = []\nfor i in range(3):\ntask = asyncio.create_task(worker(f'worker-{i}', queue))\ntasks.append(task)\n# Wait until the queue is fully processed.\nstarted_at = time.monotonic()\nawait queue.join()\ntotal_slept_for = time.monotonic() - started_at\n# Cancel our worker tasks.\nfor task in tasks:\ntask.cancel()\n# Wait until all worker tasks are cancelled.\nawait asyncio.gather(*tasks, 
return_exceptions=True)\nprint('====')\nprint(f'3 workers slept in parallel for {total_slept_for:.2f} seconds')\nprint(f'total expected sleep time: {total_sleep_time:.2f} seconds')\nasyncio.run(main())", "code_snippets": ["\n", "\n", "\n\n\n", " ", " ", "\n ", " ", "\n ", "\n ", " ", " ", " ", "\n\n ", "\n ", " ", "\n\n ", "\n ", "\n\n ", "\n\n\n", " ", "\n ", "\n ", " ", " ", "\n\n ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", "\n\n ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", "\n\n ", "\n ", " ", " ", "\n ", " ", "\n ", " ", " ", " ", " ", "\n\n ", "\n ", " ", " ", " ", "\n ", "\n ", "\n ", " ", " ", "\n\n ", "\n ", "\n ", "\n\n\n", "\n"], "language": "Python", "source": "python.org", "token_count": 1362} +{"url": "https://docs.python.org/3/whatsnew/2.5.html", "title": "What\u2019s New in Python 2.5", "content": "What\u2019s New in Python 2.5\u00b6\n- Author:\nA.M. Kuchling\nThis article explains the new features in Python 2.5. The final release of Python 2.5 is scheduled for August 2006; PEP 356 describes the planned release schedule. Python 2.5 was released on September 19, 2006.\nThe changes in Python 2.5 are an interesting mix of language and library\nimprovements. The library enhancements will be more important to Python\u2019s user\ncommunity, I think, because several widely useful packages were added. New\nmodules include ElementTree for XML processing (xml.etree\n),\nthe SQLite database module (sqlite\n), and the ctypes\nmodule for calling C functions.\nThe language changes are of middling significance. Some pleasant new features\nwere added, but most of them aren\u2019t features that you\u2019ll use every day.\nConditional expressions were finally added to the language using a novel syntax;\nsee section PEP 308: Conditional Expressions. The new \u2018with\n\u2019 statement will make\nwriting cleanup code easier (section PEP 343: The \u2018with\u2019 statement). 
Values can now be passed\ninto generators (section PEP 342: New Generator Features). Imports are now visible as either\nabsolute or relative (section PEP 328: Absolute and Relative Imports). Some corner cases of exception\nhandling are handled better (section PEP 341: Unified try/except/finally). All these improvements\nare worthwhile, but they\u2019re improvements to one specific language feature or\nanother; none of them are broad modifications to Python\u2019s semantics.\nAs well as the language and library additions, other improvements and bugfixes were made throughout the source tree. A search through the SVN change logs finds there were 353 patches applied and 458 bugs fixed between Python 2.4 and 2.5. (Both figures are likely to be underestimates.)\nThis article doesn\u2019t try to be a complete specification of the new features; instead changes are briefly introduced using helpful examples. For full details, you should always refer to the documentation for Python 2.5 at https://docs.python.org. If you want to understand the complete implementation and design rationale, refer to the PEP for a particular new feature.\nComments, suggestions, and error reports for this document are welcome; please e-mail them to the author or open a bug in the Python bug tracker.\nPEP 308: Conditional Expressions\u00b6\nFor a long time, people have been requesting a way to write conditional expressions, which are expressions that return value A or value B depending on whether a Boolean value is true or false. A conditional expression lets you write a single assignment statement that has the same effect as the following:\nif condition:\nx = true_value\nelse:\nx = false_value\nThere have been endless tedious discussions of syntax on both python-dev and\ncomp.lang.python. A vote was even held that found the majority of voters wanted\nconditional expressions in some form, but there was no syntax that was preferred\nby a clear majority. Candidates included C\u2019s cond ? 
true_v : false_v\n, if\ncond then true_v else false_v\n, and 16 other variations.\nGuido van Rossum eventually chose a surprising syntax:\nx = true_value if condition else false_value\nEvaluation is still lazy as in existing Boolean expressions, so the order of evaluation jumps around a bit. The condition expression in the middle is evaluated first, and the true_value expression is evaluated only if the condition was true. Similarly, the false_value expression is only evaluated when the condition is false.\nThis syntax may seem strange and backwards; why does the condition go in the\nmiddle of the expression, and not in the front as in C\u2019s c ? x : y\n? The\ndecision was checked by applying the new syntax to the modules in the standard\nlibrary and seeing how the resulting code read. In many cases where a\nconditional expression is used, one value seems to be the \u2018common case\u2019 and one\nvalue is an \u2018exceptional case\u2019, used only on rarer occasions when the condition\nisn\u2019t met. The conditional syntax makes this pattern a bit more obvious:\ncontents = ((doc + '\\n') if doc else '')\nI read the above statement as meaning \u201chere contents is usually assigned a\nvalue of doc+'\\n'\n; sometimes doc is empty, in which special case an empty\nstring is returned.\u201d I doubt I will use conditional expressions very often\nwhere there isn\u2019t a clear common and uncommon case.\nThere was some discussion of whether the language should require surrounding conditional expressions with parentheses. The decision was made to not require parentheses in the Python language\u2019s grammar, but as a matter of style I think you should always use them. 
Consider these two statements:\n# First version -- no parens\nlevel = 1 if logging else 0\n# Second version -- with parens\nlevel = (1 if logging else 0)\nIn the first version, I think a reader\u2019s eye might group the statement into \u2018level = 1\u2019, \u2018if logging\u2019, \u2018else 0\u2019, and think that the condition decides whether the assignment to level is performed. The second version reads better, in my opinion, because it makes it clear that the assignment is always performed and the choice is being made between two values.\nAnother reason for including the brackets: a few odd combinations of list comprehensions and lambdas could look like incorrect conditional expressions. See PEP 308 for some examples. If you put parentheses around your conditional expressions, you won\u2019t run into this case.\nSee also\n- PEP 308 - Conditional Expressions\nPEP written by Guido van Rossum and Raymond D. Hettinger; implemented by Thomas Wouters.\nPEP 309: Partial Function Application\u00b6\nThe functools\nmodule is intended to contain tools for functional-style\nprogramming.\nOne useful tool in this module is the partial()\nfunction. For programs\nwritten in a functional style, you\u2019ll sometimes want to construct variants of\nexisting functions that have some of the parameters filled in. Consider a\nPython function f(a, b, c)\n; you could create a new function g(b, c)\nthat\nwas equivalent to f(1, b, c)\n. This is called \u201cpartial function\napplication\u201d.\npartial()\ntakes the arguments (function, arg1, arg2, ... kwarg1=value1,\nkwarg2=value2)\n. 
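The f/g relationship described above can be sketched in modern syntax (the surrounding whatsnew text uses Python 2; this is an illustrative Python 3 rendering, not from the original document):

```python
# Partial function application: fix the first argument of f to get g,
# so that g(b, c) behaves like f(1, b, c).
from functools import partial

def f(a, b, c):
    return (a, b, c)

g = partial(f, 1)
print(g(2, 3))  # (1, 2, 3)
```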
The resulting object is callable, so you can just call it to\ninvoke function with the filled-in arguments.\nHere\u2019s a small but realistic example:\nimport functools\ndef log (message, subsystem):\n\"Write the contents of 'message' to the specified subsystem.\"\nprint '%s: %s' % (subsystem, message)\n...\nserver_log = functools.partial(log, subsystem='server')\nserver_log('Unable to open socket')\nHere\u2019s another example, from a program that uses PyGTK. Here a context-sensitive\npop-up menu is being constructed dynamically. The callback provided\nfor the menu option is a partially applied version of the open_item()\nmethod, where the first argument has been provided.\n...\nclass Application:\ndef open_item(self, path):\n...\ndef __init__ (self):\nopen_func = functools.partial(self.open_item, item_path)\npopup_menu.append( (\"Open\", open_func, 1) )\nAnother function in the functools\nmodule is the\nupdate_wrapper(wrapper, wrapped)\nfunction that helps you write\nwell-behaved decorators. update_wrapper()\ncopies the name, module, and\ndocstring attribute to a wrapper function so that tracebacks inside the wrapped\nfunction are easier to understand. For example, you might write:\ndef my_decorator(f):\ndef wrapper(*args, **kwds):\nprint 'Calling decorated function'\nreturn f(*args, **kwds)\nfunctools.update_wrapper(wrapper, f)\nreturn wrapper\nwraps()\nis a decorator that can be used inside your own decorators to copy\nthe wrapped function\u2019s information. An alternate version of the previous\nexample would be:\ndef my_decorator(f):\n@functools.wraps(f)\ndef wrapper(*args, **kwds):\nprint 'Calling decorated function'\nreturn f(*args, **kwds)\nreturn wrapper\nSee also\n- PEP 309 - Partial Function Application\nPEP proposed and written by Peter Harris; implemented by Hye-Shik Chang and Nick Coghlan, with adaptations by Raymond Hettinger.\nPEP 314: Metadata for Python Software Packages v1.1\u00b6\nSome simple dependency support was added to Distutils. 
The setup()\nfunction now has requires\n, provides\n, and obsoletes\nkeyword\nparameters. When you build a source distribution using the sdist\ncommand,\nthe dependency information will be recorded in the PKG-INFO\nfile.\nAnother new keyword parameter is download_url\n, which should be set to a URL\nfor the package\u2019s source code. This means it\u2019s now possible to look up an entry\nin the package index, determine the dependencies for a package, and download the\nrequired packages.\nVERSION = '1.0'\nsetup(name='PyPackage',\nversion=VERSION,\nrequires=['numarray', 'zlib (>=1.1.4)'],\nobsoletes=['OldPackage'],\ndownload_url=('http://www.example.com/pypackage/dist/pkg-%s.tar.gz'\n% VERSION),\n)\nAnother new enhancement to the Python package index at https://pypi.org is storing source and binary archives for a package. The new upload Distutils command will upload a package to the repository.\nBefore a package can be uploaded, you must be able to build a distribution using\nthe sdist Distutils command. Once that works, you can run python\nsetup.py upload\nto add your package to the PyPI archive. Optionally you can\nGPG-sign the package by supplying the --sign\nand --identity\noptions.\nPackage uploading was implemented by Martin von L\u00f6wis and Richard Jones.\nSee also\n- PEP 314 - Metadata for Python Software Packages v1.1\nPEP proposed and written by A.M. Kuchling, Richard Jones, and Fred Drake; implemented by Richard Jones and Fred Drake.\nPEP 328: Absolute and Relative Imports\u00b6\nThe simpler part of PEP 328 was implemented in Python 2.4: parentheses could now\nbe used to enclose the names imported from a module using the from ... import\n...\nstatement, making it easier to import many different names.\nThe more complicated part has been implemented in Python 2.5: importing a module can be specified to use absolute or package-relative imports. 
The plan is to move toward making absolute imports the default in future versions of Python.\nLet\u2019s say you have a package directory like this:\npkg/\npkg/__init__.py\npkg/main.py\npkg/string.py\nThis defines a package named pkg\ncontaining the pkg.main\nand\npkg.string\nsubmodules.\nConsider the code in the main.py\nmodule. What happens if it executes\nthe statement import string\n? In Python 2.4 and earlier, it will first look\nin the package\u2019s directory to perform a relative import, finds\npkg/string.py\n, imports the contents of that file as the\npkg.string\nmodule, and that module is bound to the name string\nin the\npkg.main\nmodule\u2019s namespace.\nThat\u2019s fine if pkg.string\nwas what you wanted. But what if you wanted\nPython\u2019s standard string\nmodule? There\u2019s no clean way to ignore\npkg.string\nand look for the standard module; generally you had to look at\nthe contents of sys.modules\n, which is slightly unclean. Holger Krekel\u2019s\npy.std\npackage provides a tidier way to perform imports from the standard\nlibrary, import py; py.std.string.join()\n, but that package isn\u2019t available\non all Python installations.\nReading code which relies on relative imports is also less clear, because a\nreader may be confused about which module, string\nor pkg.string\n,\nis intended to be used. Python users soon learned not to duplicate the names of\nstandard library modules in the names of their packages\u2019 submodules, but you\ncan\u2019t protect against having your submodule\u2019s name being used for a new module\nadded in a future version of Python.\nIn Python 2.5, you can switch import\n\u2019s behaviour to absolute imports\nusing a from __future__ import absolute_import\ndirective. This absolute-import\nbehaviour will become the default in a future version (probably Python\n2.7). Once absolute imports are the default, import string\nwill always\nfind the standard library\u2019s version. 
It\u2019s suggested that users should begin\nusing absolute imports as much as possible, so it\u2019s preferable to begin writing\nfrom pkg import string\nin your code.\nRelative imports are still possible by adding a leading period to the module\nname when using the from ... import\nform:\n# Import names from pkg.string\nfrom .string import name1, name2\n# Import pkg.string\nfrom . import string\nThis imports the string\nmodule relative to the current package, so in\npkg.main\nthis will import name1 and name2 from pkg.string\n.\nAdditional leading periods perform the relative import starting from the parent\nof the current package. For example, code in the A.B.C\nmodule can do:\nfrom . import D # Imports A.B.D\nfrom .. import E # Imports A.E\nfrom ..F import G # Imports A.F.G\nLeading periods cannot be used with the import modname\nform of the import\nstatement, only the from ... import\nform.\nSee also\n- PEP 328 - Imports: Multi-Line and Absolute/Relative\nPEP written by Aahz; implemented by Thomas Wouters.\n- https://pylib.readthedocs.io/\nThe py library by Holger Krekel, which contains the\npy.std\npackage.\nPEP 338: Executing Modules as Scripts\u00b6\nThe -m\nswitch added in Python 2.4 to execute a module as a script\ngained a few more abilities. Instead of being implemented in C code inside the\nPython interpreter, the switch now uses an implementation in a new module,\nrunpy\n.\nThe runpy\nmodule implements a more sophisticated import mechanism so that\nit\u2019s now possible to run modules in a package such as pychecker.checker\n.\nThe module also supports alternative import mechanisms such as the\nzipimport\nmodule. This means you can add a .zip archive\u2019s path to\nsys.path\nand then use the -m\nswitch to execute code from the\narchive.\nSee also\n- PEP 338 - Executing modules as scripts\nPEP written and implemented by Nick Coghlan.\nPEP 341: Unified try/except/finally\u00b6\nUntil Python 2.5, the try\nstatement came in two flavours. 
You could\nuse a finally\nblock to ensure that code is always executed, or one or\nmore except\nblocks to catch specific exceptions. You couldn\u2019t\ncombine both except\nblocks and a finally\nblock, because\ngenerating the right bytecode for the combined version was complicated and it\nwasn\u2019t clear what the semantics of the combined statement should be.\nGuido van Rossum spent some time working with Java, which does support the\nequivalent of combining except\nblocks and a finally\nblock,\nand this clarified what the statement should mean. In Python 2.5, you can now\nwrite:\ntry:\nblock-1 ...\nexcept Exception1:\nhandler-1 ...\nexcept Exception2:\nhandler-2 ...\nelse:\nelse-block\nfinally:\nfinal-block\nThe code in block-1 is executed. If the code raises an exception, the various\nexcept\nblocks are tested: if the exception is of class\nException1\n, handler-1 is executed; otherwise if it\u2019s of class\nException2\n, handler-2 is executed, and so forth. If no exception is\nraised, the else-block is executed.\nNo matter what happened previously, the final-block is executed once the code block is complete and any raised exceptions handled. Even if there\u2019s an error in an exception handler or the else-block and a new exception is raised, the code in the final-block is still run.\nSee also\n- PEP 341 - Unifying try-except and try-finally\nPEP written by Georg Brandl; implementation by Thomas Lee.\nPEP 342: New Generator Features\u00b6\nPython 2.5 adds a simple way to pass values into a generator. As introduced in Python 2.3, generators only produce output; once a generator\u2019s code was invoked to create an iterator, there was no way to pass any new information into the function when its execution is resumed. Sometimes the ability to pass in some information would be useful. 
Hackish solutions to this include making the generator’s code look at a global variable and then changing the global variable’s value, or passing in some mutable object that callers then modify.
To refresh your memory of basic generators, here’s a simple example:

def counter (maximum):
    i = 0
    while i < maximum:
        yield i
        i += 1

When you call counter(10), the result is an iterator that returns the values from 0 up to 9. On encountering the yield statement, the iterator returns the provided value and suspends the function’s execution, preserving the local variables. Execution resumes on the following call to the iterator’s next() method, picking up after the yield statement.
In Python 2.3, yield was a statement; it didn’t return any value. In 2.5, yield is now an expression, returning a value that can be assigned to a variable or otherwise operated on:

val = (yield i)

I recommend that you always put parentheses around a yield expression when you’re doing something with the returned value, as in the above example. The parentheses aren’t always necessary, but it’s easier to always add them instead of having to remember when they’re needed.
(PEP 342 explains the exact rules, which are that a yield-expression must always be parenthesized except when it occurs at the top-level expression on the right-hand side of an assignment. This means you can write val = yield i but have to use parentheses when there’s an operation, as in val = (yield i) + 12.)
Values are sent into a generator by calling its send(value) method. The generator’s code is then resumed and the yield expression returns the specified value. 
If the regular next() method is called, the yield returns None.
Here’s the previous example, modified to allow changing the value of the internal counter.

def counter (maximum):
    i = 0
    while i < maximum:
        val = (yield i)
        # If value provided, change counter
        if val is not None:
            i = val
        else:
            i += 1

And here’s an example of changing the counter:

>>> it = counter(10)
>>> print it.next()
0
>>> print it.next()
1
>>> print it.send(8)
8
>>> print it.next()
9
>>> print it.next()
Traceback (most recent call last):
  File "t.py", line 15, in ?
    print it.next()
StopIteration

yield will usually return None, so you should always check for this case. Don’t just use its value in expressions unless you’re sure that the send() method will be the only method used to resume your generator function.
In addition to send(), there are two other new methods on generators:
- throw(type, value=None, traceback=None) is used to raise an exception inside the generator; the exception is raised by the yield expression where the generator’s execution is paused.
- close() raises a new GeneratorExit exception inside the generator to terminate the iteration. On receiving this exception, the generator’s code must either raise GeneratorExit or StopIteration. Catching the GeneratorExit exception and returning a value is illegal and will trigger a RuntimeError; if the function raises some other exception, that exception is propagated to the caller. close() will also be called by Python’s garbage collector when the generator is garbage-collected.
If you need to run cleanup code when a GeneratorExit occurs, I suggest using a try: ... finally: suite instead of catching GeneratorExit.
The cumulative effect of these changes is to turn generators from one-way producers of information into both producers and consumers.
Generators also become coroutines, a more generalized form of subroutines. Subroutines are entered at one point and exited at another point (the top of the function, and a return statement), but coroutines can be entered, exited, and resumed at many different points (the yield statements). We’ll have to figure out patterns for using coroutines effectively in Python.
The addition of the close() method has one side effect that isn’t obvious. close() is called when a generator is garbage-collected, so this means the generator’s code gets one last chance to run before the generator is destroyed. This last chance means that try...finally statements in generators can now be guaranteed to work; the finally clause will now always get a chance to run. The syntactic restriction that you couldn’t mix yield statements with a try...finally suite has therefore been removed. This seems like a minor bit of language trivia, but using generators and try...finally is actually necessary in order to implement the with statement described by PEP 343. I’ll look at this new statement in the following section.
Another even more esoteric effect of this change: previously, the gi_frame attribute of a generator was always a frame object. It’s now possible for gi_frame to be None once the generator has been exhausted.
See also
- PEP 342 - Coroutines via Enhanced Generators
PEP written by Guido van Rossum and Phillip J. Eby; implemented by Phillip J. Eby. 
Includes examples of some fancier uses of generators as coroutines.
Earlier versions of these features were proposed in PEP 288 by Raymond Hettinger and PEP 325 by Samuele Pedroni.
- https://en.wikipedia.org/wiki/Coroutine
The Wikipedia entry for coroutines.
- https://web.archive.org/web/20160321211320/http://www.sidhe.org/~dan/blog/archives/000178.html
An explanation of coroutines from a Perl point of view, written by Dan Sugalski.
PEP 343: The ‘with’ statement¶
The ‘with’ statement clarifies code that previously would use try...finally blocks to ensure that clean-up code is executed. In this section, I’ll discuss the statement as it will commonly be used. In the next section, I’ll examine the implementation details and show how to write objects for use with this statement.
The ‘with’ statement is a new control-flow structure whose basic structure is:

with expression [as variable]:
    with-block

The expression is evaluated, and it should result in an object that supports the context management protocol (that is, has __enter__() and __exit__() methods).
The object’s __enter__() is called before with-block is executed and therefore can run set-up code. It also may return a value that is bound to the name variable, if given. (Note carefully that variable is not assigned the result of expression.)
After execution of the with-block is finished, the object’s __exit__() method is called, even if the block raised an exception, and can therefore run clean-up code.
To enable the statement in Python 2.5, you need to add the following directive to your module:

from __future__ import with_statement

The statement will always be enabled in Python 2.6.
Some standard Python objects now support the context management protocol and can be used with the ‘with’ statement. File objects are one example:

with open('/etc/passwd', 'r') as f:
    for line in f:
        print line
        ... more processing code ...

After this statement has executed, the file object in f will have been automatically closed, even if the for loop raised an exception part-way through the block.
Note
In this case, f is the same object created by open(), because __enter__() returns self.
The threading module’s locks and condition variables also support the ‘with’ statement:

lock = threading.Lock()
with lock:
    # Critical section of code
    ...

The lock is acquired before the block is executed and always released once the block is complete.
The new localcontext() function in the decimal module makes it easy to save and restore the current decimal context, which encapsulates the desired precision and rounding characteristics for computations:

from decimal import Decimal, Context, localcontext

# Displays with default precision of 28 digits
v = Decimal('578')
print v.sqrt()

with localcontext(Context(prec=16)):
    # All code in this block uses a precision of 16 digits.
    # The original context is restored on exiting the block.
    print v.sqrt()

Writing Context Managers¶
Under the hood, the ‘with’ statement is fairly complicated. Most people will only use ‘with’ in company with existing objects and don’t need to know these details, so you can skip the rest of this section if you like. Authors of new objects will need to understand the details of the underlying implementation and should keep reading.
A high-level explanation of the context management protocol is:
- The expression is evaluated and should result in an object called a “context manager”. The context manager must have __enter__() and __exit__() methods.
- The context manager’s __enter__() method is called. The value returned is assigned to VAR. 
If no 'as VAR' clause is present, the value is simply discarded.
- The code in BLOCK is executed.
- If BLOCK raises an exception, __exit__(type, value, traceback) is called with the exception details, the same values returned by sys.exc_info(). The method’s return value controls whether the exception is re-raised: any false value re-raises the exception, and True will result in suppressing it. You’ll only rarely want to suppress the exception, because if you do the author of the code containing the ‘with’ statement will never realize anything went wrong.
- If BLOCK didn’t raise an exception, the __exit__() method is still called, but type, value, and traceback are all None.
Let’s think through an example. I won’t present detailed code but will only sketch the methods necessary for a database that supports transactions.
(For people unfamiliar with database terminology: a set of changes to the database are grouped into a transaction. Transactions can be either committed, meaning that all the changes are written into the database, or rolled back, meaning that the changes are all discarded and the database is unchanged. See any database textbook for more information.)
Let’s assume there’s an object representing a database connection. Our goal will be to let the user write code like this:

db_connection = DatabaseConnection()
with db_connection as cursor:
    cursor.execute('insert into ...')
    cursor.execute('delete from ...')
    # ... more operations ...

The transaction should be committed if the code in the block runs flawlessly or rolled back if there’s an exception. 
Here’s the basic interface for DatabaseConnection that I’ll assume:

class DatabaseConnection:
    # Database interface
    def cursor (self):
        "Returns a cursor object and starts a new transaction"
    def commit (self):
        "Commits current transaction"
    def rollback (self):
        "Rolls back current transaction"

The __enter__() method is pretty easy, having only to start a new transaction. For this application the resulting cursor object would be a useful result, so the method will return it. The user can then add as cursor to their ‘with’ statement to bind the cursor to a variable name.

class DatabaseConnection:
    ...
    def __enter__ (self):
        # Code to start a new transaction
        cursor = self.cursor()
        return cursor

The __exit__() method is the most complicated because it’s where most of the work has to be done. The method has to check if an exception occurred. If there was no exception, the transaction is committed. The transaction is rolled back if there was an exception.
In the code below, execution will just fall off the end of the function, returning the default value of None. None is false, so the exception will be re-raised automatically. If you wished, you could be more explicit and add a return statement at the marked location.

class DatabaseConnection:
    ...
    def __exit__ (self, type, value, tb):
        if tb is None:
            # No exception, so commit
            self.commit()
        else:
            # Exception occurred, so rollback.
            self.rollback()
            # return False

The contextlib module¶
The new contextlib module provides some functions and a decorator that are useful for writing objects for use with the ‘with’ statement.
The decorator is called contextmanager(), and lets you write a single generator function instead of defining a new class. The generator should yield exactly one value. 
The code up to the yield will be executed as the __enter__() method, and the value yielded will be the method’s return value that will get bound to the variable in the ‘with’ statement’s as clause, if any. The code after the yield will be executed in the __exit__() method. Any exception raised in the block will be raised by the yield statement.
Our database example from the previous section could be written using this decorator as:

from contextlib import contextmanager

@contextmanager
def db_transaction (connection):
    cursor = connection.cursor()
    try:
        yield cursor
    except:
        connection.rollback()
        raise
    else:
        connection.commit()

db = DatabaseConnection()
with db_transaction(db) as cursor:
    ...

The contextlib module also has a nested(mgr1, mgr2, ...) function that combines a number of context managers so you don’t need to write nested ‘with’ statements. In this example, the single ‘with’ statement both starts a database transaction and acquires a thread lock:

lock = threading.Lock()
with nested (db_transaction(db), lock) as (cursor, locked):
    ...

Finally, the closing(object) function returns object so that it can be bound to a variable, and calls object.close at the end of the block.

import urllib, sys
from contextlib import closing

with closing(urllib.urlopen('http://www.yahoo.com')) as f:
    for line in f:
        sys.stdout.write(line)

See also
- PEP 343 - The “with” statement
PEP written by Guido van Rossum and Nick Coghlan; implemented by Mike Bland, Guido van Rossum, and Neal Norwitz. 
The PEP shows the code generated for a ‘with’ statement, which can be helpful in learning how the statement works.
The documentation for the contextlib module.
PEP 352: Exceptions as New-Style Classes¶
Exception classes can now be new-style classes, not just classic classes, and the built-in Exception class and all the standard built-in exceptions (NameError, ValueError, etc.) are now new-style classes.
The inheritance hierarchy for exceptions has been rearranged a bit. In 2.5, the inheritance relationships are:

BaseException       # New in Python 2.5
|- KeyboardInterrupt
|- SystemExit
|- Exception
   |- (all other current built-in exceptions)

This rearrangement was done because people often want to catch all exceptions that indicate program errors. KeyboardInterrupt and SystemExit aren’t errors, though, and usually represent an explicit action such as the user hitting Control-C or code calling sys.exit(). A bare except: will catch all exceptions, so you commonly need to list KeyboardInterrupt and SystemExit in order to re-raise them. The usual pattern is:

try:
    ...
except (KeyboardInterrupt, SystemExit):
    raise
except:
    # Log error...
    # Continue running program...

In Python 2.5, you can now write except Exception to achieve the same result, catching all the exceptions that usually indicate errors but leaving KeyboardInterrupt and SystemExit alone. As in previous versions, a bare except: still catches all exceptions.
The goal for Python 3.0 is to require any class raised as an exception to derive from BaseException or some descendant of BaseException, and future releases in the Python 2.x series may begin to enforce this constraint. Therefore, I suggest you begin making all your exception classes derive from Exception now. 
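This hierarchy survives unchanged into modern Python, so the "catch errors only" pattern can be sketched as follows (shown in Python 3 syntax; run_safely and its print-based logging are hypothetical names for illustration, not part of the standard library):

```python
import sys

def run_safely(task):
    """Run task(); log ordinary errors but let SystemExit and
    KeyboardInterrupt propagate, since they derive from
    BaseException but not from Exception."""
    try:
        return task()
    except Exception as exc:        # catches errors only
        print("error logged:", exc)
        return None

# The rearranged hierarchy is what makes this work:
assert issubclass(SystemExit, BaseException)
assert not issubclass(SystemExit, Exception)

run_safely(lambda: 1 / 0)           # ZeroDivisionError is logged, not raised
```

A call such as run_safely(lambda: sys.exit(3)) would still terminate the program, because SystemExit falls through the except Exception clause.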
It\u2019s been suggested that the bare except:\nform should\nbe removed in Python 3.0, but Guido van Rossum hasn\u2019t decided whether to do this\nor not.\nRaising of strings as exceptions, as in the statement raise \"Error\noccurred\"\n, is deprecated in Python 2.5 and will trigger a warning. The aim is\nto be able to remove the string-exception feature in a few releases.\nSee also\n- PEP 352 - Required Superclass for Exceptions\nPEP written by Brett Cannon and Guido van Rossum; implemented by Brett Cannon.\nPEP 353: Using ssize_t as the index type\u00b6\nA wide-ranging change to Python\u2019s C API, using a new Py_ssize_t\ntype\ndefinition instead of int, will permit the interpreter to handle more\ndata on 64-bit platforms. This change doesn\u2019t affect Python\u2019s capacity on 32-bit\nplatforms.\nVarious pieces of the Python interpreter used C\u2019s int type to store\nsizes or counts; for example, the number of items in a list or tuple were stored\nin an int. The C compilers for most 64-bit platforms still define\nint as a 32-bit type, so that meant that lists could only hold up to\n2**31 - 1\n= 2147483647 items. (There are actually a few different\nprogramming models that 64-bit C compilers can use \u2013 see\nhttps://unix.org/version2/whatsnew/lp64_wp.html for a discussion \u2013 but the\nmost commonly available model leaves int as 32 bits.)\nA limit of 2147483647 items doesn\u2019t really matter on a 32-bit platform because\nyou\u2019ll run out of memory before hitting the length limit. Each list item\nrequires space for a pointer, which is 4 bytes, plus space for a\nPyObject\nrepresenting the item. 2147483647*4 is already more bytes\nthan a 32-bit address space can contain.\nIt\u2019s possible to address that much memory on a 64-bit platform, however. The pointers for a list that size would only require 16 GiB of space, so it\u2019s not unreasonable that Python programmers might construct lists that large. 
Therefore, the Python interpreter had to be changed to use some type other than int, and this will be a 64-bit type on 64-bit platforms. The change will cause incompatibilities on 64-bit machines, so it was deemed worth making the transition now, while the number of 64-bit users is still relatively small. (In 5 or 10 years, we may all be on 64-bit machines, and the transition would be more painful then.)\nThis change most strongly affects authors of C extension modules. Python\nstrings and container types such as lists and tuples now use\nPy_ssize_t\nto store their size. Functions such as\nPyList_Size()\nnow return Py_ssize_t\n. Code in extension modules\nmay therefore need to have some variables changed to Py_ssize_t\n.\nThe PyArg_ParseTuple()\nand Py_BuildValue()\nfunctions have a new\nconversion code, n\n, for Py_ssize_t\n. PyArg_ParseTuple()\n\u2019s\ns#\nand t#\nstill output int by default, but you can define the\nmacro PY_SSIZE_T_CLEAN\nbefore including Python.h\nto make\nthem return Py_ssize_t\n.\nPEP 353 has a section on conversion guidelines that extension authors should read to learn about supporting 64-bit platforms.\nSee also\n- PEP 353 - Using ssize_t as the index type\nPEP written and implemented by Martin von L\u00f6wis.\nPEP 357: The \u2018__index__\u2019 method\u00b6\nThe NumPy developers had a problem that could only be solved by adding a new\nspecial method, __index__()\n. When using slice notation, as in\n[start:stop:step]\n, the values of the start, stop, and step indexes\nmust all be either integers or long integers. NumPy defines a variety of\nspecialized integer types corresponding to unsigned and signed integers of 8,\n16, 32, and 64 bits, but there was no way to signal that these types could be\nused as slice indexes.\nSlicing can\u2019t just use the existing __int__()\nmethod because that method\nis also used to implement coercion to integers. 
If slicing used __int__(), floating-point numbers would also become legal slice indexes and that’s clearly an undesirable behaviour.
Instead, a new special method called __index__() was added. It takes no arguments and returns an integer giving the slice index to use. For example:

class C:
    def __index__ (self):
        return self.value

The return value must be either a Python integer or long integer. The interpreter will check that the type returned is correct, and raises a TypeError if this requirement isn’t met.
A corresponding nb_index slot was added to the C-level PyNumberMethods structure to let C extensions implement this protocol. PyNumber_Index(obj) can be used in extension code to call the __index__() function and retrieve its result.
See also
- PEP 357 - Allowing Any Object to be Used for Slicing
PEP written and implemented by Travis Oliphant.
Other Language Changes¶
Here are all of the changes that Python 2.5 makes to the core Python language.
The dict type has a new hook for letting subclasses provide a default value when a key isn’t contained in the dictionary. When a key isn’t found, the dictionary’s __missing__(key) method will be called. This hook is used to implement the new defaultdict class in the collections module. 
The following example defines a dictionary that returns zero for any missing key:

class zerodict (dict):
    def __missing__ (self, key):
        return 0

d = zerodict({1:1, 2:2})
print d[1], d[2]   # Prints 1, 2
print d[3], d[4]   # Prints 0, 0

Both 8-bit and Unicode strings have new partition(sep) and rpartition(sep) methods that simplify a common use case.
The find(S) method is often used to get an index which is then used to slice the string and obtain the pieces that are before and after the separator. partition(sep) condenses this pattern into a single method call that returns a 3-tuple containing the substring before the separator, the separator itself, and the substring after the separator. If the separator isn’t found, the first element of the tuple is the entire string and the other two elements are empty. rpartition(sep) also returns a 3-tuple but starts searching from the end of the string; the r stands for ‘reverse’.
Some examples:

>>> ('http://www.python.org').partition('://')
('http', '://', 'www.python.org')
>>> ('file:/usr/share/doc/index.html').partition('://')
('file:/usr/share/doc/index.html', '', '')
>>> (u'Subject: a quick question').partition(':')
(u'Subject', u':', u' a quick question')
>>> 'www.python.org'.rpartition('.')
('www.python', '.', 'org')
>>> 'www.python.org'.rpartition(':')
('', '', 'www.python.org')

(Implemented by Fredrik Lundh following a suggestion by Raymond Hettinger.)
The startswith() and endswith() methods of string types now accept tuples of strings to check for.

def is_image_file (filename):
    return filename.endswith(('.gif', '.jpg', '.tiff'))

(Implemented by Georg Brandl following a suggestion by Tom Lynn.)
The min() and max() built-in functions gained a key keyword parameter analogous to the key argument for sort(). 
This parameter supplies a function that takes a single argument and is called for every value in the list; min()/max() will return the element with the smallest/largest return value from this function. For example, to find the longest string in a list, you can do:

L = ['medium', 'longest', 'short']
# Prints 'longest'
print max(L, key=len)
# Prints 'short', because lexicographically 'short' has the largest value
print max(L)

(Contributed by Steven Bethard and Raymond Hettinger.)
Two new built-in functions, any() and all(), evaluate whether an iterator contains any true or false values. any() returns True if any value returned by the iterator is true; otherwise it will return False. all() returns True only if all of the values returned by the iterator evaluate as true. (Suggested by Guido van Rossum, and implemented by Raymond Hettinger.)
The result of a class’s __hash__() method can now be either a long integer or a regular integer. If a long integer is returned, the hash of that value is taken. In earlier versions the hash value was required to be a regular integer, but in 2.5 the id() built-in was changed to always return non-negative numbers, and users often seem to use id(self) in __hash__() methods (though this is discouraged).
ASCII is now the default encoding for modules. It’s now a syntax error if a module contains string literals with 8-bit characters but doesn’t have an encoding declaration. In Python 2.4 this triggered a warning, not a syntax error. See PEP 263 for how to declare a module’s encoding; for example, you might add a line like this near the top of the source file:

# -*- coding: latin1 -*-

A new warning, UnicodeWarning, is triggered when you attempt to compare a Unicode string and an 8-bit string that can’t be converted to Unicode using the default ASCII encoding. 
The result of the comparison is false:

>>> chr(128) == unichr(128)    # Can't convert chr(128) to Unicode
__main__:1: UnicodeWarning: Unicode equal comparison failed
  to convert both arguments to Unicode - interpreting them
  as being unequal
False
>>> chr(127) == unichr(127)    # chr(127) can be converted
True

Previously this would raise a UnicodeDecodeError exception, but in 2.5 this could result in puzzling problems when accessing a dictionary. If you looked up unichr(128) and chr(128) was being used as a key, you’d get a UnicodeDecodeError exception. Other changes in 2.5 resulted in this exception being raised instead of suppressed by the code in dictobject.c that implements dictionaries.
Raising an exception for such a comparison is strictly correct, but the change might have broken code, so instead UnicodeWarning was introduced.
(Implemented by Marc-André Lemburg.)
One error that Python programmers sometimes make is forgetting to include an __init__.py module in a package directory. Debugging this mistake can be confusing, and usually requires running Python with the -v switch to log all the paths searched. In Python 2.5, a new ImportWarning warning is triggered when an import would have picked up a directory as a package but no __init__.py was found. This warning is silently ignored by default; provide the -Wd option when running the Python executable to display the warning message. (Implemented by Thomas Wouters.)
The list of base classes in a class definition can now be empty. As an example, this is now legal:

class C():
    pass

(Implemented by Brett Cannon.)
Interactive Interpreter Changes¶
In the interactive interpreter, quit and exit have long been strings so that new users get a somewhat helpful message when they try to quit:

>>> quit
'Use Ctrl-D (i.e. EOF) to exit.'

In Python 2.5, quit and exit are now objects that still produce string representations of themselves, but are also callable. 
Newbies who try quit()\nor exit()\nwill now exit the interpreter as they expect. (Implemented by\nGeorg Brandl.)\nThe Python executable now accepts the standard long options --help\nand --version\n; on Windows, it also accepts the /?\noption\nfor displaying a help message. (Implemented by Georg Brandl.)\nOptimizations\u00b6\nSeveral of the optimizations were developed at the NeedForSpeed sprint, an event held in Reykjavik, Iceland, from May 21\u201328 2006. The sprint focused on speed enhancements to the CPython implementation and was funded by EWT LLC with local support from CCP Games. Those optimizations added at this sprint are specially marked in the following list.\nWhen they were introduced in Python 2.4, the built-in\nset\nandfrozenset\ntypes were built on top of Python\u2019s dictionary type. In 2.5 the internal data structure has been customized for implementing sets, and as a result sets will use a third less memory and are somewhat faster. (Implemented by Raymond Hettinger.)The speed of some Unicode operations, such as finding substrings, string splitting, and character map encoding and decoding, has been improved. (Substring search and splitting improvements were added by Fredrik Lundh and Andrew Dalke at the NeedForSpeed sprint. Character maps were improved by Walter D\u00f6rwald and Martin von L\u00f6wis.)\nThe\nlong(str, base)\nfunction is now faster on long digit strings because fewer intermediate results are calculated. The peak is for strings of around 800\u20131000 digits where the function is 6 times faster. (Contributed by Alan McIntyre and committed at the NeedForSpeed sprint.)It\u2019s now illegal to mix iterating over a file with\nfor line in file\nand calling the file object\u2019sread()\n/readline()\n/readlines()\nmethods. Iteration uses an internal buffer and theread*()\nmethods don\u2019t use that buffer. Instead they would return the data following the buffer, causing the data to appear out of order. 
Mixing iteration and these methods will now trigger aValueError\nfrom theread*()\nmethod. (Implemented by Thomas Wouters.)The\nstruct\nmodule now compiles structure format strings into an internal representation and caches this representation, yielding a 20% speedup. (Contributed by Bob Ippolito at the NeedForSpeed sprint.)The\nre\nmodule got a 1 or 2% speedup by switching to Python\u2019s allocator functions instead of the system\u2019smalloc()\nandfree()\n. (Contributed by Jack Diederich at the NeedForSpeed sprint.)The code generator\u2019s peephole optimizer now performs simple constant folding in expressions. If you write something like\na = 2+3\n, the code generator will do the arithmetic and produce code corresponding toa = 5\n. (Proposed and implemented by Raymond Hettinger.)Function calls are now faster because code objects now keep the most recently finished frame (a \u201czombie frame\u201d) in an internal field of the code object, reusing it the next time the code object is invoked. (Original patch by Michael Hudson, modified by Armin Rigo and Richard Jones; committed at the NeedForSpeed sprint.) Frame objects are also slightly smaller, which may improve cache locality and reduce memory usage a bit. (Contributed by Neal Norwitz.)\nPython\u2019s built-in exceptions are now new-style classes, a change that speeds up instantiation considerably. Exception handling in Python 2.5 is therefore about 30% faster than in 2.4. (Contributed by Richard Jones, Georg Brandl and Sean Reifschneider at the NeedForSpeed sprint.)\nImporting now caches the paths tried, recording whether they exist or not so that the interpreter makes fewer\nopen()\nandstat()\ncalls on startup. (Contributed by Martin von L\u00f6wis and Georg Brandl.)\nNew, Improved, and Removed Modules\u00b6\nThe standard library received many enhancements and bug fixes in Python 2.5.\nHere\u2019s a partial list of the most notable changes, sorted alphabetically by\nmodule name. 
Consult the Misc/NEWS\nfile in the source tree for a more\ncomplete list of changes, or look through the SVN logs for all the details.\nThe\naudioop\nmodule now supports the a-LAW encoding, and the code for u-LAW encoding has been improved. (Contributed by Lars Immisch.)The\ncodecs\nmodule gained support for incremental codecs. Thecodec.lookup()\nfunction now returns aCodecInfo\ninstance instead of a tuple.CodecInfo\ninstances behave like a 4-tuple to preserve backward compatibility but also have the attributesencode\n,decode\n,incrementalencoder\n,incrementaldecoder\n,streamwriter\n, andstreamreader\n. Incremental codecs can receive input and produce output in multiple chunks; the output is the same as if the entire input was fed to the non-incremental codec. See thecodecs\nmodule documentation for details. (Designed and implemented by Walter D\u00f6rwald.)The\ncollections\nmodule gained a new type,defaultdict\n, that subclasses the standarddict\ntype. The new type mostly behaves like a dictionary but constructs a default value when a key isn\u2019t present, automatically adding it to the dictionary for the requested key value.The first argument to\ndefaultdict\n\u2019s constructor is a factory function that gets called whenever a key is requested but not found. This factory function receives no arguments, so you can use built-in type constructors such aslist()\norint()\n. 
For example, you can make an index of words based on their initial letter like this:

words = """Nel mezzo del cammin di nostra vita
mi ritrovai per una selva oscura
che la diritta via era smarrita""".lower().split()

index = defaultdict(list)

for w in words:
    init_letter = w[0]
    index[init_letter].append(w)

Printing index results in the following output:

defaultdict(<type 'list'>, {'c': ['cammin', 'che'], 'e': ['era'],
    'd': ['del', 'di', 'diritta'], 'm': ['mezzo', 'mi'], 'l': ['la'],
    'o': ['oscura'], 'n': ['nel', 'nostra'], 'p': ['per'],
    's': ['selva', 'smarrita'], 'r': ['ritrovai'], 'u': ['una'],
    'v': ['vita', 'via']})

(Contributed by Guido van Rossum.)
The deque double-ended queue type supplied by the collections module now has a remove(value) method that removes the first occurrence of value in the queue, raising ValueError if the value isn’t found. (Contributed by Raymond Hettinger.)
New module: The contextlib module contains helper functions for use with the new ‘with’ statement. See section The contextlib module for more about this module.
New module: The cProfile module is a C implementation of the existing profile module that has much lower overhead. The module’s interface is the same as profile: you run cProfile.run('main()') to profile a function, can save profile data to a file, etc. It’s not yet known if the Hotshot profiler, which is also written in C but doesn’t match the profile module’s interface, will continue to be maintained in future versions of Python. (Contributed by Armin Rigo.)
Also, the pstats module for analyzing the data measured by the profiler now supports directing the output to any file object by supplying a stream argument to the Stats constructor. (Contributed by Skip Montanaro.)
The csv module, which parses files in comma-separated value format, received several enhancements and a number of bugfixes. 
You can now set the maximum size in bytes of a field by calling the csv.field_size_limit(new_limit) function; omitting the new_limit argument will return the currently set limit. The reader class now has a line_num attribute that counts the number of physical lines read from the source; records can span multiple physical lines, so line_num is not the same as the number of records read.

The CSV parser is now stricter about multi-line quoted fields. Previously, if a line ended within a quoted field without a terminating newline character, a newline would be inserted into the returned field. This behavior caused problems when reading files that contained carriage return characters within fields, so the code was changed to return the field without inserting newlines. As a consequence, if newlines embedded within fields are important, the input should be split into lines in a manner that preserves the newline characters.

(Contributed by Skip Montanaro and Andrew McNamara.)

The datetime class in the datetime module now has a strptime(string, format) method for parsing date strings, contributed by Josh Spoerri. It uses the same format characters as time.strptime() and time.strftime():

from datetime import datetime

ts = datetime.strptime('10:13:15 2006-03-07',
                       '%H:%M:%S %Y-%m-%d')

The SequenceMatcher.get_matching_blocks() method in the difflib module now guarantees to return a minimal list of blocks describing matching subsequences. Previously, the algorithm would occasionally break a block of matching elements into two list entries. (Enhancement by Tim Peters.)

The doctest module gained a SKIP option that keeps an example from being executed at all. This is intended for code snippets that are usage examples intended for the reader and aren't actually test cases.

An encoding parameter was added to the testfile() function and the DocFileSuite class to specify the file's encoding.
This makes it easier to use non-ASCII characters in tests contained within a docstring. (Contributed by Bjorn Tillenius.)

The email package has been updated to version 4.0. (Contributed by Barry Warsaw.)

The fileinput module was made more flexible. Unicode filenames are now supported, and a mode parameter that defaults to "r" was added to the input() function to allow opening files in binary or universal newlines mode. Another new parameter, openhook, lets you use a function other than open() to open the input files. Once you're iterating over the set of files, the FileInput object's new fileno() returns the file descriptor for the currently opened file. (Contributed by Georg Brandl.)

In the gc module, the new get_count() function returns a 3-tuple containing the current collection counts for the three GC generations. This is accounting information for the garbage collector; when these counts reach a specified threshold, a garbage collection sweep will be made. The existing gc.collect() function now takes an optional generation argument of 0, 1, or 2 to specify which generation to collect. (Contributed by Barry Warsaw.)

The nsmallest() and nlargest() functions in the heapq module now support a key keyword parameter similar to the one provided by the min()/max() functions and the sort() methods. For example:

>>> import heapq
>>> L = ["short", 'medium', 'longest', 'longer still']
>>> heapq.nsmallest(2, L)  # Return two lowest elements, lexicographically
['longer still', 'longest']
>>> heapq.nsmallest(2, L, key=len)  # Return two shortest elements
['short', 'medium']

(Contributed by Raymond Hettinger.)

The itertools.islice() function now accepts None for the start and step arguments.
This makes it more compatible with the attributes of slice objects, so that you can now write the following:

s = slice(5)  # Create slice object
itertools.islice(iterable, s.start, s.stop, s.step)

(Contributed by Raymond Hettinger.)

The format() function in the locale module has been modified and two new functions were added, format_string() and currency().

The format() function's val parameter could previously be a string as long as no more than one %char specifier appeared; now the parameter must be exactly one %char specifier with no surrounding text. An optional monetary parameter was also added which, if True, will use the locale's rules for formatting currency in placing a separator between groups of three digits.

To format strings with multiple %char specifiers, use the new format_string() function that works like format() but also supports mixing %char specifiers with arbitrary text.

A new currency() function was also added that formats a number according to the current locale's settings.

(Contributed by Georg Brandl.)

The mailbox module underwent a massive rewrite to add the capability to modify mailboxes in addition to reading them. A new set of classes that include mbox, MH, and Maildir are used to read mailboxes, and have an add(message) method to add messages, remove(key) to remove messages, and lock()/unlock() to lock/unlock the mailbox. The following example converts a maildir-format mailbox into an mbox-format one:

import mailbox

# 'factory=None' uses email.Message.Message as the class representing
# individual messages.
src = mailbox.Maildir('maildir', factory=None)
dest = mailbox.mbox('/tmp/mbox')
for msg in src:
    dest.add(msg)

(Contributed by Gregory K. Johnson. Funding was provided by Google's 2005 Summer of Code.)

New module: the msilib module allows creating Microsoft Installer .msi files and CAB files. Some support for reading the .msi database is also included.
(Contributed by Martin von Löwis.)

The nis module now supports accessing domains other than the system default domain by supplying a domain argument to the nis.match() and nis.maps() functions. (Contributed by Ben Bell.)

The operator module's itemgetter() and attrgetter() functions now support multiple fields. A call such as operator.attrgetter('a', 'b') will return a function that retrieves the a and b attributes. Combining this new feature with the sort() method's key parameter lets you easily sort lists using multiple fields. (Contributed by Raymond Hettinger.)

The optparse module was updated to version 1.5.1 of the Optik library. The OptionParser class gained an epilog attribute, a string that will be printed after the help message, and a destroy() method to break reference cycles created by the object. (Contributed by Greg Ward.)

The os module underwent several changes. The stat_float_times variable now defaults to true, meaning that os.stat() will now return time values as floats. (This doesn't necessarily mean that os.stat() will return times that are precise to fractions of a second; not all systems support such precision.)

Constants named os.SEEK_SET, os.SEEK_CUR, and os.SEEK_END have been added; these are the parameters to the os.lseek() function. Two new constants for locking are os.O_SHLOCK and os.O_EXLOCK.

Two new functions, wait3() and wait4(), were added. They're similar to the waitpid() function, which waits for a child process to exit and returns a tuple of the process ID and its exit status, but wait3() and wait4() return additional information. wait3() doesn't take a process ID as input, so it waits for any child process to exit and returns a 3-tuple of process-id, exit-status, resource-usage as returned from the resource.getrusage() function. wait4(pid) does take a process ID. (Contributed by Chad J.
Schroeder.)

On FreeBSD, the os.stat() function now returns times with nanosecond resolution, and the returned object now has st_gen and st_birthtime. The st_flags attribute is also available, if the platform supports it. (Contributed by Antti Louko and Diego Pettenò.)

The Python debugger provided by the pdb module can now store lists of commands to execute when a breakpoint is reached and execution stops. Once breakpoint #1 has been created, enter commands 1 and enter a series of commands to be executed, finishing the list with end. The command list can include commands that resume execution, such as continue or next. (Contributed by Grégoire Dooms.)

The pickle and cPickle modules no longer accept a return value of None from the __reduce__() method; the method must return a tuple of arguments instead. The ability to return None was deprecated in Python 2.4, so this completes the removal of the feature.

The pkgutil module, containing various utility functions for finding packages, was enhanced to support PEP 302's import hooks and now also works for packages stored in ZIP-format archives. (Contributed by Phillip J. Eby.)

The pybench benchmark suite by Marc-André Lemburg is now included in the Tools/pybench directory. The pybench suite is an improvement on the commonly used pystone.py program because pybench provides a more detailed measurement of the interpreter's speed. It times particular operations such as function calls, tuple slicing, method lookups, and numeric operations, instead of performing many different operations and reducing the result to a single number as pystone.py does.

The pyexpat module now uses version 2.0 of the Expat parser. (Contributed by Trent Mick.)

The Queue class provided by the Queue module gained two new methods. join() blocks until all items in the queue have been retrieved and all processing work on the items has been completed.
Worker threads call the other new method, task_done(), to signal that processing for an item has been completed. (Contributed by Raymond Hettinger.)

The old regex and regsub modules, which have been deprecated ever since Python 2.0, have finally been deleted. Other deleted modules: statcache, tzparse, whrandom.

Also deleted: the lib-old directory, which includes ancient modules such as dircmp and ni, was removed. lib-old wasn't on the default sys.path, so unless your programs explicitly added the directory to sys.path, this removal shouldn't affect your code.

The rlcompleter module is no longer dependent on importing the readline module and therefore now works on non-Unix platforms. (Patch from Robert Kiendl.)

The SimpleXMLRPCServer and DocXMLRPCServer classes now have a rpc_paths attribute that constrains XML-RPC operations to a limited set of URL paths; the default is to allow only '/' and '/RPC2'. Setting rpc_paths to None or an empty tuple disables this path checking.

The socket module now supports AF_NETLINK sockets on Linux, thanks to a patch from Philippe Biondi. Netlink sockets are a Linux-specific mechanism for communications between a user-space process and kernel code; an introductory article about them is at https://www.linuxjournal.com/article/7356. In Python code, netlink addresses are represented as a tuple of 2 integers, (pid, group_mask).

Two new methods on socket objects, recv_into(buffer) and recvfrom_into(buffer), store the received data in an object that supports the buffer protocol instead of returning the data as a string.
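A small sketch of recv_into() filling a preallocated buffer (written in modern Python syntax, with a local socketpair() standing in for a real network peer):

```python
import socket

# A connected pair of sockets substitutes for a remote connection.
a, b = socket.socketpair()
a.sendall(b"hello")

buf = bytearray(16)    # preallocated object supporting the buffer protocol
n = b.recv_into(buf)   # received bytes land directly in buf
print(bytes(buf[:n]))  # b'hello'

a.close()
b.close()
```

Because the data is written into buf in place, no intermediate string object is allocated for the received bytes.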
This means you can put the data directly into an array or a memory-mapped file.

Socket objects also gained getfamily(), gettype(), and getproto() accessor methods to retrieve the family, type, and protocol values for the socket.

New module: the spwd module provides functions for accessing the shadow password database on systems that support shadow passwords.

The struct module is now faster because it compiles format strings into Struct objects with pack() and unpack() methods. This is similar to how the re module lets you create compiled regular expression objects. You can still use the module-level pack() and unpack() functions; they'll create Struct objects and cache them. Or you can use Struct instances directly:

s = struct.Struct('ih3s')
data = s.pack(1972, 187, 'abc')
year, number, name = s.unpack(data)

You can also pack and unpack data to and from buffer objects directly using the pack_into(buffer, offset, v1, v2, ...) and unpack_from(buffer, offset) methods. This lets you store data directly into an array or a memory-mapped file.

(Struct objects were implemented by Bob Ippolito at the NeedForSpeed sprint. Support for buffer objects was added by Martin Blais, also at the NeedForSpeed sprint.)

The Python developers switched from CVS to Subversion during the 2.5 development process. Information about the exact build version is available as the sys.subversion variable, a 3-tuple of (interpreter-name, branch-name, revision-range). For example, at the time of writing my copy of 2.5 was reporting ('CPython', 'trunk', '45313:45315').

This information is also available to C extensions via the Py_GetBuildInfo() function that returns a string of build information like this: "trunk:45355:45356M, Apr 13 2006, 07:42:19".
(Contributed by Barry Warsaw.)

Another new function, sys._current_frames(), returns the current stack frames for all running threads as a dictionary mapping thread identifiers to the topmost stack frame currently active in that thread at the time the function is called. (Contributed by Tim Peters.)

The TarFile class in the tarfile module now has an extractall() method that extracts all members from the archive into the current working directory. It's also possible to set a different directory as the extraction target, and to unpack only a subset of the archive's members.

The compression used for a tarfile opened in stream mode can now be autodetected using the mode 'r|*'. (Contributed by Lars Gustäbel.)

The threading module now lets you set the stack size used when new threads are created. The stack_size([size]) function returns the currently configured stack size, and supplying the optional size parameter sets a new value. Not all platforms support changing the stack size, but Windows, POSIX threading, and OS/2 all do. (Contributed by Andrew MacIntyre.)

The unicodedata module has been updated to use version 4.1.0 of the Unicode character database. Version 3.2.0 is required by some specifications, so it's still available as unicodedata.ucd_3_2_0.

New module: the uuid module generates universally unique identifiers (UUIDs) according to RFC 4122. The RFC defines several different UUID versions that are generated from a starting string, from system properties, or purely randomly. This module contains a UUID class and functions named uuid1(), uuid3(), uuid4(), and uuid5() to generate different versions of UUID.
(Version 2 UUIDs are not specified in RFC 4122 and are not supported by this module.)

>>> import uuid
>>> # make a UUID based on the host ID and current time
>>> uuid.uuid1()
UUID('a8098c1a-f86e-11da-bd1a-00112444be1e')
>>> # make a UUID using an MD5 hash of a namespace UUID and a name
>>> uuid.uuid3(uuid.NAMESPACE_DNS, 'python.org')
UUID('6fa459ea-ee8a-3ca4-894e-db77e160355e')
>>> # make a random UUID
>>> uuid.uuid4()
UUID('16fd2706-8baf-433b-82eb-8c7fada847da')
>>> # make a UUID using a SHA-1 hash of a namespace UUID and a name
>>> uuid.uuid5(uuid.NAMESPACE_DNS, 'python.org')
UUID('886313e1-3b8a-5372-9b90-0c9aee199e5d')

(Contributed by Ka-Ping Yee.)

The weakref module's WeakKeyDictionary and WeakValueDictionary types gained new methods for iterating over the weak references contained in the dictionary. iterkeyrefs() and keyrefs() methods were added to WeakKeyDictionary, and itervaluerefs() and valuerefs() were added to WeakValueDictionary. (Contributed by Fred L. Drake, Jr.)

The webbrowser module received a number of enhancements. It's now usable as a script with python -m webbrowser, taking a URL as the argument; there are a number of switches to control the behaviour (-n for a new browser window, -t for a new tab). New module-level functions, open_new() and open_new_tab(), were added to support this. The module's open() function supports an additional feature, an autoraise parameter that signals whether to raise the open window when possible. A number of additional browsers were added to the supported list such as Firefox, Opera, Konqueror, and elinks. (Contributed by Oleg Broytmann and Georg Brandl.)

The xmlrpclib module now supports returning datetime objects for the XML-RPC date type. Supply use_datetime=True to the loads() function or the Unmarshaller class to enable this feature.
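A round-trip sketch of this feature, spelled with the xmlrpc.client module that replaced xmlrpclib in Python 3:

```python
import datetime
import xmlrpc.client

# Marshal a datetime into an XML-RPC payload, then unmarshal it back.
stamp = datetime.datetime(2006, 3, 7, 10, 13, 15)
payload = xmlrpc.client.dumps((stamp,))

# With use_datetime=True the date value comes back as datetime.datetime
# rather than the wrapper DateTime class.
params, method = xmlrpc.client.loads(payload, use_datetime=True)
print(params[0])  # 2006-03-07 10:13:15
```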
(Contributed by Skip Montanaro.)

The zipfile module now supports the ZIP64 version of the format, meaning that a .zip archive can now be larger than 4 GiB and can contain individual files larger than 4 GiB. (Contributed by Ronald Oussoren.)

The zlib module's Compress and Decompress objects now support a copy() method that makes a copy of the object's internal state and returns a new Compress or Decompress object. (Contributed by Chris AtLee.)

The ctypes package¶

The ctypes package, written by Thomas Heller, has been added to the standard library. ctypes lets you call arbitrary functions in shared libraries or DLLs. Long-time users may remember the dl module, which provides functions for loading shared libraries and calling functions in them. The ctypes package is much fancier.

To load a shared library or DLL, you must create an instance of the CDLL class and provide the name or path of the shared library or DLL. Once that's done, you can call arbitrary functions by accessing them as attributes of the CDLL object.

import ctypes

libc = ctypes.CDLL('libc.so.6')
result = libc.printf("Line of output\n")

Type constructors for the various C types are provided: c_int(), c_float(), c_double(), c_char_p() (equivalent to char *), and so forth. Unlike Python's types, the C versions are all mutable; you can assign to their value attribute to change the wrapped value. Python integers and strings will be automatically converted to the corresponding C types, but for other types you must call the correct type constructor. (And I mean must; getting it wrong will often result in the interpreter crashing with a segmentation fault.)

You shouldn't use c_char_p() with a Python string when the C function will be modifying the memory area, because Python strings are supposed to be immutable; breaking this rule will cause puzzling bugs.
When you need a modifiable memory area, use create_string_buffer():

s = "this is a string"
buf = ctypes.create_string_buffer(s)
libc.strfry(buf)

C functions are assumed to return integers, but you can set the restype attribute of the function object to change this:

>>> libc.atof('2.71828')
-1783957616
>>> libc.atof.restype = ctypes.c_double
>>> libc.atof('2.71828')
2.71828

ctypes also provides a wrapper for Python's C API as the ctypes.pythonapi object. This object does not release the global interpreter lock before calling a function, because the lock must be held when calling into the interpreter's code. There's a py_object type constructor that will create a PyObject * pointer. A simple usage:

import ctypes

d = {}
ctypes.pythonapi.PyObject_SetItem(ctypes.py_object(d),
                                  ctypes.py_object("abc"),
                                  ctypes.py_object(1))
# d is now {'abc': 1}.

Don't forget to use py_object(); if it's omitted you end up with a segmentation fault.

ctypes has been around for a while, but people still write and distribute hand-coded extension modules because you can't rely on ctypes being present. Perhaps developers will begin to write Python wrappers atop a library accessed through ctypes instead of extension modules, now that ctypes is included with core Python.

See also

- https://web.archive.org/web/20180410025338/http://starship.python.net/crew/theller/ctypes/

The pre-stdlib ctypes web page, with a tutorial, reference, and FAQ.

The documentation for the ctypes module.

The ElementTree package¶

A subset of Fredrik Lundh's ElementTree library for processing XML has been added to the standard library as xml.etree. The available modules are ElementTree, ElementPath, and ElementInclude from ElementTree 1.2.6. The cElementTree accelerator module is also included.

The rest of this section will provide a brief overview of using ElementTree.
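As a quick taste (spelled with the xml.etree.ElementTree import path used by the standard library), building an element from a string and inspecting its structure looks like this:

```python
from xml.etree import ElementTree as ET

# XML() builds an Element directly from a string literal.
root = ET.XML('<feed><entry title="first"/><entry title="second"/></feed>')

print(root.tag)              # feed
print(len(root))             # 2  (number of child elements)
print(root[0].get('title'))  # first
```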
Full documentation for ElementTree is available at https://web.archive.org/web/20201124024954/http://effbot.org/zone/element-index.htm.

ElementTree represents an XML document as a tree of element nodes. The text content of the document is stored as the text and tail attributes of each node. (This is one of the major differences between ElementTree and the Document Object Model; in the DOM there are many different types of node, including TextNode.)

The most commonly used parsing function is parse(), which takes either a string (assumed to contain a filename) or a file-like object and returns an ElementTree instance:

from xml.etree import ElementTree as ET

tree = ET.parse('ex-1.xml')

feed = urllib.urlopen(
    'http://planet.python.org/rss10.xml')
tree = ET.parse(feed)

Once you have an ElementTree instance, you can call its getroot() method to get the root Element node.

There's also an XML() function that takes a string literal and returns an Element node (not an ElementTree). This function provides a tidy way to incorporate XML fragments, approaching the convenience of an XML literal:

svg = ET.XML("""<svg width="10px" version="1.0">
             </svg>""")
svg.set('height', '320px')
svg.append(elem1)

Each XML element supports some dictionary-like and some list-like access methods. Dictionary-like operations are used to access attribute values, and list-like operations are used to access child nodes.

| Operation | Result |
|---|---|
| elem[n] | Returns n'th child element. |
| elem[m:n] | Returns list of m'th through n'th child elements. |
| len(elem) | Returns number of child elements. |
| list(elem) | Returns list of child elements. |
| elem.append(elem2) | Adds elem2 as a child. |
| elem.insert(index, elem2) | Inserts elem2 at the specified location. |
| del elem[n] | Deletes n'th child element. |
| elem.keys() | Returns list of attribute names. |
| elem.get(name) | Returns value of attribute name. |
| elem.set(name, value) | Sets new value for attribute name. |
| elem.attrib | Retrieves the dictionary containing attributes. |
| del elem.attrib[name] | Deletes attribute name. |

Comments and processing instructions are also represented as Element nodes. To check if a node is a comment or a processing instruction:

if elem.tag is ET.Comment:
    ...
elif elem.tag is ET.ProcessingInstruction:
    ...

To generate XML output, you should call the ElementTree.write() method. Like parse(), it can take either a string or a file-like object:

# Encoding is US-ASCII
tree.write('output.xml')

# Encoding is UTF-8
f = open('output.xml', 'w')
tree.write(f, encoding='utf-8')

(Caution: the default encoding used for output is ASCII. For general XML work, where an element's name may contain arbitrary Unicode characters, ASCII isn't a very useful encoding because it will raise an exception if an element's name contains any characters with values greater than 127. Therefore, it's best to specify a different encoding such as UTF-8 that can handle any Unicode character.)

This section is only a partial description of the ElementTree interfaces. Please read the package's official documentation for more details.

See also

- https://web.archive.org/web/20201124024954/http://effbot.org/zone/element-index.htm

Official documentation for ElementTree.

The hashlib package¶

A new hashlib module, written by Gregory P. Smith, has been added to replace the md5 and sha modules. hashlib adds support for additional secure hashes (SHA-224, SHA-256, SHA-384, and SHA-512). When available, the module uses OpenSSL for fast platform optimized implementations of algorithms.

The old md5 and sha modules still exist as wrappers around hashlib to preserve backwards compatibility. The new module's interface is very close to that of the old modules, but not identical.
The most significant difference is that the constructor functions for creating new hashing objects are named differently.

# Old versions
h = md5.md5()
h = md5.new()

# New version
h = hashlib.md5()

# Old versions
h = sha.sha()
h = sha.new()

# New version
h = hashlib.sha1()

# Hashes that weren't previously available
h = hashlib.sha224()
h = hashlib.sha256()
h = hashlib.sha384()
h = hashlib.sha512()

# Alternative form
h = hashlib.new('md5')  # Provide algorithm as a string

Once a hash object has been created, its methods are the same as before: update(string) hashes the specified string into the current digest state, digest() and hexdigest() return the digest value as a binary string or a string of hex digits, and copy() returns a new hashing object with the same digest state.

See also

The documentation for the hashlib module.

The sqlite3 package¶

The pysqlite module (https://www.pysqlite.org), a wrapper for the SQLite embedded database, has been added to the standard library under the package name sqlite3.

SQLite is a C library that provides a lightweight disk-based database that doesn't require a separate server process and allows accessing the database using a nonstandard variant of the SQL query language. Some applications can use SQLite for internal data storage. It's also possible to prototype an application using SQLite and then port the code to a larger database such as PostgreSQL or Oracle.

pysqlite was written by Gerhard Häring and provides a SQL interface compliant with the DB-API 2.0 specification described by PEP 249.

If you're compiling the Python source yourself, note that the source tree doesn't include the SQLite code, only the wrapper module.
You\u2019ll need to have the SQLite libraries and headers installed before compiling Python, and the build process will compile the module when the necessary headers are available.\nTo use the module, you must first create a Connection\nobject that\nrepresents the database. Here the data will be stored in the\n/tmp/example\nfile:\nconn = sqlite3.connect('/tmp/example')\nYou can also supply the special name :memory:\nto create a database in RAM.\nOnce you have a Connection\n, you can create a Cursor\nobject\nand call its execute()\nmethod to perform SQL commands:\nc = conn.cursor()\n# Create table\nc.execute('''create table stocks\n(date text, trans text, symbol text,\nqty real, price real)''')\n# Insert a row of data\nc.execute(\"\"\"insert into stocks\nvalues ('2006-01-05','BUY','RHAT',100,35.14)\"\"\")\nUsually your SQL operations will need to use values from Python variables. You shouldn\u2019t assemble your query using Python\u2019s string operations because doing so is insecure; it makes your program vulnerable to an SQL injection attack.\nInstead, use the DB-API\u2019s parameter substitution. Put ?\nas a placeholder\nwherever you want to use a value, and then provide a tuple of values as the\nsecond argument to the cursor\u2019s execute()\nmethod. (Other database modules\nmay use a different placeholder, such as %s\nor :1\n.) For example:\n# Never do this -- insecure!\nsymbol = 'IBM'\nc.execute(\"... 
where symbol = '%s'\" % symbol)\n# Do this instead\nt = (symbol,)\nc.execute('select * from stocks where symbol=?', t)\n# Larger example\nfor t in (('2006-03-28', 'BUY', 'IBM', 1000, 45.00),\n('2006-04-05', 'BUY', 'MSOFT', 1000, 72.00),\n('2006-04-06', 'SELL', 'IBM', 500, 53.00),\n):\nc.execute('insert into stocks values (?,?,?,?,?)', t)\nTo retrieve data after executing a SELECT statement, you can either treat the\ncursor as an iterator, call the cursor\u2019s fetchone()\nmethod to retrieve a\nsingle matching row, or call fetchall()\nto get a list of the matching\nrows.\nThis example uses the iterator form:\n>>> c = conn.cursor()\n>>> c.execute('select * from stocks order by price')\n>>> for row in c:\n... print row\n...\n(u'2006-01-05', u'BUY', u'RHAT', 100, 35.140000000000001)\n(u'2006-03-28', u'BUY', u'IBM', 1000, 45.0)\n(u'2006-04-06', u'SELL', u'IBM', 500, 53.0)\n(u'2006-04-05', u'BUY', u'MSOFT', 1000, 72.0)\n>>>\nFor more information about the SQL dialect supported by SQLite, see https://www.sqlite.org.\nSee also\n- https://www.pysqlite.org\nThe pysqlite web page.\n- https://www.sqlite.org\nThe SQLite web page; the documentation describes the syntax and the available data types for the supported SQL dialect.\nThe documentation for the sqlite3\nmodule.\n- PEP 249 - Database API Specification 2.0\nPEP written by Marc-Andr\u00e9 Lemburg.\nThe wsgiref package\u00b6\nThe Web Server Gateway Interface (WSGI) v1.0 defines a standard interface\nbetween web servers and Python web applications and is described in PEP 333.\nThe wsgiref\npackage is a reference implementation of the WSGI\nspecification.\nThe package includes a basic HTTP server that will run a WSGI application; this server is useful for debugging but isn\u2019t intended for production use. 
Setting up a server takes only a few lines of code:\nfrom wsgiref import simple_server\nwsgi_app = ...\nhost = ''\nport = 8000\nhttpd = simple_server.make_server(host, port, wsgi_app)\nhttpd.serve_forever()\nSee also\n- https://web.archive.org/web/20160331090247/http://wsgi.readthedocs.org/en/latest/\nA central web site for WSGI-related resources.\n- PEP 333 - Python Web Server Gateway Interface v1.0\nPEP written by Phillip J. Eby.\nBuild and C API Changes\u00b6\nChanges to Python\u2019s build process and to the C API include:\nThe Python source tree was converted from CVS to Subversion, in a complex migration procedure that was supervised and flawlessly carried out by Martin von L\u00f6wis. The procedure was developed as PEP 347.\nCoverity, a company that markets a source code analysis tool called Prevent, provided the results of their examination of the Python source code. The analysis found about 60 bugs that were quickly fixed. Many of the bugs were refcounting problems, often occurring in error-handling code. See https://scan.coverity.com for the statistics.\nThe largest change to the C API came from PEP 353, which modifies the interpreter to use a\nPy_ssize_t\ntype definition instead of int. See the earlier section PEP 353: Using ssize_t as the index type for a discussion of this change.The design of the bytecode compiler has changed a great deal, no longer generating bytecode by traversing the parse tree. 
Instead the parse tree is converted to an abstract syntax tree (or AST), and it is the abstract syntax tree that's traversed to produce the bytecode.

It's possible for Python code to obtain AST objects by using the compile() built-in and specifying _ast.PyCF_ONLY_AST as the value of the flags parameter:

from _ast import PyCF_ONLY_AST

ast = compile("""a=0
for i in range(10):
    a += i
""", "<string>", 'exec', PyCF_ONLY_AST)

assignment = ast.body[0]
for_loop = ast.body[1]

No official documentation has been written for the AST code yet, but PEP 339 discusses the design. To start learning about the code, read the definition of the various AST nodes in Parser/Python.asdl. A Python script reads this file and generates a set of C structure definitions in Include/Python-ast.h. The PyParser_ASTFromString() and PyParser_ASTFromFile() functions, defined in Include/pythonrun.h, take Python source as input and return the root of an AST representing the contents. This AST can then be turned into a code object by PyAST_Compile(). For more information, read the source code, and then ask questions on python-dev.

The AST code was developed under Jeremy Hylton's management, and implemented by (in alphabetical order) Brett Cannon, Nick Coghlan, Grant Edwards, John Ehresman, Kurt Kaiser, Neal Norwitz, Tim Peters, Armin Rigo, and Neil Schemenauer, plus the participants in a number of AST sprints at conferences such as PyCon.

Evan Jones's patch to obmalloc, first described in a talk at PyCon DC 2005, was applied. Python 2.4 allocated small objects in 256K-sized arenas, but never freed arenas. With this patch, Python will free arenas when they're empty. The net effect is that on some platforms, when you allocate many objects, Python's memory usage may actually drop when you delete them and the memory may be returned to the operating system.
(Implemented by Evan Jones, and reworked by Tim Peters.)\nNote that this change means extension modules must be more careful when allocating memory. Python\u2019s API has many different functions for allocating memory that are grouped into families. For example,\nPyMem_Malloc()\n, PyMem_Realloc()\n, and PyMem_Free()\nare one family that allocates raw memory, while PyObject_Malloc()\n, PyObject_Realloc()\n, and PyObject_Free()\nare another family that\u2019s supposed to be used for creating Python objects.\nPreviously these different families all reduced to the platform\u2019s\nmalloc()\nand free()\nfunctions. This meant it didn\u2019t matter if you got things wrong and allocated memory with the PyMem\nfunction but freed it with the PyObject\nfunction. With 2.5\u2019s changes to obmalloc, these families now do different things and mismatches will probably result in a segfault. You should carefully test your C extension modules with Python 2.5.\nThe built-in set types now have an official C API. Call\nPySet_New()\nand PyFrozenSet_New()\nto create a new set, PySet_Add()\nand PySet_Discard()\nto add and remove elements, and PySet_Contains()\nand PySet_Size()\nto examine the set\u2019s state. (Contributed by Raymond Hettinger.)\nC code can now obtain information about the exact revision of the Python interpreter by calling the\nPy_GetBuildInfo()\nfunction that returns a string of build information like this: \"trunk:45355:45356M, Apr 13 2006, 07:42:19\"\n. (Contributed by Barry Warsaw.)\nTwo new macros can be used to indicate C functions that are local to the current file so that a faster calling convention can be used.\nPy_LOCAL\ndeclares the function as returning a value of the specified type and uses a fast-calling qualifier. Py_LOCAL_INLINE\ndoes the same thing and also requests the function be inlined. 
If the macro PY_LOCAL_AGGRESSIVE\nis defined before python.h\nis included, a set of more aggressive optimizations are enabled for the module; you should benchmark the results to find out if these optimizations actually make the code faster. (Contributed by Fredrik Lundh at the NeedForSpeed sprint.)\nPyErr_NewException(name, base, dict)\ncan now accept a tuple of base classes as its base argument. (Contributed by Georg Brandl.)\nThe\nPyErr_Warn()\nfunction for issuing warnings is now deprecated in favour of PyErr_WarnEx(category, message, stacklevel)\n, which lets you specify the number of stack frames separating this function and the caller. A stacklevel of 1 is the function calling PyErr_WarnEx()\n, 2 is the function above that, and so forth. (Added by Neal Norwitz.)\nThe CPython interpreter is still written in C, but the code can now be compiled with a C++ compiler without errors. (Implemented by Anthony Baxter, Martin von L\u00f6wis, Skip Montanaro.)\nThe\nPyRange_New()\nfunction was removed. It was never documented, never used in the core code, and had dangerously lax error checking. In the unlikely case that your extensions were using it, you can replace it by something like the following:\nrange = PyObject_CallFunction((PyObject*) &PyRange_Type, \"lll\", start, stop, step);\nPort-Specific Changes\u00b6\nMacOS X (10.3 and higher): dynamic loading of modules now uses the\ndlopen()\nfunction instead of MacOS-specific functions.\nMacOS X: an\n--enable-universalsdk\nswitch was added to the configure script that compiles the interpreter as a universal binary able to run on both PowerPC and Intel processors. (Contributed by Ronald Oussoren; bpo-2573.)\nWindows:\n.dll\nis no longer supported as a filename extension for extension modules. .pyd\nis now the only filename extension that will be searched for.\nPorting to Python 2.5\u00b6\nThis section lists previously described changes that may require changes to your code:\nASCII is now the default encoding for modules. 
It\u2019s now a syntax error if a module contains string literals with 8-bit characters but doesn\u2019t have an encoding declaration. In Python 2.4 this triggered a warning, not a syntax error.\nPreviously, the\ngi_frame\nattribute of a generator was always a frame object. Because of the PEP 342 changes described in section PEP 342: New Generator Features, it\u2019s now possible for gi_frame\nto be None\n.\nA new warning,\nUnicodeWarning\n, is triggered when you attempt to compare a Unicode string and an 8-bit string that can\u2019t be converted to Unicode using the default ASCII encoding. Previously such comparisons would raise a UnicodeDecodeError\nexception.\nLibrary: the\ncsv\nmodule is now stricter about multi-line quoted fields. If your files contain newlines embedded within fields, the input should be split into lines in a manner which preserves the newline characters.\nLibrary: the\nlocale\nmodule\u2019s format()\nfunction would previously accept any string as long as no more than one %char specifier appeared. In Python 2.5, the argument must be exactly one %char specifier with no surrounding text.\nLibrary: The\npickle\nand cPickle\nmodules no longer accept a return value of None\nfrom the __reduce__()\nmethod; the method must return a tuple of arguments instead. The modules also no longer accept the deprecated bin keyword parameter.\nLibrary: The\nSimpleXMLRPCServer\nand DocXMLRPCServer\nclasses now have a rpc_paths\nattribute that constrains XML-RPC operations to a limited set of URL paths; the default is to allow only '/'\nand '/RPC2'\n. Setting rpc_paths\nto None\nor an empty tuple disables this path checking.\nC API: Many functions now use\nPy_ssize_t\ninstead of int to allow processing more data on 64-bit machines. Extension code may need to make the same change to avoid warnings and to support 64-bit machines. 
See the earlier section PEP 353: Using ssize_t as the index type for a discussion of this change.\nC API: The obmalloc changes mean that you must be careful to not mix usage of the\nPyMem_*\nand PyObject_*\nfamilies of functions. Memory allocated with one family\u2019s *_Malloc\nmust be freed with the corresponding family\u2019s *_Free\nfunction.\nAcknowledgements\u00b6\nThe author would like to thank the following people for offering suggestions, corrections and assistance with various drafts of this article: Georg Brandl, Nick Coghlan, Phillip J. Eby, Lars Gust\u00e4bel, Raymond Hettinger, Ralf W. Grosse-Kunstleve, Kent Johnson, Iain Lowe, Martin von L\u00f6wis, Fredrik Lundh, Andrew McNamara, Skip Montanaro, Gustavo Niemeyer, Paul Prescod, James Pryor, Mike Rovner, Scott Weikart, Barry Warsaw, Thomas Wouters.", "code_snippets": [
"\n"], "language": "Python", "source": "python.org", "token_count": 21288}
{"url": "https://docs.python.org/3/whatsnew/2.6.html", "title": "What\u2019s New in Python 2.6", "content": "What\u2019s New in Python 2.6\u00b6\n- Author:\nA.M. Kuchling (amk at amk.ca)\nThis article explains the new features in Python 2.6, released on October 1, 2008. The release schedule is described in PEP 361.\nThe major theme of Python 2.6 is preparing the migration path to\nPython 3.0, a major redesign of the language. Whenever possible,\nPython 2.6 incorporates new features and syntax from 3.0 while\nremaining compatible with existing code by not removing older features\nor syntax. 
When it\u2019s not possible to do that, Python 2.6 tries to do\nwhat it can, adding compatibility functions in a\nfuture_builtins\nmodule and a -3\nswitch to warn about\nusages that will become unsupported in 3.0.\nSome significant new packages have been added to the standard library,\nsuch as the multiprocessing\nand json\nmodules, but\nthere aren\u2019t many new features that aren\u2019t related to Python 3.0 in\nsome way.\nPython 2.6 also sees a number of improvements and bugfixes throughout the source. A search through the change logs finds there were 259 patches applied and 612 bugs fixed between Python 2.5 and 2.6. Both figures are likely to be underestimates.\nThis article doesn\u2019t attempt to provide a complete specification of the new features, but instead provides a convenient overview. For full details, you should refer to the documentation for Python 2.6. If you want to understand the rationale for the design and implementation, refer to the PEP for a particular new feature. Whenever possible, \u201cWhat\u2019s New in Python\u201d links to the bug/patch item for each change.\nPython 3.0\u00b6\nThe development cycle for Python versions 2.6 and 3.0 was synchronized, with the alpha and beta releases for both versions being made on the same days. The development of 3.0 has influenced many features in 2.6.\nPython 3.0 is a far-ranging redesign of Python that breaks compatibility with the 2.x series. This means that existing Python code will need some conversion in order to run on Python 3.0. However, not all the changes in 3.0 necessarily break compatibility. In cases where new features won\u2019t cause existing code to break, they\u2019ve been backported to 2.6 and are described in this document in the appropriate place. 
Some of the 3.0-derived features are:\nA\n__complex__()\nmethod for converting objects to a complex number.\nAlternate syntax for catching exceptions:\nexcept TypeError as exc\n.\nThe addition of\nfunctools.reduce()\nas a synonym for the built-in reduce()\nfunction.\nPython 3.0 adds several new built-in functions and changes the\nsemantics of some existing builtins. Functions that are new in 3.0\nsuch as bin()\nhave simply been added to Python 2.6, but existing\nbuiltins haven\u2019t been changed; instead, the future_builtins\nmodule has versions with the new 3.0 semantics. Code written to be\ncompatible with 3.0 can do from future_builtins import hex, map\nas\nnecessary.\nA new command-line switch, -3\n, enables warnings\nabout features that will be removed in Python 3.0. You can run code\nwith this switch to see how much work will be necessary to port\ncode to 3.0. The value of this switch is available\nto Python code as the boolean variable sys.py3kwarning\n,\nand to C extension code as Py_Py3kWarningFlag\n.\nChanges to the Development Process\u00b6\nWhile 2.6 was being developed, the Python development process underwent two significant changes: we switched from SourceForge\u2019s issue tracker to a customized Roundup installation, and the documentation was converted from LaTeX to reStructuredText.\nNew Issue Tracker: Roundup\u00b6\nFor a long time, the Python developers had been growing increasingly annoyed by SourceForge\u2019s bug tracker. SourceForge\u2019s hosted solution doesn\u2019t permit much customization; for example, it wasn\u2019t possible to customize the life cycle of issues.\nThe infrastructure committee of the Python Software Foundation therefore posted a call for issue trackers, asking volunteers to set up different products and import some of the bugs and patches from SourceForge. Four different trackers were examined: Jira, Launchpad, Roundup, and Trac. The committee eventually settled on Jira and Roundup as the two candidates. 
Jira is a commercial product that offers no-cost hosted instances to free-software projects; Roundup is an open-source project that requires volunteers to administer it and a server to host it.\nAfter posting a call for volunteers, a new Roundup installation was set up at https://bugs.python.org. One installation of Roundup can host multiple trackers, and this server now also hosts issue trackers for Jython and for the Python web site. It will surely find other uses in the future. Where possible, this edition of \u201cWhat\u2019s New in Python\u201d links to the bug/patch item for each change.\nHosting of the Python bug tracker is kindly provided by\nUpfront Systems\nof Stellenbosch, South Africa. Martin von L\u00f6wis put a\nlot of effort into importing existing bugs and patches from\nSourceForge; his scripts for this import operation are at\nhttps://svn.python.org/view/tracker/importer/\nand may be useful to\nother projects wishing to move from SourceForge to Roundup.\nSee also\n- https://bugs.python.org\nThe Python bug tracker.\n- https://bugs.jython.org:\nThe Jython bug tracker.\n- https://roundup.sourceforge.io/\nRoundup downloads and documentation.\n- https://svn.python.org/view/tracker/importer/\nMartin von L\u00f6wis\u2019s conversion scripts.\nNew Documentation Format: reStructuredText Using Sphinx\u00b6\nThe Python documentation was written using LaTeX since the project started around 1989. In the 1980s and early 1990s, most documentation was printed out for later study, not viewed online. LaTeX was widely used because it provided attractive printed output while remaining straightforward to write once the basic rules of the markup were learned.\nToday LaTeX is still used for writing publications destined for printing, but the landscape for programming tools has shifted. We no longer print out reams of documentation; instead, we browse through it online and HTML has become the most important format to support. 
Unfortunately, converting LaTeX to HTML is fairly complicated and Fred L. Drake Jr., the long-time Python documentation editor, spent a lot of time maintaining the conversion process. Occasionally people would suggest converting the documentation into SGML and later XML, but performing a good conversion is a major task and no one ever committed the time required to finish the job.\nDuring the 2.6 development cycle, Georg Brandl put a lot of effort into building a new toolchain for processing the documentation. The resulting package is called Sphinx, and is available from https://www.sphinx-doc.org/.\nSphinx concentrates on HTML output, producing attractively styled and modern HTML; printed output is still supported through conversion to LaTeX. The input format is reStructuredText, a markup syntax supporting custom extensions and directives that is commonly used in the Python community.\nSphinx is a standalone package that can be used for writing, and almost two dozen other projects (listed on the Sphinx web site) have adopted Sphinx as their documentation tool.\nSee also\n- Documenting Python\nDescribes how to write for Python\u2019s documentation.\n- Sphinx\nDocumentation and code for the Sphinx toolchain.\n- Docutils\nThe underlying reStructuredText parser and toolset.\nPEP 343: The \u2018with\u2019 statement\u00b6\nThe previous version, Python 2.5, added the \u2018with\n\u2019\nstatement as an optional feature, to be enabled by a from __future__\nimport with_statement\ndirective. In 2.6 the statement no longer needs to\nbe specially enabled; this means that with\nis now always a\nkeyword. The rest of this section is a copy of the corresponding\nsection from the \u201cWhat\u2019s New in Python 2.5\u201d document; if you\u2019re\nfamiliar with the \u2018with\n\u2019 statement\nfrom Python 2.5, you can skip this section.\nThe \u2018with\n\u2019 statement clarifies code that previously would use\ntry...finally\nblocks to ensure that clean-up code is executed. 
In this\nsection, I\u2019ll discuss the statement as it will commonly be used. In the next\nsection, I\u2019ll examine the implementation details and show how to write objects\nfor use with this statement.\nThe \u2018with\n\u2019 statement is a control-flow structure whose basic\nstructure is:\nwith expression [as variable]:\n    with-block\nThe expression is evaluated, and it should result in an object that supports the\ncontext management protocol (that is, has __enter__()\nand __exit__()\nmethods).\nThe object\u2019s __enter__()\nis called before with-block is executed and\ntherefore can run set-up code. It also may return a value that is bound to the\nname variable, if given. (Note carefully that variable is not assigned\nthe result of expression.)\nAfter execution of the with-block is finished, the object\u2019s __exit__()\nmethod is called, even if the block raised an exception, and can therefore run\nclean-up code.\nSome standard Python objects now support the context management protocol and can\nbe used with the \u2018with\n\u2019 statement. File objects are one example:\nwith open('/etc/passwd', 'r') as f:\n    for line in f:\n        print line\n    ... 
more processing code ...\nAfter this statement has executed, the file object in f will have been\nautomatically closed, even if the for\nloop raised an exception\npart-way through the block.\nNote\nIn this case, f is the same object created by open()\n, because\n__enter__()\nreturns self.\nThe threading\nmodule\u2019s locks and condition variables also support the\n\u2018with\n\u2019 statement:\nlock = threading.Lock()\nwith lock:\n    # Critical section of code\n    ...\nThe lock is acquired before the block is executed and always released once the block is complete.\nThe localcontext()\nfunction in the decimal\nmodule makes\nit easy to save and restore the current decimal context, which encapsulates\nthe desired precision and rounding characteristics for computations:\nfrom decimal import Decimal, Context, localcontext\n# Displays with default precision of 28 digits\nv = Decimal('578')\nprint v.sqrt()\nwith localcontext(Context(prec=16)):\n    # All code in this block uses a precision of 16 digits.\n    # The original context is restored on exiting the block.\n    print v.sqrt()\nWriting Context Managers\u00b6\nUnder the hood, the \u2018with\n\u2019 statement is fairly complicated. Most\npeople will only use \u2018with\n\u2019 in company with existing objects and\ndon\u2019t need to know these details, so you can skip the rest of this section if\nyou like. Authors of new objects will need to understand the details of the\nunderlying implementation and should keep reading.\nA high-level explanation of the context management protocol is:\nThe expression is evaluated and should result in an object called a \u201ccontext manager\u201d. The context manager must have\n__enter__()\nand __exit__()\nmethods.\nThe context manager\u2019s\n__enter__()\nmethod is called. The value returned is assigned to VAR. 
If no as VAR\nclause is present, the value is simply discarded.\nThe code in BLOCK is executed.\nIf BLOCK raises an exception, the context manager\u2019s\n__exit__()\nmethod is called with three arguments, the exception details (type, value, traceback\n, the same values returned by sys.exc_info()\n, which can also be None\nif no exception occurred). The method\u2019s return value controls whether an exception is re-raised: any false value re-raises the exception, and True\nwill result in suppressing it. You\u2019ll only rarely want to suppress the exception, because if you do the author of the code containing the \u2018with\n\u2019 statement will never realize anything went wrong.\nIf BLOCK didn\u2019t raise an exception, the\n__exit__()\nmethod is still called, but type, value, and traceback are all None\n.\nLet\u2019s think through an example. I won\u2019t present detailed code but will only sketch the methods necessary for a database that supports transactions.\n(For people unfamiliar with database terminology: a set of changes to the database are grouped into a transaction. Transactions can be either committed, meaning that all the changes are written into the database, or rolled back, meaning that the changes are all discarded and the database is unchanged. See any database textbook for more information.)\nLet\u2019s assume there\u2019s an object representing a database connection. Our goal will be to let the user write code like this:\ndb_connection = DatabaseConnection()\nwith db_connection as cursor:\n    cursor.execute('insert into ...')\n    cursor.execute('delete from ...')\n    # ... more operations ...\nThe transaction should be committed if the code in the block runs flawlessly or\nrolled back if there\u2019s an exception. 
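Before fleshing out the database example, the protocol steps above can be sketched with a minimal, hypothetical context manager. The Suppress class below is illustrative only (not part of any API described here); it is written in Python 3 syntax so it runs today:

```python
class Suppress:
    "Minimal context manager that swallows one exception type."

    def __init__(self, exc_type):
        self.exc_type = exc_type

    def __enter__(self):
        # The return value is what an "as VAR" clause would bind.
        return self

    def __exit__(self, type, value, traceback):
        # Returning a true value suppresses the exception;
        # any false value (including the default None) re-raises it.
        return type is not None and issubclass(type, self.exc_type)

with Suppress(ZeroDivisionError):
    1 / 0
print("still running")  # the ZeroDivisionError was suppressed
```

A non-matching exception makes __exit__() return False, so it propagates normally, exactly as the protocol description above requires.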
Here\u2019s the basic interface for\nDatabaseConnection\nthat I\u2019ll assume:\nclass DatabaseConnection:\n    # Database interface\n    def cursor(self):\n        \"Returns a cursor object and starts a new transaction\"\n    def commit(self):\n        \"Commits current transaction\"\n    def rollback(self):\n        \"Rolls back current transaction\"\nThe __enter__()\nmethod is pretty easy, having only to start a new\ntransaction. For this application the resulting cursor object would be a useful\nresult, so the method will return it. The user can then add as cursor\nto\ntheir \u2018with\n\u2019 statement to bind the cursor to a variable name.\nclass DatabaseConnection:\n    ...\n    def __enter__(self):\n        # Code to start a new transaction\n        cursor = self.cursor()\n        return cursor\nThe __exit__()\nmethod is the most complicated because it\u2019s where most of\nthe work has to be done. The method has to check if an exception occurred. If\nthere was no exception, the transaction is committed. The transaction is rolled\nback if there was an exception.\nIn the code below, execution will just fall off the end of the function,\nreturning the default value of None\n. None\nis false, so the exception\nwill be re-raised automatically. If you wished, you could be more explicit and\nadd a return\nstatement at the marked location.\nclass DatabaseConnection:\n    ...\n    def __exit__(self, type, value, tb):\n        if tb is None:\n            # No exception, so commit\n            self.commit()\n        else:\n            # Exception occurred, so rollback.\n            self.rollback()\n            # return False\nThe contextlib module\u00b6\nThe contextlib\nmodule provides some functions and a decorator that\nare useful when writing objects for use with the \u2018with\n\u2019 statement.\nThe decorator is called contextmanager()\n, and lets you write\na single generator function instead of defining a new class. The generator\nshould yield exactly one value. 
The code up to the yield\nwill be\nexecuted as the __enter__()\nmethod, and the value yielded will\nbe the method\u2019s return value that will get bound to the variable in the\n\u2018with\n\u2019 statement\u2019s as\nclause, if any. The code after\nthe yield\nwill be executed in the __exit__()\nmethod.\nAny exception raised in the block will be raised by the yield\nstatement.\nUsing this decorator, our database example from the previous section could be written as:\nfrom contextlib import contextmanager\n@contextmanager\ndef db_transaction(connection):\n    cursor = connection.cursor()\n    try:\n        yield cursor\n    except:\n        connection.rollback()\n        raise\n    else:\n        connection.commit()\ndb = DatabaseConnection()\nwith db_transaction(db) as cursor:\n    ...\nThe contextlib\nmodule also has a nested(mgr1, mgr2, ...)\nfunction\nthat combines a number of context managers so you don\u2019t need to write nested\n\u2018with\n\u2019 statements. In this example, the single \u2018with\n\u2019\nstatement both starts a database transaction and acquires a thread lock:\nlock = threading.Lock()\nwith nested(db_transaction(db), lock) as (cursor, locked):\n    ...\nFinally, the closing()\nfunction returns its argument so that it can be\nbound to a variable, and calls the argument\u2019s .close()\nmethod at the end\nof the block.\nimport urllib, sys\nfrom contextlib import closing\nwith closing(urllib.urlopen('http://www.yahoo.com')) as f:\n    for line in f:\n        sys.stdout.write(line)\nSee also\n- PEP 343 - The \u201cwith\u201d statement\nPEP written by Guido van Rossum and Nick Coghlan; implemented by Mike Bland, Guido van Rossum, and Neal Norwitz. 
The PEP shows the code generated for a \u2018\nwith\n\u2019 statement, which can be helpful in learning how the statement works.\nThe documentation for the contextlib\nmodule.\nPEP 366: Explicit Relative Imports From a Main Module\u00b6\nPython\u2019s -m\nswitch allows running a module as a script.\nWhen you ran a module that was located inside a package, relative\nimports didn\u2019t work correctly.\nThe fix for Python 2.6 adds a module.__package__\nattribute.\nWhen this attribute is present, relative imports will be\nrelative to the value of this attribute instead of the\n__name__\nattribute.\nPEP 302-style importers can then set __package__\nas necessary.\nThe runpy\nmodule that implements the -m\nswitch now\ndoes this, so relative imports will now work correctly in scripts\nrunning from inside a package.\nPEP 370: Per-user site-packages\nDirectory\u00b6\nWhen you run Python, the module search path sys.path\nusually\nincludes a directory whose path ends in \"site-packages\"\n. This\ndirectory is intended to hold locally installed packages available to\nall users using a machine or a particular site installation.\nPython 2.6 introduces a convention for user-specific site directories. The directory varies depending on the platform:\nUnix and Mac OS X:\n~/.local/\nWindows:\n%APPDATA%/Python\nWithin this directory, there will be version-specific subdirectories,\nsuch as lib/python2.6/site-packages\non Unix/Mac OS and\nPython26/site-packages\non Windows.\nIf you don\u2019t like the default directory, it can be overridden by an\nenvironment variable. PYTHONUSERBASE\nsets the root\ndirectory used for all Python versions supporting this feature. On\nWindows, the directory for application-specific data can be changed by\nsetting the APPDATA\nenvironment variable. 
You can also\nmodify the site.py\nfile for your Python installation.\nThe feature can be disabled entirely by running Python with the\n-s\noption or setting the PYTHONNOUSERSITE\nenvironment variable.\nSee also\n- PEP 370 - Per-user\nsite-packages\nDirectory PEP written and implemented by Christian Heimes.\nPEP 371: The multiprocessing\nPackage\u00b6\nThe new multiprocessing\npackage lets Python programs create new\nprocesses that will perform a computation and return a result to the\nparent. The parent and child processes can communicate using queues\nand pipes, synchronize their operations using locks and semaphores,\nand can share simple arrays of data.\nThe multiprocessing\nmodule started out as an exact emulation of\nthe threading\nmodule using processes instead of threads. That\ngoal was discarded along the path to Python 2.6, but the general\napproach of the module is still similar. The fundamental class\nis the Process\n, which is passed a callable object and\na collection of arguments. The start()\nmethod\nsets the callable running in a subprocess, after which you can call\nthe is_alive()\nmethod to check whether the\nsubprocess is still running and the join()\nmethod to wait for the process to exit.\nHere\u2019s a simple example where the subprocess will calculate a factorial. 
The function doing the calculation is written strangely so that it takes significantly longer when the input argument is a multiple of 4.\nimport time\nfrom multiprocessing import Process, Queue\ndef factorial(queue, N):\n    \"Compute a factorial.\"\n    # If N is a multiple of 4, this function will take much longer.\n    if (N % 4) == 0:\n        time.sleep(.05 * N/4)\n    # Calculate the result\n    fact = 1L\n    for i in range(1, N+1):\n        fact = fact * i\n    # Put the result on the queue\n    queue.put(fact)\nif __name__ == '__main__':\n    queue = Queue()\n    N = 5\n    p = Process(target=factorial, args=(queue, N))\n    p.start()\n    p.join()\n    result = queue.get()\n    print 'Factorial', N, '=', result\nA Queue\nis used to communicate the result of the factorial.\nThe Queue\nobject is stored in a global variable.\nThe child process will use the value of the variable when the child\nwas created; because it\u2019s a Queue\n, parent and child can use\nthe object to communicate. (If the parent were to change the value of\nthe global variable, the child\u2019s value would be unaffected, and vice\nversa.)\nTwo other classes, Pool\nand\nManager\n, provide higher-level interfaces.\nPool\nwill create a fixed number of worker\nprocesses, and requests can then be distributed to the workers by calling\napply()\nor\napply_async()\nto add a single request, and\nmap()\nor\nmap_async()\nto add a number of\nrequests. The following code uses a Pool\nto\nspread requests across 5 worker processes and retrieve a list of results:\nfrom multiprocessing import Pool\ndef factorial(N):\n    \"Compute a factorial.\"\n    ...\np = Pool(5)\nresult = p.map(factorial, range(1, 1000, 10))\nfor v in result:\n    print v\nThis produces the following output:\n1\n39916800\n51090942171709440000\n8222838654177922817725562880000000\n33452526613163807108170062053440751665152000000000\n...\nThe other high-level interface, the Manager\nclass,\ncreates a separate server process that can hold master copies of Python data\nstructures. 
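The Pool.map() pattern shown above can be exercised with a small runnable sketch. To keep it self-contained in any environment, this uses multiprocessing.dummy, the thread-backed twin that exposes the same Pool API; with real worker processes, the worker function must be defined at module top level so it can be pickled:

```python
from multiprocessing.dummy import Pool  # same API as multiprocessing.Pool, backed by threads

def factorial(n):
    "Compute n! iteratively."
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# map() farms the inputs out to the workers and returns
# the results in input order.
with Pool(5) as pool:
    results = pool.map(factorial, range(1, 20, 10))
print(results)  # [1, 39916800]
```

Swapping the import for `from multiprocessing import Pool` turns the same code into the process-based version described in the text.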
Other processes can then access and modify these data structures using proxy objects. The following example creates a shared dictionary by calling the dict() method; the worker processes then insert values into the dictionary. (Locking is not done for you automatically, which doesn't matter in this example. Manager's methods also include Lock(), RLock(), and Semaphore() to create shared locks.)

import time
from multiprocessing import Pool, Manager

def factorial(N, dictionary):
    "Compute a factorial."
    # Calculate the result
    fact = 1L
    for i in range(1, N+1):
        fact = fact * i
    # Store result in dictionary
    dictionary[N] = fact

if __name__ == '__main__':
    p = Pool(5)
    mgr = Manager()
    d = mgr.dict()      # Create shared dictionary

    # Run tasks using the pool
    for N in range(1, 1000, 10):
        p.apply_async(factorial, (N, d))

    # Mark pool as closed -- no more tasks can be added.
    p.close()

    # Wait for tasks to exit
    p.join()

    # Output results
    for k, v in sorted(d.items()):
        print k, v

This will produce the output:

1 1
11 39916800
21 51090942171709440000
31 8222838654177922817725562880000000
41 33452526613163807108170062053440751665152000000000
51 15511187532873822802242430164693032110632597200169861120000...

See also
The documentation for the multiprocessing module.
- PEP 371 - Addition of the multiprocessing package
PEP written by Jesse Noller and Richard Oudkerk; implemented by Richard Oudkerk and Jesse Noller.

PEP 3101: Advanced String Formatting¶
In Python 3.0, the % operator is supplemented by a more powerful string formatting method, format().
Support for the str.format() method has been backported to Python 2.6.

In 2.6, both 8-bit and Unicode strings have a .format() method that treats the string as a template and takes the arguments to be formatted. The formatting template uses curly brackets ({, }) as special characters:

>>> # Substitute positional argument 0 into the string.
>>> "User ID: {0}".format("root")
'User ID: root'
>>> # Use the named keyword arguments
>>> "User ID: {uid} Last seen: {last_login}".format(
...     uid="root",
...     last_login="5 Mar 2008 07:20")
'User ID: root Last seen: 5 Mar 2008 07:20'

Curly brackets can be escaped by doubling them:

>>> "Empty dict: {{}}".format()
'Empty dict: {}'

Field names can be integers indicating positional arguments, such as {0}, {1}, etc. or names of keyword arguments. You can also supply compound field names that read attributes or access dictionary keys:

>>> import sys
>>> print 'Platform: {0.platform}\nPython version: {0.version}'.format(sys)
Platform: darwin
Python version: 2.6a1+ (trunk:61261M, Mar  5 2008, 20:29:41)
[GCC 4.0.1 (Apple Computer, Inc. build 5367)]

>>> import mimetypes
>>> 'Content-type: {0[.mp4]}'.format(mimetypes.types_map)
'Content-type: video/mp4'

Note that when using dictionary-style notation such as [.mp4], you don't need to put any quotation marks around the string; it will look up the value using .mp4 as the key. Strings beginning with a number will be converted to an integer. You can't write more complicated expressions inside a format string.

So far we've shown how to specify which field to substitute into the resulting string. The precise formatting used is also controllable by adding a colon followed by a format specifier.
For example:

>>> # Field 0: left justify, pad to 15 characters
>>> # Field 1: right justify, pad to 6 characters
>>> fmt = '{0:15} ${1:>6}'
>>> fmt.format('Registration', 35)
'Registration    $    35'
>>> fmt.format('Tutorial', 50)
'Tutorial        $    50'
>>> fmt.format('Banquet', 125)
'Banquet         $   125'

Format specifiers can reference other fields through nesting:

>>> fmt = '{0:{1}}'
>>> width = 15
>>> fmt.format('Invoice #1234', width)
'Invoice #1234  '
>>> width = 35
>>> fmt.format('Invoice #1234', width)
'Invoice #1234                      '

The alignment of a field within the desired width can be specified:

Character | Effect
---|---
< (default) | Left-align
> | Right-align
^ | Center
= | (For numeric types only) Pad after the sign.

Format specifiers can also include a presentation type, which controls how the value is formatted. For example, floating-point numbers can be formatted as a general number or in exponential notation:

>>> '{0:g}'.format(3.75)
'3.75'
>>> '{0:e}'.format(3.75)
'3.750000e+00'

A variety of presentation types are available. Consult the 2.6 documentation for a complete list; here's a sample:

'b' | Binary. Outputs the number in base 2.
'c' | Character. Converts the integer to the corresponding Unicode character before printing.
'd' | Decimal Integer. Outputs the number in base 10.
'o' | Octal format. Outputs the number in base 8.
'x' | Hex format. Outputs the number in base 16, using lower-case letters for the digits above 9.
'e' | Exponent notation. Prints the number in scientific notation using the letter 'e' to indicate the exponent.
'g' | General format. This prints the number as a fixed-point number, unless the number is too large, in which case it switches to 'e' exponent notation.
'n' | Number. This is the same as 'g' (for floats) or 'd' (for integers), except that it uses the current locale setting to insert the appropriate number separator characters.
'%' | Percentage. Multiplies the number by 100 and displays in fixed ('f') format, followed by a percent sign.

Classes and types can define a __format__() method to control how they're formatted. It receives a single argument, the format specifier:

def __format__(self, format_spec):
    if isinstance(format_spec, unicode):
        return unicode(str(self))
    else:
        return str(self)

There's also a format() builtin that will format a single value. It calls the type's __format__() method with the provided specifier:

>>> format(75.6564, '.2f')
'75.66'

See also
- Format String Syntax
The reference documentation for format fields.
- PEP 3101 - Advanced String Formatting
PEP written by Talin. Implemented by Eric Smith.

PEP 3105: print As a Function¶
The print statement becomes the print() function in Python 3.0. Making print() a function makes it possible to replace the function by doing def print(...) or importing a new function from somewhere else.

Python 2.6 has a __future__ import that removes print as language syntax, letting you use the functional form instead.
For example:

>>> from __future__ import print_function
>>> print('# of entries', len(dictionary), file=sys.stderr)

The signature of the new function is:

def print(*args, sep=' ', end='\n', file=None)

The parameters are:

args: positional arguments whose values will be printed out.
sep: the separator, which will be printed between arguments.
end: the ending text, which will be printed after all of the arguments have been output.
file: the file object to which the output will be sent.

See also
- PEP 3105 - Make print a function
PEP written by Georg Brandl.

PEP 3110: Exception-Handling Changes¶
One error that Python programmers occasionally make is writing the following code:

try:
    ...
except TypeError, ValueError:  # Wrong!
    ...

The author is probably trying to catch both TypeError and ValueError exceptions, but this code actually does something different: it will catch TypeError and bind the resulting exception object to the local name "ValueError". The ValueError exception will not be caught at all. The correct code specifies a tuple of exceptions:

try:
    ...
except (TypeError, ValueError):
    ...

This error happens because the use of the comma here is ambiguous: does it indicate two different nodes in the parse tree, or a single node that's a tuple?

Python 3.0 makes this unambiguous by replacing the comma with the word "as". To catch an exception and store the exception object in the variable exc, you must write:

try:
    ...
except TypeError as exc:
    ...

Python 3.0 will only support the use of "as", and therefore interprets the first example as catching two different exceptions. Python 2.6 supports both the comma and "as", so existing code will continue to work.
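The tuple form can be exercised directly; here is a minimal sketch (written in Python 3 syntax, where only the "as" spelling is legal, though Python 2.6 accepts the same code):

```python
# A tuple in an except clause catches every listed exception type.
def classify(exc):
    try:
        raise exc
    except (TypeError, ValueError) as caught:
        return type(caught).__name__

print(classify(TypeError('t')), classify(ValueError('v')))
# Both exceptions are caught by the same handler.
```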
We therefore suggest using \u201cas\u201d when writing new Python code that will only be executed with 2.6.\nSee also\n- PEP 3110 - Catching Exceptions in Python 3000\nPEP written and implemented by Collin Winter.\nPEP 3112: Byte Literals\u00b6\nPython 3.0 adopts Unicode as the language\u2019s fundamental string type and\ndenotes 8-bit literals differently, either as b'string'\nor using a bytes\nconstructor. For future compatibility,\nPython 2.6 adds bytes\nas a synonym for the str\ntype,\nand it also supports the b''\nnotation.\nThe 2.6 str\ndiffers from 3.0\u2019s bytes\ntype in various\nways; most notably, the constructor is completely different. In 3.0,\nbytes([65, 66, 67])\nis 3 elements long, containing the bytes\nrepresenting ABC\n; in 2.6, bytes([65, 66, 67])\nreturns the\n12-byte string representing the str()\nof the list.\nThe primary use of bytes\nin 2.6 will be to write tests of\nobject type such as isinstance(x, bytes)\n. This will help the 2to3\nconverter, which can\u2019t tell whether 2.x code intends strings to\ncontain either characters or 8-bit bytes; you can now\nuse either bytes\nor str\nto represent your intention\nexactly, and the resulting code will also be correct in Python 3.0.\nThere\u2019s also a __future__\nimport that causes all string literals\nto become Unicode strings. This means that \\u\nescape sequences\ncan be used to include Unicode characters:\nfrom __future__ import unicode_literals\ns = ('\\u751f\\u3080\\u304e\\u3000\\u751f\\u3054'\n'\\u3081\\u3000\\u751f\\u305f\\u307e\\u3054')\nprint len(s) # 12 Unicode characters\nAt the C level, Python 3.0 will rename the existing 8-bit\nstring type, called PyStringObject\nin Python 2.x,\nto PyBytesObject\n. Python 2.6 uses #define\nto support using the names PyBytesObject()\n,\nPyBytes_Check()\n, PyBytes_FromStringAndSize()\n,\nand all the other functions and macros used with strings.\nInstances of the bytes\ntype are immutable just\nas strings are. 
A new bytearray type stores a mutable sequence of bytes:

>>> bytearray([65, 66, 67])
bytearray(b'ABC')
>>> b = bytearray(u'\u21ef\u3244', 'utf-8')
>>> b
bytearray(b'\xe2\x87\xaf\xe3\x89\x84')
>>> b[0] = '\xe3'
>>> b
bytearray(b'\xe3\x87\xaf\xe3\x89\x84')
>>> unicode(str(b), 'utf-8')
u'\u31ef\u3244'

Byte arrays support most of the methods of string types, such as startswith()/endswith(), find()/rfind(), and some of the methods of lists, such as append(), pop(), and reverse().

>>> b = bytearray('ABC')
>>> b.append('d')
>>> b.append(ord('e'))
>>> b
bytearray(b'ABCde')

There's also a corresponding C API, with PyByteArray_FromObject(), PyByteArray_FromStringAndSize(), and various other functions.

See also
- PEP 3112 - Bytes literals in Python 3000
PEP written by Jason Orendorff; backported to 2.6 by Christian Heimes.

PEP 3116: New I/O Library¶
Python's built-in file objects support a number of methods, but file-like objects don't necessarily support all of them. Objects that imitate files usually support read() and write(), but they may not support readline(), for example. Python 3.0 introduces a layered I/O library in the io module that separates buffering and text-handling features from the fundamental read and write operations.

There are three levels of abstract base classes provided by the io module:

RawIOBase defines raw I/O operations: read(), readinto(), write(), seek(), tell(), truncate(), and close(). Most of the methods of this class will often map to a single system call.
There are also readable(), writable(), and seekable() methods for determining what operations a given object will allow. Python 3.0 has concrete implementations of this class for files and sockets, but Python 2.6 hasn't restructured its file and socket objects in this way.

BufferedIOBase is an abstract base class that buffers data in memory to reduce the number of system calls used, making I/O processing more efficient. It supports all of the methods of RawIOBase, and adds a raw attribute holding the underlying raw object. There are five concrete classes implementing this ABC. BufferedWriter and BufferedReader are for objects that support write-only or read-only usage that have a seek() method for random access. BufferedRandom objects support read and write access upon the same underlying stream, and BufferedRWPair is for objects such as TTYs that have both read and write operations acting upon unconnected streams of data. The BytesIO class supports reading, writing, and seeking over an in-memory buffer.

TextIOBase provides functions for reading and writing strings (remember, strings will be Unicode in Python 3.0), and supporting universal newlines. TextIOBase defines the readline() method and supports iteration upon objects. There are two concrete implementations. TextIOWrapper wraps a buffered I/O object, supporting all of the methods for text I/O and adding a buffer attribute for access to the underlying object. StringIO simply buffers everything in memory without ever writing anything to disk.

(In Python 2.6, io.StringIO is implemented in pure Python, so it's pretty slow. You should therefore stick with the existing StringIO module or cStringIO for now. At some point Python 3.0's io module will be rewritten into C for speed, and perhaps the C implementation will be backported to the 2.x releases.)

In Python 2.6, the underlying implementations haven't been restructured to build on top of the io module's classes.
The\nmodule is being provided to make it easier to write code that\u2019s\nforward-compatible with 3.0, and to save developers the effort of writing\ntheir own implementations of buffering and text I/O.\nSee also\n- PEP 3116 - New I/O\nPEP written by Daniel Stutzbach, Mike Verdone, and Guido van Rossum. Code by Guido van Rossum, Georg Brandl, Walter Doerwald, Jeremy Hylton, Martin von L\u00f6wis, Tony Lownds, and others.\nPEP 3118: Revised Buffer Protocol\u00b6\nThe buffer protocol is a C-level API that lets Python types\nexchange pointers into their internal representations. A\nmemory-mapped file can be viewed as a buffer of characters, for\nexample, and this lets another module such as re\ntreat memory-mapped files as a string of characters to be searched.\nThe primary users of the buffer protocol are numeric-processing packages such as NumPy, which expose the internal representation of arrays so that callers can write data directly into an array instead of going through a slower API. This PEP updates the buffer protocol in light of experience from NumPy development, adding a number of new features such as indicating the shape of an array or locking a memory region.\nThe most important new C API function is\nPyObject_GetBuffer(PyObject *obj, Py_buffer *view, int flags)\n, which\ntakes an object and a set of flags, and fills in the\nPy_buffer\nstructure with information\nabout the object\u2019s memory representation. Objects\ncan use this operation to lock memory in place\nwhile an external caller could be modifying the contents,\nso there\u2019s a corresponding PyBuffer_Release(Py_buffer *view)\nto\nindicate that the external caller is done.\nThe flags argument to PyObject_GetBuffer()\nspecifies\nconstraints upon the memory returned. 
Some examples are:

PyBUF_WRITABLE indicates that the memory must be writable.
PyBUF_LOCK requests a read-only or exclusive lock on the memory.
PyBUF_C_CONTIGUOUS and PyBUF_F_CONTIGUOUS request a C-contiguous (last dimension varies the fastest) or Fortran-contiguous (first dimension varies the fastest) array layout.

Two new argument codes for PyArg_ParseTuple(), s* and z*, return locked buffer objects for a parameter.

See also
- PEP 3118 - Revising the buffer protocol
PEP written by Travis Oliphant and Carl Banks; implemented by Travis Oliphant.

PEP 3119: Abstract Base Classes¶
Some object-oriented languages such as Java support interfaces, declaring that a class has a given set of methods or supports a given access protocol. Abstract Base Classes (or ABCs) are an equivalent feature for Python. The ABC support consists of an abc module containing a metaclass called ABCMeta, special handling of this metaclass by the isinstance() and issubclass() builtins, and a collection of basic ABCs that the Python developers think will be widely useful. Future versions of Python will probably add more ABCs.

Let's say you have a particular class and wish to know whether it supports dictionary-style access. The phrase "dictionary-style" is vague, however. It probably means that accessing items with obj[1] works. Does it imply that setting items with obj[2] = value works? Or that the object will have keys(), values(), and items() methods? What about the iterative variants such as iterkeys()? copy() and update()? Iterating over the object with iter()?

The Python 2.6 collections module includes a number of different ABCs that represent these distinctions. Iterable indicates that a class defines __iter__(), and Container means the class defines a __contains__() method and therefore supports x in y expressions.
The basic dictionary interface of getting items, setting items, and keys(), values(), and items(), is defined by the MutableMapping ABC.

You can derive your own classes from a particular ABC to indicate they support that ABC's interface:

import collections

class Storage(collections.MutableMapping):
    ...

Alternatively, you could write the class without deriving from the desired ABC and instead register the class by calling the ABC's register() method:

import collections

class Storage:
    ...

collections.MutableMapping.register(Storage)

For classes that you write, deriving from the ABC is probably clearer. The register() method is useful when you've written a new ABC that can describe an existing type or class, or if you want to declare that some third-party class implements an ABC. For example, if you defined a PrintableType ABC, it's legal to do:

# Register Python's types
PrintableType.register(int)
PrintableType.register(float)
PrintableType.register(str)

Classes should obey the semantics specified by an ABC, but Python can't check this; it's up to the class author to understand the ABC's requirements and to implement the code accordingly.

To check whether an object supports a particular interface, you can now write:

def func(d):
    if not isinstance(d, collections.MutableMapping):
        raise ValueError("Mapping object expected, not %r" % d)

Don't feel that you must now begin writing lots of checks as in the above example. Python has a strong tradition of duck-typing, where explicit type-checking is never done and code simply calls methods on an object, trusting that those methods will be there and raising an exception if they aren't.
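Some of these ABCs also recognize classes structurally, without any explicit deriving or registering; a small sketch (using collections.abc, the Python 3 location of these classes, with a hypothetical Bag class):

```python
from collections.abc import Container, Iterable

# Bag never declares any ABC relationship; it only defines the
# special methods that Iterable and Container look for.
class Bag:
    def __init__(self, items):
        self._items = list(items)

    def __iter__(self):
        return iter(self._items)

    def __contains__(self, x):
        return x in self._items

b = Bag([1, 2, 3])
# Both checks pass because the ABCs' __subclasshook__ inspects
# the class for __iter__ and __contains__.
print(isinstance(b, Iterable), isinstance(b, Container))
```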
Be judicious in checking for ABCs and only do it where it's absolutely necessary.

You can write your own ABCs by using abc.ABCMeta as the metaclass in a class definition:

from abc import ABCMeta, abstractmethod

class Drawable():
    __metaclass__ = ABCMeta

    @abstractmethod
    def draw(self, x, y, scale=1.0):
        pass

    def draw_doubled(self, x, y):
        self.draw(x, y, scale=2.0)

class Square(Drawable):
    def draw(self, x, y, scale):
        ...

In the Drawable ABC above, the draw_doubled() method renders the object at twice its size and can be implemented in terms of other methods described in Drawable. Classes implementing this ABC therefore don't need to provide their own implementation of draw_doubled(), though they can do so. An implementation of draw() is necessary, though; the ABC can't provide a useful generic implementation.

You can apply the @abc.abstractmethod decorator to methods such as draw() that must be implemented; Python will then raise an exception for classes that don't define the method. Note that the exception is only raised when you actually try to create an instance of a subclass lacking the method:

>>> class Circle(Drawable):
...     pass
...
>>> c = Circle()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Can't instantiate abstract class Circle with abstract methods draw
>>>

Abstract data attributes can be declared using the @abstractproperty decorator:

from abc import abstractproperty
...

@abstractproperty
def readonly(self):
    return self._x

Subclasses must then define a readonly property.

See also
- PEP 3119 - Introducing Abstract Base Classes
PEP written by Guido van Rossum and Talin. Implemented by Guido van Rossum.
Backported to 2.6 by Benjamin Aranguren, with Alex Martelli.

PEP 3127: Integer Literal Support and Syntax¶
Python 3.0 changes the syntax for octal (base-8) integer literals, prefixing them with "0o" or "0O" instead of a leading zero, and adds support for binary (base-2) integer literals, signalled by a "0b" or "0B" prefix.

Python 2.6 doesn't drop support for a leading 0 signalling an octal number, but it does add support for "0o" and "0b":

>>> 0o21, 2*8 + 1
(17, 17)
>>> 0b101111
47

The oct() builtin still returns numbers prefixed with a leading zero, and a new bin() builtin returns the binary representation for a number:

>>> oct(42)
'052'
>>> future_builtins.oct(42)
'0o52'
>>> bin(173)
'0b10101101'

The int() and long() builtins will now accept the "0o" and "0b" prefixes when base-8 or base-2 are requested, or when the base argument is zero (signalling that the base used should be determined from the string):

>>> int('0o52', 0)
42
>>> int('1101', 2)
13
>>> int('0b1101', 2)
13
>>> int('0b1101', 0)
13

See also
- PEP 3127 - Integer Literal Support and Syntax
PEP written by Patrick Maupin; backported to 2.6 by Eric Smith.

PEP 3129: Class Decorators¶
Decorators have been extended from functions to classes. It's now legal to write:

@foo
@bar
class A:
    pass

This is equivalent to:

class A:
    pass
A = foo(bar(A))

See also
- PEP 3129 - Class Decorators
PEP written by Collin Winter.

PEP 3141: A Type Hierarchy for Numbers¶
Python 3.0 adds several abstract base classes for numeric types inspired by Scheme's numeric tower. These classes were backported to 2.6 as the numbers module.

The most general ABC is Number. It defines no operations at all, and only exists to allow checking if an object is a number by doing isinstance(obj, Number).

Complex is a subclass of Number.
Complex numbers\ncan undergo the basic operations of addition, subtraction,\nmultiplication, division, and exponentiation, and you can retrieve the\nreal and imaginary parts and obtain a number\u2019s conjugate. Python\u2019s built-in\ncomplex type is an implementation of Complex\n.\nReal\nfurther derives from Complex\n, and adds\noperations that only work on real numbers: floor()\n, trunc()\n,\nrounding, taking the remainder mod N, floor division,\nand comparisons.\nRational\nnumbers derive from Real\n, have\nnumerator\nand denominator\nproperties, and can be\nconverted to floats. Python 2.6 adds a simple rational-number class,\nFraction\n, in the fractions\nmodule. (It\u2019s called\nFraction\ninstead of Rational\nto avoid\na name clash with numbers.Rational\n.)\nIntegral\nnumbers derive from Rational\n, and\ncan be shifted left and right with <<\nand >>\n,\ncombined using bitwise operations such as &\nand |\n,\nand can be used as array indexes and slice boundaries.\nIn Python 3.0, the PEP slightly redefines the existing builtins\nround()\n, math.floor()\n, math.ceil()\n, and adds a new\none, math.trunc()\n, that\u2019s been backported to Python 2.6.\nmath.trunc()\nrounds toward zero, returning the closest\nIntegral\nthat\u2019s between the function\u2019s argument and zero.\nSee also\n- PEP 3141 - A Type Hierarchy for Numbers\nPEP written by Jeffrey Yasskin.\nScheme\u2019s numerical tower, from the Guile manual.\nScheme\u2019s number datatypes from the R5RS Scheme specification.\nThe fractions\nModule\u00b6\nTo fill out the hierarchy of numeric types, the fractions\nmodule provides a rational-number class. 
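The numeric tower just described can be explored with isinstance() checks; a Python 3-compatible sketch (the numbers module keeps the same class names as the 2.6 backport, and Fraction previews the class discussed next):

```python
import numbers
from fractions import Fraction

# An int sits at the bottom of the tower, so it passes every level.
levels = (numbers.Integral, numbers.Rational, numbers.Real,
          numbers.Complex, numbers.Number)
int_checks = [isinstance(3, t) for t in levels]

# A float is Real but not Rational; a Fraction is Rational but not Integral.
float_is_rational = isinstance(3.0, numbers.Rational)
fraction_is_integral = isinstance(Fraction(2, 3), numbers.Integral)

print(int_checks, float_is_rational, fraction_is_integral)
```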
Rational numbers store their values as a numerator and denominator forming a fraction, and can exactly represent numbers such as 2/3 that floating-point numbers can only approximate.

The Fraction constructor takes two Integral values that will be the numerator and denominator of the resulting fraction.

>>> from fractions import Fraction
>>> a = Fraction(2, 3)
>>> b = Fraction(2, 5)
>>> float(a), float(b)
(0.66666666666666663, 0.40000000000000002)
>>> a+b
Fraction(16, 15)
>>> a/b
Fraction(5, 3)

For converting floating-point numbers to rationals, the float type now has an as_integer_ratio() method that returns the numerator and denominator for a fraction that evaluates to the same floating-point value:

>>> (2.5).as_integer_ratio()
(5, 2)
>>> (3.1415).as_integer_ratio()
(7074029114692207L, 2251799813685248L)
>>> (1./3).as_integer_ratio()
(6004799503160661L, 18014398509481984L)

Note that values that can only be approximated by floating-point numbers, such as 1./3, are not simplified to the number being approximated; the fraction attempts to match the floating-point value exactly.

The fractions module is based upon an implementation by Sjoerd Mullender that was in Python's Demo/classes/ directory for a long time. This implementation was significantly updated by Jeffrey Yasskin.

Other Language Changes¶
Some smaller changes made to the core Python language are:

Directories and zip archives containing a __main__.py file can now be executed directly by passing their name to the interpreter. The directory or zip archive is automatically inserted as the first entry in sys.path. (Suggestion and initial patch by Andy Chu, subsequently revised by Phillip J. Eby and Nick Coghlan; bpo-1739468.)

The hasattr() function was catching and ignoring all errors, under the assumption that they meant a __getattr__() method was failing somehow and the return value of hasattr() would therefore be False.
This logic shouldn't be applied to KeyboardInterrupt and SystemExit, however; Python 2.6 will no longer discard such exceptions when hasattr() encounters them. (Fixed by Benjamin Peterson; bpo-2196.)

When calling a function using the ** syntax to provide keyword arguments, you are no longer required to use a Python dictionary; any mapping will now work:

>>> def f(**kw):
...     print sorted(kw)
...
>>> ud = UserDict.UserDict()
>>> ud['a'] = 1
>>> ud['b'] = 'string'
>>> f(**ud)
['a', 'b']

(Contributed by Alexander Belopolsky; bpo-1686487.)

It's also become legal to provide keyword arguments after a *args argument to a function call.

>>> def f(*args, **kw):
...     print args, kw
...
>>> f(1,2,3, *(4,5,6), keyword=13)
(1, 2, 3, 4, 5, 6) {'keyword': 13}

Previously this would have been a syntax error. (Contributed by Amaury Forgeot d'Arc; bpo-3473.)

A new builtin, next(iterator, [default]) returns the next item from the specified iterator. If the default argument is supplied, it will be returned if iterator has been exhausted; otherwise, the StopIteration exception will be raised. (Backported in bpo-2719.)

Tuples now have index() and count() methods matching the list type's index() and count() methods:

>>> t = (0,1,2,3,4,0,1,2)
>>> t.index(3)
3
>>> t.count(0)
2

(Contributed by Raymond Hettinger)

The built-in types now have improved support for extended slicing syntax, accepting various combinations of (start, stop, step). Previously, the support was partial and certain corner cases wouldn't work. (Implemented by Thomas Wouters.)

Properties now have three attributes, getter, setter and deleter, that are decorators providing useful shortcuts for adding a getter, setter or deleter function to an existing property.
You would use them like this:

class C(object):
    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        self._x = value

    @x.deleter
    def x(self):
        del self._x

class D(C):
    @C.x.getter
    def x(self):
        return self._x * 2

    @x.setter
    def x(self, value):
        self._x = value / 2

Several methods of the built-in set types now accept multiple iterables: intersection(), intersection_update(), union(), update(), difference() and difference_update().

>>> s=set('1234567890')
>>> s.intersection('abc123', 'cdf246')  # Intersection between all inputs
set(['2'])
>>> s.difference('246', '789')
set(['1', '0', '3', '5'])

(Contributed by Raymond Hettinger.)

Many floating-point features were added. The float() function will now turn the string nan into an IEEE 754 Not A Number value, and +inf and -inf into positive or negative infinity. This works on any platform with IEEE 754 semantics. (Contributed by Christian Heimes; bpo-1635.)

Other functions in the math module, isinf() and isnan(), return true if their floating-point argument is infinite or Not A Number. (bpo-1640)

Conversion functions were added to convert floating-point numbers into hexadecimal strings (bpo-3008). These functions convert floats to and from a string representation without introducing rounding errors from the conversion between decimal and binary. Floats have a hex() method that returns a string representation, and the float.fromhex() method converts a string back into a number:

>>> a = 3.75
>>> a.hex()
'0x1.e000000000000p+1'
>>> float.fromhex('0x1.e000000000000p+1')
3.75
>>> b=1./3
>>> b.hex()
'0x1.5555555555555p-2'

A numerical nicety: when creating a complex number from two floats on systems that support signed zeros (-0 and +0), the complex() constructor will now preserve the sign of the zero. (Fixed by Mark T. Dickinson; bpo-1507.)

Classes that inherit a __hash__() method from a parent class can set __hash__ = None to indicate that the class isn't hashable.
This will make hash(obj) raise a TypeError and the class will not be indicated as implementing the Hashable ABC.

You should do this when you've defined a __cmp__() or __eq__() method that compares objects by their value rather than by identity. All objects have a default hash method that uses id(obj) as the hash value. There's no tidy way to remove the __hash__() method inherited from a parent class, so assigning None was implemented as an override. At the C level, extensions can set tp_hash to PyObject_HashNotImplemented(). (Fixed by Nick Coghlan and Amaury Forgeot d'Arc; bpo-2235.)

The GeneratorExit exception now subclasses BaseException instead of Exception. This means that an exception handler that does except Exception: will not inadvertently catch GeneratorExit. (Contributed by Chad Austin; bpo-1537.)

Generator objects now have a gi_code attribute that refers to the original code object backing the generator. (Contributed by Collin Winter; bpo-1473257.)

The compile() built-in function now accepts keyword arguments as well as positional parameters. (Contributed by Thomas Wouters; bpo-1444529.)

The complex() constructor now accepts strings containing parenthesized complex numbers, meaning that complex(repr(cplx)) will now round-trip values. For example, complex('(3+4j)') now returns the value (3+4j). (bpo-1491866)

The string translate() method now accepts None as the translation table parameter, which is treated as the identity transformation. This makes it easier to carry out operations that only delete characters. (Contributed by Bengt Richter and implemented by Raymond Hettinger; bpo-1193128.)

The built-in dir() function now checks for a __dir__() method on the objects it receives. This method must return a list of strings containing the names of valid attributes for the object, and lets the object control the value that dir() produces.
Objects that have__getattr__()\nor__getattribute__()\nmethods can use this to advertise pseudo-attributes they will honor. (bpo-1591665)Instance method objects have new attributes for the object and function comprising the method; the new synonym for\nim_self\nis__self__\n, andim_func\nis also available as__func__\n. The old names are still supported in Python 2.6, but are gone in 3.0.An obscure change: when you use the\nlocals()\nfunction inside aclass\nstatement, the resulting dictionary no longer returns free variables. (Free variables, in this case, are variables referenced in theclass\nstatement that aren\u2019t attributes of the class.)\nOptimizations\u00b6\nThe\nwarnings\nmodule has been rewritten in C. This makes it possible to invoke warnings from the parser, and may also make the interpreter\u2019s startup faster. (Contributed by Neal Norwitz and Brett Cannon; bpo-1631171.)Type objects now have a cache of methods that can reduce the work required to find the correct method implementation for a particular class; once cached, the interpreter doesn\u2019t need to traverse base classes to figure out the right method to call. The cache is cleared if a base class or the class itself is modified, so the cache should remain correct even in the face of Python\u2019s dynamic nature. (Original optimization implemented by Armin Rigo, updated for Python 2.6 by Kevin Jacobs; bpo-1700288.)\nBy default, this change is only applied to types that are included with the Python core. Extension modules may not necessarily be compatible with this cache, so they must explicitly add\nPy_TPFLAGS_HAVE_VERSION_TAG\nto the module\u2019stp_flags\nfield to enable the method cache. (To be compatible with the method cache, the extension module\u2019s code must not directly access and modify thetp_dict\nmember of any of the types it implements. Most modules don\u2019t do this, but it\u2019s impossible for the Python interpreter to determine that. 
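A minimal sketch of the `__dir__()` hook described above (Python 3 syntax; `Record` and its field names are made-up for illustration):

```python
class Record:
    """Stores dynamic fields and advertises them to dir()."""
    def __init__(self, **fields):
        self._fields = dict(fields)

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        try:
            return self._fields[name]
        except KeyError:
            raise AttributeError(name)

    def __dir__(self):
        # dir() sorts and reports whatever list of names we return,
        # so include the dynamic pseudo-attributes we will honor.
        return sorted(set(object.__dir__(self)) | set(self._fields))

r = Record(host="example.org", port=8080)
assert r.port == 8080
assert "host" in dir(r) and "port" in dir(r)
```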
See bpo-1878 for some discussion.)Function calls that use keyword arguments are significantly faster by doing a quick pointer comparison, usually saving the time of a full string comparison. (Contributed by Raymond Hettinger, after an initial implementation by Antoine Pitrou; bpo-1819.)\nAll of the functions in the\nstruct\nmodule have been rewritten in C, thanks to work at the Need For Speed sprint. (Contributed by Raymond Hettinger.)Some of the standard built-in types now set a bit in their type objects. This speeds up checking whether an object is a subclass of one of these types. (Contributed by Neal Norwitz.)\nUnicode strings now use faster code for detecting whitespace and line breaks; this speeds up the\nsplit()\nmethod by about 25% andsplitlines()\nby 35%. (Contributed by Antoine Pitrou.) Memory usage is reduced by using pymalloc for the Unicode string\u2019s data.The\nwith\nstatement now stores the__exit__()\nmethod on the stack, producing a small speedup. (Implemented by Jeffrey Yasskin.)To reduce memory usage, the garbage collector will now clear internal free lists when garbage-collecting the highest generation of objects. This may return memory to the operating system sooner.\nInterpreter Changes\u00b6\nTwo command-line options have been reserved for use by other Python\nimplementations. The -J\nswitch has been reserved for use by\nJython for Jython-specific options, such as switches that are passed to\nthe underlying JVM. -X\nhas been reserved for options\nspecific to a particular implementation of Python such as CPython,\nJython, or IronPython. If either option is used with Python 2.6, the\ninterpreter will report that the option isn\u2019t currently used.\nPython can now be prevented from writing .pyc\nor .pyo\nfiles by supplying the -B\nswitch to the Python interpreter,\nor by setting the PYTHONDONTWRITEBYTECODE\nenvironment\nvariable before running the interpreter. 
This setting is available to\nPython programs as the sys.dont_write_bytecode\nvariable, and\nPython code can change the value to modify the interpreter\u2019s\nbehaviour. (Contributed by Neal Norwitz and Georg Brandl.)\nThe encoding used for standard input, output, and standard error can\nbe specified by setting the PYTHONIOENCODING\nenvironment\nvariable before running the interpreter. The value should be a string\nin the form encodingname\nor encodingname:errorhandler\n.\nThe encoding part specifies the encoding\u2019s name, e.g. utf-8\nor\nlatin-1\n; the optional errorhandler part specifies\nwhat to do with characters that can\u2019t be handled by the encoding,\nand should be one of \u201cerror\u201d, \u201cignore\u201d, or \u201creplace\u201d. (Contributed\nby Martin von L\u00f6wis.)\nNew and Improved Modules\u00b6\nAs in every release, Python\u2019s standard library received a number of\nenhancements and bug fixes. Here\u2019s a partial list of the most notable\nchanges, sorted alphabetically by module name. Consult the\nMisc/NEWS\nfile in the source tree for a more complete list of\nchanges, or look through the Subversion logs for all the details.\nThe\nasyncore\nandasynchat\nmodules are being actively maintained again, and a number of patches and bugfixes were applied. (Maintained by Josiah Carlson; see bpo-1736190 for one patch.)The\nbsddb\nmodule also has a new maintainer, Jes\u00fas Cea Avi\u00f3n, and the package is now available as a standalone package. The web page for the package is www.jcea.es/programacion/pybsddb.htm. The plan is to remove the package from the standard library in Python 3.0, because its pace of releases is much more frequent than Python\u2019s.The\nbsddb.dbshelve\nmodule now uses the highest pickling protocol available, instead of restricting itself to protocol 1. (Contributed by W. Barnes.)The\ncgi\nmodule will now read variables from the query string of an HTTP POST request. 
This makes it possible to use form actions with URLs that include query strings such as \u201c/cgi-bin/add.py?category=1\u201d. (Contributed by Alexandre Fiori and Nubis; bpo-1817.)The\nparse_qs()\nandparse_qsl()\nfunctions have been relocated from thecgi\nmodule to theurlparse\nmodule. The versions still available in thecgi\nmodule will triggerPendingDeprecationWarning\nmessages in 2.6 (bpo-600362).The\ncmath\nmodule underwent extensive revision, contributed by Mark Dickinson and Christian Heimes. Five new functions were added:polar()\nconverts a complex number to polar form, returning the modulus and argument of the complex number.rect()\ndoes the opposite, turning a modulus, argument pair back into the corresponding complex number.phase()\nreturns the argument (also called the angle) of a complex number.isnan()\nreturns True if either the real or imaginary part of its argument is a NaN.isinf()\nreturns True if either the real or imaginary part of its argument is infinite.\nThe revisions also improved the numerical soundness of the\ncmath\nmodule. For all functions, the real and imaginary parts of the results are accurate to within a few units of least precision (ulps) whenever possible. See bpo-1381 for the details. The branch cuts forasinh()\n,atanh()\nandatan()\nhave also been corrected.The tests for the module have been greatly expanded; nearly 2000 new test cases exercise the algebraic functions.\nOn IEEE 754 platforms, the\ncmath\nmodule now handles IEEE 754 special values and floating-point exceptions in a manner consistent with Annex \u2018G\u2019 of the C99 standard.A new data type in the\ncollections\nmodule:namedtuple(typename, fieldnames)\nis a factory function that creates subclasses of the standard tuple whose fields are accessible by name as well as index. For example:>>> var_type = collections.namedtuple('variable', ... 'id name type size') >>> # Names are separated by spaces or commas. >>> # 'id, name, type, size' would also work. 
>>> var_type._fields ('id', 'name', 'type', 'size') >>> var = var_type(1, 'frequency', 'int', 4) >>> print var[0], var.id # Equivalent 1 1 >>> print var[2], var.type # Equivalent int int >>> var._asdict() {'size': 4, 'type': 'int', 'id': 1, 'name': 'frequency'} >>> v2 = var._replace(name='amplitude') >>> v2 variable(id=1, name='amplitude', type='int', size=4)\nSeveral places in the standard library that returned tuples have been modified to return\nnamedtuple()\ninstances. For example, theDecimal.as_tuple()\nmethod now returns a named tuple withsign\n,digits\n, andexponent\nfields.(Contributed by Raymond Hettinger.)\nAnother change to the\ncollections\nmodule is that thedeque\ntype now supports an optional maxlen parameter; if supplied, the deque\u2019s size will be restricted to no more than maxlen items. Adding more items to a full deque causes old items to be discarded.>>> from collections import deque >>> dq=deque(maxlen=3) >>> dq deque([], maxlen=3) >>> dq.append(1); dq.append(2); dq.append(3) >>> dq deque([1, 2, 3], maxlen=3) >>> dq.append(4) >>> dq deque([2, 3, 4], maxlen=3)\n(Contributed by Raymond Hettinger.)\nThe\nCookie\nmodule\u2019sMorsel\nobjects now support anhttponly\nattribute. In some browsers, cookies with this attribute set cannot be accessed or manipulated by JavaScript code. (Contributed by Arvin Schnell; bpo-1638033.)A new window method in the\ncurses\nmodule,chgat()\n, changes the display attributes for a certain number of characters on a single line. (Contributed by Fabian Kreutz.)# Boldface text starting at y=0,x=21 # and affecting the rest of the line. stdscr.chgat(0, 21, curses.A_BOLD)\nThe\nTextbox\nclass in thecurses.textpad\nmodule now supports editing in insert mode as well as overwrite mode. 
Insert mode is enabled by supplying a true value for the insert_mode parameter when creating theTextbox\ninstance.The\ndatetime\nmodule\u2019sstrftime()\nmethods now support a%f\nformat code that expands to the number of microseconds in the object, zero-padded on the left to six places. (Contributed by Skip Montanaro; bpo-1158.)The\ndecimal\nmodule was updated to version 1.66 of the General Decimal Specification. New features include some methods for some basic mathematical functions such asexp()\nandlog10()\n:>>> Decimal(1).exp() Decimal(\"2.718281828459045235360287471\") >>> Decimal(\"2.7182818\").ln() Decimal(\"0.9999999895305022877376682436\") >>> Decimal(1000).log10() Decimal(\"3\")\nThe\nas_tuple()\nmethod ofDecimal\nobjects now returns a named tuple withsign\n,digits\n, andexponent\nfields.(Implemented by Facundo Batista and Mark Dickinson. Named tuple support added by Raymond Hettinger.)\nThe\ndifflib\nmodule\u2019sSequenceMatcher\nclass now returns named tuples representing matches, witha\n,b\n, andsize\nattributes. (Contributed by Raymond Hettinger.)An optional\ntimeout\nparameter, specifying a timeout measured in seconds, was added to theftplib.FTP\nclass constructor as well as theconnect()\nmethod. (Added by Facundo Batista.) Also, theFTP\nclass\u2019sstorbinary()\nandstorlines()\nnow take an optional callback parameter that will be called with each block of data after the data has been sent. (Contributed by Phil Schwartz; bpo-1221598.)The\nreduce()\nbuilt-in function is also available in thefunctools\nmodule. In Python 3.0, the builtin has been dropped andreduce()\nis only available fromfunctools\n; currently there are no plans to drop the builtin in the 2.x series. (Patched by Christian Heimes; bpo-1739906.)When possible, the\ngetpass\nmodule will now use/dev/tty\nto print a prompt message and read the password, falling back to standard error and standard input. 
If the password may be echoed to the terminal, a warning is printed before the prompt is displayed. (Contributed by Gregory P. Smith.)The\nglob.glob()\nfunction can now return Unicode filenames if a Unicode path was used and Unicode filenames are matched within the directory. (bpo-1001604)A new function in the\nheapq\nmodule,merge(iter1, iter2, ...)\n, takes any number of iterables returning data in sorted order, and returns a new generator that returns the contents of all the iterators, also in sorted order. For example:>>> list(heapq.merge([1, 3, 5, 9], [2, 8, 16])) [1, 2, 3, 5, 8, 9, 16]\nAnother new function,\nheappushpop(heap, item)\n, pushes item onto heap, then pops off and returns the smallest item. This is more efficient than making a call toheappush()\nand thenheappop()\n.heapq\nis now implemented to only use less-than comparison, instead of the less-than-or-equal comparison it previously used. This makesheapq\n\u2019s usage of a type match thelist.sort()\nmethod. (Contributed by Raymond Hettinger.)An optional\ntimeout\nparameter, specifying a timeout measured in seconds, was added to thehttplib.HTTPConnection\nandHTTPSConnection\nclass constructors. (Added by Facundo Batista.)Most of the\ninspect\nmodule\u2019s functions, such asgetmoduleinfo()\nandgetargs()\n, now return named tuples. In addition to behaving like tuples, the elements of the return value can also be accessed as attributes. (Contributed by Raymond Hettinger.)Some new functions in the module include\nisgenerator()\n,isgeneratorfunction()\n, andisabstract()\n.The\nitertools\nmodule gained several new functions.izip_longest(iter1, iter2, ...[, fillvalue])\nmakes tuples from each of the elements; if some of the iterables are shorter than others, the missing values are set to fillvalue. 
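The heappushpop() behaviour described above can be sketched with a small heap:

```python
import heapq

heap = [2, 4, 6]
heapq.heapify(heap)

# Push first, then pop the smallest, as a single O(log n) operation
# instead of separate heappush() and heappop() calls.
assert heapq.heappushpop(heap, 1) == 1   # smaller than the root: returned directly
assert heapq.heappushpop(heap, 5) == 2   # 2 was the root; 5 stays in the heap
assert sorted(heap) == [4, 5, 6]
```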
For example:>>> tuple(itertools.izip_longest([1,2,3], [1,2,3,4,5])) ((1, 1), (2, 2), (3, 3), (None, 4), (None, 5))\nproduct(iter1, iter2, ..., [repeat=N])\nreturns the Cartesian product of the supplied iterables, a set of tuples containing every possible combination of the elements returned from each iterable.>>> list(itertools.product([1,2,3], [4,5,6])) [(1, 4), (1, 5), (1, 6), (2, 4), (2, 5), (2, 6), (3, 4), (3, 5), (3, 6)]\nThe optional repeat keyword argument is used for taking the product of an iterable or a set of iterables with themselves, repeated N times. With a single iterable argument, N-tuples are returned:\n>>> list(itertools.product([1,2], repeat=3)) [(1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), (2, 1, 1), (2, 1, 2), (2, 2, 1), (2, 2, 2)]\nWith two iterables, 2N-tuples are returned.\n>>> list(itertools.product([1,2], [3,4], repeat=2)) [(1, 3, 1, 3), (1, 3, 1, 4), (1, 3, 2, 3), (1, 3, 2, 4), (1, 4, 1, 3), (1, 4, 1, 4), (1, 4, 2, 3), (1, 4, 2, 4), (2, 3, 1, 3), (2, 3, 1, 4), (2, 3, 2, 3), (2, 3, 2, 4), (2, 4, 1, 3), (2, 4, 1, 4), (2, 4, 2, 3), (2, 4, 2, 4)]\ncombinations(iterable, r)\nreturns sub-sequences of length r from the elements of iterable.>>> list(itertools.combinations('123', 2)) [('1', '2'), ('1', '3'), ('2', '3')] >>> list(itertools.combinations('123', 3)) [('1', '2', '3')] >>> list(itertools.combinations('1234', 3)) [('1', '2', '3'), ('1', '2', '4'), ('1', '3', '4'), ('2', '3', '4')]\npermutations(iter[, r])\nreturns all the permutations of length r of the iterable\u2019s elements. 
If r is not specified, it will default to the number of elements produced by the iterable.>>> list(itertools.permutations([1,2,3,4], 2)) [(1, 2), (1, 3), (1, 4), (2, 1), (2, 3), (2, 4), (3, 1), (3, 2), (3, 4), (4, 1), (4, 2), (4, 3)]\nitertools.chain(*iterables)\nis an existing function initertools\nthat gained a new constructor in Python 2.6.itertools.chain.from_iterable(iterable)\ntakes a single iterable that should return other iterables.chain()\nwill then return all the elements of the first iterable, then all the elements of the second, and so on.>>> list(itertools.chain.from_iterable([[1,2,3], [4,5,6]])) [1, 2, 3, 4, 5, 6]\n(All contributed by Raymond Hettinger.)\nThe\nlogging\nmodule\u2019sFileHandler\nclass and its subclassesWatchedFileHandler\n,RotatingFileHandler\n, andTimedRotatingFileHandler\nnow have an optional delay parameter to their constructors. If delay is true, opening of the log file is deferred until the firstemit()\ncall is made. (Contributed by Vinay Sajip.)TimedRotatingFileHandler\nalso has a utc constructor parameter. If the argument is true, UTC time will be used in determining when midnight occurs and in generating filenames; otherwise local time will be used.Several new functions were added to the\nmath\nmodule:isinf()\nandisnan()\ndetermine whether a given float is a (positive or negative) infinity or a NaN (Not a Number), respectively.copysign()\ncopies the sign bit of an IEEE 754 number, returning the absolute value of x combined with the sign bit of y. For example,math.copysign(1, -0.0)\nreturns -1.0. (Contributed by Christian Heimes.)factorial()\ncomputes the factorial of a number. (Contributed by Raymond Hettinger; bpo-2138.)fsum()\nadds up the stream of numbers from an iterable, and is careful to avoid loss of precision through using partial sums. 
(Contributed by Jean Brouwers, Raymond Hettinger, and Mark Dickinson; bpo-2819.)acosh()\n,asinh()\nandatanh()\ncompute the inverse hyperbolic functions.log1p()\nreturns the natural logarithm of 1+x (base e).trunc()\nrounds a number toward zero, returning the closestIntegral\nthat\u2019s between the function\u2019s argument and zero. Added as part of the backport of PEP 3141\u2019s type hierarchy for numbers.\nThe\nmath\nmodule has been improved to give more consistent behaviour across platforms, especially with respect to handling of floating-point exceptions and IEEE 754 special values.Whenever possible, the module follows the recommendations of the C99 standard about 754\u2019s special values. For example,\nsqrt(-1.)\nshould now give aValueError\nacross almost all platforms, whilesqrt(float('NaN'))\nshould return a NaN on all IEEE 754 platforms. Where Annex \u2018F\u2019 of the C99 standard recommends signaling \u2018divide-by-zero\u2019 or \u2018invalid\u2019, Python will raiseValueError\n. Where Annex \u2018F\u2019 of the C99 standard recommends signaling \u2018overflow\u2019, Python will raiseOverflowError\n. (See bpo-711019 and bpo-1640.)(Contributed by Christian Heimes and Mark Dickinson.)\nmmap\nobjects now have arfind()\nmethod that searches for a substring beginning at the end of the string and searching backwards. Thefind()\nmethod also gained an end parameter giving an index at which to stop searching. (Contributed by John Lenton.)The\noperator\nmodule gained amethodcaller()\nfunction that takes a name and an optional set of arguments, returning a callable that will call the named function on any arguments passed to it. 
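A few of the new math functions described above, sketched with runnable checks (the exact results assume an IEEE 754 platform):

```python
import math

# Naive summation of ten 0.1's accumulates binary rounding error;
# fsum() tracks partial sums and returns the correctly rounded result.
values = [0.1] * 10
assert sum(values) != 1.0
assert math.fsum(values) == 1.0

# copysign() transfers only the sign bit, so it can see the sign of -0.0:
assert math.copysign(1, -0.0) == -1.0

# trunc() rounds toward zero in both directions:
assert math.trunc(-2.7) == -2 and math.trunc(2.7) == 2
```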
For example:>>> # Equivalent to lambda s: s.replace('old', 'new') >>> replacer = operator.methodcaller('replace', 'old', 'new') >>> replacer('old wine in old bottles') 'new wine in new bottles'\n(Contributed by Georg Brandl, after a suggestion by Gregory Petrosyan.)\nThe\nattrgetter()\nfunction now accepts dotted names and performs the corresponding attribute lookups:>>> inst_name = operator.attrgetter( ... '__class__.__name__') >>> inst_name('') 'str' >>> inst_name(help) '_Helper'\n(Contributed by Georg Brandl, after a suggestion by Barry Warsaw.)\nThe\nos\nmodule now wraps several new system calls.fchmod(fd, mode)\nandfchown(fd, uid, gid)\nchange the mode and ownership of an opened file, andlchmod(path, mode)\nchanges the mode of a symlink. (Contributed by Georg Brandl and Christian Heimes.)chflags()\nandlchflags()\nare wrappers for the corresponding system calls (where they\u2019re available), changing the flags set on a file. Constants for the flag values are defined in thestat\nmodule; some possible values includeUF_IMMUTABLE\nto signal the file may not be changed andUF_APPEND\nto indicate that data can only be appended to the file. (Contributed by M. Levinson.)os.closerange(low, high)\nefficiently closes all file descriptors from low to high, ignoring any errors and not including high itself. This function is now used by thesubprocess\nmodule to make starting processes faster. (Contributed by Georg Brandl; bpo-1663329.)The\nos.environ\nobject\u2019sclear()\nmethod will now unset the environment variables usingos.unsetenv()\nin addition to clearing the object\u2019s keys. (Contributed by Martin Horcicka; bpo-1181.)The\nos.walk()\nfunction now has afollowlinks\nparameter. If set to True, it will follow symlinks pointing to directories and visit the directory\u2019s contents. For backward compatibility, the parameter\u2019s default value is false. 
Note that the function can fall into an infinite recursion if there\u2019s a symlink that points to a parent directory. (bpo-1273829)In the\nos.path\nmodule, thesplitext()\nfunction has been changed to not split on leading period characters. This produces better results when operating on Unix\u2019s dot-files. For example,os.path.splitext('.ipython')\nnow returns('.ipython', '')\ninstead of('', '.ipython')\n. (bpo-1115886)A new function,\nos.path.relpath(path, start='.')\n, returns a relative path from thestart\npath, if it\u2019s supplied, or from the current working directory to the destinationpath\n. (Contributed by Richard Barran; bpo-1339796.)On Windows,\nos.path.expandvars()\nwill now expand environment variables given in the form \u201c%var%\u201d, and \u201c~user\u201d will be expanded into the user\u2019s home directory path. (Contributed by Josiah Carlson; bpo-957650.)The Python debugger provided by the\npdb\nmodule gained a new command: \u201crun\u201d restarts the Python program being debugged and can optionally take new command-line arguments for the program. (Contributed by Rocky Bernstein; bpo-1393667.)The\npdb.post_mortem()\nfunction, used to begin debugging a traceback, will now use the traceback returned bysys.exc_info()\nif no traceback is supplied. (Contributed by Facundo Batista; bpo-1106316.)The\npickletools\nmodule now has anoptimize()\nfunction that takes a string containing a pickle and removes some unused opcodes, returning a shorter pickle that contains the same data structure. (Contributed by Raymond Hettinger.)A\nget_data()\nfunction was added to thepkgutil\nmodule that returns the contents of resource files included with an installed Python package. 
For example:>>> import pkgutil >>> print pkgutil.get_data('test', 'exception_hierarchy.txt') BaseException +-- SystemExit +-- KeyboardInterrupt +-- GeneratorExit +-- Exception +-- StopIteration +-- StandardError ...\n(Contributed by Paul Moore; bpo-2439.)\nThe\npyexpat\nmodule\u2019sParser\nobjects now allow setting theirbuffer_size\nattribute to change the size of the buffer used to hold character data. (Contributed by Achim Gaedke; bpo-1137.)The\nQueue\nmodule now provides queue variants that retrieve entries in different orders. ThePriorityQueue\nclass stores queued items in a heap and retrieves them in priority order, andLifoQueue\nretrieves the most recently added entries first, meaning that it behaves like a stack. (Contributed by Raymond Hettinger.)The\nrandom\nmodule\u2019sRandom\nobjects can now be pickled on a 32-bit system and unpickled on a 64-bit system, and vice versa. Unfortunately, this change also means that Python 2.6\u2019sRandom\nobjects can\u2019t be unpickled correctly on earlier versions of Python. (Contributed by Shawn Ligocki; bpo-1727780.)The new\ntriangular(low, high, mode)\nfunction returns random numbers following a triangular distribution. The returned values are between low and high, not including high itself, and with mode as the most frequently occurring value in the distribution. (Contributed by Wladmir van der Laan and Raymond Hettinger; bpo-1681432.)Long regular expression searches carried out by the\nre\nmodule will check for signals being delivered, so time-consuming searches can now be interrupted. (Contributed by Josh Hoyt and Ralf Schmitt; bpo-846388.)The regular expression module is implemented by compiling bytecodes for a tiny regex-specific virtual machine. Untrusted code could create malicious strings of bytecode directly and cause crashes, so Python 2.6 includes a verifier for the regex bytecode. 
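The new queue variants described above can be sketched as follows (using the Python 3 module name queue; in 2.6 the module is spelled Queue):

```python
import queue

# PriorityQueue returns entries smallest-first:
pq = queue.PriorityQueue()
for entry in [(3, 'low'), (1, 'urgent'), (2, 'normal')]:
    pq.put(entry)
assert pq.get() == (1, 'urgent')
assert pq.get() == (2, 'normal')

# LifoQueue behaves like a stack, returning the most recent entry first:
lq = queue.LifoQueue()
for ch in 'abc':
    lq.put(ch)
assert [lq.get() for _ in range(3)] == ['c', 'b', 'a']
```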
(Contributed by Guido van Rossum from work for Google App Engine; bpo-3487.)\nThe\nrlcompleter\nmodule\u2019sCompleter.complete()\nmethod will now ignore exceptions triggered while evaluating a name. (Fixed by Lorenz Quack; bpo-2250.)The\nsched\nmodule\u2019sscheduler\ninstances now have a read-onlyqueue\nattribute that returns the contents of the scheduler\u2019s queue, represented as a list of named tuples with the fields(time, priority, action, argument)\n. (Contributed by Raymond Hettinger; bpo-1861.)The\nselect\nmodule now has wrapper functions for the Linuxepoll()\nand BSDkqueue()\nsystem calls.modify()\nmethod was added to the existingpoll\nobjects;pollobj.modify(fd, eventmask)\ntakes a file descriptor or file object and an event mask, modifying the recorded event mask for that file. (Contributed by Christian Heimes; bpo-1657.)The\nshutil.copytree()\nfunction now has an optional ignore argument that takes a callable object. This callable will receive each directory path and a list of the directory\u2019s contents, and returns a list of names that will be ignored, not copied.The\nshutil\nmodule also provides anignore_patterns()\nfunction for use with this new parameter.ignore_patterns()\ntakes an arbitrary number of glob-style patterns and returns a callable that will ignore any files and directories that match any of these patterns. The following example copies a directory tree, but skips both.svn\ndirectories and Emacs backup files, which have names ending with \u2018~\u2019:shutil.copytree('Doc/library', '/tmp/library', ignore=shutil.ignore_patterns('*~', '.svn'))\n(Contributed by Tarek Ziad\u00e9; bpo-2663.)\nIntegrating signal handling with GUI handling event loops like those used by Tkinter or GTk+ has long been a problem; most software ends up polling, waking up every fraction of a second to check if any GUI events have occurred. The\nsignal\nmodule can now make this more efficient. 
Callingsignal.set_wakeup_fd(fd)\nsets a file descriptor to be used; when a signal is received, a byte is written to that file descriptor. There\u2019s also a C-level function,PySignal_SetWakeupFd()\n, for setting the descriptor.Event loops will use this by opening a pipe to create two descriptors, one for reading and one for writing. The writable descriptor will be passed to\nset_wakeup_fd()\n, and the readable descriptor will be added to the list of descriptors monitored by the event loop viaselect()\norpoll()\n. On receiving a signal, a byte will be written and the main event loop will be woken up, avoiding the need to poll.(Contributed by Adam Olsen; bpo-1583.)\nThe\nsiginterrupt()\nfunction is now available from Python code, and allows changing whether signals can interrupt system calls or not. (Contributed by Ralf Schmitt.)The\nsetitimer()\nandgetitimer()\nfunctions have also been added (where they\u2019re available).setitimer()\nallows setting interval timers that will cause a signal to be delivered to the process after a specified time, measured in wall-clock time, consumed process time, or combined process+system time. (Contributed by Guilherme Polo; bpo-2240.)The\nsmtplib\nmodule now supports SMTP over SSL thanks to the addition of theSMTP_SSL\nclass. This class supports an interface identical to the existingSMTP\nclass. (Contributed by Monty Taylor.) Both class constructors also have an optionaltimeout\nparameter that specifies a timeout for the initial connection attempt, measured in seconds. (Contributed by Facundo Batista.)An implementation of the LMTP protocol (RFC 2033) was also added to the module. LMTP is used in place of SMTP when transferring e-mail between agents that don\u2019t manage a mail queue. (LMTP implemented by Leif Hedstrom; bpo-957003.)\nSMTP.starttls()\nnow complies with RFC 3207 and forgets any knowledge obtained from the server not obtained from the TLS negotiation itself. 
(Patch contributed by Bill Fenner; bpo-829951.)The\nsocket\nmodule now supports TIPC (https://tipc.sourceforge.net/), a high-performance non-IP-based protocol designed for use in clustered environments. TIPC addresses are 4- or 5-tuples. (Contributed by Alberto Bertogli; bpo-1646.)A new function,\ncreate_connection()\n, takes an address and connects to it using an optional timeout value, returning the connected socket object. This function also looks up the address\u2019s type and connects to it using IPv4 or IPv6 as appropriate. Changing your code to usecreate_connection()\ninstead ofsocket(socket.AF_INET, ...)\nmay be all that\u2019s required to make your code work with IPv6.The base classes in the\nSocketServer\nmodule now support calling ahandle_timeout()\nmethod after a span of inactivity specified by the server\u2019stimeout\nattribute. (Contributed by Michael Pomraning.) Theserve_forever()\nmethod now takes an optional poll interval measured in seconds, controlling how often the server will check for a shutdown request. (Contributed by Pedro Werneck and Jeffrey Yasskin; bpo-742598, bpo-1193577.)The\nsqlite3\nmodule, maintained by Gerhard H\u00e4ring, has been updated from version 2.3.2 in Python 2.5 to version 2.4.1.The\nstruct\nmodule now supports the C99 _Bool type, using the format character'?'\n. (Contributed by David Remahl.)The\nPopen\nobjects provided by thesubprocess\nmodule now haveterminate()\n,kill()\n, andsend_signal()\nmethods. On Windows,send_signal()\nonly supports theSIGTERM\nsignal, and all these methods are aliases for the Win32 API functionTerminateProcess()\n. (Contributed by Christian Heimes.)A new variable in the\nsys\nmodule,float_info\n, is an object containing information derived from thefloat.h\nfile about the platform\u2019s floating-point support. Attributes of this object includemant_dig\n(number of digits in the mantissa),epsilon\n(smallest difference between 1.0 and the next largest value representable), and several others. 
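The struct '?' format character described above packs a C99 _Bool; a minimal check, using standard ('<'-prefixed) sizes for a deterministic one-byte layout:

```python
import struct

# In standard size mode, '?' is exactly one byte.
assert struct.calcsize('<?') == 1
assert struct.pack('<??', True, False) == b'\x01\x00'

# Any nonzero byte unpacks as True:
flag, = struct.unpack('<?', b'\x07')
assert flag is True
```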
(Contributed by Christian Heimes; bpo-1534.)Another new variable,\ndont_write_bytecode\n, controls whether Python writes any.pyc\nor.pyo\nfiles on importing a module. If this variable is true, the compiled files are not written. The variable is initially set on start-up by supplying the-B\nswitch to the Python interpreter, or by setting thePYTHONDONTWRITEBYTECODE\nenvironment variable before running the interpreter. Python code can subsequently change the value of this variable to control whether bytecode files are written or not. (Contributed by Neal Norwitz and Georg Brandl.)Information about the command-line arguments supplied to the Python interpreter is available by reading attributes of a named tuple available as\nsys.flags\n. For example, theverbose\nattribute is true if Python was executed in verbose mode,debug\nis true in debugging mode, etc. These attributes are all read-only. (Contributed by Christian Heimes.)A new function,\ngetsizeof()\n, takes a Python object and returns the amount of memory used by the object, measured in bytes. Built-in objects return correct results; third-party extensions may not, but can define a__sizeof__()\nmethod to return the object\u2019s size. (Contributed by Robert Schuppenies; bpo-2898.)It\u2019s now possible to determine the current profiler and tracer functions by calling\nsys.getprofile()\nandsys.gettrace()\n. (Contributed by Georg Brandl; bpo-1648.)The\ntarfile\nmodule now supports POSIX.1-2001 (pax) tarfiles in addition to the POSIX.1-1988 (ustar) and GNU tar formats that were already supported. 
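The new sys attributes described above can be inspected directly; a short sketch, assuming float is an IEEE 754 double:

```python
import sys

# float_info mirrors the C float.h constants:
assert sys.float_info.mant_dig == 53           # 53-bit mantissa precision
assert 1.0 + sys.float_info.epsilon > 1.0      # smallest representable gap at 1.0

# getsizeof() reports memory use in bytes; a bigger payload costs more:
assert sys.getsizeof(b'x' * 100) > sys.getsizeof(b'')

# sys.flags is a read-only named tuple of interpreter options:
assert isinstance(sys.flags.verbose, int)
```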
The default format is GNU tar; specify theformat\nparameter to open a file using a different format:tar = tarfile.open(\"output.tar\", \"w\", format=tarfile.PAX_FORMAT)\nThe new\nencoding\nanderrors\nparameters specify an encoding and an error handling scheme for character conversions.'strict'\n,'ignore'\n, and'replace'\nare the three standard ways Python can handle errors;'utf-8'\nis a special value that replaces bad characters with their UTF-8 representation. (Character conversions occur because the PAX format supports Unicode filenames, defaulting to UTF-8 encoding.)The\nTarFile.add()\nmethod now accepts anexclude\nargument that\u2019s a function that can be used to exclude certain filenames from an archive. The function must take a filename and return true if the file should be excluded or false if it should be archived. The function is applied to both the name initially passed toadd()\nand to the names of files in recursively added directories.(All changes contributed by Lars Gust\u00e4bel).\nAn optional\ntimeout\nparameter was added to thetelnetlib.Telnet\nclass constructor, specifying a timeout measured in seconds. (Added by Facundo Batista.)The\ntempfile.NamedTemporaryFile\nclass usually deletes the temporary file it created when the file is closed. This behaviour can now be changed by passingdelete=False\nto the constructor. (Contributed by Damien Miller; bpo-1537850.)A new class,\nSpooledTemporaryFile\n, behaves like a temporary file but stores its data in memory until a maximum size is exceeded. On reaching that limit, the contents will be written to an on-disk temporary file. (Contributed by Dustin J. Mitchell.)The\nNamedTemporaryFile\nandSpooledTemporaryFile\nclasses both work as context managers, so you can writewith tempfile.NamedTemporaryFile() as tmp: ...\n. 
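The tempfile behaviours described above can be sketched as follows (Python 3 syntax):

```python
import os
import tempfile

# SpooledTemporaryFile keeps its data in memory until max_size is exceeded,
# then transparently spills over to a real on-disk temporary file.
with tempfile.SpooledTemporaryFile(max_size=16) as spool:
    spool.write(b'x' * 100)          # 100 bytes > 16, so it rolls over to disk
    spool.seek(0)
    assert spool.read() == b'x' * 100

# delete=False keeps the named file after close; clean it up manually.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b'data')
tmp.close()
assert os.path.exists(tmp.name)
os.unlink(tmp.name)
```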
(Contributed by Alexander Belopolsky; bpo-2021.)\nThe test.test_support module gained a number of context managers useful for writing tests. EnvironmentVarGuard() is a context manager that temporarily changes environment variables and automatically restores them to their old values.\nAnother context manager, TransientResource, can surround calls to resources that may or may not be available; it will catch and ignore a specified list of exceptions. For example, a network test may ignore certain failures when connecting to an external web site:\nwith test_support.TransientResource(IOError, errno=errno.ETIMEDOUT):\n    f = urllib.urlopen('https://sf.net')\n    ...\nFinally, check_warnings() resets the warnings module\u2019s warning filters and returns an object that will record all warning messages triggered (bpo-3781):\nwith test_support.check_warnings() as wrec:\n    warnings.simplefilter(\"always\")\n    # ... code that triggers a warning ...\n    assert str(wrec.message) == \"function is outdated\"\n    assert len(wrec.warnings) == 1, \"Multiple warnings raised\"\n(Contributed by Brett Cannon.)\nThe textwrap module can now preserve existing whitespace at the beginnings and ends of the newly created lines by specifying drop_whitespace=False as an argument:\n>>> S = \"\"\"This sentence has a bunch of\n...     extra whitespace.\"\"\"\n>>> print textwrap.fill(S, width=15)\nThis sentence\nhas a bunch of\nextra\nwhitespace.\n>>> print textwrap.fill(S, drop_whitespace=False, width=15)\nThis sentence\nhas a bunch of\n     extra\nwhitespace.\n>>>\n(Contributed by Dwayne Bailey; bpo-1581073.)\nThe threading module API is being changed to use properties such as daemon instead of setDaemon() and isDaemon() methods, and some methods have been renamed to use underscores instead of camel-case; for example, the activeCount() method is renamed to active_count(). Both the 2.6 and 3.0 versions of the module support the same properties and renamed methods, but don\u2019t remove the old methods.
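The renamed threading API can be seen in a minimal sketch (this also runs on modern Python, where the property and underscore spellings became the canonical ones):

```python
import threading

def worker():
    pass  # placeholder task

t = threading.Thread(target=worker)
t.daemon = False                    # property form of setDaemon()/isDaemon()
t.start()
t.join()

thread_id = t.ident                 # the ident property: a nonzero integer
count = threading.active_count()    # underscore name, renamed from activeCount()
```

The `ident` value remains available even after the thread has finished.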
No date has been set for the deprecation of the old APIs in Python 3.x; the old APIs won\u2019t be removed in any 2.x version. (Carried out by several people, most notably Benjamin Peterson.)\nThe threading module\u2019s Thread objects gained an ident property that returns the thread\u2019s identifier, a nonzero integer. (Contributed by Gregory P. Smith; bpo-2871.)\nThe timeit module now accepts callables as well as strings for the statement being timed and for the setup code. Two convenience functions were added for creating Timer instances: repeat(stmt, setup, timer, repeat, number) and timeit(stmt, setup, timer, number) create an instance and call the corresponding method. (Contributed by Erik Demaine; bpo-1533909.)\nThe Tkinter module now accepts lists and tuples for options, separating the elements by spaces before passing the resulting value to Tcl/Tk. (Contributed by Guilherme Polo; bpo-2906.)\nThe turtle module for turtle graphics was greatly enhanced by Gregor Lingl. New features in the module include:\nBetter animation of turtle movement and rotation.\nControl over turtle movement using the new delay(), tracer(), and speed() methods.\nThe ability to set new shapes for the turtle, and to define a new coordinate system.\nTurtles now have an undo() method that can roll back actions.\nSimple support for reacting to input events such as mouse and keyboard activity, making it possible to write simple games.\nA turtle.cfg file can be used to customize the starting appearance of the turtle\u2019s screen.\nThe module\u2019s docstrings can be replaced by new docstrings that have been translated into another language.\nAn optional timeout parameter was added to the urllib.urlopen function and the urllib.ftpwrapper class constructor, as well as the urllib2.urlopen function. The parameter specifies a timeout measured in seconds. For example:\n>>> u = urllib2.urlopen(\"http://slow.example.com\", timeout=3)\nTraceback (most recent call last):\n...
urllib2.URLError: <urlopen error timed out>\n>>>\n(Added by Facundo Batista.)\nThe Unicode database provided by the unicodedata module has been updated to version 5.1.0. (Updated by Martin von L\u00f6wis; bpo-3811.)\nThe warnings module\u2019s formatwarning() and showwarning() gained an optional line argument that can be used to supply the line of source code. (Added as part of bpo-1631171, which re-implemented part of the warnings module in C code.)\nA new function, catch_warnings(), is a context manager intended for testing purposes that lets you temporarily modify the warning filters and then restore their original values (bpo-3781).\nThe XML-RPC SimpleXMLRPCServer and DocXMLRPCServer classes can now be prevented from immediately opening and binding to their socket by passing False as the bind_and_activate constructor parameter. This can be used to modify the instance\u2019s allow_reuse_address attribute before calling the server_bind() and server_activate() methods to open the socket and begin listening for connections. (Contributed by Peter Parente; bpo-1599845.)\nSimpleXMLRPCServer also has a _send_traceback_header attribute; if true, the exception and formatted traceback are returned as HTTP headers \u201cX-Exception\u201d and \u201cX-Traceback\u201d. This feature is for debugging purposes only and should not be used on production servers because the tracebacks might reveal passwords or other sensitive information. (Contributed by Alan McIntyre as part of his project for Google\u2019s Summer of Code 2007.)\nThe xmlrpclib module no longer automatically converts datetime.date and datetime.time to the xmlrpclib.DateTime type; the conversion semantics were not necessarily correct for all applications. Code using xmlrpclib should convert date and time instances.
(bpo-1330538) The code can also handle dates before 1900 (contributed by Ralf Schmitt; bpo-2014) and 64-bit integers represented by using <i8> in XML-RPC responses (contributed by Riku Lindblad; bpo-2985).\nThe zipfile module\u2019s ZipFile class now has extract() and extractall() methods that will unpack a single file or all the files in the archive to the current directory, or to a specified directory:\nz = zipfile.ZipFile('python-251.zip')\n\n# Unpack a single file, writing it relative\n# to the /tmp directory.\nz.extract('Python/sysmodule.c', '/tmp')\n\n# Unpack all the files in the archive.\nz.extractall()\n(Contributed by Alan McIntyre; bpo-467924.)\nThe open(), read() and extract() methods can now take either a filename or a ZipInfo object. This is useful when an archive accidentally contains a duplicated filename. (Contributed by Graham Horler; bpo-1775025.)\nFinally, zipfile now supports using Unicode filenames for archived files. (Contributed by Alexey Borzenkov; bpo-1734346.)\nThe ast module\u00b6\nThe ast module provides an Abstract Syntax Tree representation of Python code, and Armin Ronacher contributed a set of helper functions that perform a variety of common tasks.
These will be useful for HTML templating packages, code analyzers, and similar tools that process Python code.\nThe parse() function takes an expression and returns an AST. The dump() function outputs a representation of a tree, suitable for debugging:\nimport ast\n\nt = ast.parse(\"\"\"\nd = {}\nfor i in 'abcdefghijklm':\n    d[i + i] = ord(i) - ord('a') + 1\nprint d\n\"\"\")\nprint ast.dump(t)\nThis outputs a deeply nested tree:\nModule(body=[\n  Assign(targets=[\n    Name(id='d', ctx=Store())\n  ], value=Dict(keys=[], values=[])),\n  For(target=Name(id='i', ctx=Store()),\n      iter=Str(s='abcdefghijklm'), body=[\n    Assign(targets=[\n      Subscript(value=\n        Name(id='d', ctx=Load()),\n        slice=\n        Index(value=\n          BinOp(left=Name(id='i', ctx=Load()), op=Add(),\n                right=Name(id='i', ctx=Load()))), ctx=Store())\n    ], value=\n      BinOp(left=\n        BinOp(left=\n          Call(func=\n            Name(id='ord', ctx=Load()), args=[\n            Name(id='i', ctx=Load())\n          ], keywords=[], starargs=None, kwargs=None),\n          op=Sub(), right=Call(func=\n            Name(id='ord', ctx=Load()), args=[\n            Str(s='a')\n          ], keywords=[], starargs=None, kwargs=None)),\n        op=Add(), right=Num(n=1)))\n  ], orelse=[]),\n  Print(dest=None, values=[\n    Name(id='d', ctx=Load())\n  ], nl=True)\n])\nThe literal_eval() method takes a string or an AST representing a literal expression, parses and evaluates it, and returns the resulting value. A literal expression is a Python expression containing only strings, numbers, dictionaries, etc. but no statements or function calls.
If you need to evaluate an expression but cannot accept the security risk of using an eval() call, literal_eval() will handle it safely:\n>>> literal = '(\"a\", \"b\", {2:4, 3:8, 1:2})'\n>>> print ast.literal_eval(literal)\n('a', 'b', {1: 2, 2: 4, 3: 8})\n>>> print ast.literal_eval('\"a\" + \"b\"')\nTraceback (most recent call last):\n...\nValueError: malformed string\nThe module also includes NodeVisitor and NodeTransformer classes for traversing and modifying an AST, and functions for common transformations such as changing line numbers.\nThe future_builtins module\u00b6\nPython 3.0 makes many changes to the repertoire of built-in functions, and most of the changes can\u2019t be introduced in the Python 2.x series because they would break compatibility. The future_builtins module provides versions of these built-in functions that can be imported when writing 3.0-compatible code.\nThe functions in this module currently include:\nascii(obj): equivalent to repr(). In Python 3.0, repr() will return a Unicode string, while ascii() will return a pure ASCII bytestring.\nfilter(predicate, iterable), map(func, iterable1, ...): the 3.0 versions return iterators, unlike the 2.x builtins which return lists.\nhex(value), oct(value): instead of calling the __hex__() or __oct__() methods, these versions will call the __index__() method and convert the result to hexadecimal or octal. oct() will use the new 0o notation for its result.\nThe json module: JavaScript Object Notation\u00b6\nThe new json module supports the encoding and decoding of Python types in JSON (JavaScript Object Notation). JSON is a lightweight interchange format often used in web applications. For more information about JSON, see http://www.json.org.\njson comes with support for decoding and encoding most built-in Python types.
The following example encodes and decodes a dictionary:\n>>> import json\n>>> data = {\"spam\": \"foo\", \"parrot\": 42}\n>>> in_json = json.dumps(data) # Encode the data\n>>> in_json\n'{\"parrot\": 42, \"spam\": \"foo\"}'\n>>> json.loads(in_json) # Decode into a Python object\n{\"spam\": \"foo\", \"parrot\": 42}\nIt\u2019s also possible to write your own decoders and encoders to support more types. Pretty-printing of the JSON strings is also supported.\njson (originally called simplejson) was written by Bob Ippolito.\nThe plistlib module: A Property-List Parser\u00b6\nThe .plist format is commonly used on Mac OS X to store basic data types (numbers, strings, lists, and dictionaries) by serializing them into an XML-based format. It resembles the XML-RPC serialization of data types.\nDespite being primarily used on Mac OS X, the format has nothing Mac-specific about it and the Python implementation works on any platform that Python supports, so the plistlib module has been promoted to the standard library.\nUsing the module is simple:\nimport sys\nimport plistlib\nimport datetime\n\n# Create data structure\ndata_struct = dict(lastAccessed=datetime.datetime.now(),\n                   version=1,\n                   categories=('Personal', 'Shared', 'Private'))\n\n# Create string containing XML.\nplist_str = plistlib.writePlistToString(data_struct)\nnew_struct = plistlib.readPlistFromString(plist_str)\nprint data_struct\nprint new_struct\n\n# Write data structure to a file and read it back.\nplistlib.writePlist(data_struct, '/tmp/customizations.plist')\nnew_struct = plistlib.readPlist('/tmp/customizations.plist')\n\n# read/writePlist accepts file-like objects as well as paths.\nplistlib.writePlist(data_struct, sys.stdout)\nctypes Enhancements\u00b6\nThomas Heller continued to maintain and enhance the ctypes module.\nctypes now supports a c_bool datatype that represents the C99 bool type.
(Contributed by David Remahl; bpo-1649190.)\nThe ctypes string, buffer and array types have improved support for extended slicing syntax, where various combinations of (start, stop, step) are supplied. (Implemented by Thomas Wouters.)\nAll ctypes data types now support from_buffer() and from_buffer_copy() methods that create a ctypes instance based on a provided buffer object. from_buffer_copy() copies the contents of the object, while from_buffer() will share the same memory area.\nA new calling convention tells ctypes to clear the errno or Win32 LastError variables at the outset of each wrapped call. (Implemented by Thomas Heller; bpo-1798.)\nYou can now retrieve the Unix errno variable after a function call. When creating a wrapped function, you can supply use_errno=True as a keyword parameter to the DLL() function and then call the module-level methods set_errno() and get_errno() to set and retrieve the error value.\nThe Win32 LastError variable is similarly supported by the DLL(), OleDLL(), and WinDLL() functions. You supply use_last_error=True as a keyword parameter and then call the module-level methods set_last_error() and get_last_error().\nThe byref() function, used to retrieve a pointer to a ctypes instance, now has an optional offset parameter that is a byte count that will be added to the returned pointer.\nImproved SSL Support\u00b6\nBill Janssen made extensive improvements to Python 2.6\u2019s support for the Secure Sockets Layer by adding a new module, ssl, that\u2019s built atop the OpenSSL library. This new module provides more control over the protocol negotiated, the X.509 certificates used, and has better support for writing SSL servers (as opposed to clients) in Python.
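In outline, the connect-then-wrap flow looks like this (a sketch rather than the original 2.6 API: it uses the later ssl.SSLContext interface, and example.org is only a placeholder host):

```python
import socket
import ssl

# A default client context: verifies certificates and host names.
ctx = ssl.create_default_context()

def fetch_peer_cert(host, port=443):
    """Connect, wrap the TCP socket in TLS, and return the peer certificate."""
    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            return tls.getpeercert()

# fetch_peer_cert("example.org")  # needs network access, so not run here
```

The TCP connection is made first in the usual way; the wrap step then negotiates TLS and makes the certificate available via getpeercert().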
The existing SSL support in the socket module hasn\u2019t been removed and continues to work, though it will be removed in Python 3.0.\nTo use the new module, you must first create a TCP connection in the usual way and then pass it to the ssl.wrap_socket() function. It\u2019s possible to specify whether a certificate is required, and to obtain certificate info by calling the getpeercert() method.\nSee also\nThe documentation for the ssl module.\nDeprecations and Removals\u00b6\nString exceptions have been removed. Attempting to use them raises a TypeError.\nChanges to the Exception interface as dictated by PEP 352 continue to be made. For 2.6, the message attribute is being deprecated in favor of the args attribute.\n(3.0-warning mode) Python 3.0 will feature a reorganized standard library that will drop many outdated modules and rename others. Python 2.6 running in 3.0-warning mode will warn about these modules when they are imported.\nThe list of deprecated modules is: audiodev, bgenlocations, buildtools, bundlebuilder, Canvas, compiler, dircache, dl, fpformat, gensuitemodule, ihooks, imageop, imgfile, linuxaudiodev, mhlib, mimetools, multifile, new, pure, statvfs, sunaudiodev, test.testall, and toaiff.\nThe gopherlib module has been removed.\nThe MimeWriter module and mimify module have been deprecated; use the email package instead.\nThe md5 module has been deprecated; use the hashlib module instead.\nThe posixfile module has been deprecated; fcntl.lockf() provides better locking.\nThe popen2 module has been deprecated; use the subprocess module.\nThe rgbimg module has been removed.\nThe sets module has been deprecated; it\u2019s better to use the built-in set and frozenset types.\nThe sha module has been deprecated; use the hashlib module instead.\nBuild and C API Changes\u00b6\nChanges to Python\u2019s build process and to the C API include:\nPython now must be compiled with C89 compilers (after 19 years!).
This means that the Python source tree has dropped its own implementations of memmove() and strerror(), which are in the C89 standard library.\nPython 2.6 can be built with Microsoft Visual Studio 2008 (version 9.0), and this is the new default compiler. See the PCbuild directory for the build files. (Implemented by Christian Heimes.)\nOn Mac OS X, Python 2.6 can be compiled as a 4-way universal build. The configure script can take a --with-universal-archs=[32-bit|64-bit|all] switch, controlling whether the binaries are built for 32-bit architectures (x86, PowerPC), 64-bit (x86-64 and PPC-64), or both. (Contributed by Ronald Oussoren.)\nA new function added in Python 2.6.6, PySys_SetArgvEx(), sets the value of sys.argv and can optionally update sys.path to include the directory containing the script named by sys.argv[0], depending on the value of an updatepath parameter.\nThis function was added to close a security hole for applications that embed Python. The old function, PySys_SetArgv(), would always update sys.path, and sometimes it would add the current directory. This meant that, if you ran an application embedding Python in a directory controlled by someone else, attackers could put a Trojan-horse module in the directory (say, a file named os.py) that your application would then import and run.\nIf you maintain a C/C++ application that embeds Python, check whether you\u2019re calling PySys_SetArgv() and carefully consider whether the application should be using PySys_SetArgvEx() with updatepath set to false.
Note that using this function will break compatibility with Python versions 2.6.5 and earlier; if you have to continue working with earlier versions, you can leave the call to PySys_SetArgv() alone and call PyRun_SimpleString(\"sys.path.pop(0)\\n\") afterwards to discard the first sys.path component.\nSecurity issue reported as CVE-2008-5983; discussed in gh-50003, and fixed by Antoine Pitrou.\nThe BerkeleyDB module now has a C API object, available as bsddb.db.api. This object can be used by other C extensions that wish to use the bsddb module for their own purposes. (Contributed by Duncan Grisby.)\nThe new buffer interface, previously described in the PEP 3118 section, adds PyObject_GetBuffer() and PyBuffer_Release(), as well as a few other functions.\nPython\u2019s use of the C stdio library is now thread-safe, or at least as thread-safe as the underlying library is. A long-standing potential bug occurred if one thread closed a file object while another thread was reading from or writing to the object. In 2.6 file objects have a reference count, manipulated by the PyFile_IncUseCount() and PyFile_DecUseCount() functions. File objects can\u2019t be closed unless the reference count is zero. PyFile_IncUseCount() should be called while the GIL is still held, before carrying out an I/O operation using the FILE * pointer, and PyFile_DecUseCount() should be called immediately after the GIL is re-acquired. (Contributed by Antoine Pitrou and Gregory P. Smith.)\nImporting modules simultaneously in two different threads no longer deadlocks; it will now raise an ImportError. A new API function, PyImport_ImportModuleNoBlock(), will look for a module in sys.modules first, then try to import it after acquiring an import lock. If the import lock is held by another thread, an ImportError is raised.
(Contributed by Christian Heimes.)\nSeveral functions return information about the platform\u2019s floating-point support. PyFloat_GetMax() returns the maximum representable floating-point value, and PyFloat_GetMin() returns the minimum positive value. PyFloat_GetInfo() returns an object containing more information from the float.h file, such as \"mant_dig\" (number of digits in the mantissa), \"epsilon\" (smallest difference between 1.0 and the next largest value representable), and several others. (Contributed by Christian Heimes; bpo-1534.)\nC functions and methods that use PyComplex_AsCComplex() will now accept arguments that have a __complex__() method. In particular, the functions in the cmath module will now accept objects with this method. This is a backport of a Python 3.0 change. (Contributed by Mark Dickinson; bpo-1675423.)\nPython\u2019s C API now includes two functions for case-insensitive string comparisons, PyOS_stricmp(char*, char*) and PyOS_strnicmp(char*, char*, Py_ssize_t). (Contributed by Christian Heimes; bpo-1635.)\nMany C extensions define their own little macro for adding integers and strings to the module\u2019s dictionary in the init* function. Python 2.6 finally defines standard macros for adding values to a module, PyModule_AddStringMacro and PyModule_AddIntMacro(). (Contributed by Christian Heimes.)\nSome macros were renamed in both 3.0 and 2.6 to make it clearer that they are macros, not functions. Py_Size() became Py_SIZE(), Py_Type() became Py_TYPE(), and Py_Refcnt() became Py_REFCNT(). The mixed-case macros are still available in Python 2.6 for backward compatibility. (bpo-1629)\nDistutils now places C extensions it builds in a different directory when running on a debug version of Python. (Contributed by Collin Winter; bpo-1530959.)\nSeveral basic data types, such as integers and strings, maintain internal free lists of objects that can be re-used.
The data structures for these free lists now follow a naming convention: the variable is always named free_list, the counter is always named numfree, and a macro Py_MAXFREELIST is always defined.\nA new Makefile target, \u201cmake patchcheck\u201d, prepares the Python source tree for making a patch: it fixes trailing whitespace in all modified .py files, checks whether the documentation has been changed, and reports whether the Misc/ACKS and Misc/NEWS files have been updated. (Contributed by Brett Cannon.)\nAnother new target, \u201cmake profile-opt\u201d, compiles a Python binary using GCC\u2019s profile-guided optimization. It compiles Python with profiling enabled, runs the test suite to obtain a set of profiling results, and then compiles using these results for optimization. (Contributed by Gregory P. Smith.)\nPort-Specific Changes: Windows\u00b6\nThe support for Windows 95, 98, ME and NT4 has been dropped. Python 2.6 requires at least Windows 2000 SP4.\nThe new default compiler on Windows is Visual Studio 2008 (version 9.0). The build directories for Visual Studio 2003 (version 7.1) and 2005 (version 8.0) were moved into the PC/ directory. The new PCbuild directory supports cross compilation for X64, debug builds and Profile Guided Optimization (PGO). PGO builds are roughly 10% faster than normal builds. (Contributed by Christian Heimes with help from Amaury Forgeot d\u2019Arc and Martin von L\u00f6wis.)\nThe msvcrt module now supports both the normal and wide char variants of the console I/O API. The getwch() function reads a keypress and returns a Unicode value, as does the getwche() function. The putwch() function takes a Unicode character and writes it to the console. (Contributed by Christian Heimes.)\nos.path.expandvars() will now expand environment variables in the form \u201c%var%\u201d, and \u201c~user\u201d will be expanded into the user\u2019s home directory path.
(Contributed by Josiah Carlson; bpo-957650.)\nThe socket module\u2019s socket objects now have an ioctl() method that provides a limited interface to the WSAIoctl() system interface.\nThe _winreg module now has a function, ExpandEnvironmentStrings(), that expands environment variable references such as %NAME% in an input string. The handle objects provided by this module now support the context protocol, so they can be used in with statements. (Contributed by Christian Heimes.)\n_winreg also has better support for x64 systems, exposing the DisableReflectionKey(), EnableReflectionKey(), and QueryReflectionKey() functions, which enable and disable registry reflection for 32-bit processes running on 64-bit systems. (bpo-1753245)\nThe msilib module\u2019s Record object gained GetInteger() and GetString() methods that return field values as an integer or a string. (Contributed by Floris Bruynooghe; bpo-2125.)\nPort-Specific Changes: Mac OS X\u00b6\nWhen compiling a framework build of Python, you can now specify the framework name to be used by providing the --with-framework-name= option to the configure script.\nThe macfs module has been removed. This in turn required the macostools.touched() function to be removed because it depended on the macfs module.
(bpo-1490190)\nMany other Mac OS modules have been deprecated and will be removed in Python 3.0: _builtinSuites, aepack, aetools, aetypes, applesingle, appletrawmain, appletrunner, argvemulator, Audio_mac, autoGIL, Carbon, cfmfile, CodeWarrior, ColorPicker, EasyDialogs, Explorer, Finder, FrameWork, findertools, ic, icglue, icopen, macerrors, MacOS, macfs, macostools, macresource, MiniAEFrame, Nav, Netscape, OSATerminology, pimp, PixMapWrapper, StdSuites, SystemEvents, Terminal, and terminalcommand.\nPort-Specific Changes: IRIX\u00b6\nA number of old IRIX-specific modules were deprecated and will be removed in Python 3.0: al and AL, cd, cddb, cdplayer, CL and cl, DEVICE, ERRNO, FILE, FL and fl, flp, fm, GET, GLWS, GL and gl, IN, IOCTL, jpeg, panelparser, readcd, SV and sv, torgb, videoreader, and WAIT.\nPorting to Python 2.6\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code:\nClasses that aren\u2019t supposed to be hashable should set __hash__ = None in their definitions to indicate the fact.\nString exceptions have been removed. Attempting to use them raises a TypeError.\nThe __init__() method of collections.deque now clears any existing contents of the deque before adding elements from the iterable. This change makes the behavior match list.__init__().\nobject.__init__() previously accepted arbitrary arguments and keyword arguments, ignoring them. In Python 2.6, this is no longer allowed and will result in a TypeError. This will affect __init__() methods that end up calling the corresponding method on object (perhaps through using super()). See bpo-1683368 for discussion.\nThe Decimal constructor now accepts leading and trailing whitespace when passed a string. Previously it would raise an InvalidOperation exception.
On the other hand, the create_decimal() method of Context objects now explicitly disallows extra whitespace, raising a ConversionSyntax exception.\nDue to an implementation accident, if you passed a file path to the built-in __import__() function, it would actually import the specified file. This was never intended to work, however, and the implementation now explicitly checks for this case and raises an ImportError.\nC API: the PyImport_Import() and PyImport_ImportModule() functions now default to absolute imports, not relative imports. This will affect C extensions that import other modules.\nC API: extension data types that shouldn\u2019t be hashable should define their tp_hash slot to PyObject_HashNotImplemented().\nThe socket module exception socket.error now inherits from IOError. Previously it wasn\u2019t a subclass of StandardError but now it is, through IOError. (Implemented by Gregory P. Smith; bpo-1706815.)\nThe xmlrpclib module no longer automatically converts datetime.date and datetime.time to the xmlrpclib.DateTime type; the conversion semantics were not necessarily correct for all applications. Code using xmlrpclib should convert date and time instances. (bpo-1330538)\n(3.0-warning mode) The Exception class now warns when accessed using slicing or index access; having Exception behave like a tuple is being phased out.\n(3.0-warning mode) Inequality comparisons between two dictionaries or two objects that don\u2019t implement comparison methods are reported as warnings. dict1 == dict2 still works, but dict1 < dict2 is being phased out.\nComparisons between cells, which are an implementation detail of Python\u2019s scoping rules, also cause warnings because such comparisons are forbidden entirely in 3.0.\nFor applications that embed Python:\nThe PySys_SetArgvEx() function was added in Python 2.6.6, letting applications close a security hole when the existing PySys_SetArgv() function was used.
Check whether you\u2019re calling PySys_SetArgv() and carefully consider whether the application should be using PySys_SetArgvEx() with updatepath set to false.\nAcknowledgements\u00b6\nThe author would like to thank the following people for offering suggestions, corrections and assistance with various drafts of this article: Georg Brandl, Steve Brown, Nick Coghlan, Ralph Corderoy, Jim Jewett, Kent Johnson, Chris Lambacher, Martin Michlmayr, Antoine Pitrou, Brian Warner.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 27955}
This means that while Python 2 continues to receive bug fixes, and to be updated to build correctly on new hardware and versions of supported operating systems, there will be no new full feature releases for the language or standard library.\nHowever, while there is a large common subset between Python 2.7 and Python 3, and many of the changes involved in migrating to that common subset, or directly to Python 3, can be safely automated, some other changes (notably those associated with Unicode handling) may require careful consideration, and preferably robust automated regression test suites, to migrate effectively.\nThis means that Python 2.7 will remain in place for a long time, providing a stable and supported base platform for production systems that have not yet been ported to Python 3. The full expected lifecycle of the Python 2.7 series is detailed in PEP 373.\nSome key consequences of the long-term significance of 2.7 are:\nAs noted above, the 2.7 release has a much longer period of maintenance when compared to earlier 2.x versions. Python 2.7 is currently expected to remain supported by the core development team (receiving security updates and other bug fixes) until at least 2020 (10 years after its initial release, compared to the more typical support period of 18–24 months).\nAs the Python 2.7 standard library ages, making effective use of the Python Package Index (either directly or via a redistributor) becomes more important for Python 2 users. In addition to a wide variety of third party packages for various tasks, the available packages include backports of new modules and features from the Python 3 standard library that are compatible with Python 2, as well as various tools and libraries that can make it easier to migrate to Python 3. 
The Python Packaging User Guide provides guidance on downloading and installing software from the Python Package Index.\nWhile the preferred approach to enhancing Python 2 is now the publication of new packages on the Python Package Index, this approach doesn\u2019t necessarily work in all cases, especially those related to network security. In exceptional cases that cannot be handled adequately by publishing new or updated packages on PyPI, the Python Enhancement Proposal process may be used to make the case for adding new features directly to the Python 2 standard library. Any such additions, and the maintenance releases where they were added, will be noted in the New Features Added to Python 2.7 Maintenance Releases section below.\nFor projects wishing to migrate from Python 2 to Python 3, or for library and framework developers wishing to support users on both Python 2 and Python 3, there are a variety of tools and guides available to help decide on a suitable approach and manage some of the technical details involved. The recommended starting point is the How to port Python 2 Code to Python 3 HOWTO guide.\nChanges to the Handling of Deprecation Warnings\u00b6\nFor Python 2.7, a policy decision was made to silence warnings only of\ninterest to developers by default. DeprecationWarning\nand its\ndescendants are now ignored unless otherwise requested, preventing\nusers from seeing warnings triggered by an application. This change\nwas also made in the branch that became Python 3.2. (Discussed\non stdlib-sig and carried out in bpo-7319.)\nIn previous releases, DeprecationWarning\nmessages were\nenabled by default, providing Python developers with a clear\nindication of where their code may break in a future major version\nof Python.\nHowever, there are increasingly many users of Python-based\napplications who are not directly involved in the development of\nthose applications. 
DeprecationWarning\nmessages are\nirrelevant to such users, making them worry about an application\nthat’s actually working correctly and burdening application developers\nwith responding to these concerns.\nYou can re-enable display of DeprecationWarning\nmessages by\nrunning Python with the -Wdefault\n(short form:\n-Wd\n) switch, or by setting the PYTHONWARNINGS\nenvironment variable to \"default\"\n(or \"d\"\n) before running\nPython. Python code can also re-enable them\nby calling warnings.simplefilter('default')\n.\nThe unittest\nmodule also automatically reenables deprecation warnings\nwhen running tests.\nPython 3.1 Features¶\nMuch as Python 2.6 incorporated features from Python 3.0, version 2.7 incorporates some of the new features in Python 3.1. The 2.x series continues to provide tools for migrating to the 3.x series.\nA partial list of 3.1 features that were backported to 2.7:\nThe syntax for set literals (\n{1,2,3}\nis a mutable set).Dictionary and set comprehensions (\n{i: i*2 for i in range(3)}\n).Multiple context managers in a single\nwith\nstatement.A new version of the\nio\nlibrary, rewritten in C for performance.The ordered-dictionary type described in PEP 372: Adding an Ordered Dictionary to collections.\nThe new\n\",\"\nformat specifier described in PEP 378: Format Specifier for Thousands Separator.The\nmemoryview\nobject.A small subset of the\nimportlib\nmodule, described below.The\nrepr()\nof a float x\nis shorter in many cases: it’s now based on the shortest decimal string that’s guaranteed to round back to x\n. As in previous versions of Python, it’s guaranteed that float(repr(x))\nrecovers x\n.Float-to-string and string-to-float conversions are correctly rounded. 
The\nround()\nfunction is also now correctly rounded.The\nPyCapsule\ntype, used to provide a C API for extension modules.The\nPyLong_AsLongAndOverflow()\nC API function.\nOther new Python3-mode warnings include:\noperator.isCallable()\nand operator.sequenceIncludes()\n, which are not supported in 3.x, now trigger warnings.The\n-3\nswitch now automatically enables the -Qwarn\nswitch that causes warnings about using classic division with integers and long integers.\nPEP 372: Adding an Ordered Dictionary to collections¶\nRegular Python dictionaries iterate over key/value pairs in arbitrary order.\nOver the years, a number of authors have written alternative implementations\nthat remember the order that the keys were originally inserted. Based on\nthe experiences from those implementations, 2.7 introduces a new\nOrderedDict\nclass in the collections\nmodule.\nThe OrderedDict\nAPI provides the same interface as regular\ndictionaries but iterates over keys and values in a guaranteed order\ndepending on when a key was first inserted:\n>>> from collections import OrderedDict\n>>> d = OrderedDict([('first', 1),\n... ('second', 2),\n... ('third', 3)])\n>>> d.items()\n[('first', 1), ('second', 2), ('third', 3)]\nIf a new entry overwrites an existing entry, the original insertion position is left unchanged:\n>>> d['second'] = 4\n>>> d.items()\n[('first', 1), ('second', 4), ('third', 3)]\nDeleting an entry and reinserting it will move it to the end:\n>>> del d['second']\n>>> d['second'] = 5\n>>> d.items()\n[('first', 1), ('third', 3), ('second', 5)]\nThe popitem()\nmethod has an optional last\nargument that defaults to True\n. 
If last is true, the most recently\nadded key is returned and removed; if it’s false, the\noldest key is selected:\n>>> od = OrderedDict([(x,0) for x in range(20)])\n>>> od.popitem()\n(19, 0)\n>>> od.popitem()\n(18, 0)\n>>> od.popitem(last=False)\n(0, 0)\n>>> od.popitem(last=False)\n(1, 0)\nComparing two ordered dictionaries checks both the keys and values, and requires that the insertion order was the same:\n>>> od1 = OrderedDict([('first', 1),\n... ('second', 2),\n... ('third', 3)])\n>>> od2 = OrderedDict([('third', 3),\n... ('first', 1),\n... ('second', 2)])\n>>> od1 == od2\nFalse\n>>> # Move 'third' key to the end\n>>> del od2['third']; od2['third'] = 3\n>>> od1 == od2\nTrue\nComparing an OrderedDict\nwith a regular dictionary\nignores the insertion order and just compares the keys and values.\nHow does the OrderedDict\nwork? It maintains a\ndoubly linked list of keys, appending new keys to the list as they’re inserted.\nA secondary dictionary maps keys to their corresponding list node, so\ndeletion doesn’t have to traverse the entire linked list and therefore\nremains O(1).\nThe standard library now supports use of ordered dictionaries in several modules.\nThe\nConfigParser\nmodule uses them by default, meaning that configuration files can now be read, modified, and then written back in their original order.The\n_asdict()\nmethod for collections.namedtuple()\nnow returns an ordered dictionary with the values appearing in the same order as the underlying tuple indices.The\njson\nmodule’s JSONDecoder\nclass constructor was extended with an object_pairs_hook parameter to allow OrderedDict\ninstances to be built by the decoder. 
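The ordering behaviours described above can be exercised directly. A minimal sketch in Python 3 syntax (where `items()` returns a view, so it is wrapped in `list()`), including the `object_pairs_hook` decoding just mentioned:

```python
import json
from collections import OrderedDict

# Overwriting an existing key keeps its original position.
d = OrderedDict([('first', 1), ('second', 2), ('third', 3)])
d['second'] = 4
print(list(d.items()))  # [('first', 1), ('second', 4), ('third', 3)]

# popitem() removes the newest entry; popitem(last=False) the oldest.
od = OrderedDict((x, 0) for x in range(5))
print(od.popitem())            # (4, 0)
print(od.popitem(last=False))  # (0, 0)

# Equality between two OrderedDicts is order-sensitive,
# but comparing against a plain dict ignores order.
a = OrderedDict([('x', 1), ('y', 2)])
b = OrderedDict([('y', 2), ('x', 1)])
print(a == b)                 # False
print(a == {'y': 2, 'x': 1})  # True

# object_pairs_hook lets the JSON decoder preserve key order.
decoded = json.loads('{"b": 1, "a": 2}', object_pairs_hook=OrderedDict)
print(list(decoded))          # ['b', 'a']
```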
Support was also added for third-party tools like PyYAML.\nSee also\n- PEP 372 - Adding an ordered dictionary to collections\nPEP written by Armin Ronacher and Raymond Hettinger; implemented by Raymond Hettinger.\nPEP 378: Format Specifier for Thousands Separator\u00b6\nTo make program output more readable, it can be useful to add separators to large numbers, rendering them as 18,446,744,073,709,551,616 instead of 18446744073709551616.\nThe fully general solution for doing this is the locale\nmodule,\nwhich can use different separators (\u201c,\u201d in North America, \u201c.\u201d in\nEurope) and different grouping sizes, but locale\nis complicated\nto use and unsuitable for multi-threaded applications where different\nthreads are producing output for different locales.\nTherefore, a simple comma-grouping mechanism has been added to the\nmini-language used by the str.format()\nmethod. When\nformatting a floating-point number, simply include a comma between the\nwidth and the precision:\n>>> '{:20,.2f}'.format(18446744073709551616.0)\n'18,446,744,073,709,551,616.00'\nWhen formatting an integer, include the comma after the width:\n>>> '{:20,d}'.format(18446744073709551616)\n'18,446,744,073,709,551,616'\nThis mechanism is not adaptable at all; commas are always used as the\nseparator and the grouping is always into three-digit groups. The\ncomma-formatting mechanism isn\u2019t as general as the locale\nmodule, but it\u2019s easier to use.\nSee also\n- PEP 378 - Format Specifier for Thousands Separator\nPEP written by Raymond Hettinger; implemented by Eric Smith.\nPEP 389: The argparse Module for Parsing Command Lines\u00b6\nThe argparse\nmodule for parsing command-line arguments was\nadded as a more powerful replacement for the\noptparse\nmodule.\nThis means Python now supports three different modules for parsing\ncommand-line arguments: getopt\n, optparse\n, and\nargparse\n. 
The getopt\nmodule closely resembles the C\nlibrary\u2019s getopt()\nfunction, so it remains useful if you\u2019re writing a\nPython prototype that will eventually be rewritten in C.\noptparse\nbecomes redundant, but there are no plans to remove it\nbecause there are many scripts still using it, and there\u2019s no\nautomated way to update these scripts. (Making the argparse\nAPI consistent with optparse\n\u2019s interface was discussed but\nrejected as too messy and difficult.)\nIn short, if you\u2019re writing a new script and don\u2019t need to worry\nabout compatibility with earlier versions of Python, use\nargparse\ninstead of optparse\n.\nHere\u2019s an example:\nimport argparse\nparser = argparse.ArgumentParser(description='Command-line example.')\n# Add optional switches\nparser.add_argument('-v', action='store_true', dest='is_verbose',\nhelp='produce verbose output')\nparser.add_argument('-o', action='store', dest='output',\nmetavar='FILE',\nhelp='direct output to FILE instead of stdout')\nparser.add_argument('-C', action='store', type=int, dest='context',\nmetavar='NUM', default=0,\nhelp='display NUM lines of added context')\n# Allow any number of additional arguments.\nparser.add_argument(nargs='*', action='store', dest='inputs',\nhelp='input filenames (default is stdin)')\nargs = parser.parse_args()\nprint args.__dict__\nUnless you override it, -h\nand --help\nswitches\nare automatically added, and produce neatly formatted output:\n-> ./python.exe argparse-example.py --help\nusage: argparse-example.py [-h] [-v] [-o FILE] [-C NUM] [inputs [inputs ...]]\nCommand-line example.\npositional arguments:\ninputs input filenames (default is stdin)\noptional arguments:\n-h, --help show this help message and exit\n-v produce verbose output\n-o FILE direct output to FILE instead of stdout\n-C NUM display NUM lines of added context\nAs with optparse\n, the command-line switches and arguments\nare returned as an object with attributes named by the dest 
parameters:\n-> ./python.exe argparse-example.py -v\n{'output': None,\n'is_verbose': True,\n'context': 0,\n'inputs': []}\n-> ./python.exe argparse-example.py -v -o /tmp/output -C 4 file1 file2\n{'output': '/tmp/output',\n'is_verbose': True,\n'context': 4,\n'inputs': ['file1', 'file2']}\nargparse\nhas much fancier validation than optparse\n; you\ncan specify an exact number of arguments as an integer, 0 or more\narguments by passing '*'\n, 1 or more by passing '+'\n, or an\noptional argument with '?'\n. A top-level parser can contain\nsub-parsers to define subcommands that have different sets of\nswitches, as in svn commit\n, svn checkout\n, etc. You can\nspecify an argument’s type as FileType\n, which will\nautomatically open files for you and understands that '-'\nmeans\nstandard input or output.\nSee also\nargparse\ndocumentation\nThe documentation page of the argparse module.\n- Migrating optparse code to argparse\nPart of the Python documentation, describing how to convert code that uses\noptparse\n.- PEP 389 - argparse - New Command Line Parsing Module\nPEP written and implemented by Steven Bethard.\nPEP 391: Dictionary-Based Configuration For Logging¶\nThe logging\nmodule is very flexible; applications can define\na tree of logging subsystems, and each logger in this tree can filter\nout certain messages, format them differently, and direct messages to\na varying number of handlers.\nAll this flexibility can require a lot of configuration. You can\nwrite Python statements to create objects and set their properties,\nbut a complex set-up requires verbose but boring code.\nlogging\nalso supports a fileConfig()\nfunction that parses a file, but the file format doesn’t support\nconfiguring filters, and it’s messier to generate programmatically.\nPython 2.7 adds a dictConfig()\nfunction that\nuses a dictionary to configure logging. 
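Stepping back to the argparse section for a moment: the validation features described there (type conversion, nargs counts, attribute naming via dest) can be exercised without a real command line, because parse_args() also accepts an explicit argument list. A minimal sketch in Python 3 syntax; the switch names mirror the article's example:

```python
import argparse

parser = argparse.ArgumentParser(description='Command-line example.')
parser.add_argument('-v', action='store_true', dest='is_verbose',
                    help='produce verbose output')
parser.add_argument('-C', action='store', type=int, dest='context',
                    metavar='NUM', default=0,
                    help='display NUM lines of added context')
# nargs='*' accepts zero or more positional arguments.
parser.add_argument('inputs', nargs='*',
                    help='input filenames (default is stdin)')

# Passing a list instead of reading sys.argv is handy for testing.
args = parser.parse_args(['-v', '-C', '4', 'file1', 'file2'])
print(args.is_verbose)  # True
print(args.context)     # 4, converted by type=int
print(args.inputs)      # ['file1', 'file2']
```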
There are many ways to\nproduce a dictionary from different sources: construct one with code;\nparse a file containing JSON; or use a YAML parsing library if one is\ninstalled. For more information see Configuration functions.\nThe following example configures two loggers, the root logger and a\nlogger named “network”. Messages sent to the root logger will be\nsent to the system log using the syslog protocol, and messages\nto the “network” logger will be written to a network.log\nfile\nthat will be rotated once the log reaches 1MB.\nimport logging\nimport logging.config\nconfigdict = {\n'version': 1, # Configuration schema in use; must be 1 for now\n'formatters': {\n'standard': {\n'format': ('%(asctime)s %(name)-15s '\n'%(levelname)-8s %(message)s')}},\n'handlers': {'netlog': {'backupCount': 10,\n'class': 'logging.handlers.RotatingFileHandler',\n'filename': '/logs/network.log',\n'formatter': 'standard',\n'level': 'INFO',\n'maxBytes': 1000000},\n'syslog': {'class': 'logging.handlers.SysLogHandler',\n'formatter': 'standard',\n'level': 'ERROR'}},\n# Specify all the subordinate loggers\n'loggers': {\n'network': {\n'handlers': ['netlog']\n}\n},\n# Specify properties of the root logger\n'root': {\n'handlers': ['syslog']\n},\n}\n# Set up configuration\nlogging.config.dictConfig(configdict)\n# As an example, log two error messages\nlogger = logging.getLogger('/')\nlogger.error('Database not found')\nnetlogger = logging.getLogger('network')\nnetlogger.error('Connection failed')\nThree smaller enhancements to the logging\nmodule, all\nimplemented by Vinay Sajip, are:\nThe\nSysLogHandler\nclass now supports syslogging over TCP. The constructor has a socktype parameter giving the type of socket to use, either socket.SOCK_DGRAM\nfor UDP or socket.SOCK_STREAM\nfor TCP. The default protocol remains UDP.Logger\ninstances gained a getChild()\nmethod that retrieves a descendant logger using a relative path. 
For example, once you retrieve a logger by doing log = getLogger('app')\n, calling log.getChild('network.listen')\nis equivalent to getLogger('app.network.listen')\n.The\nLoggerAdapter\nclass gained an isEnabledFor()\nmethod that takes a level and returns whether the underlying logger would process a message of that level of importance.\nSee also\n- PEP 391 - Dictionary-Based Configuration For Logging\nPEP written and implemented by Vinay Sajip.\nPEP 3106: Dictionary Views¶\nThe dictionary methods keys()\n, values()\n, and\nitems()\nare different in Python 3.x. They return an object\ncalled a view instead of a fully materialized list.\nIt’s not possible to change the return values of keys()\n,\nvalues()\n, and items()\nin Python 2.7 because\ntoo much code would break. Instead the 3.x versions were added\nunder the new names viewkeys()\n, viewvalues()\n,\nand viewitems()\n.\n>>> d = dict((i*10, chr(65+i)) for i in range(26))\n>>> d\n{0: 'A', 130: 'N', 10: 'B', 140: 'O', 20: ..., 250: 'Z'}\n>>> d.viewkeys()\ndict_keys([0, 130, 10, 140, 20, 150, 30, ..., 250])\nViews can be iterated over, but the key and item views also behave\nlike sets. The &\noperator performs intersection, and |\nperforms a union:\n>>> d1 = dict((i*10, chr(65+i)) for i in range(26))\n>>> d2 = dict((i**.5, i) for i in range(1000))\n>>> d1.viewkeys() & d2.viewkeys()\nset([0.0, 10.0, 20.0, 30.0])\n>>> d1.viewkeys() | range(0, 30)\nset([0, 1, 130, 3, 4, 5, 6, ..., 120, 250])\nThe view keeps track of the dictionary and its contents change as the dictionary is modified:\n>>> vk = d.viewkeys()\n>>> vk\ndict_keys([0, 130, 10, ..., 250])\n>>> d[260] = '&'\n>>> vk\ndict_keys([0, 130, 260, 10, ..., 250])\nHowever, note that you can’t add or remove keys while you’re iterating over the view:\n>>> for k in vk:\n... 
d[k*2] = k\n...\nTraceback (most recent call last):\nFile \"\", line 1, in \nRuntimeError: dictionary changed size during iteration\nYou can use the view methods in Python 2.x code, and the 2to3\nconverter will change them to the standard keys()\n,\nvalues()\n, and items()\nmethods.\nPEP 3137: The memoryview Object\u00b6\nThe memoryview\nobject provides a view of another object\u2019s\nmemory content that matches the bytes\ntype\u2019s interface.\n>>> import string\n>>> m = memoryview(string.letters)\n>>> m\n\n>>> len(m) # Returns length of underlying object\n52\n>>> m[0], m[25], m[26] # Indexing returns one byte\n('a', 'z', 'A')\n>>> m2 = m[0:26] # Slicing returns another memoryview\n>>> m2\n\nThe content of the view can be converted to a string of bytes or a list of integers:\n>>> m2.tobytes()\n'abcdefghijklmnopqrstuvwxyz'\n>>> m2.tolist()\n[97, 98, 99, 100, 101, 102, 103, ... 121, 122]\n>>>\nmemoryview\nobjects allow modifying the underlying object if\nit\u2019s a mutable object.\n>>> m2[0] = 75\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: cannot modify read-only memory\n>>> b = bytearray(string.letters) # Creating a mutable object\n>>> b\nbytearray(b'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ')\n>>> mb = memoryview(b)\n>>> mb[0] = '*' # Assign to view, changing the bytearray.\n>>> b[0:5] # The bytearray has been changed.\nbytearray(b'*bcde')\n>>>\nOther Language Changes\u00b6\nSome smaller changes made to the core Python language are:\nThe syntax for set literals has been backported from Python 3.x. 
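The memoryview behaviour shown in the previous section carries over to Python 3, with two visible differences: string.letters became string.ascii_letters, and the string data must be bytes. A quick sketch:

```python
import string

# A view over immutable bytes is read-only.
m = memoryview(string.ascii_letters.encode('ascii'))
print(len(m))          # 52, the length of the underlying object
print(bytes(m[0:26]))  # b'abcdefghijklmnopqrstuvwxyz'
try:
    m[0] = 75
except TypeError as exc:
    print(exc)         # cannot modify read-only memory

# A view over a mutable bytearray writes through to it.
b = bytearray(string.ascii_letters.encode('ascii'))
mb = memoryview(b)
mb[0] = ord('*')       # Assign to the view, changing the bytearray.
print(b[0:5])          # bytearray(b'*bcde')
```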
Curly brackets are used to surround the contents of the resulting mutable set; set literals are distinguished from dictionaries by not containing colons and values.\n{}\ncontinues to represent an empty dictionary; use set()\nfor an empty set.\n>>> {1, 2, 3, 4, 5} set([1, 2, 3, 4, 5]) >>> set() # empty set set([]) >>> {} # empty dict {}\nBackported by Alexandre Vassalotti; bpo-2335.\nDictionary and set comprehensions are another feature backported from 3.x, generalizing list/generator comprehensions to use the literal syntax for sets and dictionaries.\n>>> {x: x*x for x in range(6)} {0: 0, 1: 1, 2: 4, 3: 9, 4: 16, 5: 25} >>> {('a'*x) for x in range(6)} set(['', 'a', 'aa', 'aaa', 'aaaa', 'aaaaa'])\nBackported by Alexandre Vassalotti; bpo-2333.\nThe\nwith\nstatement can now use multiple context managers in one statement. Context managers are processed from left to right and each one is treated as beginning a new with\nstatement. This means that:\nwith A() as a, B() as b: ... suite of statements ...\nis equivalent to:\nwith A() as a: with B() as b: ... suite of statements ...\nThe\ncontextlib.nested()\nfunction provides a very similar function, so it’s no longer necessary and has been deprecated. (Proposed in https://codereview.appspot.com/53094; implemented by Georg Brandl.)\nConversions between floating-point numbers and strings are now correctly rounded on most platforms. These conversions occur in many different places:\nstr()\non floats and complex numbers; the float\nand complex\nconstructors; numeric formatting; serializing and deserializing floats and complex numbers using the marshal\n, pickle\nand json\nmodules; parsing of float and imaginary literals in Python code; and Decimal\n-to-float conversion.Related to this, the\nrepr()\nof a floating-point number x now returns a result based on the shortest decimal string that’s guaranteed to round back to x under correct rounding (with round-half-to-even rounding mode). 
Previously it gave a string based on rounding x to 17 decimal digits.The rounding library responsible for this improvement works on Windows and on Unix platforms using the gcc, icc, or suncc compilers. There may be a small number of platforms where correct operation of this code cannot be guaranteed, so the code is not used on such systems. You can find out which code is being used by checking\nsys.float_repr_style\n, which will be short\nif the new code is in use and legacy\nif it isn’t.Implemented by Eric Smith and Mark Dickinson, using David Gay’s\ndtoa.c\nlibrary; bpo-7117.Conversions from long integers and regular integers to floating point now round differently, returning the floating-point number closest to the number. This doesn’t matter for small integers that can be converted exactly, but for large numbers that will unavoidably lose precision, Python 2.7 now approximates more closely. For example, Python 2.6 computed the following:\n>>> n = 295147905179352891391 >>> float(n) 2.9514790517935283e+20 >>> n - long(float(n)) 65535L\nPython 2.7’s floating-point result is larger, but much closer to the true value:\n>>> n = 295147905179352891391 >>> float(n) 2.9514790517935289e+20 >>> n - long(float(n)) -1L\n(Implemented by Mark Dickinson; bpo-3166.)\nInteger division is also more accurate in its rounding behaviours. (Also implemented by Mark Dickinson; bpo-1811.)\nImplicit coercion for complex numbers has been removed; the interpreter will no longer ever attempt to call a\n__coerce__()\nmethod on complex objects. (Removed by Meador Inge and Mark Dickinson; bpo-5211.)The\nstr.format()\nmethod now supports automatic numbering of the replacement fields. 
This makes using str.format()\nmore closely resemble using %s\nformatting:\n>>> '{}:{}:{}'.format(2009, 04, 'Sunday') '2009:4:Sunday' >>> '{}:{}:{day}'.format(2009, 4, day='Sunday') '2009:4:Sunday'\nThe auto-numbering takes the fields from left to right, so the first\n{...}\nspecifier will use the first argument to str.format()\n, the next specifier will use the next argument, and so on. You can’t mix auto-numbering and explicit numbering – either number all of your specifier fields or none of them – but you can mix auto-numbering and named fields, as in the second example above. (Contributed by Eric Smith; bpo-5237.)Complex numbers now correctly support usage with\nformat()\n, and default to being right-aligned. Specifying a precision or comma-separation applies to both the real and imaginary parts of the number, but a specified field width and alignment is applied to the whole of the resulting 1.5+3j\noutput. (Contributed by Eric Smith; bpo-1588 and bpo-7988.)The ‘F’ format code now always formats its output using uppercase characters, so it will now produce ‘INF’ and ‘NAN’. (Contributed by Eric Smith; bpo-3382.)\nA low-level change: the\nobject.__format__()\nmethod now triggers a PendingDeprecationWarning\nif it’s passed a format string, because the __format__()\nmethod for object\nconverts the object to a string representation and formats that. Previously the method silently applied the format string to the string representation, but that could hide mistakes in Python code. If you’re supplying formatting information such as an alignment or precision, presumably you’re expecting the formatting to be applied in some object-specific way. 
(Fixed by Eric Smith; bpo-7994.)The\nint()\nand long()\ntypes gained a bit_length\nmethod that returns the number of bits necessary to represent its argument in binary:\n>>> n = 37 >>> bin(n) '0b100101' >>> n.bit_length() 6 >>> n = 2**123-1 >>> n.bit_length() 123 >>> (n+1).bit_length() 124\n(Contributed by Fredrik Johansson and Victor Stinner; bpo-3439.)\nThe\nimport\nstatement will no longer try an absolute import if a relative import (e.g. from .os import sep\n) fails. This fixes a bug, but could possibly break certain import\nstatements that were only working by accident. (Fixed by Meador Inge; bpo-7902.)It’s now possible for a subclass of the built-in\nunicode\ntype to override the __unicode__()\nmethod. (Implemented by Victor Stinner; bpo-1583863.)The\nbytearray\ntype’s translate()\nmethod now accepts None\nas its first argument. (Fixed by Georg Brandl; bpo-4759.)When using\n@classmethod\nand @staticmethod\nto wrap methods as class or static methods, the wrapper object now exposes the wrapped function as their __func__\nattribute. (Contributed by Amaury Forgeot d’Arc, after a suggestion by George Sakkis; bpo-5982.)When a restricted set of attributes were set using\n__slots__\n, deleting an unset attribute would not raise\nAttributeError\nas you would expect. (Fixed by Benjamin Peterson; bpo-7604.)Two new encodings are now supported: “cp720”, used primarily for Arabic text; and “cp858”, a variant of CP 850 that adds the euro symbol. 
(CP720 contributed by Alexander Belchenko and Amaury Forgeot d’Arc in bpo-1616979; CP858 contributed by Tim Hatch in bpo-8016.)\nThe\nfile\nobject will now set the filename\nattribute on the IOError\nexception when trying to open a directory on POSIX platforms (noted by Jan Kaliszewski; bpo-4764), and now explicitly checks for and forbids writing to read-only file objects instead of trusting the C library to catch and report the error (fixed by Stefan Krah; bpo-5677).The Python tokenizer now translates line endings itself, so the\ncompile()\nbuilt-in function now accepts code using any line-ending convention. Additionally, it no longer requires that the code end in a newline.Extra parentheses in function definitions are illegal in Python 3.x, meaning that you get a syntax error from\ndef f((x)): pass\n. In Python3-warning mode, Python 2.7 will now warn about this odd usage. (Noted by James Lingard; bpo-7362.)It’s now possible to create weak references to old-style class objects. New-style classes were always weak-referenceable. (Fixed by Antoine Pitrou; bpo-8268.)\nWhen a module object is garbage-collected, the module’s dictionary is now only cleared if no one else is holding a reference to the dictionary (bpo-7140).\nInterpreter Changes¶\nA new environment variable, PYTHONWARNINGS\n,\nallows controlling warnings. It should be set to a string\ncontaining warning settings, equivalent to those\nused with the -W\nswitch, separated by commas.\n(Contributed by Brian Curtin; bpo-7301.)\nFor example, the following setting will print warnings every time\nthey occur, but turn warnings from the Cookie\nmodule into an\nerror. 
(The exact syntax for setting an environment variable varies across operating systems and shells.)

export PYTHONWARNINGS=all,error:::Cookie:0

Optimizations¶

Several performance enhancements have been added:

A new opcode was added to perform the initial setup for with statements, looking up the __enter__() and __exit__() methods. (Contributed by Benjamin Peterson.)

The garbage collector now performs better for one common usage pattern: when many objects are being allocated without deallocating any of them. This would previously take quadratic time for garbage collection, but now the number of full garbage collections is reduced as the number of objects on the heap grows. The new logic only performs a full garbage collection pass when the middle generation has been collected 10 times and when the number of survivor objects from the middle generation exceeds 10% of the number of objects in the oldest generation. (Suggested by Martin von Löwis and implemented by Antoine Pitrou; bpo-4074.)

The garbage collector tries to avoid tracking simple containers which can't be part of a cycle. In Python 2.7, this is now true for tuples and dicts containing atomic types (such as ints, strings, etc.). Transitively, a dict containing tuples of atomic types won't be tracked either. This helps reduce the cost of each garbage collection by decreasing the number of objects to be considered and traversed by the collector. (Contributed by Antoine Pitrou; bpo-4688.)

Long integers are now stored internally either in base 2**15 or in base 2**30, the base being determined at build time. Previously, they were always stored in base 2**15. Using base 2**30 gives significant performance improvements on 64-bit machines, but benchmark results on 32-bit machines have been mixed.
Therefore, the default is to use base 2**30 on 64-bit machines and base 2**15 on 32-bit machines; on Unix, there's a new configure option --enable-big-digits that can be used to override this default.

Apart from the performance improvements, this change should be invisible to end users, with one exception: for testing and debugging purposes there's a new structseq sys.long_info that provides information about the internal format, giving the number of bits per digit and the size in bytes of the C type used to store each digit:

>>> import sys
>>> sys.long_info
sys.long_info(bits_per_digit=30, sizeof_digit=4)

(Contributed by Mark Dickinson; bpo-4258.)

Another set of changes made long objects a few bytes smaller: 2 bytes smaller on 32-bit systems and 6 bytes on 64-bit. (Contributed by Mark Dickinson; bpo-5260.)

The division algorithm for long integers has been made faster by tightening the inner loop, doing shifts instead of multiplications, and fixing an unnecessary extra iteration. Various benchmarks show speedups of between 50% and 150% for long integer divisions and modulo operations. (Contributed by Mark Dickinson; bpo-5512.) Bitwise operations are also significantly faster (initial patch by Gregory Smith; bpo-1087418).

The implementation of % checks for the left-side operand being a Python string and special-cases it; this results in a 1-3% performance increase for applications that frequently use % with strings, such as templating libraries. (Implemented by Collin Winter; bpo-5176.)

List comprehensions with an if condition are compiled into faster bytecode. (Patch by Antoine Pitrou, back-ported to 2.7 by Jeffrey Yasskin; bpo-4715.)

Converting an integer or long integer to a decimal string was made faster by special-casing base 10 instead of using a generalized conversion function that supports arbitrary bases.
(Patch by Gawain Bolton; bpo-6713.)

The split(), replace(), rindex(), rpartition(), and rsplit() methods of string-like types (strings, Unicode strings, and bytearray objects) now use a fast reverse-search algorithm instead of a character-by-character scan. This is sometimes faster by a factor of 10. (Added by Florent Xicluna; bpo-7462 and bpo-7622.)

The pickle and cPickle modules now automatically intern the strings used for attribute names, reducing memory usage of the objects resulting from unpickling. (Contributed by Jake McGuire; bpo-5084.)

The cPickle module now special-cases dictionaries, nearly halving the time required to pickle them. (Contributed by Collin Winter; bpo-5670.)

New and Improved Modules¶

As in every release, Python's standard library received a number of enhancements and bug fixes. Here's a partial list of the most notable changes, sorted alphabetically by module name. Consult the Misc/NEWS file in the source tree for a more complete list of changes, or look through the Subversion logs for all the details.

The bdb module's base debugging class Bdb gained a feature for skipping modules. The constructor now takes an iterable containing glob-style patterns such as django.*; the debugger will not step into stack frames from a module that matches one of these patterns. (Contributed by Maru Newby after a suggestion by Senthil Kumaran; bpo-5142.)

The binascii module now supports the buffer API, so it can be used with memoryview instances and other similar buffer objects. (Backported from 3.x by Florent Xicluna; bpo-7703.)

Updated module: the bsddb module has been updated from 4.7.2devel9 to version 4.8.4 of the pybsddb package. The new version features better Python 3.x compatibility, various bug fixes, and adds several new BerkeleyDB flags and methods. (Updated by Jesús Cea Avión; bpo-8156.
The pybsddb changelog can be read at https://hg.jcea.es/pybsddb/file/tip/ChangeLog.)

The bz2 module's BZ2File now supports the context management protocol, so you can write with bz2.BZ2File(...) as f:. (Contributed by Hagen Fürstenau; bpo-3860.)

New class: the Counter class in the collections module is useful for tallying data. Counter instances behave mostly like dictionaries but return zero for missing keys instead of raising a KeyError:

>>> from collections import Counter
>>> c = Counter()
>>> for letter in 'here is a sample of english text':
...     c[letter] += 1
...
>>> c
Counter({' ': 6, 'e': 5, 's': 3, 'a': 2, 'i': 2, 'h': 2, 'l': 2, 't': 2, 'g': 1, 'f': 1, 'm': 1, 'o': 1, 'n': 1, 'p': 1, 'r': 1, 'x': 1})
>>> c['e']
5
>>> c['z']
0

There are three additional Counter methods. most_common() returns the N most common elements and their counts. elements() returns an iterator over the contained elements, repeating each element as many times as its count. subtract() takes an iterable and subtracts one for each element instead of adding; if the argument is a dictionary or another Counter, the counts are subtracted.

>>> c.most_common(5)
[(' ', 6), ('e', 5), ('s', 3), ('a', 2), ('i', 2)]
>>> c.elements() ->
'a', 'a', ' ', ' ', ' ', ' ', ' ', ' ', 'e', 'e', 'e', 'e', 'e', 'g', 'f', 'i', 'i', 'h', 'h', 'm', 'l', 'l', 'o', 'n', 'p', 's', 's', 's', 'r', 't', 't', 'x'
>>> c['e']
5
>>> c.subtract('very heavy on the letter e')
>>> c['e']    # Count is now lower
-1

(Contributed by Raymond Hettinger; bpo-1696199.)

New class: OrderedDict is described in the earlier section PEP 372: Adding an Ordered Dictionary to collections.

New method: The deque data type now has a count() method that returns the number of contained elements equal to the supplied argument x, and a reverse() method that reverses the elements of the deque in-place. deque also exposes its maximum length as the read-only maxlen attribute.
(Both features added by Raymond Hettinger.)

The namedtuple class now has an optional rename parameter. If rename is true, field names that are invalid because they've been repeated or aren't legal Python identifiers will be renamed to legal names that are derived from the field's position within the list of fields:

>>> from collections import namedtuple
>>> T = namedtuple('T', ['field1', '$illegal', 'for', 'field2'], rename=True)
>>> T._fields
('field1', '_1', '_2', 'field2')

(Added by Raymond Hettinger; bpo-1818.)

Finally, the Mapping abstract base class now returns NotImplemented if a mapping is compared to another type that isn't a Mapping. (Fixed by Daniel Stutzbach; bpo-8729.)

Constructors for the parsing classes in the ConfigParser module now take an allow_no_value parameter, defaulting to false; if true, options without values will be allowed. For example:

>>> import ConfigParser, StringIO
>>> sample_config = """
... [mysqld]
...   user = mysql
...   pid-file = /var/run/mysqld/mysqld.pid
...   skip-bdb
... """
>>> config = ConfigParser.RawConfigParser(allow_no_value=True)
>>> config.readfp(StringIO.StringIO(sample_config))
>>> config.get('mysqld', 'user')
'mysql'
>>> print config.get('mysqld', 'skip-bdb')
None
>>> print config.get('mysqld', 'unknown')
Traceback (most recent call last):
  ...
NoOptionError: No option 'unknown' in section: 'mysqld'

(Contributed by Mats Kindahl; bpo-7005.)

Deprecated function: contextlib.nested(), which allows handling more than one context manager with a single with statement, has been deprecated, because the with statement now supports multiple context managers.

The cookielib module now ignores cookies that have an invalid version field, one that doesn't contain an integer value. (Fixed by John J. Lee; bpo-3924.)

The copy module's deepcopy() function will now correctly copy bound instance methods.
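The bound-method copying described above can be sketched as follows; the Greeter class is a hypothetical example, and the same behaviour is present in Python 3's copy module, where the copied method is bound to a copy of the original instance:

```python
import copy

class Greeter(object):
    def __init__(self, name):
        self.name = name
    def greet(self):
        return "hello, " + self.name

g = Greeter("world")
bound = g.greet

# deepcopy() now copies the bound method rather than failing
bound_copy = copy.deepcopy(bound)
assert bound_copy() == "hello, world"

# The copy is bound to a *copy* of the instance, not the original
assert bound_copy.__self__ is not g
```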
(Implemented by Robert Collins; bpo-1515.)

The ctypes module now always converts None to a C NULL pointer for arguments declared as pointers. (Changed by Thomas Heller; bpo-4606.) The underlying libffi library has been updated to version 3.0.9, containing various fixes for different platforms. (Updated by Matthias Klose; bpo-8142.)

New method: the datetime module's timedelta class gained a total_seconds() method that returns the number of seconds in the duration. (Contributed by Brian Quinlan; bpo-5788.)

New method: the Decimal class gained a from_float() class method that performs an exact conversion of a floating-point number to a Decimal. This exact conversion strives for the closest decimal approximation to the floating-point representation's value; the resulting decimal value will therefore still include the inaccuracy, if any. For example, Decimal.from_float(0.1) returns Decimal('0.1000000000000000055511151231257827021181583404541015625'). (Implemented by Raymond Hettinger; bpo-4796.)

Comparing instances of Decimal with floating-point numbers now produces sensible results based on the numeric values of the operands. Previously such comparisons would fall back to Python's default rules for comparing objects, which produced arbitrary results based on their type. Note that you still cannot combine Decimal and floating point in other operations such as addition, since you should be explicitly choosing how to convert between float and Decimal. (Fixed by Mark Dickinson; bpo-2531.)

The constructor for Decimal now accepts floating-point numbers (added by Raymond Hettinger; bpo-8257) and non-European Unicode characters such as Arabic-Indic digits (contributed by Mark Dickinson; bpo-6595).

Most of the methods of the Context class now accept integers as well as Decimal instances; the only exceptions are the canonical() and is_canonical() methods.
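The exact-conversion and mixed-comparison behaviour described above can be demonstrated in a few lines (Python 3's decimal module behaves the same way):

```python
from decimal import Decimal

# from_float() is an exact binary-to-decimal conversion, so the
# float's representation error is preserved in the result
d = Decimal.from_float(0.1)
assert d != Decimal("0.1")
assert str(d).startswith("0.100000000000000005551115123125782702")

# Dyadic fractions such as 0.25 convert exactly
assert Decimal.from_float(0.25) == Decimal("0.25")

# Decimal/float comparisons now compare numeric values
assert Decimal("0.5") == 0.5
assert Decimal("0.1") < 0.1   # the float 0.1 is slightly larger than 1/10
```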
(Patch by Juan José Conti; bpo-7633.)

When using Decimal instances with a string's format() method, the default alignment was previously left-alignment. This has been changed to right-alignment, which is more sensible for numeric types. (Changed by Mark Dickinson; bpo-6857.)

Comparisons involving a signaling NaN value (or sNAN) now signal InvalidOperation instead of silently returning a true or false value depending on the comparison operator. Quiet NaN values (or NaN) are now hashable. (Fixed by Mark Dickinson; bpo-7279.)

The difflib module now produces output that is more compatible with modern diff/patch tools through one small change, using a tab character instead of spaces as a separator in the header giving the filename. (Fixed by Anatoly Techtonik; bpo-7585.)

The Distutils sdist command now always regenerates the MANIFEST file, since even if the MANIFEST.in or setup.py files haven't been modified, the user might have created some new files that should be included. (Fixed by Tarek Ziadé; bpo-8688.)

The doctest module's IGNORE_EXCEPTION_DETAIL flag will now ignore the name of the module containing the exception being tested. (Patch by Lennart Regebro; bpo-7490.)

The email module's Message class will now accept a Unicode-valued payload, automatically converting the payload to the encoding specified by output_charset. (Added by R. David Murray; bpo-1368247.)

The Fraction class now accepts a single float or Decimal instance, or two rational numbers, as arguments to its constructor. (Implemented by Mark Dickinson; rationals added in bpo-5812, and float/decimal in bpo-8294.)

Ordering comparisons (<, <=, >, >=) between fractions and complex numbers now raise a TypeError. This fixes an oversight, making the Fraction match the other numeric types.

New class: FTP_TLS in the ftplib module provides secure FTP connections using TLS encapsulation of authentication as well as subsequent control and data transfers.
(Contributed by Giampaolo Rodola; bpo-2054.)

The storbinary() method for binary uploads can now restart uploads thanks to an added rest parameter (patch by Pablo Mouzo; bpo-6845).

New class decorator: total_ordering() in the functools module takes a class that defines an __eq__() method and one of __lt__(), __le__(), __gt__(), or __ge__(), and generates the missing comparison methods. Since the __cmp__() method is being deprecated in Python 3.x, this decorator makes it easier to define ordered classes. (Added by Raymond Hettinger; bpo-5479.)

New function: cmp_to_key() will take an old-style comparison function that expects two arguments and return a new callable that can be used as the key parameter to functions such as sorted(), min(), max(), etc. The primary intended use is to help with making code compatible with Python 3.x. (Added by Raymond Hettinger.)

New function: the gc module's is_tracked() returns true if a given instance is tracked by the garbage collector, false otherwise. (Contributed by Antoine Pitrou; bpo-4688.)

The gzip module's GzipFile now supports the context management protocol, so you can write with gzip.GzipFile(...) as f: (contributed by Hagen Fürstenau; bpo-3860), and it now implements the io.BufferedIOBase ABC, so you can wrap it with io.BufferedReader for faster processing (contributed by Nir Aides; bpo-7471). It's also now possible to override the modification time recorded in a gzipped file by providing an optional timestamp to the constructor. (Contributed by Jacques Frechet; bpo-4272.)

Files in gzip format can be padded with trailing zero bytes; the gzip module will now consume these trailing bytes. (Fixed by Tadek Pietraszek and Brian Curtin; bpo-2846.)

New attribute: the hashlib module now has an algorithms attribute containing a tuple naming the supported algorithms. In Python 2.7, hashlib.algorithms contains ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512').
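The two functools additions described above can be sketched together; the Version class and compare_len function are hypothetical examples, and both helpers exist unchanged in Python 3:

```python
from functools import total_ordering, cmp_to_key

@total_ordering
class Version(object):
    """Defines only __eq__ and __lt__; the decorator fills in the rest."""
    def __init__(self, parts):
        self.parts = tuple(parts)
    def __eq__(self, other):
        return self.parts == other.parts
    def __lt__(self, other):
        return self.parts < other.parts

# __le__, __gt__, and __ge__ were generated by total_ordering()
assert Version([1, 0]) <= Version([1, 0])
assert Version([2, 0]) > Version([1, 9])

# cmp_to_key() adapts a two-argument comparison function into a key
def compare_len(a, b):
    return len(a) - len(b)

words = ["sysconfig", "io", "math"]
assert sorted(words, key=cmp_to_key(compare_len)) == ["io", "math", "sysconfig"]
```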
(Contributed by Carl Chenet; bpo-7418.)

The default HTTPResponse class used by the httplib module now supports buffering, resulting in much faster reading of HTTP responses. (Contributed by Kristján Valur Jónsson; bpo-4879.)

The HTTPConnection and HTTPSConnection classes now support a source_address parameter, a (host, port) 2-tuple giving the source address that will be used for the connection. (Contributed by Eldon Ziegler; bpo-3972.)

The ihooks module now supports relative imports. Note that ihooks is an older module for customizing imports, superseded by the imputil module added in Python 2.0. (Relative import support added by Neil Schemenauer.)

The imaplib module now supports IPv6 addresses. (Contributed by Derek Morr; bpo-1655.)

New function: the inspect module's getcallargs() takes a callable and its positional and keyword arguments, and figures out which of the callable's parameters will receive each argument, returning a dictionary mapping argument names to their values. For example:

>>> from inspect import getcallargs
>>> def f(a, b=1, *pos, **named):
...     pass
...
>>> getcallargs(f, 1, 2, 3)
{'a': 1, 'b': 2, 'pos': (3,), 'named': {}}
>>> getcallargs(f, a=2, x=4)
{'a': 2, 'b': 1, 'pos': (), 'named': {'x': 4}}
>>> getcallargs(f)
Traceback (most recent call last):
  ...
TypeError: f() takes at least 1 argument (0 given)

(Contributed by George Sakkis; bpo-3135.)

Updated module: The io library has been upgraded to the version shipped with Python 3.1. For 3.1, the I/O library was entirely rewritten in C and is 2 to 20 times faster depending on the task being performed. The original Python version was renamed to the _pyio module.

One minor resulting change: the io.TextIOBase class now has an errors attribute giving the error setting used for encoding and decoding errors (one of 'strict', 'replace', 'ignore').

The io.FileIO class now raises an OSError when passed an invalid file descriptor.
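The new errors attribute can be seen on any text layer; a minimal sketch, using an in-memory buffer so nothing touches disk (since the 2.7 io library was backported from 3.1, the same API works on Python 3):

```python
import io

# Wrap a bytes buffer in a text layer; errors= picks the handler used
# for encoding and decoding problems, and is reported back via .errors
raw = io.BytesIO(b"abc\xff")
text = io.TextIOWrapper(raw, encoding="ascii", errors="replace")

assert text.errors == "replace"
# The undecodable 0xff byte is replaced with U+FFFD instead of raising
assert text.read() == u"abc\ufffd"
```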
(Implemented by Benjamin Peterson; bpo-4991.) The truncate() method now preserves the file position; previously it would change the file position to the end of the new file. (Fixed by Pascal Chambon; bpo-6939.)

New function: itertools.compress(data, selectors) takes two iterators. Elements of data are returned if the corresponding value in selectors is true:

itertools.compress('ABCDEF', [1,0,1,0,1,1]) =>
  A, C, E, F

New function: itertools.combinations_with_replacement(iter, r) returns all the possible r-length combinations of elements from the iterable iter. Unlike combinations(), individual elements can be repeated in the generated combinations:

itertools.combinations_with_replacement('abc', 2) =>
  ('a', 'a'), ('a', 'b'), ('a', 'c'), ('b', 'b'), ('b', 'c'), ('c', 'c')

Note that elements are treated as unique depending on their position in the input, not their actual values.

The itertools.count() function now has a step argument that allows incrementing by values other than 1. count() also now allows keyword arguments, and using non-integer values such as floats or Decimal instances. (Implemented by Raymond Hettinger; bpo-5032.)

itertools.combinations() and itertools.product() previously raised ValueError for values of r larger than the input iterable. This was deemed a specification error, so they now return an empty iterator. (Fixed by Raymond Hettinger; bpo-4816.)

Updated module: The json module was upgraded to version 2.0.9 of the simplejson package, which includes a C extension that makes encoding and decoding faster. (Contributed by Bob Ippolito; bpo-4136.)

To support the new collections.OrderedDict type, json.load() now has an optional object_pairs_hook parameter that will be called with any object literal that decodes to a list of pairs. (Contributed by Raymond Hettinger; bpo-5381.)

The mailbox module's Maildir class now records the timestamp on the directories it reads, and only re-reads them if the modification time has subsequently changed.
This improves performance by avoiding unneeded directory scans. (Fixed by A.M. Kuchling and Antoine Pitrou; bpo-1607951, bpo-6896.)

New functions: the math module gained erf() and erfc() for the error function and the complementary error function, expm1() which computes e**x - 1 with more precision than using exp() and subtracting 1, gamma() for the Gamma function, and lgamma() for the natural log of the Gamma function. (Contributed by Mark Dickinson and nirinA raseliarison; bpo-3366.)

The multiprocessing module's Manager* classes can now be passed a callable that will be called whenever a subprocess is started, along with a set of arguments that will be passed to the callable. (Contributed by lekma; bpo-5585.)

The Pool class, which controls a pool of worker processes, now has an optional maxtasksperchild parameter. Worker processes will perform the specified number of tasks and then exit, causing the Pool to start a new worker. This is useful if tasks may leak memory or other resources, or if some tasks will cause the worker to become very large. (Contributed by Charles Cazabon; bpo-6963.)

The nntplib module now supports IPv6 addresses. (Contributed by Derek Morr; bpo-1664.)

New functions: the os module wraps the following POSIX system calls: getresgid() and getresuid(), which return the real, effective, and saved GIDs and UIDs; setresgid() and setresuid(), which set real, effective, and saved GIDs and UIDs to new values; and initgroups(), which initializes the group access list for the current process. (GID/UID functions contributed by Travis H.; bpo-6508. Support for initgroups added by Jean-Paul Calderone; bpo-7333.)

The os.fork() function now re-initializes the import lock in the child process; this fixes problems on Solaris when fork() is called from a thread.
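The new math functions described above satisfy a few identities that make for a quick sanity check (the functions are also available, unchanged, in Python 3):

```python
import math

# gamma(n) == (n-1)! for positive integers
assert abs(math.gamma(5) - 24.0) < 1e-9

# lgamma() is the natural log of the Gamma function
assert abs(math.lgamma(5) - math.log(24)) < 1e-12

# expm1(x) is more accurate than exp(x) - 1 for tiny x;
# for small x it is approximately x + x**2/2
x = 1e-10
assert abs(math.expm1(x) - x) < 1e-19

# erf and erfc are complements: erf(x) + erfc(x) == 1
assert abs(math.erf(0.5) + math.erfc(0.5) - 1.0) < 1e-12
```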
(Fixed by Zsolt Cserna; bpo-7242.)

In the os.path module, the normpath() and abspath() functions now preserve Unicode; if their input path is a Unicode string, the return value is also a Unicode string. (normpath() fixed by Matt Giuca in bpo-5827; abspath() fixed by Ezio Melotti in bpo-3426.)

The pydoc module now has help for the various symbols that Python uses. You can now do help('<<') or help('@'), for example. (Contributed by David Laban; bpo-4739.)

The re module's split(), sub(), and subn() now accept an optional flags argument, for consistency with the other functions in the module. (Added by Gregory P. Smith.)

New function: run_path() in the runpy module will execute the code at a provided path argument. path can be the path of a Python source file (example.py), a compiled bytecode file (example.pyc), a directory (./package/), or a zip archive (example.zip). If a directory or zip path is provided, it will be added to the front of sys.path and the module __main__ will be imported. It's expected that the directory or zip contains a __main__.py; if it doesn't, some other __main__.py might be imported from a location later in sys.path. This makes more of the machinery of runpy available to scripts that want to mimic the way Python's command line processes an explicit path name. (Added by Nick Coghlan; bpo-6816.)

New function: in the shutil module, make_archive() takes a filename, archive type (zip or tar-format), and a directory path, and creates an archive containing the directory's contents. (Added by Tarek Ziadé.)

shutil's copyfile() and copytree() functions now raise a SpecialFileError exception when asked to copy a named pipe. Previously the code would treat named pipes like a regular file by opening them for reading, and this would block indefinitely.
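A minimal sketch of make_archive(), using throwaway temporary directories (the file and directory names here are arbitrary; the same call works on Python 3):

```python
import os
import shutil
import tempfile
import zipfile

# Build a small directory tree to archive
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
with open(os.path.join(src, "data.txt"), "w") as f:
    f.write("payload\n")

# make_archive(base_name, format, root_dir) writes base_name.zip
# containing root_dir's contents and returns the archive's path
archive = shutil.make_archive(os.path.join(dst, "backup"), "zip", src)
with zipfile.ZipFile(archive) as zf:
    names = zf.namelist()

shutil.rmtree(src)
shutil.rmtree(dst)

assert archive.endswith("backup.zip")
assert "data.txt" in names
```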
(Fixed by Antoine Pitrou; bpo-3002.)

The signal module no longer re-installs the signal handler unless this is truly necessary, which fixes a bug that could make it impossible to catch the EINTR signal robustly. (Fixed by Charles-Francois Natali; bpo-8354.)

New functions: in the site module, three new functions return various site- and user-specific paths. getsitepackages() returns a list containing all global site-packages directories, getusersitepackages() returns the path of the user's site-packages directory, and getuserbase() returns the value of the USER_BASE environment variable, giving the path to a directory that can be used to store data. (Contributed by Tarek Ziadé; bpo-6693.)

The site module now reports exceptions occurring when the sitecustomize module is imported, and will no longer catch and swallow the KeyboardInterrupt exception. (Fixed by Victor Stinner; bpo-3137.)

The socket module's create_connection() function gained a source_address parameter, a (host, port) 2-tuple giving the source address that will be used for the connection. (Contributed by Eldon Ziegler; bpo-3972.)

The recv_into() and recvfrom_into() methods of socket objects will now write into objects that support the buffer API, most usefully the bytearray and memoryview objects. (Implemented by Antoine Pitrou; bpo-8104.)

The SocketServer module's TCPServer class now supports socket timeouts and disabling the Nagle algorithm. The disable_nagle_algorithm class attribute defaults to False; if overridden to be true, new request connections will have the TCP_NODELAY option set to prevent buffering many small sends into a single TCP packet. The timeout class attribute can hold a timeout in seconds that will be applied to the request socket; if no request is received within that time, handle_timeout() will be called and handle_request() will return.
(Contributed by Kristján Valur Jónsson; bpo-6192 and bpo-6267.)

Updated module: the sqlite3 module has been updated to version 2.6.0 of the pysqlite package. Version 2.6.0 includes a number of bugfixes, and adds the ability to load SQLite extensions from shared libraries. Call the enable_load_extension(True) method to enable extensions, and then call load_extension() to load a particular shared library. (Updated by Gerhard Häring.)

The ssl module's SSLSocket objects now support the buffer API, which fixed a test suite failure (fix by Antoine Pitrou; bpo-7133), and automatically set OpenSSL's SSL_MODE_AUTO_RETRY, which will prevent an error code being returned from recv() operations that trigger an SSL renegotiation (fix by Antoine Pitrou; bpo-8222).

The wrap_socket() constructor function now takes a ciphers argument that's a string listing the encryption algorithms to be allowed; the format of the string is described in the OpenSSL documentation. (Added by Antoine Pitrou; bpo-8322.)

Another change makes the extension load all of OpenSSL's ciphers and digest algorithms so that they're all available. Previously, some SSL certificates couldn't be verified, reporting an "unknown algorithm" error. (Reported by Beda Kosata, and fixed by Antoine Pitrou; bpo-8484.)

The version of OpenSSL being used is now available as the module attributes ssl.OPENSSL_VERSION (a string), ssl.OPENSSL_VERSION_INFO (a 5-tuple), and ssl.OPENSSL_VERSION_NUMBER (an integer). (Added by Antoine Pitrou; bpo-8321.)

The struct module will no longer silently ignore overflow errors when a value is too large for a particular integer format code (one of bBhHiIlLqQ); it now always raises a struct.error exception. (Changed by Mark Dickinson; bpo-1523.) The pack() function will also attempt to use __index__() to convert and pack non-integers before trying the __int__() method or reporting an error.
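The struct changes described above can be sketched briefly; the PageNumber class is a hypothetical integer-like type, and both behaviours are also present in Python 3:

```python
import struct

# Overflowing an integer format code now raises struct.error instead
# of being silently masked ('B' is an unsigned byte: 0..255)
try:
    struct.pack("B", 300)
except struct.error:
    overflowed = True
else:
    overflowed = False
assert overflowed

# pack() consults __index__() to convert integer-like objects
class PageNumber(object):
    def __init__(self, n):
        self.n = n
    def __index__(self):
        return self.n

assert struct.pack("B", PageNumber(7)) == struct.pack("B", 7)
```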
(Changed by Mark Dickinson; bpo-8300.)

New function: the subprocess module's check_output() runs a command with a specified set of arguments and returns the command's output as a string when the command runs without error, or raises a CalledProcessError exception otherwise.

>>> subprocess.check_output(['df', '-h', '.'])
'Filesystem   Size  Used  Avail Capacity  Mounted on\n/dev/disk0s2   52G   49G   3.0G    94%    /\n'
>>> subprocess.check_output(['df', '-h', '/bogus'])
...
subprocess.CalledProcessError: Command '['df', '-h', '/bogus']' returned non-zero exit status 1

(Contributed by Gregory P. Smith.)

The subprocess module will now retry its internal system calls on receiving an EINTR signal. (Reported by several people; final patch by Gregory P. Smith in bpo-1068268.)

New function: is_declared_global() in the symtable module returns true for variables that are explicitly declared to be global, false for ones that are implicitly global. (Contributed by Jeremy Hylton.)

The syslog module will now use the value of sys.argv[0] as the identifier instead of the previous default value of 'python'. (Changed by Sean Reifschneider; bpo-8451.)

The sys.version_info value is now a named tuple, with attributes named major, minor, micro, releaselevel, and serial. (Contributed by Ross Light; bpo-4285.)

sys.getwindowsversion() also returns a named tuple, with attributes named major, minor, build, platform, service_pack, service_pack_major, service_pack_minor, suite_mask, and product_type. (Contributed by Brian Curtin; bpo-7766.)

The tarfile module's default error handling has changed, to no longer suppress fatal errors. The default error level was previously 0, which meant that errors would only result in a message being written to the debug log; but because the debug log is not activated by default, these errors went unnoticed. The default error level is now 1, which raises an exception if there's an error.
(Changed by Lars Gustäbel; bpo-7357.)

tarfile now supports filtering the TarInfo objects being added to a tar file. When you call add(), you may supply an optional filter argument that's a callable. The filter callable will be passed the TarInfo for every file being added, and can modify and return it. If the callable returns None, the file will be excluded from the resulting archive. This is more powerful than the existing exclude argument, which has therefore been deprecated. (Added by Lars Gustäbel; bpo-6856.) The TarFile class also now supports the context management protocol. (Added by Lars Gustäbel; bpo-7232.)

The wait() method of the threading.Event class now returns the internal flag on exit. This means the method will usually return true because wait() is supposed to block until the internal flag becomes true. The return value will only be false if a timeout was provided and the operation timed out. (Contributed by Tim Lesher; bpo-1674032.)

The Unicode database provided by the unicodedata module is now used internally to determine which characters are numeric, whitespace, or represent line breaks. The database also includes information from the Unihan.txt data file (patch by Anders Chrigström and Amaury Forgeot d'Arc; bpo-1571184) and has been updated to version 5.2.0 (updated by Florent Xicluna; bpo-8024).

The urlparse module's urlsplit() now handles unknown URL schemes in a fashion compliant with RFC 3986: if the URL is of the form "<something>://...", the text before the :// is treated as the scheme, even if it's a made-up scheme that the module doesn't know about. This change may break code that worked around the old behaviour.
For example, Python 2.6.4 or 2.5 will return the following:

>>> import urlparse
>>> urlparse.urlsplit('invented://host/filename?query')
('invented', '', '//host/filename?query', '', '')

Python 2.7 (and Python 2.6.5) will return:

>>> import urlparse
>>> urlparse.urlsplit('invented://host/filename?query')
('invented', 'host', '/filename?query', '', '')

(Python 2.7 actually produces slightly different output, since it returns a named tuple instead of a standard tuple.)

The urlparse module also supports IPv6 literal addresses as defined by RFC 2732 (contributed by Senthil Kumaran; bpo-2987).

>>> urlparse.urlparse('http://[1080::8:800:200C:417A]/foo')
ParseResult(scheme='http', netloc='[1080::8:800:200C:417A]', path='/foo', params='', query='', fragment='')

New class: the WeakSet class in the weakref module is a set that only holds weak references to its elements; elements will be removed once there are no references pointing to them. (Originally implemented in Python 3.x by Raymond Hettinger, and backported to 2.7 by Michael Foord.)

The xml.etree.ElementTree library no longer escapes ampersands and angle brackets when outputting an XML processing instruction (which looks like <?...?>) or comment (which looks like <!--...-->). (Patch by Neil Muller; bpo-2746.)

The XML-RPC client and server, provided by the xmlrpclib and SimpleXMLRPCServer modules, have improved performance by supporting HTTP/1.1 keep-alive and by optionally using gzip encoding to compress the XML being exchanged. The gzip compression is controlled by the encode_threshold attribute of SimpleXMLRPCRequestHandler, which contains a size in bytes; responses larger than this will be compressed. (Contributed by Kristján Valur Jónsson; bpo-6267.)

The zipfile module's ZipFile now supports the context management protocol, so you can write with zipfile.ZipFile(...) as f:. (Contributed by Brian Curtin; bpo-5511.)

zipfile now also supports archiving empty directories and extracts them correctly.
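The ZipFile context-manager support can be combined with the per-member compress_type override mentioned below; a minimal sketch using an in-memory buffer (the member names are arbitrary, and the code also runs on Python 3):

```python
import io
import zipfile

buf = io.BytesIO()

# Open the archive with "with"; writestr() can override the
# archive-wide compression for an individual member
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("compressed.txt", b"x" * 1000)
    zf.writestr("stored.txt", b"y" * 1000, compress_type=zipfile.ZIP_STORED)

with zipfile.ZipFile(buf) as zf:
    infos = dict((i.filename, i) for i in zf.infolist())
    assert infos["compressed.txt"].compress_type == zipfile.ZIP_DEFLATED
    assert infos["stored.txt"].compress_type == zipfile.ZIP_STORED
    assert zf.read("stored.txt") == b"y" * 1000
```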
(Fixed by Kuba Wieczorek; bpo-4710.) Reading files out of an archive is faster, and interleaving read() and readline() now works correctly. (Contributed by Nir Aides; bpo-7610.)

The is_zipfile() function now accepts a file object, in addition to the path names accepted in earlier versions. (Contributed by Gabriel Genellina; bpo-4756.)

The writestr() method now has an optional compress_type parameter that lets you override the default compression method specified in the ZipFile constructor. (Contributed by Ronald Oussoren; bpo-6003.)

New module: importlib¶
Python 3.1 includes the importlib package, a re-implementation of the logic underlying Python's import statement. importlib is useful for implementers of Python interpreters and to users who wish to write new importers that can participate in the import process. Python 2.7 doesn't contain the complete importlib package, but instead has a tiny subset that contains a single function, import_module().

import_module(name, package=None) imports a module. name is a string containing the module or package's name. It's possible to do relative imports by providing a string that begins with a . character, such as ..utils.errors. For relative imports, the package argument must be provided and is the name of the package that will be used as the anchor for the relative import.
import_module() both inserts the imported module into sys.modules and returns the module object.

Here are some examples:

>>> from importlib import import_module
>>> anydbm = import_module('anydbm')  # Standard absolute import
>>> anydbm
<module 'anydbm' from ...>
>>> # Relative import
>>> file_util = import_module('..file_util', 'distutils.command')
>>> file_util
<module 'distutils.file_util' from ...>

importlib was implemented by Brett Cannon and introduced in Python 3.1.

New module: sysconfig¶
The sysconfig module has been pulled out of the Distutils package, becoming a new top-level module in its own right. sysconfig provides functions for getting information about Python's build process: compiler switches, installation paths, the platform name, and whether Python is running from its source directory.

Some of the functions in the module are:

get_config_var() returns variables from Python's Makefile and the pyconfig.h file.
get_config_vars() returns a dictionary containing all of the configuration variables.
get_path() returns the configured path for a particular type of module: the standard library, site-specific modules, platform-specific modules, etc.
is_python_build() returns true if you're running a binary from a Python source tree, and false otherwise.

Consult the sysconfig documentation for more details and for a complete list of functions.

The Distutils package and sysconfig are now maintained by Tarek Ziadé, who has also started a Distutils2 package (source repository at https://hg.python.org/distutils2/) for developing a next-generation version of Distutils.

ttk: Themed Widgets for Tk¶
Tcl/Tk 8.5 includes a set of themed widgets that re-implement basic Tk widgets but have a more customizable appearance and can therefore more closely resemble the native platform's widgets.
This widget set was originally called Tile, but was renamed to Ttk (for "themed Tk") on being added to Tcl/Tk release 8.5.

To learn more, read the ttk module documentation. You may also wish to read the Tcl/Tk manual page describing the Ttk theme engine, available at https://www.tcl.tk/man/tcl8.5/TkCmd/ttk_intro.html. Some screenshots of the Python/Ttk code in use are at https://code.google.com/archive/p/python-ttk/wikis/Screenshots.wiki.

The tkinter.ttk module was written by Guilherme Polo and added in bpo-2983. An alternate version called Tile.py, written by Martin Franklin and maintained by Kevin Walzer, was proposed for inclusion in bpo-2618, but the authors argued that Guilherme Polo's work was more comprehensive.

Updated module: unittest¶
The unittest module was greatly enhanced; many new features were added. Most of these features were implemented by Michael Foord, unless otherwise noted. The enhanced version of the module is downloadable separately for use with Python versions 2.4 to 2.6, packaged as the unittest2 package, from unittest2.

When used from the command line, the module can automatically discover tests. It's not as fancy as py.test or nose, but provides a simple way to run tests kept within a set of package directories. For example, the following command will search the test/ subdirectory for any importable test files named test*.py:

python -m unittest discover -s test

Consult the unittest module documentation for more details. (Developed in bpo-6001.)

The main() function supports some other new options:

-b or --buffer will buffer the standard output and standard error streams during each test. If the test passes, any resulting output will be discarded; on failure, the buffered output will be displayed.

-c or --catch will cause the control-C interrupt to be handled more gracefully.
Instead of interrupting the test process immediately, the currently running test will be completed and then the partial results up to the interruption will be reported. If you're impatient, a second press of control-C will cause an immediate interruption.

This control-C handler tries to avoid causing problems when the code being tested or the tests being run have defined a signal handler of their own, by noticing that a signal handler was already set and calling it. If this doesn't work for you, there's a removeHandler() decorator that can be used to mark tests that should have the control-C handling disabled.

-f or --failfast makes test execution stop immediately when a test fails instead of continuing to execute further tests. (Suggested by Cliff Dyer and implemented by Michael Foord; bpo-8074.)

The progress messages now show 'x' for expected failures and 'u' for unexpected successes when run in verbose mode. (Contributed by Benjamin Peterson.)

Test cases can raise the SkipTest exception to skip a test (bpo-1034053).

The error messages for assertEqual(), assertTrue(), and assertFalse() failures now provide more information. If you set the longMessage attribute of your TestCase classes to true, both the standard error message and any additional message you provide will be printed for failures. (Added by Michael Foord; bpo-5663.)

The assertRaises() method now returns a context handler when called without providing a callable object to run. For example, you can write this:

with self.assertRaises(KeyError):
    {}['foo']

(Implemented by Antoine Pitrou; bpo-4444.)

Module- and class-level setup and teardown fixtures are now supported. Modules can contain setUpModule() and tearDownModule() functions. Classes can have setUpClass() and tearDownClass() methods that must be defined as class methods (using @classmethod or equivalent).
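The class-level fixtures just described can be sketched as follows. This is a minimal, hypothetical test case; the dict is only a stand-in for an expensive shared resource such as a database connection:

```python
import unittest

class DatabaseTests(unittest.TestCase):
    # Run once, before any test in this class (note the @classmethod).
    @classmethod
    def setUpClass(cls):
        cls.connection = {"status": "open"}   # stand-in for a real resource

    # Run once, after the last test in this class has finished.
    @classmethod
    def tearDownClass(cls):
        cls.connection["status"] = "closed"

    def test_connection_is_open(self):
        self.assertEqual(self.connection["status"], "open")
```

Running this with python -m unittest sets up the shared resource once for the whole class rather than once per test.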
These functions and methods are invoked when the test runner switches to a test case in a different module or class.

The methods addCleanup() and doCleanups() were added. addCleanup() lets you add cleanup functions that will be called unconditionally (after setUp() if setUp() fails, otherwise after tearDown()). This allows for much simpler resource allocation and deallocation during tests (bpo-5679).

A number of new methods were added that provide more specialized tests. Many of these methods were written by Google engineers for use in their test suites; Gregory P. Smith, Michael Foord, and GvR worked on merging them into Python's version of unittest.

assertIsNone() and assertIsNotNone() take one expression and verify that the result is or is not None.

assertIs() and assertIsNot() take two values and check whether the two values evaluate to the same object or not. (Added by Michael Foord; bpo-2578.)

assertIsInstance() and assertNotIsInstance() check whether the resulting object is an instance of a particular class, or of one of a tuple of classes. (Added by Georg Brandl; bpo-7031.)

assertGreater(), assertGreaterEqual(), assertLess(), and assertLessEqual() compare two quantities.

assertMultiLineEqual() compares two strings, and if they're not equal, displays a helpful comparison that highlights the differences in the two strings.
This comparison is now used by default when Unicode strings are compared with assertEqual().

assertRegexpMatches() and assertNotRegexpMatches() check whether the first argument is a string matching or not matching the regular expression provided as the second argument (bpo-8038).

assertRaisesRegexp() checks whether a particular exception is raised, and then also checks that the string representation of the exception matches the provided regular expression.

assertIn() and assertNotIn() test whether first is or is not in second.

assertItemsEqual() tests whether two provided sequences contain the same elements.

assertSetEqual() compares whether two sets are equal, and only reports the differences between the sets in case of error.

Similarly, assertListEqual() and assertTupleEqual() compare the specified types and explain any differences without necessarily printing their full values; these methods are now used by default when comparing lists and tuples using assertEqual(). More generally, assertSequenceEqual() compares two sequences and can optionally check whether both sequences are of a particular type.

assertDictEqual() compares two dictionaries and reports the differences; it's now used by default when you compare two dictionaries using assertEqual(). assertDictContainsSubset() checks whether all of the key/value pairs in first are found in second.

assertAlmostEqual() and assertNotAlmostEqual() test whether first and second are approximately equal. This method can either round their difference to an optionally specified number of places (the default is 7) and compare it to zero, or require the difference to be smaller than a supplied delta value.

loadTestsFromName() properly honors the suiteClass attribute of the TestLoader. (Fixed by Mark Roddy; bpo-6866.)

A new hook lets you extend the assertEqual() method to handle new data types. The addTypeEqualityFunc() method takes a type object and a function.
The function will be used when both of the objects being compared are of the specified type. This function should compare the two objects and raise an exception if they don't match; it's a good idea for the function to provide additional information about why the two objects aren't matching, much as the new sequence comparison methods do.

unittest.main() now takes an optional exit argument. If false, main() doesn't call sys.exit(), allowing main() to be used from the interactive interpreter. (Contributed by J. Pablo Fernández; bpo-3379.)

TestResult has new startTestRun() and stopTestRun() methods that are called immediately before and after a test run. (Contributed by Robert Collins; bpo-5728.)

With all these changes, the unittest.py was becoming awkwardly large, so the module was turned into a package and the code split into several files (by Benjamin Peterson). This doesn't affect how the module is imported or used.

See also
- https://web.archive.org/web/20210619163128/http://www.voidspace.org.uk/python/articles/unittest2.shtml
Describes the new features, how to use them, and the rationale for various design decisions. (By Michael Foord.)

Updated module: ElementTree 1.3¶
The version of the ElementTree library included with Python was updated to version 1.3. Some of the new features are:

The various parsing functions now take a parser keyword argument giving an XMLParser instance that will be used. This makes it possible to override the file's internal encoding:

p = ET.XMLParser(encoding='utf-8')
t = ET.XML("""...""", parser=p)

Errors in parsing XML now raise a ParseError exception, whose instances have a position attribute containing a (line, column) tuple giving the location of the problem.

ElementTree's code for converting trees to a string has been significantly reworked, making it roughly twice as fast in many cases.
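The ParseError behaviour mentioned above can be seen with a short sketch (Python 3 spelling shown; the input document is a made-up example):

```python
import xml.etree.ElementTree as ET

try:
    ET.XML("<root><unclosed></root>")      # malformed: <unclosed> is never closed
except ET.ParseError as err:
    line, column = err.position            # (line, column) of the problem
    print("parse error at line %d, column %d" % (line, column))
```

The position attribute makes it straightforward to point users at the offending spot in a document, rather than just reporting that parsing failed.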
The ElementTree.write() and Element.write() methods now have a method parameter that can be "xml" (the default), "html", or "text". HTML mode will output empty elements as <tag> instead of <tag />, and text mode will skip over elements and only output the text chunks. If you set the tag attribute of an element to None but leave its children in place, the element will be omitted when the tree is written out, so you don't need to do more extensive rearrangement to remove a single element.

Namespace handling has also been improved. All xmlns: declarations are now output on the root element, not scattered throughout the resulting XML. You can set the default namespace for a tree by setting the default_namespace attribute and can register new prefixes with register_namespace(). In XML mode, you can use the true/false xml_declaration parameter to suppress the XML declaration.

New Element method: extend() appends the items from a sequence to the element's children. Elements themselves behave like sequences, so it's easy to move children from one element to another:

from xml.etree import ElementTree as ET
t = ET.XML("""<list>
  <item>1</item> <item>2</item> <item>3</item>
</list>""")
new = ET.XML('<root/>')
new.extend(t)

# Outputs <root><item>1</item>...
print ET.tostring(new)

New Element method: iter() yields the children of the element as a generator. It's also possible to write for child in elem: to loop over an element's children. The existing method getiterator() is now deprecated, as is getchildren() which constructs and returns a list of children.

New Element method: itertext() yields all chunks of text that are descendants of the element. For example:

t = ET.XML("""<list>
  <item>1</item> <item>2</item> <item>3</item>
</list>""")

# Outputs ['\n  ', '1', ' ', '2', ' ', '3', '\n']
print list(t.itertext())

Deprecated: using an element as a Boolean (i.e., if elem:) would return true if the element had any children, or false if there were no children. This behaviour is confusing – None is false, but so is a childless element?
\u2013 so it will now trigger aFutureWarning\n. In your code, you should be explicit: writelen(elem) != 0\nif you\u2019re interested in the number of children, orelem is not None\n.\nFredrik Lundh develops ElementTree and produced the 1.3 version; you can read his article describing 1.3 at https://web.archive.org/web/20200703234532/http://effbot.org/zone/elementtree-13-intro.htm. Florent Xicluna updated the version included with Python, after discussions on python-dev and in bpo-6472.)\nBuild and C API Changes\u00b6\nChanges to Python\u2019s build process and to the C API include:\nThe latest release of the GNU Debugger, GDB 7, can be scripted using Python. When you begin debugging an executable program P, GDB will look for a file named\nP-gdb.py\nand automatically read it. Dave Malcolm contributed apython-gdb.py\nthat adds a number of commands useful when debugging Python itself. For example,py-up\nandpy-down\ngo up or down one Python stack frame, which usually corresponds to several C stack frames.py-print\nprints the value of a Python variable, andpy-bt\nprints the Python stack trace. (Added as a result of bpo-8032.)If you use the\n.gdbinit\nfile provided with Python, the \u201cpyo\u201d macro in the 2.7 version now works correctly when the thread being debugged doesn\u2019t hold the GIL; the macro now acquires it before printing. (Contributed by Victor Stinner; bpo-3632.)Py_AddPendingCall()\nis now thread-safe, letting any worker thread submit notifications to the main Python thread. This is particularly useful for asynchronous IO operations. (Contributed by Kristj\u00e1n Valur J\u00f3nsson; bpo-4293.)New function:\nPyCode_NewEmpty()\ncreates an empty code object; only the filename, function name, and first line number are required. This is useful for extension modules that are attempting to construct a more useful traceback stack. Previously such extensions needed to callPyCode_New()\n, which had many more arguments. 
(Added by Jeffrey Yasskin.)

New function: PyErr_NewExceptionWithDoc() creates a new exception class, just as the existing PyErr_NewException() does, but takes an extra char * argument containing the docstring for the new exception class. (Added by 'lekma' on the Python bug tracker; bpo-7033.)

New function: PyFrame_GetLineNumber() takes a frame object and returns the line number that the frame is currently executing. Previously code would need to get the index of the bytecode instruction currently executing, and then look up the line number corresponding to that address. (Added by Jeffrey Yasskin.)

New functions: PyLong_AsLongAndOverflow() and PyLong_AsLongLongAndOverflow() approximate a Python long integer as a C long or long long. If the number is too large to fit into the output type, an overflow flag is set and returned to the caller. (Contributed by Case Van Horsen; bpo-7528 and bpo-7767.)

New function: stemming from the rewrite of string-to-float conversion, a new PyOS_string_to_double() function was added. The old PyOS_ascii_strtod() and PyOS_ascii_atof() functions are now deprecated.

New function: PySys_SetArgvEx() sets the value of sys.argv and can optionally update sys.path to include the directory containing the script named by sys.argv[0], depending on the value of an updatepath parameter.

This function was added to close a security hole for applications that embed Python. The old function, PySys_SetArgv(), would always update sys.path, and sometimes it would add the current directory.
This meant that, if you ran an application embedding Python in a directory controlled by someone else, attackers could put a Trojan-horse module in the directory (say, a file named os.py) that your application would then import and run.

If you maintain a C/C++ application that embeds Python, check whether you're calling PySys_SetArgv() and carefully consider whether the application should be using PySys_SetArgvEx() with updatepath set to false.

Security issue reported as CVE 2008-5983; discussed in bpo-5753, and fixed by Antoine Pitrou.

New macros: the Python header files now define the following macros: Py_ISALNUM, Py_ISALPHA, Py_ISDIGIT, Py_ISLOWER, Py_ISSPACE, Py_ISUPPER, Py_ISXDIGIT, Py_TOLOWER, and Py_TOUPPER. All of these functions are analogous to the C standard macros for classifying characters, but ignore the current locale setting, because in several places Python needs to analyze characters in a locale-independent way. (Added by Eric Smith; bpo-5793.)

Removed function: PyEval_CallObject() is now only available as a macro. A function version was being kept around to preserve ABI linking compatibility, but that was in 1997; it can certainly be deleted by now. (Removed by Antoine Pitrou; bpo-8276.)

New format codes: the PyString_FromFormat(), PyString_FromFormatV(), and PyErr_Format() functions now accept %lld and %llu format codes for displaying C's long long types. (Contributed by Mark Dickinson; bpo-7228.)

The complicated interaction between threads and process forking has been changed. Previously, the child process created by os.fork() might fail because the child is created with only a single thread running, the thread performing the os.fork(). If other threads were holding a lock, such as Python's import lock, when the fork was performed, the lock would still be marked as "held" in the new process.
But in the child process nothing would ever release the lock, since the other threads weren't replicated, and the child process would no longer be able to perform imports.

Python 2.7 acquires the import lock before performing an os.fork(), and will also clean up any locks created using the threading module. C extension modules that have internal locks, or that call fork() themselves, will not benefit from this clean-up. (Fixed by Thomas Wouters; bpo-1590864.)

The Py_Finalize() function now calls the internal threading._shutdown() function; this prevents some exceptions from being raised when an interpreter shuts down. (Patch by Adam Olsen; bpo-1722344.)

When using the PyMemberDef structure to define attributes of a type, Python will no longer let you try to delete or set a T_STRING_INPLACE attribute.

Global symbols defined by the ctypes module are now prefixed with Py, or with _ctypes. (Implemented by Thomas Heller; bpo-3102.)

New configure option: the --with-system-expat switch allows building the pyexpat module to use the system Expat library. (Contributed by Arfrever Frehtes Taifersar Arahesis; bpo-7609.)

New configure option: the --with-valgrind option will now disable the pymalloc allocator, which is difficult for the Valgrind memory-error detector to analyze correctly. Valgrind will therefore be better at detecting memory leaks and overruns. (Contributed by James Henstridge; bpo-2422.)

New configure option: you can now supply an empty string to --with-dbmliborder= in order to disable all of the various DBM modules. (Added by Arfrever Frehtes Taifersar Arahesis; bpo-6491.)

The configure script now checks for floating-point rounding bugs on certain 32-bit Intel chips and defines an X87_DOUBLE_ROUNDING preprocessor definition. No code currently uses this definition, but it's available if anyone wishes to use it. (Added by Mark Dickinson; bpo-2937.)

configure also now sets an LDCXXSHARED Makefile variable for supporting C++ linking.
(Contributed by Arfrever Frehtes Taifersar Arahesis; bpo-1222585.)

The build process now creates the necessary files for pkg-config support. (Contributed by Clinton Roy; bpo-3585.)

The build process now supports Subversion 1.7. (Contributed by Arfrever Frehtes Taifersar Arahesis; bpo-6094.)

Capsules¶
Python 3.1 adds a new C datatype, PyCapsule, for providing a C API to an extension module. A capsule is essentially the holder of a C void * pointer, and is made available as a module attribute; for example, the socket module's API is exposed as socket.CAPI, and unicodedata exposes ucnhash_CAPI. Other extensions can import the module, access its dictionary to get the capsule object, and then get the void * pointer, which will usually point to an array of pointers to the module's various API functions.

There is an existing data type already used for this, PyCObject, but it doesn't provide type safety. Evil code written in pure Python could cause a segmentation fault by taking a PyCObject from module A and somehow substituting it for the PyCObject in module B. Capsules know their own name, and getting the pointer requires providing the name:

void *vtable;

if (!PyCapsule_IsValid(capsule, "mymodule.CAPI")) {
    PyErr_SetString(PyExc_ValueError, "argument type invalid");
    return NULL;
}

vtable = PyCapsule_GetPointer(capsule, "mymodule.CAPI");

You are assured that vtable points to whatever you're expecting. If a different capsule was passed in, PyCapsule_IsValid() would detect the mismatched name and return false. Refer to Providing a C API for an Extension Module for more information on using these objects.

Python 2.7 now uses capsules internally to provide various extension-module APIs, but the PyCObject_AsVoidPtr() was modified to handle capsules, preserving compile-time compatibility with the PyCObject interface.
Use of PyCObject_AsVoidPtr() will signal a PendingDeprecationWarning, which is silent by default.

Implemented in Python 3.1 and backported to 2.7 by Larry Hastings; discussed in bpo-5630.

Port-Specific Changes: Windows¶
The msvcrt module now contains some constants from the crtassem.h header file: CRT_ASSEMBLY_VERSION, VC_ASSEMBLY_PUBLICKEYTOKEN, and LIBRARIES_ASSEMBLY_NAME_PREFIX. (Contributed by David Cournapeau; bpo-4365.)

The _winreg module for accessing the registry now implements the CreateKeyEx() and DeleteKeyEx() functions, extended versions of previously supported functions that take several extra arguments. The DisableReflectionKey(), EnableReflectionKey(), and QueryReflectionKey() were also tested and documented. (Implemented by Brian Curtin: bpo-7347.)

The new _beginthreadex() API is used to start threads, and the native thread-local storage functions are now used. (Contributed by Kristján Valur Jónsson; bpo-3582.)

The os.kill() function now works on Windows. The signal value can be the constants CTRL_C_EVENT, CTRL_BREAK_EVENT, or any integer. The first two constants will send Control-C and Control-Break keystroke events to subprocesses; any other value will use the TerminateProcess() API. (Contributed by Miki Tebeka; bpo-1220212.)

The os.listdir() function now correctly fails for an empty path. (Fixed by Hirokazu Yamamoto; bpo-5913.)

The mimetypes module will now read the MIME database from the Windows registry when initializing. (Patch by Gabriel Genellina; bpo-4969.)

Port-Specific Changes: Mac OS X¶
The path /Library/Python/2.7/site-packages is now appended to sys.path, in order to share added packages between the system installation and a user-installed copy of the same version.
(Changed by Ronald Oussoren; bpo-4865.)

Changed in version 2.7.13: As of 2.7.13, this change was removed. /Library/Python/2.7/site-packages, the site-packages directory used by the Apple-supplied system Python 2.7, is no longer appended to sys.path for user-installed Pythons such as from the python.org installers. As of macOS 10.12, Apple changed how the system site-packages directory is configured, which could cause installation of pip components, like setuptools, to fail. Packages installed for the system Python will no longer be shared with user-installed Pythons. (bpo-28440)

Port-Specific Changes: FreeBSD¶
FreeBSD 7.1's SO_SETFIB constant, used with the socket methods getsockopt()/setsockopt() to select an alternate routing table, is now available in the socket module. (Added by Kyle VanderBeek; bpo-8235.)

Other Changes and Fixes¶
Two benchmark scripts, iobench and ccbench, were added to the Tools directory. iobench measures the speed of the built-in file I/O objects returned by open() while performing various operations, and ccbench is a concurrency benchmark that tries to measure computing throughput, thread switching latency, and IO processing bandwidth when performing several tasks using a varying number of threads.

The Tools/i18n/msgfmt.py script now understands plural forms in .po files. (Fixed by Martin von Löwis; bpo-5464.)

When importing a module from a .pyc or .pyo file with an existing .py counterpart, the co_filename attributes of the resulting code objects are overwritten when the original filename is obsolete. This can happen if the file has been renamed, moved, or is accessed through different paths. (Patch by Ziga Seilnacht and Jean-Paul Calderone; bpo-1180193.)

The regrtest.py script now takes a --randseed= switch that takes an integer that will be used as the random seed for the -r option that executes tests in random order.
The -r option also reports the seed that was used. (Added by Collin Winter.)

Another regrtest.py switch is -j, which takes an integer specifying how many tests run in parallel. This allows reducing the total runtime on multi-core machines. This option is compatible with several other options, including the -R switch which is known to produce long runtimes. (Added by Antoine Pitrou, bpo-6152.) This can also be used with a new -F switch that runs selected tests in a loop until they fail. (Added by Antoine Pitrou; bpo-7312.)

When executed as a script, the py_compile.py module now accepts '-' as an argument, which will read standard input for the list of filenames to be compiled. (Contributed by Piotr Ożarowski; bpo-8233.)

Porting to Python 2.7¶
This section lists previously described changes and other bugfixes that may require changes to your code:

The range() function processes its arguments more consistently; it will now call __int__() on non-float, non-integer arguments that are supplied to it. (Fixed by Alexander Belopolsky; bpo-1533.)

The string format() method changed the default precision used for floating-point and complex numbers from 6 decimal places to 12, which matches the precision used by str(). (Changed by Eric Smith; bpo-5920.)

Because of an optimization for the with statement, the special methods __enter__() and __exit__() must belong to the object's type, and cannot be directly attached to the object's instance. This affects new-style classes (derived from object) and C extension types. (bpo-6101.)

Due to a bug in Python 2.6, the exc_value parameter to __exit__() methods was often the string representation of the exception, not an instance. This was fixed in 2.7, so exc_value will be an instance as expected. (Fixed by Florent Xicluna; bpo-7853.)

When a restricted set of attributes were set using __slots__, deleting an unset attribute would not raise AttributeError as you would expect.
(Fixed by Benjamin Peterson; bpo-7604.)

In the standard library:

Operations with datetime instances that resulted in a year falling outside the supported range didn't always raise OverflowError. Such errors are now checked more carefully and will now raise the exception. (Reported by Mark Leander, patch by Anand B. Pillai and Alexander Belopolsky; bpo-7150.)

When using Decimal instances with a string's format() method, the default alignment was previously left-alignment. This has been changed to right-alignment, which might change the output of your programs. (Changed by Mark Dickinson; bpo-6857.)

Comparisons involving a signaling NaN value (or sNAN) now signal InvalidOperation instead of silently returning a true or false value depending on the comparison operator. Quiet NaN values (or NaN) are now hashable. (Fixed by Mark Dickinson; bpo-7279.)

The xml.etree.ElementTree library no longer escapes ampersands and angle brackets when outputting an XML processing instruction (which looks like <?...?>) or comment (which looks like <!--...-->). (Patch by Neil Muller; bpo-2746.)

The readline() method of StringIO objects now does nothing when a negative length is requested, as other file-like objects do. (bpo-7348).

The syslog module will now use the value of sys.argv[0] as the identifier instead of the previous default value of 'python'. (Changed by Sean Reifschneider; bpo-8451.)

The tarfile module's default error handling has changed, to no longer suppress fatal errors. The default error level was previously 0, which meant that errors would only result in a message being written to the debug log; but because the debug log is not activated by default, these errors went unnoticed. The default error level is now 1, which raises an exception if there's an error.
(Changed by Lars Gustäbel; bpo-7357.)

The urlparse module's urlsplit() now handles unknown URL schemes in a fashion compliant with RFC 3986: if the URL is of the form "<something>://...", the text before the :// is treated as the scheme, even if it's a made-up scheme that the module doesn't know about. This change may break code that worked around the old behaviour. For example, Python 2.6.4 or 2.5 will return the following:

>>> import urlparse
>>> urlparse.urlsplit('invented://host/filename?query')
('invented', '', '//host/filename?query', '', '')

Python 2.7 (and Python 2.6.5) will return:

>>> import urlparse
>>> urlparse.urlsplit('invented://host/filename?query')
('invented', 'host', '/filename?query', '', '')

(Python 2.7 actually produces slightly different output, since it returns a named tuple instead of a standard tuple.)

For C extensions:

C extensions that use integer format codes with the PyArg_Parse* family of functions will now raise a TypeError exception instead of triggering a DeprecationWarning (bpo-5080).

Use the new PyOS_string_to_double() function instead of the old PyOS_ascii_strtod() and PyOS_ascii_atof() functions, which are now deprecated.

For applications that embed Python:

The PySys_SetArgvEx() function was added, letting applications close a security hole when the existing PySys_SetArgv() function was used. Check whether you're calling PySys_SetArgv() and carefully consider whether the application should be using PySys_SetArgvEx() with updatepath set to false.

New Features Added to Python 2.7 Maintenance Releases¶
New features may be added to Python 2.7 maintenance releases when the situation genuinely calls for it.
Any such additions must go through the Python Enhancement Proposal process, and make a compelling case for why they can\u2019t be adequately addressed by either adding the new feature solely to Python 3, or else by publishing it on the Python Package Index.\nIn addition to the specific proposals listed below, there is a general exemption allowing new -3 warnings to be added in any Python 2.7 maintenance release.\nTwo new environment variables for debug mode\u00b6\nIn debug mode, the [xxx refs] statistic is no longer written by default; the PYTHONSHOWREFCOUNT environment variable must now also be set. (Contributed by Victor Stinner; bpo-31733.)\nWhen Python is compiled with COUNT_ALLOC defined, allocation counts are no longer dumped by default: the PYTHONSHOWALLOCCOUNT environment variable must now also be set. Moreover, allocation counts are now dumped to stderr rather than stdout. (Contributed by Victor Stinner; bpo-31692.)\nAdded in version 2.7.15.\nPEP 434: IDLE Enhancement Exception for All Branches\u00b6\nPEP 434 describes a general exemption for changes made to the IDLE development environment shipped along with Python. This exemption makes it possible for the IDLE developers to provide a more consistent user experience across all supported versions of Python 2 and 3.\nFor details of any IDLE changes, refer to the NEWS file for the specific release.\nPEP 466: Network Security Enhancements for Python 2.7\u00b6\nPEP 466 describes a number of network security enhancement proposals that have been approved for inclusion in Python 2.7 maintenance releases, with the first of those changes appearing in the Python 2.7.7 release.\nPEP 466 related features added in Python 2.7.7:\nhmac.compare_digest() was backported from Python 3 to make a timing-attack-resistant comparison operation available to Python 2 applications. (Contributed by Alex Gaynor; bpo-21306.)\nOpenSSL 1.0.1g was upgraded in the official Windows installers published on python.org. 
(Contributed by Zachary Ware; bpo-21462.)\nPEP 466 related features added in Python 2.7.8:\nhashlib.pbkdf2_hmac() was backported from Python 3 to make a hashing algorithm suitable for secure password storage broadly available to Python 2 applications. (Contributed by Alex Gaynor; bpo-21304.)\nOpenSSL 1.0.1h was upgraded for the official Windows installers published on python.org. (Contributed by Zachary Ware in bpo-21671 for CVE-2014-0224.)\nPEP 466 related features added in Python 2.7.9:\nMost of Python 3.4\u2019s ssl module was backported. This means ssl now supports Server Name Indication, TLS1.x settings, access to the platform certificate store, the SSLContext class, and other features. (Contributed by Alex Gaynor and David Reid; bpo-21308.) Refer to the \u201cVersion added: 2.7.9\u201d notes in the module documentation for specific details.\nos.urandom() was changed to cache a file descriptor to /dev/urandom instead of reopening /dev/urandom on every call. (Contributed by Alex Gaynor; bpo-21305.)\nhashlib.algorithms_guaranteed and hashlib.algorithms_available were backported from Python 3 to make it easier for Python 2 applications to select the strongest available hash algorithm. (Contributed by Alex Gaynor in bpo-21307.)\nPEP 477: Backport ensurepip (PEP 453) to Python 2.7\u00b6\nPEP 477 approves the inclusion of the PEP 453 ensurepip module and the improved documentation that was enabled by it in the Python 2.7 maintenance releases, appearing first in the Python 2.7.9 release.\nBootstrapping pip By Default\u00b6\nThe new ensurepip module (defined in PEP 453) provides a standard cross-platform mechanism to bootstrap the pip installer into Python installations. 
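A minimal sketch of driving that bootstrap mechanism from code (shown here on a modern Python 3 interpreter, where the ensurepip API still has the same shape; the actual installation step is normally run as `python -m ensurepip --upgrade`):

```python
import ensurepip

# ensurepip ships a bundled pip wheel inside the standard library;
# version() reports which pip the bootstrap would install.
bundled = ensurepip.version()
print("bundled pip:", bundled)
```

This only inspects the bundled version; calling `ensurepip.bootstrap()` would perform the real installation into the current environment.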
The version of pip included with Python 2.7.9 is pip 1.5.6, and future 2.7.x maintenance releases will update the bundled version to the latest version of pip that is available at the time of creating the release candidate.\nBy default, the commands pip, pipX and pipX.Y will be installed on all platforms (where X.Y stands for the version of the Python installation), along with the pip Python package and its dependencies.\nFor CPython source builds on POSIX systems, the make install and make altinstall commands do not bootstrap pip by default. This behaviour can be controlled through configure options, and overridden through Makefile options.\nOn Windows and Mac OS X, the CPython installers now default to installing pip along with CPython itself (users may opt out of installing it during the installation process). Windows users will need to opt in to the automatic PATH modifications to have pip available from the command line by default; otherwise it can still be accessed through the Python launcher for Windows as py -m pip.\nAs discussed in the PEP, platform packagers may choose not to install these commands by default, as long as, when invoked, they provide clear and simple directions on how to install them on that platform (usually using the system package manager).\nDocumentation Changes\u00b6\nAs part of this change, the Installing Python Modules and Distributing Python Modules sections of the documentation have been completely redesigned as short getting started and FAQ documents. 
Most packaging documentation has now been moved out to the Python Packaging User Guide maintained by the Python Packaging Authority, and to the documentation of the individual projects.\nHowever, as this migration is currently still incomplete, the legacy versions of those guides remain available as Building C and C++ Extensions with setuptools.\nSee also\n- PEP 453 \u2013 Explicit bootstrapping of pip in Python installations\nPEP written by Donald Stufft and Nick Coghlan, implemented by Donald Stufft, Nick Coghlan, Martin von L\u00f6wis and Ned Deily.\nPEP 476: Enabling certificate verification by default for stdlib http clients\u00b6\nPEP 476 updated httplib and modules which use it, such as urllib2 and xmlrpclib, to verify by default that the server presents a certificate which is signed by a Certificate Authority in the platform trust store and whose hostname matches the hostname being requested, significantly improving security for many applications. This change was made in the Python 2.7.9 release.\nApplications which require the previous behavior can pass an alternate context:\nimport urllib2\nimport ssl\n# This disables all verification\ncontext = ssl._create_unverified_context()\n# This allows using a specific certificate for the host, which doesn't need\n# to be in the trust store\ncontext = ssl.create_default_context(cafile=\"/path/to/file.crt\")\nurllib2.urlopen(\"https://invalid-cert\", context=context)\nPEP 493: HTTPS verification migration tools for Python 2.7\u00b6\nPEP 493 provides additional migration tools to support a more incremental infrastructure upgrade process for environments containing applications and services relying on the historically permissive processing of server certificates when establishing client HTTPS connections. 
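To make the difference between the verified default and the legacy permissive behaviour concrete, here is a small sketch that inspects the two contexts without opening any connection (it runs on Python 3 as well; note that `_create_unverified_context` is a private API, used here only to mirror the example above):

```python
import ssl

# The default context verifies certificates and hostnames (PEP 476 behaviour).
default_ctx = ssl.create_default_context()
assert default_ctx.verify_mode == ssl.CERT_REQUIRED
assert default_ctx.check_hostname

# The unverified context restores the old permissive behaviour:
# no certificate validation, no hostname check.
legacy_ctx = ssl._create_unverified_context()
assert legacy_ctx.verify_mode == ssl.CERT_NONE
assert not legacy_ctx.check_hostname
```

Either context object can then be passed as the `context` argument of `urlopen()`, as in the example above.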
These additions were made in the Python 2.7.12 release.\nThese tools are intended for use in cases where affected applications and services can\u2019t be modified to explicitly pass a more permissive SSL context when establishing the connection.\nFor applications and services which can\u2019t be modified at all, the new PYTHONHTTPSVERIFY environment variable may be set to 0 to revert an entire Python process back to the default permissive behaviour of Python 2.7.8 and earlier.\nFor cases where the connection establishment code can\u2019t be modified, but the overall application can be, the new ssl._https_verify_certificates() function can be used to adjust the default behaviour at runtime.\nNew make regen-all build target\u00b6\nTo simplify cross-compilation, and to ensure that CPython can reliably be compiled without requiring an existing version of Python to already be available, the autotools-based build system no longer attempts to implicitly recompile generated files based on file modification times.\nInstead, a new make regen-all command has been added to force regeneration of these files when desired (e.g. after an initial version of Python has already been built based on the pregenerated versions).\nMore selective regeneration targets are also defined \u2013 see Makefile.pre.in for details.\n(Contributed by Victor Stinner in bpo-23404.)\nAdded in version 2.7.14.\nRemoval of make touch build target\u00b6\nThe make touch build target, previously used to request implicit regeneration of generated files by updating their modification times, has been removed.\nIt has been replaced by the new make regen-all target.\n(Contributed by Victor Stinner in bpo-23404.)\nChanged in version 2.7.14.\nAcknowledgements\u00b6\nThe author would like to thank the following people for offering suggestions, corrections and assistance with various drafts of this article: Nick Coghlan, Philip Jenvey, Ryan Lovett, R. 
David Murray, Hugh Secker-Walker.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 25383}
{"url": "https://docs.python.org/3/whatsnew/3.0.html", "title": "What\u2019s New In Python 3.0", "content": "What\u2019s New In Python 3.0\u00b6\n- Author:\nGuido van Rossum\nThis article explains the new features in Python 3.0, compared to 2.6. Python 3.0, also known as \u201cPython 3000\u201d or \u201cPy3K\u201d, is the first ever intentionally backwards incompatible Python release. Python 3.0 was released on December 3, 2008. There are more changes than in a typical release, and more that are important for all Python users. 
Nevertheless, after digesting the changes, you\u2019ll find that Python really hasn\u2019t changed all that much \u2013 by and large, we\u2019re mostly fixing well-known annoyances and warts, and removing a lot of old cruft.\nThis article doesn\u2019t attempt to provide a complete specification of all new features, but instead tries to give a convenient overview. For full details, you should refer to the documentation for Python 3.0, and/or the many PEPs referenced in the text. If you want to understand the complete implementation and design rationale for a particular feature, PEPs usually have more details than the regular documentation; but note that PEPs usually are not kept up-to-date once a feature has been fully implemented.\nDue to time constraints this document is not as complete as it should\nhave been. As always for a new release, the Misc/NEWS\nfile in the\nsource distribution contains a wealth of detailed information about\nevery small thing that was changed.\nCommon Stumbling Blocks\u00b6\nThis section lists those few changes that are most likely to trip you up if you\u2019re used to Python 2.5.\nPrint Is A Function\u00b6\nThe print\nstatement has been replaced with a print()\nfunction, with keyword arguments to replace most of the special syntax\nof the old print\nstatement (PEP 3105). 
Examples:\nOld: print \"The answer is\", 2*2\nNew: print(\"The answer is\", 2*2)\nOld: print x, # Trailing comma suppresses newline\nNew: print(x, end=\" \") # Appends a space instead of a newline\nOld: print # Prints a newline\nNew: print() # You must call the function!\nOld: print >>sys.stderr, \"fatal error\"\nNew: print(\"fatal error\", file=sys.stderr)\nOld: print (x, y) # prints repr((x, y))\nNew: print((x, y)) # Not the same as print(x, y)!\nYou can also customize the separator between items, e.g.:\nprint(\"There are <\", 2**32, \"> possibilities!\", sep=\"\")\nwhich produces:\nThere are <4294967296> possibilities!\nNote:\nThe\nprint()\nfunction doesn\u2019t support the \u201csoftspace\u201d feature of the oldprint\nstatement. For example, in Python 2.x,print \"A\\n\", \"B\"\nwould write\"A\\nB\\n\"\n; but in Python 3.0,print(\"A\\n\", \"B\")\nwrites\"A\\n B\\n\"\n.Initially, you\u2019ll be finding yourself typing the old\nprint x\na lot in interactive mode. Time to retrain your fingers to typeprint(x)\ninstead!When using the\n2to3\nsource-to-source conversion tool, allprint\nstatements are automatically converted toprint()\nfunction calls, so this is mostly a non-issue for larger projects.\nViews And Iterators Instead Of Lists\u00b6\nSome well-known APIs no longer return lists:\ndict\nmethodsdict.keys()\n,dict.items()\nanddict.values()\nreturn \u201cviews\u201d instead of lists. For example, this no longer works:k = d.keys(); k.sort()\n. Usek = sorted(d)\ninstead (this works in Python 2.5 too and is just as efficient).Also, the\ndict.iterkeys()\n,dict.iteritems()\nanddict.itervalues()\nmethods are no longer supported.map()\nandfilter()\nreturn iterators. If you really need a list and the input sequences are all of equal length, a quick fix is to wrapmap()\ninlist()\n, e.g.list(map(...))\n, but a better fix is often to use a list comprehension (especially when the original code useslambda\n), or rewriting the code so it doesn\u2019t need a list at all. 
Particularly tricky ismap()\ninvoked for the side effects of the function; the correct transformation is to use a regularfor\nloop (since creating a list would just be wasteful).If the input sequences are not of equal length,\nmap()\nwill stop at the termination of the shortest of the sequences. For full compatibility withmap()\nfrom Python 2.x, also wrap the sequences initertools.zip_longest()\n, e.g.map(func, *sequences)\nbecomeslist(map(func, itertools.zip_longest(*sequences)))\n.range()\nnow behaves likexrange()\nused to behave, except it works with values of arbitrary size. The latter no longer exists.zip()\nnow returns an iterator.\nOrdering Comparisons\u00b6\nPython 3.0 has simplified the rules for ordering comparisons:\nThe ordering comparison operators (\n<\n,<=\n,>=\n,>\n) raise a TypeError exception when the operands don\u2019t have a meaningful natural ordering. Thus, expressions like1 < ''\n,0 > None\norlen <= len\nare no longer valid, and e.g.None < None\nraisesTypeError\ninstead of returningFalse\n. A corollary is that sorting a heterogeneous list no longer makes sense \u2013 all the elements must be comparable to each other. Note that this does not apply to the==\nand!=\noperators: objects of different incomparable types always compare unequal to each other.sorted()\nandlist.sort()\nno longer accept the cmp argument providing a comparison function. Use the key argument instead. N.B. the key and reverse arguments are now \u201ckeyword-only\u201d.The\ncmp()\nfunction should be treated as gone, and the__cmp__()\nspecial method is no longer supported. Use__lt__()\nfor sorting,__eq__()\nwith__hash__()\n, and other rich comparisons as needed. (If you really need thecmp()\nfunctionality, you could use the expression(a > b) - (a < b)\nas the equivalent forcmp(a, b)\n.)\nIntegers\u00b6\nPEP 237: Essentially,\nlong\nrenamed toint\n. 
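The (a > b) - (a < b) replacement for cmp() suggested above can be checked directly; this sketch defines a stand-in helper (the name cmp2 is ours, not part of any standard library):

```python
def cmp2(a, b):
    # (a > b) - (a < b): the booleans act as 1/0 in arithmetic,
    # reproducing Python 2's cmp() result of -1, 0 or 1.
    return (a > b) - (a < b)

assert cmp2(1, 2) == -1   # a < b
assert cmp2(2, 2) == 0    # a == b
assert cmp2(3, 2) == 1    # a > b
```

The same expression works for any pair of mutually orderable values, since it relies only on the rich comparisons `<` and `>`.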
That is, there is only one built-in integral type, namedint\n; but it behaves mostly like the oldlong\ntype.PEP 238: An expression like\n1/2\nreturns a float. Use1//2\nto get the truncating behavior. (The latter syntax has existed for years, at least since Python 2.2.)The\nsys.maxint\nconstant was removed, since there is no longer a limit to the value of integers. However,sys.maxsize\ncan be used as an integer larger than any practical list or string index. It conforms to the implementation\u2019s \u201cnatural\u201d integer size and is typically the same assys.maxint\nin previous releases on the same platform (assuming the same build options).The\nrepr()\nof a long integer doesn\u2019t include the trailingL\nanymore, so code that unconditionally strips that character will chop off the last digit instead. (Usestr()\ninstead.)Octal literals are no longer of the form\n0720\n; use0o720\ninstead.\nText Vs. Data Instead Of Unicode Vs. 8-bit\u00b6\nEverything you thought you knew about binary data and Unicode has changed.\nPython 3.0 uses the concepts of text and (binary) data instead of Unicode strings and 8-bit strings. All text is Unicode; however encoded Unicode is represented as binary data. The type used to hold text is\nstr\n, the type used to hold data isbytes\n. The biggest difference with the 2.x situation is that any attempt to mix text and data in Python 3.0 raisesTypeError\n, whereas if you were to mix Unicode and 8-bit strings in Python 2.x, it would work if the 8-bit string happened to contain only 7-bit (ASCII) bytes, but you would getUnicodeDecodeError\nif it contained non-ASCII values. This value-specific behavior has caused numerous sad faces over the years.As a consequence of this change in philosophy, pretty much all code that uses Unicode, encodings or binary data most likely has to change. The change is for the better, as in the 2.x world there were numerous bugs having to do with mixing encoded and unencoded text. 
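The strict text/data separation described above can be demonstrated in a few lines (a sketch; any Python 3 interpreter behaves this way):

```python
text = "caf\u00e9"            # a str: Unicode text
data = text.encode("utf-8")   # explicit conversion to bytes
assert isinstance(data, bytes)
assert data.decode("utf-8") == text

# Any attempt to mix the two types fails loudly with TypeError,
# instead of silently depending on the byte values involved.
try:
    "a" + b"b"
except TypeError:
    pass
else:
    raise AssertionError("mixing str and bytes should raise TypeError")
```

Compare this with Python 2, where `u"a" + b"b"` succeeded whenever the 8-bit string happened to be pure ASCII.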
To be prepared in Python 2.x, start using\nunicode\nfor all unencoded text, andstr\nfor binary or encoded data only. Then the2to3\ntool will do most of the work for you.You can no longer use\nu\"...\"\nliterals for Unicode text. However, you must useb\"...\"\nliterals for binary data.As the\nstr\nandbytes\ntypes cannot be mixed, you must always explicitly convert between them. Usestr.encode()\nto go fromstr\ntobytes\n, andbytes.decode()\nto go frombytes\ntostr\n. You can also usebytes(s, encoding=...)\nandstr(b, encoding=...)\n, respectively.Like\nstr\n, thebytes\ntype is immutable. There is a separate mutable type to hold buffered binary data,bytearray\n. Nearly all APIs that acceptbytes\nalso acceptbytearray\n. The mutable API is based oncollections.MutableSequence\n.All backslashes in raw string literals are interpreted literally. This means that\n'\\U'\nand'\\u'\nescapes in raw strings are not treated specially. For example,r'\\u20ac'\nis a string of 6 characters in Python 3.0, whereas in 2.6,ur'\\u20ac'\nwas the single \u201ceuro\u201d character. (Of course, this change only affects raw string literals; the euro character is'\\u20ac'\nin Python 3.0.)The built-in\nbasestring\nabstract type was removed. Usestr\ninstead. Thestr\nandbytes\ntypes don\u2019t have functionality enough in common to warrant a shared base class. The2to3\ntool (see below) replaces every occurrence ofbasestring\nwithstr\n.Files opened as text files (still the default mode for\nopen()\n) always use an encoding to map between strings (in memory) and bytes (on disk). Binary files (opened with ab\nin the mode argument) always use bytes in memory. This means that if a file is opened using an incorrect mode or encoding, I/O will likely fail loudly, instead of silently producing incorrect data. It also means that even Unix users will have to specify the correct mode (text or binary) when opening a file. 
There is a platform-dependent default encoding, which on Unixy platforms can be set with theLANG\nenvironment variable (and sometimes also with some other platform-specific locale-related environment variables). In many cases, but not all, the system default is UTF-8; you should never count on this default. Any application reading or writing more than pure ASCII text should probably have a way to override the encoding. There is no longer any need for using the encoding-aware streams in thecodecs\nmodule.The initial values of\nsys.stdin\n,sys.stdout\nandsys.stderr\nare now unicode-only text files (i.e., they are instances ofio.TextIOBase\n). To read and write bytes data with these streams, you need to use theirio.TextIOBase.buffer\nattribute.Filenames are passed to and returned from APIs as (Unicode) strings. This can present platform-specific problems because on some platforms filenames are arbitrary byte strings. (On the other hand, on Windows filenames are natively stored as Unicode.) As a work-around, most APIs (e.g.\nopen()\nand many functions in theos\nmodule) that take filenames acceptbytes\nobjects as well as strings, and a few APIs have a way to ask for abytes\nreturn value. Thus,os.listdir()\nreturns a list ofbytes\ninstances if the argument is abytes\ninstance, andos.getcwdb()\nreturns the current working directory as abytes\ninstance. Note that whenos.listdir()\nreturns a list of strings, filenames that cannot be decoded properly are omitted rather than raisingUnicodeError\n.Some system APIs like\nos.environ\nandsys.argv\ncan also present problems when the bytes made available by the system is not interpretable using the default encoding. Setting theLANG\nvariable and rerunning the program is probably the best approach.PEP 3138: The\nrepr()\nof a string no longer escapes non-ASCII characters. 
It still escapes control characters and code points with non-printable status in the Unicode standard, however.PEP 3120: The default source encoding is now UTF-8.\nPEP 3131: Non-ASCII letters are now allowed in identifiers. (However, the standard library remains ASCII-only with the exception of contributor names in comments.)\nThe\nStringIO\nandcStringIO\nmodules are gone. Instead, import theio\nmodule and useio.StringIO\norio.BytesIO\nfor text and data respectively.See also the Unicode HOWTO, which was updated for Python 3.0.\nOverview Of Syntax Changes\u00b6\nThis section gives a brief overview of every syntactic change in Python 3.0.\nNew Syntax\u00b6\nPEP 3107: Function argument and return value annotations. This provides a standardized way of annotating a function\u2019s parameters and return value. There are no semantics attached to such annotations except that they can be introspected at runtime using the\n__annotations__\nattribute. The intent is to encourage experimentation through metaclasses, decorators or frameworks.PEP 3102: Keyword-only arguments. Named parameters occurring after\n*args\nin the parameter list must be specified using keyword syntax in the call. You can also use a bare*\nin the parameter list to indicate that you don\u2019t accept a variable-length argument list, but you do have keyword-only arguments.Keyword arguments are allowed after the list of base classes in a class definition. This is used by the new convention for specifying a metaclass (see next section), but can be used for other purposes as well, as long as the metaclass supports it.\nPEP 3104:\nnonlocal\nstatement. Usingnonlocal x\nyou can now assign directly to a variable in an outer (but non-global) scope.nonlocal\nis a new reserved word.PEP 3132: Extended Iterable Unpacking. You can now write things like\na, b, *rest = some_sequence\n. And even*rest, a = stuff\n. Therest\nobject is always a (possibly empty) list; the right-hand side may be any iterable. 
Example:(a, *rest, b) = range(5)\nThis sets a to\n0\n, b to4\n, and rest to[1, 2, 3]\n.Dictionary comprehensions:\n{k: v for k, v in stuff}\nmeans the same thing asdict(stuff)\nbut is more flexible. (This is PEP 274 vindicated. :-)Set literals, e.g.\n{1, 2}\n. Note that{}\nis an empty dictionary; useset()\nfor an empty set. Set comprehensions are also supported; e.g.,{x for x in stuff}\nmeans the same thing asset(stuff)\nbut is more flexible.New octal literals, e.g.\n0o720\n(already in 2.6). The old octal literals (0720\n) are gone.New binary literals, e.g.\n0b1010\n(already in 2.6), and there is a new corresponding built-in function,bin()\n.Bytes literals are introduced with a leading\nb\norB\n, and there is a new corresponding built-in function,bytes()\n.\nChanged Syntax\u00b6\nPEP 3109 and PEP 3134: new\nraise\nstatement syntax:raise [expr [from expr]]\n. See below.as\nandwith\nare now reserved words. (Since 2.6, actually.)True\n,False\n, andNone\nare reserved words. (2.6 partially enforced the restrictions onNone\nalready.)Change from\nexcept\nexc, var toexcept\nexcas\nvar. See PEP 3110.PEP 3115: New Metaclass Syntax. Instead of:\nclass C: __metaclass__ = M ...\nyou must now use:\nclass C(metaclass=M): ...\nThe module-global\n__metaclass__\nvariable is no longer supported. (It was a crutch to make it easier to default to new-style classes without deriving every class fromobject\n.)List comprehensions no longer support the syntactic form\n[... for var in item1, item2, ...]\n. Use[... for var in (item1, item2, ...)]\ninstead. Also note that list comprehensions have different semantics: they are closer to syntactic sugar for a generator expression inside alist()\nconstructor, and in particular the loop control variables are no longer leaked into the surrounding scope.The ellipsis (\n...\n) can be used as an atomic expression anywhere. (Previously it was only allowed in slices.) Also, it must now be spelled as...\n. (Previously it could also be spelled as. . 
.\n, by a mere accident of the grammar.)\nRemoved Syntax\u00b6\nPEP 3113: Tuple parameter unpacking removed. You can no longer write\ndef foo(a, (b, c)): ...\n. Usedef foo(a, b_c): b, c = b_c\ninstead.Removed backticks (use\nrepr()\ninstead).Removed\n<>\n(use!=\ninstead).Removed keyword:\nexec()\nis no longer a keyword; it remains as a function. (Fortunately the function syntax was also accepted in 2.x.) Also note thatexec()\nno longer takes a stream argument; instead ofexec(f)\nyou can useexec(f.read())\n.Integer literals no longer support a trailing\nl\norL\n.String literals no longer support a leading\nu\norU\n.The\nfrom\nmoduleimport\n*\nsyntax is only allowed at the module level, no longer inside functions.The only acceptable syntax for relative imports is\nfrom .[module] import name\n. Allimport\nforms not starting with.\nare interpreted as absolute imports. (PEP 328)Classic classes are gone.\nChanges Already Present In Python 2.6\u00b6\nSince many users presumably make the jump straight from Python 2.5 to Python 3.0, this section reminds the reader of new features that were originally designed for Python 3.0 but that were back-ported to Python 2.6. The corresponding sections in What\u2019s New in Python 2.6 should be consulted for longer descriptions.\nPEP 343: The \u2018with\u2019 statement. The\nwith\nstatement is now a standard feature and no longer needs to be imported from the__future__\n. Also check out Writing Context Managers and The contextlib module.PEP 366: Explicit Relative Imports From a Main Module. This enhances the usefulness of the\n-m\noption when the referenced module lives in a package.PEP 3101: Advanced String Formatting. Note: the 2.6 description mentions the\nformat()\nmethod for both 8-bit and Unicode strings. In 3.0, only thestr\ntype (text strings with Unicode support) supports this method; thebytes\ntype does not. 
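As a quick illustration of the PEP 3101 formatting referred to above (str supports format(); bytes, as noted, does not):

```python
# Positional and named replacement fields via str.format() (PEP 3101).
assert "{0} + {1} = {2}".format(1, 1, 2) == "1 + 1 = 2"
assert "{lang} {ver}".format(lang="Python", ver="3.0") == "Python 3.0"

# The bytes type never grew a format() method.
assert not hasattr(b"", "format")
```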
The plan is to eventually make this the only API for string formatting, and to start deprecating the%\noperator in Python 3.1.PEP 3105: print As a Function. This is now a standard feature and no longer needs to be imported from\n__future__\n. More details were given above.PEP 3110: Exception-Handling Changes. The\nexcept\nexcas\nvar syntax is now standard andexcept\nexc, var is no longer supported. (Of course, theas\nvar part is still optional.)PEP 3112: Byte Literals. The\nb\"...\"\nstring literal notation (and its variants likeb'...'\n,b\"\"\"...\"\"\"\n, andbr\"...\"\n) now produces a literal of typebytes\n.PEP 3116: New I/O Library. The\nio\nmodule is now the standard way of doing file I/O. The built-inopen()\nfunction is now an alias forio.open()\nand has additional keyword arguments encoding, errors, newline and closefd. Also note that an invalid mode argument now raisesValueError\n, notIOError\n. The binary file object underlying a text file object can be accessed asf.buffer\n(but beware that the text object maintains a buffer of itself in order to speed up the encoding and decoding operations).PEP 3118: Revised Buffer Protocol. The old builtin\nbuffer()\nis now really gone; the new builtinmemoryview()\nprovides (mostly) similar functionality.PEP 3119: Abstract Base Classes. The\nabc\nmodule and the ABCs defined in thecollections\nmodule plays a somewhat more prominent role in the language now, and built-in collection types likedict\nandlist\nconform to thecollections.MutableMapping\nandcollections.MutableSequence\nABCs, respectively.PEP 3127: Integer Literal Support and Syntax. As mentioned above, the new octal literal notation is the only one supported, and binary literals have been added.\nPEP 3141: A Type Hierarchy for Numbers. The\nnumbers\nmodule is another new use of ABCs, defining Python\u2019s \u201cnumeric tower\u201d. 
Also note the newfractions\nmodule which implementsnumbers.Rational\n.\nLibrary Changes\u00b6\nDue to time constraints, this document does not exhaustively cover the very extensive changes to the standard library. PEP 3108 is the reference for the major changes to the library. Here\u2019s a capsule review:\nMany old modules were removed. Some, like\ngopherlib\n(no longer used) andmd5\n(replaced byhashlib\n), were already deprecated by PEP 4. Others were removed as a result of the removal of support for various platforms such as Irix, BeOS and Mac OS 9 (see PEP 11). Some modules were also selected for removal in Python 3.0 due to lack of use or because a better replacement exists. See PEP 3108 for an exhaustive list.The\nbsddb3\npackage was removed because its presence in the core standard library has proved over time to be a particular burden for the core developers due to testing instability and Berkeley DB\u2019s release schedule. However, the package is alive and well, externally maintained at https://www.jcea.es/programacion/pybsddb.htm.Some modules were renamed because their old name disobeyed PEP 8, or for various other reasons. Here\u2019s the list:\nOld Name\nNew Name\n_winreg\nwinreg\nConfigParser\nconfigparser\ncopy_reg\ncopyreg\nQueue\nqueue\nSocketServer\nsocketserver\nmarkupbase\n_markupbase\nrepr\nreprlib\ntest.test_support\ntest.support\nA common pattern in Python 2.x is to have one version of a module implemented in pure Python, with an optional accelerated version implemented as a C extension; for example,\npickle\nandcPickle\n. This places the burden of importing the accelerated version and falling back on the pure Python version on each user of these modules. In Python 3.0, the accelerated versions are considered implementation details of the pure Python versions. Users should always import the standard version, which attempts to import the accelerated version and falls back to the pure Python version. 
The pickle/cPickle pair received this treatment. The profile module is on the list for 3.1. The StringIO module has been turned into a class in the io module.\nSome related modules have been grouped into packages, and usually the submodule names have been simplified. The resulting new packages are:\ndbm (anydbm, dbhash, dbm, dumbdbm, gdbm, whichdb).\nhtml (HTMLParser, htmlentitydefs).\nhttp (httplib, BaseHTTPServer, CGIHTTPServer, SimpleHTTPServer, Cookie, cookielib).\ntkinter (all Tkinter-related modules except turtle). The target audience of turtle doesn\u2019t really care about tkinter. Also note that as of Python 2.6, the functionality of turtle has been greatly enhanced.\nurllib (urllib, urllib2, urlparse, robotparse).\nxmlrpc (xmlrpclib, DocXMLRPCServer, SimpleXMLRPCServer).\nSome other changes to standard library modules, not covered by PEP 3108:\nKilled sets. Use the built-in set() class.\nCleanup of the sys module: removed sys.exitfunc(), sys.exc_clear(), sys.exc_type, sys.exc_value, sys.exc_traceback. (Note that sys.last_type etc. remain.)\nCleanup of the array.array type: the read() and write() methods are gone; use fromfile() and tofile() instead. Also, the 'c' typecode for array is gone \u2013 use either 'b' for bytes or 'u' for Unicode characters.\nCleanup of the operator module: removed sequenceIncludes() and isCallable().\nCleanup of the thread module: acquire_lock() and release_lock() are gone; use acquire() and release() instead.\nCleanup of the random module: removed the jumpahead() API.\nThe new module is gone.\nThe functions os.tmpnam(), os.tempnam() and os.tmpfile() have been removed in favor of the tempfile module.\nThe tokenize module has been changed to work with bytes. The main entry point is now tokenize.tokenize(), instead of generate_tokens.\nstring.letters and its friends (string.lowercase and string.uppercase) are gone. Use string.ascii_letters etc. instead.
(The reason for the removal is that string.letters and friends had locale-specific behavior, which is a bad idea for such attractively named global \u201cconstants\u201d.)\nRenamed module __builtin__ to builtins (removing the underscores, adding an \u2018s\u2019). The __builtins__ variable found in most global namespaces is unchanged. To modify a builtin, you should use builtins, not __builtins__!\nPEP 3101: A New Approach To String Formatting\u00b6\nA new system for built-in string formatting operations replaces the % string formatting operator. (However, the % operator is still supported; it will be deprecated in Python 3.1 and removed from the language at some later time.) Read PEP 3101 for the full scoop.\nChanges To Exceptions\u00b6\nThe APIs for raising and catching exceptions have been cleaned up and new powerful features added:\nPEP 352: All exceptions must be derived (directly or indirectly) from BaseException. This is the root of the exception hierarchy. This is not new as a recommendation, but the requirement to inherit from BaseException is new. (Python 2.6 still allowed classic classes to be raised, and placed no restriction on what you can catch.) As a consequence, string exceptions are finally truly and utterly dead.\nAlmost all exceptions should actually derive from Exception; BaseException should only be used as a base class for exceptions that should only be handled at the top level, such as SystemExit or KeyboardInterrupt. The recommended idiom for handling all exceptions except for this latter category is to use except Exception.\nStandardError was removed.\nExceptions no longer behave as sequences. Use the args attribute instead.\nPEP 3109: Raising exceptions. You must now use raise Exception(args) instead of raise Exception, args. Additionally, you can no longer explicitly specify a traceback; instead, if you have to do this, you can assign directly to the __traceback__ attribute (see below).\nPEP 3110: Catching exceptions.
You must now use except SomeException as variable instead of except SomeException, variable. Moreover, the variable is explicitly deleted when the except block is left.\nPEP 3134: Exception chaining. There are two cases: implicit chaining and explicit chaining. Implicit chaining happens when an exception is raised in an except or finally handler block. This usually happens due to a bug in the handler block; we call this a secondary exception. In this case, the original exception (that was being handled) is saved as the __context__ attribute of the secondary exception. Explicit chaining is invoked with this syntax:\nraise SecondaryException() from primary_exception\n(where primary_exception is any expression that produces an exception object, probably an exception that was previously caught). In this case, the primary exception is stored on the __cause__ attribute of the secondary exception. The traceback printed when an unhandled exception occurs walks the chain of __cause__ and __context__ attributes and prints a separate traceback for each component of the chain, with the primary exception at the top. (Java users may recognize this behavior.)\nPEP 3134: Exception objects now store their traceback as the __traceback__ attribute. This means that an exception object now contains all the information pertaining to an exception, and there are fewer reasons to use sys.exc_info() (though the latter is not removed).\nA few exception messages are improved when Windows fails to load an extension module. For example, error code 193 is now \u201c%1 is not a valid Win32 application\u201d. Strings now deal with non-English locales.\nMiscellaneous Other Changes\u00b6\nOperators And Special Methods\u00b6\n!= now returns the opposite of ==, unless == returns NotImplemented.\nThe concept of \u201cunbound methods\u201d has been removed from the language.
When referencing a method as a class attribute, you now get a plain function object.\n__getslice__(), __setslice__() and __delslice__() were killed. The syntax a[i:j] now translates to a.__getitem__(slice(i, j)) (or __setitem__() or __delitem__(), when used as an assignment or deletion target, respectively).\nPEP 3114: the standard next() method has been renamed to __next__().\nThe __oct__() and __hex__() special methods are removed \u2013 oct() and hex() use __index__() now to convert the argument to an integer.\nRemoved support for __members__ and __methods__.\nThe function attributes named func_X have been renamed to use the __X__ form, freeing up these names in the function attribute namespace for user-defined attributes. To wit, func_closure, func_code, func_defaults, func_dict, func_doc, func_globals, func_name were renamed to __closure__, __code__, __defaults__, __dict__, __doc__, __globals__, __name__, respectively.\n__nonzero__() is now __bool__().\nBuiltins\u00b6\nPEP 3135: New super(). You can now invoke super() without arguments and (assuming this is in a regular instance method defined inside a class statement) the right class and instance will automatically be chosen. With arguments, the behavior of super() is unchanged.\nPEP 3111: raw_input() was renamed to input(). That is, the new input() function reads a line from sys.stdin and returns it with the trailing newline stripped. It raises EOFError if the input is terminated prematurely. To get the old behavior of input(), use eval(input()).\nA new built-in function next() was added to call the __next__() method on an object.\nThe round() function rounding strategy and return type have changed. Exact halfway cases are now rounded to the nearest even result instead of away from zero. (For example, round(2.5) now returns 2 rather than 3.) round(x[, n]) now delegates to x.__round__([n]) instead of always returning a float.
It generally returns an integer when called with a single argument and a value of the same type as x when called with two arguments.\nMoved intern() to sys.intern().\nRemoved: apply(). Instead of apply(f, args) use f(*args).\nRemoved callable(). Instead of callable(f) you can use isinstance(f, collections.Callable). The operator.isCallable() function is also gone.\nRemoved coerce(). This function no longer serves a purpose now that classic classes are gone.\nRemoved execfile(). Instead of execfile(fn) use exec(open(fn).read()).\nRemoved the file type. Use open(). There are now several different kinds of streams that open can return in the io module.\nRemoved reduce(). Use functools.reduce() if you really need it; however, 99 percent of the time an explicit for loop is more readable.\nRemoved reload(). Use imp.reload().\nRemoved dict.has_key() \u2013 use the in operator instead.\nBuild and C API Changes\u00b6\nDue to time constraints, here is a very incomplete list of changes to the C API.\nSupport for several platforms was dropped, including but not limited to Mac OS 9, BeOS, RISCOS, Irix, and Tru64.\nPEP 3118: New Buffer API.\nPEP 3121: Extension Module Initialization & Finalization.\nPEP 3123: Making PyObject_HEAD conform to standard C.\nNo more C API support for restricted execution.\nPyNumber_Coerce(), PyNumber_CoerceEx(), PyMember_Get(), and PyMember_Set() C APIs are removed.\nNew C API PyImport_ImportModuleNoBlock(), works like PyImport_ImportModule() but won\u2019t block on the import lock (returning an error instead).\nRenamed the boolean conversion C-level slot and method: nb_nonzero is now nb_bool.\nRemoved METH_OLDARGS and WITH_CYCLE_GC from the C API.\nPerformance\u00b6\nThe net result of the 3.0 generalizations is that Python 3.0 runs the pystone benchmark around 10% slower than Python 2.5. Most likely the biggest cause is the removal of special-casing for small integers.
There\u2019s room for improvement, but it will happen after 3.0 is released!\nPorting To Python 3.0\u00b6\nFor porting existing Python 2.5 or 2.6 source code to Python 3.0, the best strategy is the following:\n(Prerequisite:) Start with excellent test coverage.\nPort to Python 2.6. This should be no more work than the average port from Python 2.x to Python 2.(x+1). Make sure all your tests pass.\n(Still using 2.6:) Turn on the\n-3\ncommand line switch. This enables warnings about features that will be removed (or change) in 3.0. Run your test suite again, and fix code that you get warnings about until there are no warnings left, and all your tests still pass.Run the\n2to3\nsource-to-source translator over your source code tree. Run the result of the translation under Python 3.0. Manually fix up any remaining issues, fixing problems until all tests pass again.\nIt is not recommended to try to write source code that runs unchanged\nunder both Python 2.6 and 3.0; you\u2019d have to use a very contorted\ncoding style, e.g. avoiding print\nstatements, metaclasses,\nand much more. 
If you are maintaining a library that needs to support\nboth Python 2.6 and Python 3.0, the best approach is to modify step 3\nabove by editing the 2.6 version of the source code and running the\n2to3\ntranslator again, rather than editing the 3.0 version of the\nsource code.\nFor porting C extensions to Python 3.0, please see Porting Extension Modules to Python 3.", "code_snippets": [" ", " ", " ", "\n", " ", " ", "\n\n", " ", " ", " ", "\n", " ", " ", " ", "\n\n", " ", " ", "\n", " ", " ", "\n\n", " ", " ", " ", "\n", " ", " ", "\n\n", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", "\n ", " ", " ", "\n ", "\n", "\n ", "\n", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 7760} +{"url": "https://docs.python.org/3/whatsnew/3.1.html", "title": "What\u2019s New In Python 3.1", "content": "What\u2019s New In Python 3.1\u00b6\n- Author:\nRaymond Hettinger\nThis article explains the new features in Python 3.1, compared to 3.0. Python 3.1 was released on June 27, 2009.\nPEP 372: Ordered Dictionaries\u00b6\nRegular Python dictionaries iterate over key/value pairs in arbitrary order.\nOver the years, a number of authors have written alternative implementations\nthat remember the order that the keys were originally inserted. Based on\nthe experiences from those implementations, a new\ncollections.OrderedDict\nclass has been introduced.\nThe OrderedDict API is substantially the same as regular dictionaries but will iterate over keys and values in a guaranteed order depending on when a key was first inserted. If a new entry overwrites an existing entry, the original insertion position is left unchanged. Deleting an entry and reinserting it will move it to the end.\nThe standard library now supports use of ordered dictionaries in several\nmodules. The configparser\nmodule uses them by default. This lets\nconfiguration files be read, modified, and then written back in their original\norder. 
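The configparser round-trip behavior just described can be sketched as follows; the section and option names here are invented for illustration:

```python
import configparser
import io

# ConfigParser (backed by an ordered dictionary) preserves the
# order of sections and options when a file is read and written back.
src = "[server]\nhost = example.org\nport = 8080\n\n[client]\nretries = 3\n"

cp = configparser.ConfigParser()
cp.read_string(src)

out = io.StringIO()
cp.write(out)

# The first section written back is still [server], and its
# options come back in their original order.
print(out.getvalue().splitlines()[0])   # [server]
print(list(cp['server']))               # ['host', 'port']
```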
The _asdict() method for collections.namedtuple()\nnow\nreturns an ordered dictionary with the values appearing in the same order as\nthe underlying tuple indices. The json\nmodule is being built-out with\nan object_pairs_hook to allow OrderedDicts to be built by the decoder.\nSupport was also added for third-party tools like PyYAML.\nSee also\n- PEP 372 - Ordered Dictionaries\nPEP written by Armin Ronacher and Raymond Hettinger. Implementation written by Raymond Hettinger.\nSince an ordered dictionary remembers its insertion order, it can be used in conjunction with sorting to make a sorted dictionary:\n>>> # regular unsorted dictionary\n>>> d = {'banana': 3, 'apple':4, 'pear': 1, 'orange': 2}\n>>> # dictionary sorted by key\n>>> OrderedDict(sorted(d.items(), key=lambda t: t[0]))\nOrderedDict([('apple', 4), ('banana', 3), ('orange', 2), ('pear', 1)])\n>>> # dictionary sorted by value\n>>> OrderedDict(sorted(d.items(), key=lambda t: t[1]))\nOrderedDict([('pear', 1), ('orange', 2), ('banana', 3), ('apple', 4)])\n>>> # dictionary sorted by length of the key string\n>>> OrderedDict(sorted(d.items(), key=lambda t: len(t[0])))\nOrderedDict([('pear', 1), ('apple', 4), ('orange', 2), ('banana', 3)])\nThe new sorted dictionaries maintain their sort order when entries are deleted. But when new keys are added, the keys are appended to the end and the sort is not maintained.\nPEP 378: Format Specifier for Thousands Separator\u00b6\nThe built-in format()\nfunction and the str.format()\nmethod use\na mini-language that now includes a simple, non-locale aware way to format\na number with a thousands separator. 
That provides a way to humanize a program\u2019s output, improving its professional appearance and readability:\n>>> format(1234567, ',d')\n'1,234,567'\n>>> format(1234567.89, ',.2f')\n'1,234,567.89'\n>>> format(12345.6 + 8901234.12j, ',f')\n'12,345.600000+8,901,234.120000j'\n>>> format(Decimal('1234567.89'), ',f')\n'1,234,567.89'\nThe supported types are int, float, complex and decimal.Decimal.\nDiscussions are underway about how to specify alternative separators like dots, spaces, apostrophes, or underscores. Locale-aware applications should use the existing n format specifier which already has some support for thousands separators.\nSee also\n- PEP 378 - Format Specifier for Thousands Separator\nPEP written by Raymond Hettinger and implemented by Eric Smith and Mark Dickinson.\nOther Language Changes\u00b6\nSome smaller changes made to the core Python language are:\nDirectories and zip archives containing a __main__.py file can now be executed directly by passing their name to the interpreter. The directory/zipfile is automatically inserted as the first entry in sys.path. (Suggestion and initial patch by Andy Chu; revised patch by Phillip J. Eby and Nick Coghlan; bpo-1739468.)\nThe int() type gained a bit_length method that returns the number of bits necessary to represent its argument in binary:\n>>> n = 37\n>>> bin(37)\n'0b100101'\n>>> n.bit_length()\n6\n>>> n = 2**123-1\n>>> n.bit_length()\n123\n>>> (n+1).bit_length()\n124\n(Contributed by Fredrik Johansson, Victor Stinner, Raymond Hettinger, and Mark Dickinson; bpo-3439.)\nThe fields in format() strings can now be automatically numbered:\n>>> 'Sir {} of {}'.format('Gallahad', 'Camelot')\n'Sir Gallahad of Camelot'\nFormerly, the string would have required numbered fields such as: 'Sir {0} of {1}'. (Contributed by Eric Smith; bpo-5237.)\nThe string.maketrans() function is deprecated and is replaced by new static methods, bytes.maketrans() and bytearray.maketrans().
This change solves the confusion around which types were supported by the string module. Now, str, bytes, and bytearray each have their own maketrans and translate methods with intermediate translation tables of the appropriate type. (Contributed by Georg Brandl; bpo-5675.)\nThe syntax of the with statement now allows multiple context managers in a single statement:\n>>> with open('mylog.txt') as infile, open('a.out', 'w') as outfile:\n...     for line in infile:\n...         if '' in line:\n...             outfile.write(line)\nWith the new syntax, the contextlib.nested() function is no longer needed and is now deprecated. (Contributed by Georg Brandl and Mattias Br\u00e4ndstr\u00f6m; appspot issue 53094.)\nround(x, n) now returns an integer if x is an integer. Previously it returned a float:\n>>> round(1123, -2)\n1100\n(Contributed by Mark Dickinson; bpo-4707.)\nPython now uses David Gay\u2019s algorithm for finding the shortest floating-point representation that doesn\u2019t change its value. This should help mitigate some of the confusion surrounding binary floating-point numbers.\nThe significance is easily seen with a number like 1.1 which does not have an exact equivalent in binary floating point. Since there is no exact equivalent, an expression like float('1.1') evaluates to the nearest representable value which is 0x1.199999999999ap+0 in hex or 1.100000000000000088817841970012523233890533447265625 in decimal. That nearest value was and still is used in subsequent floating-point calculations.\nWhat is new is how the number gets displayed. Formerly, Python used a simple approach. The value of repr(1.1) was computed as format(1.1, '.17g') which evaluated to '1.1000000000000001'. The advantage of using 17 digits was that it relied on IEEE-754 guarantees to assure that eval(repr(1.1)) would round-trip exactly to its original value.
The disadvantage is that many people found the output to be confusing (mistaking intrinsic limitations of binary floating-point representation as being a problem with Python itself).\nThe new algorithm for repr(1.1) is smarter and returns '1.1'. Effectively, it searches all equivalent string representations (ones that get stored with the same underlying float value) and returns the shortest representation.\nThe new algorithm tends to emit cleaner representations when possible, but it does not change the underlying values. So, it is still the case that 1.1 + 2.2 != 3.3 even though the representations may suggest otherwise.\nThe new algorithm depends on certain features in the underlying floating-point implementation. If the required features are not found, the old algorithm will continue to be used. Also, the text pickle protocols assure cross-platform portability by using the old algorithm.\n(Contributed by Eric Smith and Mark Dickinson; bpo-1580)\nNew, Improved, and Deprecated Modules\u00b6\nAdded a collections.Counter class to support convenient counting of unique items in a sequence or iterable:\n>>> Counter(['red', 'blue', 'red', 'green', 'blue', 'blue'])\nCounter({'blue': 3, 'red': 2, 'green': 1})\n(Contributed by Raymond Hettinger; bpo-1696199.)\nAdded a new module, tkinter.ttk for access to the Tk themed widget set. The basic idea of ttk is to separate, to the extent possible, the code implementing a widget\u2019s behavior from the code implementing its appearance. (Contributed by Guilherme Polo; bpo-2983.)\nThe gzip.GzipFile and bz2.BZ2File classes now support the context management protocol:\n>>> # Automatically close file after writing\n>>> with gzip.GzipFile(filename, \"wb\") as f:\n...     f.write(b\"xxx\")\n(Contributed by Antoine Pitrou.)\nThe decimal module now supports methods for creating a decimal object from a binary float.
The conversion is exact but can sometimes be surprising:\n>>> Decimal.from_float(1.1)\nDecimal('1.100000000000000088817841970012523233890533447265625')\nThe long decimal result shows the actual binary fraction being stored for 1.1. The fraction has many digits because 1.1 cannot be exactly represented in binary.\n(Contributed by Raymond Hettinger and Mark Dickinson.)\nThe itertools module grew two new functions. The itertools.combinations_with_replacement() function is one of four for generating combinatorics including permutations and Cartesian products. The itertools.compress() function mimics its namesake from APL. Also, the existing itertools.count() function now has an optional step argument and can accept any type of counting sequence including fractions.Fraction and decimal.Decimal:\n>>> [p+q for p,q in combinations_with_replacement('LOVE', 2)]\n['LL', 'LO', 'LV', 'LE', 'OO', 'OV', 'OE', 'VV', 'VE', 'EE']\n>>> list(compress(data=range(10), selectors=[0,0,1,1,0,1,0,1,0,0]))\n[2, 3, 5, 7]\n>>> c = count(start=Fraction(1,2), step=Fraction(1,6))\n>>> [next(c), next(c), next(c), next(c)]\n[Fraction(1, 2), Fraction(2, 3), Fraction(5, 6), Fraction(1, 1)]\n(Contributed by Raymond Hettinger.)\ncollections.namedtuple() now supports a keyword argument rename which lets invalid fieldnames be automatically converted to positional names in the form _0, _1, etc.
This is useful when the field names are being created by an external source such as a CSV header, SQL field list, or user input:\n>>> query = input()\nSELECT region, dept, count(*) FROM main GROUPBY region, dept\n>>> cursor.execute(query)\n>>> query_fields = [desc[0] for desc in cursor.description]\n>>> UserQuery = namedtuple('UserQuery', query_fields, rename=True)\n>>> pprint.pprint([UserQuery(*row) for row in cursor])\n[UserQuery(region='South', dept='Shipping', _2=185),\n UserQuery(region='North', dept='Accounting', _2=37),\n UserQuery(region='West', dept='Sales', _2=419)]\n(Contributed by Raymond Hettinger; bpo-1818.)\nThe re.sub(), re.subn() and re.split() functions now accept a flags parameter. (Contributed by Gregory Smith.)\nThe logging module now implements a simple logging.NullHandler class for applications that are not using logging but are calling library code that does. Setting-up a null handler will suppress spurious warnings such as \u201cNo handlers could be found for logger foo\u201d:\n>>> h = logging.NullHandler()\n>>> logging.getLogger(\"foo\").addHandler(h)\n(Contributed by Vinay Sajip; bpo-4384).\nThe runpy module which supports the -m command line switch now supports the execution of packages by looking for and executing a __main__ submodule when a package name is supplied. (Contributed by Andi Vajda; bpo-4195.)\nThe pdb module can now access and display source code loaded via zipimport (or any other conformant PEP 302 loader). (Contributed by Alexander Belopolsky; bpo-4201.)\nfunctools.partial objects can now be pickled.\n(Suggested by Antoine Pitrou and Jesse Noller. Implemented by Jack Diederich; bpo-5228.)\nAdd pydoc help topics for symbols so that help('@') works as expected in the interactive environment. (Contributed by David Laban; bpo-4739.)\nThe unittest module now supports skipping individual tests or classes of tests.
And it supports marking a test as an expected failure, a test that is known to be broken, but shouldn\u2019t be counted as a failure on a TestResult:\nclass TestGizmo(unittest.TestCase):\n    @unittest.skipUnless(sys.platform.startswith(\"win\"), \"requires Windows\")\n    def test_gizmo_on_windows(self):\n        ...\n    @unittest.expectedFailure\n    def test_gizmo_without_required_library(self):\n        ...\nAlso, tests for exceptions have been built out to work with context managers using the with statement:\ndef test_division_by_zero(self):\n    with self.assertRaises(ZeroDivisionError):\n        x / 0\nIn addition, several new assertion methods were added including assertSetEqual(), assertDictEqual(), assertDictContainsSubset(), assertListEqual(), assertTupleEqual(), assertSequenceEqual(), assertRaisesRegexp(), assertIsNone(), and assertIsNotNone(). (Contributed by Benjamin Peterson and Antoine Pitrou.)\nThe io module has three new constants for the seek() method: SEEK_SET, SEEK_CUR, and SEEK_END.\nThe sys.version_info tuple is now a named tuple:\n>>> sys.version_info\nsys.version_info(major=3, minor=1, micro=0, releaselevel='alpha', serial=2)\n(Contributed by Ross Light; bpo-4285.)\nThe nntplib and imaplib modules now support IPv6.\nThe pickle module has been adapted for better interoperability with Python 2.x when used with protocol 2 or lower. The reorganization of the standard library changed the formal reference for many objects. For example, __builtin__.set in Python 2 is called builtins.set in Python 3. This change confounded efforts to share data between different versions of Python. But now when protocol 2 or lower is selected, the pickler will automatically use the old Python 2 names for both loading and dumping. This remapping is turned-on by default but can be disabled with the fix_imports option:\n>>> s = {1, 2, 3}\n>>> pickle.dumps(s, protocol=0)\nb'c__builtin__\\nset\\np0\\n((lp1\\nL1L\\naL2L\\naL3L\\natp2\\nRp3\\n.'
>>> pickle.dumps(s, protocol=0, fix_imports=False)\nb'cbuiltins\\nset\\np0\\n((lp1\\nL1L\\naL2L\\naL3L\\natp2\\nRp3\\n.'\nAn unfortunate but unavoidable side-effect of this change is that protocol 2 pickles produced by Python 3.1 won\u2019t be readable with Python 3.0. The latest pickle protocol, protocol 3, should be used when migrating data between Python 3.x implementations, as it doesn\u2019t attempt to remain compatible with Python 2.x.\n(Contributed by Alexandre Vassalotti and Antoine Pitrou, bpo-6137.)\nA new module, importlib was added. It provides a complete, portable, pure Python reference implementation of the import statement and its counterpart, the __import__() function. It represents a substantial step forward in documenting and defining the actions that take place during imports. (Contributed by Brett Cannon.)\nOptimizations\u00b6\nMajor performance enhancements have been added:\nThe new I/O library (as defined in PEP 3116) was mostly written in Python and quickly proved to be a problematic bottleneck in Python 3.0. In Python 3.1, the I/O library has been entirely rewritten in C and is 2 to 20 times faster depending on the task at hand. The pure Python version is still available for experimentation purposes through the _pyio module. (Contributed by Amaury Forgeot d\u2019Arc and Antoine Pitrou.)\nAdded a heuristic so that tuples and dicts containing only untrackable objects are not tracked by the garbage collector.
This can reduce the size of collections and therefore the garbage collection overhead on long-running programs, depending on their particular use of datatypes.\n(Contributed by Antoine Pitrou, bpo-4688.)\nWhen the --with-computed-gotos configure option is enabled on compilers that support it (notably: gcc, SunPro, icc), the bytecode evaluation loop is compiled with a new dispatch mechanism which gives speedups of up to 20%, depending on the system, the compiler, and the benchmark. (Contributed by Antoine Pitrou along with a number of other participants, bpo-4753).\nThe decoding of UTF-8, UTF-16 and LATIN-1 is now two to four times faster.\n(Contributed by Antoine Pitrou and Amaury Forgeot d\u2019Arc, bpo-4868.)\nThe json module now has a C extension to substantially improve its performance. In addition, the API was modified so that json works only with str, not with bytes. That change makes the module closely match the JSON specification which is defined in terms of Unicode. (Contributed by Bob Ippolito and converted to Py3.1 by Antoine Pitrou and Benjamin Peterson; bpo-4136.)\nUnpickling now interns the attribute names of pickled objects. This saves memory and allows pickles to be smaller.\n(Contributed by Jake McGuire and Antoine Pitrou; bpo-5084.)\nIDLE\u00b6\nIDLE\u2019s format menu now provides an option to strip trailing whitespace from a source file.\n(Contributed by Roger D. Serwy; bpo-5150.)\nBuild and C API Changes\u00b6\nChanges to Python\u2019s build process and to the C API include:\nIntegers are now stored internally either in base 2**15 or in base 2**30, the base being determined at build time. Previously, they were always stored in base 2**15. Using base 2**30 gives significant performance improvements on 64-bit machines, but benchmark results on 32-bit machines have been mixed.
Therefore, the default is to use base 2**30 on 64-bit machines and base 2**15 on 32-bit machines; on Unix, there\u2019s a new configure option --enable-big-digits that can be used to override this default.\nApart from the performance improvements this change should be invisible to end users, with one exception: for testing and debugging purposes there\u2019s a new sys.int_info that provides information about the internal format, giving the number of bits per digit and the size in bytes of the C type used to store each digit:\n>>> import sys\n>>> sys.int_info\nsys.int_info(bits_per_digit=30, sizeof_digit=4)\n(Contributed by Mark Dickinson; bpo-4258.)\nThe PyLong_AsUnsignedLongLong() function now handles a negative pylong by raising OverflowError instead of TypeError. (Contributed by Mark Dickinson and Lisandro Dalcrin; bpo-5175.)\nDeprecated PyNumber_Int(). Use PyNumber_Long() instead. (Contributed by Mark Dickinson; bpo-4910.)\nAdded a new PyOS_string_to_double() function to replace the deprecated functions PyOS_ascii_strtod() and PyOS_ascii_atof(). (Contributed by Mark Dickinson; bpo-5914.)\nAdded PyCapsule as a replacement for the PyCObject API. The principal difference is that the new type has a well defined interface for passing typing safety information and a less complicated signature for calling a destructor. The old type had a problematic API and is now deprecated. (Contributed by Larry Hastings; bpo-5630.)\nPorting to Python 3.1\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code:\nThe new floating-point string representations can break existing doctests. For example:\ndef e(): '''Compute the base of natural logarithms.
>>> e() 2.7182818284590451 ''' return sum(1/math.factorial(x) for x in reversed(range(30))) doctest.testmod() ********************************************************************** Failed example: e() Expected: 2.7182818284590451 Got: 2.718281828459045 **********************************************************************\nThe automatic name remapping in the pickle module for protocol 2 or lower can make Python 3.1 pickles unreadable in Python 3.0. One solution is to use protocol 3. Another solution is to set the fix_imports option to\nFalse\n. See the discussion above for more details.", "code_snippets": ["\n", " ", " ", " ", " ", " ", " ", " ", " ", "\n\n", "\n", " ", " ", " ", "\n", "\n\n", "\n", " ", " ", " ", "\n", "\n\n", "\n", " ", " ", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", " ", " ", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", "\n", " ", "\n", "\n", " ", " ", " ", " ", " ", "\n", "\n", "\n", " ", " ", " ", " ", "\n", " ", "\n", "\n", "\n", " ", " ", " ", " ", " ", "\n", "\n\n", " ", "\n", "\n\n", " ", " ", " ", "\n", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n\n", "\n", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n\n ", " ", "\n ", "\n ", "\n\n ", "\n ", "\n ", "\n", "\n ", " ", "\n ", " ", " ", "\n", "\n", "\n", " ", " ", " ", " ", "\n", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n\n", "\n", "\n\n", "\n ", " ", " ", " ", " ", " ", "\n\n", "\n\n", "\n", " ", "\n ", "\n", "\n ", "\n", "\n ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 4690} +{"url": "https://docs.python.org/3/whatsnew/3.5.html", "title": "What\u2019s New In Python 3.5", "content": "What\u2019s New In Python 3.5\u00b6\n- Editors:\nElvis Pranskevichus , Yury 
Selivanov\nThis article explains the new features in Python 3.5, compared to 3.4. Python 3.5 was released on September 13, 2015. See the changelog for a full list of changes.\nSee also\nPEP 478 - Python 3.5 Release Schedule\nSummary \u2013 Release highlights\u00b6\nNew syntax features:\nPEP 492, coroutines with async and await syntax.\nPEP 465, a new matrix multiplication operator: a @ b.\nPEP 448, additional unpacking generalizations.\nNew library modules:\ntyping (PEP 484 \u2013 Type Hints) and zipapp (PEP 441), described under New Modules below.\nNew built-in features:\nbytes % args, bytearray % args: PEP 461 \u2013 Adding % formatting to bytes and bytearray.\nNew bytes.hex(), bytearray.hex() and memoryview.hex() methods. (Contributed by Arnon Yaari in bpo-9951.)\nmemoryview now supports tuple indexing (including multi-dimensional). (Contributed by Antoine Pitrou in bpo-23632.)\nGenerators have a new gi_yieldfrom attribute, which returns the object being iterated by yield from expressions. (Contributed by Benno Leslie and Yury Selivanov in bpo-24450.)\nA new RecursionError exception is now raised when the maximum recursion depth is reached. (Contributed by Georg Brandl in bpo-19235.)\nCPython implementation improvements:\nWhen the LC_CTYPE locale is the POSIX locale (C locale), sys.stdin and sys.stdout now use the surrogateescape error handler, instead of the strict error handler. (Contributed by Victor Stinner in bpo-19977.)\n.pyo files are no longer used and have been replaced by a more flexible scheme that includes the optimization level explicitly in the .pyc name. (See PEP 488 overview.)\nBuiltin and extension modules are now initialized in a multi-phase process, which is similar to how Python modules are loaded. 
(See PEP 489 overview.)\nSignificant improvements in the standard library:\ncollections.OrderedDict is now implemented in C, which makes it 4 to 100 times faster.\nThe ssl module gained support for Memory BIO, which decouples SSL protocol handling from network IO.\nThe new os.scandir() function provides a better and significantly faster way of directory traversal.\nfunctools.lru_cache() has been mostly reimplemented in C, yielding much better performance.\nThe new subprocess.run() function provides a streamlined way to run subprocesses.\nThe traceback module has been significantly enhanced for improved performance and developer convenience.\nSecurity improvements:\nSSLv3 is now disabled throughout the standard library. It can still be enabled by instantiating an ssl.SSLContext manually. (See bpo-22638 for more details; this change was backported to CPython 3.4 and 2.7.)\nHTTP cookie parsing is now stricter, in order to protect against potential injection attacks. (Contributed by Antoine Pitrou in bpo-22796.)\nWindows improvements:\nA new installer for Windows has replaced the old MSI. See Using Python on Windows for more information.\nWindows builds now use Microsoft Visual C++ 14.0, and extension modules should use the same.\nPlease read on for a comprehensive list of user-facing changes, including many other smaller improvements, CPython optimizations, deprecations, and potential porting issues.\nNew Features\u00b6\nPEP 492 - Coroutines with async and await syntax\u00b6\nPEP 492 greatly improves support for asynchronous programming in Python by adding awaitable objects, coroutine functions, asynchronous iteration, and asynchronous context managers.\nCoroutine functions are declared using the new async def syntax:\n>>> async def coro():\n...     return 'spam'\nInside a coroutine function, the new await expression can be used to suspend coroutine execution until the result is available. 
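As a minimal, self-contained sketch of awaiting such a coroutine (using asyncio.run(), which arrived later in Python 3.7, purely so the example is directly runnable; the 3.5-era equivalent would use loop.run_until_complete()):

```python
import asyncio

async def coro():
    return 'spam'

async def main():
    # await suspends main() here until coro() has produced its result
    value = await coro()
    return value

result = asyncio.run(main())
print(result)  # spam
```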
Any object can be awaited, as long as it implements the awaitable protocol by defining the __await__() method.\nPEP 492 also adds the async for statement for convenient iteration over asynchronous iterables.\nAn example of a rudimentary HTTP client written using the new syntax:\nimport asyncio\nasync def http_get(domain):\n    reader, writer = await asyncio.open_connection(domain, 80)\n    writer.write(b'\\r\\n'.join([\n        b'GET / HTTP/1.1',\n        b'Host: %b' % domain.encode('latin-1'),\n        b'Connection: close',\n        b'', b''\n    ]))\n    async for line in reader:\n        print('>>>', line)\n    writer.close()\nloop = asyncio.get_event_loop()\ntry:\n    loop.run_until_complete(http_get('example.com'))\nfinally:\n    loop.close()\nSimilarly to asynchronous iteration, there is a new syntax for asynchronous context managers. The following script:\nimport asyncio\nasync def coro(name, lock):\n    print('coro {}: waiting for lock'.format(name))\n    async with lock:\n        print('coro {}: holding the lock'.format(name))\n        await asyncio.sleep(1)\n        print('coro {}: releasing the lock'.format(name))\nloop = asyncio.get_event_loop()\nlock = asyncio.Lock()\ncoros = asyncio.gather(coro(1, lock), coro(2, lock))\ntry:\n    loop.run_until_complete(coros)\nfinally:\n    loop.close()\nwill output:\ncoro 2: waiting for lock\ncoro 2: holding the lock\ncoro 1: waiting for lock\ncoro 2: releasing the lock\ncoro 1: holding the lock\ncoro 1: releasing the lock\nNote that both async for and async with can only be used inside a coroutine function declared with async def.\nCoroutine functions are intended to be run inside a compatible event loop, such as the asyncio loop.\nNote\nChanged in version 3.5.2: Starting with CPython 3.5.2, __aiter__ can directly return asynchronous iterators. 
Returning\nan awaitable object will result in a\nPendingDeprecationWarning\n.\nSee more details in the Asynchronous Iterators documentation section.\nSee also\n- PEP 492 \u2013 Coroutines with async and await syntax\nPEP written and implemented by Yury Selivanov.\nPEP 465 - A dedicated infix operator for matrix multiplication\u00b6\nPEP 465 adds the @\ninfix operator for matrix multiplication.\nCurrently, no builtin Python types implement the new operator, however, it\ncan be implemented by defining __matmul__()\n,\n__rmatmul__()\n, and __imatmul__()\nfor regular,\nreflected, and in-place matrix multiplication.\nThe semantics of these methods is similar to that of\nmethods defining other infix arithmetic operators.\nMatrix multiplication is a notably common operation in many fields of\nmathematics, science, engineering, and the addition of @\nallows writing\ncleaner code:\nS = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)\ninstead of:\nS = dot((dot(H, beta) - r).T,\ndot(inv(dot(dot(H, V), H.T)), dot(H, beta) - r))\nNumPy 1.10 has support for the new operator:\n>>> import numpy\n>>> x = numpy.ones(3)\n>>> x\narray([ 1., 1., 1.])\n>>> m = numpy.eye(3)\n>>> m\narray([[ 1., 0., 0.],\n[ 0., 1., 0.],\n[ 0., 0., 1.]])\n>>> x @ m\narray([ 1., 1., 1.])\nSee also\n- PEP 465 \u2013 A dedicated infix operator for matrix multiplication\nPEP written by Nathaniel J. Smith; implemented by Benjamin Peterson.\nPEP 448 - Additional Unpacking Generalizations\u00b6\nPEP 448 extends the allowed uses of the *\niterable unpacking\noperator and **\ndictionary unpacking operator. It is now possible\nto use an arbitrary number of unpackings in function calls:\n>>> print(*[1], *[2], 3, *[4, 5])\n1 2 3 4 5\n>>> def fn(a, b, c, d):\n... 
print(a, b, c, d)\n...\n>>> fn(**{'a': 1, 'c': 3}, **{'b': 2, 'd': 4})\n1 2 3 4\nSimilarly, tuple, list, set, and dictionary displays allow multiple unpackings (see Expression lists and Dictionary displays):\n>>> *range(4), 4\n(0, 1, 2, 3, 4)\n>>> [*range(4), 4]\n[0, 1, 2, 3, 4]\n>>> {*range(4), 4, *(5, 6, 7)}\n{0, 1, 2, 3, 4, 5, 6, 7}\n>>> {'x': 1, **{'y': 2}}\n{'x': 1, 'y': 2}\nSee also\n- PEP 448 \u2013 Additional Unpacking Generalizations\nPEP written by Joshua Landau; implemented by Neil Girdhar, Thomas Wouters, and Joshua Landau.\nPEP 461 - percent formatting support for bytes and bytearray\u00b6\nPEP 461 adds support for the %\ninterpolation operator to bytes\nand bytearray\n.\nWhile interpolation is usually thought of as a string operation, there are\ncases where interpolation on bytes\nor bytearrays\nmakes sense, and the\nwork needed to make up for this missing functionality detracts from the\noverall readability of the code. This issue is particularly important when\ndealing with wire format protocols, which are often a mixture of binary\nand ASCII compatible text.\nExamples:\n>>> b'Hello %b!' % b'World'\nb'Hello World!'\n>>> b'x=%i y=%f' % (1, 2.5)\nb'x=1 y=2.500000'\nUnicode is not allowed for %b\n, but it is accepted by %a\n(equivalent of\nrepr(obj).encode('ascii', 'backslashreplace')\n):\n>>> b'Hello %b!' 
% 'World'\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\nTypeError: %b requires bytes, or an object that implements __bytes__, not 'str'\n>>> b'price: %a' % '10\u20ac'\nb\"price: '10\\\\u20ac'\"\nNote that the %s and %r conversion types, although supported, should only be used in codebases that need compatibility with Python 2.\nSee also\n- PEP 461 \u2013 Adding % formatting to bytes and bytearray\nPEP written by Ethan Furman; implemented by Neil Schemenauer and Ethan Furman.\nPEP 484 - Type Hints\u00b6\nFunction annotation syntax has been a Python feature since version 3.0 (PEP 3107); however, the semantics of annotations had been left undefined.\nExperience has shown that the majority of function annotation uses were to provide type hints for function parameters and return values. It became evident that it would be beneficial for Python users if the standard library included the base definitions and tools for type annotations.\nPEP 484 introduces a provisional module to provide these standard definitions and tools, along with some conventions for situations where annotations are not available.\nFor example, here is a simple function whose argument and return type are declared in the annotations:\ndef greeting(name: str) -> str:\n    return 'Hello ' + name\nWhile these annotations are available at runtime through the usual __annotations__ attribute, no automatic type checking happens at runtime. Instead, it is assumed that a separate off-line type checker (e.g. mypy) will be used for on-demand source code analysis.\nThe type system supports unions, generic types, and a special type named Any which is consistent with (i.e. assignable to and from) all types.\nPEP 471 - os.scandir() function \u2013 a better and faster directory iterator\u00b6\nPEP 471 adds a new directory iteration function, os.scandir(), to the standard library. 
Additionally, os.walk()\nis now\nimplemented using scandir\n, which makes it 3 to 5 times faster\non POSIX systems and 7 to 20 times faster on Windows systems. This is\nlargely achieved by greatly reducing the number of calls to os.stat()\nrequired to walk a directory tree.\nAdditionally, scandir\nreturns an iterator, as opposed to returning\na list of file names, which improves memory efficiency when iterating\nover very large directories.\nThe following example shows a simple use of os.scandir()\nto display all\nthe files (excluding directories) in the given path that don\u2019t start with\n'.'\n. The entry.is_file()\ncall will generally\nnot make an additional system call:\nfor entry in os.scandir(path):\nif not entry.name.startswith('.') and entry.is_file():\nprint(entry.name)\nSee also\n- PEP 471 \u2013 os.scandir() function \u2013 a better and faster directory iterator\nPEP written and implemented by Ben Hoyt with the help of Victor Stinner.\nPEP 475: Retry system calls failing with EINTR\u00b6\nAn errno.EINTR\nerror code is returned whenever a system call, that\nis waiting for I/O, is interrupted by a signal. Previously, Python would\nraise InterruptedError\nin such cases. This meant that, when writing a\nPython application, the developer had two choices:\nIgnore the\nInterruptedError\n.Handle the\nInterruptedError\nand attempt to restart the interrupted system call at every call site.\nThe first option makes an application fail intermittently. The second option adds a large amount of boilerplate that makes the code nearly unreadable. Compare:\nprint(\"Hello World\")\nand:\nwhile True:\ntry:\nprint(\"Hello World\")\nbreak\nexcept InterruptedError:\ncontinue\nPEP 475 implements automatic retry of system calls on\nEINTR\n. This removes the burden of dealing with EINTR\nor InterruptedError\nin user code in most situations and makes\nPython programs, including the standard library, more robust. 
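A rough POSIX-only illustration of this retry behavior (it assumes SIGALRM support, so it will not run on Windows): a signal delivered while os.read() is blocked no longer surfaces as InterruptedError; the call is transparently retried once the handler returns:

```python
import os
import signal
import threading
import time

r, w = os.pipe()

# A handler that does not raise an exception: under PEP 475 the
# interrupted system call is then retried automatically.
signal.signal(signal.SIGALRM, lambda signum, frame: None)
signal.alarm(1)  # SIGALRM will arrive while read() is still blocked

def writer():
    time.sleep(2)          # write only after the signal has fired
    os.write(w, b'done')

threading.Thread(target=writer).start()
data = os.read(r, 4)       # blocked, interrupted, retried, then succeeds
print(data)  # b'done'
```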
Note that\nthe system call is only retried if the signal handler does not raise an\nexception.\nBelow is a list of functions which are now retried when interrupted by a signal:\nfunctions of the\nfaulthandler\nmodule;os\nfunctions:fchdir()\n,fchmod()\n,fchown()\n,fdatasync()\n,fstat()\n,fstatvfs()\n,fsync()\n,ftruncate()\n,mkfifo()\n,mknod()\n,open()\n,posix_fadvise()\n,posix_fallocate()\n,pread()\n,pwrite()\n,read()\n,readv()\n,sendfile()\n,wait3()\n,wait4()\n,wait()\n,waitid()\n,waitpid()\n,write()\n,writev()\n;special cases:\nos.close()\nandos.dup2()\nnow ignoreEINTR\nerrors; the syscall is not retried (see the PEP for the rationale);select\nfunctions:devpoll.poll()\n,epoll.poll()\n,kqueue.control()\n,poll.poll()\n,select()\n;methods of the\nsocket\nclass:accept()\n,connect()\n(except for non-blocking sockets),recv()\n,recvfrom()\n,recvmsg()\n,send()\n,sendall()\n,sendmsg()\n,sendto()\n;\nSee also\n- PEP 475 \u2013 Retry system calls failing with EINTR\nPEP and implementation written by Charles-Fran\u00e7ois Natali and Victor Stinner, with the help of Antoine Pitrou (the French connection).\nPEP 479: Change StopIteration handling inside generators\u00b6\nThe interaction of generators and StopIteration\nin Python 3.4 and\nearlier was sometimes surprising, and could conceal obscure bugs. Previously,\nStopIteration\nraised accidentally inside a generator function was\ninterpreted as the end of the iteration by the loop construct driving the\ngenerator.\nPEP 479 changes the behavior of generators: when a StopIteration\nexception is raised inside a generator, it is replaced with a\nRuntimeError\nbefore it exits the generator frame. The main goal of\nthis change is to ease debugging in the situation where an unguarded\nnext()\ncall raises StopIteration\nand causes the iteration controlled\nby the generator to terminate silently. 
This is particularly pernicious in combination with the yield from construct.\nThis is a backwards incompatible change, so to enable the new behavior, a __future__ import is necessary:\n>>> from __future__ import generator_stop\n>>> def gen():\n...     next(iter([]))\n...     yield\n...\n>>> next(gen())\nTraceback (most recent call last):\n  File \"<stdin>\", line 2, in gen\nStopIteration\nThe above exception was the direct cause of the following exception:\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\nRuntimeError: generator raised StopIteration\nWithout a __future__ import, a PendingDeprecationWarning will be raised whenever a StopIteration exception is raised inside a generator.\nSee also\n- PEP 479 \u2013 Change StopIteration handling inside generators\nPEP written by Chris Angelico and Guido van Rossum. Implemented by Chris Angelico, Yury Selivanov and Nick Coghlan.\nPEP 485: A function for testing approximate equality\u00b6\nPEP 485 adds the math.isclose() and cmath.isclose() functions which tell whether two values are approximately equal or \u201cclose\u201d to each other. 
Whether or not two values are considered\nclose is determined according to given absolute and relative tolerances.\nRelative tolerance is the maximum allowed difference between isclose\narguments, relative to the larger absolute value:\n>>> import math\n>>> a = 5.0\n>>> b = 4.99998\n>>> math.isclose(a, b, rel_tol=1e-5)\nTrue\n>>> math.isclose(a, b, rel_tol=1e-6)\nFalse\nIt is also possible to compare two values using absolute tolerance, which must be a non-negative value:\n>>> import math\n>>> a = 5.0\n>>> b = 4.99998\n>>> math.isclose(a, b, abs_tol=0.00003)\nTrue\n>>> math.isclose(a, b, abs_tol=0.00001)\nFalse\nSee also\n- PEP 485 \u2013 A function for testing approximate equality\nPEP written by Christopher Barker; implemented by Chris Barker and Tal Einat.\nPEP 486: Make the Python Launcher aware of virtual environments\u00b6\nPEP 486 makes the Windows launcher (see PEP 397) aware of an active\nvirtual environment. When the default interpreter would be used and the\nVIRTUAL_ENV\nenvironment variable is set, the interpreter in the virtual\nenvironment will be used.\nSee also\n- PEP 486 \u2013 Make the Python Launcher aware of virtual environments\nPEP written and implemented by Paul Moore.\nPEP 488: Elimination of PYO files\u00b6\nPEP 488 does away with the concept of .pyo\nfiles. This means that\n.pyc\nfiles represent both unoptimized and optimized bytecode. To prevent the\nneed to constantly regenerate bytecode files, .pyc\nfiles now have an\noptional opt-\ntag in their name when the bytecode is optimized. This has the\nside-effect of no more bytecode file name clashes when running under either\n-O\nor -OO\n. 
Consequently, bytecode files generated from -O and -OO may now exist simultaneously.\nimportlib.util.cache_from_source() has an updated API to help with this change.\nSee also\n- PEP 488 \u2013 Elimination of PYO files\nPEP written and implemented by Brett Cannon.\nPEP 489: Multi-phase extension module initialization\u00b6\nPEP 489 updates extension module initialization to take advantage of the two-step module loading mechanism introduced by PEP 451 in Python 3.4.\nThis change brings the import semantics of extension modules that opt in to using the new mechanism much closer to those of Python source and bytecode modules, including the ability to use any valid identifier as a module name, rather than being restricted to ASCII.\nSee also\n- PEP 489 \u2013 Multi-phase extension module initialization\nPEP written by Petr Viktorin, Stefan Behnel, and Nick Coghlan; implemented by Petr Viktorin.\nOther Language Changes\u00b6\nSome smaller changes made to the core Python language are:\nAdded the \"namereplace\" error handlers. The \"backslashreplace\" error handlers now work with decoding and translating. (Contributed by Serhiy Storchaka in bpo-19676 and bpo-22286.)\nThe -b option now affects comparisons of bytes with int. (Contributed by Serhiy Storchaka in bpo-23681.)\nNew Kazakh kz1048 and Tajik koi8_t codecs. (Contributed by Serhiy Storchaka in bpo-22682 and bpo-22681.)\nProperty docstrings are now writable. This is especially useful for collections.namedtuple() docstrings. (Contributed by Berker Peksag in bpo-24064.)\nCircular imports involving relative imports are now supported. 
(Contributed by Brett Cannon and Antoine Pitrou in bpo-17636.)\nNew Modules\u00b6\ntyping\u00b6\nThe new typing\nprovisional module\nprovides standard definitions and tools for function type annotations.\nSee Type Hints for more information.\nzipapp\u00b6\nThe new zipapp\nmodule (specified in PEP 441) provides an API and\ncommand line tool for creating executable Python Zip Applications, which\nwere introduced in Python 2.6 in bpo-1739468, but which were not well\npublicized, either at the time or since.\nWith the new module, bundling your application is as simple as putting all\nthe files, including a __main__.py\nfile, into a directory myapp\nand running:\n$ python -m zipapp myapp\n$ python myapp.pyz\nThe module implementation has been contributed by Paul Moore in bpo-23491.\nSee also\nPEP 441 \u2013 Improving Python ZIP Application Support\nImproved Modules\u00b6\nargparse\u00b6\nThe ArgumentParser\nclass now allows disabling\nabbreviated usage of long options by setting\nallow_abbrev to False\n. (Contributed by Jonathan Paugh,\nSteven Bethard, paul j3 and Daniel Eriksson in bpo-14910.)\nasyncio\u00b6\nSince the asyncio\nmodule is provisional,\nall changes introduced in Python 3.5 have also been backported to Python 3.4.x.\nNotable changes in the asyncio\nmodule since Python 3.4.0:\nNew debugging APIs:\nloop.set_debug()\nandloop.get_debug()\nmethods. (Contributed by Victor Stinner.)The proactor event loop now supports SSL. (Contributed by Antoine Pitrou and Victor Stinner in bpo-22560.)\nA new\nloop.is_closed()\nmethod to check if the event loop is closed. (Contributed by Victor Stinner in bpo-21326.)A new\nloop.create_task()\nto conveniently create and schedule a newTask\nfor a coroutine. Thecreate_task\nmethod is also used by all asyncio functions that wrap coroutines into tasks, such asasyncio.wait()\n,asyncio.gather()\n, etc. 
(Contributed by Victor Stinner.)A new\ntransport.get_write_buffer_limits()\nmethod to inquire for high- and low- water limits of the flow control. (Contributed by Victor Stinner.)The\nasync()\nfunction is deprecated in favor ofensure_future()\n. (Contributed by Yury Selivanov.)New\nloop.set_task_factory()\nandloop.get_task_factory()\nmethods to customize the task factory thatloop.create_task()\nmethod uses. (Contributed by Yury Selivanov.)New\nQueue.join()\nandQueue.task_done()\nqueue methods. (Contributed by Victor Stinner.)The\nJoinableQueue\nclass was removed, in favor of theasyncio.Queue\nclass. (Contributed by Victor Stinner.)\nUpdates in 3.5.1:\nThe\nensure_future()\nfunction and all functions that use it, such asloop.run_until_complete()\n, now accept all kinds of awaitable objects. (Contributed by Yury Selivanov.)New\nrun_coroutine_threadsafe()\nfunction to submit coroutines to event loops from other threads. (Contributed by Vincent Michel.)New\nTransport.is_closing()\nmethod to check if the transport is closing or closed. (Contributed by Yury Selivanov.)The\nloop.create_server()\nmethod can now accept a list of hosts. (Contributed by Yann Sionneau.)\nUpdates in 3.5.2:\nNew\nloop.create_future()\nmethod to create Future objects. This allows alternative event loop implementations, such as uvloop, to provide a fasterasyncio.Future\nimplementation. (Contributed by Yury Selivanov.)New\nloop.get_exception_handler()\nmethod to get the current exception handler. (Contributed by Yury Selivanov.)New\nStreamReader.readuntil()\nmethod to read data from the stream until a separator bytes sequence appears. (Contributed by Mark Korenberg.)The\nloop.create_connection()\nandloop.create_server()\nmethods are optimized to avoid calling the systemgetaddrinfo\nfunction if the address is already resolved. (Contributed by A. Jesse Jiryu Davis.)The\nloop.sock_connect(sock, address)\nno longer requires the address to be resolved prior to the call. (Contributed by A. 
Jesse Jiryu Davis.)\nbz2\u00b6\nThe BZ2Decompressor.decompress\nmethod now accepts an optional max_length argument to limit the maximum\nsize of decompressed data. (Contributed by Nikolaus Rath in bpo-15955.)\ncgi\u00b6\nThe FieldStorage\nclass now supports the context manager\nprotocol. (Contributed by Berker Peksag in bpo-20289.)\ncmath\u00b6\nA new function isclose()\nprovides a way to test for approximate\nequality. (Contributed by Chris Barker and Tal Einat in bpo-24270.)\ncode\u00b6\nThe InteractiveInterpreter.showtraceback()\nmethod now prints the full chained traceback, just like the interactive\ninterpreter. (Contributed by Claudiu Popa in bpo-17442.)\ncollections\u00b6\nThe OrderedDict\nclass is now implemented in C, which\nmakes it 4 to 100 times faster. (Contributed by Eric Snow in bpo-16991.)\nOrderedDict.items()\n, OrderedDict.keys()\n,\nand OrderedDict.values()\nviews now support reversed()\niteration.\n(Contributed by Serhiy Storchaka in bpo-19505.)\nThe deque\nclass now defines\nindex()\n, insert()\n, and\ncopy()\n, and supports the +\nand *\noperators.\nThis allows deques to be recognized as a MutableSequence\nand improves their substitutability for lists.\n(Contributed by Raymond Hettinger in bpo-23704.)\nDocstrings produced by namedtuple()\ncan now be updated:\nPoint = namedtuple('Point', ['x', 'y'])\nPoint.__doc__ += ': Cartesian coordinate'\nPoint.x.__doc__ = 'abscissa'\nPoint.y.__doc__ = 'ordinate'\n(Contributed by Berker Peksag in bpo-24064.)\nThe UserString\nclass now implements the\n__getnewargs__()\n, __rmod__()\n, casefold()\n,\nformat_map()\n, isprintable()\n, and maketrans()\nmethods to match the corresponding methods of str\n.\n(Contributed by Joe Jevnik in bpo-22189.)\ncollections.abc\u00b6\nThe Sequence.index()\nmethod now\naccepts start and stop arguments to match the corresponding methods\nof tuple\n, list\n, etc.\n(Contributed by Devin Jeanpierre in bpo-23086.)\nA new Generator\nabstract base class. 
(Contributed\nby Stefan Behnel in bpo-24018.)\nNew Awaitable\n, Coroutine\n,\nAsyncIterator\n, and\nAsyncIterable\nabstract base classes.\n(Contributed by Yury Selivanov in bpo-24184.)\nFor earlier Python versions, a backport of the new ABCs is available in an external PyPI package.\ncompileall\u00b6\nA new compileall\noption, -j N\n, allows running N workers\nsimultaneously to perform parallel bytecode compilation.\nThe compile_dir()\nfunction has a corresponding workers\nparameter. (Contributed by Claudiu Popa in bpo-16104.)\nAnother new option, -r\n, allows controlling the maximum recursion\nlevel for subdirectories. (Contributed by Claudiu Popa in bpo-19628.)\nThe -q\ncommand line option can now be specified more than once, in\nwhich case all output, including errors, will be suppressed. The corresponding\nquiet\nparameter in compile_dir()\n,\ncompile_file()\n, and compile_path()\ncan now\naccept an integer value indicating the level of output suppression.\n(Contributed by Thomas Kluyver in bpo-21338.)\nconcurrent.futures\u00b6\nThe Executor.map()\nmethod now accepts a\nchunksize argument to allow batching of tasks to improve performance when\nProcessPoolExecutor()\nis used.\n(Contributed by Dan O\u2019Reilly in bpo-11271.)\nThe number of workers in the ThreadPoolExecutor\nconstructor is optional now. The default value is 5 times the number of CPUs.\n(Contributed by Claudiu Popa in bpo-21527.)\nconfigparser\u00b6\nconfigparser\nnow provides a way to customize the conversion\nof values by specifying a dictionary of converters in the\nConfigParser\nconstructor, or by defining them\nas methods in ConfigParser\nsubclasses. Converters defined in\na parser instance are inherited by its section proxies.\nExample:\n>>> import configparser\n>>> conv = {}\n>>> conv['list'] = lambda v: [e.strip() for e in v.split() if e.strip()]\n>>> cfg = configparser.ConfigParser(converters=conv)\n>>> cfg.read_string(\"\"\"\n... [s]\n... list = a b c d e f g\n... 
\"\"\")\n>>> cfg.get('s', 'list')\n'a b c d e f g'\n>>> cfg.getlist('s', 'list')\n['a', 'b', 'c', 'd', 'e', 'f', 'g']\n>>> section = cfg['s']\n>>> section.getlist('list')\n['a', 'b', 'c', 'd', 'e', 'f', 'g']\n(Contributed by \u0141ukasz Langa in bpo-18159.)\ncontextlib\u00b6\nThe new redirect_stderr()\ncontext manager (similar to\nredirect_stdout()\n) makes it easier for utility scripts to\nhandle inflexible APIs that write their output to sys.stderr\nand\ndon\u2019t provide any options to redirect it:\n>>> import contextlib, io, logging\n>>> f = io.StringIO()\n>>> with contextlib.redirect_stderr(f):\n... logging.warning('warning')\n...\n>>> f.getvalue()\n'WARNING:root:warning\\n'\n(Contributed by Berker Peksag in bpo-22389.)\ncsv\u00b6\nThe writerow()\nmethod now supports arbitrary iterables,\nnot just sequences. (Contributed by Serhiy Storchaka in bpo-23171.)\ncurses\u00b6\nThe new update_lines_cols()\nfunction updates the LINES\nand COLS\nmodule variables. This is useful for detecting\nmanual screen resizing. (Contributed by Arnon Yaari in bpo-4254.)\ndbm\u00b6\ndumb.open\nalways creates a new database when the flag\nhas the value \"n\"\n. (Contributed by Claudiu Popa in bpo-18039.)\ndifflib\u00b6\nThe charset of HTML documents generated by\nHtmlDiff.make_file()\ncan now be customized by using a new charset keyword-only argument.\nThe default charset of HTML document changed from \"ISO-8859-1\"\nto \"utf-8\"\n.\n(Contributed by Berker Peksag in bpo-2052.)\nThe diff_bytes()\nfunction can now compare lists of byte\nstrings. This fixes a regression from Python 2.\n(Contributed by Terry J. 
Reedy and Greg Ward in bpo-17445.)\ndistutils\u00b6\nBoth the build\nand build_ext\ncommands now accept a -j\noption to\nenable parallel building of extension modules.\n(Contributed by Antoine Pitrou in bpo-5309.)\nThe distutils\nmodule now supports xz\ncompression, and can be\nenabled by passing xztar\nas an argument to bdist --format\n.\n(Contributed by Serhiy Storchaka in bpo-16314.)\ndoctest\u00b6\nThe DocTestSuite()\nfunction returns an empty\nunittest.TestSuite\nif module contains no docstrings, instead of\nraising ValueError\n. (Contributed by Glenn Jones in bpo-15916.)\nemail\u00b6\nA new policy option Policy.mangle_from_\ncontrols whether or not lines that start with \"From \"\nin email bodies are\nprefixed with a \">\"\ncharacter by generators. The default is True\nfor\ncompat32\nand False\nfor all other policies.\n(Contributed by Milan Oberkirch in bpo-20098.)\nA new\nMessage.get_content_disposition()\nmethod provides easy access to a canonical value for the\nContent-Disposition header.\n(Contributed by Abhilash Raj in bpo-21083.)\nA new policy option EmailPolicy.utf8\ncan be set to True\nto encode email headers using the UTF-8 charset instead\nof using encoded words. This allows Messages\nto be formatted according to\nRFC 6532 and used with an SMTP server that supports the RFC 6531\nSMTPUTF8\nextension. (Contributed by R. 
David Murray in bpo-24211.)\nThe mime.text.MIMEText constructor now accepts a charset.Charset instance. (Contributed by Claude Paroz and Berker Peksag in bpo-16324.)\nenum\u00b6\nThe Enum callable has a new parameter start to specify the initial number of enum values if only names are provided:\n>>> Animal = enum.Enum('Animal', 'cat dog', start=10)\n>>> Animal.cat\n<Animal.cat: 10>\n>>> Animal.dog\n<Animal.dog: 11>\n(Contributed by Ethan Furman in bpo-21706.)\nfaulthandler\u00b6\nThe enable(), register(), dump_traceback() and dump_traceback_later() functions now accept file descriptors in addition to file-like objects. (Contributed by Wei Wu in bpo-23566.)\nfunctools\u00b6\nMost of the lru_cache() machinery is now implemented in C, making it significantly faster. (Contributed by Matt Joiner, Alexey Kachayev, and Serhiy Storchaka in bpo-14373.)\nglob\u00b6\nThe iglob() and glob() functions now support recursive search in subdirectories, using the \"**\" pattern. (Contributed by Serhiy Storchaka in bpo-13968.)\ngzip\u00b6\nThe mode argument of the GzipFile constructor now accepts \"x\" to request exclusive creation. (Contributed by Tim Heaney in bpo-19222.)\nheapq\u00b6\nElement comparison in merge() can now be customized by passing a key function in a new optional key keyword argument, and a new optional reverse keyword argument can be used to reverse element comparison:\n>>> import heapq\n>>> a = ['9', '777', '55555']\n>>> b = ['88', '6666']\n>>> list(heapq.merge(a, b, key=len))\n['9', '88', '777', '6666', '55555']\n>>> list(heapq.merge(reversed(a), reversed(b), key=len, reverse=True))\n['55555', '6666', '777', '88', '9']\n(Contributed by Raymond Hettinger in bpo-13742.)\nhttp\u00b6\nA new HTTPStatus enum that defines a set of HTTP status codes, reason phrases and long descriptions written in English. (Contributed by Demian Brecht in bpo-21793.)\nhttp.client\u00b6\nHTTPConnection.getresponse() now raises a RemoteDisconnected exception when a remote server 
connection is closed unexpectedly. Additionally, if a\nConnectionError\n(of which RemoteDisconnected\nis a subclass) is raised, the client socket is now closed automatically,\nand will reconnect on the next request:\nimport http.client\nconn = http.client.HTTPConnection('www.python.org')\nfor retries in range(3):\ntry:\nconn.request('GET', '/')\nresp = conn.getresponse()\nexcept http.client.RemoteDisconnected:\npass\n(Contributed by Martin Panter in bpo-3566.)\nidlelib and IDLE\u00b6\nSince idlelib implements the IDLE shell and editor and is not intended for\nimport by other programs, it gets improvements with every release. See\nLib/idlelib/NEWS.txt\nfor a cumulative list of changes since 3.4.0,\nas well as changes made in future 3.5.x releases. This file is also available\nfrom the IDLE dialog.\nimaplib\u00b6\nThe IMAP4\nclass now supports the context manager protocol.\nWhen used in a with\nstatement, the IMAP4 LOGOUT\ncommand will be called automatically at the end of the block.\n(Contributed by Tarek Ziad\u00e9 and Serhiy Storchaka in bpo-4972.)\nThe imaplib\nmodule now supports RFC 5161 (ENABLE Extension)\nand RFC 6855 (UTF-8 Support) via the IMAP4.enable()\nmethod. A new IMAP4.utf8_enabled\nattribute tracks whether or not RFC 6855 support is enabled.\n(Contributed by Milan Oberkirch, R. David Murray, and Maciej Szulik in\nbpo-21800.)\nThe imaplib\nmodule now automatically encodes non-ASCII string usernames\nand passwords using UTF-8, as recommended by the RFCs. 
(Contributed by Milan\nOberkirch in bpo-21800.)\nimghdr\u00b6\nThe what()\nfunction now recognizes the\nOpenEXR format\n(contributed by Martin Vignali and Claudiu Popa in bpo-20295),\nand the WebP format\n(contributed by Fabrice Aneche and Claudiu Popa in bpo-20197.)\nimportlib\u00b6\nThe util.LazyLoader\nclass allows for\nlazy loading of modules in applications where startup time is important.\n(Contributed by Brett Cannon in bpo-17621.)\nThe abc.InspectLoader.source_to_code()\nmethod is now a static method. This makes it easier to initialize a module\nobject with code compiled from a string by running\nexec(code, module.__dict__)\n.\n(Contributed by Brett Cannon in bpo-21156.)\nThe new util.module_from_spec()\nfunction is now the preferred way to create a new module. As opposed to\ncreating a types.ModuleType\ninstance directly, this new function\nwill set the various import-controlled attributes based on the passed-in\nspec object. (Contributed by Brett Cannon in bpo-20383.)\ninspect\u00b6\nBoth the Signature\nand Parameter\nclasses are\nnow picklable and hashable. (Contributed by Yury Selivanov in bpo-20726\nand bpo-20334.)\nA new\nBoundArguments.apply_defaults()\nmethod provides a way to set default values for missing arguments:\n>>> def foo(a, b='ham', *args): pass\n>>> ba = inspect.signature(foo).bind('spam')\n>>> ba.apply_defaults()\n>>> ba.arguments\nOrderedDict([('a', 'spam'), ('b', 'ham'), ('args', ())])\n(Contributed by Yury Selivanov in bpo-24190.)\nA new class method\nSignature.from_callable()\nmakes\nsubclassing of Signature\neasier. 
(Contributed\nby Yury Selivanov and Eric Snow in bpo-17373.)\nThe signature()\nfunction now accepts a follow_wrapped\noptional keyword argument, which, when set to False\n, disables automatic\nfollowing of __wrapped__\nlinks.\n(Contributed by Yury Selivanov in bpo-20691.)\nA set of new functions to inspect\ncoroutine functions and\ncoroutine objects has been added:\niscoroutine()\n, iscoroutinefunction()\n,\nisawaitable()\n, getcoroutinelocals()\n,\nand getcoroutinestate()\n.\n(Contributed by Yury Selivanov in bpo-24017 and bpo-24400.)\nThe stack()\n, trace()\n,\ngetouterframes()\n, and getinnerframes()\nfunctions now return a list of named tuples.\n(Contributed by Daniel Shahaf in bpo-16808.)\nio\u00b6\nA new BufferedIOBase.readinto1()\nmethod, that uses at most one call to the underlying raw stream\u2019s\nRawIOBase.read()\nor\nRawIOBase.readinto()\nmethods.\n(Contributed by Nikolaus Rath in bpo-20578.)\nipaddress\u00b6\nBoth the IPv4Network\nand IPv6Network\nclasses\nnow accept an (address, netmask)\ntuple argument, so as to easily construct\nnetwork objects from existing addresses:\n>>> import ipaddress\n>>> ipaddress.IPv4Network(('127.0.0.0', 8))\nIPv4Network('127.0.0.0/8')\n>>> ipaddress.IPv4Network(('127.0.0.0', '255.0.0.0'))\nIPv4Network('127.0.0.0/8')\n(Contributed by Peter Moody and Antoine Pitrou in bpo-16531.)\nA new reverse_pointer\nattribute for the\nIPv4Address\nand IPv6Address\nclasses\nreturns the name of the reverse DNS PTR record:\n>>> import ipaddress\n>>> addr = ipaddress.IPv4Address('127.0.0.1')\n>>> addr.reverse_pointer\n'1.0.0.127.in-addr.arpa'\n>>> addr6 = ipaddress.IPv6Address('::1')\n>>> addr6.reverse_pointer\n'1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa'\n(Contributed by Leon Weber in bpo-20480.)\njson\u00b6\nThe json.tool\ncommand line interface now preserves the order of keys in\nJSON objects passed in input. The new --sort-keys\noption can be used\nto sort the keys alphabetically. 
(Contributed by Berker Peksag in bpo-21650.)
The JSON decoder now raises JSONDecodeError instead of ValueError to provide better context information about the error.
(Contributed by Serhiy Storchaka in bpo-19361.)
linecache¶
A new lazycache() function can be used to capture information about a non-file-based module to permit getting its lines later via getline(). This avoids doing I/O until a line is actually needed, without having to carry the module globals around indefinitely.
(Contributed by Robert Collins in bpo-17911.)
locale¶
A new delocalize() function can be used to convert a string into a normalized number string, taking the LC_NUMERIC settings into account:
>>> import locale
>>> locale.setlocale(locale.LC_NUMERIC, 'de_DE.UTF-8')
'de_DE.UTF-8'
>>> locale.delocalize('1.234,56')
'1234.56'
>>> locale.setlocale(locale.LC_NUMERIC, 'en_US.UTF-8')
'en_US.UTF-8'
>>> locale.delocalize('1,234.56')
'1234.56'
(Contributed by Cédric Krier in bpo-13918.)
logging¶
All logging methods (Logger log(), exception(), critical(), debug(), etc.) now accept exception instances as an exc_info argument, in addition to boolean values and exception tuples:
>>> import logging
>>> try:
...     1/0
... except ZeroDivisionError as ex:
...     logging.error('exception', exc_info=ex)
ERROR:root:exception
(Contributed by Yury Selivanov in bpo-20537.)
The handlers.HTTPHandler class now accepts an optional ssl.SSLContext instance to configure SSL settings used in an HTTP connection.
(Contributed by Alex Gaynor in bpo-22788.)
The handlers.QueueListener class now takes a respect_handler_level keyword argument which, if set to True, will pass messages to handlers taking handler levels into account.
(Contributed by Vinay Sajip.)
lzma¶
The LZMADecompressor.decompress() method now accepts an optional max_length argument to limit the maximum size of decompressed data.
(Contributed by Martin Panter in bpo-15955.)
math¶
Two new constants have been added to the math module: inf and nan. (Contributed by Mark Dickinson in bpo-23185.)
A new function isclose() provides a way to test for approximate equality. (Contributed by Chris Barker and Tal Einat in bpo-24270.)
A new gcd() function has been added. The fractions.gcd() function is now deprecated. (Contributed by Mark Dickinson and Serhiy Storchaka in bpo-22486.)
multiprocessing¶
sharedctypes.synchronized() objects now support the context manager protocol.
(Contributed by Charles-François Natali in bpo-21565.)
operator¶
attrgetter(), itemgetter(), and methodcaller() objects now support pickling.
(Contributed by Josh Rosenberg and Serhiy Storchaka in bpo-22955.)
New matmul() and imatmul() functions to perform matrix multiplication.
(Contributed by Benjamin Peterson in bpo-21176.)
os¶
The new scandir() function returning an iterator of DirEntry objects has been added. If possible, scandir() extracts file attributes while scanning a directory, removing the need to perform subsequent system calls to determine file type or attributes, which may significantly improve performance.
(Contributed by Ben Hoyt with the help\nof Victor Stinner in bpo-22524.)\nOn Windows, a new\nstat_result.st_file_attributes\nattribute is now available. It corresponds to the dwFileAttributes\nmember\nof the BY_HANDLE_FILE_INFORMATION\nstructure returned by\nGetFileInformationByHandle()\n. (Contributed by Ben Hoyt in bpo-21719.)\nThe urandom()\nfunction now uses the getrandom()\nsyscall on Linux 3.17\nor newer, and getentropy()\non OpenBSD 5.6 and newer, removing the need to\nuse /dev/urandom\nand avoiding failures due to potential file descriptor\nexhaustion. (Contributed by Victor Stinner in bpo-22181.)\nNew get_blocking()\nand set_blocking()\nfunctions allow\ngetting and setting a file descriptor\u2019s blocking mode (O_NONBLOCK\n.)\n(Contributed by Victor Stinner in bpo-22054.)\nThe truncate()\nand ftruncate()\nfunctions are now supported\non Windows. (Contributed by Steve Dower in bpo-23668.)\nThere is a new os.path.commonpath()\nfunction returning the longest\ncommon sub-path of each passed pathname. Unlike the\nos.path.commonprefix()\nfunction, it always returns a valid\npath:\n>>> os.path.commonprefix(['/usr/lib', '/usr/local/lib'])\n'/usr/l'\n>>> os.path.commonpath(['/usr/lib', '/usr/local/lib'])\n'/usr'\n(Contributed by Rafik Draoui and Serhiy Storchaka in bpo-10395.)\npathlib\u00b6\nThe new Path.samefile()\nmethod can be used\nto check whether the path points to the same file as another path, which can\nbe either another Path\nobject, or a string:\n>>> import pathlib\n>>> p1 = pathlib.Path('/etc/hosts')\n>>> p2 = pathlib.Path('/etc/../etc/hosts')\n>>> p1.samefile(p2)\nTrue\n(Contributed by Vajrasky Kok and Antoine Pitrou in bpo-19775.)\nThe Path.mkdir()\nmethod now accepts a new optional\nexist_ok argument to match mkdir -p\nand os.makedirs()\nfunctionality. (Contributed by Berker Peksag in bpo-21539.)\nThere is a new Path.expanduser()\nmethod to\nexpand ~\nand ~user\nprefixes. 
(Contributed by Serhiy Storchaka and\nClaudiu Popa in bpo-19776.)\nA new Path.home()\nclass method can be used to get\na Path\ninstance representing the user\u2019s home\ndirectory.\n(Contributed by Victor Salgado and Mayank Tripathi in bpo-19777.)\nNew Path.write_text()\n,\nPath.read_text()\n,\nPath.write_bytes()\n,\nPath.read_bytes()\nmethods to simplify\nread/write operations on files.\nThe following code snippet will create or rewrite existing file\n~/spam42\n:\n>>> import pathlib\n>>> p = pathlib.Path('~/spam42')\n>>> p.expanduser().write_text('ham')\n3\n(Contributed by Christopher Welborn in bpo-20218.)\npickle\u00b6\nNested objects, such as unbound methods or nested classes, can now be pickled using pickle protocols older than protocol version 4. Protocol version 4 already supports these cases. (Contributed by Serhiy Storchaka in bpo-23611.)\npoplib\u00b6\nA new POP3.utf8()\ncommand enables RFC 6856\n(Internationalized Email) support, if a POP server supports it.\n(Contributed by Milan OberKirch in bpo-21804.)\nre\u00b6\nReferences and conditional references to groups with fixed length are now allowed in lookbehind assertions:\n>>> import re\n>>> pat = re.compile(r'(a|b).(?<=\\1)c')\n>>> pat.match('aac')\n<_sre.SRE_Match object; span=(0, 3), match='aac'>\n>>> pat.match('bbc')\n<_sre.SRE_Match object; span=(0, 3), match='bbc'>\n(Contributed by Serhiy Storchaka in bpo-9179.)\nThe number of capturing groups in regular expressions is no longer limited to 100. (Contributed by Serhiy Storchaka in bpo-22437.)\nThe sub()\nand subn()\nfunctions now replace unmatched\ngroups with empty strings instead of raising an exception.\n(Contributed by Serhiy Storchaka in bpo-1519638.)\nThe re.error\nexceptions have new attributes,\nmsg\n, pattern\n,\npos\n, lineno\n,\nand colno\n, that provide better context\ninformation about the error:\n>>> re.compile(\"\"\"\n... (?x)\n... .++\n... 
\"\"\")\nTraceback (most recent call last):\n...\nsre_constants.error: multiple repeat at position 16 (line 3, column 7)\n(Contributed by Serhiy Storchaka in bpo-22578.)\nreadline\u00b6\nA new append_history_file()\nfunction can be used to append\nthe specified number of trailing elements in history to the given file.\n(Contributed by Bruno Cauet in bpo-22940.)\nselectors\u00b6\nThe new DevpollSelector\nsupports efficient\n/dev/poll\npolling on Solaris.\n(Contributed by Giampaolo Rodola\u2019 in bpo-18931.)\nshutil\u00b6\nThe move()\nfunction now accepts a copy_function argument,\nallowing, for example, the copy()\nfunction to be used instead of\nthe default copy2()\nif there is a need to ignore file metadata\nwhen moving.\n(Contributed by Claudiu Popa in bpo-19840.)\nThe make_archive()\nfunction now supports the xztar format.\n(Contributed by Serhiy Storchaka in bpo-5411.)\nsignal\u00b6\nOn Windows, the set_wakeup_fd()\nfunction now also supports\nsocket handles. (Contributed by Victor Stinner in bpo-22018.)\nVarious SIG*\nconstants in the signal\nmodule have been converted into\nEnums\n. This allows meaningful names to be printed\nduring debugging, instead of integer \u201cmagic numbers\u201d.\n(Contributed by Giampaolo Rodola\u2019 in bpo-21076.)\nsmtpd\u00b6\nBoth the SMTPServer\nand SMTPChannel\nclasses now\naccept a decode_data keyword argument to determine if the DATA\nportion of\nthe SMTP transaction is decoded using the \"utf-8\"\ncodec or is instead\nprovided to the\nSMTPServer.process_message()\nmethod as a byte string. The default is True\nfor backward compatibility\nreasons, but will change to False\nin Python 3.6. If decode_data is set\nto False\n, the process_message\nmethod must be prepared to accept keyword\narguments.\n(Contributed by Maciej Szulik in bpo-19662.)\nThe SMTPServer\nclass now advertises the 8BITMIME\nextension\n(RFC 6152) if decode_data has been set True\n. 
If the client\nspecifies BODY=8BITMIME\non the MAIL\ncommand, it is passed to\nSMTPServer.process_message()\nvia the mail_options keyword.\n(Contributed by Milan Oberkirch and R. David Murray in bpo-21795.)\nThe SMTPServer\nclass now also supports the SMTPUTF8\nextension (RFC 6531: Internationalized Email). If the client specified\nSMTPUTF8 BODY=8BITMIME\non the MAIL\ncommand, they are passed to\nSMTPServer.process_message()\nvia the mail_options keyword. It is the responsibility of the\nprocess_message\nmethod to correctly handle the SMTPUTF8\ndata.\n(Contributed by Milan Oberkirch in bpo-21725.)\nIt is now possible to provide, directly or via name resolution, IPv6\naddresses in the SMTPServer\nconstructor, and have it\nsuccessfully connect. (Contributed by Milan Oberkirch in bpo-14758.)\nsmtplib\u00b6\nA new SMTP.auth()\nmethod provides a convenient way to\nimplement custom authentication mechanisms. (Contributed by Milan\nOberkirch in bpo-15014.)\nThe SMTP.set_debuglevel()\nmethod now\naccepts an additional debuglevel (2), which enables timestamps in debug\nmessages. (Contributed by Gavin Chappell and Maciej Szulik in bpo-16914.)\nBoth the SMTP.sendmail()\nand\nSMTP.send_message()\nmethods now\nsupport RFC 6531 (SMTPUTF8).\n(Contributed by Milan Oberkirch and R. David Murray in bpo-22027.)\nsndhdr\u00b6\nThe what()\nand whathdr()\nfunctions now return\na namedtuple()\n. (Contributed by Claudiu Popa in\nbpo-18615.)\nsocket\u00b6\nFunctions with timeouts now use a monotonic clock, instead of a system clock. (Contributed by Victor Stinner in bpo-22043.)\nA new socket.sendfile()\nmethod allows\nsending a file over a socket by using the high-performance os.sendfile()\nfunction on UNIX, resulting in uploads being from 2 to 3 times faster than when\nusing plain socket.send()\n.\n(Contributed by Giampaolo Rodola\u2019 in bpo-17552.)\nThe socket.sendall()\nmethod no longer resets the\nsocket timeout every time bytes are received or sent. 
The socket timeout is\nnow the maximum total duration to send all data.\n(Contributed by Victor Stinner in bpo-23853.)\nThe backlog argument of the socket.listen()\nmethod is now optional. By default it is set to\nSOMAXCONN\nor to 128\n, whichever is less.\n(Contributed by Charles-Fran\u00e7ois Natali in bpo-21455.)\nssl\u00b6\nMemory BIO Support\u00b6\n(Contributed by Geert Jansen in bpo-21965.)\nThe new SSLObject\nclass has been added to provide SSL protocol\nsupport for cases when the network I/O capabilities of SSLSocket\nare not necessary or are suboptimal. SSLObject\nrepresents\nan SSL protocol instance, but does not implement any network I/O methods, and\ninstead provides a memory buffer interface. The new MemoryBIO\nclass can be used to pass data between Python and an SSL protocol instance.\nThe memory BIO SSL support is primarily intended to be used in frameworks\nimplementing asynchronous I/O for which SSLSocket\n\u2019s readiness\nmodel (\u201cselect/poll\u201d) is inefficient.\nA new SSLContext.wrap_bio()\nmethod can be used\nto create a new SSLObject\ninstance.\nApplication-Layer Protocol Negotiation Support\u00b6\n(Contributed by Benjamin Peterson in bpo-20188.)\nWhere OpenSSL support is present, the ssl\nmodule now implements\nthe Application-Layer Protocol Negotiation TLS extension as described\nin RFC 7301.\nThe new SSLContext.set_alpn_protocols()\ncan be used to specify which protocols a socket should advertise during\nthe TLS handshake.\nThe new\nSSLSocket.selected_alpn_protocol()\nreturns the protocol that was selected during the TLS handshake.\nThe HAS_ALPN\nflag indicates whether ALPN support is present.\nOther Changes\u00b6\nThere is a new SSLSocket.version()\nmethod to\nquery the actual protocol version in use.\n(Contributed by Antoine Pitrou in bpo-20421.)\nThe SSLSocket\nclass now implements\na SSLSocket.sendfile()\nmethod.\n(Contributed by Giampaolo Rodola\u2019 in bpo-17552.)\nThe SSLSocket.send()\nmethod now raises either\nthe 
ssl.SSLWantReadError\nor ssl.SSLWantWriteError\nexception on a\nnon-blocking socket if the operation would block. Previously, it would return\n0\n. (Contributed by Nikolaus Rath in bpo-20951.)\nThe cert_time_to_seconds()\nfunction now interprets the input time\nas UTC and not as local time, per RFC 5280. Additionally, the return\nvalue is always an int\n. (Contributed by Akira Li in bpo-19940.)\nNew SSLObject.shared_ciphers()\nand\nSSLSocket.shared_ciphers()\nmethods return\nthe list of ciphers sent by the client during the handshake.\n(Contributed by Benjamin Peterson in bpo-23186.)\nThe SSLSocket.do_handshake()\n,\nSSLSocket.read()\n,\nSSLSocket.shutdown()\n, and\nSSLSocket.write()\nmethods of the SSLSocket\nclass no longer reset the socket timeout every time bytes are received or sent.\nThe socket timeout is now the maximum total duration of the method.\n(Contributed by Victor Stinner in bpo-23853.)\nThe match_hostname()\nfunction now supports matching of IP addresses.\n(Contributed by Antoine Pitrou in bpo-23239.)\nsqlite3\u00b6\nThe Row\nclass now fully supports the sequence protocol,\nin particular reversed()\niteration and slice indexing.\n(Contributed by Claudiu Popa in bpo-10203; by Lucas Sinclair,\nJessica McKellar, and Serhiy Storchaka in bpo-13583.)\nsubprocess\u00b6\nThe new run()\nfunction has been added.\nIt runs the specified command and returns a\nCompletedProcess\nobject, which describes a finished\nprocess. 
The new API is more consistent and is the recommended approach\nto invoking subprocesses in Python code that does not need to maintain\ncompatibility with earlier Python versions.\n(Contributed by Thomas Kluyver in bpo-23342.)\nExamples:\n>>> subprocess.run([\"ls\", \"-l\"]) # doesn't capture output\nCompletedProcess(args=['ls', '-l'], returncode=0)\n>>> subprocess.run(\"exit 1\", shell=True, check=True)\nTraceback (most recent call last):\n...\nsubprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1\n>>> subprocess.run([\"ls\", \"-l\", \"/dev/null\"], stdout=subprocess.PIPE)\nCompletedProcess(args=['ls', '-l', '/dev/null'], returncode=0,\nstdout=b'crw-rw-rw- 1 root root 1, 3 Jan 23 16:23 /dev/null\\n')\nsys\u00b6\nA new set_coroutine_wrapper()\nfunction allows setting a global\nhook that will be called whenever a coroutine object\nis created by an async def\nfunction. A corresponding\nget_coroutine_wrapper()\ncan be used to obtain a currently set\nwrapper. Both functions are provisional,\nand are intended for debugging purposes only. (Contributed by Yury Selivanov\nin bpo-24017.)\nA new is_finalizing()\nfunction can be used to check if the Python\ninterpreter is shutting down.\n(Contributed by Antoine Pitrou in bpo-22696.)\nsysconfig\u00b6\nThe name of the user scripts directory on Windows now includes the first two components of the Python version. (Contributed by Paul Moore in bpo-23437.)\ntarfile\u00b6\nThe mode argument of the open()\nfunction now accepts \"x\"\nto request exclusive creation. (Contributed by Berker Peksag in bpo-21717.)\nThe TarFile.extractall()\nand\nTarFile.extract()\nmethods now take a keyword\nargument numeric_owner. 
If set to True\n, the extracted files and\ndirectories will be owned by the numeric uid\nand gid\nfrom the tarfile.\nIf set to False\n(the default, and the behavior in versions prior to 3.5),\nthey will be owned by the named user and group in the tarfile.\n(Contributed by Michael Vogt and Eric Smith in bpo-23193.)\nThe TarFile.list()\nnow accepts an optional\nmembers keyword argument that can be set to a subset of the list returned\nby TarFile.getmembers()\n.\n(Contributed by Serhiy Storchaka in bpo-21549.)\nthreading\u00b6\nBoth the Lock.acquire()\nand\nRLock.acquire()\nmethods\nnow use a monotonic clock for timeout management.\n(Contributed by Victor Stinner in bpo-22043.)\ntime\u00b6\nThe monotonic()\nfunction is now always available.\n(Contributed by Victor Stinner in bpo-22043.)\ntimeit\u00b6\nA new command line option -u\nor --unit=U\ncan be used to specify the time\nunit for the timer output. Supported options are usec\n, msec\n,\nor sec\n. (Contributed by Julian Gindi in bpo-18983.)\nThe timeit()\nfunction has a new globals parameter for\nspecifying the namespace in which the code will be running.\n(Contributed by Ben Roberts in bpo-2527.)\ntkinter\u00b6\nThe tkinter._fix\nmodule used for setting up the Tcl/Tk environment\non Windows has been replaced by a private function in the _tkinter\nmodule which makes no permanent changes to environment variables.\n(Contributed by Zachary Ware in bpo-20035.)\ntraceback\u00b6\nNew walk_stack()\nand walk_tb()\nfunctions to conveniently traverse frame and\ntraceback objects.\n(Contributed by Robert Collins in bpo-17911.)\nNew lightweight classes: TracebackException\n,\nStackSummary\n, and FrameSummary\n.\n(Contributed by Robert Collins in bpo-17911.)\nBoth the print_tb()\nand print_stack()\nfunctions\nnow support negative values for the limit argument.\n(Contributed by Dmitry Kazakov in bpo-22619.)\ntypes\u00b6\nA new coroutine()\nfunction to transform\ngenerator and\ngenerator-like\nobjects 
into awaitables.
(Contributed by Yury Selivanov in bpo-24017.)
A new type called CoroutineType, which is used for coroutine objects created by async def functions.
(Contributed by Yury Selivanov in bpo-24400.)
unicodedata¶
The unicodedata module now uses data from Unicode 8.0.0.
unittest¶
The TestLoader.loadTestsFromModule() method now accepts a keyword-only argument pattern which is passed to load_tests as the third argument. Found packages are now checked for load_tests regardless of whether their path matches pattern, because it is impossible for a package name to match the default pattern.
(Contributed by Robert Collins and Barry A. Warsaw in bpo-16662.)
Unittest discovery errors now are exposed in the TestLoader.errors attribute of the TestLoader instance.
(Contributed by Robert Collins in bpo-19746.)
A new command line option --locals to show local variables in tracebacks. (Contributed by Robert Collins in bpo-22936.)
unittest.mock¶
The Mock class has the following improvements:
The class constructor has a new unsafe parameter, which causes mock objects to raise AttributeError on attribute names starting with "assert". (Contributed by Kushal Das in bpo-21238.)
A new Mock.assert_not_called() method to check that the mock object was not called.
(Contributed by Kushal Das in bpo-21262.)\nThe MagicMock\nclass now supports\n__truediv__()\n, __divmod__()\nand __matmul__()\noperators.\n(Contributed by Johannes Baiter in bpo-20968, and H\u00e5kan L\u00f6vdahl\nin bpo-23581 and bpo-23568.)\nIt is no longer necessary to explicitly pass create=True\nto the\npatch()\nfunction when patching builtin names.\n(Contributed by Kushal Das in bpo-17660.)\nurllib\u00b6\nA new\nrequest.HTTPPasswordMgrWithPriorAuth\nclass allows HTTP Basic Authentication credentials to be managed so as to\neliminate unnecessary 401\nresponse handling, or to unconditionally send\ncredentials on the first request in order to communicate with servers that\nreturn a 404\nresponse instead of a 401\nif the Authorization\nheader\nis not sent. (Contributed by Matej Cepl in bpo-19494 and Akshit Khurana in\nbpo-7159.)\nA new quote_via argument for the\nparse.urlencode()\nfunction provides a way to control the encoding of query parts if needed.\n(Contributed by Samwyse and Arnon Yaari in bpo-13866.)\nThe request.urlopen()\nfunction accepts an\nssl.SSLContext\nobject as a context argument, which will be used for\nthe HTTPS connection. 
(Contributed by Alex Gaynor in bpo-22366.)\nThe parse.urljoin()\nwas updated to use the\nRFC 3986 semantics for the resolution of relative URLs, rather than\nRFC 1808 and RFC 2396.\n(Contributed by Demian Brecht and Senthil Kumaran in bpo-22118.)\nwsgiref\u00b6\nThe headers argument of the headers.Headers\nclass constructor is now optional.\n(Contributed by Pablo Torres Navarrete and SilentGhost in bpo-5800.)\nxmlrpc\u00b6\nThe client.ServerProxy\nclass now supports\nthe context manager protocol.\n(Contributed by Claudiu Popa in bpo-20627.)\nThe client.ServerProxy\nconstructor now accepts\nan optional ssl.SSLContext\ninstance.\n(Contributed by Alex Gaynor in bpo-22960.)\nxml.sax\u00b6\nSAX parsers now support a character stream of the\nxmlreader.InputSource\nobject.\n(Contributed by Serhiy Storchaka in bpo-2175.)\nparseString()\nnow accepts a str\ninstance.\n(Contributed by Serhiy Storchaka in bpo-10590.)\nzipfile\u00b6\nZIP output can now be written to unseekable streams. (Contributed by Serhiy Storchaka in bpo-23252.)\nThe mode argument of ZipFile.open()\nmethod now\naccepts \"x\"\nto request exclusive creation.\n(Contributed by Serhiy Storchaka in bpo-21717.)\nOther module-level changes\u00b6\nMany functions in the mmap\n, ossaudiodev\n, socket\n,\nssl\n, and codecs\nmodules now accept writable\nbytes-like objects.\n(Contributed by Serhiy Storchaka in bpo-23001.)\nOptimizations\u00b6\nThe os.walk()\nfunction has been sped up by 3 to 5 times on POSIX systems,\nand by 7 to 20 times on Windows. This was done using the new os.scandir()\nfunction, which exposes file information from the underlying readdir\nor\nFindFirstFile\n/FindNextFile\nsystem calls. (Contributed by\nBen Hoyt with help from Victor Stinner in bpo-23605.)\nConstruction of bytes(int)\n(filled by zero bytes) is faster and uses less\nmemory for large objects. 
calloc() is used instead of malloc() to allocate memory for these objects.
(Contributed by Victor Stinner in bpo-21233.)
Some operations on ipaddress IPv4Network and IPv6Network have been massively sped up, such as subnets(), supernet(), summarize_address_range(), collapse_addresses(). The speed up can range from 3 to 15 times.
(Contributed by Antoine Pitrou, Michel Albert, and Markus in bpo-21486, bpo-21487, bpo-20826, bpo-23266.)
Pickling of ipaddress objects was optimized to produce significantly smaller output. (Contributed by Serhiy Storchaka in bpo-23133.)
Many operations on io.BytesIO are now 50% to 100% faster.
(Contributed by Serhiy Storchaka in bpo-15381 and David Wilson in bpo-22003.)
The marshal.dumps() function is now faster: 65–85% with versions 3 and 4, 20–25% with versions 0 to 2 on typical data, and up to 5 times in best cases.
(Contributed by Serhiy Storchaka in bpo-20416 and bpo-23344.)
The UTF-32 encoder is now 3 to 7 times faster. (Contributed by Serhiy Storchaka in bpo-15027.)
Regular expressions are now parsed up to 10% faster. (Contributed by Serhiy Storchaka in bpo-19380.)
The json.dumps() function was optimized to run with ensure_ascii=False as fast as with ensure_ascii=True.
(Contributed by Naoki Inada in bpo-23206.)
The PyObject_IsInstance() and PyObject_IsSubclass() functions have been sped up in the common case that the second argument has type as its metaclass.
(Contributed by Georg Brandl in bpo-22540.)
Method caching was slightly improved, yielding up to 5% performance improvement in some benchmarks. (Contributed by Antoine Pitrou in bpo-22847.)
Objects from the random module now use 50% less memory on 64-bit builds.
(Contributed by Serhiy Storchaka in bpo-23488.)
The property() getter calls are up to 25% faster.
(Contributed by Joe Jevnik in bpo-23910.)
Instantiation of fractions.Fraction is now up to 30% faster.
(Contributed by Stefan Behnel in bpo-22464.)
String methods find(), rfind(), split(), partition() and the in string operator are now significantly faster for searching 1-character substrings.
(Contributed by Serhiy Storchaka in bpo-23573.)
Build and C API Changes¶
New calloc functions were added:
PyMem_RawCalloc(),
PyMem_Calloc(),
PyObject_Calloc(),
_PyObject_GC_Calloc().
(Contributed by Victor Stinner in bpo-21233.)
New encoding/decoding helper functions:
Py_DecodeLocale() (replaced _Py_char2wchar()),
Py_EncodeLocale() (replaced _Py_wchar2char()).
(Contributed by Victor Stinner in bpo-18395.)
A new PyCodec_NameReplaceErrors() function to replace the unicode encode error with \N{...} escapes.
(Contributed by Serhiy Storchaka in bpo-19676.)
A new PyErr_FormatV() function similar to PyErr_Format(), but accepts a va_list argument.
(Contributed by Antoine Pitrou in bpo-18711.)
A new PyExc_RecursionError exception.
(Contributed by Georg Brandl in bpo-19235.)
New PyModule_FromDefAndSpec(), PyModule_FromDefAndSpec2(), and PyModule_ExecDef() functions introduced by PEP 489 – multi-phase extension module initialization.
(Contributed by Petr Viktorin in bpo-24268.)
New PyNumber_MatrixMultiply() and PyNumber_InPlaceMatrixMultiply() functions to perform matrix multiplication.
(Contributed by Benjamin Peterson in bpo-21176.
See also PEP 465 for details.)
The PyTypeObject.tp_finalize slot is now part of the stable ABI.
Windows builds now require Microsoft Visual C++ 14.0, which is available as part of Visual Studio 2015.
Extension modules now include a platform information tag in their filename on some platforms (the tag is optional, and CPython will import extensions without it, although if the tag is present and mismatched, the extension won't be loaded):
On Linux, extension module filenames end with .cpython-<major><minor>m-<architecture>-<os>.so:
<major> is the major number of the Python version; for Python 3.5 this is 3.
<minor> is the minor number of the Python version; for Python 3.5 this is 5.
<architecture> is the hardware architecture the extension module was built to run on. It's most commonly either i386 for 32-bit Intel platforms or x86_64 for 64-bit Intel (and AMD) platforms.
<os> is always linux-gnu, except for extensions built to talk to the 32-bit ABI on 64-bit platforms, in which case it is linux-gnu32 (and <architecture> will be x86_64).
On Windows, extension module filenames end with <debug>.cp<major><minor>-<platform>.pyd:
<major> is the major number of the Python version; for Python 3.5 this is 3.
<minor> is the minor number of the Python version; for Python 3.5 this is 5.
<platform> is the platform the extension module was built for, either win32 for Win32, win_amd64 for Win64, win_ia64 for Windows Itanium 64, or win_arm for Windows on ARM.
If built in debug mode, <debug> will be _d, otherwise it will be blank.
On OS X platforms, extension module filenames now end with -darwin.so.
On all other platforms, extension module filenames are the same as they were with Python 3.4.
Deprecated¶
New Keywords¶
async and await are not recommended to be used as variable, class, function or module names.
Introduced by PEP 492 in Python 3.5, they will\nbecome proper keywords in Python 3.7.\nDeprecated Python Behavior\u00b6\nRaising the StopIteration\nexception inside a generator will now generate a silent\nPendingDeprecationWarning\n, which will become a non-silent deprecation\nwarning in Python 3.6 and will trigger a RuntimeError\nin Python 3.7.\nSee PEP 479: Change StopIteration handling inside generators\nfor details.\nUnsupported Operating Systems\u00b6\nWindows XP is no longer supported by Microsoft, thus, per PEP 11, CPython 3.5 is no longer officially supported on this OS.\nDeprecated Python modules, functions and methods\u00b6\nThe formatter\nmodule has now graduated to full deprecation and is still\nslated for removal in Python 3.6.\nThe asyncio.async()\nfunction is deprecated in favor of\nensure_future()\n.\nThe smtpd\nmodule has in the past always decoded the DATA portion of\nemail messages using the utf-8\ncodec. This can now be controlled by the\nnew decode_data keyword to SMTPServer\n. The default value is\nTrue\n, but this default is deprecated. Specify the decode_data keyword\nwith an appropriate value to avoid the deprecation warning.\nDirectly assigning values to the key\n,\nvalue\nand\ncoded_value\nof http.cookies.Morsel\nobjects is deprecated. Use the set()\nmethod\ninstead. In addition, the undocumented LegalChars parameter of\nset()\nis deprecated, and is now ignored.\nPassing a format string as keyword argument format_string to the\nformat()\nmethod of the string.Formatter\nclass has been deprecated.\n(Contributed by Serhiy Storchaka in bpo-23671.)\nThe platform.dist()\nand platform.linux_distribution()\nfunctions\nare now deprecated. Linux distributions use too many different ways of\ndescribing themselves, so the functionality is left to a package.\n(Contributed by Vajrasky Kok and Berker Peksag in bpo-1322.)\nThe previously undocumented from_function\nand from_builtin\nmethods of\ninspect.Signature\nare deprecated. 
Use the new\nSignature.from_callable()\nmethod instead. (Contributed by Yury Selivanov in bpo-24248.)\nThe inspect.getargspec()\nfunction is deprecated and scheduled to be\nremoved in Python 3.6. (See bpo-20438 for details.)\nThe inspect\ngetfullargspec()\n,\ngetcallargs()\n, and formatargspec()\nfunctions are\ndeprecated in favor of the inspect.signature()\nAPI. (Contributed by Yury\nSelivanov in bpo-20438.)\ngetargvalues()\nand formatargvalues()\nfunctions\nwere inadvertently marked as deprecated with the release of Python 3.5.0.\nUse of re.LOCALE\nflag with str patterns or re.ASCII\nis now\ndeprecated. (Contributed by Serhiy Storchaka in bpo-22407.)\nUse of unrecognized special sequences consisting of '\\'\nand an ASCII letter\nin regular expression patterns and replacement patterns now raises a\ndeprecation warning and will be forbidden in Python 3.6.\n(Contributed by Serhiy Storchaka in bpo-23622.)\nThe undocumented and unofficial use_load_tests default argument of the\nunittest.TestLoader.loadTestsFromModule()\nmethod now is\ndeprecated and ignored.\n(Contributed by Robert Collins and Barry A. Warsaw in bpo-16662.)\nRemoved\u00b6\nAPI and Feature Removals\u00b6\nThe following obsolete and previously deprecated APIs and features have been removed:\nThe\n__version__\nattribute has been dropped from the email package. The email code hasn\u2019t been shipped separately from the stdlib for a long time, and the__version__\nstring was not updated in the last few releases.The internal\nNetrc\nclass in theftplib\nmodule was deprecated in 3.4, and has now been removed. (Contributed by Matt Chaput in bpo-6623.)The concept of\n.pyo\nfiles has been removed.The JoinableQueue class in the provisional\nasyncio\nmodule was deprecated in 3.4.4 and is now removed. (Contributed by A. 
Jesse Jiryu Davis in bpo-23464.)\nPorting to Python 3.5\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in Python behavior\u00b6\nDue to an oversight, earlier Python versions erroneously accepted the following syntax:\nf(1 for x in [1], *args) f(1 for x in [1], **kwargs)\nPython 3.5 now correctly raises a\nSyntaxError\n, as generator expressions must be put in parentheses if not a sole argument to a function.\nChanges in the Python API\u00b6\nPEP 475: System calls are now retried when interrupted by a signal instead of raising\nInterruptedError\nif the Python signal handler does not raise an exception.Before Python 3.5, a\ndatetime.time\nobject was considered to be false if it represented midnight in UTC. This behavior was considered obscure and error-prone and has been removed in Python 3.5. See bpo-13936 for full details.The\nssl.SSLSocket.send()\nmethod now raises eitherssl.SSLWantReadError\norssl.SSLWantWriteError\non a non-blocking socket if the operation would block. Previously, it would return0\n. (Contributed by Nikolaus Rath in bpo-20951.)The\n__name__\nattribute of generators is now set from the function name, instead of being set from the code name. Usegen.gi_code.co_name\nto retrieve the code name. Generators also have a new__qualname__\nattribute, the qualified name, which is now used for the representation of a generator (repr(gen)\n). (Contributed by Victor Stinner in bpo-21205.)The deprecated \u201cstrict\u201d mode and argument of\nHTMLParser\n,HTMLParser.error()\n, and theHTMLParserError\nexception have been removed. (Contributed by Ezio Melotti in bpo-15114.) The convert_charrefs argument ofHTMLParser\nis nowTrue\nby default. 
(Contributed by Berker Peksag in bpo-21047.)Although it is not formally part of the API, it is worth noting for porting purposes (ie: fixing tests) that error messages that were previously of the form \u201c\u2018sometype\u2019 does not support the buffer protocol\u201d are now of the form \u201ca bytes-like object is required, not \u2018sometype\u2019\u201d. (Contributed by Ezio Melotti in bpo-16518.)\nIf the current directory is set to a directory that no longer exists then\nFileNotFoundError\nwill no longer be raised and insteadfind_spec()\nwill returnNone\nwithout cachingNone\ninsys.path_importer_cache\n, which is different than the typical case (bpo-22834).HTTP status code and messages from\nhttp.client\nandhttp.server\nwere refactored into a commonHTTPStatus\nenum. The values inhttp.client\nandhttp.server\nremain available for backwards compatibility. (Contributed by Demian Brecht in bpo-21793.)When an import loader defines\nexec_module()\nit is now expected to also definecreate_module()\n(raises aDeprecationWarning\nnow, will be an error in Python 3.6). If the loader inherits fromimportlib.abc.Loader\nthen there is nothing to do, else simply definecreate_module()\nto returnNone\n. (Contributed by Brett Cannon in bpo-23014.)The\nre.split()\nfunction always ignored empty pattern matches, so the\"x*\"\npattern worked the same as\"x+\"\n, and the\"\\b\"\npattern never worked. Nowre.split()\nraises a warning if the pattern could match an empty string. For compatibility, use patterns that never match an empty string (e.g.\"x+\"\ninstead of\"x*\"\n). Patterns that could only match an empty string (such as\"\\b\"\n) now raise an error. 
(Contributed by Serhiy Storchaka in bpo-22818.)The\nhttp.cookies.Morsel\ndict-like interface has been made self consistent: morsel comparison now takes thekey\nandvalue\ninto account,copy()\nnow results in aMorsel\ninstance rather than adict\n, andupdate()\nwill now raise an exception if any of the keys in the update dictionary are invalid. In addition, the undocumented LegalChars parameter ofset()\nis deprecated and is now ignored. (Contributed by Demian Brecht in bpo-2211.)PEP 488 has removed\n.pyo\nfiles from Python and introduced the optionalopt-\ntag in.pyc\nfile names. Theimportlib.util.cache_from_source()\nhas gained an optimization parameter to help control theopt-\ntag. Because of this, the debug_override parameter of the function is now deprecated..pyo\nfiles are also no longer supported as a file argument to the Python interpreter and thus serve no purpose when distributed on their own (i.e. sourceless code distribution). Due to the fact that the magic number for bytecode has changed in Python 3.5, all old.pyo\nfiles from previous versions of Python are invalid regardless of this PEP.The\nsocket\nmodule now exports theCAN_RAW_FD_FRAMES\nconstant on linux 3.6 and greater.The\nssl.cert_time_to_seconds()\nfunction now interprets the input time as UTC and not as local time, per RFC 5280. Additionally, the return value is always anint\n. (Contributed by Akira Li in bpo-19940.)The\npygettext.py\nTool now uses the standard +NNNN format for timezones in the POT-Creation-Date header.The\nsmtplib\nmodule now usessys.stderr\ninstead of the previous module-levelstderr\nvariable for debug output. If your (test) program depends on patching the module-level variable to capture the debug output, you will need to update it to capture sys.stderr instead.The\nstr.startswith()\nandstr.endswith()\nmethods no longer returnTrue\nwhen finding the empty string and the indexes are completely out of range. 
(Contributed by Serhiy Storchaka in bpo-24284.)The\ninspect.getdoc()\nfunction now returns documentation strings inherited from base classes. Documentation strings no longer need to be duplicated if the inherited documentation is appropriate. To suppress an inherited string, an empty string must be specified (or the documentation may be filled in). This change affects the output of thepydoc\nmodule and thehelp()\nfunction. (Contributed by Serhiy Storchaka in bpo-15582.)Nested\nfunctools.partial()\ncalls are now flattened. If you were relying on the previous behavior, you can now either add an attribute to afunctools.partial()\nobject or you can create a subclass offunctools.partial()\n. (Contributed by Alexander Belopolsky in bpo-7830.)\nChanges in the C API\u00b6\nThe undocumented\nformat\nmember of the (non-public)PyMemoryViewObject\nstructure has been removed. All extensions relying on the relevant parts inmemoryobject.h\nmust be rebuilt.The\nPyMemAllocator\nstructure was renamed toPyMemAllocatorEx\nand a newcalloc\nfield was added.Removed non-documented macro\nPyObject_REPR()\nwhich leaked references. Use format character%R\ninPyUnicode_FromFormat()\n-like functions to format therepr()\nof the object. (Contributed by Serhiy Storchaka in bpo-22453.)Because the lack of the\n__module__\nattribute breaks pickling and introspection, a deprecation warning is now raised for builtin types without the__module__\nattribute. This will be anAttributeError\nin the future. (Contributed by Serhiy Storchaka in bpo-20204.)As part of the PEP 492 implementation, the\ntp_reserved\nslot ofPyTypeObject\nwas replaced with atp_as_async\nslot. 
Refer to Coroutine Objects for new types, structures and functions.\nNotable changes in Python 3.5.4\u00b6\nNew make regen-all\nbuild target\u00b6\nTo simplify cross-compilation, and to ensure that CPython can reliably be compiled without requiring an existing version of Python to already be available, the autotools-based build system no longer attempts to implicitly recompile generated files based on file modification times.\nInstead, a new make regen-all\ncommand has been added to force regeneration\nof these files when desired (e.g. after an initial version of Python has\nalready been built based on the pregenerated versions).\nMore selective regeneration targets are also defined - see Makefile.pre.in for details.\n(Contributed by Victor Stinner in bpo-23404.)\nAdded in version 3.5.4.\nRemoval of make touch\nbuild target\u00b6\nThe make touch\nbuild target previously used to request implicit regeneration\nof generated files by updating their modification times has been removed.\nIt has been replaced by the new make regen-all\ntarget.\n(Contributed by Victor Stinner in bpo-23404.)\nChanged in version 3.5.4.", "code_snippets": [" ", "\n", " ", " ", "\n", "\n\n", " ", "\n ", " ", " ", " ", " ", " ", "\n\n ", "\n ", "\n ", " ", " ", "\n ", "\n ", " ", "\n ", "\n\n ", " ", " ", " ", " ", "\n ", " ", "\n\n ", "\n\n", " ", " ", "\n", "\n ", "\n", "\n ", "\n", "\n\n", " ", " ", "\n ", "\n ", " ", " ", "\n ", "\n ", " ", "\n ", "\n\n", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", "\n ", "\n", "\n ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", "\n", "\n\n", " ", " ", "\n", "\n", "\n\n", " ", " ", "\n", "\n", "\n", "\n", "\n\n", " ", " ", "\n", "\n", " ", " ", " ", " ", "\n", "\n\n", 
" ", " ", "\n", " ", " ", " ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 18296} +{"url": "https://docs.python.org/3/whatsnew/3.6.html", "title": "What\u2019s New In Python 3.6", "content": "What\u2019s New In Python 3.6\u00b6\n- Editors:\nElvis Pranskevichus , Yury Selivanov \nThis article explains the new features in Python 3.6, compared to 3.5. Python 3.6 was released on December 23, 2016. See the changelog for a full list of changes.\nSee also\nPEP 494 - Python 3.6 Release Schedule\nSummary \u2013 Release highlights\u00b6\nNew syntax features:\nPEP 498, formatted string literals.\nPEP 515, underscores in numeric literals.\nPEP 526, syntax for variable annotations.\nPEP 525, asynchronous generators.\nPEP 530: asynchronous comprehensions.\nNew library modules:\nCPython implementation improvements:\nThe dict type has been reimplemented to use a more compact representation based on a proposal by Raymond Hettinger and similar to the PyPy dict implementation. This resulted in dictionaries using 20% to 25% less memory when compared to Python 3.5.\nCustomization of class creation has been simplified with the new protocol.\nThe class attribute definition order is now preserved.\nThe order of elements in\n**kwargs\nnow corresponds to the order in which keyword arguments were passed to the function.DTrace and SystemTap probing support has been added.\nThe new PYTHONMALLOC environment variable can now be used to debug the interpreter memory allocation and access errors.\nSignificant improvements in the standard library:\nThe\nasyncio\nmodule has received new features, significant usability and performance improvements, and a fair amount of bug fixes. Starting with Python 3.6 theasyncio\nmodule is no longer provisional and its API is considered stable.A new file system path protocol has been implemented to support path-like objects. 
All standard library functions operating on paths have been updated to work with the new protocol.\nThe\ndatetime\nmodule has gained support for Local Time Disambiguation.The\ntyping\nmodule received a number of improvements.The\ntracemalloc\nmodule has been significantly reworked and is now used to provide better output forResourceWarning\nas well as provide better diagnostics for memory allocation errors. See the PYTHONMALLOC section for more information.\nSecurity improvements:\nThe new\nsecrets\nmodule has been added to simplify the generation of cryptographically strong pseudo-random numbers suitable for managing secrets such as account authentication, tokens, and similar.On Linux,\nos.urandom()\nnow blocks until the system urandom entropy pool is initialized to increase the security. See the PEP 524 for the rationale.The default settings and feature set of the\nssl\nmodule have been improved.The\nhashlib\nmodule received support for the BLAKE2, SHA-3 and SHAKE hash algorithms and thescrypt()\nkey derivation function.\nWindows improvements:\nPEP 528 and PEP 529, Windows filesystem and console encoding changed to UTF-8.\nThe\npy.exe\nlauncher, when used interactively, no longer prefers Python 2 over Python 3 when the user doesn\u2019t specify a version (via command line arguments or a config file). Handling of shebang lines remains unchanged - \u201cpython\u201d refers to Python 2 in that case.python.exe\nandpythonw.exe\nhave been marked as long-path aware, which means that the 260 character path limit may no longer apply. See removing the MAX_PATH limitation for details.A\n._pth\nfile can be added to force isolated mode and fully specify all search paths to avoid registry and environment lookup. See the documentation for more information.A\npython36.zip\nfile now works as a landmark to inferPYTHONHOME\n. 
See the documentation for more information.\nNew Features\u00b6\nPEP 498: Formatted string literals\u00b6\nPEP 498 introduces a new kind of string literals: f-strings, or formatted string literals.\nFormatted string literals are prefixed with 'f'\nand are similar to\nthe format strings accepted by str.format()\n. They contain replacement\nfields surrounded by curly braces. The replacement fields are expressions,\nwhich are evaluated at run time, and then formatted using the\nformat()\nprotocol:\n>>> name = \"Fred\"\n>>> f\"He said his name is {name}.\"\n'He said his name is Fred.'\n>>> width = 10\n>>> precision = 4\n>>> value = decimal.Decimal(\"12.34567\")\n>>> f\"result: {value:{width}.{precision}}\" # nested fields\n'result: 12.35'\nSee also\n- PEP 498 \u2013 Literal String Interpolation.\nPEP written and implemented by Eric V. Smith.\nPEP 526: Syntax for variable annotations\u00b6\nPEP 484 introduced the standard for type annotations of function parameters, a.k.a. type hints. This PEP adds syntax to Python for annotating the types of variables including class variables and instance variables:\nprimes: List[int] = []\ncaptain: str # Note: no initial value!\nclass Starship:\nstats: Dict[str, int] = {}\nJust as for function annotations, the Python interpreter does not attach any\nparticular meaning to variable annotations and only stores them in the\n__annotations__\nattribute of a class or module.\nIn contrast to variable declarations in statically typed languages,\nthe goal of annotation syntax is to provide an easy way to specify structured\ntype metadata for third party tools and libraries via the abstract syntax tree\nand the __annotations__\nattribute.\nPEP 515: Underscores in Numeric Literals\u00b6\nPEP 515 adds the ability to use underscores in numeric literals for improved readability. For example:\n>>> 1_000_000_000_000_000\n1000000000000000\n>>> 0x_FF_FF_FF_FF\n4294967295\nSingle underscores are allowed between digits and after any base specifier. 
Leading, trailing, or multiple underscores in a row are not allowed.\nThe string formatting language also now has support\nfor the '_'\noption to signal the use of an underscore for a thousands\nseparator for floating-point presentation types and for integer\npresentation type 'd'\n. For integer presentation types 'b'\n,\n'o'\n, 'x'\n, and 'X'\n, underscores will be inserted every 4\ndigits:\n>>> '{:_}'.format(1000000)\n'1_000_000'\n>>> '{:_x}'.format(0xFFFFFFFF)\n'ffff_ffff'\nSee also\n- PEP 515 \u2013 Underscores in Numeric Literals\nPEP written by Georg Brandl and Serhiy Storchaka.\nPEP 525: Asynchronous Generators\u00b6\nPEP 492 introduced support for native coroutines and async\n/ await\nsyntax to Python 3.5. A notable limitation of the Python 3.5 implementation\nis that it was not possible to use await\nand yield\nin the same\nfunction body. In Python 3.6 this restriction has been lifted, making it\npossible to define asynchronous generators:\nasync def ticker(delay, to):\n\"\"\"Yield numbers from 0 to *to* every *delay* seconds.\"\"\"\nfor i in range(to):\nyield i\nawait asyncio.sleep(delay)\nThe new syntax allows for faster and more concise code.\nSee also\n- PEP 525 \u2013 Asynchronous Generators\nPEP written and implemented by Yury Selivanov.\nPEP 530: Asynchronous Comprehensions\u00b6\nPEP 530 adds support for using async for\nin list, set, dict\ncomprehensions and generator expressions:\nresult = [i async for i in aiter() if i % 2]\nAdditionally, await\nexpressions are supported in all kinds\nof comprehensions:\nresult = [await fun() for fun in funcs if await condition()]\nSee also\n- PEP 530 \u2013 Asynchronous Comprehensions\nPEP written and implemented by Yury Selivanov.\nPEP 487: Simpler customization of class creation\u00b6\nIt is now possible to customize subclass creation without using a metaclass.\nThe new __init_subclass__\nclassmethod will be called on the base class\nwhenever a new subclass is created:\nclass PluginBase:\nsubclasses = []\ndef 
__init_subclass__(cls, **kwargs):\nsuper().__init_subclass__(**kwargs)\ncls.subclasses.append(cls)\nclass Plugin1(PluginBase):\npass\nclass Plugin2(PluginBase):\npass\nIn order to allow zero-argument super()\ncalls to work correctly from\n__init_subclass__()\nimplementations, custom metaclasses must\nensure that the new __classcell__\nnamespace entry is propagated to\ntype.__new__\n(as described in Creating the class object).\nSee also\n- PEP 487 \u2013 Simpler customization of class creation\nPEP written and implemented by Martin Teichmann.\nPEP 487: Descriptor Protocol Enhancements\u00b6\nPEP 487 extends the descriptor protocol to include the new optional\n__set_name__()\nmethod. Whenever a new class is defined, the new\nmethod will be called on all descriptors included in the definition, providing\nthem with a reference to the class being defined and the name given to the\ndescriptor within the class namespace. In other words, instances of\ndescriptors can now know the attribute name of the descriptor in the\nowner class:\nclass IntField:\ndef __get__(self, instance, owner):\nreturn instance.__dict__[self.name]\ndef __set__(self, instance, value):\nif not isinstance(value, int):\nraise ValueError(f'expecting integer in {self.name}')\ninstance.__dict__[self.name] = value\n# this is the new initializer:\ndef __set_name__(self, owner, name):\nself.name = name\nclass Model:\nint_field = IntField()\nSee also\n- PEP 487 \u2013 Simpler customization of class creation\nPEP written and implemented by Martin Teichmann.\nPEP 519: Adding a file system path protocol\u00b6\nFile system paths have historically been represented as str\nor bytes\nobjects. This has led to people who write code which\noperate on file system paths to assume that such objects are only one\nof those two types (an int\nrepresenting a file descriptor\ndoes not count as that is not a file path). 
Unfortunately that\nassumption prevents alternative object representations of file system\npaths like pathlib\nfrom working with pre-existing code,\nincluding Python\u2019s standard library.\nTo fix this situation, a new interface represented by\nos.PathLike\nhas been defined. By implementing the\n__fspath__()\nmethod, an object signals that it\nrepresents a path. An object can then provide a low-level\nrepresentation of a file system path as a str\nor\nbytes\nobject. This means an object is considered\npath-like if it implements\nos.PathLike\nor is a str\nor bytes\nobject\nwhich represents a file system path. Code can use os.fspath()\n,\nos.fsdecode()\n, or os.fsencode()\nto explicitly get a\nstr\nand/or bytes\nrepresentation of a path-like\nobject.\nThe built-in open()\nfunction has been updated to accept\nos.PathLike\nobjects, as have all relevant functions in the\nos\nand os.path\nmodules, and most other functions and\nclasses in the standard library. The os.DirEntry\nclass\nand relevant classes in pathlib\nhave also been updated to\nimplement os.PathLike\n.\nThe hope is that updating the fundamental functions for operating\non file system paths will lead to third-party code to implicitly\nsupport all path-like objects without any\ncode changes, or at least very minimal ones (e.g. calling\nos.fspath()\nat the beginning of code before operating on a\npath-like object).\nHere are some examples of how the new interface allows for\npathlib.Path\nto be used more easily and transparently with\npre-existing code:\n>>> import pathlib\n>>> with open(pathlib.Path(\"README\")) as f:\n... 
contents = f.read()\n...\n>>> import os.path\n>>> os.path.splitext(pathlib.Path(\"some_file.txt\"))\n('some_file', '.txt')\n>>> os.path.join(\"/a/b\", pathlib.Path(\"c\"))\n'/a/b/c'\n>>> import os\n>>> os.fspath(pathlib.Path(\"some_file.txt\"))\n'some_file.txt'\n(Implemented by Brett Cannon, Ethan Furman, Dusty Phillips, and Jelle Zijlstra.)\nSee also\n- PEP 519 \u2013 Adding a file system path protocol\nPEP written by Brett Cannon and Koos Zevenhoven.\nPEP 495: Local Time Disambiguation\u00b6\nIn most world locations, there have been and will be times when local clocks are moved back. In those times, intervals are introduced in which local clocks show the same time twice in the same day. In these situations, the information displayed on a local clock (or stored in a Python datetime instance) is insufficient to identify a particular moment in time.\nPEP 495 adds the new fold attribute to instances of\ndatetime.datetime\nand datetime.time\nclasses to differentiate\nbetween two moments in time for which local times are the same:\n>>> u0 = datetime(2016, 11, 6, 4, tzinfo=timezone.utc)\n>>> for i in range(4):\n... u = u0 + i*HOUR\n... t = u.astimezone(Eastern)\n... print(u.time(), 'UTC =', t.time(), t.tzname(), t.fold)\n...\n04:00:00 UTC = 00:00:00 EDT 0\n05:00:00 UTC = 01:00:00 EDT 0\n06:00:00 UTC = 01:00:00 EST 1\n07:00:00 UTC = 02:00:00 EST 0\nThe values of the fold\nattribute have the\nvalue 0\nfor all instances except those that represent the second\n(chronologically) moment in time in an ambiguous case.\nSee also\n- PEP 495 \u2013 Local Time Disambiguation\nPEP written by Alexander Belopolsky and Tim Peters, implementation by Alexander Belopolsky.\nPEP 529: Change Windows filesystem encoding to UTF-8\u00b6\nRepresenting filesystem paths is best performed with str (Unicode) rather than bytes. 
However, there are some situations where using bytes is sufficient and correct.\nPrior to Python 3.6, data loss could result when using bytes paths on Windows.\nWith this change, using bytes to represent paths is now supported on Windows,\nprovided those bytes are encoded with the encoding returned by\nsys.getfilesystemencoding()\n, which now defaults to 'utf-8'\n.\nApplications that do not use str to represent paths should use\nos.fsencode()\nand os.fsdecode()\nto ensure their bytes are\ncorrectly encoded. To revert to the previous behaviour, set\nPYTHONLEGACYWINDOWSFSENCODING\nor call\nsys._enablelegacywindowsfsencoding()\n.\nSee PEP 529 for more information and discussion of code modifications that may be required.\nPEP 528: Change Windows console encoding to UTF-8\u00b6\nThe default console on Windows will now accept all Unicode characters and\nprovide correctly read str objects to Python code. sys.stdin\n,\nsys.stdout\nand sys.stderr\nnow default to utf-8 encoding.\nThis change only applies when using an interactive console, and not when\nredirecting files or pipes. To revert to the previous behaviour for interactive\nconsole use, set PYTHONLEGACYWINDOWSSTDIO\n.\nSee also\n- PEP 528 \u2013 Change Windows console encoding to UTF-8\nPEP written and implemented by Steve Dower.\nPEP 520: Preserving Class Attribute Definition Order\u00b6\nAttributes in a class definition body have a natural ordering: the same\norder in which the names appear in the source. 
This order is now\npreserved in the new class\u2019s __dict__\nattribute.\nAlso, the effective default class execution namespace (returned from type.__prepare__()) is now an insertion-order-preserving mapping.\nSee also\n- PEP 520 \u2013 Preserving Class Attribute Definition Order\nPEP written and implemented by Eric Snow.\nPEP 468: Preserving Keyword Argument Order\u00b6\n**kwargs\nin a function signature is now guaranteed to be an\ninsertion-order-preserving mapping.\nSee also\n- PEP 468 \u2013 Preserving Keyword Argument Order\nPEP written and implemented by Eric Snow.\nNew dict implementation\u00b6\nThe dict type now uses a \u201ccompact\u201d representation\nbased on a proposal by Raymond Hettinger\nwhich was first implemented by PyPy.\nThe memory usage of the new dict()\nis between 20% and 25% smaller\ncompared to Python 3.5.\nThe order-preserving aspect of this new implementation is considered an implementation detail and should not be relied upon (this may change in the future, but it is desired to have this new dict implementation in the language for a few releases before changing the language spec to mandate order-preserving semantics for all current and future Python implementations; this also helps preserve backwards-compatibility with older versions of the language where random iteration order is still in effect, e.g. Python 3.5).\n(Contributed by INADA Naoki in bpo-27350. Idea originally suggested by Raymond Hettinger.)\nPEP 523: Adding a frame evaluation API to CPython\u00b6\nWhile Python provides extensive support to customize how code executes, one place it has not done so is in the evaluation of frame objects. If you wanted some way to intercept frame evaluation in Python there really wasn\u2019t any way without directly manipulating function pointers for defined functions.\nPEP 523 changes this by providing an API to make frame evaluation pluggable at the C level. 
This will allow for tools such as debuggers and JITs to intercept frame evaluation before the execution of Python code begins. This enables the use of alternative evaluation implementations for Python code, tracking frame evaluation, etc.\nThis API is not part of the limited C API and is marked as private to signal that usage of this API is expected to be limited and only applicable to very select, low-level use-cases. Semantics of the API will change with Python as necessary.\nSee also\n- PEP 523 \u2013 Adding a frame evaluation API to CPython\nPEP written by Brett Cannon and Dino Viehland.\nPYTHONMALLOC environment variable\u00b6\nThe new PYTHONMALLOC\nenvironment variable allows setting the Python\nmemory allocators and installing debug hooks.\nIt is now possible to install debug hooks on Python memory allocators on Python\ncompiled in release mode using PYTHONMALLOC=debug\n. Effects of debug hooks:\nNewly allocated memory is filled with the byte\n0xCB\nFreed memory is filled with the byte\n0xDB\nDetect violations of the Python memory allocator API. 
For example,\nPyObject_Free()\ncalled on a memory block allocated byPyMem_Malloc()\n.Detect writes before the start of a buffer (buffer underflows)\nDetect writes after the end of a buffer (buffer overflows)\nCheck that the GIL is held when allocator functions of\nPYMEM_DOMAIN_OBJ\n(ex:PyObject_Malloc()\n) andPYMEM_DOMAIN_MEM\n(ex:PyMem_Malloc()\n) domains are called.\nChecking if the GIL is held is also a new feature of Python 3.6.\nSee the PyMem_SetupDebugHooks()\nfunction for debug hooks on Python\nmemory allocators.\nIt is now also possible to force the usage of the malloc()\nallocator of\nthe C library for all Python memory allocations using PYTHONMALLOC=malloc\n.\nThis is helpful when using external memory debuggers like Valgrind on\na Python compiled in release mode.\nOn error, the debug hooks on Python memory allocators now use the\ntracemalloc\nmodule to get the traceback where a memory block was\nallocated.\nExample of fatal error on buffer overflow using\npython3.6 -X tracemalloc=5\n(store 5 frames in traces):\nDebug memory block at address p=0x7fbcd41666f8: API 'o'\n4 bytes originally requested\nThe 7 pad bytes at p-7 are FORBIDDENBYTE, as expected.\nThe 8 pad bytes at tail=0x7fbcd41666fc are not all FORBIDDENBYTE (0xfb):\nat tail+0: 0x02 *** OUCH\nat tail+1: 0xfb\nat tail+2: 0xfb\nat tail+3: 0xfb\nat tail+4: 0xfb\nat tail+5: 0xfb\nat tail+6: 0xfb\nat tail+7: 0xfb\nThe block was made by call #1233329 to debug malloc/realloc.\nData at p: 1a 2b 30 00\nMemory block allocated at (most recent call first):\nFile \"test/test_bytes.py\", line 323\nFile \"unittest/case.py\", line 600\nFile \"unittest/case.py\", line 648\nFile \"unittest/suite.py\", line 122\nFile \"unittest/suite.py\", line 84\nFatal Python error: bad trailing pad byte\nCurrent thread 0x00007fbcdbd32700 (most recent call first):\nFile \"test/test_bytes.py\", line 323 in test_hex\nFile \"unittest/case.py\", line 600 in run\nFile \"unittest/case.py\", line 648 in __call__\nFile 
\"unittest/suite.py\", line 122 in run\nFile \"unittest/suite.py\", line 84 in __call__\nFile \"unittest/suite.py\", line 122 in run\nFile \"unittest/suite.py\", line 84 in __call__\n...\nDTrace and SystemTap probing support\u00b6\nPython can now be built --with-dtrace\nwhich enables static markers\nfor the following events in the interpreter:\nfunction call/return\ngarbage collection started/finished\nline of code executed.\nThis can be used to instrument running interpreters in production, without the need to recompile specific debug builds or providing application-specific profiling/debugging code.\nMore details in Instrumenting CPython with DTrace and SystemTap.\nThe current implementation is tested on Linux and macOS. Additional markers may be added in the future.\n(Contributed by \u0141ukasz Langa in bpo-21590, based on patches by Jes\u00fas Cea Avi\u00f3n, David Malcolm, and Nikhil Benesch.)\nOther Language Changes\u00b6\nSome smaller changes made to the core Python language are:\nA\nglobal\nornonlocal\nstatement must now textually appear before the first use of the affected name in the same scope. Previously this was aSyntaxWarning\n.It is now possible to set a special method to\nNone\nto indicate that the corresponding operation is not available. For example, if a class sets__iter__()\ntoNone\n, the class is not iterable. (Contributed by Andrew Barnert and Ivan Levkivskyi in bpo-25958.)Long sequences of repeated traceback lines are now abbreviated as\n\"[Previous line repeated {count} more times]\"\n(see traceback for an example). (Contributed by Emanuel Barry in bpo-26823.)Import now raises the new exception\nModuleNotFoundError\n(subclass ofImportError\n) when it cannot find a module. Code that currently checks for ImportError (in try-except) will still work. (Contributed by Eric Snow in bpo-15767.)Class methods relying on zero-argument\nsuper()\nwill now work correctly when called from metaclass methods during class creation. 
(Contributed by Martin Teichmann in bpo-23722.)\nNew Modules\u00b6\nsecrets\u00b6\nThe main purpose of the new secrets\nmodule is to provide an obvious way\nto reliably generate cryptographically strong pseudo-random values suitable\nfor managing secrets, such as account authentication, tokens, and similar.\nWarning\nNote that the pseudo-random generators in the random\nmodule\nshould NOT be used for security purposes. Use secrets\non Python 3.6+ and os.urandom()\non Python 3.5 and earlier.\nSee also\n- PEP 506 \u2013 Adding A Secrets Module To The Standard Library\nPEP written and implemented by Steven D\u2019Aprano.\nImproved Modules\u00b6\narray\u00b6\nExhausted iterators of array.array\nwill now stay exhausted even\nif the iterated array is extended. This is consistent with the behavior\nof other mutable sequences.\nContributed by Serhiy Storchaka in bpo-26492.\nast\u00b6\nThe new ast.Constant\nAST node has been added. It can be used\nby external AST optimizers for the purposes of constant folding.\nContributed by Victor Stinner in bpo-26146.\nasyncio\u00b6\nStarting with Python 3.6 the asyncio\nmodule is no longer provisional and its\nAPI is considered stable.\nNotable changes in the asyncio\nmodule since Python 3.5.0\n(all backported to 3.5.x due to the provisional status):\nThe\nget_event_loop()\nfunction has been changed to always return the currently running loop when called from coroutines and callbacks. (Contributed by Yury Selivanov in bpo-28613.)The\nensure_future()\nfunction and all functions that use it, such asloop.run_until_complete()\n, now accept all kinds of awaitable objects. (Contributed by Yury Selivanov.)New\nrun_coroutine_threadsafe()\nfunction to submit coroutines to event loops from other threads. (Contributed by Vincent Michel.)New\nTransport.is_closing()\nmethod to check if the transport is closing or closed. (Contributed by Yury Selivanov.)The\nloop.create_server()\nmethod can now accept a list of hosts. 
(Contributed by Yann Sionneau.)New\nloop.create_future()\nmethod to create Future objects. This allows alternative event loop implementations, such as uvloop, to provide a fasterasyncio.Future\nimplementation. (Contributed by Yury Selivanov in bpo-27041.)New\nloop.get_exception_handler()\nmethod to get the current exception handler. (Contributed by Yury Selivanov in bpo-27040.)New\nStreamReader.readuntil()\nmethod to read data from the stream until a separator bytes sequence appears. (Contributed by Mark Korenberg.)The performance of\nStreamReader.readexactly()\nhas been improved. (Contributed by Mark Korenberg in bpo-28370.)The\nloop.getaddrinfo()\nmethod is optimized to avoid calling the systemgetaddrinfo\nfunction if the address is already resolved. (Contributed by A. Jesse Jiryu Davis.)The\nloop.stop()\nmethod has been changed to stop the loop immediately after the current iteration. Any new callbacks scheduled as a result of the last iteration will be discarded. (Contributed by Guido van Rossum in bpo-25593.)Future.set_exception\nwill now raiseTypeError\nwhen passed an instance of theStopIteration\nexception. (Contributed by Chris Angelico in bpo-26221.)New\nloop.connect_accepted_socket()\nmethod to be used by servers that accept connections outside of asyncio, but that use asyncio to handle them. (Contributed by Jim Fulton in bpo-27392.)TCP_NODELAY\nflag is now set for all TCP transports by default. (Contributed by Yury Selivanov in bpo-27456.)New\nloop.shutdown_asyncgens()\nto properly close pending asynchronous generators before closing the loop. (Contributed by Yury Selivanov in bpo-28003.)Future\nandTask\nclasses now have an optimized C implementation which makes asyncio code up to 30% faster. 
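The loop.create_future() addition from the list above can be sketched with a tiny event-loop session; the callback-based resolution is illustrative:

```python
import asyncio

# Create a Future through the event loop, so alternative loop
# implementations (such as uvloop) can supply their own Future class.
loop = asyncio.new_event_loop()
fut = loop.create_future()

# Resolve the future from a plain callback scheduled on the loop.
loop.call_soon(fut.set_result, "done")
result = loop.run_until_complete(fut)
loop.close()
```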
(Contributed by Yury Selivanov and INADA Naoki in bpo-26081 and bpo-28544.)\nbinascii\u00b6\nThe b2a_base64()\nfunction now accepts an optional newline\nkeyword argument to control whether the newline character is appended to the\nreturn value.\n(Contributed by Victor Stinner in bpo-25357.)\ncmath\u00b6\nThe new cmath.tau\n(\u03c4) constant has been added.\n(Contributed by Lisa Roach in bpo-12345, see PEP 628 for details.)\nNew constants: cmath.inf\nand cmath.nan\nto\nmatch math.inf\nand math.nan\n, and also cmath.infj\nand cmath.nanj\nto match the format used by complex repr.\n(Contributed by Mark Dickinson in bpo-23229.)\ncollections\u00b6\nThe new Collection\nabstract base class has been\nadded to represent sized iterable container classes.\n(Contributed by Ivan Levkivskyi, docs by Neil Girdhar in bpo-27598.)\nThe new Reversible\nabstract base class represents\niterable classes that also provide the __reversed__()\nmethod.\n(Contributed by Ivan Levkivskyi in bpo-25987.)\nThe new AsyncGenerator\nabstract base class represents\nasynchronous generators.\n(Contributed by Yury Selivanov in bpo-28720.)\nThe namedtuple()\nfunction now accepts an optional\nkeyword argument module, which, when specified, is used for\nthe __module__\nattribute of the returned named tuple class.\n(Contributed by Raymond Hettinger in bpo-17941.)\nThe verbose and rename arguments for\nnamedtuple()\nare now keyword-only.\n(Contributed by Raymond Hettinger in bpo-25628.)\nRecursive collections.deque\ninstances can now be pickled.\n(Contributed by Serhiy Storchaka in bpo-26482.)\nconcurrent.futures\u00b6\nThe ThreadPoolExecutor\nclass constructor now accepts an optional thread_name_prefix argument\nto make it possible to customize the names of the threads created by the\npool.\n(Contributed by Gregory P. Smith in bpo-27664.)\ncontextlib\u00b6\nThe contextlib.AbstractContextManager\nclass has been added to\nprovide an abstract base class for context managers. 
It provides a\nsensible default implementation for __enter__()\nwhich returns\nself\nand leaves __exit__()\nan abstract method. A matching\nclass has been added to the typing\nmodule as\ntyping.ContextManager\n.\n(Contributed by Brett Cannon in bpo-25609.)\ndatetime¶\nThe datetime\nand time\nclasses have\nthe new fold\nattribute used to disambiguate local time\nwhen necessary. Many functions in the datetime\nmodule have been\nupdated to support local time disambiguation.\nSee the Local Time Disambiguation section for more\ninformation.\n(Contributed by Alexander Belopolsky in bpo-24773.)\nThe datetime.strftime()\nand\ndate.strftime()\nmethods now support\nISO 8601 date directives %G\n, %u\nand %V\n.\n(Contributed by Ashley Anderson in bpo-12006.)\nThe datetime.isoformat()\nfunction\nnow accepts an optional timespec argument that specifies the number\nof additional components of the time value to include.\n(Contributed by Alessandro Cucci and Alexander Belopolsky in bpo-19475.)\nThe datetime.combine()\nmethod now\naccepts an optional tzinfo argument.\n(Contributed by Alexander Belopolsky in bpo-27661.)\ndecimal¶\nThe new Decimal.as_integer_ratio()\nmethod returns a pair (n, d)\nof integers that represent the given\nDecimal\ninstance as a fraction, in lowest terms and\nwith a positive denominator:\n>>> Decimal('-3.14').as_integer_ratio()\n(-157, 50)\n(Contributed by Stefan Krah and Mark Dickinson in bpo-25928.)\ndistutils¶\nThe default_format\nattribute has been removed from\ndistutils.command.sdist.sdist\nand the formats\nattribute defaults to ['gztar']\n. Although not anticipated,\nany code relying on the presence of default_format\nmay\nneed to be adapted. See bpo-27819 for more details.\nemail¶\nThe new email API, enabled via the policy keyword to various constructors, is\nno longer provisional. The email\ndocumentation has been reorganized and\nrewritten to focus on the new API, while retaining the old documentation for\nthe legacy API. (Contributed by R.
David Murray in bpo-24277.)\nThe email.mime\nclasses now all accept an optional policy keyword.\n(Contributed by Berker Peksag in bpo-27331.)\nThe DecodedGenerator\nnow supports the policy\nkeyword.\nThere is a new policy\nattribute,\nmessage_factory\n, that controls what class is used\nby default when the parser creates new message objects. For the\nemail.policy.compat32\npolicy this is Message\n,\nfor the new policies it is EmailMessage\n.\n(Contributed by R. David Murray in bpo-20476.)\nencodings¶\nOn Windows, added the 'oem'\nencoding to use CP_OEMCP\n, and the 'ansi'\nalias for the existing 'mbcs'\nencoding, which uses the CP_ACP\ncode page.\n(Contributed by Steve Dower in bpo-27959.)\nenum¶\nTwo new enumeration base classes have been added to the enum\nmodule:\nFlag\nand IntFlag\n. Both are used to define\nconstants that can be combined using the bitwise operators.\n(Contributed by Ethan Furman in bpo-23591.)\nMany standard library modules have been updated to use the\nIntFlag\nclass for their constants.\nThe new enum.auto\nvalue can be used to assign values to enum\nmembers automatically:\n>>> from enum import Enum, auto\n>>> class Color(Enum):\n...     red = auto()\n...     blue = auto()\n...     green = auto()\n...\n>>> list(Color)\n[<Color.red: 1>, <Color.blue: 2>, <Color.green: 3>]\nfaulthandler¶\nOn Windows, the faulthandler\nmodule now installs a handler for Windows\nexceptions: see faulthandler.enable()\n. (Contributed by Victor Stinner in\nbpo-23848.)\nfileinput¶\nhook_encoded()\nnow supports the errors argument.\n(Contributed by Joseph Hackman in bpo-25788.)\nhashlib¶\nhashlib\nsupports OpenSSL 1.1.0. The minimum recommended version is 1.0.2.\n(Contributed by Christian Heimes in bpo-26470.)\nBLAKE2 hash functions were added to the module. blake2b()\nand blake2s()\nare always available and support the full\nfeature set of BLAKE2.\n(Contributed by Christian Heimes in bpo-26798 based on code by\nDmitry Chestnykh and Samuel Neves.
Documentation written by Dmitry Chestnykh.)\nThe SHA-3 hash functions sha3_224()\n, sha3_256()\n,\nsha3_384()\n, sha3_512()\n, and SHAKE hash functions\nshake_128()\nand shake_256()\nwere added.\n(Contributed by Christian Heimes in bpo-16113. Keccak Code Package\nby Guido Bertoni, Joan Daemen, Michaël Peeters, Gilles Van Assche, and\nRonny Van Keer.)\nThe password-based key derivation function scrypt()\nis now\navailable with OpenSSL 1.1.0 and newer.\n(Contributed by Christian Heimes in bpo-27928.)\nhttp.client¶\nHTTPConnection.request()\nand\nendheaders()\nboth now support\nchunked encoding request bodies.\n(Contributed by Demian Brecht and Rolf Krahl in bpo-12319.)\nidlelib and IDLE¶\nThe idlelib package is being modernized and refactored to make IDLE look and work better and to make the code easier to understand, test, and improve. Part of making IDLE look better, especially on Linux and Mac, is using ttk widgets, mostly in the dialogs. As a result, IDLE no longer runs with tcl/tk 8.4. It now requires tcl/tk 8.5 or 8.6. We recommend running the latest release of either.\n‘Modernizing’ includes renaming and consolidation of idlelib modules. The renaming of files with partial uppercase names is similar to the renaming of, for instance, Tkinter and TkFont to tkinter and tkinter.font in 3.0. As a result, imports of idlelib files that worked in 3.5 will usually not work in 3.6. At least a module name change will be needed (see idlelib/README.txt), sometimes more. (Name changes contributed by Al Sweigart and Terry Reedy in bpo-24225. Most idlelib patches since have been and will be part of the process.)\nIn compensation, the eventual result will be that some idlelib classes will be easier to use, with better APIs and docstrings explaining them. Additional useful information will be added to idlelib when available.\nNew in 3.6.2:\nMultiple fixes for autocompletion.
(Contributed by Louie Lu in bpo-15786.)\nNew in 3.6.3:\nModule Browser (on the File menu, formerly called Class Browser) now displays nested functions and classes in addition to top-level functions and classes. (Contributed by Guilherme Polo, Cheryl Sabella, and Terry Jan Reedy in bpo-1612262.)\nThe IDLE features formerly implemented as extensions have been reimplemented as normal features. Their settings have been moved from the Extensions tab to other dialog tabs. (Contributed by Charles Wohlganger and Terry Jan Reedy in bpo-27099.)\nThe Settings dialog (Options, Configure IDLE) has been partly rewritten to improve both appearance and function. (Contributed by Cheryl Sabella and Terry Jan Reedy in multiple issues.)\nNew in 3.6.4:\nThe font sample now includes a selection of non-Latin characters so that users can better see the effect of selecting a particular font. (Contributed by Terry Jan Reedy in bpo-13802.) The sample can be edited to include other characters. (Contributed by Serhiy Storchaka in bpo-31860.)\nNew in 3.6.6:\nThe editor code context option has been revised. The box displays all context lines up to maxlines. Clicking on a context line jumps the editor to that line. Context colors for custom themes are added to the Highlights tab of the Settings dialog. (Contributed by Cheryl Sabella and Terry Jan Reedy in bpo-33642, bpo-33768, and bpo-33679.)\nOn Windows, a new API call tells Windows that tk scales for DPI. On Windows 8.1+ or 10, with DPI compatibility properties of the Python binary unchanged, and a monitor resolution greater than 96 DPI, this should make text and lines sharper. It should otherwise have no effect. (Contributed by Terry Jan Reedy in bpo-33656.)\nNew in 3.6.7:\nOutput over N lines (50 by default) is squeezed down to a button. N can be changed in the PyShell section of the General page of the Settings dialog. Fewer, but possibly extra long, lines can be squeezed by right clicking on the output.
Squeezed output can be expanded in place by double-clicking the button or into the clipboard or a separate window by right-clicking the button. (Contributed by Tal Einat in bpo-1529353.)\nimportlib¶\nImport now raises the new exception ModuleNotFoundError\n(subclass of ImportError\n) when it cannot find a module. Code\nthat currently checks for ImportError\n(in try-except) will still work.\n(Contributed by Eric Snow in bpo-15767.)\nimportlib.util.LazyLoader\nnow calls\ncreate_module()\non the wrapped loader, removing the\nrestriction that importlib.machinery.BuiltinImporter\nand\nimportlib.machinery.ExtensionFileLoader\ncouldn’t be used with\nimportlib.util.LazyLoader\n.\nimportlib.util.cache_from_source()\n,\nimportlib.util.source_from_cache()\n, and\nimportlib.util.spec_from_file_location()\nnow accept a\npath-like object.\ninspect¶\nThe inspect.signature()\nfunction now reports the\nimplicit .0\nparameters generated by the compiler for comprehension and\ngenerator expression scopes as if they were positional-only parameters called\nimplicit0\n. (Contributed by Jelle Zijlstra in bpo-19611.)\nTo reduce code churn when upgrading from Python 2.7 and the legacy\ninspect.getargspec()\nAPI, the previously documented deprecation of\ninspect.getfullargspec()\nhas been reversed. While this function is\nconvenient for single-source Python 2/3 code bases, the richer\ninspect.signature()\ninterface remains the recommended approach for new\ncode. (Contributed by Nick Coghlan in bpo-27172)\njson¶\njson.load()\nand json.loads()\nnow support binary input.
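For instance, encoded bytes now parse directly (a quick sketch):

```python
import json

# bytes input is now accepted; the encoding is detected automatically.
doc = b'{"answer": 42}'
assert json.loads(doc) == {"answer": 42}

# A UTF-16-encoded payload (with BOM) is detected as well.
assert json.loads('{"answer": 42}'.encode("utf-16")) == {"answer": 42}
```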
Encoded\nJSON should be represented using either UTF-8, UTF-16, or UTF-32.\n(Contributed by Serhiy Storchaka in bpo-17909.)\nlogging¶\nThe new WatchedFileHandler.reopenIfNeeded()\nmethod has been added to check if the log file needs to\nbe reopened.\n(Contributed by Marian Horban in bpo-24884.)\nmath¶\nThe tau (τ) constant has been added to the math\nand cmath\nmodules.\n(Contributed by Lisa Roach in bpo-12345, see PEP 628 for details.)\nmultiprocessing¶\nProxy Objects returned by\nmultiprocessing.Manager()\ncan now be nested.\n(Contributed by Davin Potts in bpo-6766.)\nos¶\nSee the summary of PEP 519 for details on how the\nos\nand os.path\nmodules now support\npath-like objects.\nscandir()\nnow supports bytes\npaths on Windows.\nA new close()\nmethod allows explicitly closing a\nscandir()\niterator. The scandir()\niterator now\nsupports the context manager protocol. If a scandir()\niterator is neither exhausted nor explicitly closed, a ResourceWarning\nwill be emitted in its destructor.\n(Contributed by Serhiy Storchaka in bpo-25994.)\nOn Linux, os.urandom()\nnow blocks until the system urandom entropy pool\nis initialized, to increase security. See PEP 524 for the rationale.\nThe Linux getrandom()\nsyscall (get random bytes) is now exposed as the new\nos.getrandom()\nfunction.\n(Contributed by Victor Stinner as part of PEP 524.)\npathlib¶\npathlib\nnow supports path-like objects.\n(Contributed by Brett Cannon in bpo-27186.)\nSee the summary of PEP 519 for details.\npdb¶\nThe Pdb\nclass constructor has a new optional readrc argument\nto control whether .pdbrc\nfiles should be read.\npickle¶\nObjects that need __new__\ncalled with keyword arguments can now be pickled\nusing pickle protocols older than protocol version 4.\nProtocol version 4 already supports this case.
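A short sketch of the pickle change above; the class and its keyword-only argument are illustrative:

```python
import pickle

class Point:
    def __new__(cls, *, x=0):
        self = super().__new__(cls)
        self.x = x
        return self

    def __getnewargs_ex__(self):
        # Tell pickle to call __new__ with keyword arguments on load.
        return ((), {"x": self.x})

# Protocol 2 predates protocol 4, yet keyword arguments to __new__ now work.
restored = pickle.loads(pickle.dumps(Point(x=5), protocol=2))
assert restored.x == 5
```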
(Contributed by Serhiy\nStorchaka in bpo-24164.)\npickletools¶\npickletools.dis()\nnow outputs the implicit memo index for the\nMEMOIZE\nopcode.\n(Contributed by Serhiy Storchaka in bpo-25382.)\npydoc¶\nThe pydoc\nmodule has learned to respect the MANPAGER\nenvironment variable.\n(Contributed by Matthias Klose in bpo-8637.)\nhelp()\nand pydoc\ncan now list named tuple fields in the\norder they were defined rather than alphabetically.\n(Contributed by Raymond Hettinger in bpo-24879.)\nrandom¶\nThe new choices()\nfunction returns a list of elements of\nspecified size from the given population with optional weights.\n(Contributed by Raymond Hettinger in bpo-18844.)\nre¶\nAdded support for modifier spans in regular expressions. Examples:\n'(?i:p)ython'\nmatches 'python'\nand 'Python'\n, but not 'PYTHON'\n;\n'(?i)g(?-i:v)r'\nmatches 'GvR'\nand 'gvr'\n, but not 'GVR'\n.\n(Contributed by Serhiy Storchaka in bpo-433028.)\nMatch object groups can be accessed by __getitem__\n, which is\nequivalent to group()\n. So mo['name']\nis now equivalent to\nmo.group('name')\n. (Contributed by Eric Smith in bpo-24454.)\nMatch\nobjects now support\nindex-like objects\nas group\nindices.\n(Contributed by Jeroen Demeyer and Xiang Zhang in bpo-27177.)\nreadline¶\nAdded set_auto_history()\nto enable or disable\nautomatic addition of input to the history list. (Contributed by\nTyler Crompton in bpo-26870.)\nrlcompleter¶\nPrivate and special attribute names are now omitted unless the prefix starts with underscores. A space or a colon is added after some completed keywords.
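The re changes described above (modifier spans and match-object indexing) can be sketched as:

```python
import re

# Scoped inline flag: case-insensitive for the 'p' only.
assert re.fullmatch("(?i:p)ython", "Python")
assert not re.fullmatch("(?i:p)ython", "PYTHON")

# Match objects now support __getitem__ as shorthand for group().
m = re.match(r"(?P<first>\w+) (?P<last>\w+)", "Monty Python")
assert m["first"] == m.group("first") == "Monty"
```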
(Contributed by Serhiy Storchaka in bpo-25011 and bpo-25209.)\nshlex¶\nThe shlex\nmodule has much\nimproved shell compatibility\nthrough the new punctuation_chars argument to control which characters\nare treated as punctuation.\n(Contributed by Vinay Sajip in bpo-1521950.)\nsite¶\nWhen specifying paths to add to sys.path\nin a .pth\nfile,\nyou may now specify file paths on top of directories (e.g. zip files).\n(Contributed by Wolfgang Langner in bpo-26587).\nsqlite3¶\nsqlite3.Cursor.lastrowid\nnow supports the REPLACE\nstatement.\n(Contributed by Alex LordThorsen in bpo-16864.)\nsocket¶\nThe ioctl()\nfunction now supports the\nSIO_LOOPBACK_FAST_PATH\ncontrol code.\n(Contributed by Daniel Stokes in bpo-26536.)\nThe getsockopt()\nconstants SO_DOMAIN\n,\nSO_PROTOCOL\n, SO_PEERSEC\n, and SO_PASSSEC\nare now supported.\n(Contributed by Christian Heimes in bpo-26907.)\nThe setsockopt()\nmethod now supports the\nsetsockopt(level, optname, None, optlen: int)\nform.\n(Contributed by Christian Heimes in bpo-27744.)\nThe socket module now supports the address family\nAF_ALG\nto interface with the Linux kernel crypto API. ALG_*\n,\nSOL_ALG\nand sendmsg_afalg()\nwere added.\n(Contributed by Christian Heimes in bpo-27744 with support from\nVictor Stinner.)\nNew Linux constants TCP_USER_TIMEOUT\nand TCP_CONGESTION\nwere added.\n(Contributed by Omar Sandoval, bpo-26273).\nsocketserver¶\nServers based on the socketserver\nmodule, including those\ndefined in http.server\n, xmlrpc.server\nand\nwsgiref.simple_server\n, now support the context manager\nprotocol.\n(Contributed by Aviv Palivoda in bpo-26404.)\nThe wfile\nattribute of\nStreamRequestHandler\nclasses now implements\nthe io.BufferedIOBase\nwritable interface. In particular,\ncalling write()\nis now guaranteed to send the\ndata in full. (Contributed by Martin Panter in bpo-26721.)\nssl¶\nssl\nsupports OpenSSL 1.1.0.
The minimum recommended version is 1.0.2.\n(Contributed by Christian Heimes in bpo-26470.)\n3DES has been removed from the default cipher suites and ChaCha20 Poly1305 cipher suites have been added. (Contributed by Christian Heimes in bpo-27850 and bpo-27766.)\nSSLContext\nhas better default configuration for options\nand ciphers.\n(Contributed by Christian Heimes in bpo-28043.)\nAn SSL session can be copied from one client-side connection to another\nwith the new SSLSession\nclass. TLS session resumption can\nspeed up the initial handshake, reduce latency and improve performance.\n(Contributed by Christian Heimes in bpo-19500 based on a draft by\nAlex Warhawk.)\nThe new get_ciphers()\nmethod can be used to\nget a list of enabled ciphers in order of cipher priority.\nAll constants and flags have been converted to IntEnum\nand\nIntFlag\n.\n(Contributed by Christian Heimes in bpo-28025.)\nServer and client-side specific TLS protocols for SSLContext\nwere added.\n(Contributed by Christian Heimes in bpo-28085.)\nAdded ssl.SSLContext.post_handshake_auth\nto enable and\nssl.SSLSocket.verify_client_post_handshake()\nto initiate TLS 1.3\npost-handshake authentication.\n(Contributed by Christian Heimes in gh-78851.)\nstatistics¶\nA new harmonic_mean()\nfunction has been added.\n(Contributed by Steven D’Aprano in bpo-27181.)\nstruct¶\nstruct\nnow supports IEEE 754 half-precision floats via the 'e'\nformat specifier.\n(Contributed by Eli Stevens, Mark Dickinson in bpo-11734.)\nsubprocess¶\nThe subprocess.Popen\ndestructor now emits a ResourceWarning\nwarning\nif the child process is still running. Use the context manager protocol (with\nproc: ...\n) or explicitly call the wait()\nmethod to\nread the exit status of the child process. (Contributed by Victor Stinner in\nbpo-26741.)\nThe subprocess.Popen\nconstructor and all functions that pass arguments\nthrough to it now accept encoding and errors arguments.
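A minimal sketch of the new encoding argument (the child command is illustrative):

```python
import subprocess
import sys

# With encoding given, captured output arrives as str rather than bytes.
proc = subprocess.run(
    [sys.executable, "-c", "print('hello')"],
    stdout=subprocess.PIPE,
    encoding="utf-8",
)
assert proc.stdout == "hello\n"
```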
Specifying either\nof these will enable text mode for the stdin, stdout and stderr streams.\n(Contributed by Steve Dower in bpo-6135.)\nsys¶\nThe new getfilesystemencodeerrors()\nfunction returns the name of\nthe error mode used to convert between Unicode filenames and bytes filenames.\n(Contributed by Steve Dower in bpo-27781.)\nOn Windows the return value of the getwindowsversion()\nfunction\nnow includes the platform_version field which contains the accurate major\nversion, minor version and build number of the current operating system,\nrather than the version that is being emulated for the process.\n(Contributed by Steve Dower in bpo-27932.)\ntelnetlib¶\ntelnetlib.Telnet\nis now a context manager (contributed by\nStéphane Wirtel in bpo-25485).\ntime¶\nThe struct_time\nattributes tm_gmtoff\nand\ntm_zone\nare now available on all platforms.\ntimeit¶\nThe new Timer.autorange()\nconvenience\nmethod has been added to call Timer.timeit()\nrepeatedly so that the total run time is greater than or equal to 200 milliseconds.\n(Contributed by Steven D’Aprano in bpo-6422.)\ntimeit\nnow warns when there is substantial (4x) variance\nbetween best and worst times.\n(Contributed by Serhiy Storchaka in bpo-23552.)\ntkinter¶\nAdded methods Variable.trace_add()\n,\nVariable.trace_remove()\nand trace_info()\nin the tkinter.Variable\nclass.
They replace old methods\ntrace_variable()\n, trace()\n,\ntrace_vdelete()\nand\ntrace_vinfo()\nthat use obsolete Tcl commands and might\nnot work in future versions of Tcl.\n(Contributed by Serhiy Storchaka in bpo-22115).\ntraceback¶\nBoth the traceback module and the interpreter’s builtin exception display now abbreviate long sequences of repeated lines in tracebacks as shown in the following example:\n>>> def f(): f()\n...\n>>> f()\nTraceback (most recent call last):\nFile "<stdin>", line 1, in <module>\nFile "<stdin>", line 1, in f\nFile "<stdin>", line 1, in f\nFile "<stdin>", line 1, in f\n[Previous line repeated 995 more times]\nRecursionError: maximum recursion depth exceeded\n(Contributed by Emanuel Barry in bpo-26823.)\ntracemalloc¶\nThe tracemalloc\nmodule now supports tracing memory allocations in\nmultiple different address spaces.\nThe new DomainFilter\nfilter class has been added\nto filter block traces by their address space (domain).\n(Contributed by Victor Stinner in bpo-26588.)\ntyping¶\nSince the typing\nmodule is provisional,\nall changes introduced in Python 3.6 have also been\nbackported to Python 3.5.x.\nThe typing\nmodule has much improved support for generic type\naliases. For example Dict[str, Tuple[S, T]]\nis now a valid\ntype annotation.\n(Contributed by Guido van Rossum in Github #195.)\nThe typing.ContextManager\nclass has been added for\nrepresenting contextlib.AbstractContextManager\n.\n(Contributed by Brett Cannon in bpo-25609.)\nThe typing.Collection\nclass has been added for\nrepresenting collections.abc.Collection\n.\n(Contributed by Ivan Levkivskyi in bpo-27598.)\nThe typing.ClassVar\ntype construct has been added to\nmark class variables.
As introduced in PEP 526, a variable annotation\nwrapped in ClassVar indicates that a given attribute is intended to be used as\na class variable and should not be set on instances of that class.\n(Contributed by Ivan Levkivskyi in Github #280.)\nA new TYPE_CHECKING\nconstant that is assumed to be\nTrue\nby the static type checkers, but is False\nat runtime.\n(Contributed by Guido van Rossum in Github #230.)\nA new NewType()\nhelper function has been added to create\nlightweight distinct types for annotations:\nfrom typing import NewType\nUserId = NewType('UserId', int)\nsome_id = UserId(524313)\nThe static type checker will treat the new type as if it were a subclass of the original type. (Contributed by Ivan Levkivskyi in Github #189.)\nunicodedata¶\nThe unicodedata\nmodule now uses data from Unicode 9.0.0.\n(Contributed by Benjamin Peterson.)\nunittest.mock¶\nThe Mock\nclass has the following improvements:\nTwo new methods, Mock.assert_called()\nand Mock.assert_called_once()\n, to check if the mock object was called. (Contributed by Amit Saha in bpo-26323.)\nThe Mock.reset_mock()\nmethod now has two optional keyword-only arguments: return_value and side_effect. (Contributed by Kushal Das in bpo-21271.)\nurllib.request¶\nIf an HTTP request has a file or iterable body (other than a\nbytes object) but no Content-Length\nheader, rather than\nthrowing an error, AbstractHTTPHandler\nnow falls back to using chunked transfer encoding.\n(Contributed by Demian Brecht and Rolf Krahl in bpo-12319.)\nurllib.robotparser¶\nRobotFileParser\nnow supports the Crawl-delay\nand\nRequest-rate\nextensions.\n(Contributed by Nikolay Bogoychev in bpo-16099.)\nvenv¶\nvenv\naccepts a new parameter --prompt\n. This parameter provides an\nalternative prefix for the virtual environment.
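The same option is available programmatically as the prompt argument of venv.EnvBuilder; a sketch (the prompt string is illustrative):

```python
import venv

# Mirrors the command line: python -m venv --prompt demo .venv
builder = venv.EnvBuilder(prompt="demo")
print(builder.prompt)
```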
(Proposed by Łukasz Balcerzak\nand ported to 3.6 by Stéphane Wirtel in bpo-22829.)\nwarnings¶\nA new optional source parameter has been added to the\nwarnings.warn_explicit()\nfunction: the destroyed object which emitted a\nResourceWarning\n. A source attribute has also been added to\nwarnings.WarningMessage\n(contributed by Victor Stinner in\nbpo-26568 and bpo-26567).\nWhen a ResourceWarning\nwarning is logged, the tracemalloc\nmodule is now\nused to try to retrieve the traceback where the destroyed object was allocated.\nExample with the script example.py\n:\nimport warnings\ndef func():\n    return open(__file__)\nf = func()\nf = None\nOutput of the command python3.6 -Wd -X tracemalloc=5 example.py\n:\nexample.py:7: ResourceWarning: unclosed file <_io.TextIOWrapper name='example.py' mode='r' encoding='UTF-8'>\nf = None\nObject allocated at (most recent call first):\nFile "example.py", lineno 4\n    return open(__file__)\nFile "example.py", lineno 6\n    f = func()\nThe “Object allocated at” traceback is new and is only displayed if\ntracemalloc\nis tracing Python memory allocations and if the\nwarnings\nmodule was already imported.\nwinreg¶\nAdded the 64-bit integer type REG_QWORD\n.\n(Contributed by Clement Rouault in bpo-23026.)\nwinsound¶\nAllowed keyword arguments to be passed to Beep\n,\nMessageBeep\n, and PlaySound\n(bpo-27982).\nxmlrpc.client¶\nThe xmlrpc.client\nmodule now supports unmarshalling\nadditional data types used by the Apache XML-RPC implementation\nfor numerics and None\n.\n(Contributed by Serhiy Storchaka in bpo-26885.)\nzipfile¶\nA new ZipInfo.from_file()\nclass method\nallows making a ZipInfo\ninstance from a filesystem file.\nA new ZipInfo.is_dir()\nmethod can be used\nto check if the ZipInfo\ninstance represents a directory.\n(Contributed by Thomas Kluyver in bpo-26039.)\nThe ZipFile.open()\nmethod can now be used to\nwrite data into a ZIP file, as well as for extracting data.\n(Contributed by Thomas
Kluyver in bpo-26039.)\nzlib\u00b6\nThe compress()\nand decompress()\nfunctions now accept\nkeyword arguments.\n(Contributed by Aviv Palivoda in bpo-26243 and\nXiang Zhang in bpo-16764 respectively.)\nOptimizations\u00b6\nThe Python interpreter now uses a 16-bit wordcode instead of bytecode which made a number of opcode optimizations possible. (Contributed by Demur Rumed with input and reviews from Serhiy Storchaka and Victor Stinner in bpo-26647 and bpo-28050.)\nThe\nasyncio.Future\nclass now has an optimized C implementation. (Contributed by Yury Selivanov and INADA Naoki in bpo-26081.)The\nasyncio.Task\nclass now has an optimized C implementation. (Contributed by Yury Selivanov in bpo-28544.)Various implementation improvements in the\ntyping\nmodule (such as caching of generic types) allow up to 30 times performance improvements and reduced memory footprint.The ASCII decoder is now up to 60 times as fast for error handlers\nsurrogateescape\n,ignore\nandreplace\n(Contributed by Victor Stinner in bpo-24870).The ASCII and the Latin1 encoders are now up to 3 times as fast for the error handler\nsurrogateescape\n(Contributed by Victor Stinner in bpo-25227).The UTF-8 encoder is now up to 75 times as fast for error handlers\nignore\n,replace\n,surrogateescape\n,surrogatepass\n(Contributed by Victor Stinner in bpo-25267).The UTF-8 decoder is now up to 15 times as fast for error handlers\nignore\n,replace\nandsurrogateescape\n(Contributed by Victor Stinner in bpo-25301).bytes % args\nis now up to 2 times faster. (Contributed by Victor Stinner in bpo-25349).bytearray % args\nis now between 2.5 and 5 times faster. (Contributed by Victor Stinner in bpo-25399).Optimize\nbytes.fromhex()\nandbytearray.fromhex()\n: they are now between 2x and 3.5x faster. (Contributed by Victor Stinner in bpo-25401).Optimize\nbytes.replace(b'', b'.')\nandbytearray.replace(b'', b'.')\n: up to 80% faster. 
(Contributed by Josh Snider in bpo-26574). Allocator functions of the\nPyMem_Malloc()\ndomain (PYMEM_DOMAIN_MEM\n) now use the pymalloc memory allocator instead of the malloc()\nfunction of the C library. The pymalloc allocator is optimized for objects smaller than or equal to 512 bytes with a short lifetime, and uses malloc()\nfor larger memory blocks. (Contributed by Victor Stinner in bpo-26249). pickle.load()\nand pickle.loads()\nare now up to 10% faster when deserializing many small objects (Contributed by Victor Stinner in bpo-27056). Passing keyword arguments to a function has an overhead in comparison with passing positional arguments. Now in extension functions implemented using Argument Clinic this overhead is significantly decreased. (Contributed by Serhiy Storchaka in bpo-27574).\nOptimized\nglob()\nand iglob()\nfunctions in the glob\nmodule; they are now about 3\u20136 times faster. (Contributed by Serhiy Storchaka in bpo-25596). Optimized globbing in\npathlib\nby using os.scandir()\n; it is now about 1.5\u20134 times faster. (Contributed by Serhiy Storchaka in bpo-26032). xml.etree.ElementTree\nparsing, iteration and deepcopy performance has been significantly improved. (Contributed by Serhiy Storchaka in bpo-25638, bpo-25873, and bpo-25869.) Creation of\nfractions.Fraction\ninstances from floats and decimals is now 2 to 3 times faster. (Contributed by Serhiy Storchaka in bpo-25971.)\nBuild and C API Changes\u00b6\nPython now requires some C99 support in the toolchain to build. Most notably, Python now uses standard integer types and macros in place of custom macros like\nPY_LONG_LONG\n. For more information, see PEP 7 and bpo-17884. Cross-compiling CPython with the Android NDK and the Android API level set to 21 (Android 5.0 Lollipop) or greater runs successfully. While Android is not yet a supported platform, the Python test suite runs on the Android emulator with only about 16 test failures. 
See the Android meta-issue bpo-26865.\nThe\n--enable-optimizations\nconfigure flag has been added. Turning it on will activate expensive optimizations like PGO. (Original patch by Alecsandru Patrascu of Intel in bpo-26359.)The GIL must now be held when allocator functions of\nPYMEM_DOMAIN_OBJ\n(ex:PyObject_Malloc()\n) andPYMEM_DOMAIN_MEM\n(ex:PyMem_Malloc()\n) domains are called.New\nPy_FinalizeEx()\nAPI which indicates if flushing buffered data failed. (Contributed by Martin Panter in bpo-5319.)PyArg_ParseTupleAndKeywords()\nnow supports positional-only parameters. Positional-only parameters are defined by empty names. (Contributed by Serhiy Storchaka in bpo-26282).PyTraceback_Print\nmethod now abbreviates long sequences of repeated lines as\"[Previous line repeated {count} more times]\"\n. (Contributed by Emanuel Barry in bpo-26823.)The new\nPyErr_SetImportErrorSubclass()\nfunction allows for specifying a subclass ofImportError\nto raise. (Contributed by Eric Snow in bpo-15767.)The new\nPyErr_ResourceWarning()\nfunction can be used to generate aResourceWarning\nproviding the source of the resource allocation. (Contributed by Victor Stinner in bpo-26567.)The new\nPyOS_FSPath()\nfunction returns the file system representation of a path-like object. (Contributed by Brett Cannon in bpo-27186.)The\nPyUnicode_FSConverter()\nandPyUnicode_FSDecoder()\nfunctions will now accept path-like objects.\nOther Improvements\u00b6\nWhen\n--version\n(short form:-V\n) is supplied twice, Python printssys.version\nfor detailed information.$ ./python -VV Python 3.6.0b4+ (3.6:223967b49e49+, Nov 21 2016, 20:55:04) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)]\nDeprecated\u00b6\nNew Keywords\u00b6\nasync\nand await\nare not recommended to be used as variable, class,\nfunction or module names. Introduced by PEP 492 in Python 3.5, they will\nbecome proper keywords in Python 3.7. 
Starting in Python 3.6, the use of\nasync\nor await\nas names will generate a DeprecationWarning\n.\nDeprecated Python behavior\u00b6\nRaising the StopIteration\nexception inside a generator will now\ngenerate a DeprecationWarning\n, and will trigger a RuntimeError\nin Python 3.7. See PEP 479: Change StopIteration handling inside generators for details.\nThe __aiter__()\nmethod is now expected to return an asynchronous\niterator directly instead of returning an awaitable as previously.\nDoing the former will trigger a DeprecationWarning\n. Backward\ncompatibility will be removed in Python 3.7.\n(Contributed by Yury Selivanov in bpo-27243.)\nA backslash-character pair that is not a valid escape sequence now generates\na DeprecationWarning\n. Although this will eventually become a\nSyntaxError\n, that will not be for several Python releases.\n(Contributed by Emanuel Barry in bpo-27364.)\nWhen performing a relative import, falling back on __name__\nand\n__path__\nfrom the calling module when __spec__\nor\n__package__\nare not defined now raises an ImportWarning\n.\n(Contributed by Rose Ames in bpo-25791.)\nDeprecated Python modules, functions and methods\u00b6\nasynchat\u00b6\nThe asynchat\nhas been deprecated in favor of asyncio\n.\n(Contributed by Mariatta in bpo-25002.)\nasyncore\u00b6\nThe asyncore\nhas been deprecated in favor of asyncio\n.\n(Contributed by Mariatta in bpo-25002.)\ndbm\u00b6\nUnlike other dbm\nimplementations, the dbm.dumb\nmodule\ncreates databases with the 'rw'\nmode and allows modifying the database\nopened with the 'r'\nmode. This behavior is now deprecated and will\nbe removed in 3.8.\n(Contributed by Serhiy Storchaka in bpo-21708.)\ndistutils\u00b6\nThe undocumented extra_path\nargument to the\ndistutils.Distribution\nconstructor is now considered deprecated\nand will raise a warning if set. Support for this parameter will be\nremoved in a future Python release. 
See bpo-27919 for details.\ngrp\u00b6\nThe support of non-integer arguments in getgrgid()\nhas been\ndeprecated.\n(Contributed by Serhiy Storchaka in bpo-26129.)\nimportlib\u00b6\nThe importlib.machinery.SourceFileLoader.load_module()\nand\nimportlib.machinery.SourcelessFileLoader.load_module()\nmethods\nare now deprecated. They were the only remaining implementations of\nimportlib.abc.Loader.load_module()\nin importlib\nthat had not\nbeen deprecated in previous versions of Python in favour of\nimportlib.abc.Loader.exec_module()\n.\nThe importlib.machinery.WindowsRegistryFinder\nclass is now\ndeprecated. As of 3.6.0, it is still added to sys.meta_path\nby\ndefault (on Windows), but this may change in future releases.\nos\u00b6\nUndocumented support of general bytes-like objects\nas paths in os\nfunctions, compile()\nand similar functions is\nnow deprecated.\n(Contributed by Serhiy Storchaka in bpo-25791 and bpo-26754.)\nre\u00b6\nSupport for inline flags (?letters)\nin the middle of the regular\nexpression has been deprecated and will be removed in a future Python\nversion. Flags at the start of a regular expression are still allowed.\n(Contributed by Serhiy Storchaka in bpo-22493.)\nssl\u00b6\nOpenSSL 0.9.8, 1.0.0 and 1.0.1 are deprecated and no longer supported.\nIn the future the ssl\nmodule will require at least OpenSSL 1.0.2 or\n1.1.0.\nSSL-related arguments like certfile\n, keyfile\nand check_hostname\nin ftplib\n, http.client\n, imaplib\n, poplib\n,\nand smtplib\nhave been deprecated in favor of context\n.\n(Contributed by Christian Heimes in bpo-28022.)\nA couple of protocols and functions of the ssl\nmodule are now\ndeprecated. Some features will no longer be available in future versions\nof OpenSSL. Other features are deprecated in favor of a different API.\n(Contributed by Christian Heimes in bpo-28022 and bpo-26470.)\ntkinter\u00b6\nThe tkinter.tix\nmodule is now deprecated. 
tkinter\nusers\nshould use tkinter.ttk\ninstead.\nvenv\u00b6\nThe pyvenv\nscript has been deprecated in favour of python3 -m venv\n.\nThis prevents confusion as to what Python interpreter pyvenv\nis\nconnected to and thus what Python interpreter will be used by the virtual\nenvironment. (Contributed by Brett Cannon in bpo-25154.)\nxml\u00b6\nAs mitigation against DTD and external entity retrieval, the\nxml.dom.minidom\nand xml.sax\nmodules no longer process external entities by default. (Contributed by Christian Heimes in gh-61441.)\nDeprecated functions and types of the C API\u00b6\nUndocumented functions PyUnicode_AsEncodedObject()\n,\nPyUnicode_AsDecodedObject()\n, PyUnicode_AsEncodedUnicode()\nand PyUnicode_AsDecodedUnicode()\nare now deprecated.\nUse the generic codec based API instead.\nDeprecated Build Options\u00b6\nThe --with-system-ffi\nconfigure flag is now on by default on non-macOS\nUNIX platforms. It may be disabled by using --without-system-ffi\n, but\nusing the flag is deprecated and will not be accepted in Python 3.7.\nmacOS is unaffected by this change. Note that many OS distributors already\nuse the --with-system-ffi\nflag when building their system Python.\nRemoved\u00b6\nAPI and Feature Removals\u00b6\nUnknown escapes consisting of\n'\\'\nand an ASCII letter in regular expressions will now cause an error. In replacement templates for re.sub()\nthey are still allowed, but deprecated. The re.LOCALE\nflag can now only be used with binary patterns. inspect.getmoduleinfo()\nwas removed (was deprecated since CPython 3.3). inspect.getmodulename()\nshould be used for obtaining the module name for a given path. (Contributed by Yury Selivanov in bpo-13248.) The traceback.Ignore\nclass and traceback.usage\n, traceback.modname\n, traceback.fullmodname\n, traceback.find_lines_from_code\n, traceback.find_lines\n, traceback.find_strings\n, traceback.find_executable_lines\nmethods were removed from the traceback\nmodule. 
They were undocumented methods deprecated since Python 3.2 and equivalent functionality is available from private methods.The\ntk_menuBar()\nandtk_bindForTraversal()\ndummy methods intkinter\nwidget classes were removed (corresponding Tk commands were obsolete since Tk 4.0).The\nopen()\nmethod of thezipfile.ZipFile\nclass no longer supports the'U'\nmode (was deprecated since Python 3.4). Useio.TextIOWrapper\nfor reading compressed text files in universal newlines mode.The undocumented\nIN\n,CDROM\n,DLFCN\n,TYPES\n,CDIO\n, andSTROPTS\nmodules have been removed. They had been available in the platform specificLib/plat-*/\ndirectories, but were chronically out of date, inconsistently available across platforms, and unmaintained. The script that created these modules is still available in the source distribution at Tools/scripts/h2py.py.The deprecated\nasynchat.fifo\nclass has been removed.\nPorting to Python 3.6\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in \u2018python\u2019 Command Behavior\u00b6\nThe output of a special Python build with defined\nCOUNT_ALLOCS\n,SHOW_ALLOC_COUNT\norSHOW_TRACK_COUNT\nmacros is now off by default. It can be re-enabled using the-X showalloccount\noption. It now outputs tostderr\ninstead ofstdout\n. (Contributed by Serhiy Storchaka in bpo-23034.)\nChanges in the Python API\u00b6\nopen()\nwill no longer allow combining the'U'\nmode flag with'+'\n. (Contributed by Jeff Balogh and John O\u2019Connor in bpo-2091.)sqlite3\nno longer implicitly commits an open transaction before DDL statements.On Linux,\nos.urandom()\nnow blocks until the system urandom entropy pool is initialized to increase the security.When\nimportlib.abc.Loader.exec_module()\nis defined,importlib.abc.Loader.create_module()\nmust also be defined.PyErr_SetImportError()\nnow setsTypeError\nwhen its msg argument is not set. 
Previously onlyNULL\nwas returned.The format of the\nco_lnotab\nattribute of code objects changed to support a negative line number delta. By default, Python does not emit bytecode with a negative line number delta. Functions usingframe.f_lineno\n,PyFrame_GetLineNumber()\norPyCode_Addr2Line()\nare not affected. Functions directly decodingco_lnotab\nshould be updated to use a signed 8-bit integer type for the line number delta, but this is only required to support applications using a negative line number delta. SeeObjects/lnotab_notes.txt\nfor theco_lnotab\nformat and how to decode it, and see the PEP 511 for the rationale.The functions in the\ncompileall\nmodule now return booleans instead of1\nor0\nto represent success or failure, respectively. Thanks to booleans being a subclass of integers, this should only be an issue if you were doing identity checks for1\nor0\n. See bpo-25768.Reading the\nport\nattribute ofurllib.parse.urlsplit()\nandurlparse()\nresults now raisesValueError\nfor out-of-range values, rather than returningNone\n. See bpo-20059.The\nimp\nmodule now raises aDeprecationWarning\ninstead ofPendingDeprecationWarning\n.The following modules have had missing APIs added to their\n__all__\nattributes to match the documented APIs:calendar\n,cgi\n,csv\n,ElementTree\n,enum\n,fileinput\n,ftplib\n,logging\n,mailbox\n,mimetypes\n,optparse\n,plistlib\n,smtpd\n,subprocess\n,tarfile\n,threading\nandwave\n. This means they will export new symbols whenimport *\nis used. (Contributed by Joel Taddei and Jacek Ko\u0142odziej in bpo-23883.)When performing a relative import, if\n__package__\ndoes not compare equal to__spec__.parent\nthenImportWarning\nis raised. (Contributed by Brett Cannon in bpo-25791.)When a relative import is performed and no parent package is known, then\nImportError\nwill be raised. Previously,SystemError\ncould be raised. 
(Contributed by Brett Cannon in bpo-18018.)Servers based on the\nsocketserver\nmodule, including those defined inhttp.server\n,xmlrpc.server\nandwsgiref.simple_server\n, now only catch exceptions derived fromException\n. Therefore if a request handler raises an exception likeSystemExit\norKeyboardInterrupt\n,handle_error()\nis no longer called, and the exception will stop a single-threaded server. (Contributed by Martin Panter in bpo-23430.)spwd.getspnam()\nnow raises aPermissionError\ninstead ofKeyError\nif the user doesn\u2019t have privileges.The\nsocket.socket.close()\nmethod now raises an exception if an error (e.g.EBADF\n) was reported by the underlying system call. (Contributed by Martin Panter in bpo-26685.)The decode_data argument for the\nsmtpd.SMTPChannel\nandsmtpd.SMTPServer\nconstructors is nowFalse\nby default. This means that the argument passed toprocess_message()\nis now a bytes object by default, andprocess_message()\nwill be passed keyword arguments. Code that has already been updated in accordance with the deprecation warning generated by 3.5 will not be affected.All optional arguments of the\ndump()\n,dumps()\n,load()\nandloads()\nfunctions andJSONEncoder\nandJSONDecoder\nclass constructors in thejson\nmodule are now keyword-only. (Contributed by Serhiy Storchaka in bpo-18726.)Subclasses of\ntype\nwhich don\u2019t overridetype.__new__\nmay no longer use the one-argument form to get the type of an object.As part of PEP 487, the handling of keyword arguments passed to\ntype\n(other than the metaclass hint,metaclass\n) is now consistently delegated toobject.__init_subclass__()\n. This means thattype.__new__\nandtype.__init__\nboth now accept arbitrary keyword arguments, butobject.__init_subclass__()\n(which is called fromtype.__new__\n) will reject them by default. 
Custom metaclasses accepting additional keyword arguments will need to adjust their calls totype.__new__\n(whether direct or viasuper\n) accordingly.In\ndistutils.command.sdist.sdist\n, thedefault_format\nattribute has been removed and is no longer honored. Instead, the gzipped tarfile format is the default on all platforms and no platform-specific selection is made. In environments where distributions are built on Windows and zip distributions are required, configure the project with asetup.cfg\nfile containing the following:[sdist] formats=zip\nThis behavior has also been backported to earlier Python versions by Setuptools 26.0.0.\nIn the\nurllib.request\nmodule and thehttp.client.HTTPConnection.request()\nmethod, if no Content-Length header field has been specified and the request body is a file object, it is now sent with HTTP 1.1 chunked encoding. If a file object has to be sent to a HTTP 1.0 server, the Content-Length value now has to be specified by the caller. (Contributed by Demian Brecht and Rolf Krahl with tweaks from Martin Panter in bpo-12319.)The\nDictReader\nnow returns rows of typeOrderedDict\n. (Contributed by Steve Holden in bpo-27842.)The\ncrypt.METHOD_CRYPT\nwill no longer be added tocrypt.methods\nif unsupported by the platform. (Contributed by Victor Stinner in bpo-25287.)The verbose and rename arguments for\nnamedtuple()\nare now keyword-only. (Contributed by Raymond Hettinger in bpo-25628.)On Linux,\nctypes.util.find_library()\nnow looks inLD_LIBRARY_PATH\nfor shared libraries. (Contributed by Vinay Sajip in bpo-9998.)The\nimaplib.IMAP4\nclass now handles flags containing the']'\ncharacter in messages sent from the server to improve real-world compatibility. (Contributed by Lita Cho in bpo-21815.)The\nmmap.mmap.write()\nfunction now returns the number of bytes written like other write methods. (Contributed by Jakub Stasiak in bpo-26335.)The\npkgutil.iter_modules()\nandpkgutil.walk_packages()\nfunctions now returnModuleInfo\nnamed tuples. 
(Contributed by Ramchandra Apte in bpo-17211.) re.sub()\nnow raises an error for invalid numerical group references in replacement templates even if the pattern is not found in the string. The error message for invalid group references now includes the group index and the position of the reference. (Contributed by SilentGhost, Serhiy Storchaka in bpo-25953.) zipfile.ZipFile\nwill now raise NotImplementedError\nfor unrecognized compression values. Previously a plain RuntimeError\nwas raised. Additionally, calling ZipFile\nmethods on a closed ZipFile or calling the write()\nmethod on a ZipFile created with mode 'r'\nwill raise a ValueError\n. Previously, a RuntimeError\nwas raised in those scenarios. When custom metaclasses are combined with zero-argument\nsuper()\nor direct references from methods to the implicit __class__\nclosure variable, the implicit __classcell__\nnamespace entry must now be passed up to type.__new__\nfor initialisation. Failing to do so will result in a DeprecationWarning\nin Python 3.6 and a RuntimeError\nin Python 3.8. With the introduction of\nModuleNotFoundError\n, import system consumers may start expecting import system replacements to raise that more specific exception when appropriate, rather than the less-specific ImportError\n. To provide future compatibility with such consumers, implementers of alternative import systems that completely replace __import__()\nwill need to update their implementations to raise the new subclass when a module can\u2019t be found at all. Implementers of compliant plugins to the default import system shouldn\u2019t need to make any changes, as the default import system will raise the new subclass when appropriate.\nChanges in the C API\u00b6\nThe\nPyMem_Malloc()\nallocator family now uses the pymalloc allocator rather than the system malloc()\n. Applications calling PyMem_Malloc()\nwithout holding the GIL can now crash. 
Set thePYTHONMALLOC\nenvironment variable todebug\nto validate the usage of memory allocators in your application. See bpo-26249.Py_Exit()\n(and the main interpreter) now override the exit status with 120 if flushing buffered data failed. See bpo-5319.\nCPython bytecode changes\u00b6\nThere have been several major changes to the bytecode in Python 3.6.\nThe Python interpreter now uses a 16-bit wordcode instead of bytecode. (Contributed by Demur Rumed with input and reviews from Serhiy Storchaka and Victor Stinner in bpo-26647 and bpo-28050.)\nThe new\nFORMAT_VALUE\nandBUILD_STRING\nopcodes as part of the formatted string literal implementation. (Contributed by Eric Smith in bpo-25483 and Serhiy Storchaka in bpo-27078.)The new\nBUILD_CONST_KEY_MAP\nopcode to optimize the creation of dictionaries with constant keys. (Contributed by Serhiy Storchaka in bpo-27140.)The function call opcodes have been heavily reworked for better performance and simpler implementation. The\nMAKE_FUNCTION\n,CALL_FUNCTION\n,CALL_FUNCTION_KW\nandBUILD_MAP_UNPACK_WITH_CALL\nopcodes have been modified, the newCALL_FUNCTION_EX\nandBUILD_TUPLE_UNPACK_WITH_CALL\nhave been added, andCALL_FUNCTION_VAR\n,CALL_FUNCTION_VAR_KW\nandMAKE_CLOSURE\nopcodes have been removed. (Contributed by Demur Rumed in bpo-27095, and Serhiy Storchaka in bpo-27213, bpo-28257.)The new\nSETUP_ANNOTATIONS\nandSTORE_ANNOTATION\nopcodes have been added to support the new variable annotation syntax. (Contributed by Ivan Levkivskyi in bpo-27985.)\nNotable changes in Python 3.6.2\u00b6\nNew make regen-all\nbuild target\u00b6\nTo simplify cross-compilation, and to ensure that CPython can reliably be compiled without requiring an existing version of Python to already be available, the autotools-based build system no longer attempts to implicitly recompile generated files based on file modification times.\nInstead, a new make regen-all\ncommand has been added to force regeneration\nof these files when desired (e.g. 
after an initial version of Python has\nalready been built based on the pregenerated versions).\nMore selective regeneration targets are also defined - see Makefile.pre.in for details.\n(Contributed by Victor Stinner in bpo-23404.)\nAdded in version 3.6.2.\nRemoval of make touch\nbuild target\u00b6\nThe make touch\nbuild target previously used to request implicit regeneration\nof generated files by updating their modification times has been removed.\nIt has been replaced by the new make regen-all\ntarget.\n(Contributed by Victor Stinner in bpo-23404.)\nChanged in version 3.6.2.\nNotable changes in Python 3.6.4\u00b6\nThe PyExc_RecursionErrorInst\nsingleton that was part of the public API\nhas been removed as its members being never cleared may cause a segfault\nduring finalization of the interpreter.\n(Contributed by Xavier de Gaye in bpo-22898 and bpo-30697.)\nNotable changes in Python 3.6.5\u00b6\nThe locale.localeconv()\nfunction now sets temporarily the LC_CTYPE\nlocale to the LC_NUMERIC\nlocale in some cases.\n(Contributed by Victor Stinner in bpo-31900.)\nNotable changes in Python 3.6.7\u00b6\nxml.dom.minidom\nand xml.sax\nmodules no longer process\nexternal entities by default. See also gh-61441.\nIn 3.6.7 the tokenize\nmodule now implicitly emits a NEWLINE\ntoken\nwhen provided with input that does not have a trailing new line. This behavior\nnow matches what the C tokenizer does internally.\n(Contributed by Ammar Askar in bpo-33899.)\nNotable changes in Python 3.6.10\u00b6\nDue to significant security concerns, the reuse_address parameter of\nasyncio.loop.create_datagram_endpoint()\nis no longer supported. This is\nbecause of the behavior of the socket option SO_REUSEADDR\nin UDP. 
For more\ndetails, see the documentation for loop.create_datagram_endpoint()\n.\n(Contributed by Kyle Stanley, Antoine Pitrou, and Yury Selivanov in\nbpo-37228.)\nNotable changes in Python 3.6.13\u00b6\nEarlier Python versions allowed using both ;\nand &\nas\nquery parameter separators in urllib.parse.parse_qs()\nand\nurllib.parse.parse_qsl()\n. Due to security concerns, and to conform with\nnewer W3C recommendations, this has been changed to allow only a single\nseparator key, with &\nas the default. This change also affects\ncgi.parse()\nand cgi.parse_multipart()\nas they use the affected\nfunctions internally. For more details, please see their respective\ndocumentation.\n(Contributed by Adam Goldschmidt, Senthil Kumaran and Ken Jin in bpo-42967.)\nNotable changes in Python 3.6.14\u00b6\nA security fix alters the ftplib.FTP\nbehavior to not trust the\nIPv4 address sent from the remote server when setting up a passive data\nchannel. We reuse the ftp server IP address instead. For unusual code\nrequiring the old behavior, set a trust_server_pasv_ipv4_address\nattribute on your FTP instance to True\n. (See gh-87451)\nThe presence of newline or tab characters in parts of a URL allows for some\nforms of attacks. Following the WHATWG specification that updates RFC 3986,\nASCII newline \\n\n, \\r\nand tab \\t\ncharacters are stripped from the\nURL by the parser urllib.parse()\npreventing such attacks. The removal\ncharacters are controlled by a new module level variable\nurllib.parse._UNSAFE_URL_BYTES_TO_REMOVE\n. 
(See gh-88048)", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 18541}
{"url": "https://docs.python.org/3/library/email.errors.html", "title": ": Exception and Defect classes", "content": "email.errors\n: Exception and Defect classes\u00b6\nSource code: Lib/email/errors.py\nThe following exception classes are defined in the email.errors\nmodule:\n- exception email.errors.MessageError\u00b6\nThis is the base class for all exceptions that the\nemail\npackage can raise. It is derived from the standard Exception\nclass and defines no additional methods.\n- exception email.errors.MessageParseError\u00b6\nThis is the base class for exceptions raised by the\nParser\nclass. It is derived from MessageError\n. This class is also used internally by the parser used by headerregistry\n.\n- exception email.errors.HeaderParseError\u00b6\nRaised under some error conditions when parsing the RFC 5322 headers of a message, this class is derived from\nMessageParseError\n. 
The set_boundary()\nmethod will raise this error if the content type is unknown when the method is called. Header\nmay raise this error for certain base64 decoding errors, and when an attempt is made to create a header that appears to contain an embedded header (that is, there is what is supposed to be a continuation line that has no leading whitespace and looks like a header).\n- exception email.errors.BoundaryError\u00b6\nDeprecated and no longer used.\n- exception email.errors.MultipartConversionError\u00b6\nRaised if the\nattach()\nmethod is called on an instance of a class derived from MIMENonMultipart\n(e.g. MIMEImage\n). MultipartConversionError\nmultiply inherits from MessageError\nand the built-in TypeError\n.\n- exception email.errors.HeaderWriteError\u00b6\nRaised when an error occurs when the\ngenerator\noutputs headers.\n- exception email.errors.MessageDefect\u00b6\nThis is the base class for all defects found when parsing email messages. It is derived from\nValueError\n.\n- exception email.errors.HeaderDefect\u00b6\nThis is the base class for all defects found when parsing email headers. It is derived from\nMessageDefect\n.\nHere is the list of the defects that the FeedParser\ncan find while parsing messages. 
Note that the defects are added to the message\nwhere the problem was found, so for example, if a message nested inside a\nmultipart/alternative had a malformed header, that nested message\nobject would have a defect, but the containing messages would not.\nAll defect classes are subclassed from email.errors.MessageDefect\n.\n- exception email.errors.NoBoundaryInMultipartDefect\u00b6\nA message claimed to be a multipart, but had no boundary parameter.\n- exception email.errors.StartBoundaryNotFoundDefect\u00b6\nThe start boundary claimed in the Content-Type header was never found.\n- exception email.errors.CloseBoundaryNotFoundDefect\u00b6\nA start boundary was found, but no corresponding close boundary was ever found.\nAdded in version 3.3.\n- exception email.errors.FirstHeaderLineIsContinuationDefect\u00b6\nThe message had a continuation line as its first header line.\n- exception email.errors.MisplacedEnvelopeHeaderDefect\u00b6\nA \u201cUnix From\u201d header was found in the middle of a header block.\n- exception email.errors.MissingHeaderBodySeparatorDefect\u00b6\nA line was found while parsing headers that had no leading white space but contained no \u2018:\u2019. Parsing continues assuming that the line represents the first line of the body.\nAdded in version 3.3.\n- exception email.errors.MalformedHeaderDefect\u00b6\nA header was found that was missing a colon, or was otherwise malformed.\nDeprecated since version 3.3: This defect has not been used for several Python versions.\n- exception email.errors.MultipartInvariantViolationDefect\u00b6\nA message claimed to be a multipart, but no subparts were found. Note that when a message has this defect, its\nis_multipart()\nmethod may returnFalse\neven though its content type claims to be multipart.\n- exception email.errors.InvalidBase64PaddingDefect\u00b6\nWhen decoding a block of base64 encoded bytes, the padding was not correct. 
Enough padding is added to perform the decode, but the resulting decoded bytes may be invalid.\n- exception email.errors.InvalidBase64CharactersDefect\u00b6\nWhen decoding a block of base64 encoded bytes, characters outside the base64 alphabet were encountered. The characters are ignored, but the resulting decoded bytes may be invalid.\n- exception email.errors.InvalidBase64LengthDefect\u00b6\nWhen decoding a block of base64 encoded bytes, the number of non-padding base64 characters was invalid (1 more than a multiple of 4). The encoded block was kept as-is.\n- exception email.errors.InvalidDateDefect\u00b6\nWhen decoding an invalid or unparsable date field. The original value is kept as-is.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1106} +{"url": "https://docs.python.org/3/library/email.contentmanager.html", "title": ": Managing MIME Content", "content": "email.contentmanager\n: Managing MIME Content\u00b6\nSource code: Lib/email/contentmanager.py\nAdded in version 3.6: [1]\n- class email.contentmanager.ContentManager\u00b6\nBase class for content managers. Provides the standard registry mechanisms to register converters between MIME content and other representations, as well as the\nget_content\nandset_content\ndispatch methods.- get_content(msg, *args, **kw)\u00b6\nLook up a handler function based on the\nmimetype\nof msg (see next paragraph), call it, passing through all arguments, and return the result of the call. 
The expectation is that the handler will extract the payload from msg and return an object that encodes information about the extracted data.\nTo find the handler, look for the following keys in the registry, stopping with the first one found:\nthe string representing the full MIME type (\nmaintype/subtype\n)\nthe string representing the\nmaintype\nthe empty string\nIf none of these keys produce a handler, raise a\nKeyError\nfor the full MIME type.\n- set_content(msg, obj, *args, **kw)\u00b6\nIf the\nmaintype\nis\nmultipart\n, raise a\nTypeError\n; otherwise look up a handler function based on the type of obj (see next paragraph), call\nclear_content()\non the msg, and call the handler function, passing through all arguments. The expectation is that the handler will transform and store obj into msg, possibly making other changes to msg as well, such as adding various MIME headers to encode information needed to interpret the stored data.\nTo find the handler, obtain the type of obj (\ntyp = type(obj)\n), and look for the following keys in the registry, stopping with the first one found:\nthe type itself (\ntyp\n)\nthe type\u2019s fully qualified name (\ntyp.__module__ + '.' + typ.__qualname__\n).\nthe type\u2019s\nqualname\n(typ.__qualname__\n)\nthe type\u2019s\nname\n(typ.__name__\n).\nIf none of the above match, repeat all of the checks above for each of the types in the MRO (\ntyp.__mro__\n). Finally, if no other key yields a handler, check for a handler for the key\nNone\n. If there is no handler for\nNone\n, raise a\nKeyError\nfor the fully qualified name of the type.\nAlso add a MIME-Version header if one is not present (see also\nMIMEPart\n).\n- add_get_handler(key, handler)\u00b6\nRecord the function handler as the handler for key. For the possible values of key, see\nget_content()\n.\n- add_set_handler(typekey, handler)\u00b6\nRecord handler as the function to call when an object of a type matching typekey is passed to\nset_content()\n. 
For the possible values of typekey, see\nset_content()\n.\nContent Manager Instances\u00b6\nCurrently the email package provides only one concrete content manager,\nraw_data_manager\n, although more may be added in the future.\nraw_data_manager\nis the\ncontent_manager\nprovided by\nEmailPolicy\nand its derivatives.\n- email.contentmanager.raw_data_manager\u00b6\nThis content manager provides only a minimum interface beyond that provided by\nMessage\nitself: it deals only with text, raw byte strings, and\nMessage\nobjects. Nevertheless, it provides significant advantages compared to the base API:\nget_content\non a text part will return a unicode string without the application needing to manually decode it,\nset_content\nprovides a rich set of options for controlling the headers added to a part and controlling the content transfer encoding, and it enables the use of the various\nadd_\nmethods, thereby simplifying the creation of multipart messages.- email.contentmanager.get_content(msg, errors='replace')\u00b6\nReturn the payload of the part as either a string (for\ntext\nparts), an\nEmailMessage\nobject (for\nmessage/rfc822\nparts), or a\nbytes\nobject (for all other non-multipart types). Raise a\nKeyError\nif called on a\nmultipart\n. If the part is a\ntext\npart and errors is specified, use it as the error handler when decoding the payload to unicode. 
The default error handler is\nreplace\n.\n- email.contentmanager.set_content(msg, <'str'>, subtype=\"plain\", charset='utf-8', cte=None, disposition=None, filename=None, cid=None, params=None, headers=None)\u00b6\n- email.contentmanager.set_content(msg, <'bytes'>, maintype, subtype, cte=\"base64\", disposition=None, filename=None, cid=None, params=None, headers=None)\n- email.contentmanager.set_content(msg, <'EmailMessage'>, cte=None, disposition=None, filename=None, cid=None, params=None, headers=None)\nAdd headers and payload to msg:\nAdd a Content-Type header with a\nmaintype/subtype\nvalue.\nFor\nstr\n, set the MIME\nmaintype\nto\ntext\n, and set the subtype to subtype if it is specified, or\nplain\nif it is not.\nFor\nbytes\n, use the specified maintype and subtype, or raise a\nTypeError\nif they are not specified.\nFor\nEmailMessage\nobjects, set the maintype to\nmessage\n, and set the subtype to subtype if it is specified or\nrfc822\nif it is not. If subtype is\npartial\n, raise an error (bytes\nobjects must be used to construct\nmessage/partial\nparts).\nIf charset is provided (which is valid only for\nstr\n), encode the string to bytes using the specified character set. The default is\nutf-8\n. If the specified charset is a known alias for a standard MIME charset name, use the standard charset instead.\nIf cte is set, encode the payload using the specified content transfer encoding, and set the Content-Transfer-Encoding header to that value. Possible values for cte are\nquoted-printable\n,\nbase64\n,\n7bit\n,\n8bit\n, and\nbinary\n. If the input cannot be encoded in the specified encoding (for example, specifying a cte of\n7bit\nfor an input that contains non-ASCII values), raise a\nValueError\n.\nFor\nstr\nobjects, if cte is not set, use heuristics to determine the most compact encoding. 
Prior to encoding,\nstr.splitlines()\nis used to normalize all line boundaries, ensuring that each line of the payload is terminated by the current policy\u2019s\nlinesep\nproperty (even if the original string did not end with one).\nFor\nbytes\nobjects, cte is taken to be base64 if not set, and the aforementioned newline translation is not performed.\nFor\nEmailMessage\n, per RFC 2046, raise an error if a cte of\nquoted-printable\nor\nbase64\nis requested for subtype\nrfc822\n, and for any cte other than\n7bit\nfor subtype\nexternal-body\n. For\nmessage/rfc822\n, use\n8bit\nif cte is not specified. For all other values of subtype, use\n7bit\n.\nNote\nA cte of\nbinary\ndoes not actually work correctly yet. The\nEmailMessage\nobject as modified by\nset_content\nis correct, but\nBytesGenerator\ndoes not serialize it correctly.\nIf disposition is set, use it as the value of the Content-Disposition header. If not specified, and filename is specified, add the header with the value\nattachment\n. If disposition is not specified and filename is also not specified, do not add the header. 
The only valid values for disposition are\nattachment\nand\ninline\n.\nIf filename is specified, use it as the value of the\nfilename\nparameter of the Content-Disposition header.\nIf cid is specified, add a Content-ID header with cid as its value.\nIf params is specified, iterate its\nitems\nmethod and use the resulting\n(key, value)\npairs to set additional parameters on the Content-Type header.\nIf headers is specified and is a list of strings of the form\nheadername: headervalue\nor a list of\nheader\nobjects (distinguished from strings by having a\nname\nattribute), add the headers to msg.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1775}
+{"url": "https://docs.python.org/3/whatsnew/3.7.html", "title": "What\u2019s New In Python 3.7", "content": "What\u2019s New In Python 3.7\u00b6\n- Editor:\nElvis Pranskevichus \nThis article explains the new features in Python 3.7, compared to 3.6. Python 3.7 was released on June 27, 2018. For full details, see the changelog.\nSummary \u2013 Release Highlights\u00b6\nNew syntax features:\nPEP 563, postponed evaluation of type annotations.\nBackwards incompatible syntax changes:\nNew library modules:\nNew built-in features:\nPEP 553, the new\nbreakpoint()\nfunction.\nPython data model improvements:\nPEP 562, customization of access to module attributes.\nPEP 560, core support for typing module and generic types.\nthe insertion-order preservation nature of dict objects has been declared to be an official part of the Python language spec.\nSignificant improvements in the standard library:\nThe\nasyncio\nmodule has received new features, significant usability and performance improvements.\nThe\ntime\nmodule gained support for functions with nanosecond resolution.\nCPython implementation improvements:\nAvoiding the use of ASCII as a default text encoding:\nPEP 552, deterministic .pycs\nPEP 565, improved\nDeprecationWarning\nhandling\nC API improvements:\nPEP 539, new C API for thread-local 
storage\nDocumentation improvements:\nPEP 545, Python documentation translations\nNew documentation translations: Japanese, French, and Korean.\nThis release features notable performance improvements in many areas. The Optimizations section lists them in detail.\nFor a list of changes that may affect compatibility with previous Python releases please refer to the Porting to Python 3.7 section.\nNew Features\u00b6\nPEP 563: Postponed Evaluation of Annotations\u00b6\nThe advent of type hints in Python uncovered two glaring usability issues with the functionality of annotations added in PEP 3107 and refined further in PEP 526:\nannotations could only use names which were already available in the current scope, in other words they didn\u2019t support forward references of any kind; and\nannotating source code had adverse effects on startup time of Python programs.\nBoth of these issues are fixed by postponing the evaluation of\nannotations. Instead of compiling code which executes expressions in\nannotations at their definition time, the compiler stores the annotation\nin a string form equivalent to the AST of the expression in question.\nIf needed, annotations can be resolved at runtime using\ntyping.get_type_hints()\n. 
In the common case where this is not\nrequired, the annotations are cheaper to store (since short strings\nare interned by the interpreter) and make startup time faster.\nUsability-wise, annotations now support forward references, making the following syntax valid:\nclass C:\n    @classmethod\n    def from_string(cls, source: str) -> C:\n        ...\n\n    def validate_b(self, obj: B) -> bool:\n        ...\n\nclass B:\n    ...\nSince this change breaks compatibility, the new behavior needs to be enabled\non a per-module basis in Python 3.7 using a __future__\nimport:\nfrom __future__ import annotations\nIt will become the default in Python 3.10.\nSee also\n- PEP 563 \u2013 Postponed evaluation of annotations\nPEP written and implemented by \u0141ukasz Langa.\nPEP 538: Legacy C Locale Coercion\u00b6\nAn ongoing challenge within the Python 3 series has been determining a sensible default strategy for handling the \u201c7-bit ASCII\u201d text encoding assumption currently implied by the use of the default C or POSIX locale on non-Windows platforms.\nPEP 538 updates the default interpreter command line interface to\nautomatically coerce that locale to an available UTF-8 based locale as\ndescribed in the documentation of the new PYTHONCOERCECLOCALE\nenvironment variable. Automatically setting LC_CTYPE\nthis way means that\nboth the core interpreter and locale-aware C extensions (such as\nreadline\n) will assume the use of UTF-8 as the default text encoding,\nrather than ASCII.\nThe platform support definition in PEP 11 has also been updated to limit full text handling support to suitably configured non-ASCII based locales.\nAs part of this change, the default error handler for stdin\nand\nstdout\nis now surrogateescape\n(rather than strict\n) when\nusing any of the defined coercion target locales (currently C.UTF-8\n,\nC.utf8\n, and UTF-8\n). 
The default error handler for stderr\ncontinues to be backslashreplace\n, regardless of locale.\nLocale coercion is silent by default, but to assist in debugging potentially\nlocale related integration problems, explicit warnings (emitted directly on\nstderr\n) can be requested by setting PYTHONCOERCECLOCALE=warn\n.\nThis setting will also cause the Python runtime to emit a warning if the\nlegacy C locale remains active when the core interpreter is initialized.\nWhile PEP 538\u2019s locale coercion has the benefit of also affecting extension\nmodules (such as GNU readline\n), as well as child processes (including those\nrunning non-Python applications and older versions of Python), it has the\ndownside of requiring that a suitable target locale be present on the running\nsystem. To better handle the case where no suitable target locale is available\n(as occurs on RHEL/CentOS 7, for example), Python 3.7 also implements\nPEP 540: Forced UTF-8 Runtime Mode.\nSee also\n- PEP 538 \u2013 Coercing the legacy C locale to a UTF-8 based locale\nPEP written and implemented by Nick Coghlan.\nPEP 540: Forced UTF-8 Runtime Mode\u00b6\nThe new -X\nutf8\ncommand line option and PYTHONUTF8\nenvironment variable can be used to enable the Python UTF-8 Mode.\nWhen in UTF-8 mode, CPython ignores the locale settings, and uses the\nUTF-8 encoding by default. 
The error handlers for sys.stdin\nand\nsys.stdout\nstreams are set to surrogateescape\n.\nThe forced UTF-8 mode can be used to change the text handling behavior in an embedded Python interpreter without changing the locale settings of an embedding application.\nWhile PEP 540\u2019s UTF-8 mode has the benefit of working regardless of which\nlocales are available on the running system, it has the downside of having no\neffect on extension modules (such as GNU readline\n), child processes running\nnon-Python applications, and child processes running older versions of Python.\nTo reduce the risk of corrupting text data when communicating with such\ncomponents, Python 3.7 also implements PEP 538: Legacy C Locale Coercion.\nThe UTF-8 mode is enabled by default when the locale is C\nor POSIX\n, and\nthe PEP 538 locale coercion feature fails to change it to a UTF-8 based\nalternative (whether that failure is due to PYTHONCOERCECLOCALE=0\nbeing set,\nLC_ALL\nbeing set, or the lack of a suitable target locale).\nSee also\n- PEP 540 \u2013 Add a new UTF-8 mode\nPEP written and implemented by Victor Stinner\nPEP 553: Built-in breakpoint()\n\u00b6\nPython 3.7 includes the new built-in breakpoint()\nfunction as\nan easy and consistent way to enter the Python debugger.\nBuilt-in breakpoint()\ncalls sys.breakpointhook()\n. By default, the\nlatter imports pdb\nand then calls pdb.set_trace()\n, but by binding\nsys.breakpointhook()\nto the function of your choosing, breakpoint()\ncan\nenter any debugger. Additionally, the environment variable\nPYTHONBREAKPOINT\ncan be set to the callable of your debugger of\nchoice. 
Set PYTHONBREAKPOINT=0\nto completely disable built-in\nbreakpoint()\n.\nSee also\n- PEP 553 \u2013 Built-in breakpoint()\nPEP written and implemented by Barry Warsaw\nPEP 539: New C API for Thread-Local Storage\u00b6\nWhile Python provides a C API for thread-local storage support, the existing Thread Local Storage (TLS) API has used int to represent TLS keys across all platforms. This has not generally been a problem for officially supported platforms, but it is neither POSIX-compliant nor portable in any practical sense.\nPEP 539 changes this by providing a new Thread Specific Storage (TSS)\nAPI to CPython which supersedes use of the\nexisting TLS API within the CPython interpreter, while deprecating the existing\nAPI. The TSS API uses a new type Py_tss_t\ninstead of int\nto represent TSS keys\u2013an opaque type the definition of which may depend on\nthe underlying TLS implementation. Therefore, this allows CPython to be built\non platforms where the native TLS key is defined in a way that cannot be safely\ncast to int.\nNote that on platforms where the native TLS key is defined in a way that cannot be safely cast to int, all functions of the existing TLS API will be no-ops and immediately return failure. This indicates clearly that the old API is not supported on platforms where it cannot be used reliably, and that no effort will be made to add such support.\nSee also\n- PEP 539 \u2013 A New C-API for Thread-Local Storage in CPython\nPEP written by Erik M. Bray; implementation by Masayuki Yamamoto.\nPEP 562: Customization of Access to Module Attributes\u00b6\nPython 3.7 allows defining __getattr__()\non modules and will call\nit whenever a module attribute is otherwise not found. 
Defining\n__dir__()\non modules is now also allowed.\nA typical example of where this may be useful is module attribute deprecation and lazy loading.\nSee also\n- PEP 562 \u2013 Module\n__getattr__\nand\n__dir__\nPEP written and implemented by Ivan Levkivskyi\nPEP 564: New Time Functions With Nanosecond Resolution\u00b6\nThe resolution of clocks in modern systems can exceed the limited precision\nof a floating-point number returned by the time.time()\nfunction\nand its variants. To avoid loss of precision, PEP 564 adds six new\n\u201cnanosecond\u201d variants of the existing timer functions to the time\nmodule:\nThe new functions return the number of nanoseconds as an integer value.\nMeasurements\nshow that on Linux and Windows the resolution of time.time_ns()\nis\napproximately 3 times better than that of time.time()\n.\nSee also\n- PEP 564 \u2013 Add new time functions with nanosecond resolution\nPEP written and implemented by Victor Stinner\nPEP 565: Show DeprecationWarning in __main__\n\u00b6\nThe default handling of DeprecationWarning\nhas been changed such that\nthese warnings are once more shown by default, but only when the code\ntriggering them is running directly in the __main__\nmodule. As a result,\ndevelopers of single file scripts and those using Python interactively should\nonce again start seeing deprecation warnings for the APIs they use, but\ndeprecation warnings triggered by imported application, library and framework\nmodules will continue to be hidden by default.\nAs a result of this change, the standard library now allows developers to choose between three different deprecation warning behaviours:\nFutureWarning\n: always displayed by default, recommended for warnings intended to be seen by application end users (e.g. 
for deprecated application configuration settings).\nDeprecationWarning\n: displayed by default only in\n__main__\nand when running tests, recommended for warnings intended to be seen by other Python developers where a version upgrade may result in changed behaviour or an error.\nPendingDeprecationWarning\n: displayed by default only when running tests, intended for cases where a future version upgrade will change the warning category to\nDeprecationWarning\nor\nFutureWarning\n.\nPreviously both DeprecationWarning\nand PendingDeprecationWarning\nwere only visible when running tests, which meant that developers primarily\nwriting single file scripts or using Python interactively could be surprised\nby breaking changes in the APIs they used.\nSee also\n- PEP 565 \u2013 Show DeprecationWarning in\n__main__\nPEP written and implemented by Nick Coghlan\nPEP 560: Core Support for typing\nmodule and Generic Types\u00b6\nInitially PEP 484 was designed in such a way that it would not introduce any\nchanges to the core CPython interpreter. Now type hints and the typing\nmodule are extensively used by the community, so this restriction has been removed.\nThe PEP introduces two special methods, __class_getitem__()\nand\n__mro_entries__()\n; these methods are now used by most classes and special\nconstructs in typing\n. As a result, the speed of various operations\nwith types has increased by up to 7 times, generic types can be used without\nmetaclass conflicts, and several long-standing bugs in the typing\nmodule have been\nfixed.\nSee also\n- PEP 560 \u2013 Core support for typing module and generic types\nPEP written and implemented by Ivan Levkivskyi\nPEP 552: Hash-based .pyc Files\u00b6\nPython has traditionally checked the up-to-dateness of bytecode cache files\n(i.e., .pyc\nfiles) by comparing the source metadata (last-modified timestamp\nand size) with source metadata saved in the cache file header when it was\ngenerated. While effective, this invalidation method has its drawbacks. 
When\nfilesystem timestamps are too coarse, Python can miss source updates, leading to\nuser confusion. Additionally, having a timestamp in the cache file is\nproblematic for build reproducibility and\ncontent-based build systems.\nPEP 552 extends the pyc format to allow the hash of the source file to be\nused for invalidation instead of the source timestamp. Such .pyc\nfiles are\ncalled \u201chash-based\u201d. By default, Python still uses timestamp-based invalidation\nand does not generate hash-based .pyc\nfiles at runtime. Hash-based .pyc\nfiles may be generated with py_compile\nor compileall\n.\nHash-based .pyc\nfiles come in two variants: checked and unchecked. Python\nvalidates checked hash-based .pyc\nfiles against the corresponding source\nfiles at runtime but doesn\u2019t do so for unchecked hash-based pycs. Unchecked\nhash-based .pyc\nfiles are a useful performance optimization for environments\nwhere a system external to Python (e.g., the build system) is responsible for\nkeeping .pyc\nfiles up-to-date.\nSee Cached bytecode invalidation for more information.\nSee also\n- PEP 552 \u2013 Deterministic pycs\nPEP written and implemented by Benjamin Peterson\nPEP 545: Python Documentation Translations\u00b6\nPEP 545 describes the process of creating and maintaining Python documentation translations.\nThree new translations have been added:\nJapanese: https://docs.python.org/ja/\nFrench: https://docs.python.org/fr/\nKorean: https://docs.python.org/ko/\nSee also\n- PEP 545 \u2013 Python Documentation Translations\nPEP written and implemented by Julien Palard, Inada Naoki, and Victor Stinner.\nPython Development Mode (-X dev)\u00b6\nThe new -X\ndev\ncommand line option or the new\nPYTHONDEVMODE\nenvironment variable can be used to enable\nPython Development Mode. 
When in development mode, Python performs\nadditional runtime checks that are too expensive to be enabled by default.\nSee Python Development Mode documentation for the full\ndescription.\nOther Language Changes\u00b6\nAn\nawait\nexpression and comprehensions containing anasync for\nclause were illegal in the expressions in formatted string literals due to a problem with the implementation. In Python 3.7 this restriction was lifted.More than 255 arguments can now be passed to a function, and a function can now have more than 255 parameters. (Contributed by Serhiy Storchaka in bpo-12844 and bpo-18896.)\nbytes.fromhex()\nandbytearray.fromhex()\nnow ignore all ASCII whitespace, not only spaces. (Contributed by Robert Xiao in bpo-28927.)str\n,bytes\n, andbytearray\ngained support for the newisascii()\nmethod, which can be used to test if a string or bytes contain only the ASCII characters. (Contributed by INADA Naoki in bpo-32677.)ImportError\nnow displays module name and module__file__\npath whenfrom ... import ...\nfails. (Contributed by Matthias Bussonnier in bpo-29546.)Circular imports involving absolute imports with binding a submodule to a name are now supported. (Contributed by Serhiy Storchaka in bpo-30024.)\nobject.__format__(x, '')\nis now equivalent tostr(x)\nrather thanformat(str(self), '')\n. (Contributed by Serhiy Storchaka in bpo-28974.)In order to better support dynamic creation of stack traces,\ntypes.TracebackType\ncan now be instantiated from Python code, and thetb_next\nattribute on tracebacks is now writable. (Contributed by Nathaniel J. 
Smith in bpo-30579.)\nWhen using the\n-m\nswitch,\nsys.path[0]\nis now eagerly expanded to the full starting directory path, rather than being left as the empty directory (which allows imports from the current working directory at the time when an import occurs). (Contributed by Nick Coghlan in bpo-33053.)\nThe new\n-X\nimporttime\noption or the\nPYTHONPROFILEIMPORTTIME\nenvironment variable can be used to show the timing of each module import. (Contributed by Inada Naoki in bpo-31415.)\nNew Modules\u00b6\ncontextvars\u00b6\nThe new contextvars\nmodule and a set of\nnew C APIs introduce\nsupport for context variables. Context variables are conceptually\nsimilar to thread-local variables. Unlike TLS, context variables\nsupport asynchronous code correctly.\nThe asyncio\nand decimal\nmodules have been updated to use\nand support context variables out of the box. Particularly the active\ndecimal context is now stored in a context variable, which allows\ndecimal operations to work with the correct context in asynchronous code.\nSee also\n- PEP 567 \u2013 Context Variables\nPEP written and implemented by Yury Selivanov\ndataclasses\u00b6\nThe new dataclass()\ndecorator provides a way to declare\ndata classes. A data class describes its attributes using class variable\nannotations. Its constructor and other magic methods, such as\n__repr__()\n, __eq__()\n, and\n__hash__()\n, are generated automatically.\nExample:\n@dataclass\nclass Point:\n    x: float\n    y: float\n    z: float = 0.0\n\np = Point(1.5, 2.5)\nprint(p) # produces \"Point(x=1.5, y=2.5, z=0.0)\"\nSee also\n- PEP 557 \u2013 Data Classes\nPEP written and implemented by Eric V. Smith\nimportlib.resources\u00b6\nThe new importlib.resources\nmodule provides several new APIs and one\nnew ABC for access to, opening, and reading resources inside packages.\nResources are roughly similar to files inside packages, but they needn\u2019t\nbe actual files on the physical file system. 
Module loaders can provide a\nget_resource_reader()\nfunction which returns\nan importlib.abc.ResourceReader\ninstance to support this\nnew API. Built-in file path loaders and zip file loaders both support this.\nContributed by Barry Warsaw and Brett Cannon in bpo-32248.\nSee also\nimportlib_resources \u2013 a PyPI backport for earlier Python versions.\nImproved Modules\u00b6\nargparse\u00b6\nThe new ArgumentParser.parse_intermixed_args()\nmethod allows intermixing options and positional arguments.\n(Contributed by paul.j3 in bpo-14191.)\nasyncio\u00b6\nThe asyncio\nmodule has received many new features, usability and\nperformance improvements. Notable changes\ninclude:\nThe new provisional\nasyncio.run()\nfunction can be used to run a coroutine from synchronous code by automatically creating and destroying the event loop. (Contributed by Yury Selivanov in bpo-32314.)\nasyncio gained support for\ncontextvars\n.\nloop.call_soon()\n,\nloop.call_soon_threadsafe()\n,\nloop.call_later()\n,\nloop.call_at()\n, and\nFuture.add_done_callback()\nhave a new optional keyword-only context parameter.\nTasks\nnow track their context automatically. See PEP 567 for more details. (Contributed by Yury Selivanov in bpo-32436.)\nThe new\nasyncio.create_task()\nfunction has been added as a shortcut to\nasyncio.get_event_loop().create_task()\n. (Contributed by Andrew Svetlov in bpo-32311.)\nThe new\nloop.start_tls()\nmethod can be used to upgrade an existing connection to TLS. (Contributed by Yury Selivanov in bpo-23749.)\nThe new\nloop.sock_recv_into()\nmethod allows reading data from a socket directly into a provided buffer, making it possible to reduce data copies. (Contributed by Antoine Pitrou in bpo-31819.)\nThe new\nasyncio.current_task()\nfunction returns the currently running\nTask\ninstance, and the new\nasyncio.all_tasks()\nfunction returns a set of all existing\nTask\ninstances in a given loop. The\nTask.current_task()\nand\nTask.all_tasks()\nmethods have been deprecated. 
(Contributed by Andrew Svetlov in bpo-32250.)\nThe new provisional\nBufferedProtocol\nclass allows implementing streaming protocols with manual control over the receive buffer. (Contributed by Yury Selivanov in bpo-32251.)\nThe new\nasyncio.get_running_loop()\nfunction returns the currently running loop, and raises a\nRuntimeError\nif no loop is running. This is in contrast with\nasyncio.get_event_loop()\n, which will create a new event loop if none is running. (Contributed by Yury Selivanov in bpo-32269.)\nThe new\nStreamWriter.wait_closed()\ncoroutine method allows waiting until the stream writer is closed. The new\nStreamWriter.is_closing()\nmethod can be used to determine if the writer is closing. (Contributed by Andrew Svetlov in bpo-32391.)\nThe new\nloop.sock_sendfile()\ncoroutine method allows sending files using\nos.sendfile\nwhen possible. (Contributed by Andrew Svetlov in bpo-32410.)\nThe new\nFuture.get_loop()\nand\nTask.get_loop()\nmethods return the instance of the loop on which a task or a future was created.\nServer.get_loop()\nallows doing the same for\nasyncio.Server\nobjects. (Contributed by Yury Selivanov in bpo-32415 and Srinivas Reddy Thatiparthy in bpo-32418.)\nIt is now possible to control how instances of\nasyncio.Server\nbegin serving. Previously, the server would start serving immediately when created. The new start_serving keyword argument to\nloop.create_server()\nand\nloop.create_unix_server()\n, as well as\nServer.start_serving()\n, and\nServer.serve_forever()\ncan be used to decouple server instantiation and serving. The new\nServer.is_serving()\nmethod returns\nTrue\nif the server is serving.\nServer\nobjects are now asynchronous context managers:\nsrv = await loop.create_server(...)\nasync with srv:\n    # some code\n# At this point, srv is closed and no longer accepts new connections.\n(Contributed by Yury Selivanov in bpo-32662.)\nCallback objects returned by\nloop.call_later()\ngained the new\nwhen()\nmethod which returns an absolute scheduled callback timestamp. 
(Contributed by Andrew Svetlov in bpo-32741.)\nThe\nloop.create_datagram_endpoint()\nmethod gained support for Unix sockets. (Contributed by Quentin Dawans in bpo-31245.)\nThe\nasyncio.open_connection()\n,\nasyncio.start_server()\nfunctions,\nloop.create_connection()\n,\nloop.create_server()\n,\nloop.create_accepted_socket()\nmethods and their corresponding UNIX socket variants now accept the ssl_handshake_timeout keyword argument. (Contributed by Neil Aspinall in bpo-29970.)\nThe new\nHandle.cancelled()\nmethod returns\nTrue\nif the callback was cancelled. (Contributed by Marat Sharafutdinov in bpo-31943.)\nThe asyncio source has been converted to use the\nasync\n/await\nsyntax. (Contributed by Andrew Svetlov in bpo-32193.)\nThe new\nReadTransport.is_reading()\nmethod can be used to determine the reading state of the transport. Additionally, calls to\nReadTransport.resume_reading()\nand\nReadTransport.pause_reading()\nare now idempotent. (Contributed by Yury Selivanov in bpo-32356.)\nLoop methods which accept socket paths now support passing path-like objects. (Contributed by Yury Selivanov in bpo-32066.)\nIn\nasyncio\n, TCP sockets on Linux are now created with the\nTCP_NODELAY\nflag set by default. (Contributed by Yury Selivanov and Victor Stinner in bpo-27456.)\nExceptions occurring in cancelled tasks are no longer logged. (Contributed by Yury Selivanov in bpo-30508.)\nNew\nWindowsSelectorEventLoopPolicy\nand\nWindowsProactorEventLoopPolicy\nclasses. (Contributed by Yury Selivanov in bpo-33792.)\nSeveral asyncio\nAPIs have been\ndeprecated.\nbinascii\u00b6\nThe b2a_uu()\nfunction now accepts an optional backtick\nkeyword argument. When it\u2019s true, zeros are represented by '`'\ninstead of spaces. 
(Contributed by Xiang Zhang in bpo-30103.)\ncalendar\u00b6\nThe HTMLCalendar\nclass has new class attributes which ease\nthe customization of CSS classes in the produced HTML calendar.\n(Contributed by Oz Tiram in bpo-30095.)\ncollections\u00b6\ncollections.namedtuple()\nnow supports default values.\n(Contributed by Raymond Hettinger in bpo-32320.)\ncompileall\u00b6\ncompileall.compile_dir()\nlearned the new invalidation_mode parameter,\nwhich can be used to enable\nhash-based .pyc invalidation. The invalidation\nmode can also be specified on the command line using the new\n--invalidation-mode\nargument.\n(Contributed by Benjamin Peterson in bpo-31650.)\nconcurrent.futures\u00b6\nProcessPoolExecutor\nand\nThreadPoolExecutor\nnow\nsupport the new initializer and initargs constructor arguments.\n(Contributed by Antoine Pitrou in bpo-21423.)\nThe ProcessPoolExecutor\ncan now take the multiprocessing context via the new mp_context argument.\n(Contributed by Thomas Moreau in bpo-31540.)\ncontextlib\u00b6\nThe new nullcontext()\nis a simpler and faster no-op\ncontext manager than ExitStack\n.\n(Contributed by Jesse-Bakker in bpo-10049.)\nThe new asynccontextmanager()\n,\nAbstractAsyncContextManager\n, and\nAsyncExitStack\nhave been added to\ncomplement their synchronous counterparts. (Contributed\nby Jelle Zijlstra in bpo-29679 and bpo-30241,\nand by Alexander Mohr and Ilya Kulakov in bpo-29302.)\ncProfile\u00b6\nThe cProfile\ncommand line now accepts -m module_name\nas an\nalternative to script path. (Contributed by Sanyam Khurana in bpo-21862.)\ncrypt\u00b6\nThe crypt\nmodule now supports the Blowfish hashing method.\n(Contributed by Serhiy Storchaka in bpo-31664.)\nThe mksalt()\nfunction now allows specifying the number of rounds\nfor hashing. 
(Contributed by Serhiy Storchaka in bpo-31702.)\ndatetime\u00b6\nThe new datetime.fromisoformat()\nmethod constructs a datetime\nobject from a string\nin one of the formats output by\ndatetime.isoformat()\n.\n(Contributed by Paul Ganssle in bpo-15873.)\nThe tzinfo\nclass now supports sub-minute offsets.\n(Contributed by Alexander Belopolsky in bpo-5288.)\ndbm\u00b6\ndbm.dumb\nnow supports reading read-only files and no longer writes the\nindex file when it is not changed.\ndecimal\u00b6\nThe decimal\nmodule now uses context variables\nto store the decimal context.\n(Contributed by Yury Selivanov in bpo-32630.)\ndis\u00b6\nThe dis()\nfunction is now able to\ndisassemble nested code objects (the code of comprehensions, generator\nexpressions and nested functions, and the code used for building nested\nclasses). The maximum depth of disassembly recursion is controlled by\nthe new depth parameter.\n(Contributed by Serhiy Storchaka in bpo-11822.)\ndistutils\u00b6\nREADME.rst\nis now included in the list of distutils standard READMEs and\ntherefore included in source distributions.\n(Contributed by Ryan Gonzalez in bpo-11913.)\nenum\u00b6\nThe Enum\nlearned the new _ignore_\nclass property,\nwhich allows listing the names of properties which should not become\nenum members.\n(Contributed by Ethan Furman in bpo-31801.)\nIn Python 3.8, attempting to check for non-Enum objects in Enum\nclasses will raise a TypeError\n(e.g. 1 in Color\n); similarly,\nattempting to check for non-Flag objects in a Flag\nmember will\nraise TypeError\n(e.g. 
1 in Perm.RW); currently, both operations return False instead and are deprecated. (Contributed by Ethan Furman in bpo-33217.)
functools¶
functools.singledispatch() now supports registering implementations using type annotations. (Contributed by Łukasz Langa in bpo-32227.)
gc¶
The new gc.freeze() function allows freezing all objects tracked by the garbage collector and excluding them from future collections. This can be used before a POSIX fork() call to make the GC copy-on-write friendly or to speed up collection. The new gc.unfreeze() function reverses this operation. Additionally, gc.get_freeze_count() can be used to obtain the number of frozen objects. (Contributed by Li Zekun in bpo-31558.)
hmac¶
The hmac module now has an optimized one-shot digest() function, which is up to three times faster than HMAC(). (Contributed by Christian Heimes in bpo-32433.)
http.client¶
HTTPConnection and HTTPSConnection now support the new blocksize argument for improved upload throughput. (Contributed by Nir Soffer in bpo-31945.)
http.server¶
SimpleHTTPRequestHandler now supports the HTTP If-Modified-Since header. The server returns the 304 response status if the target file was not modified after the time specified in the header. (Contributed by Pierre Quentel in bpo-29654.)
SimpleHTTPRequestHandler accepts the new directory argument, in addition to the new --directory command line argument. With this parameter, the server serves the specified directory; by default it uses the current working directory. (Contributed by Stéphane Wirtel and Julien Palard in bpo-28707.)
The new ThreadingHTTPServer class uses threads to handle requests using ThreadingMixIn. It is used when http.server is run with -m. (Contributed by Julien Palard in bpo-31639.)
idlelib and IDLE¶
Multiple fixes for autocompletion.
(Contributed by Louie Lu in bpo-15786.)
Module Browser (on the File menu, formerly called Class Browser) now displays nested functions and classes in addition to top-level functions and classes. (Contributed by Guilherme Polo, Cheryl Sabella, and Terry Jan Reedy in bpo-1612262.)
The Settings dialog (Options, Configure IDLE) has been partly rewritten to improve both appearance and function. (Contributed by Cheryl Sabella and Terry Jan Reedy in multiple issues.)
The font sample now includes a selection of non-Latin characters so that users can better see the effect of selecting a particular font. (Contributed by Terry Jan Reedy in bpo-13802.) The sample can be edited to include other characters. (Contributed by Serhiy Storchaka in bpo-31860.)
The IDLE features formerly implemented as extensions have been reimplemented as normal features. Their settings have been moved from the Extensions tab to other dialog tabs. (Contributed by Charles Wohlganger and Terry Jan Reedy in bpo-27099.)
The editor code context option has been revised. The box displays all context lines up to maxlines. Clicking on a context line jumps the editor to that line. Context colors for custom themes are added to the Highlights tab of the Settings dialog. (Contributed by Cheryl Sabella and Terry Jan Reedy in bpo-33642, bpo-33768, and bpo-33679.)
On Windows, a new API call tells Windows that tk scales for DPI. On Windows 8.1+ or 10, with DPI compatibility properties of the Python binary unchanged, and a monitor resolution greater than 96 DPI, this should make text and lines sharper. It should otherwise have no effect. (Contributed by Terry Jan Reedy in bpo-33656.)
New in 3.7.1:
Output over N lines (50 by default) is squeezed down to a button. N can be changed in the PyShell section of the General page of the Settings dialog. Fewer, but possibly extra long, lines can be squeezed by right clicking on the output.
Squeezed output can be expanded in place by double-clicking the button, or sent to the clipboard or a separate window by right-clicking the button. (Contributed by Tal Einat in bpo-1529353.)
The changes above have been backported to 3.6 maintenance releases.
New in 3.7.4:
Add “Run Customized” to the Run menu to run a module with customized settings. Any command line arguments entered are added to sys.argv. They re-appear in the box for the next customized run. One can also suppress the normal Shell main module restart. (Contributed by Cheryl Sabella, Terry Jan Reedy, and others in bpo-5680 and bpo-37627.)
New in 3.7.5:
Add optional line numbers for IDLE editor windows. Windows open without line numbers unless set otherwise in the General tab of the configuration dialog. Line numbers for an existing window are shown and hidden in the Options menu. (Contributed by Tal Einat and Saimadhav Heblikar in bpo-17535.)
importlib¶
The importlib.abc.ResourceReader ABC was introduced to support the loading of resources from packages. See also importlib.resources. (Contributed by Barry Warsaw and Brett Cannon in bpo-32248.)
importlib.reload() now raises ModuleNotFoundError if the module lacks a spec. (Contributed by Garvit Khatri in bpo-29851.)
importlib.util.find_spec() now raises ModuleNotFoundError instead of AttributeError if the specified parent module is not a package (i.e. lacks a __path__ attribute). (Contributed by Milan Oberkirch in bpo-30436.)
The new importlib.util.source_hash() can be used to compute the hash of the passed source.
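For example, the hash for a given chunk of source bytes can be computed directly (a minimal sketch; the exact hash bytes depend on the interpreter build):

```python
import importlib.util

# compute the hash that a hash-based .pyc would embed for this source
h = importlib.util.source_hash(b"x = 1\n")

assert isinstance(h, bytes) and len(h) == 8   # 64-bit hash, 8 bytes
# identical source always produces an identical hash
assert h == importlib.util.source_hash(b"x = 1\n")
```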
A hash-based .pyc file\nembeds the value returned by this function.\nio\u00b6\nThe new TextIOWrapper.reconfigure()\nmethod can be used to reconfigure the text stream with the new settings.\n(Contributed by Antoine Pitrou in bpo-30526 and\nINADA Naoki in bpo-15216.)\nipaddress\u00b6\nThe new subnet_of()\nand supernet_of()\nmethods of\nipaddress.IPv6Network\nand ipaddress.IPv4Network\ncan\nbe used for network containment tests.\n(Contributed by Michel Albert and Cheryl Sabella in bpo-20825.)\nitertools\u00b6\nitertools.islice()\nnow accepts\ninteger-like objects\nas start, stop,\nand slice arguments.\n(Contributed by Will Roberts in bpo-30537.)\nlocale\u00b6\nThe new monetary argument to locale.format_string()\ncan be used\nto make the conversion use monetary thousands separators and\ngrouping strings. (Contributed by Garvit in bpo-10379.)\nThe locale.getpreferredencoding()\nfunction now always returns 'UTF-8'\non Android or when in the forced UTF-8 mode.\nlogging\u00b6\nLogger\ninstances can now be pickled.\n(Contributed by Vinay Sajip in bpo-30520.)\nThe new StreamHandler.setStream()\nmethod can be used to replace the logger stream after handler creation.\n(Contributed by Vinay Sajip in bpo-30522.)\nIt is now possible to specify keyword arguments to handler constructors in\nconfiguration passed to logging.config.fileConfig()\n.\n(Contributed by Preston Landers in bpo-31080.)\nmath\u00b6\nThe new math.remainder()\nfunction implements the IEEE 754-style remainder\noperation. (Contributed by Mark Dickinson in bpo-29962.)\nmimetypes\u00b6\nThe MIME type of .bmp has been changed from 'image/x-ms-bmp'\nto\n'image/bmp'\n.\n(Contributed by Nitish Chandra in bpo-22589.)\nmsilib\u00b6\nThe new Database.Close()\nmethod can be used\nto close the MSI database.\n(Contributed by Berker Peksag in bpo-20486.)\nmultiprocessing\u00b6\nThe new Process.close()\nmethod\nexplicitly closes the process object and releases all resources associated\nwith it. 
ValueError is raised if the underlying process is still running. (Contributed by Antoine Pitrou in bpo-30596.)
The new Process.kill() method can be used to terminate the process using the SIGKILL signal on Unix. (Contributed by Vitor Pereira in bpo-30794.)
Non-daemonic threads created by Process are now joined on process exit. (Contributed by Antoine Pitrou in bpo-18966.)
os¶
os.fwalk() now accepts the path argument as bytes. (Contributed by Serhiy Storchaka in bpo-28682.)
os.scandir() gained support for file descriptors. (Contributed by Serhiy Storchaka in bpo-25996.)
The new register_at_fork() function allows registering Python callbacks to be executed at process fork. (Contributed by Antoine Pitrou in bpo-16500.)
Added the os.preadv() function (combining the functionality of os.readv() and os.pread()) and the os.pwritev() function (combining the functionality of os.writev() and os.pwrite()). (Contributed by Pablo Galindo in bpo-31368.)
The mode argument of os.makedirs() no longer affects the file permission bits of newly created intermediate-level directories. (Contributed by Serhiy Storchaka in bpo-19930.)
os.dup2() now returns the new file descriptor. Previously, None was always returned. (Contributed by Benjamin Peterson in bpo-32441.)
The structure returned by os.stat() now contains the st_fstype attribute on Solaris and its derivatives. (Contributed by Jesús Cea Avión in bpo-32659.)
pathlib¶
The new Path.is_mount() method is now available on POSIX systems and can be used to determine whether a path is a mount point. (Contributed by Cooper Ry Lees in bpo-30897.)
pdb¶
pdb.set_trace() now takes an optional header keyword-only argument. If given, it is printed to the console just before debugging begins. (Contributed by Barry Warsaw in bpo-31389.)
The pdb command line now accepts -m module_name as an alternative to a script file.
(Contributed by Mario Corchero in bpo-32206.)\npy_compile\u00b6\npy_compile.compile()\n\u2013 and by extension, compileall\n\u2013 now\nrespects the SOURCE_DATE_EPOCH\nenvironment variable by\nunconditionally creating .pyc\nfiles for hash-based validation.\nThis allows for guaranteeing\nreproducible builds of .pyc\nfiles when they are created eagerly. (Contributed by Bernhard M. Wiedemann\nin bpo-29708.)\npydoc\u00b6\nThe pydoc server can now bind to an arbitrary hostname specified by the\nnew -n\ncommand-line argument.\n(Contributed by Feanil Patel in bpo-31128.)\nqueue\u00b6\nThe new SimpleQueue\nclass is an unbounded FIFO queue.\n(Contributed by Antoine Pitrou in bpo-14976.)\nre\u00b6\nThe flags re.ASCII\n, re.LOCALE\nand re.UNICODE\ncan be set within the scope of a group.\n(Contributed by Serhiy Storchaka in bpo-31690.)\nre.split()\nnow supports splitting on a pattern like r'\\b'\n,\n'^$'\nor (?=-)\nthat matches an empty string.\n(Contributed by Serhiy Storchaka in bpo-25054.)\nRegular expressions compiled with the re.LOCALE\nflag no longer\ndepend on the locale at compile time. Locale settings are applied only\nwhen the compiled regular expression is used.\n(Contributed by Serhiy Storchaka in bpo-30215.)\nFutureWarning\nis now emitted if a regular expression contains\ncharacter set constructs that will change semantically in the future,\nsuch as nested sets and set operations.\n(Contributed by Serhiy Storchaka in bpo-30349.)\nCompiled regular expression and match objects can now be copied\nusing copy.copy()\nand copy.deepcopy()\n.\n(Contributed by Serhiy Storchaka in bpo-10076.)\nsignal\u00b6\nThe new warn_on_full_buffer argument to the signal.set_wakeup_fd()\nfunction makes it possible to specify whether Python prints a warning on\nstderr when the wakeup buffer overflows.\n(Contributed by Nathaniel J. 
Smith in bpo-30050.)\nsocket\u00b6\nThe new socket.getblocking()\nmethod\nreturns True\nif the socket is in blocking mode and False\notherwise.\n(Contributed by Yury Selivanov in bpo-32373.)\nThe new socket.close()\nfunction closes the passed socket file descriptor.\nThis function should be used instead of os.close()\nfor better\ncompatibility across platforms.\n(Contributed by Christian Heimes in bpo-32454.)\nThe socket\nmodule now exposes the socket.TCP_CONGESTION (Linux 2.6.13), socket.TCP_USER_TIMEOUT (Linux 2.6.37), and socket.TCP_NOTSENT_LOWAT (Linux 3.12) constants.\n(Contributed by Omar Sandoval in bpo-26273 and\nNathaniel J. Smith in bpo-29728.)\nSupport for socket.AF_VSOCK\nsockets has been added to allow\ncommunication between virtual machines and their hosts.\n(Contributed by Cathy Avery in bpo-27584.)\nSockets now auto-detect family, type and protocol from file descriptor by default. (Contributed by Christian Heimes in bpo-28134.)\nsocketserver\u00b6\nsocketserver.ThreadingMixIn.server_close\nnow waits until all non-daemon\nthreads complete. socketserver.ForkingMixIn.server_close\nnow waits\nuntil all child processes complete.\nAdd a new socketserver.ForkingMixIn.block_on_close\nclass attribute to\nsocketserver.ForkingMixIn\nand socketserver.ThreadingMixIn\nclasses. Set the class attribute to False\nto get the pre-3.7 behaviour.\nsqlite3\u00b6\nsqlite3.Connection\nnow exposes the backup()\nmethod when the underlying SQLite library is at version 3.6.11 or higher.\n(Contributed by Lele Gaifax in bpo-27645.)\nThe database argument of sqlite3.connect()\nnow accepts any\npath-like object, instead of just a string.\n(Contributed by Anders Lorentsen in bpo-31843.)\nssl\u00b6\nThe ssl\nmodule now uses OpenSSL\u2019s builtin API instead of\nmatch_hostname()\nto check a host name or an IP address. Values\nare validated during TLS handshake. 
Any certificate validation error\nincluding failing the host name check now raises\nSSLCertVerificationError\nand aborts the handshake with a proper\nTLS Alert message. The new exception contains additional information.\nHost name validation can be customized with\nSSLContext.hostname_checks_common_name\n.\n(Contributed by Christian Heimes in bpo-31399.)\nNote\nThe improved host name check requires a libssl implementation compatible with OpenSSL 1.0.2 or 1.1. Consequently, OpenSSL 0.9.8 and 1.0.1 are no longer supported (see Platform Support Removals for more details). The ssl module is mostly compatible with LibreSSL 2.7.2 and newer.\nThe ssl\nmodule no longer sends IP addresses in SNI TLS extension.\n(Contributed by Christian Heimes in bpo-32185.)\nmatch_hostname()\nno longer supports partial wildcards like\nwww*.example.org\n.\n(Contributed by Mandeep Singh in bpo-23033 and Christian Heimes in\nbpo-31399.)\nThe default cipher suite selection of the ssl\nmodule now uses a blacklist\napproach rather than a hard-coded whitelist. Python no longer re-enables\nciphers that have been blocked by OpenSSL security updates. Default cipher\nsuite selection can be configured at compile time.\n(Contributed by Christian Heimes in bpo-31429.)\nValidation of server certificates containing internationalized domain names\n(IDNs) is now supported. As part of this change, the\nSSLSocket.server_hostname\nattribute\nnow stores the expected hostname in A-label form (\"xn--pythn-mua.org\"\n),\nrather than the U-label form (\"pyth\u00f6n.org\"\n). (Contributed by\nNathaniel J. Smith and Christian Heimes in bpo-28414.)\nThe ssl\nmodule has preliminary and experimental support for TLS 1.3 and\nOpenSSL 1.1.1. At the time of Python 3.7.0 release, OpenSSL 1.1.1 is still\nunder development and TLS 1.3 hasn\u2019t been finalized yet. 
The TLS 1.3 handshake and protocol behave slightly differently than TLS 1.2 and earlier; see TLS 1.3. (Contributed by Christian Heimes in bpo-32947, bpo-20995, bpo-29136, bpo-30622 and bpo-33618.)
SSLSocket and SSLObject no longer have a public constructor. Direct instantiation was never a documented and supported feature. Instances must be created with the SSLContext methods wrap_socket() and wrap_bio(). (Contributed by Christian Heimes in bpo-32951.)
OpenSSL 1.1 APIs for setting the minimum and maximum TLS protocol version are available as SSLContext.minimum_version and SSLContext.maximum_version. Supported protocols are indicated by several new flags, such as HAS_TLSv1_1. (Contributed by Christian Heimes in bpo-32609.)
Added ssl.SSLContext.post_handshake_auth to enable and ssl.SSLSocket.verify_client_post_handshake() to initiate TLS 1.3 post-handshake authentication. (Contributed by Christian Heimes in gh-78851.)
string¶
string.Template now lets you optionally modify the regular expression pattern for braced placeholders and non-braced placeholders separately. (Contributed by Barry Warsaw in bpo-1198569.)
subprocess¶
The subprocess.run() function accepts the new capture_output keyword argument. When true, stdout and stderr will be captured. This is equivalent to passing subprocess.PIPE as the stdout and stderr arguments. (Contributed by Bo Bayles in bpo-32102.)
The subprocess.run() function and the subprocess.Popen constructor now accept the text keyword argument as an alias for universal_newlines. (Contributed by Andrew Clegg in bpo-31756.)
On Windows the default for close_fds was changed from False to True when redirecting the standard handles. It's now possible to set close_fds to true when redirecting the standard handles. See subprocess.Popen.
This means that close_fds now defaults to\nTrue\non all supported platforms.\n(Contributed by Segev Finer in bpo-19764.)\nThe subprocess module is now more graceful when handling\nKeyboardInterrupt\nduring subprocess.call()\n,\nsubprocess.run()\n, or in a Popen\ncontext manager. It now waits a short amount of time for the child\nto exit, before continuing the handling of the KeyboardInterrupt\nexception.\n(Contributed by Gregory P. Smith in bpo-25942.)\nsys\u00b6\nThe new sys.breakpointhook()\nhook function is called by the\nbuilt-in breakpoint()\n.\n(Contributed by Barry Warsaw in bpo-31353.)\nOn Android, the new sys.getandroidapilevel()\nreturns the build-time\nAndroid API version.\n(Contributed by Victor Stinner in bpo-28740.)\nThe new sys.get_coroutine_origin_tracking_depth()\nfunction returns\nthe current coroutine origin tracking depth, as set by\nthe new sys.set_coroutine_origin_tracking_depth()\n. asyncio\nhas been converted to use this new API instead of\nthe deprecated sys.set_coroutine_wrapper()\n.\n(Contributed by Nathaniel J. 
Smith in bpo-32591.)
time¶
PEP 564 adds six new functions with nanosecond resolution to the time module: time.clock_gettime_ns(), time.clock_settime_ns(), time.monotonic_ns(), time.perf_counter_ns(), time.process_time_ns() and time.time_ns().
New clock identifiers have been added:
time.CLOCK_BOOTTIME (Linux): Identical to time.CLOCK_MONOTONIC, except it also includes any time that the system is suspended.
time.CLOCK_PROF (FreeBSD, NetBSD and OpenBSD): High-resolution per-process CPU timer.
time.CLOCK_UPTIME (FreeBSD, OpenBSD): Time whose absolute value is the time the system has been running and not suspended, providing accurate uptime measurement.
The new time.thread_time() and time.thread_time_ns() functions can be used to get per-thread CPU time measurements. (Contributed by Antoine Pitrou in bpo-32025.)
The new time.pthread_getcpuclockid() function returns the clock ID of the thread-specific CPU-time clock.
tkinter¶
The new tkinter.ttk.Spinbox class is now available. (Contributed by Alan Moore in bpo-32585.)
tracemalloc¶
tracemalloc.Traceback behaves more like regular tracebacks, sorting the frames from oldest to most recent. Traceback.format() now accepts a negative limit, truncating the result to the abs(limit) oldest frames. To get the old behaviour, use the new most_recent_first argument to Traceback.format(). (Contributed by Jesse Bakker in bpo-32121.)
types¶
The new WrapperDescriptorType, MethodWrapperType, MethodDescriptorType, and ClassMethodDescriptorType classes are now available. (Contributed by Manuel Krebber and Guido van Rossum in bpo-29377, and Serhiy Storchaka in bpo-32265.)
The new types.resolve_bases() function resolves MRO entries dynamically as specified by PEP 560. (Contributed by Ivan Levkivskyi in bpo-32717.)
unicodedata¶
The internal unicodedata database has been upgraded to use Unicode 11.
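The bundled database version can be checked at runtime; on 3.7 this reports 11.0.0, while later releases ship newer Unicode versions:

```python
import unicodedata

# unidata_version is a string such as "11.0.0"
major = int(unicodedata.unidata_version.split(".")[0])
assert major >= 11
```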
(Contributed by Benjamin\nPeterson.)\nunittest\u00b6\nThe new -k\ncommand-line option allows filtering tests by a name\nsubstring or a Unix shell-like pattern.\nFor example, python -m unittest -k foo\nruns\nfoo_tests.SomeTest.test_something\n, bar_tests.SomeTest.test_foo\n,\nbut not bar_tests.FooTest.test_something\n.\n(Contributed by Jonas Haag in bpo-32071.)\nunittest.mock\u00b6\nThe sentinel\nattributes now preserve their identity\nwhen they are copied\nor pickled\n. (Contributed by\nSerhiy Storchaka in bpo-20804.)\nThe new seal()\nfunction allows sealing\nMock\ninstances, which will disallow further creation\nof attribute mocks. The seal is applied recursively to all attributes that\nare themselves mocks.\n(Contributed by Mario Corchero in bpo-30541.)\nurllib.parse\u00b6\nurllib.parse.quote()\nhas been updated from RFC 2396 to RFC 3986,\nadding ~\nto the set of characters that are never quoted by default.\n(Contributed by Christian Theune and Ratnadeep Debnath in bpo-16285.)\nuu\u00b6\nThe uu.encode()\nfunction now accepts an optional backtick\nkeyword argument. When it\u2019s true, zeros are represented by '`'\ninstead of spaces. (Contributed by Xiang Zhang in bpo-30103.)\nuuid\u00b6\nThe new UUID.is_safe\nattribute relays information\nfrom the platform about whether generated UUIDs are generated with a\nmultiprocessing-safe method.\n(Contributed by Barry Warsaw in bpo-22807.)\nuuid.getnode()\nnow prefers universally administered\nMAC addresses over locally administered MAC addresses.\nThis makes a better guarantee for global uniqueness of UUIDs returned\nfrom uuid.uuid1()\n. 
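A sketch of how the returned node value can be inspected (the locally-administered flag lives in the first octet of the MAC address):

```python
import uuid

node = uuid.getnode()                  # 48-bit hardware address as an int
assert 0 < node < 2 ** 48

first_octet = (node >> 40) & 0xFF
# bit 0x02 of the first octet marks a locally administered address;
# universally administered (burned-in) addresses have it clear
locally_administered = bool(first_octet & 0x02)
```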
If only locally administered MAC addresses are available, the first such one found is returned. (Contributed by Barry Warsaw in bpo-32107.)
warnings¶
The initialization of the default warnings filters has changed as follows:
warnings enabled via command line options (including those for -b and the new CPython-specific -X dev option) are always passed to the warnings machinery via the sys.warnoptions attribute.
warnings filters enabled via the command line or the environment now have the following order of precedence:
the BytesWarning filter for -b (or -bb)
any filters specified with the -W option
any filters specified with the PYTHONWARNINGS environment variable
any other CPython specific filters (e.g. the default filter added for the new -X dev mode)
any implicit filters defined directly by the warnings machinery
in CPython debug builds, all warnings are now displayed by default (the implicit filter list is empty)
(Contributed by Nick Coghlan and Victor Stinner in bpo-20361, bpo-32043, and bpo-32230.)
Deprecation warnings are once again shown by default in single-file scripts and at the interactive prompt. See PEP 565: Show DeprecationWarning in __main__ for details. (Contributed by Nick Coghlan in bpo-31975.)
xml¶
As mitigation against DTD and external entity retrieval, the xml.dom.minidom and xml.sax modules no longer process external entities by default. (Contributed by Christian Heimes in gh-61441.)
xml.etree¶
ElementPath predicates in the find() methods can now compare text of the current node with [. = "text"], not only text in children. Predicates also allow adding spaces for better readability. (Contributed by Stefan Behnel in bpo-31648.)
xmlrpc.server¶
SimpleXMLRPCDispatcher.register_function() can now be used as a decorator.
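A minimal sketch of the decorator form (the server here binds an ephemeral local port purely for illustration):

```python
from xmlrpc.server import SimpleXMLRPCServer

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)

@server.register_function          # new in 3.7: plain decorator usage
def add(a, b):
    return a + b

# the function is registered under its own name and still callable directly
assert add(2, 3) == 5
server.server_close()
```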
(Contributed by Xiang Zhang in bpo-7769.)
zipapp¶
Function create_archive() now accepts an optional filter argument to allow the user to select which files should be included in the archive. (Contributed by Irmen de Jong in bpo-31072.)
Function create_archive() now accepts an optional compressed argument to generate a compressed archive. A command line option --compress has also been added to support compression. (Contributed by Zhiming Wang in bpo-31638.)
zipfile¶
ZipFile now accepts the new compresslevel parameter to control the compression level. (Contributed by Bo Bayles in bpo-21417.)
Subdirectories in archives created by ZipFile are now stored in alphabetical order. (Contributed by Bernhard M. Wiedemann in bpo-30693.)
C API Changes¶
A new API for thread-local storage has been implemented. See PEP 539: New C API for Thread-Local Storage for an overview and Thread Specific Storage (TSS) API for a complete reference. (Contributed by Masayuki Yamamoto in bpo-25658.)
The new context variables functionality exposes a number of new C APIs.
The new PyImport_GetModule() function returns the previously imported module with the given name. (Contributed by Eric Snow in bpo-28411.)
The new Py_RETURN_RICHCOMPARE macro eases writing rich comparison functions. (Contributed by Petr Viktorin in bpo-23699.)
The new Py_UNREACHABLE macro can be used to mark unreachable code paths. (Contributed by Barry Warsaw in bpo-31338.)
The tracemalloc module now exposes a C API through the new PyTraceMalloc_Track() and PyTraceMalloc_Untrack() functions. (Contributed by Victor Stinner in bpo-30054.)
The new import__find__load__start and import__find__load__done static markers can be used to trace module imports.
(Contributed by Christian Heimes in bpo-31574.)
The fields name and doc of the structures PyMemberDef, PyGetSetDef, PyStructSequence_Field, PyStructSequence_Desc, and wrapperbase are now of type const char * rather than char *. (Contributed by Serhiy Storchaka in bpo-28761.)
The result of PyUnicode_AsUTF8AndSize() and PyUnicode_AsUTF8() is now of type const char * rather than char *. (Contributed by Serhiy Storchaka in bpo-28769.)
The result of PyMapping_Keys(), PyMapping_Values() and PyMapping_Items() is now always a list, rather than a list or a tuple. (Contributed by Oren Milman in bpo-28280.)
Added functions PySlice_Unpack() and PySlice_AdjustIndices(). (Contributed by Serhiy Storchaka in bpo-27867.)
PyOS_AfterFork() is deprecated in favour of the new functions PyOS_BeforeFork(), PyOS_AfterFork_Parent() and PyOS_AfterFork_Child(). (Contributed by Antoine Pitrou in bpo-16500.)
The PyExc_RecursionErrorInst singleton that was part of the public API has been removed, as its members were never cleared and could cause a segfault during finalization of the interpreter. (Contributed by Xavier de Gaye in bpo-22898 and bpo-30697.)
Added C API support for timezones, with the timezone constructors PyTimeZone_FromOffset() and PyTimeZone_FromOffsetAndName(), and access to the UTC singleton with PyDateTime_TimeZone_UTC. (Contributed by Paul Ganssle in bpo-10381.)
The type of results of PyThread_start_new_thread() and PyThread_get_thread_ident(), and the id parameter of PyThreadState_SetAsyncExc(), changed from long to unsigned long. (Contributed by Serhiy Storchaka in bpo-6532.)
PyUnicode_AsWideCharString() now raises a ValueError if the second argument is NULL and the wchar_t* string contains null characters.
(Contributed by Serhiy Storchaka in bpo-30708.)
Changes to the startup sequence and the management of dynamic memory allocators mean that the long documented requirement to call Py_Initialize() before calling most C API functions is now relied on more heavily, and failing to abide by it may lead to segfaults in embedding applications. See the Porting to Python 3.7 section in this document and the Before Python Initialization section in the C API documentation for more details.
The new PyInterpreterState_GetID() returns the unique ID for a given interpreter. (Contributed by Eric Snow in bpo-29102.)
Py_DecodeLocale() and Py_EncodeLocale() now use the UTF-8 encoding when the UTF-8 mode is enabled. (Contributed by Victor Stinner in bpo-29240.)
PyUnicode_DecodeLocaleAndSize() and PyUnicode_EncodeLocale() now use the current locale encoding for the surrogateescape error handler. (Contributed by Victor Stinner in bpo-29240.)
The start and end parameters of PyUnicode_FindChar() are now adjusted to behave like string slices. (Contributed by Xiang Zhang in bpo-28822.)
Build Changes¶
Support for building --without-threads has been removed. The threading module is now always available. (Contributed by Antoine Pitrou in bpo-31370.)
A full copy of libffi is no longer bundled for use when building the _ctypes module on non-OSX UNIX platforms. An installed copy of libffi is now required when building _ctypes on such platforms. (Contributed by Zachary Ware in bpo-27979.)
The Windows build process no longer depends on Subversion to pull in external sources; a Python script is used to download zipfiles from GitHub instead. If Python 3.6 is not found on the system (via py -3.6), NuGet is used to download a copy of 32-bit Python for this purpose. (Contributed by Zachary Ware in bpo-30450.)
The ssl module requires OpenSSL 1.0.2 or 1.1 compatible libssl. OpenSSL 1.0.1 has reached end of lifetime on 2016-12-31 and is no longer supported.
LibreSSL is temporarily not supported as well. LibreSSL releases up to version 2.6.4 are missing required OpenSSL 1.0.2 APIs.
Optimizations¶
The overhead of calling many methods of various standard library classes implemented in C has been significantly reduced by porting more code to use the METH_FASTCALL convention. (Contributed by Victor Stinner in bpo-29300, bpo-29507, bpo-29452, and bpo-29286.)
Various optimizations have reduced Python startup time by 10% on Linux and up to 30% on macOS. (Contributed by Victor Stinner, INADA Naoki in bpo-29585, and Ivan Levkivskyi in bpo-31333.)
Method calls are now up to 20% faster due to the bytecode changes which avoid creating bound method instances. (Contributed by Yury Selivanov and INADA Naoki in bpo-26110.)
The asyncio module received a number of notable optimizations for commonly used functions:
The asyncio.get_event_loop() function has been reimplemented in C to make it up to 15 times faster. (Contributed by Yury Selivanov in bpo-32296.)
asyncio.Future callback management has been optimized. (Contributed by Yury Selivanov in bpo-32348.)
asyncio.gather() is now up to 15% faster. (Contributed by Yury Selivanov in bpo-32355.)
asyncio.sleep() is now up to 2 times faster when the delay argument is zero or negative. (Contributed by Andrew Svetlov in bpo-32351.)
The performance overhead of asyncio debug mode has been reduced.
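The optimized paths above are exercised by everyday code such as the following minimal sketch, which uses asyncio.run() (also added in 3.7):

```python
import asyncio

async def main():
    # sleep(0) takes the optimized zero-delay path; gather() was also sped up
    return await asyncio.gather(*(asyncio.sleep(0, result=i) for i in range(3)))

assert asyncio.run(main()) == [0, 1, 2]
```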
(Contributed by Antoine Pitrou in bpo-31970.)
As a result of PEP 560 work, the import time of typing has been reduced by a factor of 7, and many typing operations are now faster. (Contributed by Ivan Levkivskyi in bpo-32226.)
sorted() and list.sort() have been optimized for common cases to be up to 40-75% faster. (Contributed by Elliot Gorokhovsky in bpo-28685.)
dict.copy() is now up to 5.5 times faster. (Contributed by Yury Selivanov in bpo-31179.)
hasattr() and getattr() are now about 4 times faster when name is not found and obj does not override object.__getattr__() or object.__getattribute__(). (Contributed by INADA Naoki in bpo-32544.)
Searching for certain Unicode characters (like Ukrainian capital “Є”) in a string was up to 25 times slower than searching for other characters. It is now only 3 times slower in the worst case. (Contributed by Serhiy Storchaka in bpo-24821.)
The collections.namedtuple() factory has been reimplemented to make the creation of named tuples 4 to 6 times faster. (Contributed by Jelle Zijlstra with further improvements by INADA Naoki, Serhiy Storchaka, and Raymond Hettinger in bpo-28638.)
datetime.date.fromordinal() and datetime.date.fromtimestamp() are now up to 30% faster in the common case. (Contributed by Paul Ganssle in bpo-32403.)
The os.fwalk() function is now up to 2 times faster thanks to the use of os.scandir(). (Contributed by Serhiy Storchaka in bpo-25996.)
The speed of the shutil.rmtree() function has been improved by 20–40% thanks to the use of the os.scandir() function. (Contributed by Serhiy Storchaka in bpo-28564.)
Optimized case-insensitive matching and searching of regular expressions. Searching some patterns can now be up to 20 times faster. (Contributed by Serhiy Storchaka in bpo-30285.)
re.compile() now converts the flags parameter to an int object if it is a RegexFlag.
It is now as fast as Python 3.5, and faster than\nPython 3.6 by about 10% depending on the pattern.\n(Contributed by INADA Naoki in bpo-31671.)\nThe modify()\nmethods of the classes\nselectors.EpollSelector\n, selectors.PollSelector\nand selectors.DevpollSelector\nmay be around 10% faster under\nheavy loads. (Contributed by Giampaolo Rodola’ in bpo-30014)\nConstant folding has been moved from the peephole optimizer to the new AST optimizer, which is able to perform optimizations more consistently. (Contributed by Eugene Toder and INADA Naoki in bpo-29469 and bpo-11549.)\nMost functions and methods in abc\nhave been rewritten in C.\nThis makes creation of abstract base classes, and calling isinstance()\nand issubclass()\non them, 1.5x faster. This also reduces Python\nstart-up time by up to 10%. (Contributed by Ivan Levkivskyi and INADA Naoki\nin bpo-31333)\nSignificant speed improvements to alternate constructors for\ndatetime.date\nand datetime.datetime\nby using fast-path\nconstructors when not constructing subclasses. (Contributed by Paul Ganssle\nin bpo-32403)\nThe speed of comparison of array.array\ninstances has been\nimproved considerably in certain cases. It is now from 10x to 70x faster\nwhen comparing arrays holding values of the same integer type.\n(Contributed by Adrian Wielgosik in bpo-24700.)\nThe math.erf()\nand math.erfc()\nfunctions now use the (faster)\nC library implementation on most platforms.\n(Contributed by Serhiy Storchaka in bpo-26121.)\nOther CPython Implementation Changes\u00b6\nTrace hooks may now opt out of receiving the\nline\nevents and opt into receiving the opcode\nevents from the interpreter by setting the corresponding new f_trace_lines\nand f_trace_opcodes\nattributes on the frame being traced. (Contributed by Nick Coghlan in bpo-31344.)\nFixed some consistency problems with namespace package module attributes. 
Namespace module objects now have an\n__file__\nthat is set toNone\n(previously unset), and their__spec__.origin\nis also set toNone\n(previously the string\"namespace\"\n). See bpo-32305. Also, the namespace module object\u2019s__spec__.loader\nis set to the same value as__loader__\n(previously, the former was set toNone\n). See bpo-32303.The\nlocals()\ndictionary now displays in the lexical order that variables were defined. Previously, the order was undefined. (Contributed by Raymond Hettinger in bpo-32690.)The\ndistutils\nupload\ncommand no longer tries to change CR end-of-line characters to CRLF. This fixes a corruption issue with sdists that ended with a byte equivalent to CR. (Contributed by Bo Bayles in bpo-32304.)\nDeprecated Python Behavior\u00b6\nYield expressions (both yield\nand yield from\nclauses) are now deprecated\nin comprehensions and generator expressions (aside from the iterable expression\nin the leftmost for\nclause). This ensures that comprehensions\nalways immediately return a container of the appropriate type (rather than\npotentially returning a generator iterator object), while generator\nexpressions won\u2019t attempt to interleave their implicit output with the output\nfrom any explicit yield expressions. In Python 3.7, such expressions emit\nDeprecationWarning\nwhen compiled, in Python 3.8 this will be a\nSyntaxError\n.\n(Contributed by Serhiy Storchaka in bpo-10544.)\nReturning a subclass of complex\nfrom object.__complex__()\nis\ndeprecated and will be an error in future Python versions. 
This makes\n__complex__()\nconsistent with object.__int__()\nand\nobject.__float__()\n.\n(Contributed by Serhiy Storchaka in bpo-28894.)\nDeprecated Python modules, functions and methods\u00b6\naifc\u00b6\naifc.openfp()\nhas been deprecated and will be removed in Python 3.9.\nUse aifc.open()\ninstead.\n(Contributed by Brian Curtin in bpo-31985.)\nasyncio\u00b6\nSupport for directly await\n-ing instances of asyncio.Lock\nand\nother asyncio synchronization primitives has been deprecated. An\nasynchronous context manager must be used in order to acquire and release\nthe synchronization resource.\n(Contributed by Andrew Svetlov in bpo-32253.)\nThe asyncio.Task.current_task()\nand asyncio.Task.all_tasks()\nmethods have been deprecated.\n(Contributed by Andrew Svetlov in bpo-32250.)\ncollections\u00b6\nIn Python 3.8, the abstract base classes in collections.abc\nwill no\nlonger be exposed in the regular collections\nmodule. This will help\ncreate a clearer distinction between the concrete classes and the abstract\nbase classes.\n(Contributed by Serhiy Storchaka in bpo-25988.)\ndbm\u00b6\ndbm.dumb\nnow supports reading read-only files and no longer writes the\nindex file when it is not changed. A deprecation warning is now emitted\nif the index file is missing and recreated in the 'r'\nand 'w'\nmodes (this will be an error in future Python releases).\n(Contributed by Serhiy Storchaka in bpo-28847.)\nenum\u00b6\nIn Python 3.8, attempting to check for non-Enum objects in Enum\nclasses will raise a TypeError\n(e.g. 1 in Color\n); similarly,\nattempting to check for non-Flag objects in a Flag\nmember will\nraise TypeError\n(e.g. 1 in Perm.RW\n); currently, both operations\nreturn False\ninstead.\n(Contributed by Ethan Furman in bpo-33217.)\ngettext\u00b6\nUsing non-integer value for selecting a plural form in gettext\nis\nnow deprecated. It never correctly worked. 
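The asyncio deprecation above changes how synchronization primitives are acquired: instead of awaiting the lock object directly, use it as an asynchronous context manager. A minimal sketch (using asyncio.run(), itself added in 3.7):

```python
import asyncio

async def main():
    lock = asyncio.Lock()
    # Deprecated spelling (3.7, removed later):  with await lock: ...
    # Supported spelling:
    async with lock:
        print("held:", lock.locked())      # held: True
    print("released:", not lock.locked())  # released: True

asyncio.run(main())
```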
(Contributed by Serhiy Storchaka\nin bpo-28692.)\nimportlib\u00b6\nMethods\nMetaPathFinder.find_module()\n(replaced by\nMetaPathFinder.find_spec()\n)\nand\nPathEntryFinder.find_loader()\n(replaced by\nPathEntryFinder.find_spec()\n)\nboth deprecated in Python 3.4 now emit DeprecationWarning\n.\n(Contributed by Matthias Bussonnier in bpo-29576.)\nThe importlib.abc.ResourceLoader\nABC has been deprecated in\nfavour of importlib.abc.ResourceReader\n.\nlocale\u00b6\nlocale.format()\nhas been deprecated, use locale.format_string()\ninstead. (Contributed by Garvit in bpo-10379.)\nmacpath\u00b6\nThe macpath\nis now deprecated and will be removed in Python 3.8.\n(Contributed by Chi Hsuan Yen in bpo-9850.)\nthreading\u00b6\ndummy_threading\nand _dummy_thread\nhave been deprecated. It is\nno longer possible to build Python with threading disabled.\nUse threading\ninstead.\n(Contributed by Antoine Pitrou in bpo-31370.)\nsocket\u00b6\nThe silent argument value truncation in socket.htons()\nand\nsocket.ntohs()\nhas been deprecated. In future versions of Python,\nif the passed argument is larger than 16 bits, an exception will be raised.\n(Contributed by Oren Milman in bpo-28332.)\nssl\u00b6\nssl.wrap_socket()\nis deprecated. 
Use\nssl.SSLContext.wrap_socket()\ninstead.\n(Contributed by Christian Heimes in bpo-28124.)\nsunau\u00b6\nsunau.openfp()\nhas been deprecated and will be removed in Python 3.9.\nUse sunau.open()\ninstead.\n(Contributed by Brian Curtin in bpo-31985.)\nsys\u00b6\nDeprecated sys.set_coroutine_wrapper()\nand\nsys.get_coroutine_wrapper()\n.\nThe undocumented sys.callstats()\nfunction has been deprecated and\nwill be removed in a future Python version.\n(Contributed by Victor Stinner in bpo-28799.)\nwave\u00b6\nwave.openfp()\nhas been deprecated and will be removed in Python 3.9.\nUse wave.open()\ninstead.\n(Contributed by Brian Curtin in bpo-31985.)\nDeprecated functions and types of the C API\u00b6\nFunction PySlice_GetIndicesEx()\nis deprecated and replaced with\na macro if Py_LIMITED_API\nis not set or set to a value in the range\nbetween 0x03050400\nand 0x03060000\n(not inclusive), or is 0x03060100\nor higher. (Contributed by Serhiy Storchaka in bpo-27867.)\nPyOS_AfterFork()\nhas been deprecated. 
Use PyOS_BeforeFork()\n,\nPyOS_AfterFork_Parent()\nor PyOS_AfterFork_Child()\ninstead.\n(Contributed by Antoine Pitrou in bpo-16500.)\nPlatform Support Removals\u00b6\nFreeBSD 9 and older are no longer officially supported.\nFor full Unicode support, including within extension modules, *nix platforms are now expected to provide at least one of\nC.UTF-8\n(full locale),C.utf8\n(full locale) orUTF-8\n(LC_CTYPE\n-only locale) as an alternative to the legacyASCII\n-basedC\nlocale.OpenSSL 0.9.8 and 1.0.1 are no longer supported, which means building CPython 3.7 with SSL/TLS support on older platforms still using these versions requires custom build options that link to a more recent version of OpenSSL.\nNotably, this issue affects the Debian 8 (aka \u201cjessie\u201d) and Ubuntu 14.04 (aka \u201cTrusty\u201d) LTS Linux distributions, as they still use OpenSSL 1.0.1 by default.\nDebian 9 (\u201cstretch\u201d) and Ubuntu 16.04 (\u201cxenial\u201d), as well as recent releases of other LTS Linux releases (e.g. RHEL/CentOS 7.5, SLES 12-SP3), use OpenSSL 1.0.2 or later, and remain supported in the default build configuration.\nCPython\u2019s own CI configuration file provides an example of using the SSL compatibility testing infrastructure in CPython\u2019s test suite to build and link against OpenSSL 1.1.0 rather than an outdated system provided OpenSSL.\nAPI and Feature Removals\u00b6\nThe following features and APIs have been removed from Python 3.7:\nThe\nos.stat_float_times()\nfunction has been removed. It was introduced in Python 2.3 for backward compatibility with Python 2.2, and was deprecated since Python 3.1.Unknown escapes consisting of\n'\\'\nand an ASCII letter in replacement templates forre.sub()\nwere deprecated in Python 3.5, and will now cause an error.Removed support of the exclude argument in\ntarfile.TarFile.add()\n. It was deprecated in Python 2.7 and 3.2. 
Use the filter argument instead.\nThe ntpath.splitunc()\nfunction was deprecated in Python 3.1, and has now been removed. Use splitdrive()\ninstead.\ncollections.namedtuple()\nno longer supports the verbose parameter or the _source\nattribute, which showed the generated source code for the named tuple class. This was part of an optimization designed to speed up class creation. (Contributed by Jelle Zijlstra with further improvements by INADA Naoki, Serhiy Storchaka, and Raymond Hettinger in bpo-28638.)\nThe functions bool()\n, float()\n, list()\nand tuple()\nno longer take keyword arguments. The first argument of int()\ncan now be passed only as a positional argument.\nThe classes Plist\n, Dict\nand _InternalDict\nin the plistlib\nmodule, deprecated since Python 2.4, have been removed. Dict values in the result of the functions readPlist()\nand readPlistFromBytes()\nare now normal dicts. You can no longer use attribute access to access items of these dictionaries.\nThe asyncio.windows_utils.socketpair()\nfunction has been removed. Use the socket.socketpair()\nfunction instead; it is available on all platforms since Python 3.5. asyncio.windows_utils.socketpair\nwas just an alias to socket.socketpair\non Python 3.5 and newer.\nasyncio\nno longer exports the selectors\nand _overlapped\nmodules as asyncio.selectors\nand asyncio._overlapped\n. Replace from asyncio import selectors\nwith import selectors\n.\nDirect instantiation of ssl.SSLSocket\nand ssl.SSLObject\nobjects is now prohibited. The constructors were never documented, tested, or designed as public constructors. Users were supposed to use ssl.wrap_socket()\nor ssl.SSLContext\n. (Contributed by Christian Heimes in bpo-32951.)\nThe unused distutils\ninstall_misc\ncommand has been removed. (Contributed by Eric N. Vander Weele in bpo-29218.)\nModule Removals\u00b6\nThe fpectl\nmodule has been removed. 
It was never enabled by\ndefault, never worked correctly on x86-64, and it changed the Python\nABI in ways that caused unexpected breakage of C extensions.\n(Contributed by Nathaniel J. Smith in bpo-29137.)\nWindows-only Changes\u00b6\nThe python launcher, (py.exe), can accept 32 & 64 bit specifiers without\nhaving to specify a minor version as well. So py -3-32\nand py -3-64\nbecome valid as well as py -3.7-32\n, also the -m-64 and -m.n-64 forms\nare now accepted to force 64 bit python even if 32 bit would have otherwise\nbeen used. If the specified version is not available py.exe will error exit.\n(Contributed by Steve Barnes in bpo-30291.)\nThe launcher can be run as py -0\nto produce a list of the installed pythons,\nwith default marked with an asterisk. Running py -0p\nwill include the paths.\nIf py is run with a version specifier that cannot be matched it will also print\nthe short form list of available specifiers.\n(Contributed by Steve Barnes in bpo-30362.)\nPorting to Python 3.7\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in Python Behavior\u00b6\nasync\nandawait\nnames are now reserved keywords. Code using these names as identifiers will now raise aSyntaxError\n. (Contributed by Jelle Zijlstra in bpo-30406.)PEP 479 is enabled for all code in Python 3.7, meaning that\nStopIteration\nexceptions raised directly or indirectly in coroutines and generators are transformed intoRuntimeError\nexceptions. (Contributed by Yury Selivanov in bpo-32670.)object.__aiter__()\nmethods can no longer be declared as asynchronous. 
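The PEP 479 change is the porting note most likely to surface in real code: a StopIteration that escapes a generator body no longer silently ends iteration. A small illustration:

```python
def gen():
    yield 1
    # Before 3.7 this silently terminated the generator; under PEP 479
    # it is converted into a RuntimeError.
    raise StopIteration

g = gen()
print(next(g))  # 1
try:
    next(g)
except RuntimeError as exc:
    print(type(exc).__name__, "-", exc)
```

The fix for affected code is to `return` from the generator instead of raising StopIteration explicitly.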
(Contributed by Yury Selivanov in bpo-31709.)Due to an oversight, earlier Python versions erroneously accepted the following syntax:\nf(1 for x in [1],) class C(1 for x in [1]): pass\nPython 3.7 now correctly raises a\nSyntaxError\n, as a generator expression always needs to be directly inside a set of parentheses and cannot have a comma on either side, and the duplication of the parentheses can be omitted only on calls. (Contributed by Serhiy Storchaka in bpo-32012 and bpo-32023.)When using the\n-m\nswitch, the initial working directory is now added tosys.path\n, rather than an empty string (which dynamically denoted the current working directory at the time of each import). Any programs that are checking for the empty string, or otherwise relying on the previous behaviour, will need to be updated accordingly (e.g. by also checking foros.getcwd()\noros.path.dirname(__main__.__file__)\n, depending on why the code was checking for the empty string in the first place).\nChanges in the Python API\u00b6\nsocketserver.ThreadingMixIn.server_close\nnow waits until all non-daemon threads complete. Set the newsocketserver.ThreadingMixIn.block_on_close\nclass attribute toFalse\nto get the pre-3.7 behaviour. (Contributed by Victor Stinner in bpo-31233 and bpo-33540.)socketserver.ForkingMixIn.server_close\nnow waits until all child processes complete. Set the newsocketserver.ForkingMixIn.block_on_close\nclass attribute toFalse\nto get the pre-3.7 behaviour. (Contributed by Victor Stinner in bpo-31151 and bpo-33540.)The\nlocale.localeconv()\nfunction now temporarily sets theLC_CTYPE\nlocale to the value ofLC_NUMERIC\nin some cases. (Contributed by Victor Stinner in bpo-31900.)pkgutil.walk_packages()\nnow raises aValueError\nif path is a string. Previously an empty list was returned. (Contributed by Sanyam Khurana in bpo-24744.)A format string argument for\nstring.Formatter.format()\nis now positional-only. Passing it as a keyword argument was deprecated in Python 3.5. 
(Contributed by Serhiy Storchaka in bpo-29193.)Attributes\nkey\n,value\nandcoded_value\nof classhttp.cookies.Morsel\nare now read-only. Assigning to them was deprecated in Python 3.5. Use theset()\nmethod for setting them. (Contributed by Serhiy Storchaka in bpo-29192.)The mode argument of\nos.makedirs()\nno longer affects the file permission bits of newly created intermediate-level directories. To set their file permission bits you can set the umask before invokingmakedirs()\n. (Contributed by Serhiy Storchaka in bpo-19930.)The\nstruct.Struct.format\ntype is nowstr\ninstead ofbytes\n. (Contributed by Victor Stinner in bpo-21071.)cgi.parse_multipart()\nnow accepts the encoding and errors arguments and returns the same results asFieldStorage\n: for non-file fields, the value associated to a key is a list of strings, not bytes. (Contributed by Pierre Quentel in bpo-29979.)Due to internal changes in\nsocket\n, callingsocket.fromshare()\non a socket created bysocket.share\nin older Python versions is not supported.repr\nforBaseException\nhas changed to not include the trailing comma. Most exceptions are affected by this change. (Contributed by Serhiy Storchaka in bpo-30399.)repr\nfordatetime.timedelta\nhas changed to include the keyword arguments in the output. (Contributed by Utkarsh Upadhyay in bpo-30302.)Because\nshutil.rmtree()\nis now implemented using theos.scandir()\nfunction, the user specified handler onerror is now called with the first argumentos.scandir\ninstead ofos.listdir\nwhen listing the directory is failed.Support for nested sets and set operations in regular expressions as in Unicode Technical Standard #18 might be added in the future. This would change the syntax. To facilitate this future change a\nFutureWarning\nwill be raised in ambiguous cases for the time being. That include sets starting with a literal'['\nor containing literal character sequences'--'\n,'&&'\n,'~~'\n, and'||'\n. To avoid a warning, escape them with a backslash. 
(Contributed by Serhiy Storchaka in bpo-30349.)The result of splitting a string on a\nregular expression\nthat could match an empty string has been changed. For example splitting onr'\\s*'\nwill now split not only on whitespaces as it did previously, but also on empty strings before all non-whitespace characters and just before the end of the string. The previous behavior can be restored by changing the pattern tor'\\s+'\n. AFutureWarning\nwas emitted for such patterns since Python 3.5.For patterns that match both empty and non-empty strings, the result of searching for all matches may also be changed in other cases. For example in the string\n'a\\n\\n'\n, the patternr'(?m)^\\s*?$'\nwill not only match empty strings at positions 2 and 3, but also the string'\\n'\nat positions 2\u20133. To match only blank lines, the pattern should be rewritten asr'(?m)^[^\\S\\n]*$'\n.re.sub()\nnow replaces empty matches adjacent to a previous non-empty match. For examplere.sub('x*', '-', 'abxd')\nreturns now'-a-b--d-'\ninstead of'-a-b-d-'\n(the first minus between \u2018b\u2019 and \u2018d\u2019 replaces \u2018x\u2019, and the second minus replaces an empty string between \u2018x\u2019 and \u2018d\u2019).(Contributed by Serhiy Storchaka in bpo-25054 and bpo-32308.)\nChange\nre.escape()\nto only escape regex special characters instead of escaping all characters other than ASCII letters, numbers, and'_'\n. (Contributed by Serhiy Storchaka in bpo-29995.)tracemalloc.Traceback\nframes are now sorted from oldest to most recent to be more consistent withtraceback\n. (Contributed by Jesse Bakker in bpo-32121.)On OSes that support\nsocket.SOCK_NONBLOCK\norsocket.SOCK_CLOEXEC\nbit flags, thesocket.type\nno longer has them applied. Therefore, checks likeif sock.type == socket.SOCK_STREAM\nwork as expected on all platforms. 
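The empty-match changes to re.split() and re.sub() described above are easiest to see on a concrete string; the '-a-b--d-' result is the example given for bpo-25054:

```python
import re

# An empty match adjacent to a previous non-empty match is now replaced
# too: the 'x' is replaced, and so is the empty string between 'x' and 'd'.
print(re.sub('x*', '-', 'abxd'))   # -a-b--d-

# Splitting on a pattern that can match the empty string now also splits
# before non-whitespace characters and just before the end of the string.
pieces = re.split(r'\s*', 'a b')
print(pieces)
# Joining the pieces always recovers exactly the non-separator text:
print(''.join(pieces))             # ab
```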
(Contributed by Yury Selivanov in bpo-32331.)On Windows the default for the close_fds argument of\nsubprocess.Popen\nwas changed fromFalse\ntoTrue\nwhen redirecting the standard handles. If you previously depended on handles being inherited when usingsubprocess.Popen\nwith standard io redirection, you will have to passclose_fds=False\nto preserve the previous behaviour, or useSTARTUPINFO.lpAttributeList\n.importlib.machinery.PathFinder.invalidate_caches()\n\u2013 which implicitly affectsimportlib.invalidate_caches()\n\u2013 now deletes entries insys.path_importer_cache\nwhich are set toNone\n. (Contributed by Brett Cannon in bpo-33169.)In\nasyncio\n,loop.sock_recv()\n,loop.sock_sendall()\n,loop.sock_accept()\n,loop.getaddrinfo()\n,loop.getnameinfo()\nhave been changed to be proper coroutine methods to match their documentation. Previously, these methods returnedasyncio.Future\ninstances. (Contributed by Yury Selivanov in bpo-32327.)asyncio.Server.sockets\nnow returns a copy of the internal list of server sockets, instead of returning it directly. (Contributed by Yury Selivanov in bpo-32662.)Struct.format\nis now astr\ninstance instead of abytes\ninstance. (Contributed by Victor Stinner in bpo-21071.)argparse\nsubparsers can now be made mandatory by passingrequired=True\ntoArgumentParser.add_subparsers()\n. (Contributed by Anthony Sottile in bpo-26510.)ast.literal_eval()\nis now stricter. Addition and subtraction of arbitrary numbers are no longer allowed. (Contributed by Serhiy Storchaka in bpo-31778.)Calendar.itermonthdates\nwill now consistently raise an exception when a date falls outside of the0001-01-01\nthrough9999-12-31\nrange. To support applications that cannot tolerate such exceptions, the newCalendar.itermonthdays3\nandCalendar.itermonthdays4\ncan be used. The new methods return tuples and are not restricted by the range supported bydatetime.date\n. 
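The argparse change above means a missing subcommand can now be made a hard error rather than a silent no-op. A minimal sketch (the 'build' and 'test' subcommands are invented for illustration):

```python
from argparse import ArgumentParser

parser = ArgumentParser(prog='tool')
subparsers = parser.add_subparsers(dest='command', required=True)
subparsers.add_parser('build')
subparsers.add_parser('test')

args = parser.parse_args(['build'])
print(args.command)   # build

# With required=True, omitting the subcommand makes argparse print a
# usage error and exit instead of returning a namespace with command=None.
```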
(Contributed by Alexander Belopolsky in bpo-28292.)collections.ChainMap\nnow preserves the order of the underlying mappings. (Contributed by Raymond Hettinger in bpo-32792.)The\nsubmit()\nmethod ofconcurrent.futures.ThreadPoolExecutor\nandconcurrent.futures.ProcessPoolExecutor\nnow raises aRuntimeError\nif called during interpreter shutdown. (Contributed by Mark Nemec in bpo-33097.)The\nconfigparser.ConfigParser\nconstructor now usesread_dict()\nto process the default values, making its behavior consistent with the rest of the parser. Non-string keys and values in the defaults dictionary are now being implicitly converted to strings. (Contributed by James Tocknell in bpo-23835.)Several undocumented internal imports were removed. One example is that\nos.errno\nis no longer available; useimport errno\ndirectly instead. Note that such undocumented internal imports may be removed any time without notice, even in micro version releases.\nChanges in the C API\u00b6\nThe function PySlice_GetIndicesEx()\nis considered unsafe for\nresizable sequences. If the slice indices are not instances of int\n,\nbut objects that implement the __index__()\nmethod, the sequence can be\nresized after passing its length to PySlice_GetIndicesEx()\n. This\ncan lead to returning indices out of the length of the sequence. 
To\navoid possible problems, use the new functions PySlice_Unpack()\nand\nPySlice_AdjustIndices()\n.\n(Contributed by Serhiy Storchaka in bpo-27867.)\nCPython bytecode changes\u00b6\nThere are two new opcodes: LOAD_METHOD\nand CALL_METHOD\n.\n(Contributed by Yury Selivanov and INADA Naoki in bpo-26110.)\nThe STORE_ANNOTATION\nopcode has been removed.\n(Contributed by Mark Shannon in bpo-32550.)\nWindows-only Changes\u00b6\nThe file used to override sys.path\nis now called\n._pth\ninstead of 'sys.path'\n.\nSee Finding modules for more information.\n(Contributed by Steve Dower in bpo-28137.)\nOther CPython implementation changes\u00b6\nIn preparation for potential future changes to the public CPython runtime initialization API (see PEP 432 for an initial, but somewhat outdated, draft), CPython’s internal startup and configuration management logic has been significantly refactored. While these updates are intended to be entirely transparent to both embedding applications and users of the regular CPython CLI, they’re being mentioned here as the refactoring changes the internal order of various operations during interpreter startup, and hence may uncover previously latent defects, either in embedding applications or in CPython itself. (Initially contributed by Nick Coghlan and Eric Snow as part of bpo-22257, and further updated by Nick, Eric, and Victor Stinner in a number of other issues.) Some known details affected:\nPySys_AddWarnOptionUnicode()\nis not currently usable by embedding applications due to the requirement to create a Unicode object prior to calling Py_Initialize\n. 
Use PySys_AddWarnOption()\ninstead.\nWarnings filters added by an embedding application with\nPySys_AddWarnOption()\nshould now more consistently take precedence over the default filters set by the interpreter.\nDue to changes in the way the default warnings filters are configured,\nsetting Py_BytesWarningFlag\nto a value greater than one is no longer\nsufficient to both emit BytesWarning\nmessages and have them converted\nto exceptions. Instead, the flag must be set (to cause the warnings to be\nemitted in the first place), and an explicit error::BytesWarning\nwarnings filter added to convert them to exceptions.\nDue to a change in the way docstrings are handled by the compiler, the\nimplicit return None\nin a function body consisting solely of a docstring\nis now marked as occurring on the same line as the docstring, not on the\nfunction’s header line.\nThe current exception state has been moved from the frame object to the co-routine. This simplified the interpreter and fixed a couple of obscure bugs caused by having to swap exception state when entering or exiting a generator. (Contributed by Mark Shannon in bpo-25612.)\nNotable changes in Python 3.7.1\u00b6\nStarting in 3.7.1, Py_Initialize()\nnow consistently reads and respects\nall of the same environment settings as Py_Main()\n(in earlier Python\nversions, it respected an ill-defined subset of those environment variables,\nwhile in Python 3.7.0 it didn’t read any of them due to bpo-34247). If\nthis behavior is unwanted, set Py_IgnoreEnvironmentFlag\nto 1 before\ncalling Py_Initialize()\n.\nIn 3.7.1 the C API for Context Variables\nwas updated to use\nPyObject\npointers. See also bpo-34762.\nIn 3.7.1 the tokenize\nmodule now implicitly emits a NEWLINE\ntoken\nwhen provided with input that does not have a trailing new line. 
This behavior\nnow matches what the C tokenizer does internally.\n(Contributed by Ammar Askar in bpo-33899.)\nNotable changes in Python 3.7.2\u00b6\nIn 3.7.2, venv\non Windows no longer copies the original binaries, but\ncreates redirector scripts named python.exe\nand pythonw.exe\ninstead.\nThis resolves a long standing issue where all virtual environments would have\nto be upgraded or recreated with each Python update. However, note that this\nrelease will still require recreation of virtual environments in order to get\nthe new scripts.\nNotable changes in Python 3.7.6\u00b6\nDue to significant security concerns, the reuse_address parameter of\nasyncio.loop.create_datagram_endpoint()\nis no longer supported. This is\nbecause of the behavior of the socket option SO_REUSEADDR\nin UDP. For more\ndetails, see the documentation for loop.create_datagram_endpoint()\n.\n(Contributed by Kyle Stanley, Antoine Pitrou, and Yury Selivanov in\nbpo-37228.)\nNotable changes in Python 3.7.10\u00b6\nEarlier Python versions allowed using both ;\nand &\nas\nquery parameter separators in urllib.parse.parse_qs()\nand\nurllib.parse.parse_qsl()\n. Due to security concerns, and to conform with\nnewer W3C recommendations, this has been changed to allow only a single\nseparator key, with &\nas the default. This change also affects\ncgi.parse()\nand cgi.parse_multipart()\nas they use the affected\nfunctions internally. For more details, please see their respective\ndocumentation.\n(Contributed by Adam Goldschmidt, Senthil Kumaran and Ken Jin in bpo-42967.)\nNotable changes in Python 3.7.11\u00b6\nA security fix alters the ftplib.FTP\nbehavior to not trust the\nIPv4 address sent from the remote server when setting up a passive data\nchannel. We reuse the ftp server IP address instead. For unusual code\nrequiring the old behavior, set a trust_server_pasv_ipv4_address\nattribute on your FTP instance to True\n. 
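The urllib.parse change above is directly visible in parsing results: a ';' in a query string is no longer treated as a separator by default, and code that relied on the old behaviour must opt back in with the separator argument:

```python
from urllib.parse import parse_qs

print(parse_qs('a=1&b=2'))                 # {'a': ['1'], 'b': ['2']}
# ';' is now part of the value rather than a second separator:
print(parse_qs('a=1;b=2'))                 # {'a': ['1;b=2']}
# Opting in to ';' as the (single) separator:
print(parse_qs('a=1;b=2', separator=';'))  # {'a': ['1'], 'b': ['2']}
```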
(See gh-87451)\nThe presence of newline or tab characters in parts of a URL allows for some\nforms of attacks. Following the WHATWG specification that updates RFC 3986,\nASCII newline \\n\n, \\r\nand tab \\t\ncharacters are stripped from the\nURL by the parser urllib.parse()\npreventing such attacks. The removal\ncharacters are controlled by a new module level variable\nurllib.parse._UNSAFE_URL_BYTES_TO_REMOVE\n. (See gh-88048)\nNotable security feature in 3.7.14\u00b6\nConverting between int\nand str\nin bases other than 2\n(binary), 4, 8 (octal), 16 (hexadecimal), or 32 such as base 10 (decimal)\nnow raises a ValueError\nif the number of digits in string form is\nabove a limit to avoid potential denial of service attacks due to the\nalgorithmic complexity. This is a mitigation for CVE 2020-10735.\nThis limit can be configured or disabled by environment variable, command\nline flag, or sys\nAPIs. See the integer string conversion\nlength limitation documentation. The default limit\nis 4300 digits in string form.", "code_snippets": ["\n ", "\n ", " ", " ", " ", " ", "\n ", "\n\n ", " ", " ", " ", " ", "\n ", "\n\n", "\n ", "\n", " ", "\n", "\n", "\n ", " ", "\n ", " ", "\n ", " ", " ", " ", "\n\n", " ", " ", " ", "\n", " ", "\n", " ", " ", " ", "\n\n", " ", " ", "\n ", "\n\n", "\n", " ", " ", " ", " ", "\n\n", " ", " ", " ", " ", "\n ", "\n"], "language": "Python", "source": "python.org", "token_count": 21032} +{"url": "https://docs.python.org/3/library/email.examples.html", "title": ": Examples", "content": "email\n: Examples\u00b6\nHere are a few examples of how to use the email\npackage to read, write,\nand send simple email messages, as well as more complex MIME messages.\nFirst, let\u2019s see how to create and send a simple text message (both the text content and the addresses may contain unicode characters):\n# Import smtplib for the actual sending function\nimport smtplib\n# Import the email modules we'll need\nfrom email.message import EmailMessage\n# Open 
the plain text file whose name is in textfile for reading.\nwith open(textfile) as fp:\n# Create a text/plain message\nmsg = EmailMessage()\nmsg.set_content(fp.read())\n# me == the sender's email address\n# you == the recipient's email address\nmsg['Subject'] = f'The contents of {textfile}'\nmsg['From'] = me\nmsg['To'] = you\n# Send the message via our own SMTP server.\ns = smtplib.SMTP('localhost')\ns.send_message(msg)\ns.quit()\nParsing RFC 822 headers can easily be done by the using the classes\nfrom the parser\nmodule:\n# Import the email modules we'll need\n#from email.parser import BytesParser\nfrom email.parser import Parser\nfrom email.policy import default\n# If the e-mail headers are in a file, uncomment these two lines:\n# with open(messagefile, 'rb') as fp:\n# headers = BytesParser(policy=default).parse(fp)\n# Or for parsing headers in a string (this is an uncommon operation), use:\nheaders = Parser(policy=default).parsestr(\n'From: Foo Bar \\n'\n'To: \\n'\n'Subject: Test message\\n'\n'\\n'\n'Body would go here\\n')\n# Now the header items can be accessed as a dictionary:\nprint('To: {}'.format(headers['to']))\nprint('From: {}'.format(headers['from']))\nprint('Subject: {}'.format(headers['subject']))\n# You can also access the parts of the addresses:\nprint('Recipient username: {}'.format(headers['to'].addresses[0].username))\nprint('Sender name: {}'.format(headers['from'].addresses[0].display_name))\nHere\u2019s an example of how to send a MIME message containing a bunch of family pictures that may be residing in a directory:\n# Import smtplib for the actual sending function.\nimport smtplib\n# Here are the email package modules we'll need.\nfrom email.message import EmailMessage\n# Create the container email message.\nmsg = EmailMessage()\nmsg['Subject'] = 'Our family reunion'\n# me == the sender's email address\n# family = the list of all recipients' email addresses\nmsg['From'] = me\nmsg['To'] = ', '.join(family)\nmsg.preamble = 'You will not see 
this in a MIME-aware mail reader.\\n'\n# Open the files in binary mode. You can also omit the subtype\n# if you want MIMEImage to guess it.\nfor file in pngfiles:\nwith open(file, 'rb') as fp:\nimg_data = fp.read()\nmsg.add_attachment(img_data, maintype='image',\nsubtype='png')\n# Send the email via our own SMTP server.\nwith smtplib.SMTP('localhost') as s:\ns.send_message(msg)\nHere\u2019s an example of how to send the entire contents of a directory as an email message: [1]\n#!/usr/bin/env python3\n\"\"\"Send the contents of a directory as a MIME message.\"\"\"\nimport os\nimport smtplib\n# For guessing MIME type based on file name extension\nimport mimetypes\nfrom argparse import ArgumentParser\nfrom email.message import EmailMessage\nfrom email.policy import SMTP\ndef main():\nparser = ArgumentParser(description=\"\"\"\\\nSend the contents of a directory as a MIME message.\nUnless the -o option is given, the email is sent by forwarding to your local\nSMTP server, which then does the normal delivery process. Your local machine\nmust be running an SMTP server.\n\"\"\")\nparser.add_argument('-d', '--directory',\nhelp=\"\"\"Mail the contents of the specified directory,\notherwise use the current directory. 
Only the regular\nfiles in the directory are sent, and we don't recurse to\nsubdirectories.\"\"\")\nparser.add_argument('-o', '--output',\nmetavar='FILE',\nhelp=\"\"\"Print the composed message to FILE instead of\nsending the message to the SMTP server.\"\"\")\nparser.add_argument('-s', '--sender', required=True,\nhelp='The value of the From: header (required)')\nparser.add_argument('-r', '--recipient', required=True,\naction='append', metavar='RECIPIENT',\ndefault=[], dest='recipients',\nhelp='A To: header value (at least one required)')\nargs = parser.parse_args()\ndirectory = args.directory\nif not directory:\ndirectory = '.'\n# Create the message\nmsg = EmailMessage()\nmsg['Subject'] = f'Contents of directory {os.path.abspath(directory)}'\nmsg['To'] = ', '.join(args.recipients)\nmsg['From'] = args.sender\nmsg.preamble = 'You will not see this in a MIME-aware mail reader.\\n'\nfor filename in os.listdir(directory):\npath = os.path.join(directory, filename)\nif not os.path.isfile(path):\ncontinue\n# Guess the content type based on the file's extension. 
Encoding\n# will be ignored, although we should check for simple things like\n# gzip'd or compressed files.\nctype, encoding = mimetypes.guess_file_type(path)\nif ctype is None or encoding is not None:\n# No guess could be made, or the file is encoded (compressed), so\n# use a generic bag-of-bits type.\nctype = 'application/octet-stream'\nmaintype, subtype = ctype.split('/', 1)\nwith open(path, 'rb') as fp:\nmsg.add_attachment(fp.read(),\nmaintype=maintype,\nsubtype=subtype,\nfilename=filename)\n# Now send or store the message\nif args.output:\nwith open(args.output, 'wb') as fp:\nfp.write(msg.as_bytes(policy=SMTP))\nelse:\nwith smtplib.SMTP('localhost') as s:\ns.send_message(msg)\nif __name__ == '__main__':\nmain()\nHere\u2019s an example of how to unpack a MIME message like the one above, into a directory of files:\n#!/usr/bin/env python3\n\"\"\"Unpack a MIME message into a directory of files.\"\"\"\nimport os\nimport email\nimport mimetypes\nfrom email.policy import default\nfrom argparse import ArgumentParser\ndef main():\nparser = ArgumentParser(description=\"\"\"\\\nUnpack a MIME message into a directory of files.\n\"\"\")\nparser.add_argument('-d', '--directory', required=True,\nhelp=\"\"\"Unpack the MIME message into the named\ndirectory, which will be created if it doesn't already\nexist.\"\"\")\nparser.add_argument('msgfile')\nargs = parser.parse_args()\nwith open(args.msgfile, 'rb') as fp:\nmsg = email.message_from_binary_file(fp, policy=default)\ntry:\nos.mkdir(args.directory)\nexcept FileExistsError:\npass\ncounter = 1\nfor part in msg.walk():\n# multipart/* are just containers\nif part.get_content_maintype() == 'multipart':\ncontinue\n# Applications should really sanitize the given filename so that an\n# email message can't be used to overwrite important files\nfilename = part.get_filename()\nif not filename:\next = mimetypes.guess_extension(part.get_content_type())\nif not ext:\n# Use a generic bag-of-bits extension\next = '.bin'\nfilename = 
f'part-{counter:03d}{ext}'\ncounter += 1\nwith open(os.path.join(args.directory, filename), 'wb') as fp:\nfp.write(part.get_payload(decode=True))\nif __name__ == '__main__':\nmain()\nHere\u2019s an example of how to create an HTML message with an alternative plain text version. To make things a bit more interesting, we include a related image in the html part, and we save a copy of what we are going to send to disk, as well as sending it.\n#!/usr/bin/env python3\nimport smtplib\nfrom email.message import EmailMessage\nfrom email.headerregistry import Address\nfrom email.utils import make_msgid\n# Create the base text message.\nmsg = EmailMessage()\nmsg['Subject'] = \"Pourquoi pas des asperges pour ce midi ?\"\nmsg['From'] = Address(\"Pep\u00e9 Le Pew\", \"pepe\", \"example.com\")\nmsg['To'] = (Address(\"Penelope Pussycat\", \"penelope\", \"example.com\"),\nAddress(\"Fabrette Pussycat\", \"fabrette\", \"example.com\"))\nmsg.set_content(\"\"\"\\\nSalut!\nCette recette [1] sera s\u00fbrement un tr\u00e8s bon repas.\n[1] http://www.yummly.com/recipe/Roasted-Asparagus-Epicurious-203718\n--Pep\u00e9\n\"\"\")\n# Add the html version. This converts the message into a multipart/alternative\n# container, with the original text message as the first part and the new html\n# message as the second part.\nasparagus_cid = make_msgid()\nmsg.add_alternative(\"\"\"\\\n<html>\n<head></head>\n<body>\n<p>Salut!</p>\n<p>Cette\n<a href=\"http://www.yummly.com/recipe/Roasted-Asparagus-Epicurious-203718\">\nrecette\n</a> sera s\u00fbrement un tr\u00e8s bon repas.\n</p>\n<img src=\"cid:{asparagus_cid}\" />\n</body>\n</html>\n\"\"\".format(asparagus_cid=asparagus_cid[1:-1]), subtype='html')\n# note that we needed to peel the <> off the msgid for use in the html.\n# Now add the related image to the html part.\nwith open(\"roasted-asparagus.jpg\", 'rb') as img:\nmsg.get_payload()[1].add_related(img.read(), 'image', 'jpeg',\ncid=asparagus_cid)\n# Make a local copy of what we are going to send.\nwith open('outgoing.msg', 'wb') as f:\nf.write(bytes(msg))\n# Send the message via local SMTP server.\nwith smtplib.SMTP('localhost') as s:\ns.send_message(msg)\nIf we were sent the message from the last example, here is one way we could process it:\nimport os\nimport sys\nimport tempfile\nimport mimetypes\nimport webbrowser\n# Import the email modules we'll need\nfrom email import policy\nfrom email.parser import BytesParser\ndef magic_html_parser(html_text, partfiles):\n\"\"\"Return safety-sanitized html linked to partfiles.\nRewrite the href=\"cid:....\" attributes to point to the filenames in partfiles.\nThough not trivial, this should be possible using html.parser.\n\"\"\"\nraise NotImplementedError(\"Add the magic needed\")\n# In a real program you'd get the filename from the arguments.\nwith open('outgoing.msg', 'rb') as fp:\nmsg = BytesParser(policy=policy.default).parse(fp)\n# Now the header items can be accessed as a dictionary, and any non-ASCII will\n# be converted to unicode:\nprint('To:', msg['to'])\nprint('From:', msg['from'])\nprint('Subject:', msg['subject'])\n# If we want to print a preview of the message content, we can extract whatever\n# the least formatted payload is and print the first three lines. 
Of course,\n# if the message has no plain text part printing the first three lines of html\n# is probably useless, but this is just a conceptual example.\nsimplest = msg.get_body(preferencelist=('plain', 'html'))\nprint()\nprint(''.join(simplest.get_content().splitlines(keepends=True)[:3]))\nans = input(\"View full message?\")\nif ans.lower()[0] == 'n':\nsys.exit()\n# We can extract the richest alternative in order to display it:\nrichest = msg.get_body()\npartfiles = {}\nif richest['content-type'].maintype == 'text':\nif richest['content-type'].subtype == 'plain':\nfor line in richest.get_content().splitlines():\nprint(line)\nsys.exit()\nelif richest['content-type'].subtype == 'html':\nbody = richest\nelse:\nprint(\"Don't know how to display {}\".format(richest.get_content_type()))\nsys.exit()\nelif richest['content-type'].content_type == 'multipart/related':\nbody = richest.get_body(preferencelist=('html',))\nfor part in richest.iter_attachments():\nfn = part.get_filename()\nif fn:\nextension = os.path.splitext(part.get_filename())[1]\nelse:\nextension = mimetypes.guess_extension(part.get_content_type())\nwith tempfile.NamedTemporaryFile(suffix=extension, delete=False) as f:\nf.write(part.get_content())\n# again strip the <> to go from email form of cid to html form.\npartfiles[part['content-id'][1:-1]] = f.name\nelse:\nprint(\"Don't know how to display {}\".format(richest.get_content_type()))\nsys.exit()\nwith tempfile.NamedTemporaryFile(mode='w', delete=False) as f:\nf.write(magic_html_parser(body.get_content(), partfiles))\nwebbrowser.open(f.name)\nos.remove(f.name)\nfor fn in partfiles.values():\nos.remove(fn)\n# Of course, there are lots of email messages that could break this simple\n# minded program, but it will handle the most common ones.\nUp to the prompt, the output from the above is:\nTo: Penelope Pussycat <penelope@example.com>, Fabrette Pussycat <fabrette@example.com>\nFrom: Pep\u00e9 Le Pew <pepe@example.com>\nSubject: Pourquoi pas des asperges pour ce midi ?\nSalut!\nCette recette [1] sera s\u00fbrement un 
tr\u00e8s bon repas.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2908}
+{"url": "https://docs.python.org/3/whatsnew/3.9.html", "title": "What\u2019s New In Python 3.9", "content": "What\u2019s New In Python 3.9\u00b6\n- Editor:\n\u0141ukasz Langa\nThis article explains the new features in Python 3.9, compared to 3.8. Python 3.9 was released on October 5, 2020. 
For full details, see the changelog.\nSee also\nPEP 596 - Python 3.9 Release Schedule\nSummary \u2013 Release highlights\u00b6\nNew syntax features:\nPEP 584, union operators added to\ndict\n;PEP 585, type hinting generics in standard collections;\nPEP 614, relaxed grammar restrictions on decorators.\nNew built-in features:\nPEP 616, string methods to remove prefixes and suffixes.\nNew features in the standard library:\nPEP 593, flexible function and variable annotations;\nos.pidfd_open()\nadded that allows process management without races and signals.\nInterpreter improvements:\nPEP 573, fast access to module state from methods of C extension types;\nPEP 617, CPython now uses a new parser based on PEG;\na number of Python builtins (range, tuple, set, frozenset, list, dict) are now sped up using PEP 590 vectorcall;\ngarbage collection does not block on resurrected objects;\na number of Python modules (\n_abc\n,audioop\n,_bz2\n,_codecs\n,_contextvars\n,_crypt\n,_functools\n,_json\n,_locale\n,math\n,operator\n,resource\n,time\n,_weakref\n) now use multiphase initialization as defined by PEP 489;a number of standard library modules (\naudioop\n,ast\n,grp\n,_hashlib\n,pwd\n,_posixsubprocess\n,random\n,select\n,struct\n,termios\n,zlib\n) are now using the stable ABI defined by PEP 384.\nNew library modules:\nPEP 615, the IANA Time Zone Database is now present in the standard library in the\nzoneinfo\nmodule;an implementation of a topological sort of a graph is now provided in the new\ngraphlib\nmodule.\nRelease process changes:\nPEP 602, CPython adopts an annual release cycle.\nYou should check for DeprecationWarning in your code\u00b6\nWhen Python 2.7 was still supported, a lot of functionality in Python 3\nwas kept for backward compatibility with Python 2.7. With the end of Python\n2 support, these backward compatibility layers have been removed, or will\nbe removed soon. Most of them emitted a DeprecationWarning\nwarning for\nseveral years. 
For example, using collections.Mapping\ninstead of\ncollections.abc.Mapping\nemits a DeprecationWarning\nsince Python\n3.3, released in 2012.\nTest your application with the -W\ndefault\ncommand-line option to see\nDeprecationWarning\nand PendingDeprecationWarning\n, or even with\n-W\nerror\nto treat them as errors. Warnings Filter can be used to ignore warnings from third-party code.\nPython 3.9 is the last version providing those Python 2 backward compatibility layers, giving Python project maintainers more time to organize the removal of Python 2 support and to add support for Python 3.9.\nAliases to Abstract Base Classes in\nthe collections\nmodule, like the collections.Mapping\nalias to\ncollections.abc.Mapping\n, are kept for one last release for backward\ncompatibility. They will be removed from Python 3.10.\nMore generally, try to run your tests in the Python Development Mode, which helps prepare your code for compatibility with the next Python version.\nNote: a number of pre-existing deprecations were removed in this version of Python as well. Consult the Removed section.\nNew Features\u00b6\nDictionary Merge & Update Operators\u00b6\nMerge (|\n) and update (|=\n) operators have been added to the built-in\ndict\nclass. Those complement the existing dict.update\nmethod and\n{**d1, **d2}\nsyntax for merging dictionaries.\nExample:\n>>> x = {\"key1\": \"value1 from x\", \"key2\": \"value2 from x\"}\n>>> y = {\"key2\": \"value2 from y\", \"key3\": \"value3 from y\"}\n>>> x | y\n{'key1': 'value1 from x', 'key2': 'value2 from y', 'key3': 'value3 from y'}\n>>> y | x\n{'key2': 'value2 from x', 'key3': 'value3 from y', 'key1': 'value1 from x'}\nSee PEP 584 for a full description. (Contributed by Brandt Bucher in bpo-36144.)\nNew String Methods to Remove Prefixes and Suffixes\u00b6\nstr.removeprefix(prefix)\nand\nstr.removesuffix(suffix)\nhave been added\nto easily remove an unneeded prefix or a suffix from a string. 
Corresponding\nbytes\n, bytearray\n, and collections.UserString\nmethods have also been\nadded. See PEP 616 for a full description. (Contributed by Dennis Sweeney in\nbpo-39939.)\nType Hinting Generics in Standard Collections\u00b6\nIn type annotations you can now use built-in collection types such as\nlist\nand dict\nas generic types instead of importing the\ncorresponding capitalized types (e.g. List\nor Dict\n) from\ntyping\n. Some other types in the standard library are also now generic,\nfor example queue.Queue\n.\nExample:\ndef greet_all(names: list[str]) -> None:\nfor name in names:\nprint(\"Hello\", name)\nSee PEP 585 for more details. (Contributed by Guido van Rossum, Ethan Smith, and Batuhan Ta\u015fkaya in bpo-39481.)\nNew Parser\u00b6\nPython 3.9 uses a new parser, based on PEG instead of LL(1). The new parser\u2019s performance is roughly comparable to that of the old parser, but the PEG formalism is more flexible than LL(1) when it comes to designing new language features. We\u2019ll start using this flexibility in Python 3.10 and later.\nThe ast\nmodule uses the new parser and produces the same AST as\nthe old parser.\nIn Python 3.10, the old parser will be deleted and so will all\nfunctionality that depends on it (primarily the parser\nmodule,\nwhich has long been deprecated). In Python 3.9 only, you can switch\nback to the LL(1) parser using a command line switch (-X\noldparser\n) or an environment variable (PYTHONOLDPARSER=1\n).\nSee PEP 617 for more details. (Contributed by Guido van Rossum, Pablo Galindo and Lysandros Nikolaou in bpo-40334.)\nOther Language Changes\u00b6\n__import__()\nnow raisesImportError\ninstead ofValueError\n, which used to occur when a relative import went past its top-level package. 
(Contributed by Ngalim Siregar in bpo-37444.)Python now gets the absolute path of the script filename specified on the command line (ex:\npython3 script.py\n): the__file__\nattribute of the__main__\nmodule became an absolute path, rather than a relative path. These paths now remain valid after the current directory is changed byos.chdir()\n. As a side effect, the traceback also displays the absolute path for__main__\nmodule frames in this case. (Contributed by Victor Stinner in bpo-20443.)In the Python Development Mode and in debug build, the encoding and errors arguments are now checked for string encoding and decoding operations. Examples:\nopen()\n,str.encode()\nandbytes.decode()\n.By default, for best performance, the errors argument is only checked at the first encoding/decoding error and the encoding argument is sometimes ignored for empty strings. (Contributed by Victor Stinner in bpo-37388.)\n\"\".replace(\"\", s, n)\nnow returnss\ninstead of an empty string for all non-zeron\n. It is now consistent with\"\".replace(\"\", s)\n. There are similar changes forbytes\nandbytearray\nobjects. (Contributed by Serhiy Storchaka in bpo-28029.)Any valid expression can now be used as a decorator. Previously, the grammar was much more restrictive. See PEP 614 for details. (Contributed by Brandt Bucher in bpo-39702.)\nImproved help for the\ntyping\nmodule. Docstrings are now shown for all special forms and special generic aliases (likeUnion\nandList\n). Usinghelp()\nwith generic alias likeList[int]\nwill show the help for the correspondent concrete type (list\nin this case). (Contributed by Serhiy Storchaka in bpo-40257.)Parallel running of\naclose()\n/asend()\n/athrow()\nis now prohibited, andag_running\nnow reflects the actual running status of the async generator. 
(Contributed by Yury Selivanov in bpo-30773.)Unexpected errors in calling the\n__iter__\nmethod are no longer masked byTypeError\nin thein\noperator and functionscontains()\n,indexOf()\nandcountOf()\nof theoperator\nmodule. (Contributed by Serhiy Storchaka in bpo-40824.)Unparenthesized lambda expressions can no longer be the expression part in an\nif\nclause in comprehensions and generator expressions. See bpo-41848 and bpo-43755 for details.\nNew Modules\u00b6\nzoneinfo\u00b6\nThe zoneinfo\nmodule brings support for the IANA time zone database to\nthe standard library. It adds zoneinfo.ZoneInfo\n, a concrete\ndatetime.tzinfo\nimplementation backed by the system\u2019s time zone data.\nExample:\n>>> from zoneinfo import ZoneInfo\n>>> from datetime import datetime, timedelta\n>>> # Daylight saving time\n>>> dt = datetime(2020, 10, 31, 12, tzinfo=ZoneInfo(\"America/Los_Angeles\"))\n>>> print(dt)\n2020-10-31 12:00:00-07:00\n>>> dt.tzname()\n'PDT'\n>>> # Standard time\n>>> dt += timedelta(days=7)\n>>> print(dt)\n2020-11-07 12:00:00-08:00\n>>> print(dt.tzname())\nPST\nAs a fall-back source of data for platforms that don\u2019t ship the IANA database, the tzdata module was released as a first-party package \u2013 distributed via PyPI and maintained by the CPython core team.\nSee also\n- PEP 615 \u2013 Support for the IANA Time Zone Database in the Standard Library\nPEP written and implemented by Paul Ganssle\ngraphlib\u00b6\nA new module, graphlib\n, was added that contains the\ngraphlib.TopologicalSorter\nclass to offer functionality to perform\ntopological sorting of graphs. 
(Contributed by Pablo Galindo, Tim Peters and\nLarry Hastings in bpo-17005.)\nImproved Modules\u00b6\nast\u00b6\nAdded the indent option to dump()\nwhich allows it to produce a\nmultiline indented output.\n(Contributed by Serhiy Storchaka in bpo-37995.)\nAdded ast.unparse()\nas a function in the ast\nmodule that can\nbe used to unparse an ast.AST\nobject and produce a string with code\nthat would produce an equivalent ast.AST\nobject when parsed.\n(Contributed by Pablo Galindo and Batuhan Taskaya in bpo-38870.)\nAdded docstrings to AST nodes that contains the ASDL signature used to construct that node. (Contributed by Batuhan Taskaya in bpo-39638.)\nasyncio\u00b6\nDue to significant security concerns, the reuse_address parameter of\nasyncio.loop.create_datagram_endpoint()\nis no longer supported. This is\nbecause of the behavior of the socket option SO_REUSEADDR\nin UDP. For more\ndetails, see the documentation for loop.create_datagram_endpoint()\n.\n(Contributed by Kyle Stanley, Antoine Pitrou, and Yury Selivanov in\nbpo-37228.)\nAdded a new coroutine shutdown_default_executor()\nthat schedules a shutdown for the default executor that waits on the\nThreadPoolExecutor\nto finish closing. Also,\nasyncio.run()\nhas been updated to use the new coroutine.\n(Contributed by Kyle Stanley in bpo-34037.)\nAdded asyncio.PidfdChildWatcher\n, a Linux-specific child watcher\nimplementation that polls process file descriptors. (bpo-38692)\nAdded a new coroutine asyncio.to_thread()\n. 
It is mainly used for\nrunning IO-bound functions in a separate thread to avoid blocking the event\nloop, and essentially works as a high-level version of\nrun_in_executor()\nthat can directly take keyword arguments.\n(Contributed by Kyle Stanley and Yury Selivanov in bpo-32309.)\nWhen cancelling the task due to a timeout, asyncio.wait_for()\nwill now\nwait until the cancellation is complete also in the case when timeout is\n<= 0, like it does with positive timeouts.\n(Contributed by Elvis Pranskevichus in bpo-32751.)\nasyncio\nnow raises TypeError\nwhen calling incompatible\nmethods with an ssl.SSLSocket\nsocket.\n(Contributed by Ido Michael in bpo-37404.)\ncompileall\u00b6\nAdded new possibility to use hardlinks for duplicated .pyc\nfiles: hardlink_dupes parameter and \u2013hardlink-dupes command line option.\n(Contributed by Lum\u00edr \u2018Frenzy\u2019 Balhar in bpo-40495.)\nAdded new options for path manipulation in resulting .pyc\nfiles: stripdir, prependdir, limit_sl_dest parameters and -s, -p, -e command line options.\nAdded the possibility to specify the option for an optimization level multiple times.\n(Contributed by Lum\u00edr \u2018Frenzy\u2019 Balhar in bpo-38112.)\nconcurrent.futures\u00b6\nAdded a new cancel_futures parameter to\nconcurrent.futures.Executor.shutdown()\nthat cancels all pending futures\nwhich have not started running, instead of waiting for them to complete before\nshutting down the executor.\n(Contributed by Kyle Stanley in bpo-39349.)\nRemoved daemon threads from ThreadPoolExecutor\nand ProcessPoolExecutor\n. This improves\ncompatibility with subinterpreters and predictability in their shutdown\nprocesses. (Contributed by Kyle Stanley in bpo-39812.)\nWorkers in ProcessPoolExecutor\nare now spawned on\ndemand, only when there are no available idle workers to reuse. 
This optimizes\nstartup overhead and reduces the amount of lost CPU time to idle workers.\n(Contributed by Kyle Stanley in bpo-39207.)\ncurses\u00b6\nAdded curses.get_escdelay()\n, curses.set_escdelay()\n,\ncurses.get_tabsize()\n, and curses.set_tabsize()\nfunctions.\n(Contributed by Anthony Sottile in bpo-38312.)\ndatetime\u00b6\nThe isocalendar()\nof datetime.date\nand isocalendar()\nof datetime.datetime\nmethods now returns a namedtuple()\ninstead of a tuple\n.\n(Contributed by Donghee Na in bpo-24416.)\ndistutils\u00b6\nThe upload command now creates SHA2-256 and Blake2b-256 hash digests. It skips MD5 on platforms that block MD5 digest. (Contributed by Christian Heimes in bpo-40698.)\nfcntl\u00b6\nAdded constants fcntl.F_OFD_GETLK\n, fcntl.F_OFD_SETLK\nand fcntl.F_OFD_SETLKW\n.\n(Contributed by Donghee Na in bpo-38602.)\nftplib\u00b6\nFTP\nand FTP_TLS\nnow raise a ValueError\nif the given timeout for their constructor is zero to prevent the creation of\na non-blocking socket. (Contributed by Donghee Na in bpo-39259.)\ngc\u00b6\nWhen the garbage collector makes a collection in which some objects resurrect (they are reachable from outside the isolated cycles after the finalizers have been executed), do not block the collection of all objects that are still unreachable. (Contributed by Pablo Galindo and Tim Peters in bpo-38379.)\nAdded a new function gc.is_finalized()\nto check if an object has been\nfinalized by the garbage collector. (Contributed by Pablo Galindo in\nbpo-39322.)\nhashlib\u00b6\nThe hashlib\nmodule can now use SHA3 hashes and SHAKE XOF from OpenSSL\nwhen available.\n(Contributed by Christian Heimes in bpo-37630.)\nBuiltin hash modules can now be disabled with\n./configure --without-builtin-hashlib-hashes\nor selectively enabled with\ne.g. 
./configure --with-builtin-hashlib-hashes=sha3,blake2\nto force use\nof OpenSSL based implementation.\n(Contributed by Christian Heimes in bpo-40479)\nhttp\u00b6\nHTTP status codes 103 EARLY_HINTS\n, 418 IM_A_TEAPOT\nand 425 TOO_EARLY\nare added to\nhttp.HTTPStatus\n. (Contributed by Donghee Na in bpo-39509 and Ross Rhodes in bpo-39507.)\nIDLE and idlelib\u00b6\nAdded option to toggle cursor blink off. (Contributed by Zackery Spytz in bpo-4603.)\nEscape key now closes IDLE completion windows. (Contributed by Johnny Najera in bpo-38944.)\nAdded keywords to module name completion list. (Contributed by Terry J. Reedy in bpo-37765.)\nNew in 3.9 maintenance releases\nMake IDLE invoke sys.excepthook()\n(when started without \u2018-n\u2019).\nUser hooks were previously ignored. (Contributed by Ken Hilton in\nbpo-43008.)\nThe changes above have been backported to 3.8 maintenance releases.\nRearrange the settings dialog. Split the General tab into Windows and Shell/Ed tabs. Move help sources, which extend the Help menu, to the Extensions tab. Make space for new options and shorten the dialog. The latter makes the dialog better fit small screens. (Contributed by Terry Jan Reedy in bpo-40468.) Move the indent space setting from the Font tab to the new Windows tab. (Contributed by Mark Roseman and Terry Jan Reedy in bpo-33962.)\nApply syntax highlighting to .pyi\nfiles. (Contributed by Alex\nWaygood and Terry Jan Reedy in bpo-45447.)\nimaplib\u00b6\nIMAP4\nand IMAP4_SSL\nnow have\nan optional timeout parameter for their constructors.\nAlso, the open()\nmethod now has an optional timeout parameter\nwith this change. The overridden methods of IMAP4_SSL\nand\nIMAP4_stream\nwere applied to this change.\n(Contributed by Donghee Na in bpo-38615.)\nimaplib.IMAP4.unselect()\nis added.\nimaplib.IMAP4.unselect()\nfrees server\u2019s resources associated with the\nselected mailbox and returns the server to the authenticated\nstate. 
This command performs the same actions as imaplib.IMAP4.close()\n, except\nthat no messages are permanently removed from the currently\nselected mailbox. (Contributed by Donghee Na in bpo-40375.)\nimportlib\u00b6\nTo improve consistency with import statements, importlib.util.resolve_name()\nnow raises ImportError\ninstead of ValueError\nfor invalid relative\nimport attempts.\n(Contributed by Ngalim Siregar in bpo-37444.)\nImport loaders which publish immutable module objects can now publish immutable packages in addition to individual modules. (Contributed by Dino Viehland in bpo-39336.)\nAdded importlib.resources.files()\nfunction with support for\nsubdirectories in package data, matching backport in importlib_resources\nversion 1.5.\n(Contributed by Jason R. Coombs in bpo-39791.)\nRefreshed importlib.metadata\nfrom importlib_metadata\nversion 1.6.1.\ninspect\u00b6\ninspect.BoundArguments.arguments\nis changed from OrderedDict\nto regular\ndict. (Contributed by Inada Naoki in bpo-36350 and bpo-39775.)\nipaddress\u00b6\nipaddress\nnow supports IPv6 Scoped Addresses (IPv6 address with suffix %\n).\nScoped IPv6 addresses can be parsed using ipaddress.IPv6Address\n.\nIf present, scope zone ID is available through the scope_id\nattribute.\n(Contributed by Oleksandr Pavliuk in bpo-34788.)\nStarting with Python 3.9.5 the ipaddress\nmodule no longer\naccepts any leading zeros in IPv4 address strings.\n(Contributed by Christian Heimes in bpo-36384).\nmath\u00b6\nExpanded the math.gcd()\nfunction to handle multiple arguments.\nFormerly, it only supported two arguments.\n(Contributed by Serhiy Storchaka in bpo-39648.)\nAdded math.lcm()\n: return the least common multiple of specified arguments.\n(Contributed by Mark Dickinson, Ananthakrishnan and Serhiy Storchaka in\nbpo-39479 and bpo-39648.)\nAdded math.nextafter()\n: return the next floating-point value after x\ntowards y.\n(Contributed by Victor Stinner in bpo-39288.)\nAdded math.ulp()\n: return the value of the least 
significant bit\nof a float.\n(Contributed by Victor Stinner in bpo-39310.)\nmultiprocessing\u00b6\nThe multiprocessing.SimpleQueue\nclass has a new\nclose()\nmethod to explicitly close the\nqueue.\n(Contributed by Victor Stinner in bpo-30966.)\nnntplib\u00b6\nNNTP\nand NNTP_SSL\nnow raise a ValueError\nif the given timeout for their constructor is zero to prevent the creation of\na non-blocking socket. (Contributed by Donghee Na in bpo-39259.)\nos\u00b6\nAdded CLD_KILLED\nand CLD_STOPPED\nfor si_code\n.\n(Contributed by Donghee Na in bpo-38493.)\nExposed the Linux-specific os.pidfd_open()\n(bpo-38692) and\nos.P_PIDFD\n(bpo-38713) for process management with file\ndescriptors.\nThe os.unsetenv()\nfunction is now also available on Windows.\n(Contributed by Victor Stinner in bpo-39413.)\nThe os.putenv()\nand os.unsetenv()\nfunctions are now always\navailable.\n(Contributed by Victor Stinner in bpo-39395.)\nAdded os.waitstatus_to_exitcode()\nfunction:\nconvert a wait status to an exit code.\n(Contributed by Victor Stinner in bpo-40094.)\npathlib\u00b6\nAdded pathlib.Path.readlink()\nwhich acts similarly to\nos.readlink()\n.\n(Contributed by Girts Folkmanis in bpo-30618)\npdb\u00b6\nOn Windows now Pdb\nsupports ~/.pdbrc\n.\n(Contributed by Tim Hopper and Dan Lidral-Porter in bpo-20523.)\npoplib\u00b6\nPOP3\nand POP3_SSL\nnow raise a ValueError\nif the given timeout for their constructor is zero to prevent the creation of\na non-blocking socket. 
(Contributed by Donghee Na in bpo-39259.)\npprint\u00b6\npprint\ncan now pretty-print types.SimpleNamespace\n.\n(Contributed by Carl Bordum Hansen in bpo-37376.)\npydoc\u00b6\nThe documentation string is now shown not only for class, function,\nmethod, etc., but for any object that has its own __doc__\nattribute.\n(Contributed by Serhiy Storchaka in bpo-40257.)\nrandom\u00b6\nAdded a new random.Random.randbytes()\nmethod: generate random bytes.\n(Contributed by Victor Stinner in bpo-40286.)\nsignal\u00b6\nExposed the Linux-specific signal.pidfd_send_signal()\nfor sending\nsignals to a process using a file descriptor instead of a pid. (bpo-38712)\nsmtplib\u00b6\nSMTP\nand SMTP_SSL\nnow raise a ValueError\nif the given timeout for their constructor is zero to prevent the creation of\na non-blocking socket. (Contributed by Donghee Na in bpo-39259.)\nLMTP\nconstructor now has an optional timeout parameter.\n(Contributed by Donghee Na in bpo-39329.)\nsocket\u00b6\nThe socket\nmodule now exports the CAN_RAW_JOIN_FILTERS\nconstant on Linux 4.1 and greater.\n(Contributed by Stefan Tatschner and Zackery Spytz in bpo-25780.)\nThe socket module now supports the CAN_J1939\nprotocol on\nplatforms that support it. (Contributed by Karl Ding in bpo-40291.)\nThe socket module now has the socket.send_fds()\nand\nsocket.recv_fds()\nfunctions. (Contributed by Joannah Nanjekye, Shinya\nOkano and Victor Stinner in bpo-28724.)\ntime\u00b6\nOn AIX, thread_time()\nis now implemented with thread_cputime()\nwhich has nanosecond resolution, rather than\nclock_gettime(CLOCK_THREAD_CPUTIME_ID)\nwhich has a resolution of 10 milliseconds.\n(Contributed by Batuhan Taskaya in bpo-40192)\nsys\u00b6\nAdded a new sys.platlibdir\nattribute: name of the platform-specific\nlibrary directory. It is used to build the path of the standard library and the\npaths of installed extension modules. It is equal to \"lib\"\non most\nplatforms. 
On Fedora and SuSE, it is equal to \"lib64\" on 64-bit platforms. (Contributed by Jan Mat\u011bjek, Mat\u011bj Cepl, Charalampos Stratakis and Victor Stinner in bpo-1294959.)\nPreviously, sys.stderr was block-buffered when non-interactive. Now stderr defaults to always being line-buffered. (Contributed by Jendrik Seipp in bpo-13601.)\ntracemalloc\u00b6\nAdded tracemalloc.reset_peak() to set the peak size of traced memory blocks to the current size, to measure the peak of specific pieces of code. (Contributed by Huon Wilson in bpo-40630.)\ntyping\u00b6\nPEP 593 introduced a typing.Annotated type to decorate existing types with context-specific metadata, and a new include_extras parameter to typing.get_type_hints() to access the metadata at runtime. (Contributed by Till Varoquaux and Konstantin Kashin.)\nunicodedata\u00b6\nThe Unicode database has been updated to version 13.0.0. (bpo-39926).\nvenv\u00b6\nThe activation scripts provided by venv now all specify their prompt customization consistently by always using the value specified by __VENV_PROMPT__. Previously some scripts unconditionally used __VENV_PROMPT__, others only if it happened to be set (which was the default case), and one used __VENV_NAME__ instead. (Contributed by Brett Cannon in bpo-37663.)\nxml\u00b6\nWhite space characters within attributes are now preserved when serializing xml.etree.ElementTree to an XML file. EOLNs are no longer normalized to \u201c\\n\u201d. This is the result of discussion about how to interpret section 2.11 of the XML spec. (Contributed by Mefistotelis in bpo-39011.)\nOptimizations\u00b6\nOptimized the idiom for assigning a temporary variable in comprehensions. Now for y in [expr] in comprehensions is as fast as a simple assignment y = expr. 
For example: sums = [s for s in [0] for x in data for s in [s + x]]\nUnlike the := operator, this idiom does not leak a variable to the outer scope. (Contributed by Serhiy Storchaka in bpo-32856.)\nOptimized signal handling in multithreaded applications. If a thread other than the main thread gets a signal, the bytecode evaluation loop is no longer interrupted at each bytecode instruction to check for pending signals which cannot be handled. Only the main thread of the main interpreter can handle signals.\nPreviously, the bytecode evaluation loop was interrupted at each instruction until the main thread handled signals. (Contributed by Victor Stinner in bpo-40010.)\nOptimized the subprocess module on FreeBSD using closefrom(). (Contributed by Ed Maste, Conrad Meyer, Kyle Evans, Kubilay Kocak and Victor Stinner in bpo-38061.)\nPyLong_FromDouble() is now up to 1.87x faster for values that fit into long. (Contributed by Sergey Fedoseev in bpo-37986.)\nA number of Python builtins (range, tuple, set, frozenset, list, dict) are now sped up by using the PEP 590 vectorcall protocol. (Contributed by Donghee Na, Mark Shannon, Jeroen Demeyer and Petr Viktorin in bpo-37207.)\nOptimized set.difference_update() for the case when the other set is much larger than the base set. (Suggested by Evgeny Kapun with code contributed by Michele Orr\u00f9 in bpo-8425.)\nPython\u2019s small object allocator (obmalloc.c) now allows (no more than) one empty arena to remain available for immediate reuse, without returning it to the OS. This prevents thrashing in simple loops where an arena could be created and destroyed anew on each iteration. (Contributed by Tim Peters in bpo-37257.)\nFloor division of float operands now has better performance. The message of ZeroDivisionError for this operation has also been updated. (Contributed by Donghee Na in bpo-39434.)\nDecoding short ASCII strings with UTF-8 and ascii codecs is now about 15% faster. 
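The temporary-variable comprehension idiom described above can be demonstrated with a running-totals example (a minimal sketch; the variable names are illustrative only):

```python
# `for s in [0]` binds the accumulator once, and the inner
# `for s in [s + x]` rebinds it on each step of the loop over data.
data = [1, 2, 3, 4]
sums = [s for s in [0] for x in data for s in [s + x]]
assert sums == [1, 3, 6, 10]

# Unlike the walrus operator, `s` does not leak out of the comprehension.
try:
    s  # noqa: F821 - intentionally referencing an unbound name
    leaked = True
except NameError:
    leaked = False
assert not leaked
```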
(Contributed by Inada Naoki in bpo-37348.)\nHere\u2019s a summary of performance improvements from Python 3.4 through Python 3.9:\nPython version 3.4 3.5 3.6 3.7 3.8 3.9\n-------------- --- --- --- --- --- ---\nVariable and attribute read access:\nread_local 7.1 7.1 5.4 5.1 3.9 3.9\nread_nonlocal 7.1 8.1 5.8 5.4 4.4 4.5\nread_global 15.5 19.0 14.3 13.6 7.6 7.8\nread_builtin 21.1 21.6 18.5 19.0 7.5 7.8\nread_classvar_from_class 25.6 26.5 20.7 19.5 18.4 17.9\nread_classvar_from_instance 22.8 23.5 18.8 17.1 16.4 16.9\nread_instancevar 32.4 33.1 28.0 26.3 25.4 25.3\nread_instancevar_slots 27.8 31.3 20.8 20.8 20.2 20.5\nread_namedtuple 73.8 57.5 45.0 46.8 18.4 18.7\nread_boundmethod 37.6 37.9 29.6 26.9 27.7 41.1\nVariable and attribute write access:\nwrite_local 8.7 9.3 5.5 5.3 4.3 4.3\nwrite_nonlocal 10.5 11.1 5.6 5.5 4.7 4.8\nwrite_global 19.7 21.2 18.0 18.0 15.8 16.7\nwrite_classvar 92.9 96.0 104.6 102.1 39.2 39.8\nwrite_instancevar 44.6 45.8 40.0 38.9 35.5 37.4\nwrite_instancevar_slots 35.6 36.1 27.3 26.6 25.7 25.8\nData structure read access:\nread_list 24.2 24.5 20.8 20.8 19.0 19.5\nread_deque 24.7 25.5 20.2 20.6 19.8 20.2\nread_dict 24.3 25.7 22.3 23.0 21.0 22.4\nread_strdict 22.6 24.3 19.5 21.2 18.9 21.5\nData structure write access:\nwrite_list 27.1 28.5 22.5 21.6 20.0 20.0\nwrite_deque 28.7 30.1 22.7 21.8 23.5 21.7\nwrite_dict 31.4 33.3 29.3 29.2 24.7 25.4\nwrite_strdict 28.4 29.9 27.5 25.2 23.1 24.5\nStack (or queue) operations:\nlist_append_pop 93.4 112.7 75.4 74.2 50.8 50.6\ndeque_append_pop 43.5 57.0 49.4 49.2 42.5 44.2\ndeque_append_popleft 43.7 57.3 49.7 49.7 42.8 46.4\nTiming loop:\nloop_overhead 0.5 0.6 0.4 0.3 0.3 0.3\nThese results were generated from the variable access benchmark script at:\nTools/scripts/var_access_benchmark.py\n. The benchmark script displays timings\nin nanoseconds. 
The benchmarks were measured on an Intel\u00ae Core\u2122 i7-4960HQ processor running the macOS 64-bit builds found at python.org.\nDeprecated\u00b6\nThe distutils bdist_msi command is now deprecated; use bdist_wheel (wheel packages) instead. (Contributed by Hugo van Kemenade in bpo-39586.)\nCurrently math.factorial() accepts float instances with non-negative integer values (like 5.0). It raises a ValueError for non-integral and negative floats. It is now deprecated. In future Python versions it will raise a TypeError for all floats. (Contributed by Serhiy Storchaka in bpo-37315.)\nThe parser and symbol modules are deprecated and will be removed in future versions of Python. For the majority of use cases, users can leverage the Abstract Syntax Tree (AST) generation and compilation stage, using the ast module.\nThe public C API functions PyParser_SimpleParseStringFlags(), PyParser_SimpleParseStringFlagsFilename(), PyParser_SimpleParseFileFlags() and PyNode_Compile() are deprecated and will be removed in Python 3.10 together with the old parser.\nUsing NotImplemented in a boolean context has been deprecated, as it is almost exclusively the result of incorrect rich comparator implementations. It will be made a TypeError in a future version of Python. (Contributed by Josh Rosenberg in bpo-35712.)\nThe random module currently accepts any hashable type as a possible seed value. Unfortunately, some of those types are not guaranteed to have a deterministic hash value. After Python 3.9, the module will restrict its seeds to None, int, float, str, bytes, and bytearray.\nOpening a GzipFile file for writing without specifying the mode argument is deprecated. In future Python versions it will always be opened for reading by default. Specify the mode argument to open it for writing and silence the warning. 
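The GzipFile deprecation above is avoided by always passing mode explicitly; a minimal round-trip sketch using an in-memory buffer:

```python
import gzip
import io

# Passing mode explicitly avoids relying on the default, which future
# versions will change to read-only.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as f:
    f.write(b"hello world")

buf.seek(0)
with gzip.GzipFile(fileobj=buf, mode="rb") as f:
    assert f.read() == b"hello world"
```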
(Contributed by Serhiy Storchaka in bpo-28286.)\nDeprecated the split() method of _tkinter.TkappType in favour of the splitlist() method, which has more consistent and predictable behavior. (Contributed by Serhiy Storchaka in bpo-38371.)\nThe explicit passing of coroutine objects to asyncio.wait() has been deprecated and will be removed in version 3.11. (Contributed by Yury Selivanov and Kyle Stanley in bpo-34790.)\nThe binhex4 and hexbin4 standards are now deprecated. The binhex module and the following binascii functions are now deprecated: b2a_hqx(), a2b_hqx(), rlecode_hqx(), rledecode_hqx(). (Contributed by Victor Stinner in bpo-39353.)\nast classes slice, Index and ExtSlice are considered deprecated and will be removed in future Python versions. value itself should be used instead of Index(value). Tuple(slices, Load()) should be used instead of ExtSlice(slices). (Contributed by Serhiy Storchaka in bpo-34822.)\nast classes Suite, Param, AugLoad and AugStore are considered deprecated and will be removed in future Python versions. They were not generated by the parser and not accepted by the code generator in Python 3. (Contributed by Batuhan Taskaya in bpo-39639 and bpo-39969 and Serhiy Storchaka in bpo-39988.)\nThe PyEval_InitThreads() and PyEval_ThreadsInitialized() functions are now deprecated and will be removed in Python 3.11. Calling PyEval_InitThreads() now does nothing. The GIL is initialized by Py_Initialize() since Python 3.7. (Contributed by Victor Stinner in bpo-39877.)\nPassing None as the first argument to the shlex.split() function has been deprecated. (Contributed by Zackery Spytz in bpo-33262.)\nsmtpd.MailmanProxy() is now deprecated as it is unusable without an external module, mailman. (Contributed by Samuel Colvin in bpo-35800.)\nThe lib2to3 module now emits a PendingDeprecationWarning. Python 3.9 switched to a PEG parser (see PEP 617), and Python 3.10 may include new language syntax that is not parsable by lib2to3\u2019s LL(1) parser. 
The lib2to3 module may be removed from the standard library in a future Python version. Consider third-party alternatives such as LibCST or parso. (Contributed by Carl Meyer in bpo-40360.)\nThe random parameter of random.shuffle() has been deprecated. (Contributed by Raymond Hettinger in bpo-40465.)\nRemoved\u00b6\nThe erroneous version at unittest.mock.__version__ has been removed.\nnntplib.NNTP: the xpath() and xgtitle() methods have been removed. These methods had been deprecated since Python 3.3. Generally, these extensions are not supported or not enabled by NNTP server administrators. For xgtitle(), please use nntplib.NNTP.descriptions() or nntplib.NNTP.description() instead. (Contributed by Donghee Na in bpo-39366.)\narray.array: the tostring() and fromstring() methods have been removed. They were aliases to tobytes() and frombytes(), deprecated since Python 3.2. (Contributed by Victor Stinner in bpo-38916.)\nThe undocumented sys.callstats() function has been removed. Since Python 3.7, it was deprecated and always returned None. It required a special build option CALL_PROFILE which was already removed in Python 3.7. (Contributed by Victor Stinner in bpo-37414.)\nThe sys.getcheckinterval() and sys.setcheckinterval() functions have been removed. They had been deprecated since Python 3.2. Use sys.getswitchinterval() and sys.setswitchinterval() instead. (Contributed by Victor Stinner in bpo-37392.)\nThe C function PyImport_Cleanup() has been removed. It was documented as: \u201cEmpty the module table. For internal use only.\u201d (Contributed by Victor Stinner in bpo-36710.)\nThe _dummy_thread and dummy_threading modules have been removed. These modules were deprecated since Python 3.7, which requires threading support. (Contributed by Victor Stinner in bpo-37312.)\nThe aifc.openfp() alias to aifc.open(), sunau.openfp() alias to sunau.open(), and wave.openfp() alias to wave.open() have been removed. They were deprecated since Python 3.7. 
(Contributed by Victor Stinner in bpo-37320.)\nThe isAlive() method of threading.Thread has been removed. It was deprecated since Python 3.8. Use is_alive() instead. (Contributed by Donghee Na in bpo-37804.)\nMethods getchildren() and getiterator() of classes ElementTree and Element in the ElementTree module have been removed. They were deprecated in Python 3.2. Use iter(x) or list(x) instead of x.getchildren() and x.iter() or list(x.iter()) instead of x.getiterator(). (Contributed by Serhiy Storchaka in bpo-36543.)\nThe old plistlib API has been removed; it was deprecated since Python 3.4. Use the load(), loads(), dump(), and dumps() functions. Additionally, the use_builtin_types parameter was removed; standard bytes objects are always used instead. (Contributed by Jon Janzen in bpo-36409.)\nThe C function PyGen_NeedsFinalizing has been removed. It was not documented, tested, or used anywhere within CPython after the implementation of PEP 442. Patch by Joannah Nanjekye. (Contributed by Joannah Nanjekye in bpo-15088.)\nbase64.encodestring() and base64.decodestring(), aliases deprecated since Python 3.1, have been removed: use base64.encodebytes() and base64.decodebytes() instead. (Contributed by Victor Stinner in bpo-39351.)\nThe fractions.gcd() function has been removed; it was deprecated since Python 3.5 (bpo-22486). Use math.gcd() instead. (Contributed by Victor Stinner in bpo-39350.)\nThe buffering parameter of bz2.BZ2File has been removed. Since Python 3.0, it was ignored and using it emitted a DeprecationWarning. Pass an open file object to control how the file is opened. (Contributed by Victor Stinner in bpo-39357.)\nThe encoding parameter of json.loads() has been removed. As of Python 3.1, it was deprecated and ignored; using it has emitted a DeprecationWarning since Python 3.8. (Contributed by Inada Naoki in bpo-39377.)\nwith (await asyncio.lock): and with (yield from asyncio.lock): statements are no longer supported; use async with lock instead. 
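The `async with lock` replacement mentioned above can be sketched as follows (the worker/result names are illustrative, not from the changelog):

```python
import asyncio

async def main() -> list:
    lock = asyncio.Lock()
    results = []

    async def worker(n: int) -> None:
        # The async context manager acquires and releases the lock,
        # replacing the removed `with (yield from lock):` form.
        async with lock:
            results.append(n)

    await asyncio.gather(*(worker(i) for i in range(3)))
    return results

print(asyncio.run(main()))  # three appends, serialized by the lock
```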
The same applies to asyncio.Condition and asyncio.Semaphore. (Contributed by Andrew Svetlov in bpo-34793.)\nThe sys.getcounts() function, the -X showalloccount command line option and the show_alloc_count field of the C structure PyConfig have been removed. They required a special Python build defining the COUNT_ALLOCS macro. (Contributed by Victor Stinner in bpo-39489.)\nThe _field_types attribute of the typing.NamedTuple class has been removed. It was deprecated since Python 3.8. Use the __annotations__ attribute instead. (Contributed by Serhiy Storchaka in bpo-40182.)\nThe symtable.SymbolTable.has_exec() method has been removed. It was deprecated since 2006 and only ever returned False when called. (Contributed by Batuhan Taskaya in bpo-40208.)\nThe asyncio.Task.current_task() and asyncio.Task.all_tasks() methods have been removed. They were deprecated since Python 3.7; use asyncio.current_task() and asyncio.all_tasks() instead. (Contributed by R\u00e9mi Lapeyre in bpo-40967.)\nThe unescape() method in the html.parser.HTMLParser class has been removed (it was deprecated since Python 3.4). html.unescape() should be used for converting character references to the corresponding Unicode characters.\nPorting to Python 3.9\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in the Python API\u00b6\n__import__() and importlib.util.resolve_name() now raise ImportError where they previously raised ValueError. Callers catching the specific exception type and supporting both Python 3.9 and earlier versions will need to catch both using except (ImportError, ValueError):.\nThe venv activation scripts no longer special-case when __VENV_PROMPT__ is set to \"\".\nThe select.epoll.unregister() method no longer ignores the EBADF error. (Contributed by Victor Stinner in bpo-39239.)\nThe compresslevel parameter of bz2.BZ2File became keyword-only, since the buffering parameter has been removed. 
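The dual-exception handling recommended above for `importlib.util.resolve_name()` can be sketched like this; the `safe_resolve` helper and package names are hypothetical:

```python
import importlib.util

def safe_resolve(name: str, package: str):
    """Resolve a relative module name, tolerating both exception types.

    Resolving an invalid relative name raises ImportError on 3.9+,
    but ValueError on earlier versions, so both are caught.
    """
    try:
        return importlib.util.resolve_name(name, package)
    except (ImportError, ValueError):
        return None

assert safe_resolve(".submodule", "pkg") == "pkg.submodule"
assert safe_resolve("..escape", "pkg") is None  # beyond top-level package
```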
(Contributed by Victor Stinner in bpo-39357.)\nSimplified AST for subscription. Simple indices will be represented by their value, and extended slices will be represented as tuples. Index(value) will return the value itself; ExtSlice(slices) will return Tuple(slices, Load()). (Contributed by Serhiy Storchaka in bpo-34822.)\nThe importlib module now ignores the PYTHONCASEOK environment variable when the -E or -I command line options are being used.\nThe encoding parameter has been added to the classes ftplib.FTP and ftplib.FTP_TLS as a keyword-only parameter, and the default encoding is changed from Latin-1 to UTF-8 to follow RFC 2640.\nasyncio.loop.shutdown_default_executor() has been added to AbstractEventLoop, meaning alternative event loops that inherit from it should have this method defined. (Contributed by Kyle Stanley in bpo-34037.)\nThe constant values of future flags in the __future__ module are updated in order to prevent collision with compiler flags. Previously PyCF_ALLOW_TOP_LEVEL_AWAIT was clashing with CO_FUTURE_DIVISION. (Contributed by Batuhan Taskaya in bpo-39562.)\narray('u') now uses wchar_t as its C type instead of Py_UNICODE. This change doesn\u2019t affect its behavior because Py_UNICODE has been an alias of wchar_t since Python 3.3. (Contributed by Inada Naoki in bpo-34538.)\nThe logging.getLogger() API now returns the root logger when passed the name 'root', whereas previously it returned a non-root logger named 'root'. This could affect cases where user code explicitly wants a non-root logger named 'root', or instantiates a logger using logging.getLogger(__name__) in some top-level module called 'root.py'. (Contributed by Vinay Sajip in bpo-37742.)\nDivision handling of PurePath now returns NotImplemented instead of raising a TypeError when passed something other than an instance of str or PurePath. This allows creating compatible classes that don\u2019t inherit from those mentioned types. 
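The `logging.getLogger('root')` change described above is easy to observe directly (a minimal sketch; the 'app.root' name is an arbitrary example):

```python
import logging

# On Python 3.9+, the name 'root' maps to the actual root logger.
assert logging.getLogger("root") is logging.getLogger()

# Dotted names containing 'root' are unaffected; only the bare name
# 'root' is special-cased.
child = logging.getLogger("app.root")
assert child is not logging.getLogger()
```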
(Contributed by Roger Aiudi in bpo-34775.)\nStarting with Python 3.9.5, the ipaddress module no longer accepts any leading zeros in IPv4 address strings. Leading zeros are ambiguous and interpreted as octal notation by some libraries. For example, the legacy function socket.inet_aton() treats leading zeros as octal notation, while the glibc implementation of the modern inet_pton() does not accept any leading zeros. (Contributed by Christian Heimes in bpo-36384.)\ncodecs.lookup() now normalizes the encoding name the same way as encodings.normalize_encoding(), except that codecs.lookup() also converts the name to lower case. For example, the \"latex+latin1\" encoding name is now normalized to \"latex_latin1\". (Contributed by Jordon Xu in bpo-37751.)\nChanges in the C API\u00b6\nInstances of heap-allocated types (such as those created with PyType_FromSpec() and similar APIs) hold a reference to their type object since Python 3.8. As indicated in the \u201cChanges in the C API\u201d of Python 3.8, for the vast majority of cases, there should be no side effect. But for types that have a custom tp_traverse function, ensure that all custom tp_traverse functions of heap-allocated types visit the object\u2019s type. Example:\nint foo_traverse(PyObject *self, visitproc visit, void *arg) {\n    // Rest of the traverse function\n#if PY_VERSION_HEX >= 0x03090000\n    // This was not needed before Python 3.9 (Python issue 35810 and 40217)\n    Py_VISIT(Py_TYPE(self));\n#endif\n}\nIf your traverse function delegates to the tp_traverse of its base class (or another type), ensure that Py_TYPE(self) is visited only once. 
Note that only heap types are expected to visit the type in tp_traverse.\nFor example, if your tp_traverse function includes:\nbase->tp_traverse(self, visit, arg)\nthen add:\n#if PY_VERSION_HEX >= 0x03090000\n    // This was not needed before Python 3.9 (bpo-35810 and bpo-40217)\n    if (base->tp_flags & Py_TPFLAGS_HEAPTYPE) {\n        // a heap type's tp_traverse already visited Py_TYPE(self)\n    } else {\n        Py_VISIT(Py_TYPE(self));\n    }\n#endif\nThe functions PyEval_CallObject, PyEval_CallFunction, PyEval_CallMethod and PyEval_CallObjectWithKeywords are deprecated. Use PyObject_Call() and its variants instead. (See more details in bpo-29548.)\nCPython bytecode changes\u00b6\nThe LOAD_ASSERTION_ERROR opcode was added for handling the assert statement. Previously, the assert statement would not work correctly if the AssertionError exception was being shadowed. (Contributed by Zackery Spytz in bpo-34880.)\nThe COMPARE_OP opcode was split into four distinct instructions:\nCOMPARE_OP for rich comparisons\nIS_OP for \u2018is\u2019 and \u2018is not\u2019 tests\nCONTAINS_OP for \u2018in\u2019 and \u2018not in\u2019 tests\nJUMP_IF_NOT_EXC_MATCH for checking exceptions in \u2018try-except\u2019 statements.\n(Contributed by Mark Shannon in bpo-39156.)\nBuild Changes\u00b6\nAdded the --with-platlibdir option to the configure script: the name of the platform-specific library directory, stored in the new sys.platlibdir attribute. See the sys.platlibdir attribute for more information. (Contributed by Jan Mat\u011bjek, Mat\u011bj Cepl, Charalampos Stratakis and Victor Stinner in bpo-1294959.)\nThe COUNT_ALLOCS special build macro has been removed. (Contributed by Victor Stinner in bpo-39489.)\nOn non-Windows platforms, the setenv() and unsetenv() functions are now required to build Python. (Contributed by Victor Stinner in bpo-39395.)\nOn non-Windows platforms, creating bdist_wininst installers is now officially unsupported. 
(See bpo-10945 for more details.)\nWhen building Python on macOS from source, _tkinter now links with non-system Tcl and Tk frameworks if they are installed in /Library/Frameworks, as had been the case on older releases of macOS. If a macOS SDK is explicitly configured, by using --enable-universalsdk or -isysroot, only the SDK itself is searched. The default behavior can still be overridden with --with-tcltk-includes and --with-tcltk-libs. (Contributed by Ned Deily in bpo-34956.)\nPython can now be built for Windows 10 ARM64. (Contributed by Steve Dower in bpo-33125.)\nSome individual tests are now skipped when --pgo is used. The tests in question increased the PGO task time significantly and likely didn\u2019t help improve optimization of the final executable. This speeds up the task by a factor of about 15x. Running the full unit test suite is slow. This change may result in a slightly less optimized build since not as many code branches will be executed. If you are willing to wait for the much slower build, the old behavior can be restored using ./configure [..] PROFILE_TASK=\"-m test --pgo-extended\". We make no guarantees as to which PGO task set produces a faster build. Users who care should run their own relevant benchmarks as results can depend on the environment, workload, and compiler tool chain. (See bpo-36044 and bpo-37707 for more details.)\nC API Changes\u00b6\nNew Features\u00b6\nPEP 573: Added PyType_FromModuleAndSpec() to associate a module with a class; PyType_GetModule() and PyType_GetModuleState() to retrieve the module and its state; and PyCMethod and METH_METHOD to allow a method to access the class it was defined in. (Contributed by Marcel Plch and Petr Viktorin in bpo-38787.)\nAdded the PyFrame_GetCode() function to get a frame's code object, and the PyFrame_GetBack() function to get a frame's next outer frame. (Contributed by Victor Stinner in bpo-40421.)\nAdded PyFrame_GetLineNumber() to the limited C API. 
(Contributed by Victor Stinner in bpo-40421.)\nAdded the PyThreadState_GetInterpreter() and PyInterpreterState_Get() functions to get the interpreter. Added the PyThreadState_GetFrame() function to get the current frame of a Python thread state. Added the PyThreadState_GetID() function to get the unique identifier of a Python thread state. (Contributed by Victor Stinner in bpo-39947.)\nAdded a new public PyObject_CallNoArgs() function to the C API, which calls a callable Python object without any arguments. It is the most efficient way to call a callable Python object without any argument. (Contributed by Victor Stinner in bpo-37194.)\nChanges in the limited C API (if the Py_LIMITED_API macro is defined):\nProvide Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() as regular functions for the limited API. Previously, they were defined as macros, but these macros didn\u2019t compile with the limited C API, which cannot access the PyThreadState.recursion_depth field (the structure is opaque in the limited C API).\nPyObject_INIT() and PyObject_INIT_VAR() become regular \u201copaque\u201d functions to hide implementation details.\nThe PyModule_AddType() function is added to help adding a type to a module. (Contributed by Donghee Na in bpo-40024.)\nAdded the functions PyObject_GC_IsTracked() and PyObject_GC_IsFinalized() to the public API to allow querying whether Python objects are currently being tracked or have already been finalized by the garbage collector, respectively. (Contributed by Pablo Galindo Salgado in bpo-40241.)\nAdded _PyObject_FunctionStr() to get a user-friendly string representation of a function-like object. (Patch by Jeroen Demeyer in bpo-37645.)\nAdded PyObject_CallOneArg() for calling an object with one positional argument. (Patch by Jeroen Demeyer in bpo-37483.)\nPorting to Python 3.9\u00b6\nPyInterpreterState.eval_frame (PEP 523) now requires a new mandatory tstate parameter (PyThreadState*). 
(Contributed by Victor Stinner in bpo-38500.)\nExtension modules: the m_traverse, m_clear and m_free functions of PyModuleDef are no longer called if the module state was requested but is not allocated yet. This is the case immediately after the module is created and before the module is executed (the Py_mod_exec function). More precisely, these functions are not called if m_size is greater than 0 and the module state (as returned by PyModule_GetState()) is NULL.\nExtension modules without module state (m_size <= 0) are not affected.\nIf Py_AddPendingCall() is called in a subinterpreter, the function is now scheduled to be called from the subinterpreter, rather than being called from the main interpreter. Each subinterpreter now has its own list of scheduled calls. (Contributed by Victor Stinner in bpo-39984.)\nThe Windows registry is no longer used to initialize sys.path when the -E option is used (if PyConfig.use_environment is set to 0). This is significant when embedding Python on Windows. (Contributed by Zackery Spytz in bpo-8901.)\nThe global variable PyStructSequence_UnnamedField is now a constant and refers to a constant string. (Contributed by Serhiy Storchaka in bpo-38650.)\nThe PyGC_Head structure is now opaque. It is only defined in the internal C API (pycore_gc.h). (Contributed by Victor Stinner in bpo-40241.)\nThe Py_UNICODE_COPY, Py_UNICODE_FILL, PyUnicode_WSTR_LENGTH, PyUnicode_FromUnicode(), PyUnicode_AsUnicode(), _PyUnicode_AsUnicode, and PyUnicode_AsUnicodeAndSize() APIs are marked as deprecated in C. They have been deprecated by PEP 393 since Python 3.3. (Contributed by Inada Naoki in bpo-36346.)\nThe Py_FatalError() function is replaced with a macro which automatically logs the name of the current function, unless the Py_LIMITED_API macro is defined. (Contributed by Victor Stinner in bpo-39882.)\nThe vectorcall protocol now requires that the caller passes only strings as keyword names. 
(See bpo-37540 for more information.)\nImplementation details of a number of macros and functions are now hidden:\nThe PyObject_IS_GC() macro was converted to a function.\nThe PyObject_NEW() macro becomes an alias to the PyObject_New macro, and the PyObject_NEW_VAR() macro becomes an alias to the PyObject_NewVar macro. They no longer directly access the PyTypeObject.tp_basicsize member.\nThe PyObject_GET_WEAKREFS_LISTPTR() macro was converted to a function: the macro directly accessed the PyTypeObject.tp_weaklistoffset member.\nThe PyObject_CheckBuffer() macro was converted to a function: the macro directly accessed the PyTypeObject.tp_as_buffer member.\nPyIndex_Check() is now always declared as an opaque function to hide implementation details: the PyIndex_Check() macro was removed. The macro directly accessed the PyTypeObject.tp_as_number member.\n(See bpo-40170 for more details.)\nRemoved\u00b6\nExcluded the PyFPE_START_PROTECT() and PyFPE_END_PROTECT() macros of pyfpe.h from the limited C API. (Contributed by Victor Stinner in bpo-38835.)\nThe tp_print slot of PyTypeObject has been removed. It was used for printing objects to files in Python 2.7 and before. Since Python 3.0, it has been ignored and unused. 
(Contributed by Jeroen Demeyer in bpo-36974.)\nChanges in the limited C API (if the Py_LIMITED_API macro is defined):\nExcluded the following functions from the limited C API:\nPyThreadState_DeleteCurrent() (Contributed by Joannah Nanjekye in bpo-37878.)\n_Py_CheckRecursionLimit\n_Py_NewReference()\n_Py_ForgetReference()\n_PyTraceMalloc_NewReference()\n_Py_GetRefTotal()\nThe trashcan mechanism, which never worked in the limited C API:\nPyTrash_UNWIND_LEVEL\nPy_TRASHCAN_BEGIN_CONDITION\nPy_TRASHCAN_BEGIN\nPy_TRASHCAN_END\nPy_TRASHCAN_SAFE_BEGIN\nPy_TRASHCAN_SAFE_END\nMoved the following functions and definitions to the internal C API:\n_PyDebug_PrintTotalRefs()\n_Py_PrintReferences()\n_Py_PrintReferenceAddresses()\n_Py_tracemalloc_config\n_Py_AddToAllObjects() (specific to the Py_TRACE_REFS build)\nRemoved the _PyRuntime.getframe hook and removed the _PyThreadState_GetFrame macro which was an alias to _PyRuntime.getframe. They were only exposed by the internal C API. Also removed the PyThreadFrameGetter type. (Contributed by Victor Stinner in bpo-39946.)\nRemoved the following functions from the C API. Call PyGC_Collect() explicitly to clear all free lists. (Contributed by Inada Naoki and Victor Stinner in bpo-37340, bpo-38896 and bpo-40428.)\nPyAsyncGen_ClearFreeLists()\nPyContext_ClearFreeList()\nPyDict_ClearFreeList()\nPyFloat_ClearFreeList()\nPyFrame_ClearFreeList()\nPyList_ClearFreeList()\nPyMethod_ClearFreeList() and PyCFunction_ClearFreeList(): the free lists of bound method objects have been removed.\nPySet_ClearFreeList(): the set free list was removed in Python 3.4.\nPyTuple_ClearFreeList()\nPyUnicode_ClearFreeList(): the Unicode free list was removed in Python 3.3.\nRemoved the _PyUnicode_ClearStaticStrings() function. (Contributed by Victor Stinner in bpo-39465.)\nRemoved Py_UNICODE_MATCH. It has been deprecated by PEP 393, and broken since Python 3.3. The PyUnicode_Tailmatch() function can be used instead. 
(Contributed by Inada Naoki in bpo-36346.)\nCleaned header files of interfaces defined but with no implementation. The public API symbols being removed are: _PyBytes_InsertThousandsGroupingLocale, _PyBytes_InsertThousandsGrouping, _Py_InitializeFromArgs, _Py_InitializeFromWideArgs, _PyFloat_Repr, _PyFloat_Digits, _PyFloat_DigitsInit, PyFrame_ExtendStack, _PyAIterWrapper_Type, PyNullImporter_Type, PyCmpWrapper_Type, PySortWrapper_Type, PyNoArgsFunction. (Contributed by Pablo Galindo Salgado in bpo-39372.)\nNotable changes in Python 3.9.1\u00b6\ntyping\u00b6\nThe behavior of typing.Literal was changed to conform with PEP 586 and to match the behavior of static type checkers specified in the PEP.\nLiteral now de-duplicates parameters.\nEquality comparisons between Literal objects are now order independent.\nLiteral comparisons now respect types. For example, Literal[0] == Literal[False] previously evaluated to True. It is now False. To support this change, the internally used type cache now supports differentiating types.\nLiteral objects will now raise a TypeError exception during equality comparisons if any of their parameters are not hashable. Note that declaring Literal with mutable parameters will not throw an error:\n>>> from typing import Literal\n>>> Literal[{0}]\n>>> Literal[{0}] == Literal[{False}]\nTraceback (most recent call last):\n  File \"\", line 1, in \nTypeError: unhashable type: 'set'\n(Contributed by Yurii Karabas in bpo-42345.)\nmacOS 11.0 (Big Sur) and Apple Silicon Mac support\u00b6\nAs of 3.9.1, Python now fully supports building and running on macOS 11.0 (Big Sur) and on Apple Silicon Macs (based on the ARM64 architecture).\nA new universal build variant, universal2, is now available to natively support both ARM64 and Intel 64 in one set of executables. 
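The typing.Literal changes listed above (de-duplication, order-independent equality, type-respecting comparisons) can be checked directly on Python 3.9.1 and later:

```python
from typing import Literal

# Order-independent equality and de-duplication of parameters:
assert Literal[1, 2] == Literal[2, 1]
assert Literal[1, 1, 2] == Literal[1, 2]

# Types are now respected: 0 and False are no longer conflated.
assert Literal[0] != Literal[False]
```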
Binaries can also now be built on current versions of macOS to be deployed on a range of older macOS versions (tested to 10.9) while making some newer OS functions and options conditionally available based on the operating system version in use at runtime (\u201cweaklinking\u201d). (Contributed by Ronald Oussoren and Lawrence D\u2019Anna in bpo-41100.)\nNotable changes in Python 3.9.2\u00b6\ncollections.abc\u00b6\nThe collections.abc.Callable generic now flattens type parameters, similar to what typing.Callable currently does. This means that collections.abc.Callable[[int, str], str] will have __args__ of (int, str, str); previously this was ([int, str], str). To allow this change, types.GenericAlias can now be subclassed, and a subclass will be returned when subscripting the collections.abc.Callable type. Code which accesses the arguments via typing.get_args() or __args__ needs to account for this change. A DeprecationWarning may be emitted for invalid forms of parameterizing collections.abc.Callable which may have passed silently in Python 3.9.1. This DeprecationWarning will become a TypeError in Python 3.10. (Contributed by Ken Jin in bpo-42195.)\nurllib.parse\u00b6\nEarlier Python versions allowed using both ; and & as query parameter separators in urllib.parse.parse_qs() and urllib.parse.parse_qsl(). Due to security concerns, and to conform with newer W3C recommendations, this has been changed to allow only a single separator key, with & as the default. This change also affects cgi.parse() and cgi.parse_multipart() as they use the affected functions internally. For more details, please see their respective documentation. (Contributed by Adam Goldschmidt, Senthil Kumaran and Ken Jin in bpo-42967.)\nNotable changes in Python 3.9.3\u00b6\nA security fix alters the ftplib.FTP behavior to not trust the IPv4 address sent from the remote server when setting up a passive data channel. 
We reuse the ftp server IP address instead. For unusual code\nrequiring the old behavior, set a trust_server_pasv_ipv4_address\nattribute on your FTP instance to True\n. (See gh-87451)\nNotable changes in Python 3.9.5\u00b6\nurllib.parse\u00b6\nThe presence of newline or tab characters in parts of a URL allows for some\nforms of attacks. Following the WHATWG specification that updates RFC 3986,\nASCII newline \\n\n, \\r\nand tab \\t\ncharacters are stripped from the\nURL by the parser in urllib.parse\n, preventing such attacks. The removal\ncharacters are controlled by a new module-level variable\nurllib.parse._UNSAFE_URL_BYTES_TO_REMOVE\n. (See gh-88048)\nNotable security feature in 3.9.14\u00b6\nConverting between int\nand str\nin bases other than 2\n(binary), 4, 8 (octal), 16 (hexadecimal), or 32 such as base 10 (decimal)\nnow raises a ValueError\nif the number of digits in string form is\nabove a limit to avoid potential denial-of-service attacks due to the\nalgorithmic complexity. This is a mitigation for CVE-2020-10735.\nThis limit can be configured or disabled by environment variable, command\nline flag, or sys\nAPIs. See the integer string conversion\nlength limitation documentation. The default limit\nis 4300 digits in string form.\nNotable changes in 3.9.17\u00b6\ntarfile\u00b6\nThe extraction methods in\ntarfile\n, and shutil.unpack_archive()\n, have a new filter argument that allows limiting tar features that may be surprising or dangerous, such as creating files outside the destination directory. See Extraction filters for details. In Python 3.12, use without the filter argument will show a DeprecationWarning\n. In Python 3.14, the default will switch to 'data'\n. 
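The tarfile filter argument described above can be exercised in a short, self-contained sketch; the archive name and payload here are illustrative, not from the document:\n\n```python\nimport io\nimport os\nimport tarfile\nimport tempfile\n\n# Build a tiny archive in memory, then extract it with filter="data",\n# which rejects entries that would escape the destination directory.\nbuf = io.BytesIO()\nwith tarfile.open(fileobj=buf, mode="w") as tf:\n    payload = b"hello"\n    info = tarfile.TarInfo(name="greeting.txt")\n    info.size = len(payload)\n    tf.addfile(info, io.BytesIO(payload))\n\nbuf.seek(0)\nwith tempfile.TemporaryDirectory() as dest:\n    with tarfile.open(fileobj=buf, mode="r") as tf:\n        # Passing filter= explicitly opts in to the safer behavior now,\n        # ahead of the 'data' default planned for Python 3.14.\n        tf.extractall(path=dest, filter="data")\n    with open(os.path.join(dest, "greeting.txt"), "rb") as f:\n        extracted = f.read()\nprint(extracted)  # b'hello'\n```\n\nThe filter keyword is available in Python 3.12+ and in the security backports noted above (3.9.17 and later maintenance releases).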
(Contributed by Petr Viktorin in PEP 706.)", "code_snippets": [" ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", "\n", " ", " ", "\n\n", "\n", " ", " ", " ", " ", " ", " ", "\n", "\n", "\n", "\n", "\n\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n"], "language": "Python", "source": "python.org", "token_count": 13763} +{"url": "https://docs.python.org/3/library/asyncio-sync.html", "title": "Synchronization Primitives", "content": "Synchronization Primitives\u00b6\nSource code: Lib/asyncio/locks.py\nasyncio synchronization primitives are designed to be similar to\nthose of the threading\nmodule with two important caveats:\nasyncio primitives are not thread-safe, therefore they should not be used for OS thread synchronization (use\nthreading\nfor that);methods of these synchronization primitives do not accept the timeout argument; use the\nasyncio.wait_for()\nfunction to perform operations with timeouts.\nasyncio has the following basic synchronization primitives:\nLock\u00b6\n- class asyncio.Lock\u00b6\nImplements a mutex lock for asyncio tasks. Not thread-safe.\nAn asyncio lock can be used to guarantee exclusive access to a shared resource.\nThe preferred way to use a Lock is an\nasync with\nstatement:lock = asyncio.Lock() # ... later async with lock: # access shared state\nwhich is equivalent to:\nlock = asyncio.Lock() # ... 
later await lock.acquire() try: # access shared state finally: lock.release()\nChanged in version 3.10: Removed the loop parameter.\n- async acquire()\u00b6\nAcquire the lock.\nThis method waits until the lock is unlocked, sets it to locked and returns\nTrue\n.When more than one coroutine is blocked in\nacquire()\nwaiting for the lock to be unlocked, only one coroutine eventually proceeds.Acquiring a lock is fair: the coroutine that proceeds will be the first coroutine that started waiting on the lock.\n- release()\u00b6\nRelease the lock.\nWhen the lock is locked, reset it to unlocked and return.\nIf the lock is unlocked, a\nRuntimeError\nis raised.\n- locked()\u00b6\nReturn\nTrue\nif the lock is locked.\nEvent\u00b6\n- class asyncio.Event\u00b6\nAn event object. Not thread-safe.\nAn asyncio event can be used to notify multiple asyncio tasks that some event has happened.\nAn Event object manages an internal flag that can be set to true with the\nset()\nmethod and reset to false with theclear()\nmethod. Thewait()\nmethod blocks until the flag is set to true. The flag is set to false initially.Changed in version 3.10: Removed the loop parameter.\nExample:\nasync def waiter(event): print('waiting for it ...') await event.wait() print('... got it!') async def main(): # Create an Event object. event = asyncio.Event() # Spawn a Task to wait until 'event' is set. waiter_task = asyncio.create_task(waiter(event)) # Sleep for 1 second and set the event. await asyncio.sleep(1) event.set() # Wait until the waiter task is finished. await waiter_task asyncio.run(main())\n- async wait()\u00b6\nWait until the event is set.\nIf the event is set, return\nTrue\nimmediately. 
Otherwise block until another task callsset()\n.\n- set()\u00b6\nSet the event.\nAll tasks waiting for event to be set will be immediately awakened.\n- clear()\u00b6\nClear (unset) the event.\nSubsequent tasks awaiting on\nwait()\nwill now block until theset()\nmethod is called again.\n- is_set()\u00b6\nReturn\nTrue\nif the event is set.\nCondition\u00b6\n- class asyncio.Condition(lock=None)\u00b6\nA Condition object. Not thread-safe.\nAn asyncio condition primitive can be used by a task to wait for some event to happen and then get exclusive access to a shared resource.\nIn essence, a Condition object combines the functionality of an\nEvent\nand aLock\n. It is possible to have multiple Condition objects share one Lock, which allows coordinating exclusive access to a shared resource between different tasks interested in particular states of that shared resource.The optional lock argument must be a\nLock\nobject orNone\n. In the latter case a new Lock object is created automatically.Changed in version 3.10: Removed the loop parameter.\nThe preferred way to use a Condition is an\nasync with\nstatement:cond = asyncio.Condition() # ... later async with cond: await cond.wait()\nwhich is equivalent to:\ncond = asyncio.Condition() # ... later await cond.acquire() try: await cond.wait() finally: cond.release()\n- async acquire()\u00b6\nAcquire the underlying lock.\nThis method waits until the underlying lock is unlocked, sets it to locked and returns\nTrue\n.\n- notify(n=1)\u00b6\nWake up n tasks (1 by default) waiting on this condition. If fewer than n tasks are waiting they are all awakened.\nThe lock must be acquired before this method is called and released shortly after. 
If called with an unlocked lock a\nRuntimeError\nerror is raised.\n- locked()\u00b6\nReturn\nTrue\nif the underlying lock is acquired.\n- notify_all()\u00b6\nWake up all tasks waiting on this condition.\nThis method acts like\nnotify()\n, but wakes up all waiting tasks.The lock must be acquired before this method is called and released shortly after. If called with an unlocked lock a\nRuntimeError\nerror is raised.\n- release()\u00b6\nRelease the underlying lock.\nWhen invoked on an unlocked lock, a\nRuntimeError\nis raised.\n- async wait()\u00b6\nWait until notified.\nIf the calling task has not acquired the lock when this method is called, a\nRuntimeError\nis raised.This method releases the underlying lock, and then blocks until it is awakened by a\nnotify()\nornotify_all()\ncall. Once awakened, the Condition re-acquires its lock and this method returnsTrue\n.Note that a task may return from this call spuriously, which is why the caller should always re-check the state and be prepared to\nwait()\nagain. For this reason, you may prefer to usewait_for()\ninstead.\nSemaphore\u00b6\n- class asyncio.Semaphore(value=1)\u00b6\nA Semaphore object. Not thread-safe.\nA semaphore manages an internal counter which is decremented by each\nacquire()\ncall and incremented by eachrelease()\ncall. The counter can never go below zero; whenacquire()\nfinds that it is zero, it blocks, waiting until some task callsrelease()\n.The optional value argument gives the initial value for the internal counter (\n1\nby default). If the given value is less than0\naValueError\nis raised.Changed in version 3.10: Removed the loop parameter.\nThe preferred way to use a Semaphore is an\nasync with\nstatement:sem = asyncio.Semaphore(10) # ... later async with sem: # work with shared resource\nwhich is equivalent to:\nsem = asyncio.Semaphore(10) # ... 
later await sem.acquire() try: # work with shared resource finally: sem.release()\n- async acquire()\u00b6\nAcquire a semaphore.\nIf the internal counter is greater than zero, decrement it by one and return\nTrue\nimmediately. If it is zero, wait until arelease()\nis called and returnTrue\n.\n- locked()\u00b6\nReturns\nTrue\nif semaphore can not be acquired immediately.\n- release()\u00b6\nRelease a semaphore, incrementing the internal counter by one. Can wake up a task waiting to acquire the semaphore.\nUnlike\nBoundedSemaphore\n,Semaphore\nallows making morerelease()\ncalls thanacquire()\ncalls.\nBoundedSemaphore\u00b6\n- class asyncio.BoundedSemaphore(value=1)\u00b6\nA bounded semaphore object. Not thread-safe.\nBounded Semaphore is a version of\nSemaphore\nthat raises aValueError\ninrelease()\nif it increases the internal counter above the initial value.Changed in version 3.10: Removed the loop parameter.\nBarrier\u00b6\n- class asyncio.Barrier(parties)\u00b6\nA barrier object. Not thread-safe.\nA barrier is a simple synchronization primitive that allows to block until parties number of tasks are waiting on it. Tasks can wait on the\nwait()\nmethod and would be blocked until the specified number of tasks end up waiting onwait()\n. At that point all of the waiting tasks would unblock simultaneously.async with\ncan be used as an alternative to awaiting onwait()\n.The barrier can be reused any number of times.\nExample:\nasync def example_barrier(): # barrier with 3 parties b = asyncio.Barrier(3) # create 2 new waiting tasks asyncio.create_task(b.wait()) asyncio.create_task(b.wait()) await asyncio.sleep(0) print(b) # The third .wait() call passes the barrier await b.wait() print(b) print(\"barrier passed\") await asyncio.sleep(0) print(b) asyncio.run(example_barrier())\nResult of this example is:\n barrier passed \nAdded in version 3.11.\n- async wait()\u00b6\nPass the barrier. 
When all the tasks party to the barrier have called this function, they are all unblocked simultaneously.\nWhen a waiting or blocked task in the barrier is cancelled, this task exits the barrier which stays in the same state. If the state of the barrier is \u201cfilling\u201d, the number of waiting task decreases by 1.\nThe return value is an integer in the range of 0 to\nparties-1\n, different for each task. This can be used to select a task to do some special housekeeping, e.g.:... async with barrier as position: if position == 0: # Only one task prints this print('End of *draining phase*')\nThis method may raise a\nBrokenBarrierError\nexception if the barrier is broken or reset while a task is waiting. It could raise aCancelledError\nif a task is cancelled.\n- async reset()\u00b6\nReturn the barrier to the default, empty state. Any tasks waiting on it will receive the\nBrokenBarrierError\nexception.If a barrier is broken it may be better to just leave it and create a new one.\n- async abort()\u00b6\nPut the barrier into a broken state. This causes any active or future calls to\nwait()\nto fail with theBrokenBarrierError\n. Use this for example if one of the tasks needs to abort, to avoid infinite waiting tasks.\n- parties\u00b6\nThe number of tasks required to pass the barrier.\n- n_waiting\u00b6\nThe number of tasks currently waiting in the barrier while filling.\n- broken\u00b6\nA boolean that is\nTrue\nif the barrier is in the broken state.\n- exception asyncio.BrokenBarrierError\u00b6\nThis exception, a subclass of\nRuntimeError\n, is raised when theBarrier\nobject is reset or broken.\nChanged in version 3.9: Acquiring a lock using await lock\nor yield from lock\nand/or\nwith\nstatement (with await lock\n, with (yield from\nlock)\n) was removed. 
Use async with lock\ninstead.", "code_snippets": [" ", " ", "\n\n", "\n", " ", " ", "\n ", "\n", " ", " ", "\n\n", "\n", " ", "\n", "\n ", "\n", "\n ", "\n", " ", "\n ", "\n ", " ", "\n ", "\n\n", " ", "\n ", "\n ", " ", " ", "\n\n ", "\n ", " ", " ", "\n\n ", "\n ", " ", "\n ", "\n\n ", "\n ", " ", "\n\n", "\n", " ", " ", "\n\n", "\n", " ", " ", "\n ", " ", "\n", " ", " ", "\n\n", "\n", " ", "\n", "\n ", " ", "\n", "\n ", "\n", " ", " ", "\n\n", "\n", " ", " ", "\n ", "\n", " ", " ", "\n\n", "\n", " ", "\n", "\n ", "\n", "\n ", "\n", " ", "\n ", "\n ", " ", " ", "\n\n ", "\n ", "\n ", "\n\n ", " ", "\n ", "\n\n ", "\n ", " ", "\n ", "\n ", "\n\n ", " ", "\n ", "\n\n", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", "\n", " ", " ", " ", " ", " ", "\n", "\n", " ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", "\n ", "\n"], "language": "Python", "source": "python.org", "token_count": 2410} +{"url": "https://docs.python.org/3/whatsnew/3.10.html", "title": "What\u2019s New In Python 3.10", "content": "What\u2019s New In Python 3.10\u00b6\n- Editor:\nPablo Galindo Salgado\nThis article explains the new features in Python 3.10, compared to 3.9. Python 3.10 was released on October 4, 2021. 
For full details, see the changelog.\nSummary \u2013 Release highlights\u00b6\nNew syntax features:\nPEP 634, Structural Pattern Matching: Specification\nPEP 635, Structural Pattern Matching: Motivation and Rationale\nPEP 636, Structural Pattern Matching: Tutorial\nbpo-12782, Parenthesized context managers are now officially allowed.\nNew features in the standard library:\nPEP 618, Add Optional Length-Checking To zip.\nInterpreter improvements:\nPEP 626, Precise line numbers for debugging and other tools.\nNew typing features:\nPEP 604, Allow writing union types as X | Y\nPEP 612, Parameter Specification Variables\nPEP 613, Explicit Type Aliases\nPEP 647, User-Defined Type Guards\nImportant deprecations, removals or restrictions:\nNew Features\u00b6\nParenthesized context managers\u00b6\nUsing enclosing parentheses for continuation across multiple lines in context managers is now supported. This allows formatting a long collection of context managers in multiple lines in a similar way as it was previously possible with import statements. For instance, all these examples are now valid:\nwith (CtxManager() as example):\n...\nwith (\nCtxManager1(),\nCtxManager2()\n):\n...\nwith (CtxManager1() as example,\nCtxManager2()):\n...\nwith (CtxManager1(),\nCtxManager2() as example):\n...\nwith (\nCtxManager1() as example1,\nCtxManager2() as example2\n):\n...\nit is also possible to use a trailing comma at the end of the enclosed group:\nwith (\nCtxManager1() as example1,\nCtxManager2() as example2,\nCtxManager3() as example3,\n):\n...\nThis new syntax uses the non LL(1) capacities of the new parser. 
Check PEP 617 for more details.\n(Contributed by Guido van Rossum, Pablo Galindo and Lysandros Nikolaou in bpo-12782 and bpo-40334.)\nBetter error messages\u00b6\nSyntaxErrors\u00b6\nWhen parsing code that contains unclosed parentheses or brackets the interpreter now includes the location of the unclosed bracket of parentheses instead of displaying SyntaxError: unexpected EOF while parsing or pointing to some incorrect location. For instance, consider the following code (notice the unclosed \u2018{\u2018):\nexpected = {9: 1, 18: 2, 19: 2, 27: 3, 28: 3, 29: 3, 36: 4, 37: 4,\n38: 4, 39: 4, 45: 5, 46: 5, 47: 5, 48: 5, 49: 5, 54: 6,\nsome_other_code = foo()\nPrevious versions of the interpreter reported confusing places as the location of the syntax error:\nFile \"example.py\", line 3\nsome_other_code = foo()\n^\nSyntaxError: invalid syntax\nbut in Python 3.10 a more informative error is emitted:\nFile \"example.py\", line 1\nexpected = {9: 1, 18: 2, 19: 2, 27: 3, 28: 3, 29: 3, 36: 4, 37: 4,\n^\nSyntaxError: '{' was never closed\nIn a similar way, errors involving unclosed string literals (single and triple quoted) now point to the start of the string instead of reporting EOF/EOL.\nThese improvements are inspired by previous work in the PyPy interpreter.\n(Contributed by Pablo Galindo in bpo-42864 and Batuhan Taskaya in bpo-40176.)\nSyntaxError\nexceptions raised by the interpreter will now highlight the\nfull error range of the expression that constitutes the syntax error itself,\ninstead of just where the problem is detected. 
In this way, instead of displaying\n(before Python 3.10):\n>>> foo(x, z for z in range(10), t, w)\nFile \"\", line 1\nfoo(x, z for z in range(10), t, w)\n^\nSyntaxError: Generator expression must be parenthesized\nnow Python 3.10 will display the exception as:\n>>> foo(x, z for z in range(10), t, w)\nFile \"\", line 1\nfoo(x, z for z in range(10), t, w)\n^^^^^^^^^^^^^^^^^^^^\nSyntaxError: Generator expression must be parenthesized\nThis improvement was contributed by Pablo Galindo in bpo-43914.\nA considerable amount of new specialized messages for SyntaxError\nexceptions\nhave been incorporated. Some of the most notable ones are as follows:\nMissing\n:\nbefore blocks:>>> if rocket.position > event_horizon File \"\", line 1 if rocket.position > event_horizon ^ SyntaxError: expected ':'\n(Contributed by Pablo Galindo in bpo-42997.)\nUnparenthesised tuples in comprehensions targets:\n>>> {x,y for x,y in zip('abcd', '1234')} File \"\", line 1 {x,y for x,y in zip('abcd', '1234')} ^ SyntaxError: did you forget parentheses around the comprehension target?\n(Contributed by Pablo Galindo in bpo-43017.)\nMissing commas in collection literals and between expressions:\n>>> items = { ... x: 1, ... y: 2 ... z: 3, File \"\", line 3 y: 2 ^ SyntaxError: invalid syntax. Perhaps you forgot a comma?\n(Contributed by Pablo Galindo in bpo-43822.)\nMultiple Exception types without parentheses:\n>>> try: ... build_dyson_sphere() ... except NotEnoughScienceError, NotEnoughResourcesError: File \"\", line 3 except NotEnoughScienceError, NotEnoughResourcesError: ^ SyntaxError: multiple exception types must be parenthesized\n(Contributed by Pablo Galindo in bpo-43149.)\nMissing\n:\nand values in dictionary literals:>>> values = { ... x: 1, ... y: 2, ... z: ... 
} File \"\", line 4 z: ^ SyntaxError: expression expected after dictionary key and ':' >>> values = {x:1, y:2, z w:3} File \"\", line 1 values = {x:1, y:2, z w:3} ^ SyntaxError: ':' expected after dictionary key\n(Contributed by Pablo Galindo in bpo-43823.)\ntry\nblocks withoutexcept\norfinally\nblocks:>>> try: ... x = 2 ... something = 3 File \"\", line 3 something = 3 ^^^^^^^^^ SyntaxError: expected 'except' or 'finally' block\n(Contributed by Pablo Galindo in bpo-44305.)\nUsage of\n=\ninstead of==\nin comparisons:>>> if rocket.position = event_horizon: File \"\", line 1 if rocket.position = event_horizon: ^ SyntaxError: cannot assign to attribute here. Maybe you meant '==' instead of '='?\n(Contributed by Pablo Galindo in bpo-43797.)\nUsage of\n*\nin f-strings:>>> f\"Black holes {*all_black_holes} and revelations\" File \"\", line 1 (*all_black_holes) ^ SyntaxError: f-string: cannot use starred expression here\n(Contributed by Pablo Galindo in bpo-41064.)\nIndentationErrors\u00b6\nMany IndentationError\nexceptions now have more context regarding what kind of block\nwas expecting an indentation, including the location of the statement:\n>>> def foo():\n... if lel:\n... x = 2\nFile \"\", line 3\nx = 2\n^\nIndentationError: expected an indented block after 'if' statement in line 2\nAttributeErrors\u00b6\nWhen printing AttributeError\n, PyErr_Display()\nwill offer\nsuggestions of similar attribute names in the object that the exception was\nraised from:\n>>> collections.namedtoplo\nTraceback (most recent call last):\nFile \"\", line 1, in \nAttributeError: module 'collections' has no attribute 'namedtoplo'. Did you mean: namedtuple?\n(Contributed by Pablo Galindo in bpo-38530.)\nWarning\nNotice this won\u2019t work if PyErr_Display()\nis not called to display the error\nwhich can happen if some other custom error display function is used. 
This is a common\nscenario in some REPLs like IPython.\nNameErrors\u00b6\nWhen printing NameError\nraised by the interpreter, PyErr_Display()\nwill offer suggestions of similar variable names in the function that the exception\nwas raised from:\n>>> schwarzschild_black_hole = None\n>>> schwarschild_black_hole\nTraceback (most recent call last):\nFile \"\", line 1, in \nNameError: name 'schwarschild_black_hole' is not defined. Did you mean: schwarzschild_black_hole?\n(Contributed by Pablo Galindo in bpo-38530.)\nWarning\nNotice this won\u2019t work if PyErr_Display()\nis not called to display the error,\nwhich can happen if some other custom error display function is used. This is a common\nscenario in some REPLs like IPython.\nPEP 626: Precise line numbers for debugging and other tools\u00b6\nPEP 626 brings more precise and reliable line numbers for debugging, profiling and coverage tools. Tracing events, with the correct line number, are generated for all lines of code executed and only for lines of code that are executed.\nThe f_lineno\nattribute of frame objects will always contain the\nexpected line number.\nThe co_lnotab\nattribute of\ncode objects is deprecated and\nwill be removed in 3.12.\nCode that needs to convert from offset to line number should use the new\nco_lines()\nmethod instead.\nPEP 634: Structural Pattern Matching\u00b6\nStructural pattern matching has been added in the form of a match statement and case statements of patterns with associated actions. Patterns consist of sequences, mappings, primitive data types as well as class instances. 
Pattern matching enables programs to extract information from complex data types, branch on the structure of data, and apply specific actions based on different forms of data.\nSyntax and operations\u00b6\nThe generic syntax of pattern matching is:\nmatch subject:\ncase <pattern_1>:\n<action_1>\ncase <pattern_2>:\n<action_2>\ncase <pattern_3>:\n<action_3>\ncase _:\n<action_wildcard>\nA match statement takes an expression and compares its value to successive patterns given as one or more case blocks. Specifically, pattern matching operates by:\nusing data with type and shape (the subject\n), evaluating the subject\nin the match\nstatement, comparing the subject with each pattern in a case\nstatement from top to bottom until a match is confirmed, and executing the action associated with the pattern of the confirmed match.\nIf an exact match is not confirmed, the last case, a wildcard _\n, if provided, will be used as the matching case. If an exact match is not confirmed and a wildcard case does not exist, the entire match block is a no-op.\nDeclarative approach\u00b6\nReaders may be aware of pattern matching through the simple example of matching a subject (data object) to a literal (pattern) with the switch statement found in C, Java or JavaScript (and many other languages). Often the switch statement is used for comparison of an object/expression with case statements containing literals.\nMore powerful examples of pattern matching can be found in languages such as Scala and Elixir. With structural pattern matching, the approach is \u201cdeclarative\u201d and explicitly states the conditions (the patterns) for data to match.\nWhile an \u201cimperative\u201d series of instructions using nested \u201cif\u201d statements could be used to accomplish something similar to structural pattern matching, it is less clear than the \u201cdeclarative\u201d approach. Instead, the \u201cdeclarative\u201d approach states the conditions to meet for a match and is more readable through its explicit patterns. 
While structural pattern matching can be used in its simplest form comparing a variable to a literal in a case statement, its true value for Python lies in its handling of the subject\u2019s type and shape.\nSimple pattern: match to a literal\u00b6\nLet\u2019s look at this example as pattern matching in its simplest form: a value,\nthe subject, being matched to several literals, the patterns. In the example\nbelow, status\nis the subject of the match statement. The patterns are\neach of the case statements, where literals represent request status codes.\nThe associated action to the case is executed after a match:\ndef http_error(status):\nmatch status:\ncase 400:\nreturn \"Bad request\"\ncase 404:\nreturn \"Not found\"\ncase 418:\nreturn \"I'm a teapot\"\ncase _:\nreturn \"Something's wrong with the internet\"\nIf the above function is passed a status\nof 418, \u201cI\u2019m a teapot\u201d is returned.\nIf the above function is passed a status\nof 500, the case statement with\n_\nwill match as a wildcard, and \u201cSomething\u2019s wrong with the internet\u201d is\nreturned.\nNote the last block: the variable name, _\n, acts as a wildcard and insures\nthe subject will always match. The use of _\nis optional.\nYou can combine several literals in a single pattern using |\n(\u201cor\u201d):\ncase 401 | 403 | 404:\nreturn \"Not allowed\"\nBehavior without the wildcard\u00b6\nIf we modify the above example by removing the last case block, the example becomes:\ndef http_error(status):\nmatch status:\ncase 400:\nreturn \"Bad request\"\ncase 404:\nreturn \"Not found\"\ncase 418:\nreturn \"I'm a teapot\"\nWithout the use of _\nin a case statement, a match may not exist. If no\nmatch exists, the behavior is a no-op. For example, if status\nof 500 is\npassed, a no-op occurs.\nPatterns with a literal and variable\u00b6\nPatterns can look like unpacking assignments, and a pattern may be used to bind variables. 
In this example, a data point can be unpacked to its x-coordinate and y-coordinate:\n# point is an (x, y) tuple\nmatch point:\ncase (0, 0):\nprint(\"Origin\")\ncase (0, y):\nprint(f\"Y={y}\")\ncase (x, 0):\nprint(f\"X={x}\")\ncase (x, y):\nprint(f\"X={x}, Y={y}\")\ncase _:\nraise ValueError(\"Not a point\")\nThe first pattern has two literals, (0, 0)\n, and may be thought of as an\nextension of the literal pattern shown above. The next two patterns combine a\nliteral and a variable, and the variable binds a value from the subject\n(point\n). The fourth pattern captures two values, which makes it\nconceptually similar to the unpacking assignment (x, y) = point\n.\nPatterns and classes\u00b6\nIf you are using classes to structure your data, you can use as a pattern the class name followed by an argument list resembling a constructor. This pattern has the ability to capture instance attributes into variables:\nclass Point:\ndef __init__(self, x, y):\nself.x = x\nself.y = y\ndef location(point):\nmatch point:\ncase Point(x=0, y=0):\nprint(\"Origin is the point's location.\")\ncase Point(x=0, y=y):\nprint(f\"Y={y} and the point is on the y-axis.\")\ncase Point(x=x, y=0):\nprint(f\"X={x} and the point is on the x-axis.\")\ncase Point():\nprint(\"The point is located somewhere else on the plane.\")\ncase _:\nprint(\"Not a point\")\nPatterns with positional parameters\u00b6\nYou can use positional parameters with some builtin classes that provide an\nordering for their attributes (e.g. dataclasses). You can also define a specific\nposition for attributes in patterns by setting the __match_args__\nspecial\nattribute in your classes. If it\u2019s set to (\u201cx\u201d, \u201cy\u201d), the following patterns\nare all equivalent (and all bind the y\nattribute to the var\nvariable):\nPoint(1, var)\nPoint(1, y=var)\nPoint(x=1, y=var)\nPoint(y=var, x=1)\nNested patterns\u00b6\nPatterns can be arbitrarily nested. 
For example, if our data is a short list of points, it could be matched like this:\nmatch points:\ncase []:\nprint(\"No points in the list.\")\ncase [Point(0, 0)]:\nprint(\"The origin is the only point in the list.\")\ncase [Point(x, y)]:\nprint(f\"A single point {x}, {y} is in the list.\")\ncase [Point(0, y1), Point(0, y2)]:\nprint(f\"Two points on the Y axis at {y1}, {y2} are in the list.\")\ncase _:\nprint(\"Something else is found in the list.\")\nComplex patterns and the wildcard\u00b6\nTo this point, the examples have used _\nalone in the last case statement.\nA wildcard can be used in more complex patterns, such as ('error', code, _)\n.\nFor example:\nmatch test_variable:\ncase ('warning', code, 40):\nprint(\"A warning has been received.\")\ncase ('error', code, _):\nprint(f\"An error {code} occurred.\")\nIn the above case, test_variable\nwill match for (\u2018error\u2019, code, 100) and\n(\u2018error\u2019, code, 800).\nGuard\u00b6\nWe can add an if\nclause to a pattern, known as a \u201cguard\u201d. If the\nguard is false, match\ngoes on to try the next case block. Note\nthat value capture happens before the guard is evaluated:\nmatch point:\ncase Point(x, y) if x == y:\nprint(f\"The point is located on the diagonal Y=X at {x}.\")\ncase Point(x, y):\nprint(f\"Point is not on the diagonal.\")\nOther Key Features\u00b6\nSeveral other key features:\nLike unpacking assignments, tuple and list patterns have exactly the same meaning and actually match arbitrary sequences. Technically, the subject must be a sequence. Therefore, an important exception is that patterns don\u2019t match iterators. Also, to prevent a common mistake, sequence patterns don\u2019t match strings.\nSequence patterns support wildcards:\n[x, y, *rest]\nand(x, y, *rest)\nwork similar to wildcards in unpacking assignments. 
The name after*\nmay also be_\n, so(x, y, *_)\nmatches a sequence of at least two items without binding the remaining items.Mapping patterns:\n{\"bandwidth\": b, \"latency\": l}\ncaptures the\"bandwidth\"\nand\"latency\"\nvalues from a dict. Unlike sequence patterns, extra keys are ignored. A wildcard**rest\nis also supported. (But**_\nwould be redundant, so is not allowed.)Subpatterns may be captured using the\nas\nkeyword:case (Point(x1, y1), Point(x2, y2) as p2): ...\nThis binds x1, y1, x2, y2 like you would expect without the\nas\nclause, and p2 to the entire second item of the subject.Most literals are compared by equality. However, the singletons\nTrue\n,False\nandNone\nare compared by identity.Named constants may be used in patterns. These named constants must be dotted names to prevent the constant from being interpreted as a capture variable:\nfrom enum import Enum class Color(Enum): RED = 0 GREEN = 1 BLUE = 2 color = Color.GREEN match color: case Color.RED: print(\"I see red!\") case Color.GREEN: print(\"Grass is green\") case Color.BLUE: print(\"I'm feeling the blues :(\")\nFor the full specification see PEP 634. Motivation and rationale are in PEP 635, and a longer tutorial is in PEP 636.\nOptional EncodingWarning\nand encoding=\"locale\"\noption\u00b6\nThe default encoding of TextIOWrapper\nand open()\nis\nplatform and locale dependent. Since UTF-8 is used on most Unix\nplatforms, omitting encoding\noption when opening UTF-8 files\n(e.g. JSON, YAML, TOML, Markdown) is a very common bug. 
For example:\n# BUG: \"rb\" mode or encoding=\"utf-8\" should be used.\nwith open(\"data.json\") as f:\ndata = json.load(f)\nTo find this type of bug, an optional EncodingWarning\nis added.\nIt is emitted when sys.flags.warn_default_encoding\nis true and locale-specific default encoding is used.\n-X warn_default_encoding\noption and PYTHONWARNDEFAULTENCODING\nare added to enable the warning.\nSee Text Encoding for more information.\nOther Language Changes\u00b6\nThe\nint\ntype has a new methodint.bit_count()\n, returning the number of ones in the binary expansion of a given integer, also known as the population count. (Contributed by Niklas Fiekas in bpo-29882.)The views returned by\ndict.keys()\n,dict.values()\nanddict.items()\nnow all have amapping\nattribute that gives atypes.MappingProxyType\nobject wrapping the original dictionary. (Contributed by Dennis Sweeney in bpo-40890.)PEP 618: The\nzip()\nfunction now has an optionalstrict\nflag, used to require that all the iterables have an equal length.Builtin and extension functions that take integer arguments no longer accept\nDecimal\ns,Fraction\ns and other objects that can be converted to integers only with a loss (e.g. that have the__int__()\nmethod but do not have the__index__()\nmethod). (Contributed by Serhiy Storchaka in bpo-37999.)If\nobject.__ipow__()\nreturnsNotImplemented\n, the operator will correctly fall back toobject.__pow__()\nandobject.__rpow__()\nas expected. (Contributed by Alex Shkop in bpo-38302.)Assignment expressions can now be used unparenthesized within set literals and set comprehensions, as well as in sequence indexes (but not slices).\nFunctions have a new\n__builtins__\nattribute which is used to look for builtin symbols when a function is executed, instead of looking into__globals__['__builtins__']\n. The attribute is initialized from__globals__[\"__builtins__\"]\nif it exists, else from the current builtins. 
(Contributed by Mark Shannon in bpo-42990.)
Two new builtin functions, aiter() and anext(), have been added to provide asynchronous counterparts to iter() and next(), respectively. (Contributed by Joshua Bronson, Daniel Pope, and Justin Wang in bpo-31861.)
Static methods (@staticmethod) and class methods (@classmethod) now inherit the method attributes (__module__, __name__, __qualname__, __doc__, __annotations__) and have a new __wrapped__ attribute. Moreover, static methods are now callable as regular functions. (Contributed by Victor Stinner in bpo-43682.)
Annotations for complex targets (everything beside simple name targets defined by PEP 526) no longer cause any runtime effects with from __future__ import annotations. (Contributed by Batuhan Taskaya in bpo-42737.)
Class and module objects now lazy-create empty annotations dicts on demand. The annotations dicts are stored in the object's __dict__ for backwards compatibility. This improves the best practices for working with __annotations__; for more information, please see Annotations Best Practices. (Contributed by Larry Hastings in bpo-43901.)
Annotations that consist of yield, yield from, await or named expressions are now forbidden under from __future__ import annotations due to their side effects. (Contributed by Batuhan Taskaya in bpo-42725.)
Usage of unbound variables, super() and other expressions that might alter the processing of the symbol table as annotations is now rendered effectless under from __future__ import annotations. (Contributed by Batuhan Taskaya in bpo-42725.)
Hashes of NaN values of both float type and decimal.Decimal type now depend on object identity. Formerly, they always hashed to 0 even though NaN values are not equal to one another. This caused potentially quadratic runtime behavior due to excessive hash collisions when creating dictionaries and sets containing multiple NaNs.
(Contributed by Raymond Hettinger in bpo-43475.)
A SyntaxError (instead of a NameError) will be raised when deleting the __debug__ constant. (Contributed by Donghee Na in bpo-45000.)
SyntaxError exceptions now have end_lineno and end_offset attributes. They will be None if not determined. (Contributed by Pablo Galindo in bpo-43914.)
New Modules¶
None.
Improved Modules¶
asyncio¶
Add missing connect_accepted_socket() method. (Contributed by Alex Grönholm in bpo-41332.)
argparse¶
The misleading phrase "optional arguments" was replaced with "options" in argparse help. Some tests might require adaptation if they rely on an exact output match. (Contributed by Raymond Hettinger in bpo-9694.)
array¶
The index() method of array.array now has optional start and stop parameters. (Contributed by Anders Lorentsen and Zackery Spytz in bpo-31956.)
asynchat, asyncore, smtpd¶
These modules have been marked as deprecated in their module documentation since Python 3.6. An import-time DeprecationWarning has now been added to all three of these modules.
base64¶
Add base64.b32hexencode() and base64.b32hexdecode() to support the Base32 Encoding with Extended Hex Alphabet.
bdb¶
Add clearBreakpoints() to reset all set breakpoints. (Contributed by Irit Katriel in bpo-24160.)
bisect¶
Added the possibility of providing a key function to the APIs in the bisect module. (Contributed by Raymond Hettinger in bpo-4356.)
codecs¶
Add a codecs.unregister() function to unregister a codec search function. (Contributed by Hai Shi in bpo-41842.)
collections.abc¶
The __args__ of the parameterized generic for collections.abc.Callable are now consistent with typing.Callable. The collections.abc.Callable generic now flattens type parameters, similar to what typing.Callable currently does.
This means that collections.abc.Callable[[int, str], str] will have __args__ of (int, str, str); previously this was ([int, str], str). To allow this change, types.GenericAlias can now be subclassed, and a subclass will be returned when subscripting the collections.abc.Callable type. Note that a TypeError may be raised for invalid forms of parameterizing collections.abc.Callable which may have passed silently in Python 3.9. (Contributed by Ken Jin in bpo-42195.)
contextlib¶
Add a contextlib.aclosing() context manager to safely close async generators and objects representing asynchronously released resources. (Contributed by Joongi Kim and John Belmonte in bpo-41229.)
Add asynchronous context manager support to contextlib.nullcontext(). (Contributed by Tom Gringauz in bpo-41543.)
Add AsyncContextDecorator, for supporting usage of async context managers as decorators.
curses¶
The extended color functions added in ncurses 6.1 will be used transparently by curses.color_content(), curses.init_color(), curses.init_pair(), and curses.pair_content(). A new function, curses.has_extended_color_support(), indicates whether extended color support is provided by the underlying ncurses library. (Contributed by Jeffrey Kintscher and Hans Petter Jansson in bpo-36982.)
The BUTTON5_* constants are now exposed in the curses module if they are provided by the underlying curses library. (Contributed by Zackery Spytz in bpo-39273.)
dataclasses¶
__slots__¶
Added slots parameter in the dataclasses.dataclass() decorator. (Contributed by Yurii Karabas in bpo-42269.)
Keyword-only fields¶
dataclasses now supports fields that are keyword-only in the generated __init__ method.
There are a number of ways of specifying keyword-only fields.
You can say that every field is keyword-only:

    from dataclasses import dataclass

    @dataclass(kw_only=True)
    class Birthday:
        name: str
        birthday: datetime.date

Both name and birthday are keyword-only parameters to the generated __init__ method.
You can specify keyword-only on a per-field basis:

    from dataclasses import dataclass, field

    @dataclass
    class Birthday:
        name: str
        birthday: datetime.date = field(kw_only=True)

Here only birthday is keyword-only. If you set kw_only on individual fields, be aware that there are rules about re-ordering fields due to keyword-only fields needing to follow non-keyword-only fields. See the full dataclasses documentation for details.
You can also specify that all fields following a KW_ONLY marker are keyword-only. This will probably be the most common usage:

    from dataclasses import dataclass, KW_ONLY

    @dataclass
    class Point:
        x: float
        y: float
        _: KW_ONLY
        z: float = 0.0
        t: float = 0.0

Here, z and t are keyword-only parameters, while x and y are not.
(Contributed by Eric V. Smith in bpo-43532.)
distutils¶
The entire distutils package is deprecated, to be removed in Python 3.12. Its functionality for specifying package builds has already been completely replaced by the third-party packages setuptools and packaging, and most other commonly used APIs are available elsewhere in the standard library (such as platform, shutil, subprocess or sysconfig).
There are no plans to migrate any other functionality from distutils, and applications that are using other functions should plan to make private copies of the code. Refer to PEP 632 for discussion.
The bdist_wininst command deprecated in Python 3.8 has been removed. The bdist_wheel command is now recommended to distribute binary packages on Windows. (Contributed by Victor Stinner in bpo-42802.)
doctest¶
When a module does not define __loader__, fall back to __spec__.loader. (Contributed by Brett Cannon in bpo-42133.)
encodings¶
encodings.normalize_encoding() now ignores non-ASCII characters. (Contributed by Hai Shi in bpo-39337.)
enum¶
Enum __repr__() now returns enum_name.member_name and __str__() now returns member_name. Stdlib enums available as module constants have a repr() of module_name.member_name. (Contributed by Ethan Furman in bpo-40066.)
Add enum.StrEnum for enums where all members are strings. (Contributed by Ethan Furman in bpo-41816.)
fileinput¶
Add encoding and errors parameters in fileinput.input() and fileinput.FileInput. (Contributed by Inada Naoki in bpo-43712.)
fileinput.hook_compressed() now returns a TextIOWrapper object when mode is "r" and the file is compressed, like uncompressed files. (Contributed by Inada Naoki in bpo-5758.)
faulthandler¶
The faulthandler module now detects if a fatal error occurs during a garbage collector collection. (Contributed by Victor Stinner in bpo-44466.)
gc¶
Add audit hooks for gc.get_objects(), gc.get_referrers() and gc.get_referents(). (Contributed by Pablo Galindo in bpo-43439.)
glob¶
Add the root_dir and dir_fd parameters in glob() and iglob(), which allow specifying the root directory for searching. (Contributed by Serhiy Storchaka in bpo-38144.)
hashlib¶
The hashlib module requires OpenSSL 1.1.1 or newer.
(Contributed by Christian Heimes in PEP 644 and bpo-43669.)\nThe hashlib module has preliminary support for OpenSSL 3.0.0. (Contributed by Christian Heimes in bpo-38820 and other issues.)\nThe pure-Python fallback of pbkdf2_hmac()\nis deprecated. In\nthe future PBKDF2-HMAC will only be available when Python has been built with\nOpenSSL support.\n(Contributed by Christian Heimes in bpo-43880.)\nhmac\u00b6\nThe hmac module now uses OpenSSL\u2019s HMAC implementation internally. (Contributed by Christian Heimes in bpo-40645.)\nIDLE and idlelib\u00b6\nMake IDLE invoke sys.excepthook()\n(when started without \u2018-n\u2019).\nUser hooks were previously ignored. (Contributed by Ken Hilton in\nbpo-43008.)\nRearrange the settings dialog. Split the General tab into Windows and Shell/Ed tabs. Move help sources, which extend the Help menu, to the Extensions tab. Make space for new options and shorten the dialog. The latter makes the dialog better fit small screens. (Contributed by Terry Jan Reedy in bpo-40468.) Move the indent space setting from the Font tab to the new Windows tab. (Contributed by Mark Roseman and Terry Jan Reedy in bpo-33962.)\nThe changes above were backported to a 3.9 maintenance release.\nAdd a Shell sidebar. Move the primary prompt (\u2018>>>\u2019) to the sidebar. Add secondary prompts (\u2019\u2026\u2019) to the sidebar. Left click and optional drag selects one or more lines of text, as with the editor line number sidebar. Right click after selecting text lines displays a context menu with \u2018copy with prompts\u2019. This zips together prompts from the sidebar with lines from the selected text. This option also appears on the context menu for the text. (Contributed by Tal Einat in bpo-37903.)\nUse spaces instead of tabs to indent interactive code. This makes interactive code entries \u2018look right\u2019. Making this feasible was a major motivation for adding the shell sidebar. 
(Contributed by Terry Jan Reedy in bpo-37892.)
Highlight the new soft keywords match, case, and _ in pattern-matching statements. However, this highlighting is not perfect and will be incorrect in some rare cases, including some _-s in case patterns. (Contributed by Tal Einat in bpo-44010.)
New in 3.10 maintenance releases:
Apply syntax highlighting to .pyi files. (Contributed by Alex Waygood and Terry Jan Reedy in bpo-45447.)
Include prompts when saving Shell with inputs and outputs. (Contributed by Terry Jan Reedy in gh-95191.)
importlib.metadata¶
Feature parity with importlib_metadata 4.6 (history).
importlib.metadata entry points now provide a nicer experience for selecting entry points by group and name through a new importlib.metadata.EntryPoints class. See the Compatibility Note in the docs for more info on the deprecation and usage.
Added importlib.metadata.packages_distributions() for resolving top-level Python modules and packages to their importlib.metadata.Distribution.
inspect¶
When a module does not define __loader__, fall back to __spec__.loader. (Contributed by Brett Cannon in bpo-42133.)
Add inspect.get_annotations(), which safely computes the annotations defined on an object. It works around the quirks of accessing the annotations on various types of objects, and makes very few assumptions about the object it examines. inspect.get_annotations() can also correctly un-stringize stringized annotations. inspect.get_annotations() is now considered best practice for accessing the annotations dict defined on any Python object; for more information on best practices for working with annotations, please see Annotations Best Practices.
Relatedly, inspect.signature(), inspect.Signature.from_callable(), and inspect.Signature.from_function() now call inspect.get_annotations() to retrieve annotations.
This means inspect.signature() and inspect.Signature.from_callable() can also now un-stringize stringized annotations. (Contributed by Larry Hastings in bpo-43817.)
itertools¶
Add itertools.pairwise(). (Contributed by Raymond Hettinger in bpo-38200.)
linecache¶
When a module does not define __loader__, fall back to __spec__.loader. (Contributed by Brett Cannon in bpo-42133.)
os¶
Add os.cpu_count() support for VxWorks RTOS. (Contributed by Peixing Xin in bpo-41440.)
Add a new function os.eventfd() and related helpers to wrap the eventfd2 syscall on Linux. (Contributed by Christian Heimes in bpo-41001.)
Add os.splice(), which allows moving data between two file descriptors without copying between kernel address space and user address space, where one of the file descriptors must refer to a pipe. (Contributed by Pablo Galindo in bpo-41625.)
Add O_EVTONLY, O_FSYNC, O_SYMLINK and O_NOFOLLOW_ANY for macOS. (Contributed by Donghee Na in bpo-43106.)
os.path¶
os.path.realpath() now accepts a strict keyword-only argument. When set to True, OSError is raised if a path doesn't exist or a symlink loop is encountered. (Contributed by Barney Gale in bpo-43757.)
pathlib¶
Add slice support to PurePath.parents. (Contributed by Joshua Cannon in bpo-35498.)
Add negative indexing support to PurePath.parents. (Contributed by Yaroslav Pankovych in bpo-21041.)
Add Path.hardlink_to method that supersedes link_to().
The new method has the same argument order as symlink_to(). (Contributed by Barney Gale in bpo-39950.)
pathlib.Path.stat() and chmod() now accept a follow_symlinks keyword-only argument for consistency with corresponding functions in the os module. (Contributed by Barney Gale in bpo-39906.)
platform¶
Add platform.freedesktop_os_release() to retrieve operating system identification from the freedesktop.org os-release standard file. (Contributed by Christian Heimes in bpo-28468.)
pprint¶
pprint.pprint() now accepts a new underscore_numbers keyword argument. (Contributed by sblondon in bpo-42914.)
pprint can now pretty-print dataclasses.dataclass instances. (Contributed by Lewis Gaul in bpo-43080.)
py_compile¶
Add --quiet option to the command-line interface of py_compile. (Contributed by Gregory Schevchenko in bpo-38731.)
pyclbr¶
Add an end_lineno attribute to the Function and Class objects in the tree returned by pyclbr.readmodule() and pyclbr.readmodule_ex().
It matches the existing (start) lineno. (Contributed by Aviral Srivastava in bpo-38307.)
shelve¶
The shelve module now uses pickle.DEFAULT_PROTOCOL by default instead of pickle protocol 3 when creating shelves. (Contributed by Zackery Spytz in bpo-34204.)
statistics¶
Add covariance(), Pearson's correlation(), and simple linear_regression() functions. (Contributed by Tymoteusz Wołodźko in bpo-38490.)
site¶
When a module does not define __loader__, fall back to __spec__.loader. (Contributed by Brett Cannon in bpo-42133.)
socket¶
The exception socket.timeout is now an alias of TimeoutError. (Contributed by Christian Heimes in bpo-42413.)
Add option to create MPTCP sockets with IPPROTO_MPTCP. (Contributed by Rui Cunha in bpo-43571.)
Add IP_RECVTOS option to receive the type of service (ToS) or DSCP/ECN fields. (Contributed by Georg Sauthoff in bpo-44077.)
ssl¶
The ssl module requires OpenSSL 1.1.1 or newer. (Contributed by Christian Heimes in PEP 644 and bpo-43669.)
The ssl module has preliminary support for OpenSSL 3.0.0 and the new option OP_IGNORE_UNEXPECTED_EOF. (Contributed by Christian Heimes in bpo-38820, bpo-43794, bpo-43788, bpo-43791, bpo-43799, bpo-43920, bpo-43789, and bpo-43811.)
Deprecated functions and use of deprecated constants now result in a DeprecationWarning. ssl.SSLContext.options has OP_NO_SSLv2 and OP_NO_SSLv3 set by default and therefore cannot warn about setting the flag again. The deprecation section has a list of deprecated features. (Contributed by Christian Heimes in bpo-43880.)
The ssl module now has more secure default settings. Ciphers without forward secrecy or SHA-1 MAC are disabled by default.
Security level 2 prohibits weak RSA, DH, and ECC keys with less than 112 bits of security. SSLContext defaults to minimum protocol version TLS 1.2. Settings are based on Hynek Schlawack's research. (Contributed by Christian Heimes in bpo-43998.)
The deprecated protocols SSL 3.0, TLS 1.0, and TLS 1.1 are no longer officially supported. Python does not block them actively. However, OpenSSL build options, distro configurations, vendor patches, and cipher suites may prevent a successful handshake.
Add a timeout parameter to the ssl.get_server_certificate() function. (Contributed by Zackery Spytz in bpo-31870.)
The ssl module uses heap-types and multi-phase initialization. (Contributed by Christian Heimes in bpo-42333.)
A new verify flag VERIFY_X509_PARTIAL_CHAIN has been added. (Contributed by l0x in bpo-40849.)
sqlite3¶
Add audit events for connect(), enable_load_extension(), and load_extension(). (Contributed by Erlend E. Aasland in bpo-43762.)
sys¶
Add sys.orig_argv attribute: the list of the original command line arguments passed to the Python executable. (Contributed by Victor Stinner in bpo-23427.)
Add sys.stdlib_module_names, containing the list of the standard library module names. (Contributed by Victor Stinner in bpo-42955.)
_thread¶
_thread.interrupt_main() now takes an optional signal number to simulate (the default is still signal.SIGINT). (Contributed by Antoine Pitrou in bpo-43356.)
threading¶
Add threading.gettrace() and threading.getprofile() to retrieve the functions set by threading.settrace() and threading.setprofile() respectively. (Contributed by Mario Corchero in bpo-42251.)
Add threading.__excepthook__ to allow retrieving the original value of threading.excepthook() in case it is set to a broken or a different value. (Contributed by Mario Corchero in bpo-42308.)
traceback¶
The format_exception(), format_exception_only(), and print_exception() functions
can now take an exception object as a positional-only argument. (Contributed by Zackery Spytz and Matthias Bussonnier in bpo-26389.)
types¶
Reintroduce the types.EllipsisType, types.NoneType and types.NotImplementedType classes, providing a new set of types readily interpretable by type checkers. (Contributed by Bas van Beek in bpo-41810.)
typing¶
For major changes, see New Features Related to Type Hints.
The behavior of typing.Literal was changed to conform with PEP 586 and to match the behavior of static type checkers specified in the PEP.
Literal now de-duplicates parameters.
Equality comparisons between Literal objects are now order independent.
Literal comparisons now respect types. For example, Literal[0] == Literal[False] previously evaluated to True. It is now False. To support this change, the internally used type cache now supports differentiating types.
Literal objects will now raise a TypeError exception during equality comparisons if any of their parameters are not hashable. Note that declaring Literal with unhashable parameters will not throw an error:

    >>> from typing import Literal
    >>> Literal[{0}]
    typing.Literal[{0}]
    >>> Literal[{0}] == Literal[{False}]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unhashable type: 'set'

(Contributed by Yurii Karabas in bpo-42345.)
Add new function typing.is_typeddict() to introspect if an annotation is a typing.TypedDict. (Contributed by Patrick Reader in bpo-41792.)
Subclasses of typing.Protocol which only have data variables declared will now raise a TypeError when checked with isinstance unless they are decorated with runtime_checkable(). Previously, these checks passed silently. Users should decorate their subclasses with the runtime_checkable() decorator if they want runtime protocols. (Contributed by Yurii Karabas in bpo-38908.)
Importing from the typing.io and typing.re submodules will now emit DeprecationWarning.
These submodules have been deprecated since Python 3.8 and will be removed in a future version of Python. Anything belonging to those submodules should be imported directly from typing instead. (Contributed by Sebastian Rittau in bpo-38291.)
unittest¶
Add new method assertNoLogs() to complement the existing assertLogs(). (Contributed by Kit Yan Choi in bpo-39385.)
urllib.parse¶
Python versions earlier than Python 3.10 allowed using both ; and & as query parameter separators in urllib.parse.parse_qs() and urllib.parse.parse_qsl(). Due to security concerns, and to conform with newer W3C recommendations, this has been changed to allow only a single separator key, with & as the default. This change also affects cgi.parse() and cgi.parse_multipart() as they use the affected functions internally. For more details, please see their respective documentation. (Contributed by Adam Goldschmidt, Senthil Kumaran and Ken Jin in bpo-42967.)
The presence of newline or tab characters in parts of a URL allows for some forms of attacks. Following the WHATWG specification that updates RFC 3986, ASCII newline \n, \r and tab \t characters are stripped from the URL by the parser in urllib.parse, preventing such attacks. The removal characters are controlled by a new module level variable urllib.parse._UNSAFE_URL_BYTES_TO_REMOVE. (See gh-88048.)
xml¶
Add a LexicalHandler class to the xml.sax.handler module. (Contributed by Jonathan Gossage and Zackery Spytz in bpo-35018.)
zipimport¶
Add methods related to PEP 451: find_spec(), zipimport.zipimporter.create_module(), and zipimport.zipimporter.exec_module(). (Contributed by Brett Cannon in bpo-42131.)
Add invalidate_caches() method. (Contributed by Desmond Cheong in bpo-14678.)
Optimizations¶
Constructors str(), bytes() and bytearray() are now faster (around 30–40% for small objects).
(Contributed by Serhiy Storchaka in bpo-41334.)
The runpy module now imports fewer modules. The python3 -m module-name command startup time is 1.4x faster on average. On Linux, python3 -I -m module-name imports 69 modules on Python 3.9, whereas it only imports 51 modules (-18) on Python 3.10. (Contributed by Victor Stinner in bpo-41006 and bpo-41718.)
The LOAD_ATTR instruction now uses a new "per opcode cache" mechanism. It is about 36% faster now for regular attributes and 44% faster for slots. (Contributed by Pablo Galindo and Yury Selivanov in bpo-42093 and Guido van Rossum in bpo-42927, based on ideas implemented originally in PyPy and MicroPython.)
When building Python with --enable-optimizations, -fno-semantic-interposition is now added to both the compile and link line. This speeds up builds of the Python interpreter created with --enable-shared with gcc by up to 30%. See this article for more details. (Contributed by Victor Stinner and Pablo Galindo in bpo-38980.)
Use a new output buffer management code for the bz2/lzma/zlib modules, and add a .readall() function to the _compression.DecompressReader class. bz2 decompression is now 1.09x ~ 1.17x faster, lzma decompression 1.20x ~ 1.32x faster, and GzipFile.read(-1) 1.11x ~ 1.18x faster. (Contributed by Ma Lin, reviewed by Gregory P. Smith, in bpo-41486.)
When using stringized annotations, annotations dicts for functions are no longer created when the function is created. Instead, they are stored as a tuple of strings, and the function object lazily converts this into the annotations dict on demand. This optimization cuts the CPU time needed to define an annotated function by half. (Contributed by Yurii Karabas and Inada Naoki in bpo-42202.)
Substring search functions such as str1 in str2 and str2.find(str1) now sometimes use Crochemore & Perrin's "Two-Way" string searching algorithm to avoid quadratic behavior on long strings.
(Contributed by Dennis Sweeney in bpo-41972.)
Add micro-optimizations to _PyType_Lookup() to improve type attribute cache lookup performance in the common case of cache hits. This makes the interpreter 1.04 times faster on average. (Contributed by Dino Viehland in bpo-43452.)
The following built-in functions now support the faster PEP 590 vectorcall calling convention: map(), filter(), reversed(), bool() and float(). (Contributed by Donghee Na and Jeroen Demeyer in bpo-43575, bpo-43287, bpo-41922, bpo-41873 and bpo-41870.)
BZ2File performance is improved by removing the internal RLock. This makes BZ2File thread unsafe in the face of multiple simultaneous readers or writers, just like its equivalent classes in gzip and lzma have always been. (Contributed by Inada Naoki in bpo-43785.)
Deprecated¶
Currently Python accepts numeric literals immediately followed by keywords, for example 0in x, 1or x, 0if 1else 2. It allows confusing and ambiguous expressions like [0x1for x in y] (which can be interpreted as [0x1 for x in y] or [0x1f or x in y]). Starting in this release, a deprecation warning is raised if the numeric literal is immediately followed by one of the keywords and, else, for, if, in, is and or. In future releases it will be changed to a syntax warning, and finally to a syntax error. (Contributed by Serhiy Storchaka in bpo-43833.)
Starting in this release, there will be a concerted effort to begin cleaning up old import semantics that were kept for Python 2.7 compatibility.
Specifically, find_loader()/find_module() (superseded by find_spec()), load_module() (superseded by exec_module()), module_repr() (which the import system takes care of for you), the __package__ attribute (superseded by __spec__.parent), the __loader__ attribute (superseded by __spec__.loader), and the __cached__ attribute (superseded by __spec__.cached) will slowly be removed (as well as other classes and methods in importlib). ImportWarning and/or DeprecationWarning will be raised as appropriate to help identify code which needs updating during this transition.
The entire distutils namespace is deprecated, to be removed in Python 3.12. Refer to the module changes section for more information.
Non-integer arguments to random.randrange() are deprecated. The ValueError is deprecated in favor of a TypeError. (Contributed by Serhiy Storchaka and Raymond Hettinger in bpo-37319.)
The various load_module() methods of importlib have been documented as deprecated since Python 3.6, but will now also trigger a DeprecationWarning. Use exec_module() instead. (Contributed by Brett Cannon in bpo-26131.)
zipimport.zipimporter.load_module() has been deprecated in preference for exec_module(). (Contributed by Brett Cannon in bpo-26131.)
The use of load_module() by the import system now triggers an ImportWarning as exec_module() is preferred. (Contributed by Brett Cannon in bpo-26131.)
The use of importlib.abc.MetaPathFinder.find_module() and importlib.abc.PathEntryFinder.find_module() by the import system now trigger an ImportWarning as importlib.abc.MetaPathFinder.find_spec() and importlib.abc.PathEntryFinder.find_spec() are preferred, respectively. You can use importlib.util.spec_from_loader() to help in porting. (Contributed by Brett Cannon in bpo-42134.)
The use of importlib.abc.PathEntryFinder.find_loader() by the import system now triggers an ImportWarning as importlib.abc.PathEntryFinder.find_spec() is preferred.
You can use importlib.util.spec_from_loader() to help in porting. (Contributed by Brett Cannon in bpo-43672.)
The various implementations of importlib.abc.MetaPathFinder.find_module() (importlib.machinery.BuiltinImporter.find_module(), importlib.machinery.FrozenImporter.find_module(), importlib.machinery.WindowsRegistryFinder.find_module(), importlib.machinery.PathFinder.find_module(), importlib.abc.MetaPathFinder.find_module()), importlib.abc.PathEntryFinder.find_module() (importlib.machinery.FileFinder.find_module()), and importlib.abc.PathEntryFinder.find_loader() (importlib.machinery.FileFinder.find_loader()) now raise DeprecationWarning and are slated for removal in Python 3.12 (previously they were documented as deprecated in Python 3.4). (Contributed by Brett Cannon in bpo-42135.)
importlib.abc.Finder is deprecated (including its sole method, find_module()). Both importlib.abc.MetaPathFinder and importlib.abc.PathEntryFinder no longer inherit from the class. Users should inherit from one of these two classes as appropriate instead. (Contributed by Brett Cannon in bpo-42135.)
The deprecations of imp, importlib.find_loader(), importlib.util.set_package_wrapper(), importlib.util.set_loader_wrapper(), importlib.util.module_for_loader(), pkgutil.ImpImporter, and pkgutil.ImpLoader have all been updated to list Python 3.12 as the slated version of removal (they began raising DeprecationWarning in previous versions of Python). (Contributed by Brett Cannon in bpo-43720.)
The import system now uses the __spec__ attribute on modules before falling back on module_repr() for a module's __repr__() method. Removal of the use of module_repr() is scheduled for Python 3.12. (Contributed by Brett Cannon in bpo-42137.)
importlib.abc.Loader.module_repr(), importlib.machinery.FrozenLoader.module_repr(), and importlib.machinery.BuiltinLoader.module_repr() are deprecated and slated for removal in Python 3.12.
(Contributed by Brett Cannon in bpo-42136.)
sqlite3.OptimizedUnicode has been undocumented and obsolete since Python 3.3, when it was made an alias to str. It is now deprecated, scheduled for removal in Python 3.12. (Contributed by Erlend E. Aasland in bpo-42264.)
The undocumented built-in function sqlite3.enable_shared_cache is now deprecated, scheduled for removal in Python 3.12. Its use is strongly discouraged by the SQLite3 documentation. See the SQLite3 docs for more details. If a shared cache must be used, open the database in URI mode using the cache=shared query parameter. (Contributed by Erlend E. Aasland in bpo-24464.)
The following threading methods are now deprecated:
threading.currentThread => threading.current_thread()
threading.activeCount => threading.active_count()
threading.Condition.notifyAll => threading.Condition.notify_all()
threading.Event.isSet => threading.Event.is_set()
threading.Thread.setName => threading.Thread.name
threading.thread.getName => threading.Thread.name
threading.Thread.isDaemon => threading.Thread.daemon
threading.Thread.setDaemon => threading.Thread.daemon
(Contributed by Jelle Zijlstra in gh-87889.)
pathlib.Path.link_to() is deprecated and slated for removal in Python 3.12. Use pathlib.Path.hardlink_to() instead. (Contributed by Barney Gale in bpo-39950.)
cgi.log() is deprecated and slated for removal in Python 3.12.
(Contributed by Inada Naoki in bpo-41139.)
The following ssl features have been deprecated since Python 3.6, Python 3.7, or OpenSSL 1.1.0 and will be removed in 3.11:
OP_NO_SSLv2, OP_NO_SSLv3, OP_NO_TLSv1, OP_NO_TLSv1_1, OP_NO_TLSv1_2, and OP_NO_TLSv1_3 are replaced by minimum_version and maximum_version.
PROTOCOL_SSLv2, PROTOCOL_SSLv3, PROTOCOL_SSLv23, PROTOCOL_TLSv1, PROTOCOL_TLSv1_1, PROTOCOL_TLSv1_2, and PROTOCOL_TLS are deprecated in favor of PROTOCOL_TLS_CLIENT and PROTOCOL_TLS_SERVER.
wrap_socket() is replaced by ssl.SSLContext.wrap_socket().
match_hostname()
RAND_pseudo_bytes(), RAND_egd()
NPN features like ssl.SSLSocket.selected_npn_protocol() and ssl.SSLContext.set_npn_protocols() are replaced by ALPN.
The threading debug (PYTHONTHREADDEBUG environment variable) is deprecated in Python 3.10 and will be removed in Python 3.12. This feature requires a debug build of Python. (Contributed by Victor Stinner in bpo-44584.)
Importing from the typing.io and typing.re submodules will now emit DeprecationWarning. These submodules will be removed in a future version of Python. Anything belonging to these submodules should be imported directly from typing instead. (Contributed by Sebastian Rittau in bpo-38291.)
Removed¶
Removed special methods __int__, __float__, __floordiv__, __mod__, __divmod__, __rfloordiv__, __rmod__ and __rdivmod__ of the complex class. They always raised a TypeError. (Contributed by Serhiy Storchaka in bpo-41974.)
The ParserBase.error() method from the private and undocumented _markupbase module has been removed. html.parser.HTMLParser is the only subclass of ParserBase and its error() implementation was already removed in Python 3.5. (Contributed by Berker Peksag in bpo-31844.)
Removed the unicodedata.ucnhash_CAPI attribute which was an internal PyCapsule object. The related private _PyUnicode_Name_CAPI structure was moved to the internal C API.
(Contributed by Victor Stinner in bpo-42157.)Removed the\nparser\nmodule, which was deprecated in 3.9 due to the switch to the new PEG parser, as well as all the C source and header files that were only being used by the old parser, includingnode.h\n,parser.h\n,graminit.h\nandgrammar.h\n.Removed the Public C API functions\nPyParser_SimpleParseStringFlags\n,PyParser_SimpleParseStringFlagsFilename\n,PyParser_SimpleParseFileFlags\nandPyNode_Compile\nthat were deprecated in 3.9 due to the switch to the new PEG parser.Removed the\nformatter\nmodule, which was deprecated in Python 3.4. It is somewhat obsolete, little used, and not tested. It was originally scheduled to be removed in Python 3.6, but such removals were delayed until after Python 2.7 EOL. Existing users should copy whatever classes they use into their code. (Contributed by Donghee Na and Terry J. Reedy in bpo-42299.)Removed the\nPyModule_GetWarningsModule()\nfunction that was useless now due to the_warnings\nmodule was converted to a builtin module in 2.6. (Contributed by Hai Shi in bpo-42599.)Remove deprecated aliases to Collections Abstract Base Classes from the\ncollections\nmodule. (Contributed by Victor Stinner in bpo-37324.)The\nloop\nparameter has been removed from most ofasyncio\n\u2018s high-level API following deprecation in Python 3.8. The motivation behind this change is multifold:This simplifies the high-level API.\nThe functions in the high-level API have been implicitly getting the current thread\u2019s running event loop since Python 3.7. There isn\u2019t a need to pass the event loop to the API in most normal use cases.\nEvent loop passing is error-prone especially when dealing with loops running in different threads.\nNote that the low-level API will still accept\nloop\n. 
See Changes in the Python API for examples of how to replace existing code. (Contributed by Yurii Karabas, Andrew Svetlov, Yury Selivanov and Kyle Stanley in bpo-42392.)\nPorting to Python 3.10\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in the Python syntax\u00b6\nA deprecation warning is now emitted when compiling previously valid syntax if the numeric literal is immediately followed by a keyword (like in 0in x). In future releases it will be changed to a syntax warning, and finally to a syntax error. To get rid of the warning and make the code compatible with future releases, just add a space between the numeric literal and the following keyword. (Contributed by Serhiy Storchaka in bpo-43833.)\nChanges in the Python API\u00b6\nThe etype parameters of the format_exception(), format_exception_only(), and print_exception() functions in the traceback module have been renamed to exc. (Contributed by Zackery Spytz and Matthias Bussonnier in bpo-26389.)\natexit: At Python exit, if a callback registered with atexit.register() fails, its exception is now logged. Previously, only some exceptions were logged, and the last exception was always silently ignored. (Contributed by Victor Stinner in bpo-42639.)\nThe collections.abc.Callable generic now flattens type parameters, similar to what typing.Callable currently does. This means that collections.abc.Callable[[int, str], str] will have __args__ of (int, str, str); previously this was ([int, str], str). Code which accesses the arguments via typing.get_args() or __args__ needs to account for this change. Furthermore, TypeError may be raised for invalid forms of parameterizing collections.abc.Callable which may have passed silently in Python 3.9. (Contributed by Ken Jin in bpo-42195.)\nsocket.htons() and socket.ntohs() now raise OverflowError instead of DeprecationWarning if the given parameter will not fit in a 16-bit unsigned integer.
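A quick sketch of the collections.abc.Callable flattening described above (behavior as of Python 3.10). Note that typing.get_args() re-packs the flattened tuple back into the ([parameters], return) shape, so only direct __args__ access observes the new form:

```python
import collections.abc
from typing import get_args

c = collections.abc.Callable[[int, str], str]

# __args__ is now flattened: parameter types and the return type
# sit in one tuple, e.g. (int, str, str) rather than ([int, str], str).
print(c.__args__)

# typing.get_args() un-flattens Callable aliases back into
# the ([parameters], return) pair, so most get_args() callers
# are unaffected by this change.
print(get_args(c))
```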
(Contributed by Erlend E. Aasland in bpo-42393.)\nThe loop parameter has been removed from most of asyncio\u2019s high-level API following deprecation in Python 3.8. A coroutine that currently looks like this:\nasync def foo(loop):\n    await asyncio.sleep(1, loop=loop)\nshould be replaced with this:\nasync def foo():\n    await asyncio.sleep(1)\nIf foo() was specifically designed not to run in the current thread\u2019s running event loop (e.g. running in another thread\u2019s event loop), consider using asyncio.run_coroutine_threadsafe() instead. (Contributed by Yurii Karabas, Andrew Svetlov, Yury Selivanov and Kyle Stanley in bpo-42392.)\nThe types.FunctionType constructor now inherits the current builtins if the globals dictionary has no \"__builtins__\" key, rather than using {\"None\": None} as builtins: same behavior as the eval() and exec() functions. Defining a function with def function(...): ... in Python is not affected; globals cannot be overridden with this syntax, as it also inherits the current builtins. (Contributed by Victor Stinner in bpo-42990.)\nChanges in the C API\u00b6\nThe C API functions PyParser_SimpleParseStringFlags, PyParser_SimpleParseStringFlagsFilename, PyParser_SimpleParseFileFlags, PyNode_Compile and the type used by these functions, struct _node, were removed due to the switch to the new PEG parser. Source should now be compiled directly to a code object using, for example, Py_CompileString(). The resulting code object can then be evaluated using, for example, PyEval_EvalCode(). Specifically:\nA call to PyParser_SimpleParseStringFlags followed by PyNode_Compile can be replaced by calling Py_CompileString().\nThere is no direct replacement for PyParser_SimpleParseFileFlags. To compile code from a FILE * argument, you will need to read the file in C and pass the resulting buffer to Py_CompileString().\nTo compile a file given a char * filename, explicitly open the file, read it and compile the result.
One way to do this is using the io module with PyImport_ImportModule(), PyObject_CallMethod(), PyBytes_AsString() and Py_CompileString(), as sketched below. (Declarations and error handling are omitted.)\nio_module = PyImport_ImportModule("io");\nfileobject = PyObject_CallMethod(io_module, "open", "ss", filename, "rb");\nsource_bytes_object = PyObject_CallMethod(fileobject, "read", "");\nresult = PyObject_CallMethod(fileobject, "close", "");\nsource_buf = PyBytes_AsString(source_bytes_object);\ncode = Py_CompileString(source_buf, filename, Py_file_input);\nFor FrameObject objects, the f_lasti member now represents a wordcode offset instead of a simple offset into the bytecode string. This means that this number needs to be multiplied by 2 to be used with APIs that expect a byte offset instead (like PyCode_Addr2Line() for example). Notice as well that the f_lasti member of FrameObject objects is not considered stable: please use PyFrame_GetLineNumber() instead.\nCPython bytecode changes\u00b6\nThe MAKE_FUNCTION instruction now accepts either a dict or a tuple of strings as the function\u2019s annotations. (Contributed by Yurii Karabas and Inada Naoki in bpo-42202.)\nBuild Changes\u00b6\nPEP 644: Python now requires OpenSSL 1.1.1 or newer. OpenSSL 1.0.2 is no longer supported. (Contributed by Christian Heimes in bpo-43669.)\nThe C99 functions snprintf() and vsnprintf() are now required to build Python. (Contributed by Victor Stinner in bpo-36020.)\nsqlite3 requires SQLite 3.7.15 or higher. (Contributed by Sergey Fedoseev and Erlend E. Aasland in bpo-40744 and bpo-40810.)\nThe atexit module must now always be built as a built-in module. (Contributed by Victor Stinner in bpo-42639.)\nAdd --disable-test-modules option to the configure script: don\u2019t build nor install test modules. (Contributed by Xavier de Gaye, Thomas Petazzoni and Peixing Xin in bpo-27640.)\nAdd --with-wheel-pkg-dir=PATH option to the ./configure script.
If specified, theensurepip\nmodule looks forsetuptools\nandpip\nwheel packages in this directory: if both are present, these wheel packages are used instead of ensurepip bundled wheel packages.Some Linux distribution packaging policies recommend against bundling dependencies. For example, Fedora installs wheel packages in the\n/usr/share/python-wheels/\ndirectory and don\u2019t install theensurepip._bundled\npackage.(Contributed by Victor Stinner in bpo-42856.)\nAdd a new\nconfigure --without-static-libpython option\nto not build thelibpythonMAJOR.MINOR.a\nstatic library and not install thepython.o\nobject file.(Contributed by Victor Stinner in bpo-43103.)\nThe\nconfigure\nscript now uses thepkg-config\nutility, if available, to detect the location of Tcl/Tk headers and libraries. As before, those locations can be explicitly specified with the--with-tcltk-includes\nand--with-tcltk-libs\nconfiguration options. (Contributed by Manolis Stamatogiannakis in bpo-42603.)Add\n--with-openssl-rpath\noption toconfigure\nscript. The option simplifies building Python with a custom OpenSSL installation, e.g../configure --with-openssl=/path/to/openssl --with-openssl-rpath=auto\n. (Contributed by Christian Heimes in bpo-43466.)\nC API Changes\u00b6\nPEP 652: Maintaining the Stable ABI\u00b6\nThe Stable ABI (Application Binary Interface) for extension modules or embedding Python is now explicitly defined. C API Stability describes C API and ABI stability guarantees along with best practices for using the Stable ABI.\nNew Features\u00b6\nThe result of\nPyNumber_Index()\nnow always has exact typeint\n. Previously, the result could have been an instance of a subclass ofint\n. (Contributed by Serhiy Storchaka in bpo-40792.)Add a new\norig_argv\nmember to thePyConfig\nstructure: the list of the original command line arguments passed to the Python executable. 
(Contributed by Victor Stinner in bpo-23427.)The\nPyDateTime_DATE_GET_TZINFO()\nandPyDateTime_TIME_GET_TZINFO()\nmacros have been added for accessing thetzinfo\nattributes ofdatetime.datetime\nanddatetime.time\nobjects. (Contributed by Zackery Spytz in bpo-30155.)Add a\nPyCodec_Unregister()\nfunction to unregister a codec search function. (Contributed by Hai Shi in bpo-41842.)The\nPyIter_Send()\nfunction was added to allow sending value into iterator without raisingStopIteration\nexception. (Contributed by Vladimir Matveev in bpo-41756.)Add\nPyUnicode_AsUTF8AndSize()\nto the limited C API. (Contributed by Alex Gaynor in bpo-41784.)Add\nPyModule_AddObjectRef()\nfunction: similar toPyModule_AddObject()\nbut don\u2019t steal a reference to the value on success. (Contributed by Victor Stinner in bpo-1635741.)Add\nPy_NewRef()\nandPy_XNewRef()\nfunctions to increment the reference count of an object and return the object. (Contributed by Victor Stinner in bpo-42262.)The\nPyType_FromSpecWithBases()\nandPyType_FromModuleAndSpec()\nfunctions now accept a single class as the bases argument. (Contributed by Serhiy Storchaka in bpo-42423.)The\nPyType_FromModuleAndSpec()\nfunction now accepts NULLtp_doc\nslot. (Contributed by Hai Shi in bpo-41832.)The\nPyType_GetSlot()\nfunction can accept static types. (Contributed by Hai Shi and Petr Viktorin in bpo-41073.)Add a new\nPySet_CheckExact()\nfunction to the C-API to check if an object is an instance ofset\nbut not an instance of a subtype. (Contributed by Pablo Galindo in bpo-43277.)Add\nPyErr_SetInterruptEx()\nwhich allows passing a signal number to simulate. (Contributed by Antoine Pitrou in bpo-43356.)The limited C API is now supported if Python is built in debug mode (if the\nPy_DEBUG\nmacro is defined). 
In the limited C API, thePy_INCREF()\nandPy_DECREF()\nfunctions are now implemented as opaque function calls, rather than accessing directly thePyObject.ob_refcnt\nmember, if Python is built in debug mode and thePy_LIMITED_API\nmacro targets Python 3.10 or newer. It became possible to support the limited C API in debug mode because thePyObject\nstructure is the same in release and debug mode since Python 3.8 (see bpo-36465).The limited C API is still not supported in the\n--with-trace-refs\nspecial build (Py_TRACE_REFS\nmacro). (Contributed by Victor Stinner in bpo-43688.)Add the\nPy_Is(x, y)\nfunction to test if the x object is the y object, the same asx is y\nin Python. Add also thePy_IsNone()\n,Py_IsTrue()\n,Py_IsFalse()\nfunctions to test if an object is, respectively, theNone\nsingleton, theTrue\nsingleton or theFalse\nsingleton. (Contributed by Victor Stinner in bpo-43753.)Add new functions to control the garbage collector from C code:\nPyGC_Enable()\n,PyGC_Disable()\n,PyGC_IsEnabled()\n. These functions allow to activate, deactivate and query the state of the garbage collector from C code without having to import thegc\nmodule.Add a new\nPy_TPFLAGS_DISALLOW_INSTANTIATION\ntype flag to disallow creating type instances. (Contributed by Victor Stinner in bpo-43916.)Add a new\nPy_TPFLAGS_IMMUTABLETYPE\ntype flag for creating immutable type objects: type attributes cannot be set nor deleted. (Contributed by Victor Stinner and Erlend E. Aasland in bpo-43908.)\nPorting to Python 3.10\u00b6\nThe\nPY_SSIZE_T_CLEAN\nmacro must now be defined to usePyArg_ParseTuple()\nandPy_BuildValue()\nformats which use#\n:es#\n,et#\n,s#\n,u#\n,y#\n,z#\n,U#\nandZ#\n. See Parsing arguments and building values and PEP 353. (Contributed by Victor Stinner in bpo-40943.)Since\nPy_REFCNT()\nis changed to the inline static function,Py_REFCNT(obj) = new_refcnt\nmust be replaced withPy_SET_REFCNT(obj, new_refcnt)\n: seePy_SET_REFCNT()\n(available since Python 3.9). 
For backward compatibility, this macro can be used:\n#if PY_VERSION_HEX < 0x030900A4\n#  define Py_SET_REFCNT(obj, refcnt) ((Py_REFCNT(obj) = (refcnt)), (void)0)\n#endif\n(Contributed by Victor Stinner in bpo-39573.)\nCalling PyDict_GetItem() without the GIL held had been allowed for historical reasons. It is no longer allowed. (Contributed by Victor Stinner in bpo-40839.)\nPyUnicode_FromUnicode(NULL, size) and PyUnicode_FromStringAndSize(NULL, size) now raise DeprecationWarning. Use PyUnicode_New() to allocate a Unicode object without initial data. (Contributed by Inada Naoki in bpo-36346.)\nThe private _PyUnicode_Name_CAPI structure of the PyCapsule API unicodedata.ucnhash_CAPI has been moved to the internal C API. (Contributed by Victor Stinner in bpo-42157.)\nThe Py_GetPath(), Py_GetPrefix(), Py_GetExecPrefix(), Py_GetProgramFullPath(), Py_GetPythonHome() and Py_GetProgramName() functions now return NULL if called before Py_Initialize() (before Python is initialized). Use the new Python Initialization Configuration API to get the Python Path Configuration. (Contributed by Victor Stinner in bpo-42260.)\nThe PyList_SET_ITEM(), PyTuple_SET_ITEM() and PyCell_SET() macros can no longer be used as l-values or r-values. For example, x = PyList_SET_ITEM(a, b, c) and PyList_SET_ITEM(a, b, c) = x now fail with a compiler error. This prevents bugs like an if (PyList_SET_ITEM(a, b, c) < 0) ... test. (Contributed by Zackery Spytz and Victor Stinner in bpo-30459.)\nThe non-limited API files odictobject.h, parser_interface.h, picklebufobject.h, pyarena.h, pyctype.h, pydebug.h, pyfpe.h, and pytime.h have been moved to the Include/cpython directory. These files must not be included directly, as they are already included in Python.h; see Include Files. If they have been included directly, consider including Python.h instead. (Contributed by Nicholas Sim in bpo-35134.)\nUse the Py_TPFLAGS_IMMUTABLETYPE type flag to create immutable type objects.
Do not rely onPy_TPFLAGS_HEAPTYPE\nto decide if a type object is mutable or not; check ifPy_TPFLAGS_IMMUTABLETYPE\nis set instead. (Contributed by Victor Stinner and Erlend E. Aasland in bpo-43908.)The undocumented function\nPy_FrozenMain\nhas been removed from the limited API. The function is mainly useful for custom builds of Python. (Contributed by Petr Viktorin in bpo-26241.)\nDeprecated\u00b6\nThe\nPyUnicode_InternImmortal()\nfunction is now deprecated and will be removed in Python 3.12: usePyUnicode_InternInPlace()\ninstead. (Contributed by Victor Stinner in bpo-41692.)\nRemoved\u00b6\nRemoved\nPy_UNICODE_str*\nfunctions manipulatingPy_UNICODE*\nstrings. (Contributed by Inada Naoki in bpo-41123.)Py_UNICODE_strlen\n: usePyUnicode_GetLength()\norPyUnicode_GET_LENGTH\nPy_UNICODE_strcat\n: usePyUnicode_CopyCharacters()\norPyUnicode_FromFormat()\nPy_UNICODE_strcpy\n,Py_UNICODE_strncpy\n: usePyUnicode_CopyCharacters()\norPyUnicode_Substring()\nPy_UNICODE_strcmp\n: usePyUnicode_Compare()\nPy_UNICODE_strncmp\n: usePyUnicode_Tailmatch()\nPy_UNICODE_strchr\n,Py_UNICODE_strrchr\n: usePyUnicode_FindChar()\nRemoved\nPyUnicode_GetMax()\n. Please migrate to new (PEP 393) APIs. (Contributed by Inada Naoki in bpo-41103.)Removed\nPyLong_FromUnicode()\n. Please migrate toPyLong_FromUnicodeObject()\n. (Contributed by Inada Naoki in bpo-41103.)Removed\nPyUnicode_AsUnicodeCopy()\n. Please usePyUnicode_AsUCS4Copy()\norPyUnicode_AsWideCharString()\n(Contributed by Inada Naoki in bpo-41103.)Removed\n_Py_CheckRecursionLimit\nvariable: it has been replaced byceval.recursion_limit\nof thePyInterpreterState\nstructure. (Contributed by Victor Stinner in bpo-41834.)Removed undocumented macros\nPy_ALLOW_RECURSION\nandPy_END_ALLOW_RECURSION\nand therecursion_critical\nfield of thePyInterpreterState\nstructure. (Contributed by Serhiy Storchaka in bpo-41936.)Removed the undocumented\nPyOS_InitInterrupts()\nfunction. 
Initializing Python already implicitly installs signal handlers: seePyConfig.install_signal_handlers\n. (Contributed by Victor Stinner in bpo-41713.)Remove the\nPyAST_Validate()\nfunction. It is no longer possible to build a AST object (mod_ty\ntype) with the public C API. The function was already excluded from the limited C API (PEP 384). (Contributed by Victor Stinner in bpo-43244.)Remove the\nsymtable.h\nheader file and the undocumented functions:PyST_GetScope()\nPySymtable_Build()\nPySymtable_BuildObject()\nPySymtable_Free()\nPy_SymtableString()\nPy_SymtableStringObject()\nThe\nPy_SymtableString()\nfunction was part the stable ABI by mistake but it could not be used, because thesymtable.h\nheader file was excluded from the limited C API.Use Python\nsymtable\nmodule instead. (Contributed by Victor Stinner in bpo-43244.)Remove\nPyOS_ReadlineFunctionPointer()\nfrom the limited C API headers and frompython3.dll\n, the library that provides the stable ABI on Windows. Since the function takes aFILE*\nargument, its ABI stability cannot be guaranteed. (Contributed by Petr Viktorin in bpo-43868.)Remove\nast.h\n,asdl.h\n, andPython-ast.h\nheader files. These functions were undocumented and excluded from the limited C API. Most names defined by these header files were not prefixed byPy\nand so could create names conflicts. For example,Python-ast.h\ndefined aYield\nmacro which was conflict with theYield\nname used by the Windows\nheader. Use the Pythonast\nmodule instead. (Contributed by Victor Stinner in bpo-43244.)Remove the compiler and parser functions using\nstruct _mod\ntype, because the public AST C API was removed:PyAST_Compile()\nPyAST_CompileEx()\nPyAST_CompileObject()\nPyFuture_FromAST()\nPyFuture_FromASTObject()\nPyParser_ASTFromFile()\nPyParser_ASTFromFileObject()\nPyParser_ASTFromFilename()\nPyParser_ASTFromString()\nPyParser_ASTFromStringObject()\nThese functions were undocumented and excluded from the limited C API. 
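As the section above notes, the removed PyAST_*/PyParser_AST* C functions have a Python-level replacement: parse with the ast module, then hand the tree to the built-in compile(). A minimal sketch:

```python
import ast

# Parse source text into an AST, then compile the AST to a code object,
# replacing the removed PyParser_ASTFromString + PyAST_Compile pairing.
tree = ast.parse("x = 1 + 2")
code = compile(tree, "<string>", "exec")

ns = {}
exec(code, ns)
print(ns["x"])  # 3
```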
(Contributed by Victor Stinner in bpo-43244.)\nRemove the pyarena.h header file with the functions:\nPyArena_New()\nPyArena_Free()\nPyArena_Malloc()\nPyArena_AddPyObject()\nThese functions were undocumented, excluded from the limited C API, and were only used internally by the compiler. (Contributed by Victor Stinner in bpo-43244.)\nThe PyThreadState.use_tracing member has been removed to optimize Python. (Contributed by Mark Shannon in bpo-43760.)\nNotable security feature in 3.10.7\u00b6\nConverting between int and str in bases other than 2 (binary), 4, 8 (octal), 16 (hexadecimal), or 32, such as base 10 (decimal), now raises a ValueError if the number of digits in string form is above a limit, to avoid potential denial of service attacks due to the algorithmic complexity. This is a mitigation for CVE 2020-10735. This limit can be configured or disabled by environment variable, command line flag, or sys APIs. See the integer string conversion length limitation documentation. The default limit is 4300 digits in string form.\nNotable security feature in 3.10.8\u00b6\nThe deprecated mailcap module now refuses to inject unsafe text (filenames, MIME types, parameters) into shell commands. Instead of using such text, it will warn and act as if a match was not found (or for test commands, as if the test failed). (Contributed by Petr Viktorin in gh-98966.)\nNotable changes in 3.10.12\u00b6\ntarfile\u00b6\nThe extraction methods in tarfile, and shutil.unpack_archive(), have a new filter argument that allows limiting tar features that may be surprising or dangerous, such as creating files outside the destination directory. See Extraction filters for details. In Python 3.12, use without the filter argument will show a DeprecationWarning. In Python 3.14, the default will switch to 'data'.
(Contributed by Petr Viktorin in PEP 706.)", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 17974}
{"url": "https://docs.python.org/3/whatsnew/3.11.html", "title": "What\u2019s New In Python 3.11", "content": "What\u2019s New In Python
3.11\u00b6\n- Editor:\nPablo Galindo Salgado\nThis article explains the new features in Python 3.11, compared to 3.10. Python 3.11 was released on October 24, 2022. For full details, see the changelog.\nSummary \u2013 Release highlights\u00b6\nPython 3.11 is between 10-60% faster than Python 3.10. On average, we measured a 1.25x speedup on the standard benchmark suite. See Faster CPython for details.\nNew syntax features:\nNew built-in features:\nNew standard library modules:\nInterpreter improvements:\nNew\n-P\ncommand line option andPYTHONSAFEPATH\nenvironment variable to disable automatically prepending potentially unsafe paths tosys.path\nNew typing features:\nImportant deprecations, removals and restrictions:\nPEP 594: Many legacy standard library modules have been deprecated and will be removed in Python 3.13\nNew Features\u00b6\nPEP 657: Fine-grained error locations in tracebacks\u00b6\nWhen printing tracebacks, the interpreter will now point to the exact expression that caused the error, instead of just the line. For example:\nTraceback (most recent call last):\nFile \"distance.py\", line 11, in \nprint(manhattan_distance(p1, p2))\n^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"distance.py\", line 6, in manhattan_distance\nreturn abs(point_1.x - point_2.x) + abs(point_1.y - point_2.y)\n^^^^^^^^^\nAttributeError: 'NoneType' object has no attribute 'x'\nPrevious versions of the interpreter would point to just the line, making it\nambiguous which object was None\n. 
These enhanced errors can also be helpful\nwhen dealing with deeply nested dict\nobjects and multiple function calls:\nTraceback (most recent call last):\nFile \"query.py\", line 37, in \nmagic_arithmetic('foo')\nFile \"query.py\", line 18, in magic_arithmetic\nreturn add_counts(x) / 25\n^^^^^^^^^^^^^\nFile \"query.py\", line 24, in add_counts\nreturn 25 + query_user(user1) + query_user(user2)\n^^^^^^^^^^^^^^^^^\nFile \"query.py\", line 32, in query_user\nreturn 1 + query_count(db, response['a']['b']['c']['user'], retry=True)\n~~~~~~~~~~~~~~~~~~^^^^^\nTypeError: 'NoneType' object is not subscriptable\nAs well as complex arithmetic expressions:\nTraceback (most recent call last):\nFile \"calculation.py\", line 54, in \nresult = (x / y / z) * (a / b / c)\n~~~~~~^~~\nZeroDivisionError: division by zero\nAdditionally, the information used by the enhanced traceback feature is made available via a general API, that can be used to correlate bytecode instructions with source code location. This information can be retrieved using:\nThe\ncodeobject.co_positions()\nmethod in Python.The\nPyCode_Addr2Location()\nfunction in the C API.\nSee PEP 657 for more details. 
(Contributed by Pablo Galindo, Batuhan Taskaya and Ammar Askar in bpo-43950.)\nNote\nThis feature requires storing column positions in Code Objects,\nwhich may result in a small increase in interpreter memory usage\nand disk usage for compiled Python files.\nTo avoid storing the extra information\nand deactivate printing the extra traceback information,\nuse the -X no_debug_ranges\ncommand line option\nor the PYTHONNODEBUGRANGES\nenvironment variable.\nPEP 654: Exception Groups and except*\n\u00b6\nPEP 654 introduces language features that enable a program\nto raise and handle multiple unrelated exceptions simultaneously.\nThe builtin types ExceptionGroup\nand BaseExceptionGroup\nmake it possible to group exceptions and raise them together,\nand the new except*\nsyntax generalizes\nexcept\nto match subgroups of exception groups.\nSee PEP 654 for more details.\n(Contributed by Irit Katriel in bpo-45292. PEP written by Irit Katriel, Yury Selivanov and Guido van Rossum.)\nPEP 678: Exceptions can be enriched with notes\u00b6\nThe add_note()\nmethod is added to BaseException\n.\nIt can be used to enrich exceptions with context information\nthat is not available at the time when the exception is raised.\nThe added notes appear in the default traceback.\nSee PEP 678 for more details.\n(Contributed by Irit Katriel in bpo-45607. PEP written by Zac Hatfield-Dodds.)\nWindows py.exe\nlauncher improvements\u00b6\nThe copy of the Python install manager included with Python 3.11 has been significantly\nupdated. It now supports company/tag syntax as defined in PEP 514 using the\n-V:/\nargument instead of the limited -.\n.\nThis allows launching distributions other than PythonCore\n,\nthe one hosted on python.org.\nWhen using -V:\nselectors, either company or tag can be omitted, but all\ninstalls will be searched. 
For example, -V:OtherPython/\nwill select the\n\u201cbest\u201d tag registered for OtherPython\n, while -V:3.11\nor -V:/3.11\nwill select the \u201cbest\u201d distribution with tag 3.11\n.\nWhen using the legacy -\n, -.\n,\n--\nor -.-\narguments,\nall existing behaviour should be preserved from past versions,\nand only releases from PythonCore\nwill be selected.\nHowever, the -64\nsuffix now implies \u201cnot 32-bit\u201d (not necessarily x86-64),\nas there are multiple supported 64-bit platforms.\n32-bit runtimes are detected by checking the runtime\u2019s tag for a -32\nsuffix.\nAll releases of Python since 3.5 have included this in their 32-bit builds.\nOther Language Changes\u00b6\nStarred unpacking expressions can now be used in\nfor\nstatements. (See bpo-46725 for more details.)Asynchronous comprehensions are now allowed inside comprehensions in asynchronous functions. Outer comprehensions implicitly become asynchronous in this case. (Contributed by Serhiy Storchaka in bpo-33346.)\nA\nTypeError\nis now raised instead of anAttributeError\ninwith\nstatements andcontextlib.ExitStack.enter_context()\nfor objects that do not support the context manager protocol, and inasync with\nstatements andcontextlib.AsyncExitStack.enter_async_context()\nfor objects not supporting the asynchronous context manager protocol. (Contributed by Serhiy Storchaka in bpo-12022 and bpo-44471.)Added\nobject.__getstate__()\n, which provides the default implementation of the__getstate__()\nmethod.copy\ning andpickle\ning instances of subclasses of builtin typesbytearray\n,set\n,frozenset\n,collections.OrderedDict\n,collections.deque\n,weakref.WeakSet\n, anddatetime.tzinfo\nnow copies and pickles instance attributes implemented as slots. This change has an unintended side effect: It trips up a small minority of existing Python projects not expectingobject.__getstate__()\nto exist. See the later comments on gh-70766 for discussions of what workarounds such code may need. 
(Contributed by Serhiy Storchaka in bpo-26579.)\nAdded a\n-P\ncommand line option and aPYTHONSAFEPATH\nenvironment variable, which disable the automatic prepending tosys.path\nof the script\u2019s directory when running a script, or the current directory when using-c\nand-m\n. This ensures only stdlib and installed modules are picked up byimport\n, and avoids unintentionally or maliciously shadowing modules with those in a local (and typically user-writable) directory. (Contributed by Victor Stinner in gh-57684.)A\n\"z\"\noption was added to the Format Specification Mini-Language that coerces negative to positive zero after rounding to the format precision. See PEP 682 for more details. (Contributed by John Belmonte in gh-90153.)Bytes are no longer accepted on\nsys.path\n. Support broke sometime between Python 3.2 and 3.6, with no one noticing until after Python 3.10.0 was released. In addition, bringing back support would be problematic due to interactions between-b\nandsys.path_importer_cache\nwhen there is a mixture ofstr\nandbytes\nkeys. (Contributed by Thomas Grainger in gh-91181.)\nOther CPython Implementation Changes\u00b6\nThe special methods\n__complex__()\nforcomplex\nand__bytes__()\nforbytes\nare implemented to support thetyping.SupportsComplex\nandtyping.SupportsBytes\nprotocols. (Contributed by Mark Dickinson and Donghee Na in bpo-24234.)siphash13\nis added as a new internal hashing algorithm. It has similar security properties assiphash24\n, but it is slightly faster for long inputs.str\n,bytes\n, and some other types now use it as the default algorithm forhash()\n. PEP 552 hash-based .pyc files now usesiphash13\ntoo. (Contributed by Inada Naoki in bpo-29410.)When an active exception is re-raised by a\nraise\nstatement with no parameters, the traceback attached to this exception is now alwayssys.exc_info()[1].__traceback__\n. This means that changes made to the traceback in the currentexcept\nclause are reflected in the re-raised exception. 
(Contributed by Irit Katriel in bpo-45711.)\nThe interpreter state\u2019s representation of handled exceptions (aka exc_info or _PyErr_StackItem) now only has the exc_value field; exc_type and exc_traceback have been removed, as they can be derived from exc_value. (Contributed by Irit Katriel in bpo-45711.)\nA new command line option, AppendPath, has been added for the Windows installer. It behaves similarly to PrependPath, but appends the install and scripts directories instead of prepending them. (Contributed by Bastian Neuburger in bpo-44934.)\nThe PyConfig.module_search_paths_set field must now be set to 1 for initialization to use PyConfig.module_search_paths to initialize sys.path. Otherwise, initialization will recalculate the path and replace any values added to module_search_paths.\nThe output of the --help option now fits in 50 lines/80 columns. Information about Python environment variables and -X options is now available using the respective --help-env and --help-xoptions flags, and with the new --help-all. (Contributed by \u00c9ric Araujo in bpo-46142.)\nConverting between int and str in bases other than 2 (binary), 4, 8 (octal), 16 (hexadecimal), or 32, such as base 10 (decimal), now raises a ValueError if the number of digits in string form is above a limit, to avoid potential denial of service attacks due to the algorithmic complexity. This is a mitigation for CVE 2020-10735. This limit can be configured or disabled by environment variable, command line flag, or sys APIs. See the integer string conversion length limitation documentation. The default limit is 4300 digits in string form.\nNew Modules\u00b6\nImproved Modules\u00b6\nasyncio\u00b6\nAdded the TaskGroup class, an asynchronous context manager holding a group of tasks that will wait for all of them upon exit. For new code this is recommended over using create_task() and gather() directly.
(Contributed by Yury Selivanov and others in gh-90908.)
- Added timeout(), an asynchronous context manager for setting a timeout on asynchronous operations. For new code this is recommended over using wait_for() directly. (Contributed by Andrew Svetlov in gh-90927.)
- Added the Runner class, which exposes the machinery used by run(). (Contributed by Andrew Svetlov in gh-91218.)
- Added the Barrier class to the synchronization primitives in the asyncio library, and the related BrokenBarrierError exception. (Contributed by Yves Duprat and Andrew Svetlov in gh-87518.)
- Added keyword argument all_errors to asyncio.loop.create_connection() so that multiple connection errors can be raised as an ExceptionGroup.
- Added the asyncio.StreamWriter.start_tls() method for upgrading existing stream-based connections to TLS. (Contributed by Ian Good in bpo-34975.)
- Added raw datagram socket functions to the event loop: sock_sendto(), sock_recvfrom() and sock_recvfrom_into(). These have implementations in SelectorEventLoop and ProactorEventLoop. (Contributed by Alex Grönholm in bpo-46805.)
- Added cancelling() and uncancel() methods to Task. These are primarily intended for internal use, notably by TaskGroup.

contextlib¶
dataclasses¶
datetime¶
- Add datetime.UTC, a convenience alias for datetime.timezone.utc. (Contributed by Kabir Kwatra in gh-91973.)
- datetime.date.fromisoformat(), datetime.time.fromisoformat() and datetime.datetime.fromisoformat() can now be used to parse most ISO 8601 formats (barring only those that support fractional hours and minutes).
(Contributed by Paul Ganssle in gh-80010.)

enum¶
- Renamed EnumMeta to EnumType (EnumMeta kept as an alias).
- Added StrEnum, with members that can be used as (and must be) strings.
- Added ReprEnum, which only modifies the __repr__() of members while returning their literal values (rather than names) for __str__() and __format__() (used by str(), format() and f-strings).
- Changed Enum.__format__() (the default for format(), str.format() and f-strings) to always produce the same result as Enum.__str__(): for enums inheriting from ReprEnum it will be the member's value; for all other enums it will be the enum and member name (e.g. Color.RED).
- Added a new boundary class parameter to Flag enums and the FlagBoundary enum with its options, to control how to handle out-of-range flag values.
- Added the verify() enum decorator and the EnumCheck enum with its options, to check enum classes against several specific constraints.
- Added the member() and nonmember() decorators, to ensure the decorated object is/is not converted to an enum member.
- Added the property() decorator, which works like property() except for enums. Use this instead of types.DynamicClassAttribute().
- Added the global_enum() enum decorator, which adjusts __repr__() and __str__() to show values as members of their module rather than the enum class. For example, 're.ASCII' for the ASCII member of re.RegexFlag rather than 'RegexFlag.ASCII'.
- Enhanced Flag to support len(), iteration and in/not in on its members. For example, the following now works: len(AFlag(3)) == 2 and list(AFlag(3)) == [AFlag.ONE, AFlag.TWO]
- Changed Enum and Flag so that members are now defined before __init_subclass__() is called; dir() now includes methods, etc., from mixed-in data types.
- Changed Flag to only consider primary values (powers of two) canonical, while composite values (3, 6, 10, etc.) are considered aliases; inverted flags are coerced to their positive equivalent.

fcntl¶
On FreeBSD, the F_DUP2FD and F_DUP2FD_CLOEXEC flags are now supported; the former is equivalent to dup2 usage, while the latter additionally sets the FD_CLOEXEC flag.

fractions¶
functools¶
functools.singledispatch() now supports types.UnionType and typing.Union as annotations to the dispatch argument:

>>> from functools import singledispatch
>>> @singledispatch
... def fun(arg, verbose=False):
...     if verbose:
...         print("Let me just say,", end=" ")
...     print(arg)
...
>>> @fun.register
... def _(arg: int | float, verbose=False):
...     if verbose:
...         print("Strength in numbers, eh?", end=" ")
...     print(arg)
...
>>> from typing import Union
>>> @fun.register
... def _(arg: Union[list, set], verbose=False):
...     if verbose:
...         print("Enumerate this:")
...     for i, elem in enumerate(arg):
...         print(i, elem)
...

(Contributed by Yurii Karabas in bpo-46014.)

gzip¶
The gzip.compress() function is now faster when used with the mtime=0 argument, as it delegates the compression entirely to a single zlib.compress() operation. There is one side effect of this change: the gzip file header contains an "OS" byte. That was traditionally always set to a value of 255, representing "unknown", by the gzip module. Now, when using compress() with mtime=0, it may be set to a different value by the underlying zlib C library Python was linked against. (See gh-112346 for details on the side effect.)

hashlib¶
- hashlib.blake2b() and hashlib.blake2s() now prefer libb2 over Python's vendored copy. (Contributed by Christian Heimes in bpo-47095.)
- The internal _sha3 module with SHA3 and SHAKE algorithms now uses tiny_sha3 instead of the Keccak Code Package to reduce code and binary size. The hashlib module prefers optimized SHA3 and SHAKE implementations from OpenSSL. The change affects only installations without OpenSSL support.
(Contributed by Christian Heimes in bpo-47098.)
- Add hashlib.file_digest(), a helper function for efficient hashing of files or file-like objects. (Contributed by Christian Heimes in gh-89313.)

IDLE and idlelib¶
inspect¶
- Add getmembers_static() to return all members without triggering dynamic lookup via the descriptor protocol. (Contributed by Weipeng Hong in bpo-30533.)
- Add ismethodwrapper() for checking if the type of an object is a MethodWrapperType. (Contributed by Hakan Çelik in bpo-29418.)
- Change the frame-related functions in the inspect module to return new FrameInfo and Traceback class instances (backwards compatible with the previous named tuple-like interfaces) that include the extended PEP 657 position information (end line number, column and end column). (Contributed by Pablo Galindo in gh-88116.)

locale¶
Add locale.getencoding() to get the current locale encoding. It is similar to locale.getpreferredencoding(False) but ignores the Python UTF-8 Mode.

logging¶
- Added getLevelNamesMapping() to return a mapping from logging level names (e.g. 'CRITICAL') to the values of their corresponding Logging Levels (e.g. 50, by default). (Contributed by Andrei Kulakovin in gh-88024.)
- Added a createSocket() method to SysLogHandler, to match SocketHandler.createSocket(). It is called automatically during handler initialization and when emitting an event, if there is no active socket. (Contributed by Kirill Pinchuk in gh-88457.)

math¶
- Add math.exp2(): return 2 raised to the power of x. (Contributed by Gideon Mitchell in bpo-45917.)
- Add math.cbrt(): return the cube root of x. (Contributed by Ajith Ramachandran in bpo-44357.)
- The behaviour of two math.pow() corner cases was changed, for consistency with the IEEE 754 specification. The operations math.pow(0.0, -math.inf) and math.pow(-0.0, -math.inf) now return inf. Previously they raised ValueError.
(Contributed by Mark Dickinson in bpo-44339.)
- The math.nan value is now always available. (Contributed by Victor Stinner in bpo-46917.)

operator¶
A new function operator.call has been added, such that operator.call(obj, *args, **kwargs) == obj(*args, **kwargs). (Contributed by Antony Lee in bpo-44019.)

os¶
On Windows, os.urandom() now uses BCryptGenRandom(), instead of CryptGenRandom() which is deprecated. (Contributed by Donghee Na in bpo-44611.)

pathlib¶
re¶
Atomic grouping ((?>...)) and possessive quantifiers (*+, ++, ?+, {m,n}+) are now supported in regular expressions. (Contributed by Jeffrey C. Jacobs and Serhiy Storchaka in bpo-433030.)

shutil¶
Add optional parameter dir_fd in shutil.rmtree(). (Contributed by Serhiy Storchaka in bpo-46245.)

socket¶
- Add CAN Socket support for NetBSD. (Contributed by Thomas Klausner in bpo-30512.)
- create_connection() has an option to raise, in case of failure to connect, an ExceptionGroup containing all errors instead of only raising the last error. (Contributed by Irit Katriel in bpo-29980.)

sqlite3¶
- You can now disable the authorizer by passing None to set_authorizer(). (Contributed by Erlend E. Aasland in bpo-44491.)
- Collation names passed to create_collation() can now contain any Unicode character. Collation names with invalid characters now raise UnicodeEncodeError instead of sqlite3.ProgrammingError. (Contributed by Erlend E. Aasland in bpo-44688.)
- sqlite3 exceptions now include the SQLite extended error code as sqlite_errorcode and the SQLite error name as sqlite_errorname. (Contributed by Aviv Palivoda, Daniel Shahaf, and Erlend E. Aasland in bpo-16379 and bpo-24139.)
- Add setlimit() and getlimit() to sqlite3.Connection for setting and getting SQLite limits on a per-connection basis. (Contributed by Erlend E. Aasland in bpo-45243.)
- sqlite3 now sets sqlite3.threadsafety based on the default threading mode the underlying SQLite library has been compiled with.
(Contributed by Erlend E. Aasland in bpo-45613.)
- sqlite3 C callbacks now use unraisable exceptions if callback tracebacks are enabled. Users can now register an unraisable hook handler to improve their debug experience. (Contributed by Erlend E. Aasland in bpo-45828.)
- Fetch across rollback no longer raises InterfaceError. Instead we leave it to the SQLite library to handle these cases. (Contributed by Erlend E. Aasland in bpo-44092.)
- Add serialize() and deserialize() to sqlite3.Connection for serializing and deserializing databases. (Contributed by Erlend E. Aasland in bpo-41930.)
- Add create_window_function() to sqlite3.Connection for creating aggregate window functions. (Contributed by Erlend E. Aasland in bpo-34916.)
- Add blobopen() to sqlite3.Connection. sqlite3.Blob allows incremental I/O operations on blobs. (Contributed by Aviv Palivoda and Erlend E. Aasland in bpo-24905.)

string¶
Add get_identifiers() and is_valid() to string.Template, which respectively return all valid placeholders, and whether any invalid placeholders are present. (Contributed by Ben Kehoe in gh-90465.)

sys¶
- sys.exc_info() now derives the type and traceback fields from the value (the exception instance), so when an exception is modified while it is being handled, the changes are reflected in the results of subsequent calls to exc_info(). (Contributed by Irit Katriel in bpo-45711.)
- Add sys.exception(), which returns the active exception instance (equivalent to sys.exc_info()[1]). (Contributed by Irit Katriel in bpo-46328.)
- Add the sys.flags.safe_path flag. (Contributed by Victor Stinner in gh-57684.)

sysconfig¶
Three new installation schemes (posix_venv, nt_venv and venv) were added and are used when Python creates new virtual environments or when it is running from a virtual environment.
The first two schemes (posix_venv and nt_venv) are OS-specific for non-Windows and Windows; the venv scheme is essentially an alias to one of them according to the OS Python runs on. This is useful for downstream distributors who modify sysconfig.get_preferred_scheme(). Third party code that creates new virtual environments should use the new venv installation scheme to determine the paths, as does venv. (Contributed by Miro Hrončok in bpo-45413.)

tempfile¶
SpooledTemporaryFile objects now fully implement the methods of io.BufferedIOBase or io.TextIOBase (depending on file mode). This lets them work correctly with APIs that expect file-like objects, such as compression modules. (Contributed by Carey Metcalfe in gh-70363.)

threading¶
On Unix, if the sem_clockwait() function is available in the C library (glibc 2.30 and newer), the threading.Lock.acquire() method now uses the monotonic clock (time.CLOCK_MONOTONIC) for the timeout, rather than the system clock (time.CLOCK_REALTIME), so as not to be affected by system clock changes. (Contributed by Victor Stinner in bpo-41710.)

time¶
- On Unix, time.sleep() now uses the clock_nanosleep() or nanosleep() function, if available, which has a resolution of 1 nanosecond (10⁻⁹ seconds), rather than using select(), which has a resolution of 1 microsecond (10⁻⁶ seconds). (Contributed by Benjamin Szőke and Victor Stinner in bpo-21302.)
- On Windows 8.1 and newer, time.sleep() now uses a waitable timer based on high-resolution timers, which has a resolution of 100 nanoseconds (10⁻⁷ seconds). Previously, it had a resolution of 1 millisecond (10⁻³ seconds). (Contributed by Benjamin Szőke, Donghee Na, Eryk Sun and Victor Stinner in bpo-21302 and bpo-45429.)

tkinter¶
Added method info_patchlevel(), which returns the exact version of the Tcl library as a named tuple similar to sys.version_info.
(Contributed by Serhiy Storchaka in gh-91827.)

traceback¶
- Add traceback.StackSummary.format_frame_summary() to allow users to override which frames appear in the traceback, and how they are formatted. (Contributed by Ammar Askar in bpo-44569.)
- Add traceback.TracebackException.print(), which prints the formatted TracebackException instance to a file. (Contributed by Irit Katriel in bpo-33809.)

typing¶
For major changes, see New Features Related to Type Hints.
- Add typing.assert_never() and typing.Never. typing.assert_never() is useful for asking a type checker to confirm that a line of code is not reachable. At runtime, it raises an AssertionError. (Contributed by Jelle Zijlstra in gh-90633.)
- Add typing.reveal_type(). This is useful for asking a type checker what type it has inferred for a given expression. At runtime it prints the type of the received value. (Contributed by Jelle Zijlstra in gh-90572.)
- Add typing.assert_type(). This is useful for asking a type checker to confirm that the type it has inferred for a given expression matches the given type. At runtime it simply returns the received value. (Contributed by Jelle Zijlstra in gh-90638.)
- typing.TypedDict types can now be generic. (Contributed by Samodya Abeysiriwardane in gh-89026.)
- NamedTuple types can now be generic. (Contributed by Serhiy Storchaka in bpo-43923.)
- Allow subclassing of typing.Any. This is useful for avoiding type checker errors related to highly dynamic classes, such as mocks. (Contributed by Shantanu Jain in gh-91154.)
- The typing.final() decorator now sets the __final__ attribute on the decorated object. (Contributed by Jelle Zijlstra in gh-90500.)
- The typing.get_overloads() function can be used for introspecting the overloads of a function. typing.clear_overloads() can be used to clear all registered overloads of a function. (Contributed by Jelle Zijlstra in gh-89263.)
- The __init__() method of Protocol subclasses is now preserved.
(Contributed by Adrian Garcia Badarasco in gh-88970.)
- The representation of empty tuple types (Tuple[()]) is simplified. This affects introspection, e.g. get_args(Tuple[()]) now evaluates to () instead of ((),). (Contributed by Serhiy Storchaka in gh-91137.)
- Loosen runtime requirements for type annotations by removing the callable check in the private typing._type_check function. (Contributed by Gregory Beauregard in gh-90802.)
- typing.get_type_hints() now supports evaluating strings as forward references in PEP 585 generic aliases. (Contributed by Niklas Rosenstein in gh-85542.)
- typing.get_type_hints() no longer adds Optional to parameters with None as a default. (Contributed by Nikita Sobolev in gh-90353.)
- typing.get_type_hints() now supports evaluating bare stringified ClassVar annotations. (Contributed by Gregory Beauregard in gh-90711.)
- typing.no_type_check() no longer modifies external classes and functions. It also now correctly marks classmethods as not to be type checked. (Contributed by Nikita Sobolev in gh-90729.)

unicodedata¶
The Unicode database has been updated to version 14.0.0. (Contributed by Benjamin Peterson in bpo-45190.)

unittest¶
Added methods enterContext() and enterClassContext() of class TestCase, method enterAsyncContext() of class IsolatedAsyncioTestCase, and function unittest.enterModuleContext(). (Contributed by Serhiy Storchaka in bpo-45046.)

venv¶
When new Python virtual environments are created, the venv sysconfig installation scheme is used to determine the paths inside the environment. When Python runs in a virtual environment, the same installation scheme is the default. That means that downstream distributors can change the default sysconfig install scheme without changing the behavior of virtual environments. Third party code that also creates new virtual environments should do the same.
(Contributed by Miro Hrončok in bpo-45413.)

warnings¶
warnings.catch_warnings() now accepts arguments for warnings.simplefilter(), providing a more concise way to locally ignore warnings or convert them to errors. (Contributed by Zac Hatfield-Dodds in bpo-47074.)

zipfile¶
- Added support for specifying member name encoding for reading metadata in a ZipFile's directory and file headers. (Contributed by Stephen J. Turnbull and Serhiy Storchaka in bpo-28080.)
- Added ZipFile.mkdir() for creating new directories inside ZIP archives. (Contributed by Sam Ezeh in gh-49083.)
- Added stem, suffix and suffixes to zipfile.Path. (Contributed by Miguel Brito in gh-88261.)

Optimizations¶
This section covers specific optimizations independent of the Faster CPython project, which is covered in its own section.
- The compiler now optimizes simple printf-style % formatting on string literals containing only the format codes %s, %r and %a, making it as fast as a corresponding f-string expression. (Contributed by Serhiy Storchaka in bpo-28307.)
- Integer division (//) is better tuned for optimization by compilers. It is now around 20% faster on x86-64 when dividing an int by a value smaller than 2**30. (Contributed by Gregory P. Smith and Tim Peters in gh-90564.)
- sum() is now nearly 30% faster for integers smaller than 2**30. (Contributed by Stefan Behnel in gh-68264.)
- Resizing lists is streamlined for the common case, speeding up list.append() by ≈15% and simple list comprehensions by up to 20-30%. (Contributed by Dennis Sweeney in gh-91165.)
- Dictionaries don't store hash values when all keys are Unicode objects, decreasing dict size. For example, sys.getsizeof(dict.fromkeys("abcdefg")) is reduced from 352 bytes to 272 bytes (23% smaller) on 64-bit platforms.
(Contributed by Inada Naoki in bpo-46845.)
- Using asyncio.DatagramProtocol is now orders of magnitude faster when transferring large files over UDP, with speeds over 100 times higher for a ≈60 MiB file. (Contributed by msoxzw in gh-91487.)
- The math functions comb() and perm() are now ≈10 times faster for large arguments (with a larger speedup for larger k). (Contributed by Serhiy Storchaka in bpo-37295.)
- The statistics functions mean(), variance() and stdev() now consume iterators in one pass rather than converting them to a list first. This is twice as fast and can save substantial memory. (Contributed by Raymond Hettinger in gh-90415.)
- unicodedata.normalize() now normalizes pure-ASCII strings in constant time. (Contributed by Donghee Na in bpo-44987.)

Faster CPython¶
CPython 3.11 is an average of 25% faster than CPython 3.10 as measured with the pyperformance benchmark suite, when compiled with GCC on Ubuntu Linux. Depending on your workload, the overall speedup could be 10-60%.
This project focuses on two major areas in Python: Faster Startup and Faster Runtime. Optimizations not covered by this project are listed separately under Optimizations.

Faster Startup¶
Frozen imports / Static code objects¶
Python caches bytecode in the __pycache__ directory to speed up module loading.
Previously in 3.10, Python module execution looked like this:
Read __pycache__ -> Unmarshal -> Heap allocated code object -> Evaluate
In Python 3.11, the core modules essential for Python startup are "frozen". This means that their code objects (and bytecode) are statically allocated by the interpreter. This reduces the steps in the module execution process to:
Statically allocated code object -> Evaluate
Interpreter startup is now 10-15% faster in Python 3.11.
This has a big impact on short-running programs using Python.
(Contributed by Eric Snow, Guido van Rossum and Kumar Aditya in many issues.)

Faster Runtime¶
Cheaper, lazy Python frames¶
Python frames, holding execution information, are created whenever Python calls a Python function. The following are new frame optimizations:
- Streamlined the frame creation process.
- Avoided memory allocation by generously re-using frame space on the C stack.
- Streamlined the internal frame struct to contain only essential information. Frames previously held extra debugging and memory management information.
Old-style frame objects are now created only when requested by debuggers or by Python introspection functions such as sys._getframe() and inspect.currentframe(). For most user code, no frame objects are created at all. As a result, nearly all Python function calls have sped up significantly. We measured a 3-7% speedup in pyperformance.
(Contributed by Mark Shannon in bpo-44590.)

Inlined Python function calls¶
During a Python function call, Python will call an evaluating C function to interpret that function's code. This effectively limits pure Python recursion to what's safe for the C stack.
In 3.11, when CPython detects Python code calling another Python function, it sets up a new frame, and "jumps" to the new code inside the new frame. This avoids calling the C interpreting function altogether.
Most Python function calls now consume no C stack space, speeding them up.
In simple recursive functions like fibonacci or factorial, we observed a 1.7x speedup. This also means recursive functions can recurse significantly deeper (if the user increases the recursion limit with sys.setrecursionlimit()). We measured a 1-3% improvement in pyperformance.
(Contributed by Pablo Galindo and Mark Shannon in bpo-45256.)

PEP 659: Specializing Adaptive Interpreter¶
PEP 659 is one of the key parts of the Faster CPython project.
The general idea is that while Python is a dynamic language, most code has regions where objects and types rarely change. This concept is known as type stability.
At runtime, Python will try to look for common patterns and type stability in the executing code. Python will then replace the current operation with a more specialized one. This specialized operation uses fast paths available only to those use cases/types, which generally outperform their generic counterparts. This also brings in another concept called inline caching, where Python caches the results of expensive operations directly in the bytecode.
The specializer will also combine certain common instruction pairs into one superinstruction, reducing the overhead during execution.
Python will only specialize when it sees code that is "hot" (executed multiple times). This prevents Python from wasting time on run-once code. Python can also de-specialize when code is too dynamic or when the use changes. Specialization is attempted periodically, and specialization attempts are not too expensive, allowing specialization to adapt to new circumstances.
(PEP written by Mark Shannon, with ideas inspired by Stefan Brunthaler. See PEP 659 for more information. Implementation by Mark Shannon and Brandt Bucher, with additional help from Irit Katriel and Dennis Sweeney.)

| Operation | Form | Specialization | Operation speedup (up to) | Contributor(s) |
|---|---|---|---|---|
| Binary operations | x + x, x * x, x - x | Binary add, multiply and subtract for common types such as int, float and str take custom fast paths for their underlying types. | 10% | Mark Shannon, Donghee Na, Brandt Bucher, Dennis Sweeney |
| Subscript | a[i] | Subscripting container types such as list, tuple and dict directly index the underlying data structures. Subscripting custom __getitem__() is also inlined, similarly to inlined Python function calls. | 10-25% | Irit Katriel, Mark Shannon |
| Store subscript | a[i] = z | Similar to the subscripting specialization above. | 10-25% | Dennis Sweeney |
| Calls | f(arg), C(arg) | Calls to common builtin (C) functions and types such as len() and str directly call their underlying C version. This avoids going through the internal calling convention. | 20% | Mark Shannon, Ken Jin |
| Load global variable | print, len | The object's index in the globals/builtins namespace is cached. Loading globals and builtins requires zero namespace lookups. | — | Mark Shannon |
| Load attribute | o.attr | Similar to loading global variables. The attribute's index inside the class/object's namespace is cached. In most cases, attribute loading will require zero namespace lookups. | — | Mark Shannon |
| Load methods for call | o.meth() | The actual address of the method is cached. Method loading now has no namespace lookups, even for classes with long inheritance chains. | 10-20% | Ken Jin, Mark Shannon |
| Store attribute | o.attr = z | Similar to the load attribute optimization. | 2% in pyperformance | Mark Shannon |
| Unpack Sequence | *seq | Specialized for common containers such as list and tuple. Avoids the internal calling convention. | 8% | Brandt Bucher |

Misc¶
- Objects now require less memory due to lazily created object namespaces. Their namespace dictionaries now also share keys more freely. (Contributed by Mark Shannon in bpo-45340 and bpo-40116.)
- "Zero-cost" exceptions are implemented, eliminating the cost of try statements when no exception is raised. (Contributed by Mark Shannon in bpo-40222.)
- A more concise representation of exceptions in the interpreter reduced the time required for catching an exception by about 10%. (Contributed by Irit Katriel in bpo-45711.)
- re's regular expression matching engine has been partially refactored, and now uses computed gotos (or "threaded code") on supported platforms. As a result, Python 3.11 executes the pyperformance regular expression benchmarks up to 10% faster than Python 3.10. (Contributed by Brandt Bucher in gh-91404.)

FAQ¶
How should I write my code to utilize these speedups?¶
Write Pythonic code that follows common best practices; you don't have to change your code.
The Faster CPython project optimizes for common code patterns we observe.

Will CPython 3.11 use more memory?¶
Maybe not; we don't expect memory use to exceed that of 3.10 by more than 20%. This is offset by memory optimizations for frame objects and object dictionaries, as mentioned above.

I don't see any speedups in my workload. Why?¶
Certain code won't have noticeable benefits. If your code spends most of its time on I/O operations, or already does most of its computation in a C extension library like NumPy, there won't be significant speedups. This project currently benefits pure-Python workloads the most.
Furthermore, the pyperformance figures are a geometric mean. Even within the pyperformance benchmarks, certain benchmarks have slowed down slightly, while others have sped up by nearly 2x!

Is there a JIT compiler?¶
No. We're still exploring other optimizations.

About¶
Faster CPython explores optimizations for CPython. The main team is funded by Microsoft to work on this full-time. Pablo Galindo Salgado is also funded by Bloomberg LP to work on the project part-time.
Finally, many contributors are volunteers from the community.

CPython bytecode changes¶
The bytecode now contains inline cache entries, which take the form of the newly-added CACHE instructions. Many opcodes expect to be followed by an exact number of caches, and instruct the interpreter to skip over them at runtime. Populated caches can look like arbitrary instructions, so great care should be taken when reading or modifying raw, adaptive bytecode containing quickened data.

New opcodes¶
- ASYNC_GEN_WRAP, RETURN_GENERATOR and SEND, used in generators and coroutines.
- COPY_FREE_VARS, which avoids needing special caller-side code for closures.
- JUMP_BACKWARD_NO_INTERRUPT, for use in certain loops where handling interrupts is undesirable.
- MAKE_CELL, to create Cell Objects.
- CHECK_EG_MATCH and PREP_RERAISE_STAR, to handle the new exception groups and except* added in PEP 654.
- PUSH_EXC_INFO, for use in exception handlers.
- RESUME, a no-op, for internal tracing, debugging and optimization checks.

Replaced opcodes¶

| Replaced Opcode(s) | New Opcode(s) | Notes |
|---|---|---|
| BINARY_*, INPLACE_* | BINARY_OP | Replaced all numeric binary/in-place opcodes with a single opcode |
| CALL_FUNCTION, CALL_FUNCTION_KW, CALL_METHOD | CALL, KW_NAMES, PRECALL, PUSH_NULL | Decouples argument shifting for methods from handling of keyword arguments; allows better specialization of calls |
| DUP_TOP, DUP_TOP_TWO, ROT_TWO, ROT_THREE, ROT_FOUR, ROT_N | COPY, SWAP | Stack manipulation instructions |
| JUMP_IF_NOT_EXC_MATCH | CHECK_EXC_MATCH | Now performs the check but doesn't jump |
| JUMP_ABSOLUTE, POP_JUMP_IF_FALSE, POP_JUMP_IF_TRUE | JUMP_BACKWARD, POP_JUMP_BACKWARD_IF_*, POP_JUMP_FORWARD_IF_* | See [3] |
| SETUP_WITH, SETUP_ASYNC_WITH | BEFORE_WITH | |

All jump opcodes are now relative, including the existing JUMP_IF_TRUE_OR_POP and JUMP_IF_FALSE_OR_POP. The argument is now an offset from the current instruction rather than an absolute location.

Changed/removed opcodes¶
- Changed MATCH_CLASS and MATCH_KEYS to no longer push an additional boolean value to indicate success/failure. Instead, None is pushed on failure in place of the tuple of extracted values.
- Changed opcodes that work with exceptions to reflect them now being represented as one item on the stack instead of three (see gh-89874).
- Removed COPY_DICT_WITHOUT_KEYS, GEN_START, POP_BLOCK, SETUP_FINALLY and YIELD_FROM.

Deprecated¶
This section lists Python APIs that have been deprecated in Python 3.11. Deprecated C APIs are listed separately.

Language/Builtins¶
- Chaining classmethod descriptors (introduced in bpo-19072) is now deprecated. It can no longer be used to wrap other descriptors such as property. The core design of this feature was flawed and caused a number of downstream problems. To "pass-through" a classmethod, consider using the __wrapped__ attribute that was added in Python 3.10. (Contributed by Raymond Hettinger in gh-89519.)
- Octal escapes in string and bytes literals with values larger than 0o377 (255 in decimal) now produce a DeprecationWarning. In a future Python version, they will raise a SyntaxWarning and eventually a SyntaxError. (Contributed by Serhiy Storchaka in gh-81548.)
- The delegation of int() to __trunc__() is now deprecated. Calling int(a) when type(a) implements __trunc__() but not __int__() or __index__() now raises a DeprecationWarning. (Contributed by Zackery Spytz in bpo-44977.)

Modules¶
PEP 594 led to the deprecations of the following modules, slated for removal in Python 3.13:
aifc, audioop, cgi, cgitb, chunk, crypt, imghdr, mailcap, msilib, nis, nntplib, ossaudiodev, pipes, sndhdr, spwd, sunau, telnetlib, uu, xdrlib
(Contributed by Brett Cannon in bpo-47061 and Victor Stinner in gh-68966.)
- The asynchat, asyncore and smtpd modules have been deprecated since at least Python 3.6. Their documentation and deprecation warnings have now been updated to note they will be removed in Python 3.12.
(Contributed by Hugo van Kemenade in bpo-47022.)
- The lib2to3 package and 2to3 tool are now deprecated and may not be able to parse Python 3.10 or newer. See PEP 617, introducing the new PEG parser, for details. (Contributed by Victor Stinner in bpo-40360.)
- Undocumented modules sre_compile, sre_constants and sre_parse are now deprecated. (Contributed by Serhiy Storchaka in bpo-47152.)

Standard Library¶
- The following have been deprecated in configparser since Python 3.2. Their deprecation warnings have now been updated to note they will be removed in Python 3.12: the configparser.SafeConfigParser class, the configparser.ParsingError.filename property, and the configparser.RawConfigParser.readfp() method. (Contributed by Hugo van Kemenade in bpo-45173.)
- configparser.LegacyInterpolation has been deprecated in the docstring since Python 3.2, and is not listed in the configparser documentation. It now emits a DeprecationWarning and will be removed in Python 3.13. Use configparser.BasicInterpolation or configparser.ExtendedInterpolation instead. (Contributed by Hugo van Kemenade in bpo-46607.)
- The older set of importlib.resources functions were deprecated in favor of the replacements added in Python 3.9 and will be removed in a future Python version, due to not supporting resources located within package subdirectories: importlib.resources.contents(), importlib.resources.is_resource(), importlib.resources.open_binary(), importlib.resources.open_text(), importlib.resources.read_binary(), importlib.resources.read_text(), importlib.resources.path()
- The locale.getdefaultlocale() function is deprecated and will be removed in Python 3.15. Use the locale.setlocale(), locale.getpreferredencoding(False) and locale.getlocale() functions instead. (Contributed by Victor Stinner in gh-90817.)
- The locale.resetlocale() function is deprecated and will be removed in Python 3.13. Use locale.setlocale(locale.LC_ALL, "") instead.
(Contributed by Victor Stinner in gh-90817.)Stricter rules will now be applied for numerical group references and group names in regular expressions. Only sequences of ASCII digits will now be accepted as a numerical reference, and the group name in bytes patterns and replacement strings can only contain ASCII letters, digits and underscores. For now, a deprecation warning is raised for syntax violating these rules. (Contributed by Serhiy Storchaka in gh-91760.)In the re module, the re.template() function and the corresponding re.TEMPLATE and re.T flags are deprecated, as they were undocumented and lacked an obvious purpose. They will be removed in Python 3.13. (Contributed by Serhiy Storchaka and Miro Hron\u010dok in gh-92728.)turtle.settiltangle() has been deprecated since Python 3.1; it now emits a deprecation warning and will be removed in Python 3.13. Use turtle.tiltangle() instead (it was earlier incorrectly marked as deprecated, and its docstring is now corrected). (Contributed by Hugo van Kemenade in bpo-45837.)typing.Text, which exists solely to provide compatibility support between Python 2 and Python 3 code, is now deprecated. Its removal is currently unplanned, but users are encouraged to use str instead wherever possible. (Contributed by Alex Waygood in gh-92332.)The keyword argument syntax for constructing typing.TypedDict types is now deprecated. Support will be removed in Python 3.13. (Contributed by Jingchen Ye in gh-90224.)webbrowser.MacOSX is deprecated and will be removed in Python 3.13. It is untested, undocumented, and not used by webbrowser itself. 
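The deprecated keyword-argument form of TypedDict and its still-supported functional replacement can be sketched as follows (the type name is hypothetical):

```python
from typing import TypedDict

# Deprecated in 3.11: Point = TypedDict("Point", x=int, y=int)
# Preferred functional syntax, which also supports keys that are
# not valid Python identifiers:
Point = TypedDict("Point", {"x": int, "y": int})

p = Point(x=1, y=2)
print(p)
```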
(Contributed by Donghee Na in bpo-42255.)The behavior of returning a value from a\nTestCase\nandIsolatedAsyncioTestCase\ntest methods (other than the defaultNone\nvalue) is now deprecated.Deprecated the following not-formally-documented\nunittest\nfunctions, scheduled for removal in Python 3.13:unittest.findTestCases()\nunittest.makeSuite()\nunittest.getTestCaseNames()\nUse\nTestLoader\nmethods instead:(Contributed by Erlend E. Aasland in bpo-5846.)\nunittest.TestProgram.usageExit()\nis marked deprecated, to be removed in 3.13. (Contributed by Carlos Dam\u00e1zio in gh-67048.)\nPending Removal in Python 3.12\u00b6\nThe following Python APIs have been deprecated in earlier Python releases, and will be removed in Python 3.12.\nC APIs pending removal are listed separately.\nThe\nasynchat\nmoduleThe\nasyncore\nmoduleThe\nimp\nmoduleThe\ntyping.io\nnamespaceThe\ntyping.re\nnamespacecgi.log()\nimportlib.find_loader()\nimportlib.abc.Loader.module_repr()\nimportlib.abc.MetaPathFinder.find_module()\nimportlib.abc.PathEntryFinder.find_loader()\nimportlib.abc.PathEntryFinder.find_module()\nimportlib.machinery.BuiltinImporter.find_module()\nimportlib.machinery.BuiltinLoader.module_repr()\nimportlib.machinery.FileFinder.find_loader()\nimportlib.machinery.FileFinder.find_module()\nimportlib.machinery.FrozenImporter.find_module()\nimportlib.machinery.FrozenLoader.module_repr()\nimportlib.machinery.PathFinder.find_module()\nimportlib.machinery.WindowsRegistryFinder.find_module()\nimportlib.util.module_for_loader()\nimportlib.util.set_loader_wrapper()\nimportlib.util.set_package_wrapper()\npkgutil.ImpImporter\npkgutil.ImpLoader\npathlib.Path.link_to()\nsqlite3.enable_shared_cache()\nsqlite3.OptimizedUnicode()\nPYTHONTHREADDEBUG\nenvironment variableThe following deprecated aliases in\nunittest\n:Deprecated alias\nMethod Name\nDeprecated 
in\nfailUnless\n3.1\nfailIf\n3.1\nfailUnlessEqual\n3.1\nfailIfEqual\n3.1\nfailUnlessAlmostEqual\n3.1\nfailIfAlmostEqual\n3.1\nfailUnlessRaises\n3.1\nassert_\n3.2\nassertEquals\n3.2\nassertNotEquals\n3.2\nassertAlmostEquals\n3.2\nassertNotAlmostEquals\n3.2\nassertRegexpMatches\n3.2\nassertRaisesRegexp\n3.2\nassertNotRegexpMatches\n3.5\nRemoved\u00b6\nThis section lists Python APIs that have been removed in Python 3.11.\nRemoved C APIs are listed separately.\nRemoved the\n@asyncio.coroutine()\ndecorator enabling legacy generator-based coroutines to be compatible withasync\n/await\ncode. The function has been deprecated since Python 3.8 and the removal was initially scheduled for Python 3.10. Useasync def\ninstead. (Contributed by Illia Volochii in bpo-43216.)Removed\nasyncio.coroutines.CoroWrapper\nused for wrapping legacy generator-based coroutine objects in the debug mode. (Contributed by Illia Volochii in bpo-43216.)Due to significant security concerns, the reuse_address parameter of\nasyncio.loop.create_datagram_endpoint()\n, disabled in Python 3.9, is now entirely removed. This is because of the behavior of the socket optionSO_REUSEADDR\nin UDP. (Contributed by Hugo van Kemenade in bpo-45129.)Removed the\nbinhex\nmodule, deprecated in Python 3.9. Also removed the related, similarly-deprecatedbinascii\nfunctions:binascii.a2b_hqx()\nbinascii.b2a_hqx()\nbinascii.rlecode_hqx()\nbinascii.rldecode_hqx()\nThe\nbinascii.crc_hqx()\nfunction remains available.(Contributed by Victor Stinner in bpo-45085.)\nRemoved the\ndistutils\nbdist_msi\ncommand deprecated in Python 3.9. Usebdist_wheel\n(wheel packages) instead. (Contributed by Hugo van Kemenade in bpo-45124.)Removed the\n__getitem__()\nmethods ofxml.dom.pulldom.DOMEventStream\n,wsgiref.util.FileWrapper\nandfileinput.FileInput\n, deprecated since Python 3.9. (Contributed by Hugo van Kemenade in bpo-45132.)Removed the deprecated\ngettext\nfunctionslgettext()\n,ldgettext()\n,lngettext()\nandldngettext()\n. 
Also removed thebind_textdomain_codeset()\nfunction, theNullTranslations.output_charset()\nandNullTranslations.set_output_charset()\nmethods, and the codeset parameter oftranslation()\nandinstall()\n, since they are only used for thel*gettext()\nfunctions. (Contributed by Donghee Na and Serhiy Storchaka in bpo-44235.)Removed from the\ninspect\nmodule:The\ngetargspec()\nfunction, deprecated since Python 3.0; useinspect.signature()\norinspect.getfullargspec()\ninstead.The\nformatargspec()\nfunction, deprecated since Python 3.5; use theinspect.signature()\nfunction or theinspect.Signature\nobject directly.The undocumented\nSignature.from_builtin()\nandSignature.from_function()\nmethods, deprecated since Python 3.5; use theSignature.from_callable()\nmethod instead.\n(Contributed by Hugo van Kemenade in bpo-45320.)\nRemoved the\n__class_getitem__()\nmethod frompathlib.PurePath\n, because it was not used and added by mistake in previous versions. (Contributed by Nikita Sobolev in bpo-46483.)Removed the\nMailmanProxy\nclass in thesmtpd\nmodule, as it is unusable without the externalmailman\npackage. (Contributed by Donghee Na in bpo-35800.)Removed the deprecated\nsplit()\nmethod of_tkinter.TkappType\n. (Contributed by Erlend E. Aasland in bpo-38371.)Removed namespace package support from\nunittest\ndiscovery. It was introduced in Python 3.4 but has been broken since Python 3.7. (Contributed by Inada Naoki in bpo-23882.)Removed the undocumented private\nfloat.__set_format__()\nmethod, previously known asfloat.__setformat__()\nin Python 3.7. Its docstring said: \u201cYou probably don\u2019t want to use this function. 
It exists mainly to be used in Python\u2019s test suite.\u201d (Contributed by Victor Stinner in bpo-46852.)The\n--experimental-isolated-subinterpreters\nconfigure flag (and correspondingEXPERIMENTAL_ISOLATED_SUBINTERPRETERS\nmacro) have been removed.Pynche \u2014 The Pythonically Natural Color and Hue Editor \u2014 has been moved out of\nTools/scripts\nand is being developed independently from the Python source tree.\nPorting to Python 3.11\u00b6\nThis section lists previously described changes and other bugfixes in the Python API that may require changes to your Python code.\nPorting notes for the C API are listed separately.\nopen()\n,io.open()\n,codecs.open()\nandfileinput.FileInput\nno longer accept'U'\n(\u201cuniversal newline\u201d) in the file mode. In Python 3, \u201cuniversal newline\u201d mode is used by default whenever a file is opened in text mode, and the'U'\nflag has been deprecated since Python 3.3. The newline parameter to these functions controls how universal newlines work. (Contributed by Victor Stinner in bpo-37330.)ast.AST\nnode positions are now validated when provided tocompile()\nand other related functions. If invalid positions are detected, aValueError\nwill be raised. (Contributed by Pablo Galindo in gh-93351)Prohibited passing non-\nconcurrent.futures.ThreadPoolExecutor\nexecutors toasyncio.loop.set_default_executor()\nfollowing a deprecation in Python 3.8. (Contributed by Illia Volochii in bpo-43234.)calendar\n: Thecalendar.LocaleTextCalendar\nandcalendar.LocaleHTMLCalendar\nclasses now uselocale.getlocale()\n, instead of usinglocale.getdefaultlocale()\n, if no locale is specified. (Contributed by Victor Stinner in bpo-46659.)The\npdb\nmodule now reads the.pdbrc\nconfiguration file with the'UTF-8'\nencoding. 
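The newline parameter that replaces the removed 'U' mode (mentioned above) can be sketched like this; the temporary file is only for illustration:

```python
import os
import tempfile

# Write raw bytes containing mixed line endings.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"a\r\nb\n")

# Default text mode already applies universal newlines: both
# "\r\n" and "\n" are translated to "\n" on reading.
with open(path) as f:
    translated = f.read()

# newline="" disables translation and keeps endings verbatim.
with open(path, newline="") as f:
    verbatim = f.read()

os.remove(path)
print(repr(translated), repr(verbatim))
```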
(Contributed by Srinivas Reddy Thatiparthy (\u0c36\u0c4d\u0c30\u0c40\u0c28\u0c3f\u0c35\u0c3e\u0c38\u0c4d \u0c30\u0c46\u0c21\u0c4d\u0c21\u0c3f \u0c24\u0c3e\u0c1f\u0c3f\u0c2a\u0c30\u0c4d\u0c24\u0c3f) in bpo-41137.)The population parameter of random.sample() must be a sequence, and automatic conversion of sets to lists is no longer supported. Also, if the sample size is larger than the population size, a ValueError is raised. (Contributed by Raymond Hettinger in bpo-40465.)The random optional parameter of random.shuffle() was removed. It was previously an arbitrary random function to use for the shuffle; now, random.random() (its previous default) will always be used.In re Regular Expression Syntax, global inline flags (e.g. (?i)) can now only be used at the start of regular expressions. Using them elsewhere has been deprecated since Python 3.6. (Contributed by Serhiy Storchaka in bpo-47066.)In the re module, several long-standing bugs were fixed that, in rare cases, could cause capture groups to get the wrong result, so the captured output may change in those cases. (Contributed by Ma Lin in bpo-35859.)\nBuild Changes\u00b6\nCPython now has PEP 11 Tier 3 support for cross compiling to the WebAssembly platforms Emscripten (wasm32-unknown-emscripten, i.e. Python in the browser) and WebAssembly System Interface (WASI) (wasm32-unknown-wasi). The effort is inspired by previous work like Pyodide. These platforms provide a limited subset of POSIX APIs; Python standard library features and modules related to networking, processes, threading, signals, mmap, and users/groups are not available or don\u2019t work. (Emscripten contributed by Christian Heimes and Ethan Smith in gh-84461 and WASI contributed by Christian Heimes in gh-90473; platforms promoted in gh-95085)Building CPython now requires:\nThe Py_NO_NAN macro has been removed. Since CPython now requires IEEE 754 floats, NaN values are always available. 
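The stricter random.sample() behavior described above can be sketched as:

```python
import random

random.seed(0)  # deterministic for the example
population = {"red", "green", "blue"}

# Passing the set directly now raises TypeError; convert it to a
# sequence first (sorted() gives a reproducible order).
sample = random.sample(sorted(population), 2)
print(sample)

# Requesting a sample larger than the population raises ValueError.
try:
    random.sample(["a", "b"], 5)
except ValueError as exc:
    print("rejected:", exc)
```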
(Contributed by Victor Stinner in bpo-46656.)The\ntkinter\npackage now requires Tcl/Tk version 8.5.12 or newer. (Contributed by Serhiy Storchaka in bpo-46996.)Build dependencies, compiler flags, and linker flags for most stdlib extension modules are now detected by configure. libffi, libnsl, libsqlite3, zlib, bzip2, liblzma, libcrypt, Tcl/Tk, and uuid flags are detected by pkg-config (when available).\ntkinter\nnow requires a pkg-config command to detect development settings for Tcl/Tk headers and libraries. (Contributed by Christian Heimes and Erlend Egeberg Aasland in bpo-45847, bpo-45747, and bpo-45763.)libpython is no longer linked against libcrypt. (Contributed by Mike Gilbert in bpo-45433.)\nCPython can now be built with the ThinLTO option via passing\nthin\nto--with-lto\n, i.e.--with-lto=thin\n. (Contributed by Donghee Na and Brett Holman in bpo-44340.)Freelists for object structs can now be disabled. A new configure option\n--without-freelists\ncan be used to disable all freelists except empty tuple singleton. (Contributed by Christian Heimes in bpo-45522.)Modules/Setup\nandModules/makesetup\nhave been improved and tied up. Extension modules can now be built throughmakesetup\n. All except some test modules can be linked statically into a main binary or library. (Contributed by Brett Cannon and Christian Heimes in bpo-45548, bpo-45570, bpo-45571, and bpo-43974.)Note\nUse the environment variables\nTCLTK_CFLAGS\nandTCLTK_LIBS\nto manually specify the location of Tcl/Tk headers and libraries. The configure options--with-tcltk-includes\nand--with-tcltk-libs\nhave been removed.On RHEL 7 and CentOS 7 the development packages do not provide\ntcl.pc\nandtk.pc\n; useTCLTK_LIBS=\"-ltk8.5 -ltkstub8.5 -ltcl8.5\"\n. The directoryMisc/rhel7\ncontains.pc\nfiles and instructions on how to build Python with RHEL 7\u2019s and CentOS 7\u2019s Tcl/Tk and OpenSSL.CPython will now use 30-bit digits by default for the Python\nint\nimplementation. 
Previously, the default was to use 30-bit digits on platforms withSIZEOF_VOID_P >= 8\n, and 15-bit digits otherwise. It\u2019s still possible to explicitly request use of 15-bit digits via either the--enable-big-digits\noption to the configure script or (for Windows) thePYLONG_BITS_IN_DIGIT\nvariable inPC/pyconfig.h\n, but this option may be removed at some point in the future. (Contributed by Mark Dickinson in bpo-45569.)\nC API Changes\u00b6\nNew Features\u00b6\nAdd a new\nPyType_GetName()\nfunction to get type\u2019s short name. (Contributed by Hai Shi in bpo-42035.)Add a new\nPyType_GetQualName()\nfunction to get type\u2019s qualified name. (Contributed by Hai Shi in bpo-42035.)Add new\nPyThreadState_EnterTracing()\nandPyThreadState_LeaveTracing()\nfunctions to the limited C API to suspend and resume tracing and profiling. (Contributed by Victor Stinner in bpo-43760.)Added the\nPy_Version\nconstant which bears the same value asPY_VERSION_HEX\n. (Contributed by Gabriele N. Tornetta in bpo-43931.)Py_buffer\nand APIs are now part of the limited API and the stable ABI:bf_getbuffer\nandbf_releasebuffer\ntype slots\n(Contributed by Christian Heimes in bpo-45459.)\nAdded the\nPyType_GetModuleByDef()\nfunction, used to get the module in which a method was defined, in cases where this information is not available directly (viaPyCMethod\n). (Contributed by Petr Viktorin in bpo-46613.)Add new functions to pack and unpack C double (serialize and deserialize):\nPyFloat_Pack2()\n,PyFloat_Pack4()\n,PyFloat_Pack8()\n,PyFloat_Unpack2()\n,PyFloat_Unpack4()\nandPyFloat_Unpack8()\n. (Contributed by Victor Stinner in bpo-46906.)Add new functions to get frame object attributes:\nPyFrame_GetBuiltins()\n,PyFrame_GetGenerator()\n,PyFrame_GetGlobals()\n,PyFrame_GetLasti()\n.Added two new functions to get and set the active exception instance:\nPyErr_GetHandledException()\nandPyErr_SetHandledException()\n. 
These are alternatives to PyErr_SetExcInfo() and PyErr_GetExcInfo() which work with the legacy 3-tuple representation of exceptions. (Contributed by Irit Katriel in bpo-46343.)Added the PyConfig.safe_path member. (Contributed by Victor Stinner in gh-57684.)\nPorting to Python 3.11\u00b6\nSome macros have been converted to static inline functions to avoid macro pitfalls. The change should be mostly transparent to users, as the replacement functions will cast their arguments to the expected types to avoid compiler warnings due to static type checks. However, when the limited C API is set to >=3.11, these casts are not done, and callers will need to cast arguments to their expected types. See PEP 670 for more details. (Contributed by Victor Stinner and Erlend E. Aasland in gh-89653.)\nPyErr_SetExcInfo() no longer uses the type and traceback arguments; the interpreter now derives those values from the exception instance (the value argument). The function still steals references of all three arguments. (Contributed by Irit Katriel in bpo-45711.)PyErr_GetExcInfo() now derives the type and traceback fields of the result from the exception instance (the value field). (Contributed by Irit Katriel in bpo-45711.)_frozen has a new is_package field to indicate whether or not the frozen module is a package. Previously, a negative value in the size field was the indicator. Now only non-negative values may be used for size. (Contributed by Kumar Aditya in bpo-46608.)_PyFrameEvalFunction() now takes _PyInterpreterFrame* as its second parameter, instead of PyFrameObject*. See PEP 523 for more details of how to use this function pointer type.PyCode_New() and PyCode_NewWithPosOnlyArgs() now take an additional exception_table argument. Using these functions should be avoided, if at all possible. 
To get a custom code object: create a code object using the compiler, then get a modified version with thereplace\nmethod.PyCodeObject\nno longer has theco_code\n,co_varnames\n,co_cellvars\nandco_freevars\nfields. Instead, usePyCode_GetCode()\n,PyCode_GetVarnames()\n,PyCode_GetCellvars()\nandPyCode_GetFreevars()\nrespectively to access them via the C API. (Contributed by Brandt Bucher in bpo-46841 and Ken Jin in gh-92154 and gh-94936.)The old trashcan macros (\nPy_TRASHCAN_SAFE_BEGIN\n/Py_TRASHCAN_SAFE_END\n) are now deprecated. They should be replaced by the new macrosPy_TRASHCAN_BEGIN\nandPy_TRASHCAN_END\n.A tp_dealloc function that has the old macros, such as:\nstatic void mytype_dealloc(mytype *p) { PyObject_GC_UnTrack(p); Py_TRASHCAN_SAFE_BEGIN(p); ... Py_TRASHCAN_SAFE_END }\nshould migrate to the new macros as follows:\nstatic void mytype_dealloc(mytype *p) { PyObject_GC_UnTrack(p); Py_TRASHCAN_BEGIN(p, mytype_dealloc) ... Py_TRASHCAN_END }\nNote that\nPy_TRASHCAN_BEGIN\nhas a second argument which should be the deallocation function it is in.To support older Python versions in the same codebase, you can define the following macros and use them throughout the code (credit: these were copied from the\nmypy\ncodebase):#if PY_VERSION_HEX >= 0x03080000 # define CPy_TRASHCAN_BEGIN(op, dealloc) Py_TRASHCAN_BEGIN(op, dealloc) # define CPy_TRASHCAN_END(op) Py_TRASHCAN_END #else # define CPy_TRASHCAN_BEGIN(op, dealloc) Py_TRASHCAN_SAFE_BEGIN(op) # define CPy_TRASHCAN_END(op) Py_TRASHCAN_SAFE_END(op) #endif\nThe\nPyType_Ready()\nfunction now raises an error if a type is defined with thePy_TPFLAGS_HAVE_GC\nflag set but has no traverse function (PyTypeObject.tp_traverse\n). (Contributed by Victor Stinner in bpo-44263.)Heap types with the\nPy_TPFLAGS_IMMUTABLETYPE\nflag can now inherit the PEP 590 vectorcall protocol. Previously, this was only possible for static types. (Contributed by Erlend E. 
Aasland in bpo-43908)Since Py_TYPE() is changed to an inline static function, Py_TYPE(obj) = new_type must be replaced with Py_SET_TYPE(obj, new_type): see the Py_SET_TYPE() function (available since Python 3.9). For backward compatibility, this macro can be used:#if PY_VERSION_HEX < 0x030900A4 && !defined(Py_SET_TYPE) static inline void _Py_SET_TYPE(PyObject *ob, PyTypeObject *type) { ob->ob_type = type; } #define Py_SET_TYPE(ob, type) _Py_SET_TYPE((PyObject*)(ob), type) #endif\n(Contributed by Victor Stinner in bpo-39573.)\nSince Py_SIZE() is changed to an inline static function, Py_SIZE(obj) = new_size must be replaced with Py_SET_SIZE(obj, new_size): see the Py_SET_SIZE() function (available since Python 3.9). For backward compatibility, this macro can be used:#if PY_VERSION_HEX < 0x030900A4 && !defined(Py_SET_SIZE) static inline void _Py_SET_SIZE(PyVarObject *ob, Py_ssize_t size) { ob->ob_size = size; } #define Py_SET_SIZE(ob, size) _Py_SET_SIZE((PyVarObject*)(ob), size) #endif\n(Contributed by Victor Stinner in bpo-39573.)\n\n<Python.h> no longer includes the header files <stdio.h>, <stdlib.h>, <errno.h> and <string.h> when the Py_LIMITED_API macro is set to 0x030b0000 (Python 3.11) or higher. C extensions should explicitly include the header files after #include <Python.h>. (Contributed by Victor Stinner in bpo-45434.)The non-limited API files cellobject.h, classobject.h, code.h, context.h, funcobject.h, genobject.h and longintrepr.h have been moved to the Include/cpython directory. Moreover, the eval.h header file was removed. These files must not be included directly, as they are already included in Python.h: Include Files. If they have been included directly, consider including Python.h instead. (Contributed by Victor Stinner in bpo-35134.)The PyUnicode_CHECK_INTERNED() macro has been excluded from the limited C API. It was never usable there, because it used internal structures which are not available in the limited C API. 
(Contributed by Victor Stinner in bpo-46007.)The following frame functions and type are now directly available with\n#include \n, it\u2019s no longer needed to add#include \n:(Contributed by Victor Stinner in gh-93937.)\nThe\nPyFrameObject\nstructure members have been removed from the public C API.While the documentation notes that the\nPyFrameObject\nfields are subject to change at any time, they have been stable for a long time and were used in several popular extensions.In Python 3.11, the frame struct was reorganized to allow performance optimizations. Some fields were removed entirely, as they were details of the old implementation.\nPyFrameObject\nfields:f_back\n: usePyFrame_GetBack()\n.f_blockstack\n: removed.f_builtins\n: usePyFrame_GetBuiltins()\n.f_code\n: usePyFrame_GetCode()\n.f_gen\n: usePyFrame_GetGenerator()\n.f_globals\n: usePyFrame_GetGlobals()\n.f_iblock\n: removed.f_lasti\n: usePyFrame_GetLasti()\n. Code usingf_lasti\nwithPyCode_Addr2Line()\nshould usePyFrame_GetLineNumber()\ninstead; it may be faster.f_lineno\n: usePyFrame_GetLineNumber()\nf_locals\n: usePyFrame_GetLocals()\n.f_stackdepth\n: removed.f_state\n: no public API (renamed tof_frame.f_state\n).f_trace\n: no public API.f_trace_lines\n: usePyObject_GetAttrString((PyObject*)frame, \"f_trace_lines\")\n.f_trace_opcodes\n: usePyObject_GetAttrString((PyObject*)frame, \"f_trace_opcodes\")\n.f_localsplus\n: no public API (renamed tof_frame.localsplus\n).f_valuestack\n: removed.\nThe Python frame object is now created lazily. A side effect is that the\nf_back\nmember must not be accessed directly, since its value is now also computed lazily. ThePyFrame_GetBack()\nfunction must be called instead.Debuggers that accessed the\nf_locals\ndirectly must callPyFrame_GetLocals()\ninstead. They no longer need to callPyFrame_FastToLocalsWithError()\norPyFrame_LocalsToFast()\n, in fact they should not call those functions. 
The necessary updating of the frame is now managed by the virtual machine.Code defining\nPyFrame_GetCode()\non Python 3.8 and older:#if PY_VERSION_HEX < 0x030900B1 static inline PyCodeObject* PyFrame_GetCode(PyFrameObject *frame) { Py_INCREF(frame->f_code); return frame->f_code; } #endif\nCode defining\nPyFrame_GetBack()\non Python 3.8 and older:#if PY_VERSION_HEX < 0x030900B1 static inline PyFrameObject* PyFrame_GetBack(PyFrameObject *frame) { Py_XINCREF(frame->f_back); return frame->f_back; } #endif\nOr use the pythoncapi_compat project to get these two functions on older Python versions.\nChanges of the\nPyThreadState\nstructure members:frame\n: removed, usePyThreadState_GetFrame()\n(function added to Python 3.9 by bpo-40429). Warning: the function returns a strong reference, need to callPy_XDECREF()\n.tracing\n: changed, usePyThreadState_EnterTracing()\nandPyThreadState_LeaveTracing()\n(functions added to Python 3.11 by bpo-43760).recursion_depth\n: removed, use(tstate->recursion_limit - tstate->recursion_remaining)\ninstead.stackcheck_counter\n: removed.\nCode defining\nPyThreadState_GetFrame()\non Python 3.8 and older:#if PY_VERSION_HEX < 0x030900B1 static inline PyFrameObject* PyThreadState_GetFrame(PyThreadState *tstate) { Py_XINCREF(tstate->frame); return tstate->frame; } #endif\nCode defining\nPyThreadState_EnterTracing()\nandPyThreadState_LeaveTracing()\non Python 3.10 and older:#if PY_VERSION_HEX < 0x030B00A2 static inline void PyThreadState_EnterTracing(PyThreadState *tstate) { tstate->tracing++; #if PY_VERSION_HEX >= 0x030A00A1 tstate->cframe->use_tracing = 0; #else tstate->use_tracing = 0; #endif } static inline void PyThreadState_LeaveTracing(PyThreadState *tstate) { int use_tracing = (tstate->c_tracefunc != NULL || tstate->c_profilefunc != NULL); tstate->tracing--; #if PY_VERSION_HEX >= 0x030A00A1 tstate->cframe->use_tracing = use_tracing; #else tstate->use_tracing = use_tracing; #endif } #endif\nOr use the pythoncapi-compat project to get these 
functions on older Python versions.\nDistributors are encouraged to build Python with the optimized Blake2 library libb2.\nThe PyConfig.module_search_paths_set field must now be set to 1 for initialization to use PyConfig.module_search_paths to initialize sys.path. Otherwise, initialization will recalculate the path and replace any values added to module_search_paths.PyConfig_Read() no longer calculates the initial search path, and will not fill any values into PyConfig.module_search_paths. To calculate default paths and then modify them, finish initialization and use PySys_GetObject() to retrieve sys.path as a Python list object and modify it directly.\nDeprecated\u00b6\nDeprecate the following functions to configure the Python initialization:\nPySys_AddWarnOptionUnicode()\nPySys_AddWarnOption()\nPySys_AddXOption()\nPySys_HasWarnOptions()\nPySys_SetArgvEx()\nPySys_SetArgv()\nPySys_SetPath()\nPy_SetPath()\nPy_SetProgramName()\nPy_SetPythonHome()\nPy_SetStandardStreamEncoding()\n_Py_SetProgramFullPath()\nUse the new PyConfig API of the Python Initialization Configuration instead (PEP 587). (Contributed by Victor Stinner in gh-88279.)Deprecate the ob_shash member of the PyBytesObject. Use PyObject_Hash() instead. (Contributed by Inada Naoki in bpo-46864.)\nPending Removal in Python 3.12\u00b6\nThe following C APIs have been deprecated in earlier Python releases, and will be removed in Python 3.12.\nPyUnicode_AS_DATA()\nPyUnicode_AS_UNICODE()\nPyUnicode_AsUnicodeAndSize()\nPyUnicode_AsUnicode()\nPyUnicode_FromUnicode()\nPyUnicode_GET_DATA_SIZE()\nPyUnicode_GET_SIZE()\nPyUnicode_GetSize()\nPyUnicode_IS_COMPACT()\nPyUnicode_IS_READY()\nPyUnicode_WSTR_LENGTH()\n_PyUnicode_AsUnicode()\nPyUnicode_WCHAR_KIND\nPyUnicode_InternImmortal()\nRemoved\u00b6\nPyFrame_BlockSetup() and PyFrame_BlockPop() have been removed. 
(Contributed by Mark Shannon in bpo-40222.)Remove the following math macros using the\nerrno\nvariable:Py_ADJUST_ERANGE1()\nPy_ADJUST_ERANGE2()\nPy_OVERFLOWED()\nPy_SET_ERANGE_IF_OVERFLOW()\nPy_SET_ERRNO_ON_MATH_ERROR()\n(Contributed by Victor Stinner in bpo-45412.)\nRemove\nPy_UNICODE_COPY()\nandPy_UNICODE_FILL()\nmacros, deprecated since Python 3.3. UsePyUnicode_CopyCharacters()\normemcpy()\n(wchar_t*\nstring), andPyUnicode_Fill()\nfunctions instead. (Contributed by Victor Stinner in bpo-41123.)Remove the\npystrhex.h\nheader file. It only contains private functions. C extensions should only include the main\nheader file. (Contributed by Victor Stinner in bpo-45434.)Remove the\nPy_FORCE_DOUBLE()\nmacro. It was used by thePy_IS_INFINITY()\nmacro. (Contributed by Victor Stinner in bpo-45440.)The following items are no longer available when\nPy_LIMITED_API\nis defined:the\nPy_MARSHAL_VERSION\nmacro\nThese are not part of the limited API.\n(Contributed by Victor Stinner in bpo-45474.)\nExclude\nPyWeakref_GET_OBJECT()\nfrom the limited C API. It never worked since thePyWeakReference\nstructure is opaque in the limited C API. (Contributed by Victor Stinner in bpo-35134.)Remove the\nPyHeapType_GET_MEMBERS()\nmacro. It was exposed in the public C API by mistake, it must only be used by Python internally. Use thePyTypeObject.tp_members\nmember instead. (Contributed by Victor Stinner in bpo-40170.)Remove the\nHAVE_PY_SET_53BIT_PRECISION\nmacro (moved to the internal C API). 
(Contributed by Victor Stinner in bpo-45412.)\nRemove the Py_UNICODE encoder APIs, as they have been deprecated since Python 3.3, are little used and are inefficient relative to the recommended alternatives.The removed functions are:\nPyUnicode_Encode()\nPyUnicode_EncodeASCII()\nPyUnicode_EncodeLatin1()\nPyUnicode_EncodeUTF7()\nPyUnicode_EncodeUTF8()\nPyUnicode_EncodeUTF16()\nPyUnicode_EncodeUTF32()\nPyUnicode_EncodeUnicodeEscape()\nPyUnicode_EncodeRawUnicodeEscape()\nPyUnicode_EncodeCharmap()\nPyUnicode_TranslateCharmap()\nPyUnicode_EncodeDecimal()\nPyUnicode_TransformDecimalToASCII()\nSee PEP 624 for details and migration guidance. (Contributed by Inada Naoki in bpo-44029.)\nNotable changes in 3.11.4\u00b6\ntarfile\u00b6\nThe extraction methods in tarfile, and shutil.unpack_archive(), have a new filter argument that allows limiting tar features that may be surprising or dangerous, such as creating files outside the destination directory. See Extraction filters for details. In Python 3.12, use without the filter argument will show a DeprecationWarning. In Python 3.14, the default will switch to 'data'. 
(Contributed by Petr Viktorin in PEP 706.)\nNotable changes in 3.11.5\u00b6\nOpenSSL\u00b6\nWindows builds and macOS installers from python.org now use OpenSSL 3.0.", "code_snippets": ["\n ", " ", "\n ", " ", "\n\n", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", "\n ", " ", "\n ", " ", "\n", "\n ", " ", " ", "\n ", "\n ", " ", "\n\n ", "\n\n", "\n ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", "\n\n ", "\n", " ", " ", " ", "\n ", "\n\n", "\n ", " ", "\n ", " ", "\n ", " ", "\n", " ", " ", "\n ", " ", "\n ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", "\n ", "\n ", "\n", "\n", "\n", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", "\n\n", "\n", "\n", "\n ", " ", "\n ", " ", "\n\n", " ", " ", " ", "\n", " ", "\n", "\n", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", "\n", "\n", "\n", " ", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", " ", " ", "\n", " ", " ", "\n", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", "\n", "\n", " ", "\n", " ", "\n", "\n ", "\n ", "\n ", "\n ", "\n", "\n", " ", "\n", " ", "\n", "\n ", "\n ", " ", "\n ", "\n ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", "\n", "\n", "\n", " ", " ", " ", " ", "\n", "\n ", "\n ", " ", "\n", "\n", "\n", "\n", " ", " ", " ", " ", "\n", "\n ", "\n ", " ", "\n", "\n", "\n", "\n", " ", " ", " ", " ", "\n", "\n ", "\n ", " ", "\n", "\n", "\n", "\n", " ", " ", " ", " ", "\n", "\n ", "\n", "\n ", " ", " ", "\n", "\n ", " ", " ", "\n", "\n", "\n\n", " ", " ", " ", " ", "\n", "\n ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n ", "\n", "\n ", " ", " ", "\n", "\n ", " ", " ", "\n", "\n", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 17622} +{"url": 
"https://docs.python.org/3/c-api/contextvars.html", "title": "Context Variables Objects", "content": "Context Variables Objects\u00b6\nAdded in version 3.7.\nChanged in version 3.7.1:\nNote\nIn Python 3.7.1 the signatures of all context variables\nC APIs were changed to use PyObject\npointers instead\nof PyContext\n, PyContextVar\n, and\nPyContextToken\n, e.g.:\n// in 3.7.0:\nPyContext *PyContext_New(void);\n// in 3.7.1+:\nPyObject *PyContext_New(void);\nSee bpo-34762 for more details.\nThis section details the public C API for the contextvars\nmodule.\n-\ntype PyContext\u00b6\nThe C structure used to represent a\ncontextvars.Context\nobject.\n-\ntype PyContextVar\u00b6\nThe C structure used to represent a\ncontextvars.ContextVar\nobject.\n-\ntype PyContextToken\u00b6\nThe C structure used to represent a\ncontextvars.Token\nobject.\n-\nPyTypeObject PyContext_Type\u00b6\nThe type object representing the context type.\n-\nPyTypeObject PyContextVar_Type\u00b6\nThe type object representing the context variable type.\n-\nPyTypeObject PyContextToken_Type\u00b6\nThe type object representing the context variable token type.\nType-check macros:\n-\nint PyContext_CheckExact(PyObject *o)\u00b6\nReturn true if o is of type\nPyContext_Type\n. o must not beNULL\n. This function always succeeds.\n-\nint PyContextVar_CheckExact(PyObject *o)\u00b6\nReturn true if o is of type\nPyContextVar_Type\n. o must not beNULL\n. This function always succeeds.\n-\nint PyContextToken_CheckExact(PyObject *o)\u00b6\nReturn true if o is of type\nPyContextToken_Type\n. o must not beNULL\n. This function always succeeds.\nContext object management functions:\n-\nPyObject *PyContext_New(void)\u00b6\n- Return value: New reference.\nCreate a new empty context object. Returns\nNULL\nif an error has occurred.\n-\nPyObject *PyContext_Copy(PyObject *ctx)\u00b6\n- Return value: New reference.\nCreate a shallow copy of the passed ctx context object. 
Returns\nNULL\nif an error has occurred.\n-\nPyObject *PyContext_CopyCurrent(void)\u00b6\n- Return value: New reference.\nCreate a shallow copy of the current thread context. Returns\nNULL\nif an error has occurred.\n-\nint PyContext_Enter(PyObject *ctx)\u00b6\nSet ctx as the current context for the current thread. Returns\n0\non success, and -1\non error.\n-\nint PyContext_Exit(PyObject *ctx)\u00b6\nDeactivate the ctx context and restore the previous context as the current context for the current thread. Returns\n0\non success, and -1\non error.\n-\nint PyContext_AddWatcher(PyContext_WatchCallback callback)\u00b6\nRegister callback as a context object watcher for the current interpreter. Return an ID which may be passed to\nPyContext_ClearWatcher()\n. In case of error (e.g. no more watcher IDs available), return -1\nand set an exception. Added in version 3.14.\n-\nint PyContext_ClearWatcher(int watcher_id)\u00b6\nClear watcher identified by watcher_id previously returned from\nPyContext_AddWatcher()\nfor the current interpreter. Return 0\non success, or -1\nand set an exception on error (e.g. if the given watcher_id was never registered.) Added in version 3.14.\n-\ntype PyContextEvent\u00b6\nEnumeration of possible context object watcher events:\nPy_CONTEXT_SWITCHED\n: The current context has switched to a different context. The object passed to the watch callback is the now-current\ncontextvars.Context\nobject, or None if no context is current.\nAdded in version 3.14.\n-\ntypedef int (*PyContext_WatchCallback)(PyContextEvent event, PyObject *obj)\u00b6\nContext object watcher callback function. The object passed to the callback is event-specific; see\nPyContextEvent\nfor details. If the callback returns with an exception set, it must return\n-1\n; this exception will be printed as an unraisable exception using\nPyErr_FormatUnraisable()\n. Otherwise it should return\n0\n. There may already be a pending exception set on entry to the callback. 
In this case, the callback should return\n0\nwith the same exception still set. This means the callback may not call any other API that can set an exception unless it saves and clears the exception state first, and restores it before returning. Added in version 3.14.\nContext variable functions:\n-\nPyObject *PyContextVar_New(const char *name, PyObject *def)\u00b6\n- Return value: New reference.\nCreate a new\nContextVar\nobject. The name parameter is used for introspection and debug purposes. The def parameter specifies a default value for the context variable, or NULL\nfor no default. If an error has occurred, this function returns NULL\n.\n-\nint PyContextVar_Get(PyObject *var, PyObject *default_value, PyObject **value)\u00b6\nGet the value of a context variable. Returns\n-1\nif an error has occurred during lookup, and 0\nif no error occurred, whether or not a value was found. If the context variable was found, value will be a pointer to it. If the context variable was not found, value will point to:\ndefault_value, if not\nNULL\n; the default value of var, if not\nNULL\n; NULL\nExcept for\nNULL\n, the function returns a new reference.\n-\nPyObject *PyContextVar_Set(PyObject *var, PyObject *value)\u00b6\n- Return value: New reference.\nSet the value of var to value in the current context. Returns a new token object for this change, or\nNULL\nif an error has occurred.\n-\nint PyContextVar_Reset(PyObject *var, PyObject *token)\u00b6\nReset the state of the var context variable to that it was in before\nPyContextVar_Set()\nthat returned the token was called. 
This function returns 0\non success and -1\non error.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1289} +{"url": "https://docs.python.org/3/library/email.headerregistry.html", "title": "email.headerregistry: Custom Header Objects", "content": "email.headerregistry\n: Custom Header Objects\u00b6\nSource code: Lib/email/headerregistry.py\nAdded in version 3.6: [1]\nHeaders are represented by customized subclasses of str\n. The\nparticular class used to represent a given header is determined by the\nheader_factory\nof the policy\nin\neffect when the headers are created. This section documents the particular\nheader_factory\nimplemented by the email package for handling RFC 5322\ncompliant email messages, which not only provides customized header objects for\nvarious header types, but also provides an extension mechanism for applications\nto add their own custom header types.\nWhen using any of the policy objects derived from\nEmailPolicy\n, all headers are produced by\nHeaderRegistry\nand have BaseHeader\nas their last base\nclass. Each header class has an additional base class that is determined by\nthe type of the header. For example, many headers have the class\nUnstructuredHeader\nas their other base class. The specialized second\nclass for a header is determined by the name of the header, using a lookup\ntable stored in the HeaderRegistry\n. All of this is managed\ntransparently for the typical application program, but interfaces are provided\nfor modifying the default behavior for use by more complex applications.\nThe sections below first document the header base classes and their attributes,\nfollowed by the API for modifying the behavior of HeaderRegistry\n, and\nfinally the support classes used to represent the data parsed from structured\nheaders.\n- class email.headerregistry.BaseHeader(name, value)\u00b6\nname and value are passed to\nBaseHeader\nfrom the header_factory\ncall. 
The string value of any header object is the value fully decoded to unicode. This base class defines the following read-only properties:\n- name\u00b6\nThe name of the header (the portion of the field before the \u2018:\u2019). This is exactly the value passed in the\nheader_factory\ncall for name; that is, case is preserved.\n- defects\u00b6\nA tuple of\nHeaderDefect\ninstances reporting any RFC compliance problems found during parsing. The email package tries to be complete about detecting compliance issues. See the errors\nmodule for a discussion of the types of defects that may be reported.\n- max_count\u00b6\nThe maximum number of headers of this type that can have the same\nname\n. A value of None\nmeans unlimited. The BaseHeader\nvalue for this attribute is None\n; it is expected that specialized header classes will override this value as needed.\nBaseHeader\nalso provides the following method, which is called by the email library code and should not in general be called by application programs:\n- fold(*, policy)\u00b6\nReturn a string containing\nlinesep\ncharacters as required to correctly fold the header according to policy. A cte_type\nof 8bit\nwill be treated as if it were 7bit\n, since headers may not contain arbitrary binary data. If utf8\nis False\n, non-ASCII data will be RFC 2047 encoded.\nBaseHeader\nby itself cannot be used to create a header object. It defines a protocol that each specialized header cooperates with in order to produce the header object. Specifically, BaseHeader\nrequires that the specialized class provide a classmethod()\nnamed parse\n. This method is called as follows:\nparse(string, kwds)\nkwds\nis a dictionary containing one pre-initialized key, defects\n. defects\nis an empty list. The parse method should append any detected defects to this list. On return, the kwds\ndictionary must contain values for at least the keys decoded\nand defects\n. decoded\nshould be the string value for the header (that is, the header value fully decoded to unicode). 
The parse method should assume that string may contain content-transfer-encoded parts, but should correctly handle all valid unicode characters as well so that it can parse un-encoded header values.BaseHeader\n\u2019s__new__\nthen creates the header instance, and calls itsinit\nmethod. The specialized class only needs to provide aninit\nmethod if it wishes to set additional attributes beyond those provided byBaseHeader\nitself. Such aninit\nmethod should look like this:def init(self, /, *args, **kw): self._myattr = kw.pop('myattr') super().init(*args, **kw)\nThat is, anything extra that the specialized class puts in to the\nkwds\ndictionary should be removed and handled, and the remaining contents ofkw\n(andargs\n) passed to theBaseHeader\ninit\nmethod.\n- class email.headerregistry.UnstructuredHeader\u00b6\nAn \u201cunstructured\u201d header is the default type of header in RFC 5322. Any header that does not have a specified syntax is treated as unstructured. The classic example of an unstructured header is the Subject header.\nIn RFC 5322, an unstructured header is a run of arbitrary text in the ASCII character set. RFC 2047, however, has an RFC 5322 compatible mechanism for encoding non-ASCII text as ASCII characters within a header value. When a value containing encoded words is passed to the constructor, the\nUnstructuredHeader\nparser converts such encoded words into unicode, following the RFC 2047 rules for unstructured text. The parser uses heuristics to attempt to decode certain non-compliant encoded words. Defects are registered in such cases, as well as defects for issues such as invalid characters within the encoded words or the non-encoded text.This header type provides no additional attributes.\n- class email.headerregistry.DateHeader\u00b6\nRFC 5322 specifies a very specific format for dates within email headers. 
The\nDateHeader\nparser recognizes that date format, as well as recognizing a number of variant forms that are sometimes found \u201cin the wild\u201d.This header type provides the following additional attributes:\n- datetime\u00b6\nIf the header value can be recognized as a valid date of one form or another, this attribute will contain a\ndatetime\ninstance representing that date. If the timezone of the input date is specified as-0000\n(indicating it is in UTC but contains no information about the source timezone), thendatetime\nwill be a naivedatetime\n. If a specific timezone offset is found (including+0000\n), thendatetime\nwill contain an awaredatetime\nthat usesdatetime.timezone\nto record the timezone offset.\nThe\ndecoded\nvalue of the header is determined by formatting thedatetime\naccording to the RFC 5322 rules; that is, it is set to:email.utils.format_datetime(self.datetime)\nWhen creating a\nDateHeader\n, value may bedatetime\ninstance. This means, for example, that the following code is valid and does what one would expect:msg['Date'] = datetime(2011, 7, 15, 21)\nBecause this is a naive\ndatetime\nit will be interpreted as a UTC timestamp, and the resulting value will have a timezone of-0000\n. Much more useful is to use thelocaltime()\nfunction from theutils\nmodule:msg['Date'] = utils.localtime()\nThis example sets the date header to the current time and date using the current timezone offset.\n- class email.headerregistry.AddressHeader\u00b6\nAddress headers are one of the most complex structured header types. The\nAddressHeader\nclass provides a generic interface to any address header.This header type provides the following additional attributes:\n- groups\u00b6\nA tuple of\nGroup\nobjects encoding the addresses and groups found in the header value. 
Addresses that are not part of a group are represented in this list as single-addressGroups\nwhosedisplay_name\nisNone\n.\n- addresses\u00b6\nA tuple of\nAddress\nobjects encoding all of the individual addresses from the header value. If the header value contains any groups, the individual addresses from the group are included in the list at the point where the group occurs in the value (that is, the list of addresses is \u201cflattened\u201d into a one dimensional list).\nThe\ndecoded\nvalue of the header will have all encoded words decoded to unicode.idna\nencoded domain names are also decoded to unicode. Thedecoded\nvalue is set by joining thestr\nvalue of the elements of thegroups\nattribute with', '\n.A list of\nAddress\nandGroup\nobjects in any combination may be used to set the value of an address header.Group\nobjects whosedisplay_name\nisNone\nwill be interpreted as single addresses, which allows an address list to be copied with groups intact by using the list obtained from thegroups\nattribute of the source header.\n- class email.headerregistry.SingleAddressHeader\u00b6\nA subclass of\nAddressHeader\nthat adds one additional attribute:- address\u00b6\nThe single address encoded by the header value. If the header value actually contains more than one address (which would be a violation of the RFC under the default\npolicy\n), accessing this attribute will result in aValueError\n.\nMany of the above classes also have a Unique\nvariant (for example,\nUniqueUnstructuredHeader\n). The only difference is that in the Unique\nvariant, max_count\nis set to 1.\n- class email.headerregistry.MIMEVersionHeader\u00b6\nThere is really only one valid value for the MIME-Version header, and that is\n1.0\n. For future proofing, this header class supports other valid version numbers. 
If a version number has a valid value per RFC 2045, then the header object will have non-None\nvalues for the following attributes:- version\u00b6\nThe version number as a string, with any whitespace and/or comments removed.\n- major\u00b6\nThe major version number as an integer\n- minor\u00b6\nThe minor version number as an integer\n- class email.headerregistry.ParameterizedMIMEHeader\u00b6\nMIME headers all start with the prefix \u2018Content-\u2019. Each specific header has a certain value, described under the class for that header. Some can also take a list of supplemental parameters, which have a common format. This class serves as a base for all the MIME headers that take parameters.\n- params\u00b6\nA dictionary mapping parameter names to parameter values.\n- class email.headerregistry.ContentTypeHeader\u00b6\nA\nParameterizedMIMEHeader\nclass that handles the Content-Type header.- content_type\u00b6\nThe content type string, in the form\nmaintype/subtype\n.\n- maintype\u00b6\n- subtype\u00b6\n- class email.headerregistry.ContentDispositionHeader\u00b6\nA\nParameterizedMIMEHeader\nclass that handles the Content-Disposition header.- content_disposition\u00b6\ninline\nandattachment\nare the only valid values in common use.\n- class email.headerregistry.ContentTransferEncodingHeader\u00b6\nHandles the Content-Transfer-Encoding header.\n- class email.headerregistry.HeaderRegistry(base_class=BaseHeader, default_class=UnstructuredHeader, use_default_map=True)\u00b6\nThis is the factory used by\nEmailPolicy\nby default.HeaderRegistry\nbuilds the class used to create a header instance dynamically, using base_class and a specialized class retrieved from a registry that it holds. When a given header name does not appear in the registry, the class specified by default_class is used as the specialized class. When use_default_map isTrue\n(the default), the standard mapping of header names to classes is copied in to the registry during initialization. 
base_class is always the last class in the generated class\u2019s__bases__\nlist.The default mappings are:\n- subject:\nUniqueUnstructuredHeader\n- date:\nUniqueDateHeader\n- resent-date:\nDateHeader\n- orig-date:\nUniqueDateHeader\n- sender:\nUniqueSingleAddressHeader\n- resent-sender:\nSingleAddressHeader\n- to:\nUniqueAddressHeader\n- resent-to:\nAddressHeader\n- cc:\nUniqueAddressHeader\n- resent-cc:\nAddressHeader\n- bcc:\nUniqueAddressHeader\n- resent-bcc:\nAddressHeader\n- from:\nUniqueAddressHeader\n- resent-from:\nAddressHeader\n- reply-to:\nUniqueAddressHeader\n- mime-version:\nMIMEVersionHeader\n- content-type:\nContentTypeHeader\n- content-disposition:\nContentDispositionHeader\n- content-transfer-encoding:\nContentTransferEncodingHeader\n- message-id:\nMessageIDHeader\nHeaderRegistry\nhas the following methods:- map_to_type(self, name, cls)\u00b6\nname is the name of the header to be mapped. It will be converted to lower case in the registry. cls is the specialized class to be used, along with base_class, to create the class used to instantiate headers that match name.\n- __getitem__(name)\u00b6\nConstruct and return a class to handle creating a name header.\n- __call__(name, value)\u00b6\nRetrieves the specialized header associated with name from the registry (using default_class if name does not appear in the registry) and composes it with base_class to produce a class, calls the constructed class\u2019s constructor, passing it the same argument list, and finally returns the class instance created thereby.\nThe following classes are the classes used to represent data parsed from structured headers and can, in general, be used by an application program to construct structured values to assign to specific headers.\n- class email.headerregistry.Address(display_name='', username='', domain='', addr_spec=None)\u00b6\nThe class used to represent an email address. 
The general form of an address is:\n[display_name] \nor:\nusername@domain\nwhere each part must conform to specific syntax rules spelled out in RFC 5322.\nAs a convenience addr_spec can be specified instead of username and domain, in which case username and domain will be parsed from the addr_spec. An addr_spec must be a properly RFC quoted string; if it is not\nAddress\nwill raise an error. Unicode characters are allowed and will be property encoded when serialized. However, per the RFCs, unicode is not allowed in the username portion of the address.- display_name\u00b6\nThe display name portion of the address, if any, with all quoting removed. If the address does not have a display name, this attribute will be an empty string.\n- username\u00b6\nThe\nusername\nportion of the address, with all quoting removed.\n- domain\u00b6\nThe\ndomain\nportion of the address.\n- addr_spec\u00b6\nThe\nusername@domain\nportion of the address, correctly quoted for use as a bare address (the second form shown above). This attribute is not mutable.\n- __str__()\u00b6\nThe\nstr\nvalue of the object is the address quoted according to RFC 5322 rules, but with no Content Transfer Encoding of any non-ASCII characters.\nTo support SMTP (RFC 5321),\nAddress\nhandles one special case: ifusername\nanddomain\nare both the empty string (orNone\n), then the string value of theAddress\nis<>\n.\n- class email.headerregistry.Group(display_name=None, addresses=None)\u00b6\nThe class used to represent an address group. The general form of an address group is:\ndisplay_name: [address-list];\nAs a convenience for processing lists of addresses that consist of a mixture of groups and single addresses, a\nGroup\nmay also be used to represent single addresses that are not part of a group by setting display_name toNone\nand providing a list of the single address as addresses.- display_name\u00b6\nThe\ndisplay_name\nof the group. 
If it isNone\nand there is exactly oneAddress\ninaddresses\n, then theGroup\nrepresents a single address that is not in a group.\nFootnotes", "code_snippets": [" ", "\n", " ", " ", " ", "\n ", " ", " ", "\n ", " ", "\n", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", "\n", " ", "\n", "\n", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 3633} +{"url": "https://docs.python.org/3/library/asyncio-dev.html", "title": "Developing with asyncio", "content": "Developing with asyncio\u00b6\nAsynchronous programming is different from classic \u201csequential\u201d programming.\nThis page lists common mistakes and traps and explains how to avoid them.\nDebug Mode\u00b6\nBy default asyncio runs in production mode. In order to ease the development asyncio has a debug mode.\nThere are several ways to enable asyncio debug mode:\nSetting the\nPYTHONASYNCIODEBUG\nenvironment variable to1\n.Using the Python Development Mode.\nPassing\ndebug=True\ntoasyncio.run()\n.Calling\nloop.set_debug()\n.\nIn addition to enabling the debug mode, consider also:\nsetting the log level of the asyncio logger to\nlogging.DEBUG\n, for example the following snippet of code can be run at startup of the application:logging.basicConfig(level=logging.DEBUG)\nconfiguring the\nwarnings\nmodule to displayResourceWarning\nwarnings. One way of doing that is by using the-W\ndefault\ncommand line option.\nWhen the debug mode is enabled:\nMany non-threadsafe asyncio APIs (such as\nloop.call_soon()\nandloop.call_at()\nmethods) raise an exception if they are called from a wrong thread.The execution time of the I/O selector is logged if it takes too long to perform an I/O operation.\nCallbacks taking longer than 100 milliseconds are logged. 
The\nloop.slow_callback_duration\nattribute can be used to set the minimum execution duration in seconds that is considered \u201cslow\u201d.\nConcurrency and Multithreading\u00b6\nAn event loop runs in a thread (typically the main thread) and executes\nall callbacks and Tasks in its thread. While a Task is running in the\nevent loop, no other Tasks can run in the same thread. When a Task\nexecutes an await\nexpression, the running Task gets suspended, and\nthe event loop executes the next Task.\nTo schedule a callback from another OS thread, the\nloop.call_soon_threadsafe()\nmethod should be used. Example:\nloop.call_soon_threadsafe(callback, *args)\nAlmost all asyncio objects are not thread safe, which is typically\nnot a problem unless there is code that works with them from outside\nof a Task or a callback. If there\u2019s a need for such code to call a\nlow-level asyncio API, the loop.call_soon_threadsafe()\nmethod\nshould be used, e.g.:\nloop.call_soon_threadsafe(fut.cancel)\nTo schedule a coroutine object from a different OS thread, the\nrun_coroutine_threadsafe()\nfunction should be used. It returns a\nconcurrent.futures.Future\nto access the result:\nasync def coro_func():\nreturn await asyncio.sleep(1, 42)\n# Later in another OS thread:\nfuture = asyncio.run_coroutine_threadsafe(coro_func(), loop)\n# Wait for the result:\nresult = future.result()\nTo handle signals the event loop must be run in the main thread.\nThe loop.run_in_executor()\nmethod can be used with a\nconcurrent.futures.ThreadPoolExecutor\nor\nInterpreterPoolExecutor\nto execute\nblocking code in a different OS thread without blocking the OS thread\nthat the event loop runs in.\nThere is currently no way to schedule coroutines or callbacks directly\nfrom a different process (such as one started with\nmultiprocessing\n). The Event Loop Methods\nsection lists APIs that can read from pipes and watch file descriptors\nwithout blocking the event loop. 
In addition, asyncio\u2019s\nSubprocess APIs provide a way to start a\nprocess and communicate with it from the event loop. Lastly, the\naforementioned loop.run_in_executor()\nmethod can also be used\nwith a concurrent.futures.ProcessPoolExecutor\nto execute\ncode in a different process.\nRunning Blocking Code\u00b6\nBlocking (CPU-bound) code should not be called directly. For example, if a function performs a CPU-intensive calculation for 1 second, all concurrent asyncio Tasks and IO operations would be delayed by 1 second.\nAn executor can be used to run a task in a different thread,\nincluding in a different interpreter, or even in\na different process to avoid blocking the OS thread with the\nevent loop. See the loop.run_in_executor()\nmethod for more\ndetails.\nLogging\u00b6\nasyncio uses the logging\nmodule and all logging is performed\nvia the \"asyncio\"\nlogger.\nThe default log level is logging.INFO\n, which can be easily\nadjusted:\nlogging.getLogger(\"asyncio\").setLevel(logging.WARNING)\nNetwork logging can block the event loop. It is recommended to use a separate thread for handling logs or use non-blocking IO. For example, see Dealing with handlers that block.\nDetect never-awaited coroutines\u00b6\nWhen a coroutine function is called, but not awaited\n(e.g. coro()\ninstead of await coro()\n)\nor the coroutine is not scheduled with asyncio.create_task()\n, asyncio\nwill emit a RuntimeWarning\n:\nimport asyncio\nasync def test():\nprint(\"never scheduled\")\nasync def main():\ntest()\nasyncio.run(main())\nOutput:\ntest.py:7: RuntimeWarning: coroutine 'test' was never awaited\ntest()\nOutput in debug mode:\ntest.py:7: RuntimeWarning: coroutine 'test' was never awaited\nCoroutine created at (most recent call last)\nFile \"../t.py\", line 9, in \nasyncio.run(main(), debug=True)\n< .. 
>\nFile \"../t.py\", line 7, in main\ntest()\ntest()\nThe usual fix is to either await the coroutine or call the\nasyncio.create_task()\nfunction:\nasync def main():\nawait test()\nDetect never-retrieved exceptions\u00b6\nIf a Future.set_exception()\nis called but the Future object is\nnever awaited on, the exception would never be propagated to the\nuser code. In this case, asyncio would emit a log message when the\nFuture object is garbage collected.\nExample of an unhandled exception:\nimport asyncio\nasync def bug():\nraise Exception(\"not consumed\")\nasync def main():\nasyncio.create_task(bug())\nasyncio.run(main())\nOutput:\nTask exception was never retrieved\nfuture: \nexception=Exception('not consumed')>\nTraceback (most recent call last):\nFile \"test.py\", line 4, in bug\nraise Exception(\"not consumed\")\nException: not consumed\nEnable the debug mode to get the traceback where the task was created:\nasyncio.run(main(), debug=True)\nOutput in debug mode:\nTask exception was never retrieved\nfuture: \nexception=Exception('not consumed') created at asyncio/tasks.py:321>\nsource_traceback: Object created at (most recent call last):\nFile \"../t.py\", line 9, in \nasyncio.run(main(), debug=True)\n< .. 
>\nTraceback (most recent call last):\nFile \"../t.py\", line 4, in bug\nraise Exception(\"not consumed\")\nException: not consumed", "code_snippets": ["\n", " ", "\n", "\n", " ", "\n ", " ", " ", " ", "\n\n", "\n\n", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n", "\n\n", " ", "\n ", "\n\n", " ", "\n ", "\n\n", "\n", " ", " ", " ", " ", " ", " ", "\n ", "\n", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n ", " ", "\n\n ", " ", " ", "\n\n ", " ", " ", " ", " ", " ", "\n ", "\n ", "\n", " ", "\n ", " ", "\n", "\n\n", " ", "\n ", " ", "\n\n", " ", "\n ", "\n\n", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", "\n ", "\n\n", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n ", " ", "\n", " ", " ", "\n", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", "\n\n", " ", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n ", " ", "\n\n", " ", " ", "\n\n", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n ", " ", "\n", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 1568} +{"url": "https://docs.python.org/3/library/asyncio-subprocess.html", "title": "Subprocesses", "content": "Subprocesses\u00b6\nSource code: Lib/asyncio/subprocess.py, Lib/asyncio/base_subprocess.py\nThis section describes high-level async/await asyncio APIs to create and manage subprocesses.\nHere\u2019s an example of how asyncio can run a shell command and obtain its result:\nimport asyncio\nasync def run(cmd):\nproc = await asyncio.create_subprocess_shell(\ncmd,\nstdout=asyncio.subprocess.PIPE,\nstderr=asyncio.subprocess.PIPE)\nstdout, stderr = await proc.communicate()\nprint(f'[{cmd!r} exited with {proc.returncode}]')\nif stdout:\nprint(f'[stdout]\\n{stdout.decode()}')\nif stderr:\nprint(f'[stderr]\\n{stderr.decode()}')\nasyncio.run(run('ls /zzz'))\nwill print:\n['ls /zzz' exited with 1]\n[stderr]\nls: /zzz: No such file or 
directory\nBecause all asyncio subprocess functions are asynchronous and asyncio provides many tools to work with such functions, it is easy to execute and monitor multiple subprocesses in parallel. It is indeed trivial to modify the above example to run several commands simultaneously:\nasync def main():\nawait asyncio.gather(\nrun('ls /zzz'),\nrun('sleep 1; echo \"hello\"'))\nasyncio.run(main())\nSee also the Examples subsection.\nCreating Subprocesses\u00b6\n- async asyncio.create_subprocess_exec(program, *args, stdin=None, stdout=None, stderr=None, limit=None, **kwds)\u00b6\nCreate a subprocess.\nThe limit argument sets the buffer limit for\nStreamReader\nwrappers forstdout\nandstderr\n(ifsubprocess.PIPE\nis passed to stdout and stderr arguments).Return a\nProcess\ninstance.See the documentation of\nloop.subprocess_exec()\nfor other parameters.If the process object is garbage collected while the process is still running, the child process will be killed.\nChanged in version 3.10: Removed the loop parameter.\n- async asyncio.create_subprocess_shell(cmd, stdin=None, stdout=None, stderr=None, limit=None, **kwds)\u00b6\nRun the cmd shell command.\nThe limit argument sets the buffer limit for\nStreamReader\nwrappers forstdout\nandstderr\n(ifsubprocess.PIPE\nis passed to stdout and stderr arguments).Return a\nProcess\ninstance.See the documentation of\nloop.subprocess_shell()\nfor other parameters.If the process object is garbage collected while the process is still running, the child process will be killed.\nImportant\nIt is the application\u2019s responsibility to ensure that all whitespace and special characters are quoted appropriately to avoid shell injection vulnerabilities. 
The\nshlex.quote()\nfunction can be used to properly escape whitespace and special shell characters in strings that are going to be used to construct shell commands.Changed in version 3.10: Removed the loop parameter.\nNote\nSubprocesses are available for Windows if a ProactorEventLoop\nis\nused. See Subprocess Support on Windows\nfor details.\nSee also\nasyncio also has the following low-level APIs to work with subprocesses:\nloop.subprocess_exec()\n, loop.subprocess_shell()\n,\nloop.connect_read_pipe()\n, loop.connect_write_pipe()\n,\nas well as the Subprocess Transports\nand Subprocess Protocols.\nConstants\u00b6\n- asyncio.subprocess.PIPE\u00b6\nCan be passed to the stdin, stdout or stderr parameters.\nIf PIPE is passed to stdin argument, the\nProcess.stdin\nattribute will point to aStreamWriter\ninstance.If PIPE is passed to stdout or stderr arguments, the\nProcess.stdout\nandProcess.stderr\nattributes will point toStreamReader\ninstances.\n- asyncio.subprocess.STDOUT\u00b6\nSpecial value that can be used as the stderr argument and indicates that standard error should be redirected into standard output.\n- asyncio.subprocess.DEVNULL\u00b6\nSpecial value that can be used as the stdin, stdout or stderr argument to process creation functions. It indicates that the special file\nos.devnull\nwill be used for the corresponding subprocess stream.\nInteracting with Subprocesses\u00b6\nBoth create_subprocess_exec()\nand create_subprocess_shell()\nfunctions return instances of the Process class. 
Process is a high-level\nwrapper that allows communicating with subprocesses and watching for\ntheir completion.\n- class asyncio.subprocess.Process\u00b6\nAn object that wraps OS processes created by the\ncreate_subprocess_exec()\nandcreate_subprocess_shell()\nfunctions.This class is designed to have a similar API to the\nsubprocess.Popen\nclass, but there are some notable differences:unlike Popen, Process instances do not have an equivalent to the\npoll()\nmethod;the\ncommunicate()\nandwait()\nmethods don\u2019t have a timeout parameter: use thewait_for()\nfunction;the\nProcess.wait()\nmethod is asynchronous, whereassubprocess.Popen.wait()\nmethod is implemented as a blocking busy loop;the universal_newlines parameter is not supported.\nThis class is not thread safe.\nSee also the Subprocess and Threads section.\n- async wait()\u00b6\nWait for the child process to terminate.\nSet and return the\nreturncode\nattribute.Note\nThis method can deadlock when using\nstdout=PIPE\norstderr=PIPE\nand the child process generates so much output that it blocks waiting for the OS pipe buffer to accept more data. Use thecommunicate()\nmethod when using pipes to avoid this condition.\n- async communicate(input=None)\u00b6\nInteract with process:\nsend data to stdin (if input is not\nNone\n);closes stdin;\nread data from stdout and stderr, until EOF is reached;\nwait for process to terminate.\nThe optional input argument is the data (\nbytes\nobject) that will be sent to the child process.Return a tuple\n(stdout_data, stderr_data)\n.If either\nBrokenPipeError\norConnectionResetError\nexception is raised when writing input into stdin, the exception is ignored. This condition occurs when the process exits before all data are written into stdin.If it is desired to send data to the process\u2019 stdin, the process needs to be created with\nstdin=PIPE\n. 
Similarly, to get anything other than None\nin the result tuple, the process has to be created with stdout=PIPE\nand/or stderr=PIPE\narguments.\nNote that the data read is buffered in memory, so do not use this method if the data size is large or unlimited.\nChanged in version 3.12: stdin gets closed when\ninput=None\ntoo.\n- send_signal(signal)\u00b6\nSends the signal signal to the child process.\nNote\nOn Windows,\nSIGTERM\nis an alias for terminate()\n.\nCTRL_C_EVENT\nand CTRL_BREAK_EVENT\ncan be sent to processes started with a creationflags parameter which includes CREATE_NEW_PROCESS_GROUP\n.\n- terminate()\u00b6\nStop the child process.\nOn POSIX systems this method sends\nSIGTERM\nto the child process.\nOn Windows the Win32 API function\nTerminateProcess()\nis called to stop the child process.\n- kill()\u00b6\nKill the child process.\nOn POSIX systems this method sends\nSIGKILL\nto the child process.\nOn Windows this method is an alias for\nterminate()\n.\n- stdin\u00b6\nStandard input stream (\nStreamWriter\n) or None\nif the process was created with stdin=None\n.\n- stdout\u00b6\nStandard output stream (\nStreamReader\n) or None\nif the process was created with stdout=None\n.\n- stderr\u00b6\nStandard error stream (\nStreamReader\n) or None\nif the process was created with stderr=None\n.\nWarning\nUse the\ncommunicate()\nmethod rather than process.stdin.write()\n, await process.stdout.read()\nor await process.stderr.read()\n. 
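The pattern that warning recommends can be sketched as follows; tr is used here only as a convenient POSIX utility that echoes transformed input:

```python
import asyncio

async def upper(text: bytes) -> bytes:
    # Feed data to the child's stdin and collect its stdout in a single
    # communicate() call, instead of writing and reading the pipes directly.
    proc = await asyncio.create_subprocess_exec(
        "tr", "a-z", "A-Z",
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE)
    stdout, _ = await proc.communicate(input=text)
    return stdout
```

Running asyncio.run(upper(b"hello\n")) yields b"HELLO\n".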
This avoids deadlocks due to streams pausing reading or writing and blocking the child process.- pid\u00b6\nProcess identification number (PID).\nNote that for processes created by the\ncreate_subprocess_shell()\nfunction, this attribute is the PID of the spawned shell.\n- returncode\u00b6\nReturn code of the process when it exits.\nA\nNone\nvalue indicates that the process has not terminated yet.A negative value\n-N\nindicates that the child was terminated by signalN\n(POSIX only).\nSubprocess and Threads\u00b6\nStandard asyncio event loop supports running subprocesses from different threads by default.\nOn Windows subprocesses are provided by ProactorEventLoop\nonly (default),\nSelectorEventLoop\nhas no subprocess support.\nNote that alternative event loop implementations might have own limitations; please refer to their documentation.\nSee also\nThe Concurrency and multithreading in asyncio section.\nExamples\u00b6\nAn example using the Process\nclass to\ncontrol a subprocess and the StreamReader\nclass to read from\nits standard output.\nThe subprocess is created by the create_subprocess_exec()\nfunction:\nimport asyncio\nimport sys\nasync def get_date():\ncode = 'import datetime; print(datetime.datetime.now())'\n# Create the subprocess; redirect the standard output\n# into a pipe.\nproc = await asyncio.create_subprocess_exec(\nsys.executable, '-c', code,\nstdout=asyncio.subprocess.PIPE)\n# Read one line of output.\ndata = await proc.stdout.readline()\nline = data.decode('ascii').rstrip()\n# Wait for the subprocess exit.\nawait proc.wait()\nreturn line\ndate = asyncio.run(get_date())\nprint(f\"Current date: {date}\")\nSee also the same example written using low-level APIs.", "code_snippets": ["\n\n", " ", "\n ", " ", " ", " ", "\n ", "\n ", "\n ", "\n\n ", " ", " ", " ", " ", "\n\n ", "\n ", " ", "\n ", "\n ", " ", "\n ", "\n\n", "\n", " ", " ", " ", "\n", "\n", " ", " ", " ", " ", " ", " ", "\n", " ", "\n ", " ", "\n ", "\n ", "\n\n", "\n", "\n", "\n\n", " ", 
"\n ", " ", " ", "\n\n ", "\n ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", "\n\n ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n\n ", "\n ", " ", "\n ", " ", "\n\n", " ", " ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 2147} +{"url": "https://docs.python.org/3/c-api/coro.html", "title": "Coroutine Objects", "content": "Coroutine Objects\u00b6\nAdded in version 3.5.\nCoroutine objects are what functions declared with an async\nkeyword\nreturn.\n-\ntype PyCoroObject\u00b6\nThe C structure used for coroutine objects.\n-\nPyTypeObject PyCoro_Type\u00b6\nThe type object corresponding to coroutine objects.\n-\nint PyCoro_CheckExact(PyObject *ob)\u00b6\nReturn true if ob\u2019s type is\nPyCoro_Type\n; ob must not beNULL\n. This function always succeeds.\n-\nPyObject *PyCoro_New(PyFrameObject *frame, PyObject *name, PyObject *qualname)\u00b6\n- Return value: New reference.\nCreate and return a new coroutine object based on the frame object, with\n__name__\nand__qualname__\nset to name and qualname. A reference to frame is stolen by this function. The frame argument must not beNULL\n.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 179} +{"url": "https://docs.python.org/3/howto/argparse-optparse.html", "title": "Migrating ", "content": "Migrating optparse\ncode to argparse\n\u00b6\nThe argparse\nmodule offers several higher level features not natively\nprovided by the optparse\nmodule, including:\nHandling positional arguments.\nSupporting subcommands.\nAllowing alternative option prefixes like\n+\nand/\n.Handling zero-or-more and one-or-more style arguments.\nProducing more informative usage messages.\nProviding a much simpler interface for custom\ntype\nandaction\n.\nOriginally, the argparse\nmodule attempted to maintain compatibility\nwith optparse\n. 
However, the fundamental design differences between\nsupporting declarative command line option processing (while leaving positional\nargument processing to application code), and supporting both named options\nand positional arguments in the declarative interface mean that the\nAPI has diverged from that of optparse\nover time.\nAs described in Choosing an argument parsing library, applications that are\ncurrently using optparse\nand are happy with the way it works can\njust continue to use optparse\n.\nApplication developers that are considering migrating should also review the list of intrinsic behavioural differences described in that section before deciding whether or not migration is desirable.\nFor applications that do choose to migrate from optparse\nto argparse\n,\nthe following suggestions should be helpful:\nReplace all\noptparse.OptionParser.add_option()\ncalls withArgumentParser.add_argument()\ncalls.Replace\n(options, args) = parser.parse_args()\nwithargs = parser.parse_args()\nand add additionalArgumentParser.add_argument()\ncalls for the positional arguments. Keep in mind that what was previously calledoptions\n, now in theargparse\ncontext is calledargs\n.Replace\noptparse.OptionParser.disable_interspersed_args()\nby usingparse_intermixed_args()\ninstead ofparse_args()\n.Replace callback actions and the\ncallback_*\nkeyword arguments withtype\noraction\narguments.Replace string names for\ntype\nkeyword arguments with the corresponding type objects (e.g. 
int, float, complex, etc).Replace\noptparse.Values\nwithNamespace\nandoptparse.OptionError\nandoptparse.OptionValueError\nwithArgumentError\n.Replace strings with implicit arguments such as\n%default\nor%prog\nwith the standard Python syntax to use dictionaries to format strings, that is,%(default)s\nand%(prog)s\n.Replace the OptionParser constructor\nversion\nargument with a call toparser.add_argument('--version', action='version', version='')\n.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 603} +{"url": "https://docs.python.org/3/howto/pyporting.html", "title": "How to port Python 2 Code to Python 3", "content": "How to port Python 2 Code to Python 3\u00b6\n- author:\nBrett Cannon\nPython 2 reached its official end-of-life at the start of 2020. This means that no new bug reports, fixes, or changes will be made to Python 2 - it\u2019s no longer supported: see PEP 373 and status of Python versions.\nIf you are looking to port an extension module instead of pure Python code, please see Porting Extension Modules to Python 3.\nThe archived python-porting mailing list may contain some useful guidance.\nSince Python 3.11 the original porting guide was discontinued. You can find the old guide in the archive.\nThird-party guides\u00b6\nThere are also multiple third-party guides that might be useful:", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 167} +{"url": "https://docs.python.org/3/c-api/gen.html", "title": "Generator Objects", "content": "Generator Objects\u00b6\nGenerator objects are what Python uses to implement generator iterators. 
They\nare normally created by iterating over a function that yields values, rather\nthan explicitly calling PyGen_New()\nor PyGen_NewWithQualName()\n.\n-\ntype PyGenObject\u00b6\nThe C structure used for generator objects.\n-\nPyTypeObject PyGen_Type\u00b6\nThe type object corresponding to generator objects.\n-\nint PyGen_Check(PyObject *ob)\u00b6\nReturn true if ob is a generator object; ob must not be\nNULL\n. This function always succeeds.\n-\nint PyGen_CheckExact(PyObject *ob)\u00b6\nReturn true if ob\u2019s type is\nPyGen_Type\n; ob must not beNULL\n. This function always succeeds.\n-\nPyObject *PyGen_New(PyFrameObject *frame)\u00b6\n- Return value: New reference.\nCreate and return a new generator object based on the frame object. A reference to frame is stolen by this function. The argument must not be\nNULL\n.\n-\nPyObject *PyGen_NewWithQualName(PyFrameObject *frame, PyObject *name, PyObject *qualname)\u00b6\n- Return value: New reference.\nCreate and return a new generator object based on the frame object, with\n__name__\nand__qualname__\nset to name and qualname. A reference to frame is stolen by this function. The frame argument must not beNULL\n.\n-\nPyCodeObject *PyGen_GetCode(PyGenObject *gen)\u00b6\nReturn a new strong reference to the code object wrapped by gen. This function always succeeds.\nAsynchronous Generator Objects\u00b6\nSee also\n-\nPyTypeObject PyAsyncGen_Type\u00b6\nThe type object corresponding to asynchronous generator objects. This is available as\ntypes.AsyncGeneratorType\nin the Python layer.Added in version 3.6.\n-\nPyObject *PyAsyncGen_New(PyFrameObject *frame, PyObject *name, PyObject *qualname)\u00b6\nCreate a new asynchronous generator wrapping frame, with\n__name__\nand__qualname__\nset to name and qualname. frame is stolen by this function and must not beNULL\n.On success, this function returns a strong reference to the new asynchronous generator. 
On failure, this function returns\nNULL\nwith an exception set.Added in version 3.6.\nDeprecated API\u00b6\n-\nPyAsyncGenASend_CheckExact(op)\u00b6\nThis is a soft deprecated API that was included in Python\u2019s C API by mistake.\nIt is solely here for completeness; do not use this API.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 543} +{"url": "https://docs.python.org/3/c-api/picklebuffer.html", "title": "Pickle buffer objects", "content": "Pickle buffer objects\u00b6\nAdded in version 3.8.\nA pickle.PickleBuffer\nobject wraps a buffer-providing object for out-of-band data transfer with the pickle\nmodule.\n-\nPyTypeObject PyPickleBuffer_Type\u00b6\nThis instance of\nPyTypeObject\nrepresents the Python pickle buffer type. This is the same object aspickle.PickleBuffer\nin the Python layer.\n-\nint PyPickleBuffer_Check(PyObject *op)\u00b6\nReturn true if op is a pickle buffer instance. This function always succeeds.\n-\nPyObject *PyPickleBuffer_FromObject(PyObject *obj)\u00b6\nCreate a pickle buffer from the object obj.\nThis function will fail if obj doesn\u2019t support the buffer protocol.\nOn success, return a new pickle buffer instance. On failure, set an exception and return\nNULL\n.Analogous to calling\npickle.PickleBuffer\nwith obj in Python.\n-\nconst Py_buffer *PyPickleBuffer_GetBuffer(PyObject *picklebuf)\u00b6\nGet a pointer to the underlying\nPy_buffer\nthat the pickle buffer wraps.The returned pointer is valid as long as picklebuf is alive and has not been released. The caller must not modify or free the returned\nPy_buffer\n. If the pickle buffer has been released, raiseValueError\n.On success, return a pointer to the buffer view. On failure, set an exception and return\nNULL\n.\n-\nint PyPickleBuffer_Release(PyObject *picklebuf)\u00b6\nRelease the underlying buffer held by the pickle buffer.\nReturn\n0\non success. 
On failure, set an exception and return-1\n.Analogous to calling\npickle.PickleBuffer.release()\nin Python.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 362} +{"url": "https://docs.python.org/3/c-api/slice.html", "title": "Slice Objects", "content": "Slice Objects\u00b6\n-\nPyTypeObject PySlice_Type\u00b6\n- Part of the Stable ABI.\nThe type object for slice objects. This is the same as\nslice\nin the Python layer.\n-\nint PySlice_Check(PyObject *ob)\u00b6\nReturn true if ob is a slice object; ob must not be\nNULL\n. This function always succeeds.\n-\nPyObject *PySlice_New(PyObject *start, PyObject *stop, PyObject *step)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new slice object with the given values. The start, stop, and step parameters are used as the values of the slice object attributes of the same names. Any of the values may be\nNULL\n, in which case theNone\nwill be used for the corresponding attribute.Return\nNULL\nwith an exception set if the new object could not be allocated.\n-\nint PySlice_GetIndices(PyObject *slice, Py_ssize_t length, Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t *step)\u00b6\n- Part of the Stable ABI.\nRetrieve the start, stop and step indices from the slice object slice, assuming a sequence of length length. Treats indices greater than length as errors.\nReturns\n0\non success and-1\non error with no exception set (unless one of the indices was notNone\nand failed to be converted to an integer, in which case-1\nis returned with an exception set).You probably do not want to use this function.\nChanged in version 3.2: The parameter type for the slice parameter was\nPySliceObject*\nbefore.\n-\nint PySlice_GetIndicesEx(PyObject *slice, Py_ssize_t length, Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t *step, Py_ssize_t *slicelength)\u00b6\n- Part of the Stable ABI.\nUsable replacement for\nPySlice_GetIndices()\n. 
Retrieve the start, stop, and step indices from the slice object slice assuming a sequence of length length, and store the length of the slice in slicelength. Out of bounds indices are clipped in a manner consistent with the handling of normal slices.Return\n0\non success and-1\non error with an exception set.Note\nThis function is considered not safe for resizable sequences. Its invocation should be replaced by a combination of\nPySlice_Unpack()\nandPySlice_AdjustIndices()\nwhereif (PySlice_GetIndicesEx(slice, length, &start, &stop, &step, &slicelength) < 0) { // return error }\nis replaced by\nif (PySlice_Unpack(slice, &start, &stop, &step) < 0) { // return error } slicelength = PySlice_AdjustIndices(length, &start, &stop, step);\nChanged in version 3.2: The parameter type for the slice parameter was\nPySliceObject*\nbefore.Changed in version 3.6.1: If\nPy_LIMITED_API\nis not set or set to the value between0x03050400\nand0x03060000\n(not including) or0x03060100\nor higherPySlice_GetIndicesEx()\nis implemented as a macro usingPySlice_Unpack()\nandPySlice_AdjustIndices()\n. Arguments start, stop and step are evaluated more than once.Deprecated since version 3.6.1: If\nPy_LIMITED_API\nis set to the value less than0x03050400\nor between0x03060000\nand0x03060100\n(not including)PySlice_GetIndicesEx()\nis a deprecated function.\n-\nint PySlice_Unpack(PyObject *slice, Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t *step)\u00b6\n- Part of the Stable ABI since version 3.7.\nExtract the start, stop and step data members from a slice object as C integers. 
Silently reduce values larger than\nPY_SSIZE_T_MAX\ntoPY_SSIZE_T_MAX\n, silently boost the start and stop values less thanPY_SSIZE_T_MIN\ntoPY_SSIZE_T_MIN\n, and silently boost the step values less than-PY_SSIZE_T_MAX\nto-PY_SSIZE_T_MAX\n.Return\n-1\nwith an exception set on error,0\non success.Added in version 3.6.1.\n-\nPy_ssize_t PySlice_AdjustIndices(Py_ssize_t length, Py_ssize_t *start, Py_ssize_t *stop, Py_ssize_t step)\u00b6\n- Part of the Stable ABI since version 3.7.\nAdjust start/end slice indices assuming a sequence of the specified length. Out of bounds indices are clipped in a manner consistent with the handling of normal slices.\nReturn the length of the slice. Always successful. Doesn\u2019t call Python code.\nAdded in version 3.6.1.\nEllipsis Object\u00b6\n-\nPyTypeObject PyEllipsis_Type\u00b6\n- Part of the Stable ABI.\nThe type of Python\nEllipsis\nobject. Same astypes.EllipsisType\nin the Python layer.\n-\nPyObject *Py_Ellipsis\u00b6\nThe Python\nEllipsis\nobject. This object has no methods. LikePy_None\n, it is an immortal singleton object.Changed in version 3.12:\nPy_Ellipsis\nis immortal.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1048} +{"url": "https://docs.python.org/3/c-api/descriptor.html", "title": "Descriptor Objects", "content": "Descriptor Objects\u00b6\n\u201cDescriptors\u201d are objects that describe some attribute of an object. They are found in the dictionary of type objects.\n-\nPyTypeObject PyProperty_Type\u00b6\n- Part of the Stable ABI.\nThe type object for the built-in descriptor types.\n-\nPyObject *PyDescr_NewGetSet(PyTypeObject *type, struct PyGetSetDef *getset)\u00b6\n- Return value: New reference. Part of the Stable ABI.\n-\nPyObject *PyDescr_NewMember(PyTypeObject *type, struct PyMemberDef *meth)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\n-\nPyTypeObject PyMemberDescr_Type\u00b6\n- Part of the Stable ABI.\nThe type object for member descriptor objects created from\nPyMemberDef\nstructures. These descriptors expose fields of a C struct as attributes on a type, and correspond totypes.MemberDescriptorType\nobjects in Python.\n-\nPyTypeObject PyGetSetDescr_Type\u00b6\n- Part of the Stable ABI.\nThe type object for get/set descriptor objects created from\nPyGetSetDef\nstructures. These descriptors implement attributes whose value is computed by C getter and setter functions, and are used for many built-in type attributes.\n-\nPyObject *PyDescr_NewMethod(PyTypeObject *type, struct PyMethodDef *meth)\u00b6\n- Return value: New reference. Part of the Stable ABI.\n-\nPyTypeObject PyMethodDescr_Type\u00b6\n- Part of the Stable ABI.\nThe type object for method descriptor objects created from\nPyMethodDef\nstructures. These descriptors expose C functions as methods on a type, and correspond totypes.MemberDescriptorType\nobjects in Python.\n-\nPyObject *PyDescr_NewWrapper(PyTypeObject *type, struct wrapperbase *wrapper, void *wrapped)\u00b6\n- Return value: New reference.\n-\nPyTypeObject PyWrapperDescr_Type\u00b6\n- Part of the Stable ABI.\nThe type object for wrapper descriptor objects created by\nPyDescr_NewWrapper()\nandPyWrapper_New()\n. Wrapper descriptors are used internally to expose special methods implemented via wrapper structures, and appear in Python astypes.WrapperDescriptorType\nobjects.\n-\nPyObject *PyDescr_NewClassMethod(PyTypeObject *type, PyMethodDef *method)\u00b6\n- Return value: New reference. Part of the Stable ABI.\n-\nint PyDescr_IsData(PyObject *descr)\u00b6\nReturn non-zero if the descriptor object descr describes a data attribute, or\n0\nif it describes a method. descr must be a descriptor object; there is no error checking.\n-\nPyObject *PyWrapper_New(PyObject*, PyObject*)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nBuilt-in descriptors\u00b6\n-\nPyTypeObject PySuper_Type\u00b6\n- Part of the Stable ABI.\nThe type object for super objects. This is the same object as\nsuper\nin the Python layer.\n-\nPyTypeObject PyClassMethod_Type\u00b6\nThe type of class method objects. This is the same object as\nclassmethod\nin the Python layer.\n-\nPyTypeObject PyClassMethodDescr_Type\u00b6\n- Part of the Stable ABI.\nThe type object for C-level class method descriptor objects. This is the type of the descriptors created for\nclassmethod()\ndefined in C extension types, and is the same object asclassmethod\nin Python.\n-\nPyObject *PyClassMethod_New(PyObject *callable)\u00b6\nCreate a new\nclassmethod\nobject wrapping callable. callable must be a callable object and must not beNULL\n.On success, this function returns a strong reference to a new class method descriptor. On failure, this function returns\nNULL\nwith an exception set.\n-\nPyTypeObject PyStaticMethod_Type\u00b6\nThe type of static method objects. This is the same object as\nstaticmethod\nin the Python layer.\n-\nPyObject *PyStaticMethod_New(PyObject *callable)\u00b6\nCreate a new\nstaticmethod\nobject wrapping callable. callable must be a callable object and must not beNULL\n.On success, this function returns a strong reference to a new static method descriptor. On failure, this function returns\nNULL\nwith an exception set.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 921} +{"url": "https://docs.python.org/3/c-api/iterator.html", "title": "Iterator Objects", "content": "Iterator Objects\u00b6\nPython provides two general-purpose iterator objects. The first, a sequence\niterator, works with an arbitrary sequence supporting the __getitem__()\nmethod. 
The second works with a callable object and a sentinel value, calling\nthe callable for each item in the sequence, and ending the iteration when the\nsentinel value is returned.\n-\nPyTypeObject PySeqIter_Type\u00b6\n- Part of the Stable ABI.\nType object for iterator objects returned by\nPySeqIter_New()\nand the one-argument form of theiter()\nbuilt-in function for built-in sequence types.\n-\nint PySeqIter_Check(PyObject *op)\u00b6\nReturn true if the type of op is\nPySeqIter_Type\n. This function always succeeds.\n-\nPyObject *PySeqIter_New(PyObject *seq)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn an iterator that works with a general sequence object, seq. The iteration ends when the sequence raises\nIndexError\nfor the subscripting operation.\n-\nPyTypeObject PyCallIter_Type\u00b6\n- Part of the Stable ABI.\nType object for iterator objects returned by\nPyCallIter_New()\nand the two-argument form of theiter()\nbuilt-in function.\n-\nint PyCallIter_Check(PyObject *op)\u00b6\nReturn true if the type of op is\nPyCallIter_Type\n. This function always succeeds.\n-\nPyObject *PyCallIter_New(PyObject *callable, PyObject *sentinel)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new iterator. The first parameter, callable, can be any Python callable object that can be called with no parameters; each call to it should return the next item in the iteration. When callable returns a value equal to sentinel, the iteration will be terminated.\nRange Objects\u00b6\n-\nPyTypeObject PyRange_Type\u00b6\n- Part of the Stable ABI.\nThe type object for\nrange\nobjects.\nBuiltin Iterator Types\u00b6\nThese are built-in iteration types that are included in Python\u2019s C API, but provide no additional functions. 
They are here for completeness.\n[Table: built-in C iterator types and their corresponding Python types; the cell contents were not captured.]\nOther Iterator Objects\u00b6\n-\nPyTypeObject PyByteArrayIter_Type\u00b6\n- Part of the Stable ABI.\n-\nPyTypeObject PyBytesIter_Type\u00b6\n- Part of the Stable ABI.\n-\nPyTypeObject PyListIter_Type\u00b6\n- Part of the Stable ABI.\n-\nPyTypeObject PyListRevIter_Type\u00b6\n- Part of the Stable ABI.\n-\nPyTypeObject PySetIter_Type\u00b6\n- Part of the Stable ABI.\n-\nPyTypeObject PyTupleIter_Type\u00b6\n- Part of the Stable ABI.\n-\nPyTypeObject PyRangeIter_Type\u00b6\n- Part of the Stable ABI.\n-\nPyTypeObject PyLongRangeIter_Type\u00b6\n- Part of the Stable ABI.\n-\nPyTypeObject PyDictIterKey_Type\u00b6\n- Part of the Stable ABI.\n-\nPyTypeObject PyDictRevIterKey_Type\u00b6\n- Part of the Stable ABI since version 3.8.\n-\nPyTypeObject PyDictIterValue_Type\u00b6\n- Part of the Stable ABI.\n-\nPyTypeObject PyDictRevIterValue_Type\u00b6\n- Part of the Stable ABI since version 3.8.\n-\nPyTypeObject PyDictIterItem_Type\u00b6\n- Part of the Stable ABI.\n-\nPyTypeObject PyDictRevIterItem_Type\u00b6\n- Part of the Stable ABI since version 3.8.\n-\nPyTypeObject PyODictIter_Type\u00b6\nType objects for iterators of various built-in objects.\nDo not create instances of these directly; prefer calling\nPyObject_GetIter()\ninstead.\nNote that there is no guarantee that a given built-in type uses a given iterator type. For example, iterating over\nrange\nwill use one of two iterator types depending on the size of the range. 
Other types may start using a similar scheme in the future, without warning.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 829} +{"url": "https://docs.python.org/3/library/email.generator.html", "title": ": Generating MIME documents", "content": "email.generator\n: Generating MIME documents\u00b6\nSource code: Lib/email/generator.py\nOne of the most common tasks is to generate the flat (serialized) version of\nthe email message represented by a message object structure. You will need to\ndo this if you want to send your message via smtplib.SMTP.sendmail()\n,\nor print the message on the console. Taking a\nmessage object structure and producing a serialized representation is the job\nof the generator classes.\nAs with the email.parser\nmodule, you aren\u2019t limited to the functionality\nof the bundled generator; you could write one from scratch yourself. However\nthe bundled generator knows how to generate most email in a standards-compliant\nway, should handle MIME and non-MIME email messages just fine, and is designed\nso that the bytes-oriented parsing and generation operations are inverses,\nassuming the same non-transforming policy\nis used for both. That\nis, parsing the serialized byte stream via the\nBytesParser\nclass and then regenerating the serialized\nbyte stream using BytesGenerator\nshould produce output identical to\nthe input [1]. 
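The parse/generate round-trip property described above can be sketched as follows; the sample message bytes are invented for illustration, and both sides use the default (compat32) policy:

```python
import io
from email.parser import BytesParser
from email.generator import BytesGenerator

raw = (b"From: alice@example.com\n"
       b"To: bob@example.com\n"
       b"Subject: hello\n"
       b"\n"
       b"Hi Bob.\n")

# Parse the serialized byte stream, then regenerate it with the same
# non-transforming policy; the unmodified headers and body are emitted
# as they were parsed.
msg = BytesParser().parsebytes(raw)
buf = io.BytesIO()
BytesGenerator(buf).flatten(msg)
flat = buf.getvalue()
```

With the same non-transforming policy on both sides, flat should reproduce raw byte for byte.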
(On the other hand, using the generator on an\nEmailMessage\nconstructed by a program may result in\nchanges to the EmailMessage\nobject as defaults are\nfilled in.)\nThe Generator\nclass can be used to flatten a message into a text (as\nopposed to binary) serialized representation, but since Unicode cannot\nrepresent binary data directly, the message is of necessity transformed into\nsomething that contains only ASCII characters, using the standard email RFC\nContent Transfer Encoding techniques for encoding email messages for transport\nover channels that are not \u201c8 bit clean\u201d.\nTo accommodate reproducible processing of SMIME-signed messages,\nGenerator\ndisables header folding for message parts of type\nmultipart/signed\nand all subparts.\n- class email.generator.BytesGenerator(outfp, mangle_from_=None, maxheaderlen=None, *, policy=None)\u00b6\nReturn a\nBytesGenerator\nobject that will write any message provided to the flatten()\nmethod, or any surrogateescape encoded text provided to the write()\nmethod, to the file-like object outfp. outfp must support a write\nmethod that accepts binary data.\nIf optional mangle_from_ is\nTrue\n, put a >\ncharacter in front of any line in the body that starts with the exact string \"From \"\n, that is From\nfollowed by a space at the beginning of a line. mangle_from_ defaults to the value of the\nmangle_from_\nsetting of the policy (which is True\nfor the compat32\npolicy and False\nfor all others). mangle_from_ is intended for use when messages are stored in Unix mbox format (see mailbox\nand WHY THE CONTENT-LENGTH FORMAT IS BAD).\nIf maxheaderlen is not\nNone\n, refold any header lines that are longer than maxheaderlen, or if 0\n, do not rewrap any headers. If maxheaderlen is None\n(the default), wrap headers and other message lines according to the policy settings.\nIf policy is specified, use that policy to control message generation. 
If policy is\nNone\n(the default), use the policy associated with theMessage\norEmailMessage\nobject passed toflatten\nto control the message generation. Seeemail.policy\nfor details on what policy controls.Added in version 3.2.\nChanged in version 3.3: Added the policy keyword.\nChanged in version 3.6: The default behavior of the mangle_from_ and maxheaderlen parameters is to follow the policy.\n- flatten(msg, unixfrom=False, linesep=None)\u00b6\nPrint the textual representation of the message object structure rooted at msg to the output file specified when the\nBytesGenerator\ninstance was created.If the\npolicy\noptioncte_type\nis8bit\n(the default), copy any headers in the original parsed message that have not been modified to the output with any bytes with the high bit set reproduced as in the original, and preserve the non-ASCII Content-Transfer-Encoding of any body parts that have them. Ifcte_type\nis7bit\n, convert the bytes with the high bit set as needed using an ASCII-compatible Content-Transfer-Encoding. That is, transform parts with non-ASCII Content-Transfer-Encoding (Content-Transfer-Encoding: 8bit) to an ASCII compatible Content-Transfer-Encoding, and encode RFC-invalid non-ASCII bytes in headers using the MIMEunknown-8bit\ncharacter set, thus rendering them RFC-compliant.If unixfrom is\nTrue\n, print the envelope header delimiter used by the Unix mailbox format (seemailbox\n) before the first of the RFC 5322 headers of the root message object. If the root object has no envelope header, craft a standard one. The default isFalse\n. Note that for subparts, no envelope header is ever printed.If linesep is not\nNone\n, use it as the separator character between all the lines of the flattened message. 
If linesep isNone\n(the default), use the value specified in the policy.\n- clone(fp)\u00b6\nReturn an independent clone of this\nBytesGenerator\ninstance with the exact same option settings, and fp as the new outfp.\n- write(s)\u00b6\nEncode s using the\nASCII\ncodec and thesurrogateescape\nerror handler, and pass it to the write method of the outfp passed to theBytesGenerator\n\u2019s constructor.\nAs a convenience, EmailMessage\nprovides the methods\nas_bytes()\nand bytes(aMessage)\n(a.k.a.\n__bytes__()\n), which simplify the generation of\na serialized binary representation of a message object. For more detail, see\nemail.message\n.\nBecause strings cannot represent binary data, the Generator\nclass must\nconvert any binary data in any message it flattens to an ASCII compatible\nformat, by converting them to an ASCII compatible\nContent-Transfer_Encoding. Using the terminology of the email\nRFCs, you can think of this as Generator\nserializing to an I/O stream\nthat is not \u201c8 bit clean\u201d. In other words, most applications will want\nto be using BytesGenerator\n, and not Generator\n.\n- class email.generator.Generator(outfp, mangle_from_=None, maxheaderlen=None, *, policy=None)\u00b6\nReturn a\nGenerator\nobject that will write any message provided to theflatten()\nmethod, or any text provided to thewrite()\nmethod, to the file-like object outfp. outfp must support awrite\nmethod that accepts string data.If optional mangle_from_ is\nTrue\n, put a>\ncharacter in front of any line in the body that starts with the exact string\"From \"\n, that isFrom\nfollowed by a space at the beginning of a line. mangle_from_ defaults to the value of themangle_from_\nsetting of the policy (which isTrue\nfor thecompat32\npolicy andFalse\nfor all others). 
mangle_from_ is intended for use when messages are stored in Unix mbox format (see mailbox\nand WHY THE CONTENT-LENGTH FORMAT IS BAD).\nIf maxheaderlen is not\nNone\n, refold any header lines that are longer than maxheaderlen, or if 0\n, do not rewrap any headers. If maxheaderlen is None\n(the default), wrap headers and other message lines according to the policy settings.\nIf policy is specified, use that policy to control message generation. If policy is\nNone\n(the default), use the policy associated with the\nMessage\nor EmailMessage\nobject passed to flatten\nto control the message generation. See email.policy\nfor details on what policy controls.\nChanged in version 3.3: Added the policy keyword.\nChanged in version 3.6: The default behavior of the mangle_from_ and maxheaderlen parameters is to follow the policy.\n- flatten(msg, unixfrom=False, linesep=None)\u00b6\nPrint the textual representation of the message object structure rooted at msg to the output file specified when the\nGenerator\ninstance was created.\nIf the\npolicy\noption cte_type\nis 8bit\n, generate the message as if the option were set to 7bit\n. (This is required because strings cannot represent non-ASCII bytes.) Convert any bytes with the high bit set as needed using an ASCII-compatible Content-Transfer-Encoding. That is, transform parts with non-ASCII Content-Transfer-Encoding (Content-Transfer-Encoding: 8bit) to an ASCII compatible Content-Transfer-Encoding, and encode RFC-invalid non-ASCII bytes in headers using the MIME unknown-8bit\ncharacter set, thus rendering them RFC-compliant.\nIf unixfrom is\nTrue\n, print the envelope header delimiter used by the Unix mailbox format (see mailbox\n) before the first of the RFC 5322 headers of the root message object. If the root object has no envelope header, craft a standard one. The default is False\n. Note that for subparts, no envelope header is ever printed.\nIf linesep is not\nNone\n, use it as the separator character between all the lines of the flattened message. 
If linesep isNone\n(the default), use the value specified in the policy.Changed in version 3.2: Added support for re-encoding\n8bit\nmessage bodies, and the linesep argument.\nAs a convenience, EmailMessage\nprovides the methods\nas_string()\nand str(aMessage)\n(a.k.a.\n__str__()\n), which simplify the generation of\na formatted string representation of a message object. For more detail, see\nemail.message\n.\nThe email.generator\nmodule also provides a derived class,\nDecodedGenerator\n, which is like the Generator\nbase class,\nexcept that non-text parts are not serialized, but are instead\nrepresented in the output stream by a string derived from a template filled\nin with information about the part.\n- class email.generator.DecodedGenerator(outfp, mangle_from_=None, maxheaderlen=None, fmt=None, *, policy=None)\u00b6\nAct like\nGenerator\n, except that for any subpart of the message passed toGenerator.flatten()\n, if the subpart is of main type text, print the decoded payload of the subpart, and if the main type is not text, instead of printing it fill in the string fmt using information from the part and print the resulting filled-in string.To fill in fmt, execute\nfmt % part_info\n, wherepart_info\nis a dictionary composed of the following keys and values:type\n\u2013 Full MIME type of the non-text partmaintype\n\u2013 Main MIME type of the non-text partsubtype\n\u2013 Sub-MIME type of the non-text partfilename\n\u2013 Filename of the non-text partdescription\n\u2013 Description associated with the non-text partencoding\n\u2013 Content transfer encoding of the non-text part\nIf fmt is\nNone\n, use the following default fmt:\u201c[Non-text (%(type)s) part of message omitted, filename %(filename)s]\u201d\nOptional _mangle_from_ and maxheaderlen are as with the\nGenerator\nbase class.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2513} +{"url": "https://docs.python.org/3/library/email.compat32-message.html", 
"title": "email.message.Message: Representing an email message using the compat32 API", "content": "email.message.Message\n: Representing an email message using the compat32\nAPI\u00b6\nThe Message\nclass is very similar to the\nEmailMessage\nclass, without the methods added by that\nclass, and with the default behavior of certain other methods being slightly\ndifferent. We also document here some methods that, while supported by the\nEmailMessage\nclass, are not recommended unless you are\ndealing with legacy code.\nThe philosophy and structure of the two classes is otherwise the same.\nThis document describes the behavior under the default (for Message\n)\npolicy Compat32\n. If you are going to use another policy,\nyou should be using the EmailMessage\nclass instead.\nAn email message consists of headers and a payload. Headers must be RFC 5322 style names and values, where the field name and value are separated by a colon. The colon is not part of either the field name or the field value. The payload may be a simple text message, or a binary object, or a structured sequence of sub-messages each with their own set of headers and their own payload. The latter type of payload is indicated by the message having a MIME type such as multipart/* or message/rfc822.\nThe conceptual model provided by a Message\nobject is that of an\nordered dictionary of headers with additional methods for accessing both\nspecialized information from the headers, for accessing the payload, for\ngenerating a serialized version of the message, and for recursively walking\nover the object tree. Note that duplicate headers are supported but special\nmethods must be used to access them.\nThe Message\npseudo-dictionary is indexed by the header names, which\nmust be ASCII values. The values of the dictionary are strings that are\nsupposed to contain only ASCII characters; there is some special handling for\nnon-ASCII input, but it doesn\u2019t always produce the correct results. 
Headers\nare stored and returned in case-preserving form, but field names are matched\ncase-insensitively. There may also be a single envelope header, also known as\nthe Unix-From header or the From_\nheader. The payload is either a\nstring or bytes, in the case of simple message objects, or a list of\nMessage\nobjects, for MIME container documents (e.g.\nmultipart/* and message/rfc822).\nHere are the methods of the Message\nclass:\n- class email.message.Message(policy=compat32)\u00b6\nIf policy is specified (it must be an instance of a\npolicy\nclass) use the rules it specifies to update and serialize the representation of the message. If policy is not set, use thecompat32\npolicy, which maintains backward compatibility with the Python 3.2 version of the email package. For more information see thepolicy\ndocumentation.Changed in version 3.3: The policy keyword argument was added.\n- as_string(unixfrom=False, maxheaderlen=0, policy=None)\u00b6\nReturn the entire message flattened as a string. When optional unixfrom is true, the envelope header is included in the returned string. unixfrom defaults to\nFalse\n. For backward compatibility reasons, maxheaderlen defaults to0\n, so if you want a different value you must override it explicitly (the value specified for max_line_length in the policy will be ignored by this method). The policy argument may be used to override the default policy obtained from the message instance. This can be used to control some of the formatting produced by the method, since the specified policy will be passed to theGenerator\n.Flattening the message may trigger changes to the\nMessage\nif defaults need to be filled in to complete the transformation to a string (for example, MIME boundaries may be generated or modified).Note that this method is provided as a convenience and may not always format the message the way you want. 
For example, by default it does not do the mangling of lines that begin with\nFrom\nthat is required by the Unix mbox format. For more flexibility, instantiate aGenerator\ninstance and use itsflatten()\nmethod directly. For example:from io import StringIO\nfrom email.generator import Generator\nfp = StringIO()\ng = Generator(fp, mangle_from_=True, maxheaderlen=60)\ng.flatten(msg)\ntext = fp.getvalue()\nIf the message object contains binary data that is not encoded according to RFC standards, the non-compliant data will be replaced by unicode \u201cunknown character\u201d code points. (See also\nas_bytes()\nandBytesGenerator\n.)Changed in version 3.4: the policy keyword argument was added.\n- __str__()\u00b6\nEquivalent to\nas_string()\n. Allowsstr(msg)\nto produce a string containing the formatted message.\n- as_bytes(unixfrom=False, policy=None)\u00b6\nReturn the entire message flattened as a bytes object. When optional unixfrom is true, the envelope header is included in the returned string. unixfrom defaults to\nFalse\n. The policy argument may be used to override the default policy obtained from the message instance. This can be used to control some of the formatting produced by the method, since the specified policy will be passed to theBytesGenerator\n.Flattening the message may trigger changes to the\nMessage\nif defaults need to be filled in to complete the transformation to a string (for example, MIME boundaries may be generated or modified).Note that this method is provided as a convenience and may not always format the message the way you want. For example, by default it does not do the mangling of lines that begin with\nFrom\nthat is required by the Unix mbox format. For more flexibility, instantiate aBytesGenerator\ninstance and use itsflatten()\nmethod directly. 
For example:from io import BytesIO\nfrom email.generator import BytesGenerator\nfp = BytesIO()\ng = BytesGenerator(fp, mangle_from_=True, maxheaderlen=60)\ng.flatten(msg)\ntext = fp.getvalue()\nAdded in version 3.4.\n- __bytes__()\u00b6\nEquivalent to\nas_bytes()\n. Allowsbytes(msg)\nto produce a bytes object containing the formatted message.Added in version 3.4.\n- is_multipart()\u00b6\nReturn\nTrue\nif the message\u2019s payload is a list of sub-Message\nobjects, otherwise returnFalse\n. Whenis_multipart()\nreturnsFalse\n, the payload should be a string object (which might be a CTE encoded binary payload). (Note thatis_multipart()\nreturningTrue\ndoes not necessarily mean that \u201cmsg.get_content_maintype() == \u2018multipart\u2019\u201d will returnTrue\n. For example,is_multipart\nwill returnTrue\nwhen theMessage\nis of typemessage/rfc822\n.)\n- set_unixfrom(unixfrom)\u00b6\nSet the message\u2019s envelope header to unixfrom, which should be a string.\n- get_unixfrom()\u00b6\nReturn the message\u2019s envelope header. Defaults to\nNone\nif the envelope header was never set.\n- attach(payload)\u00b6\nAdd the given payload to the current payload, which must be\nNone\nor a list ofMessage\nobjects before the call. After the call, the payload will always be a list ofMessage\nobjects. If you want to set the payload to a scalar object (e.g. a string), useset_payload()\ninstead.This is a legacy method. On the\nEmailMessage\nclass its functionality is replaced byset_content()\nand the relatedmake\nandadd\nmethods.\n- get_payload(i=None, decode=False)\u00b6\nReturn the current payload, which will be a list of\nMessage\nobjects whenis_multipart()\nisTrue\n, or a string whenis_multipart()\nisFalse\n. If the payload is a list and you mutate the list object, you modify the message\u2019s payload in place.With optional argument i,\nget_payload()\nwill return the i-th element of the payload, counting from zero, ifis_multipart()\nisTrue\n. 
AnIndexError\nwill be raised if i is less than 0 or greater than or equal to the number of items in the payload. If the payload is a string (i.e.is_multipart()\nisFalse\n) and i is given, aTypeError\nis raised.Optional decode is a flag indicating whether the payload should be decoded or not, according to the Content-Transfer-Encoding header. When\nTrue\nand the message is not a multipart, the payload will be decoded if this header\u2019s value isquoted-printable\norbase64\n. If some other encoding is used, or Content-Transfer-Encoding header is missing, the payload is returned as-is (undecoded). In all cases the returned value is binary data. If the message is a multipart and the decode flag isTrue\n, thenNone\nis returned. If the payload is base64 and it was not perfectly formed (missing padding, characters outside the base64 alphabet), then an appropriate defect will be added to the message\u2019s defect property (InvalidBase64PaddingDefect\norInvalidBase64CharactersDefect\n, respectively).When decode is\nFalse\n(the default) the body is returned as a string without decoding the Content-Transfer-Encoding. However, for a Content-Transfer-Encoding of 8bit, an attempt is made to decode the original bytes using thecharset\nspecified by the Content-Type header, using thereplace\nerror handler. If nocharset\nis specified, or if thecharset\ngiven is not recognized by the email package, the body is decoded using the default ASCII charset.This is a legacy method. On the\nEmailMessage\nclass its functionality is replaced byget_content()\nanditer_parts()\n.\n- set_payload(payload, charset=None)\u00b6\nSet the entire message object\u2019s payload to payload. It is the client\u2019s responsibility to ensure the payload invariants. Optional charset sets the message\u2019s default character set; see\nset_charset()\nfor details.This is a legacy method. 
On the\nEmailMessage\nclass its functionality is replaced byset_content()\n.\n- set_charset(charset)\u00b6\nSet the character set of the payload to charset, which can either be a\nCharset\ninstance (seeemail.charset\n), a string naming a character set, orNone\n. If it is a string, it will be converted to aCharset\ninstance. If charset isNone\n, thecharset\nparameter will be removed from the Content-Type header (the message will not be otherwise modified). Anything else will generate aTypeError\n.If there is no existing MIME-Version header one will be added. If there is no existing Content-Type header, one will be added with a value of text/plain. Whether the Content-Type header already exists or not, its\ncharset\nparameter will be set to charset.output_charset. If charset.input_charset and charset.output_charset differ, the payload will be re-encoded to the output_charset. If there is no existing Content-Transfer-Encoding header, then the payload will be transfer-encoded, if needed, using the specifiedCharset\n, and a header with the appropriate value will be added. If a Content-Transfer-Encoding header already exists, the payload is assumed to already be correctly encoded using that Content-Transfer-Encoding and is not modified.This is a legacy method. On the\nEmailMessage\nclass its functionality is replaced by the charset parameter of theemail.message.EmailMessage.set_content()\nmethod.\n- get_charset()\u00b6\nReturn the\nCharset\ninstance associated with the message\u2019s payload.This is a legacy method. On the\nEmailMessage\nclass it always returnsNone\n.\nThe following methods implement a mapping-like interface for accessing the message\u2019s RFC 2822 headers. Note that there are some semantic differences between these methods and a normal mapping (i.e. dictionary) interface. For example, in a dictionary there are no duplicate keys, but here there may be duplicate message headers. 
Also, in dictionaries there is no guaranteed order to the keys returned by\nkeys()\n, but in aMessage\nobject, headers are always returned in the order they appeared in the original message, or were added to the message later. Any header deleted and then re-added are always appended to the end of the header list.These semantic differences are intentional and are biased toward maximal convenience.\nNote that in all cases, any envelope header present in the message is not included in the mapping interface.\nIn a model generated from bytes, any header values that (in contravention of the RFCs) contain non-ASCII bytes will, when retrieved through this interface, be represented as\nHeader\nobjects with a charset ofunknown-8bit\n.- __len__()\u00b6\nReturn the total number of headers, including duplicates.\n- __contains__(name)\u00b6\nReturn\nTrue\nif the message object has a field named name. Matching is done case-insensitively and name should not include the trailing colon. Used for thein\noperator, e.g.:if 'message-id' in myMessage: print('Message-ID:', myMessage['message-id'])\n- __getitem__(name)\u00b6\nReturn the value of the named header field. name should not include the colon field separator. If the header is missing,\nNone\nis returned; aKeyError\nis never raised.Note that if the named field appears more than once in the message\u2019s headers, exactly which of those field values will be returned is undefined. Use the\nget_all()\nmethod to get the values of all the extant named headers.\n- __setitem__(name, val)\u00b6\nAdd a header to the message with field name name and value val. The field is appended to the end of the message\u2019s existing fields.\nNote that this does not overwrite or delete any existing header with the same name. 
If you want to ensure that the new header is the only one present in the message with field name name, delete the field first, e.g.:\ndel msg['subject'] msg['subject'] = 'Python roolz!'\n- __delitem__(name)\u00b6\nDelete all occurrences of the field with name name from the message\u2019s headers. No exception is raised if the named field isn\u2019t present in the headers.\n- keys()\u00b6\nReturn a list of all the message\u2019s header field names.\n- values()\u00b6\nReturn a list of all the message\u2019s field values.\n- items()\u00b6\nReturn a list of 2-tuples containing all the message\u2019s field headers and values.\n- get(name, failobj=None)\u00b6\nReturn the value of the named header field. This is identical to\n__getitem__()\nexcept that optional failobj is returned if the named header is missing (defaults toNone\n).\nHere are some additional useful methods:\n- get_all(name, failobj=None)\u00b6\nReturn a list of all the values for the field named name. If there are no such named headers in the message, failobj is returned (defaults to\nNone\n).\n- add_header(_name, _value, **_params)\u00b6\nExtended header setting. This method is similar to\n__setitem__()\nexcept that additional header parameters can be provided as keyword arguments. _name is the header field to add and _value is the primary value for the header.For each item in the keyword argument dictionary _params, the key is taken as the parameter name, with underscores converted to dashes (since dashes are illegal in Python identifiers). Normally, the parameter will be added as\nkey=\"value\"\nunless the value isNone\n, in which case only the key will be added. 
If the value contains non-ASCII characters, it can be specified as a three tuple in the format(CHARSET, LANGUAGE, VALUE)\n, whereCHARSET\nis a string naming the charset to be used to encode the value,LANGUAGE\ncan usually be set toNone\nor the empty string (see RFC 2231 for other possibilities), andVALUE\nis the string value containing non-ASCII code points. If a three tuple is not passed and the value contains non-ASCII characters, it is automatically encoded in RFC 2231 format using aCHARSET\nofutf-8\nand aLANGUAGE\nofNone\n.Here\u2019s an example:\nmsg.add_header('Content-Disposition', 'attachment', filename='bud.gif')\nThis will add a header that looks like\nContent-Disposition: attachment; filename=\"bud.gif\"\nAn example with non-ASCII characters:\nmsg.add_header('Content-Disposition', 'attachment', filename=('iso-8859-1', '', 'Fu\u00dfballer.ppt'))\nWhich produces\nContent-Disposition: attachment; filename*=\"iso-8859-1''Fu%DFballer.ppt\"\n- replace_header(_name, _value)\u00b6\nReplace a header. Replace the first header found in the message that matches _name, retaining header order and field name case. If no matching header was found, a\nKeyError\nis raised.\n- get_content_type()\u00b6\nReturn the message\u2019s content type. The returned string is coerced to lower case of the form maintype/subtype. If there was no Content-Type header in the message the default type as given by\nget_default_type()\nwill be returned. Since according to RFC 2045, messages always have a default type,get_content_type()\nwill always return a value.RFC 2045 defines a message\u2019s default type to be text/plain unless it appears inside a multipart/digest container, in which case it would be message/rfc822. If the Content-Type header has an invalid type specification, RFC 2045 mandates that the default type be text/plain.\n- get_content_maintype()\u00b6\nReturn the message\u2019s main content type. 
This is the maintype part of the string returned by\nget_content_type()\n.\n- get_content_subtype()\u00b6\nReturn the message\u2019s sub-content type. This is the subtype part of the string returned by\nget_content_type()\n.\n- get_default_type()\u00b6\nReturn the default content type. Most messages have a default content type of text/plain, except for messages that are subparts of multipart/digest containers. Such subparts have a default content type of message/rfc822.\n- set_default_type(ctype)\u00b6\nSet the default content type. ctype should either be text/plain or message/rfc822, although this is not enforced. The default content type is not stored in the Content-Type header.\n- get_params(failobj=None, header='content-type', unquote=True)\u00b6\nReturn the message\u2019s Content-Type parameters, as a list. The elements of the returned list are 2-tuples of key/value pairs, as split on the\n'='\nsign. The left hand side of the'='\nis the key, while the right hand side is the value. If there is no'='\nsign in the parameter the value is the empty string, otherwise the value is as described inget_param()\nand is unquoted if optional unquote isTrue\n(the default).Optional failobj is the object to return if there is no Content-Type header. Optional header is the header to search instead of Content-Type.\nThis is a legacy method. On the\nEmailMessage\nclass its functionality is replaced by the params property of the individual header objects returned by the header access methods.\n- get_param(param, failobj=None, header='content-type', unquote=True)\u00b6\nReturn the value of the Content-Type header\u2019s parameter param as a string. If the message has no Content-Type header or if there is no such parameter, then failobj is returned (defaults to\nNone\n).Optional header if given, specifies the message header to use instead of Content-Type.\nParameter keys are always compared case insensitively. 
The return value can either be a string, or a 3-tuple if the parameter was RFC 2231 encoded. When it\u2019s a 3-tuple, the elements of the value are of the form\n(CHARSET, LANGUAGE, VALUE)\n. Note that bothCHARSET\nandLANGUAGE\ncan beNone\n, in which case you should considerVALUE\nto be encoded in theus-ascii\ncharset. You can usually ignoreLANGUAGE\n.If your application doesn\u2019t care whether the parameter was encoded as in RFC 2231, you can collapse the parameter value by calling\nemail.utils.collapse_rfc2231_value()\n, passing in the return value fromget_param()\n. This will return a suitably decoded Unicode string when the value is a tuple, or the original string unquoted if it isn\u2019t. For example:rawparam = msg.get_param('foo')\nparam = email.utils.collapse_rfc2231_value(rawparam)\nIn any case, the parameter value (either the returned string, or the\nVALUE\nitem in the 3-tuple) is always unquoted, unless unquote is set toFalse\n.This is a legacy method. On the\nEmailMessage\nclass its functionality is replaced by the params property of the individual header objects returned by the header access methods.\n- set_param(param, value, header='Content-Type', requote=True, charset=None, language='', replace=False)\u00b6\nSet a parameter in the Content-Type header. If the parameter already exists in the header, its value will be replaced with value. If the Content-Type header has not yet been defined for this message, it will be set to text/plain and the new parameter value will be appended as per RFC 2045.\nOptional header specifies an alternative header to Content-Type, and all parameters will be quoted as necessary unless optional requote is\nFalse\n(the default isTrue\n).If optional charset is specified, the parameter will be encoded according to RFC 2231. Optional language specifies the RFC 2231 language, defaulting to the empty string. 
Both charset and language should be strings.\nIf replace is\nFalse\n(the default) the header is moved to the end of the list of headers. If replace isTrue\n, the header will be updated in place.Changed in version 3.4:\nreplace\nkeyword was added.\n- del_param(param, header='content-type', requote=True)\u00b6\nRemove the given parameter completely from the Content-Type header. The header will be re-written in place without the parameter or its value. All values will be quoted as necessary unless requote is\nFalse\n(the default isTrue\n). Optional header specifies an alternative to Content-Type.\n- set_type(type, header='Content-Type', requote=True)\u00b6\nSet the main type and subtype for the Content-Type header. type must be a string in the form maintype/subtype, otherwise a\nValueError\nis raised.This method replaces the Content-Type header, keeping all the parameters in place. If requote is\nFalse\n, this leaves the existing header\u2019s quoting as is, otherwise the parameters will be quoted (the default).An alternative header can be specified in the header argument. When the Content-Type header is set a MIME-Version header is also added.\nThis is a legacy method. On the\nEmailMessage\nclass its functionality is replaced by themake_\nandadd_\nmethods.\n- get_filename(failobj=None)\u00b6\nReturn the value of the\nfilename\nparameter of the Content-Disposition header of the message. If the header does not have afilename\nparameter, this method falls back to looking for thename\nparameter on the Content-Type header. If neither is found, or the header is missing, then failobj is returned. The returned string will always be unquoted as peremail.utils.unquote()\n.\n- get_boundary(failobj=None)\u00b6\nReturn the value of the\nboundary\nparameter of the Content-Type header of the message, or failobj if either the header is missing, or has noboundary\nparameter. 
The returned string will always be unquoted as peremail.utils.unquote()\n.\n- set_boundary(boundary)\u00b6\nSet the\nboundary\nparameter of the Content-Type header to boundary.set_boundary()\nwill always quote boundary if necessary. AHeaderParseError\nis raised if the message object has no Content-Type header.Note that using this method is subtly different than deleting the old Content-Type header and adding a new one with the new boundary via\nadd_header()\n, becauseset_boundary()\npreserves the order of the Content-Type header in the list of headers. However, it does not preserve any continuation lines which may have been present in the original Content-Type header.\n- get_content_charset(failobj=None)\u00b6\nReturn the\ncharset\nparameter of the Content-Type header, coerced to lower case. If there is no Content-Type header, or if that header has nocharset\nparameter, failobj is returned.Note that this method differs from\nget_charset()\nwhich returns theCharset\ninstance for the default encoding of the message body.\n- get_charsets(failobj=None)\u00b6\nReturn a list containing the character set names in the message. If the message is a multipart, then the list will contain one element for each subpart in the payload, otherwise, it will be a list of length 1.\nEach item in the list will be a string which is the value of the\ncharset\nparameter in the Content-Type header for the represented subpart. However, if the subpart has no Content-Type header, nocharset\nparameter, or is not of the text main MIME type, then that item in the returned list will be failobj.\n- get_content_disposition()\u00b6\nReturn the lowercased value (without parameters) of the message\u2019s Content-Disposition header if it has one, or\nNone\n. 
The possible values for this method are inline, attachment orNone\nif the message follows RFC 2183.Added in version 3.5.\n- walk()\u00b6\nThe\nwalk()\nmethod is an all-purpose generator which can be used to iterate over all the parts and subparts of a message object tree, in depth-first traversal order. You will typically usewalk()\nas the iterator in afor\nloop; each iteration returns the next subpart.Here\u2019s an example that prints the MIME type of every part of a multipart message structure:\n>>> for part in msg.walk(): ... print(part.get_content_type()) multipart/report text/plain message/delivery-status text/plain text/plain message/rfc822 text/plain\nwalk\niterates over the subparts of any part whereis_multipart()\nreturnsTrue\n, even thoughmsg.get_content_maintype() == 'multipart'\nmay returnFalse\n. We can see this in our example by making use of the_structure\ndebug helper function:>>> for part in msg.walk(): ... print(part.get_content_maintype() == 'multipart', ... part.is_multipart()) True True False False False True False False False False False True False False >>> _structure(msg) multipart/report text/plain message/delivery-status text/plain text/plain message/rfc822 text/plain\nHere the\nmessage\nparts are notmultiparts\n, but they do contain subparts.is_multipart()\nreturnsTrue\nandwalk\ndescends into the subparts.\nMessage\nobjects can also optionally contain two instance attributes, which can be used when generating the plain text of a MIME message.- preamble\u00b6\nThe format of a MIME document allows for some text between the blank line following the headers, and the first multipart boundary string. Normally, this text is never visible in a MIME-aware mail reader because it falls outside the standard MIME armor. However, when viewing the raw text of the message, or when viewing the message in a non-MIME aware reader, this text can become visible.\nThe preamble attribute contains this leading extra-armor text for MIME documents. 
When the\nParser\ndiscovers some text after the headers but before the first boundary string, it assigns this text to the message\u2019s preamble attribute. When theGenerator\nis writing out the plain text representation of a MIME message, and it finds the message has a preamble attribute, it will write this text in the area between the headers and the first boundary. Seeemail.parser\nandemail.generator\nfor details.Note that if the message object has no preamble, the preamble attribute will be\nNone\n.\n- epilogue\u00b6\nThe epilogue attribute acts the same way as the preamble attribute, except that it contains text that appears between the last boundary and the end of the message.\nYou do not need to set the epilogue to the empty string in order for the\nGenerator\nto print a newline at the end of the file.\n- defects\u00b6\nThe defects attribute contains a list of all the problems found when parsing this message. See\nemail.errors\nfor a detailed description of the possible parsing defects.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 6561} +{"url": "https://docs.python.org/3/library/email.parser.html", "title": "email.parser: Parsing email messages", "content": "email.parser\n: Parsing email messages\u00b6\nSource code: Lib/email/parser.py\nMessage object structures can be created in one of two ways: they can be\ncreated from whole cloth by creating an EmailMessage\nobject, adding headers using the dictionary interface, and adding payload(s)\nusing set_content()\nand related methods, or\nthey can be created by parsing a serialized representation of the email\nmessage.\nThe email\npackage provides a 
standard parser that understands most email\ndocument structures, including MIME documents. You can pass the parser a\nbytes, string or file object, and the parser will return to you the root\nEmailMessage\ninstance of the object structure. For\nsimple, non-MIME messages the payload of this root object will likely be a\nstring containing the text of the message. For MIME messages, the root object\nwill return True\nfrom its is_multipart()\nmethod, and the subparts can be accessed via the payload manipulation methods,\nsuch as get_body()\n,\niter_parts()\n, and\nwalk()\n.\nThere are actually two parser interfaces available for use, the Parser\nAPI and the incremental FeedParser\nAPI. The Parser\nAPI is\nmost useful if you have the entire text of the message in memory, or if the\nentire message lives in a file on the file system. FeedParser\nis more\nappropriate when you are reading the message from a stream which might block\nwaiting for more input (such as reading an email message from a socket). The\nFeedParser\ncan consume and parse the message incrementally, and only\nreturns the root object when you close the parser.\nNote that the parser can be extended in limited ways, and of course you can\nimplement your own parser completely from scratch. All of the logic that\nconnects the email\npackage\u2019s bundled parser and the\nEmailMessage\nclass is embodied in the Policy\nclass, so a custom parser can create message object trees any way it finds\nnecessary by implementing custom versions of the appropriate Policy\nmethods.\nFeedParser API\u00b6\nThe BytesFeedParser\n, imported from the email.feedparser\nmodule,\nprovides an API that is conducive to incremental parsing of email messages,\nsuch as would be necessary when reading the text of an email message from a\nsource that can block (such as a socket). 
The BytesFeedParser\ncan of\ncourse be used to parse an email message fully contained in a bytes-like\nobject, string, or file, but the BytesParser\nAPI may be more\nconvenient for such use cases. The semantics and results of the two parser\nAPIs are identical.\nThe BytesFeedParser\n\u2019s API is simple; you create an instance, feed it a\nbunch of bytes until there\u2019s no more to feed it, then close the parser to\nretrieve the root message object. The BytesFeedParser\nis extremely\naccurate when parsing standards-compliant messages, and it does a very good job\nof parsing non-compliant messages, providing information about how a message\nwas deemed broken. It will populate a message object\u2019s\ndefects\nattribute with a list of any\nproblems it found in a message. See the email.errors\nmodule for the\nlist of defects that it can find.\nHere is the API for the BytesFeedParser\n:\n- class email.parser.BytesFeedParser(_factory=None, *, policy=policy.compat32)\u00b6\nCreate a\nBytesFeedParser\ninstance. Optional _factory is a no-argument callable; if not specified use themessage_factory\nfrom the policy. Call _factory whenever a new message object is needed.If policy is specified use the rules it specifies to update the representation of the message. If policy is not set, use the\ncompat32\npolicy, which maintains backward compatibility with the Python 3.2 version of the email package and providesMessage\nas the default factory. All other policies provideEmailMessage\nas the default _factory. For more information on what else policy controls, see thepolicy\ndocumentation.Note: The policy keyword should always be specified; The default will change to\nemail.policy.default\nin a future version of Python.Added in version 3.2.\nChanged in version 3.3: Added the policy keyword.\nChanged in version 3.6: _factory defaults to the policy\nmessage_factory\n.- feed(data)\u00b6\nFeed the parser some more data. data should be a bytes-like object containing one or more lines. 
The lines can be partial and the parser will stitch such partial lines together properly. The lines can have any of the three common line endings: carriage return, newline, or carriage return and newline (they can even be mixed).\n- class email.parser.FeedParser(_factory=None, *, policy=policy.compat32)\u00b6\nWorks like\nBytesFeedParser\nexcept that the input to thefeed()\nmethod must be a string. This is of limited utility, since the only way for such a message to be valid is for it to contain only ASCII text or, ifutf8\nisTrue\n, no binary attachments.Changed in version 3.3: Added the policy keyword.\nParser API\u00b6\nThe BytesParser\nclass, imported from the email.parser\nmodule,\nprovides an API that can be used to parse a message when the complete contents\nof the message are available in a bytes-like object or file. The\nemail.parser\nmodule also provides Parser\nfor parsing strings,\nand header-only parsers, BytesHeaderParser\nand\nHeaderParser\n, which can be used if you\u2019re only interested in the\nheaders of the message. BytesHeaderParser\nand HeaderParser\ncan be much faster in these situations, since they do not attempt to parse the\nmessage body, instead setting the payload to the raw body.\n- class email.parser.BytesParser(_class=None, *, policy=policy.compat32)\u00b6\nCreate a\nBytesParser\ninstance. The _class and policy arguments have the same meaning and semantics as the _factory and policy arguments ofBytesFeedParser\n.Note: The policy keyword should always be specified; The default will change to\nemail.policy.default\nin a future version of Python.Changed in version 3.3: Removed the strict argument that was deprecated in 2.4. Added the policy keyword.\nChanged in version 3.6: _class defaults to the policy\nmessage_factory\n.- parse(fp, headersonly=False)\u00b6\nRead all the data from the binary file-like object fp, parse the resulting bytes, and return the message object. 
fp must support both the\nreadline()\nand theread()\nmethods.The bytes contained in fp must be formatted as a block of RFC 5322 (or, if\nutf8\nisTrue\n, RFC 6532) style headers and header continuation lines, optionally preceded by an envelope header. The header block is terminated either by the end of the data or by a blank line. Following the header block is the body of the message (which may contain MIME-encoded subparts, including subparts with a Content-Transfer-Encoding of8bit\n).Optional headersonly is a flag specifying whether to stop parsing after reading the headers or not. The default is\nFalse\n, meaning it parses the entire contents of the file.\n- parsebytes(bytes, headersonly=False)\u00b6\nSimilar to the\nparse()\nmethod, except it takes a bytes-like object instead of a file-like object. Calling this method on a bytes-like object is equivalent to wrapping bytes in aBytesIO\ninstance first and callingparse()\n.Optional headersonly is as with the\nparse()\nmethod.\nAdded in version 3.2.\n- class email.parser.BytesHeaderParser(_class=None, *, policy=policy.compat32)\u00b6\nExactly like\nBytesParser\n, except that headersonly defaults toTrue\n.Added in version 3.3.\n- class email.parser.Parser(_class=None, *, policy=policy.compat32)\u00b6\nThis class is parallel to\nBytesParser\n, but handles string input.Changed in version 3.3: Removed the strict argument. Added the policy keyword.\nChanged in version 3.6: _class defaults to the policy\nmessage_factory\n.- parse(fp, headersonly=False)\u00b6\nRead all the data from the text-mode file-like object fp, parse the resulting text, and return the root message object. 
fp must support both the\nreadline()\nand theread()\nmethods on file-like objects.Other than the text mode requirement, this method operates like\nBytesParser.parse()\n.\n- class email.parser.HeaderParser(_class=None, *, policy=policy.compat32)\u00b6\nExactly like\nParser\n, except that headersonly defaults toTrue\n.\nSince creating a message object structure from a string or a file object is such\na common task, four functions are provided as a convenience. They are available\nin the top-level email\npackage namespace.\n- email.message_from_bytes(s, _class=None, *, policy=policy.compat32)\u00b6\nReturn a message object structure from a bytes-like object. This is equivalent to\nBytesParser().parsebytes(s)\n. Optional _class and policy are interpreted as with theBytesParser\nclass constructor.Added in version 3.2.\nChanged in version 3.3: Removed the strict argument. Added the policy keyword.\n- email.message_from_binary_file(fp, _class=None, *, policy=policy.compat32)\u00b6\nReturn a message object structure tree from an open binary file object. This is equivalent to\nBytesParser().parse(fp)\n. _class and policy are interpreted as with theBytesParser\nclass constructor.Added in version 3.2.\nChanged in version 3.3: Removed the strict argument. Added the policy keyword.\n- email.message_from_string(s, _class=None, *, policy=policy.compat32)\u00b6\nReturn a message object structure from a string. This is equivalent to\nParser().parsestr(s)\n. _class and policy are interpreted as with theParser\nclass constructor.Changed in version 3.3: Removed the strict argument. Added the policy keyword.\n- email.message_from_file(fp, _class=None, *, policy=policy.compat32)\u00b6\nReturn a message object structure tree from an open file object. This is equivalent to\nParser().parse(fp)\n. _class and policy are interpreted as with theParser\nclass constructor.Changed in version 3.3: Removed the strict argument. 
Added the policy keyword.\nChanged in version 3.6: _class defaults to the policy\nmessage_factory\n.\nHere\u2019s an example of how you might use message_from_bytes()\nat an\ninteractive Python prompt:\n>>> import email\n>>> msg = email.message_from_bytes(myBytes)\nAdditional notes\u00b6\nHere are some notes on the parsing semantics:\nMost non-multipart type messages are parsed as a single message object with a string payload. These objects will return\nFalse\nforis_multipart()\n, anditer_parts()\nwill yield an empty list.All multipart type messages will be parsed as a container message object with a list of sub-message objects for their payload. The outer container message will return\nTrue\nforis_multipart()\n, anditer_parts()\nwill yield a list of subparts.Most messages with a content type of message/* (such as message/delivery-status and message/rfc822) will also be parsed as container object containing a list payload of length 1. Their\nis_multipart()\nmethod will returnTrue\n. The single element yielded byiter_parts()\nwill be a sub-message object.Some non-standards-compliant messages may not be internally consistent about their multipart-edness. Such messages may have a Content-Type header of type multipart, but their\nis_multipart()\nmethod may returnFalse\n. If such messages were parsed with theFeedParser\n, they will have an instance of theMultipartInvariantViolationDefect\nclass in their defects attribute list. Seeemail.errors\nfor details.", "code_snippets": ["\n", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 2730} +{"url": "https://docs.python.org/3/library/email.policy.html", "title": ": Policy Objects", "content": "email.policy\n: Policy Objects\u00b6\nAdded in version 3.3.\nSource code: Lib/email/policy.py\nThe email\npackage\u2019s prime focus is the handling of email messages as\ndescribed by the various email and MIME RFCs. 
However, the general format of\nemail messages (a block of header fields each consisting of a name followed by\na colon followed by a value, the whole block followed by a blank line and an\narbitrary \u2018body\u2019), is a format that has found utility outside of the realm of\nemail. Some of these uses conform fairly closely to the main email RFCs, some\ndo not. Even when working with email, there are times when it is desirable to\nbreak strict compliance with the RFCs, such as generating emails that\ninteroperate with email servers that do not themselves follow the standards, or\nthat implement extensions you want to use in ways that violate the\nstandards.\nPolicy objects give the email package the flexibility to handle all these disparate use cases.\nA Policy\nobject encapsulates a set of attributes and methods that\ncontrol the behavior of various components of the email package during use.\nPolicy\ninstances can be passed to various classes and methods in the\nemail package to alter the default behavior. The settable values and their\ndefaults are described below.\nThere is a default policy used by all classes in the email package. For all of\nthe parser\nclasses and the related convenience functions, and for\nthe Message\nclass, this is the Compat32\npolicy, via its corresponding pre-defined instance compat32\n. This\npolicy provides for complete backward compatibility (in some cases, including\nbug compatibility) with the pre-Python3.3 version of the email package.\nThis default value for the policy keyword to\nEmailMessage\nis the EmailPolicy\npolicy, via\nits pre-defined instance default\n.\nWhen a Message\nor EmailMessage\nobject is created, it acquires a policy. If the message is created by a\nparser\n, a policy passed to the parser will be the policy used by\nthe message it creates. If the message is created by the program, then the\npolicy can be specified when it is created. 
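The acquisition rule just stated — a message created by a parser carries the policy that was passed to the parser — can be checked directly. The one-line message below is invented for illustration:

```python
import email
from email import policy

raw = b"Subject: hi\r\n\r\nbody\r\n"

# The parsed message keeps the policy the parser was given.
msg = email.message_from_bytes(raw, policy=policy.SMTP)
print(msg.policy is policy.SMTP)     # True
print(repr(msg.policy.linesep))      # the SMTP policy's RFC line ending: '\r\n'
```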
When a message is passed to a\ngenerator\n, the generator uses the policy from the message by\ndefault, but you can also pass a specific policy to the generator that will\noverride the one stored on the message object.\nThe default value for the policy keyword for the email.parser\nclasses\nand the parser convenience functions will be changing in a future version of\nPython. Therefore you should always specify explicitly which policy you want\nto use when calling any of the classes and functions described in the\nparser\nmodule.\nThe first part of this documentation covers the features of Policy\n, an\nabstract base class that defines the features that are common to all\npolicy objects, including compat32\n. This includes certain hook\nmethods that are called internally by the email package, which a custom policy\ncould override to obtain different behavior. The second part describes the\nconcrete classes EmailPolicy\nand Compat32\n, which implement\nthe hooks that provide the standard behavior and the backward compatible\nbehavior and features, respectively.\nPolicy\ninstances are immutable, but they can be cloned, accepting the\nsame keyword arguments as the class constructor and returning a new\nPolicy\ninstance that is a copy of the original but with the specified\nattributes values changed.\nAs an example, the following code could be used to read an email message from a\nfile on disk and pass it to the system sendmail\nprogram on a Unix system:\n>>> from email import message_from_binary_file\n>>> from email.generator import BytesGenerator\n>>> from email import policy\n>>> from subprocess import Popen, PIPE\n>>> with open('mymsg.txt', 'rb') as f:\n... 
msg = message_from_binary_file(f, policy=policy.default)\n...\n>>> p = Popen(['sendmail', msg['To'].addresses[0]], stdin=PIPE)\n>>> g = BytesGenerator(p.stdin, policy=msg.policy.clone(linesep='\\r\\n'))\n>>> g.flatten(msg)\n>>> p.stdin.close()\n>>> rc = p.wait()\nHere we are telling BytesGenerator\nto use the RFC\ncorrect line separator characters when creating the binary string to feed into\nsendmail's\nstdin\n, where the default policy would use \\n\nline\nseparators.\nSome email package methods accept a policy keyword argument, allowing the\npolicy to be overridden for that method. For example, the following code uses\nthe as_bytes()\nmethod of the msg object from\nthe previous example and writes the message to a file using the native line\nseparators for the platform on which it is running:\n>>> import os\n>>> with open('converted.txt', 'wb') as f:\n... f.write(msg.as_bytes(policy=msg.policy.clone(linesep=os.linesep)))\n17\nPolicy objects can also be combined using the addition operator, producing a policy object whose settings are a combination of the non-default values of the summed objects:\n>>> compat_SMTP = policy.compat32.clone(linesep='\\r\\n')\n>>> compat_strict = policy.compat32.clone(raise_on_defect=True)\n>>> compat_strict_SMTP = compat_SMTP + compat_strict\nThis operation is not commutative; that is, the order in which the objects are added matters. To illustrate:\n>>> policy100 = policy.compat32.clone(max_line_length=100)\n>>> policy80 = policy.compat32.clone(max_line_length=80)\n>>> apolicy = policy100 + policy80\n>>> apolicy.max_line_length\n80\n>>> apolicy = policy80 + policy100\n>>> apolicy.max_line_length\n100\n- class email.policy.Policy(**kw)\u00b6\nThis is the abstract base class for all policy classes. 
It provides default implementations for a couple of trivial methods, as well as the implementation of the immutability property, the\nclone()\nmethod, and the constructor semantics.The constructor of a policy class can be passed various keyword arguments. The arguments that may be specified are any non-method properties on this class, plus any additional non-method properties on the concrete class. A value specified in the constructor will override the default value for the corresponding attribute.\nThis class defines the following properties, and thus values for the following may be passed in the constructor of any policy class:\n- max_line_length\u00b6\nThe maximum length of any line in the serialized output, not counting the end of line character(s). Default is 78, per RFC 5322. A value of\n0\norNone\nindicates that no line wrapping should be done at all.\n- linesep\u00b6\nThe string to be used to terminate lines in serialized output. The default is\n\\n\nbecause that\u2019s the internal end-of-line discipline used by Python, though\\r\\n\nis required by the RFCs.\n- cte_type\u00b6\nControls the type of Content Transfer Encodings that may be or are required to be used. The possible values are:\n7bit\nall data must be \u201c7 bit clean\u201d (ASCII-only). This means that where necessary data will be encoded using either quoted-printable or base64 encoding.\n8bit\ndata is not constrained to be 7 bit clean. Data in headers is still required to be ASCII-only and so will be encoded (see\nfold_binary()\nandutf8\nbelow for exceptions), but body parts may use the8bit\nCTE.A\ncte_type\nvalue of8bit\nonly works withBytesGenerator\n, notGenerator\n, because strings cannot contain binary data. If aGenerator\nis operating under a policy that specifiescte_type=8bit\n, it will act as ifcte_type\nis7bit\n.\n- raise_on_defect\u00b6\nIf\nTrue\n, any defects encountered will be raised as errors. 
IfFalse\n(the default), defects will be passed to theregister_defect()\nmethod.\n- mangle_from_\u00b6\nIf\nTrue\n, lines starting with \u201cFrom \u201c in the body are escaped by putting a>\nin front of them. This parameter is used when the message is being serialized by a generator. Default:False\n.Added in version 3.5.\n- message_factory\u00b6\nA factory function for constructing a new empty message object. Used by the parser when building messages. Defaults to\nNone\n, in which caseMessage\nis used.Added in version 3.6.\n- verify_generated_headers\u00b6\nIf\nTrue\n(the default), the generator will raiseHeaderWriteError\ninstead of writing a header that is improperly folded or delimited, such that it would be parsed as multiple headers or joined with adjacent data. Such headers can be generated by custom header classes or bugs in theemail\nmodule.As it\u2019s a security feature, this defaults to\nTrue\neven in theCompat32\npolicy. For backwards compatible, but unsafe, behavior, it must be set toFalse\nexplicitly.Added in version 3.13.\nThe following\nPolicy\nmethod is intended to be called by code using the email library to create policy instances with custom settings:- clone(**kw)\u00b6\nReturn a new\nPolicy\ninstance whose attributes have the same values as the current instance, except where those attributes are given new values by the keyword arguments.\nThe remaining\nPolicy\nmethods are called by the email package code, and are not intended to be called by an application using the email package. A custom policy must implement all of these methods.- handle_defect(obj, defect)\u00b6\nHandle a defect found on obj. When the email package calls this method, defect will always be a subclass of\nMessageDefect\n.The default implementation checks the\nraise_on_defect\nflag. If it isTrue\n, defect is raised as an exception. If it isFalse\n(the default), obj and defect are passed toregister_defect()\n.\n- register_defect(obj, defect)\u00b6\nRegister a defect on obj. 
In the email package, defect will always be a subclass of\nMessageDefect\n.The default implementation calls the\nappend\nmethod of thedefects\nattribute of obj. When the email package callshandle_defect\n, obj will normally have adefects\nattribute that has anappend\nmethod. Custom object types used with the email package (for example, customMessage\nobjects) should also provide such an attribute, otherwise defects in parsed messages will raise unexpected errors.\n- header_max_count(name)\u00b6\nReturn the maximum allowed number of headers named name.\nCalled when a header is added to an\nEmailMessage\norMessage\nobject. If the returned value is not0\norNone\n, and there are already a number of headers with the name name greater than or equal to the value returned, aValueError\nis raised.Because the default behavior of\nMessage.__setitem__\nis to append the value to the list of headers, it is easy to create duplicate headers without realizing it. This method allows certain headers to be limited in the number of instances of that header that may be added to aMessage\nprogrammatically. (The limit is not observed by the parser, which will faithfully produce as many headers as exist in the message being parsed.)The default implementation returns\nNone\nfor all header names.\n- header_source_parse(sourcelines)\u00b6\nThe email package calls this method with a list of strings, each string ending with the line separation characters found in the source being parsed. The first line includes the field header name and separator. All whitespace in the source is preserved. 
The method should return the\n(name, value)\ntuple that is to be stored in theMessage\nto represent the parsed header.If an implementation wishes to retain compatibility with the existing email package policies, name should be the case preserved name (all characters up to the \u2018\n:\n\u2019 separator), while value should be the unfolded value (all line separator characters removed, but whitespace kept intact), stripped of leading whitespace.sourcelines may contain surrogateescaped binary data.\nThere is no default implementation\n- header_store_parse(name, value)\u00b6\nThe email package calls this method with the name and value provided by the application program when the application program is modifying a\nMessage\nprogrammatically (as opposed to aMessage\ncreated by a parser). The method should return the(name, value)\ntuple that is to be stored in theMessage\nto represent the header.If an implementation wishes to retain compatibility with the existing email package policies, the name and value should be strings or string subclasses that do not change the content of the passed in arguments.\nThere is no default implementation\n- header_fetch_parse(name, value)\u00b6\nThe email package calls this method with the name and value currently stored in the\nMessage\nwhen that header is requested by the application program, and whatever the method returns is what is passed back to the application as the value of the header being retrieved. Note that there may be more than one header with the same name stored in theMessage\n; the method is passed the specific name and value of the header destined to be returned to the application.value may contain surrogateescaped binary data. There should be no surrogateescaped binary data in the value returned by the method.\nThere is no default implementation\n- fold(name, value)\u00b6\nThe email package calls this method with the name and value currently stored in the\nMessage\nfor a given header. 
The method should return a string that represents that header \u201cfolded\u201d correctly (according to the policy settings) by composing the name with the value and insertinglinesep\ncharacters at the appropriate places. See RFC 5322 for a discussion of the rules for folding email headers.value may contain surrogateescaped binary data. There should be no surrogateescaped binary data in the string returned by the method.\n- class email.policy.EmailPolicy(**kw)\u00b6\nThis concrete\nPolicy\nprovides behavior that is intended to be fully compliant with the current email RFCs. These include (but are not limited to) RFC 5322, RFC 2047, and the current MIME RFCs.This policy adds new header parsing and folding algorithms. Instead of simple strings, headers are\nstr\nsubclasses with attributes that depend on the type of the field. The parsing and folding algorithm fully implement RFC 2047 and RFC 5322.The default value for the\nmessage_factory\nattribute isEmailMessage\n.In addition to the settable attributes listed above that apply to all policies, this policy adds the following additional attributes:\nAdded in version 3.6: [1]\n- utf8\u00b6\nIf\nFalse\n, follow RFC 5322, supporting non-ASCII characters in headers by encoding them as \u201cencoded words\u201d. IfTrue\n, follow RFC 6532 and useutf-8\nencoding for headers. Messages formatted in this way may be passed to SMTP servers that support theSMTPUTF8\nextension (RFC 6531).\n- refold_source\u00b6\nIf the value for a header in the\nMessage\nobject originated from aparser\n(as opposed to being set by a program), this attribute indicates whether or not a generator should refold that value when transforming the message back into serialized form. 
The possible values are:\nnone\nall source values use original folding\nlong\nsource values that have any line that is longer than\nmax_line_length\nwill be refolded\nall\nall values are refolded.\nThe default is\nlong\n.\n- header_factory\u00b6\nA callable that takes two arguments,\nname\nand value\n, where name\nis a header field name and value\nis an unfolded header field value, and returns a string subclass that represents that header. A default header_factory\n(see headerregistry\n) is provided that supports custom parsing for the various address and date RFC 5322 header field types, and the major MIME header field types. Support for additional custom parsing will be added in the future.\n- content_manager\u00b6\nAn object with at least two methods: get_content and set_content. When the\nget_content()\nor set_content()\nmethod of an EmailMessage\nobject is called, it calls the corresponding method of this object, passing it the message object as its first argument, and any arguments or keywords that were passed to it as additional arguments. By default content_manager\nis set to raw_data_manager\n. Added in version 3.4.\nThe class provides the following concrete implementations of the abstract methods of\nPolicy\n:- header_max_count(name)\u00b6\nReturns the value of the\nmax_count\nattribute of the specialized class used to represent the header with the given name.\n- header_source_parse(sourcelines)\u00b6\nThe name is parsed as everything up to the \u2018\n:\n\u2019 and returned unmodified. The value is determined by stripping leading whitespace off the remainder of the first line, joining all subsequent lines together, and stripping any trailing carriage return or linefeed characters.\n- header_store_parse(name, value)\u00b6\nThe name is returned unchanged. If the input value has a\nname\nattribute and it matches name ignoring case, the value is returned unchanged. 
Otherwise the name and value are passed to header_factory\n, and the resulting header object is returned as the value. In this case a ValueError\nis raised if the input value contains CR or LF characters.\n- header_fetch_parse(name, value)\u00b6\nIf the value has a\nname\nattribute, it is returned unmodified. Otherwise the name, and the value with any CR or LF characters removed, are passed to the header_factory\n, and the resulting header object is returned. Any surrogateescaped bytes get turned into the unicode unknown-character glyph.\n- fold(name, value)\u00b6\nHeader folding is controlled by the\nrefold_source\npolicy setting. A value is considered to be a \u2018source value\u2019 if and only if it does not have a name\nattribute (having a name\nattribute means it is a header object of some sort). If a source value needs to be refolded according to the policy, it is converted into a header object by passing the name and the value with any CR and LF characters removed to the header_factory\n. Folding of a header object is done by calling its fold\nmethod with the current policy. Source values are split into lines using\nsplitlines()\n. If the value is not to be refolded, the lines are rejoined using the linesep\nfrom the policy and returned. The exception is lines containing non-ASCII binary data. In that case the value is refolded regardless of the refold_source\nsetting, which causes the binary data to be CTE encoded using the unknown-8bit\ncharset.\n- fold_binary(name, value)\u00b6\nThe same as\nfold()\nif cte_type\nis 7bit\n, except that the returned value is bytes. If\ncte_type\nis 8bit\n, non-ASCII binary data is converted back into bytes. Headers with binary data are not refolded, regardless of the refold_source\nsetting, since there is no way to know whether the binary data consists of single byte characters or multibyte characters.\nThe following instances of EmailPolicy\nprovide defaults suitable for\nspecific application domains. 
Note that in the future the behavior of these\ninstances (in particular the HTTP\ninstance) may be adjusted to conform even\nmore closely to the RFCs relevant to their domains.\n- email.policy.default\u00b6\nAn instance of\nEmailPolicy\nwith all defaults unchanged. This policy uses the standard Python \\n\nline endings rather than the RFC-correct \\r\\n\n.\n- email.policy.SMTP\u00b6\nSuitable for serializing messages in conformance with the email RFCs. Like\ndefault\n, but with linesep\nset to \\r\\n\n, which is RFC compliant.\n- email.policy.SMTPUTF8\u00b6\nThe same as\nSMTP\nexcept that utf8\nis True\n. Useful for serializing messages to a message store without using encoded words in the headers. Should only be used for SMTP transmission if the sender or recipient addresses have non-ASCII characters (the smtplib.SMTP.send_message()\nmethod handles this automatically).\n- email.policy.HTTP\u00b6\nSuitable for serializing headers for use in HTTP traffic. Like\nSMTP\nexcept that max_line_length\nis set to None\n(unlimited).\n- email.policy.strict\u00b6\nConvenience instance. The same as\ndefault\nexcept that raise_on_defect\nis set to True\n. 
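The effect of raise_on_defect (and thus of the strict instance) can be sketched with a deliberately malformed message. The input below — a header block that begins with a continuation line — is an invented example of something the parser flags as a defect; the point is the observable difference between recording and raising:

```python
import email
from email import policy

# First line starts with whitespace, so it looks like a continuation line
# with no header to continue -- a defect, not a fatal parse error.
broken = b" leading-continuation\r\n\r\nbody\r\n"

# Default policy: the defect is recorded on the message object.
msg = email.message_from_bytes(broken, policy=policy.default)
print(len(msg.defects) > 0)   # True

# Strict policy (raise_on_defect=True): the same input raises during parsing.
try:
    email.message_from_bytes(broken, policy=policy.strict)
    raised = False
except Exception:
    raised = True
print(raised)                 # True
```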
This allows any policy to be made strict by writing:somepolicy + policy.strict\nWith all of these EmailPolicies\n, the effective API of\nthe email package is changed from the Python 3.2 API in the following ways:\nSetting a header on a\nMessage\nresults in that header being parsed and a header object created.Fetching a header value from a\nMessage\nresults in that header being parsed and a header object created and returned.Any header object, or any header that is refolded due to the policy settings, is folded using an algorithm that fully implements the RFC folding algorithms, including knowing where encoded words are required and allowed.\nFrom the application view, this means that any header obtained through the\nEmailMessage\nis a header object with extra\nattributes, whose string value is the fully decoded unicode value of the\nheader. Likewise, a header may be assigned a new value, or a new header\ncreated, using a unicode string, and the policy will take care of converting\nthe unicode string into the correct RFC encoded form.\nThe header objects and their attributes are described in\nheaderregistry\n.\n- class email.policy.Compat32(**kw)\u00b6\nThis concrete\nPolicy\nis the backward compatibility policy. It replicates the behavior of the email package in Python 3.2. Thepolicy\nmodule also defines an instance of this class,compat32\n, that is used as the default policy. Thus the default behavior of the email package is to maintain compatibility with Python 3.2.The following attributes have values that are different from the\nPolicy\ndefault:- mangle_from_\u00b6\nThe default is\nTrue\n.\nThe class provides the following concrete implementations of the abstract methods of\nPolicy\n:- header_source_parse(sourcelines)\u00b6\nThe name is parsed as everything up to the \u2018\n:\n\u2019 and returned unmodified. 
The value is determined by stripping leading whitespace off the remainder of the first line, joining all subsequent lines together, and stripping any trailing carriage return or linefeed characters.\n- header_store_parse(name, value)\u00b6\nThe name and value are returned unmodified.\n- header_fetch_parse(name, value)\u00b6\nIf the value contains binary data, it is converted into a\nHeader\nobject using theunknown-8bit\ncharset. Otherwise it is returned unmodified.\n- fold(name, value)\u00b6\nHeaders are folded using the\nHeader\nfolding algorithm, which preserves existing line breaks in the value, and wraps each resulting line to themax_line_length\n. Non-ASCII binary data are CTE encoded using theunknown-8bit\ncharset.\n- fold_binary(name, value)\u00b6\nHeaders are folded using the\nHeader\nfolding algorithm, which preserves existing line breaks in the value, and wraps each resulting line to themax_line_length\n. Ifcte_type\nis7bit\n, non-ascii binary data is CTE encoded using theunknown-8bit\ncharset. 
Otherwise the original source header is used, with its existing line breaks and any (RFC invalid) binary data it may contain.\n- email.policy.compat32\u00b6\nAn instance of\nCompat32\n, providing backward compatibility with the behavior of the email package in Python 3.2.Note\nThe\ncompat32\npolicy should not be used as a policy forEmailMessage\nobjects, and should only be used to serialize messages that were created using thecompat32\npolicy.\nFootnotes", "code_snippets": ["\n", " ", " ", " ", " ", "\n", " ", "\n", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n", "\n", "\n", " ", " ", " ", " ", "\n", "\n", "\n", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 5583} +{"url": "https://docs.python.org/3/library/asyncio-runner.html", "title": "Runners", "content": "Runners\u00b6\nSource code: Lib/asyncio/runners.py\nThis section outlines high-level asyncio primitives to run asyncio code.\nThey are built on top of an event loop with the aim to simplify async code usage for common wide-spread scenarios.\nRunning an asyncio Program\u00b6\n- asyncio.run(coro, *, debug=None, loop_factory=None)\u00b6\nExecute coro in an asyncio event loop and return the result.\nThe argument can be any awaitable object.\nThis function runs the awaitable, taking care of managing the asyncio event loop, finalizing asynchronous generators, and closing the executor.\nThis function cannot be called when another asyncio event loop is running in the same thread.\nIf debug is\nTrue\n, the event loop will be run in debug mode.False\ndisables debug mode explicitly.None\nis used to respect the global Debug Mode settings.If loop_factory is not\nNone\n, it is used to create a new event loop; otherwiseasyncio.new_event_loop()\nis used. The loop is closed at the end. This function should be used as a main entry point for asyncio programs, and should ideally only be called once. 
It is recommended to use loop_factory to configure the event loop instead of policies. Passing asyncio.EventLoop\nallows running asyncio without the policy system.\nThe executor is given a timeout duration of 5 minutes to shut down. If the executor hasn\u2019t finished within that duration, a warning is emitted and the executor is closed.\nExample:\nasync def main():\n    await asyncio.sleep(1)\n    print('hello')\n\nasyncio.run(main())\nAdded in version 3.7.\nChanged in version 3.9: Updated to use\nloop.shutdown_default_executor()\n.\nChanged in version 3.10: debug is\nNone\nby default to respect the global debug mode settings.\nChanged in version 3.12: Added loop_factory parameter.\nChanged in version 3.14: coro can be any awaitable object.\nNote\nThe\nasyncio\npolicy system is deprecated and will be removed in Python 3.16; from there on, an explicit loop_factory is needed to configure the event loop.\nRunner context manager\u00b6\n- class asyncio.Runner(*, debug=None, loop_factory=None)\u00b6\nA context manager that simplifies multiple async function calls in the same context.\nSometimes several top-level async functions should be called in the same event loop and\ncontextvars.Context\n.\nIf debug is\nTrue\n, the event loop will be run in debug mode.\nFalse\ndisables debug mode explicitly.\nNone\nis used to respect the global Debug Mode settings.\nloop_factory could be used for overriding the loop creation. It is the responsibility of the loop_factory to set the created loop as the current one. By default\nasyncio.new_event_loop()\nis used and set as the current event loop with asyncio.set_event_loop()\nif loop_factory is\nNone\n.\nBasically, the\nasyncio.run()\nexample can be rewritten with the runner usage:\nasync def main():\n    await asyncio.sleep(1)\n    print('hello')\n\nwith asyncio.Runner() as runner:\n    runner.run(main())\nAdded in version 3.11.\n- run(coro, *, context=None)\u00b6\nExecute coro in the embedded event loop.\nThe argument can be any awaitable object.\nIf the argument is a coroutine, it is wrapped in a Task.\nAn optional keyword-only context argument allows specifying a custom\ncontextvars.Context\nfor the code to run in. The runner\u2019s default context is used if context is\nNone\n.\nReturns the awaitable\u2019s result or raises an exception.\nThis function cannot be called when another asyncio event loop is running in the same thread.\nChanged in version 3.14: coro can be any awaitable object.\n- close()\u00b6\nClose the runner.\nFinalize asynchronous generators, shut down the default executor, close the event loop and release the embedded\ncontextvars.Context\n.\n- get_loop()\u00b6\nReturn the event loop associated with the runner instance.\nNote\nRunner\nuses the lazy initialization strategy; its constructor doesn\u2019t initialize underlying low-level structures.\nThe embedded loop and context are created when entering the\nwith\nbody or on the first call of run()\nor get_loop()\n.\nHandling Keyboard Interruption\u00b6\nAdded in version 3.11.\nWhen signal.SIGINT\nis raised by Ctrl-C, the KeyboardInterrupt\nexception is raised in the main thread by default. 
However, this doesn\u2019t work with\nasyncio\nbecause it can interrupt asyncio internals and can prevent the program from\nexiting.\nTo mitigate this issue, asyncio\nhandles signal.SIGINT\nas follows:\nasyncio.Runner.run()\ninstalls a custom signal.SIGINT\nhandler before any user code is executed and removes it when exiting from the function.\nThe\nRunner\ncreates the main task for the passed coroutine for its execution.\nWhen\nsignal.SIGINT\nis raised by Ctrl-C, the custom signal handler cancels the main task by calling asyncio.Task.cancel()\n, which raises asyncio.CancelledError\ninside the main task. This causes the Python stack to unwind; try/except\nand try/finally\nblocks can be used for resource cleanup. After the main task is cancelled, asyncio.Runner.run()\nraises KeyboardInterrupt\n.\nA user could write a tight loop which cannot be interrupted by\nasyncio.Task.cancel()\n, in which case a second Ctrl-C immediately raises the KeyboardInterrupt\nwithout cancelling the main task.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1244}
{"url": "https://docs.python.org/3/c-api/file.html", "title": "File Objects", "content": "File Objects\u00b6\nThese APIs are a minimal emulation of the Python 2 C API for built-in file\nobjects, which used to rely on the buffered I/O (FILE*) support\nfrom the C standard library. In Python 3, files and streams use the new\nio\nmodule, which defines several layers over the low-level unbuffered\nI/O of the operating system. 
The functions described below are\nconvenience C wrappers over these new APIs, and meant mostly for internal\nerror reporting in the interpreter; third-party code is advised to access\nthe io\nAPIs instead.\n-\nPyObject *PyFile_FromFd(int fd, const char *name, const char *mode, int buffering, const char *encoding, const char *errors, const char *newline, int closefd)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Python file object from the file descriptor of an already opened file fd. The arguments name, encoding, errors and newline can be\nNULL\nto use the defaults; buffering can be -1 to use the default. name is ignored and kept for backward compatibility. Return NULL\non failure. For a more comprehensive description of the arguments, please refer to the io.open()\nfunction documentation.\nWarning\nSince Python streams have their own buffering layer, mixing them with OS-level file descriptors can produce various issues (such as unexpected ordering of data).\nChanged in version 3.2: Ignore name attribute.\n-\nint PyObject_AsFileDescriptor(PyObject *p)\u00b6\n- Part of the Stable ABI.\nReturn the file descriptor associated with p as an int. If the object is an integer, its value is returned. If not, the object\u2019s\nfileno()\nmethod is called if it exists; the method must return an integer, which is returned as the file descriptor value. Sets an exception and returns -1\non failure.\n-\nPyObject *PyFile_GetLine(PyObject *p, int n)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEquivalent to\np.readline([n])\n, this function reads one line from the object p. p may be a file object or any object with a readline()\nmethod. If n is 0\n, exactly one line is read, regardless of the length of the line. If n is greater than 0\n, no more than n bytes will be read from the file; a partial line can be returned. In both cases, an empty string is returned if the end of the file is reached immediately. If n is less than 0\n, however, one line is read regardless of length, but EOFError\nis raised if the end of the file is reached immediately.\n-\nint PyFile_SetOpenCodeHook(Py_OpenCodeHookFunction handler)\u00b6\nOverrides the normal behavior of\nio.open_code()\nto pass its parameter through the provided handler.\nThe handler is a function of type:\n-\ntypedef PyObject *(*Py_OpenCodeHookFunction)(PyObject*, void*)\u00b6\nEquivalent of PyObject *(*)(PyObject *path, void *userData), where path is guaranteed to be\nPyUnicodeObject\n.\nThe userData pointer is passed into the hook function. Since hook functions may be called from different runtimes, this pointer should not refer directly to Python state.\nAs this hook is intentionally used during import, avoid importing new modules during its execution unless they are known to be frozen or available in\nsys.modules\n.\nOnce a hook has been set, it cannot be removed or replaced, and later calls to\nPyFile_SetOpenCodeHook()\nwill fail. On failure, the function returns -1 and sets an exception if the interpreter has been initialized.\nThis function is safe to call before\nPy_Initialize()\n.\nRaises an auditing event\nsetopencodehook\nwith no arguments.\nAdded in version 3.8.\n-\ntypedef PyObject *(*Py_OpenCodeHookFunction)(PyObject*, void*)\u00b6\n-\nPyObject *PyFile_OpenCodeObject(PyObject *path)\u00b6\nOpen path with the mode\n'rb'\n. path must be a Python str\nobject. The behavior of this function may be overridden by PyFile_SetOpenCodeHook()\nto allow for some preprocessing of the text.\nThis is analogous to\nio.open_code()\nin Python.\nOn success, this function returns a strong reference to a Python file object. On failure, this function returns\nNULL\nwith an exception set.\nAdded in version 3.8.\n-\nPyObject *PyFile_OpenCode(const char *path)\u00b6\nSimilar to\nPyFile_OpenCodeObject()\n, but path is a UTF-8 encoded const char*.\nAdded in version 3.8.\n-\nint PyFile_WriteObject(PyObject *obj, PyObject *p, int flags)\u00b6\n- Part of the Stable ABI.\nWrite object obj to file object p. The only supported flag for flags is\nPy_PRINT_RAW\n; if given, the str()\nof the object is written instead of the repr()\n. Return 0\non success or -1\non failure; the appropriate exception will be set.\n-\nint PyFile_WriteString(const char *s, PyObject *p)\u00b6\n- Part of the Stable ABI.\nWrite string s to file object p. Return\n0\non success or -1\non failure; the appropriate exception will be set.\nDeprecated API\u00b6\nThese are soft deprecated APIs that were included in Python\u2019s C API\nby mistake. They are documented solely for completeness; use other\nPyFile*\nAPIs instead.\n-\nPyObject *PyFile_NewStdPrinter(int fd)\u00b6\nUse\nPyFile_FromFd()\nwith defaults (fd, NULL, \"w\", -1, NULL, NULL, NULL, 0\n) instead.\n-\nPyTypeObject PyStdPrinter_Type\u00b6\nType of file-like objects used internally at Python startup when\nio\nis not yet available. Use Python open()\nor PyFile_FromFd()\nto create file objects instead.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1269}
{"url": "https://docs.python.org/3/c-api/code.html", "title": "Code Objects", "content": "Code Objects\u00b6\nCode objects are a low-level detail of the CPython implementation. Each one represents a chunk of executable code that hasn\u2019t yet been bound into a function.\n-\ntype PyCodeObject\u00b6\nThe C structure of the objects used to describe code objects. 
The fields of this type are subject to change at any time.\n-\nPyTypeObject PyCode_Type\u00b6\nThis is an instance of\nPyTypeObject\nrepresenting the Python code object.\n-\nint PyCode_Check(PyObject *co)\u00b6\nReturn true if co is a code object. This function always succeeds.\n-\nPy_ssize_t PyCode_GetNumFree(PyCodeObject *co)\u00b6\nReturn the number of free (closure) variables in a code object.\n-\nint PyUnstable_Code_GetFirstFree(PyCodeObject *co)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nReturn the position of the first free (closure) variable in a code object.\nChanged in version 3.13: Renamed from\nPyCode_GetFirstFree\nas part of Unstable C API. The old name is deprecated, but will remain available until the signature changes again.\n-\nPyCodeObject *PyUnstable_Code_New(int argcount, int kwonlyargcount, int nlocals, int stacksize, int flags, PyObject *code, PyObject *consts, PyObject *names, PyObject *varnames, PyObject *freevars, PyObject *cellvars, PyObject *filename, PyObject *name, PyObject *qualname, int firstlineno, PyObject *linetable, PyObject *exceptiontable)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nReturn a new code object. If you need a dummy code object to create a frame, use\nPyCode_NewEmpty()\ninstead.Since the definition of the bytecode changes often, calling\nPyUnstable_Code_New()\ndirectly can bind you to a precise Python version.The many arguments of this function are inter-dependent in complex ways, meaning that subtle changes to values are likely to result in incorrect execution or VM crashes. Use this function only with extreme care.\nChanged in version 3.11: Added\nqualname\nandexceptiontable\nparameters.Changed in version 3.12: Renamed from\nPyCode_New\nas part of Unstable C API. 
The old name is deprecated, but will remain available until the signature changes again.\n-\nPyCodeObject *PyUnstable_Code_NewWithPosOnlyArgs(int argcount, int posonlyargcount, int kwonlyargcount, int nlocals, int stacksize, int flags, PyObject *code, PyObject *consts, PyObject *names, PyObject *varnames, PyObject *freevars, PyObject *cellvars, PyObject *filename, PyObject *name, PyObject *qualname, int firstlineno, PyObject *linetable, PyObject *exceptiontable)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nSimilar to\nPyUnstable_Code_New()\n, but with an extra \u201cposonlyargcount\u201d for positional-only arguments. The same caveats that apply toPyUnstable_Code_New\nalso apply to this function.Added in version 3.8: as\nPyCode_NewWithPosOnlyArgs\nChanged in version 3.11: Added\nqualname\nandexceptiontable\nparameters.Changed in version 3.12: Renamed to\nPyUnstable_Code_NewWithPosOnlyArgs\n. The old name is deprecated, but will remain available until the signature changes again.\n-\nPyCodeObject *PyCode_NewEmpty(const char *filename, const char *funcname, int firstlineno)\u00b6\n- Return value: New reference.\nReturn a new empty code object with the specified filename, function name, and first line number. The resulting code object will raise an\nException\nif executed.\n-\nint PyCode_Addr2Line(PyCodeObject *co, int byte_offset)\u00b6\nReturn the line number of the instruction that occurs on or before\nbyte_offset\nand ends after it. If you just need the line number of a frame, usePyFrame_GetLineNumber()\ninstead.For efficiently iterating over the line numbers in a code object, use the API described in PEP 626.\n-\nint PyCode_Addr2Location(PyObject *co, int byte_offset, int *start_line, int *start_column, int *end_line, int *end_column)\u00b6\nSets the passed\nint\npointers to the source code line and column numbers for the instruction atbyte_offset\n. 
Sets the value to 0\nwhen information is not available for any particular element.\nReturns\n1\nif the function succeeds and 0 otherwise.\nAdded in version 3.11.\n-\nPyObject *PyCode_GetCode(PyCodeObject *co)\u00b6\nEquivalent to the Python code\ngetattr(co, 'co_code')\n. Returns a strong reference to a PyBytesObject\nrepresenting the bytecode in a code object. On error, NULL\nis returned and an exception is raised.\nThis\nPyBytesObject\nmay be created on-demand by the interpreter and does not necessarily represent the bytecode actually executed by CPython. The primary use case for this function is debuggers and profilers.\nAdded in version 3.11.\n-\nPyObject *PyCode_GetVarnames(PyCodeObject *co)\u00b6\nEquivalent to the Python code\ngetattr(co, 'co_varnames')\n. Returns a new reference to a PyTupleObject\ncontaining the names of the local variables. On error, NULL\nis returned and an exception is raised.\nAdded in version 3.11.\n-\nPyObject *PyCode_GetCellvars(PyCodeObject *co)\u00b6\nEquivalent to the Python code\ngetattr(co, 'co_cellvars')\n. Returns a new reference to a PyTupleObject\ncontaining the names of the local variables that are referenced by nested functions. On error, NULL\nis returned and an exception is raised.\nAdded in version 3.11.\n-\nPyObject *PyCode_GetFreevars(PyCodeObject *co)\u00b6\nEquivalent to the Python code\ngetattr(co, 'co_freevars')\n. Returns a new reference to a PyTupleObject\ncontaining the names of the free (closure) variables. On error, NULL\nis returned and an exception is raised.\nAdded in version 3.11.\n-\nint PyCode_AddWatcher(PyCode_WatchCallback callback)\u00b6\nRegister callback as a code object watcher for the current interpreter. Return an ID which may be passed to\nPyCode_ClearWatcher()\n. In case of error (e.g. 
no more watcher IDs available), return-1\nand set an exception.Added in version 3.12.\n-\nint PyCode_ClearWatcher(int watcher_id)\u00b6\nClear watcher identified by watcher_id previously returned from\nPyCode_AddWatcher()\nfor the current interpreter. Return0\non success, or-1\nand set an exception on error (e.g. if the given watcher_id was never registered.)Added in version 3.12.\n-\ntype PyCodeEvent\u00b6\nEnumeration of possible code object watcher events: -\nPY_CODE_EVENT_CREATE\n-PY_CODE_EVENT_DESTROY\nAdded in version 3.12.\n-\ntypedef int (*PyCode_WatchCallback)(PyCodeEvent event, PyCodeObject *co)\u00b6\nType of a code object watcher callback function.\nIf event is\nPY_CODE_EVENT_CREATE\n, then the callback is invoked after co has been fully initialized. Otherwise, the callback is invoked before the destruction of co takes place, so the prior state of co can be inspected.If event is\nPY_CODE_EVENT_DESTROY\n, taking a reference in the callback to the about-to-be-destroyed code object will resurrect it and prevent it from being freed at this time. When the resurrected object is destroyed later, any watcher callbacks active at that time will be called again.Users of this API should not rely on internal runtime implementation details. Such details may include, but are not limited to, the exact order and timing of creation and destruction of code objects. While changes in these details may result in differences observable by watchers (including whether a callback is invoked or not), it does not change the semantics of the Python code being executed.\nIf the callback sets an exception, it must return\n-1\n; this exception will be printed as an unraisable exception usingPyErr_WriteUnraisable()\n. Otherwise it should return0\n.There may already be a pending exception set on entry to the callback. In this case, the callback should return\n0\nwith the same exception still set. 
This means the callback may not call any other API that can set an exception unless it saves and clears the exception state first, and restores it before returning.\nAdded in version 3.12.\nCode Object Flags\u00b6\nCode objects contain a bit-field of flags, which can be retrieved as the\nco_flags\nPython attribute (for example using\nPyObject_GetAttrString()\n), and set using a flags argument to\nPyUnstable_Code_New()\nand similar functions.\nFlags whose names start with CO_FUTURE_\ncorrespond to features normally\nselectable by future statements. These flags can be used in\nPyCompilerFlags.cf_flags\n.\nNote that many CO_FUTURE_\nflags are mandatory in current versions of\nPython, and setting them has no effect.\nFor the available flags and their meaning, see the linked documentation of their Python equivalents.\nExtra information\u00b6\nTo support low-level extensions to frame evaluation, such as external just-in-time compilers, it is possible to attach arbitrary extra data to code objects.\nThese functions are part of the unstable C API tier: this functionality is a CPython implementation detail, and the API may change without deprecation warnings.\n-\nPy_ssize_t PyUnstable_Eval_RequestCodeExtraIndex(freefunc free)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nReturn a new opaque index value used for adding data to code objects.\nYou generally call this function once (per interpreter) and use the result with\nPyCode_GetExtra\nand PyCode_SetExtra\nto manipulate data on individual code objects.\nIf free is not\nNULL\n: when a code object is deallocated, free will be called on non-NULL\ndata stored under the new index. 
UsePy_DecRef()\nwhen storingPyObject\n.Added in version 3.6: as\n_PyEval_RequestCodeExtraIndex\nChanged in version 3.12: Renamed to\nPyUnstable_Eval_RequestCodeExtraIndex\n. The old private name is deprecated, but will be available until the API changes.\n-\nint PyUnstable_Code_GetExtra(PyObject *code, Py_ssize_t index, void **extra)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nSet extra to the extra data stored under the given index. Return 0 on success. Set an exception and return -1 on failure.\nIf no data was set under the index, set extra to\nNULL\nand return 0 without setting an exception.Added in version 3.6: as\n_PyCode_GetExtra\nChanged in version 3.12: Renamed to\nPyUnstable_Code_GetExtra\n. The old private name is deprecated, but will be available until the API changes.\n-\nint PyUnstable_Code_SetExtra(PyObject *code, Py_ssize_t index, void *extra)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nSet the extra data stored under the given index to extra. Return 0 on success. Set an exception and return -1 on failure.\nAdded in version 3.6: as\n_PyCode_SetExtra\nChanged in version 3.12: Renamed to\nPyUnstable_Code_SetExtra\n. The old private name is deprecated, but will be available until the API changes.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2629} +{"url": "https://docs.python.org/3/library/asyncio-api-index.html", "title": "High-level API Index", "content": "High-level API Index\u00b6\nThis page lists all high-level async/await enabled asyncio APIs.\nTasks\u00b6\nUtilities to run asyncio programs, create Tasks, and await on multiple things with timeouts.\nCreate event loop, run a coroutine, close the loop. |\n|\nA context manager that simplifies multiple async function calls. |\n|\nTask object. |\n|\nA context manager that holds a group of tasks. Provides a convenient and reliable way to wait for all tasks in the group to finish. 
|\n|\nStart an asyncio Task, then returns it. |\n|\nReturn the current Task. |\n|\nReturn all tasks that are not yet finished for an event loop. |\n|\n|\nSleep for a number of seconds. |\n|\nSchedule and wait for things concurrently. |\n|\nRun with a timeout. |\n|\nShield from cancellation. |\n|\nMonitor for completion. |\nRun with a timeout. Useful in cases when |\n|\nAsynchronously run a function in a separate OS thread. |\n|\nSchedule a coroutine from another OS thread. |\n|\n|\nMonitor for completion with a |\nExamples\nQueues\u00b6\nQueues should be used to distribute work amongst multiple asyncio Tasks, implement connection pools, and pub/sub patterns.\nA FIFO queue. |\n|\nA priority queue. |\n|\nA LIFO queue. |\nExamples\nSubprocesses\u00b6\nUtilities to spawn subprocesses and run shell commands.\n|\nCreate a subprocess. |\nRun a shell command. |\nExamples\nSee also the subprocess APIs documentation.\nStreams\u00b6\nHigh-level APIs to work with network IO.\n|\nEstablish a TCP connection. |\n|\nEstablish a Unix socket connection. |\n|\nStart a TCP server. |\n|\nStart a Unix socket server. |\nHigh-level async/await object to receive network data. |\n|\nHigh-level async/await object to send network data. |\nExamples\nSee also the streams APIs documentation.\nSynchronization\u00b6\nThreading-like synchronization primitives that can be used in Tasks.\nA mutex lock. |\n|\nAn event object. |\n|\nA condition object. |\n|\nA semaphore. |\n|\nA bounded semaphore. |\n|\nA barrier object. |\nExamples\nSee also the documentation of asyncio synchronization primitives.\nExceptions\u00b6\nRaised when a Task is cancelled. See also |\n|\nRaised when a Barrier is broken. 
See also |\nExamples\nHandling CancelledError to run code on cancellation request.\nSee also the full list of asyncio-specific exceptions.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 546} +{"url": "https://docs.python.org/3/c-api/cell.html", "title": "Cell Objects", "content": "Cell Objects\u00b6\n\u201cCell\u201d objects are used to implement variables referenced by multiple scopes. For each such variable, a cell object is created to store the value; the local variables of each stack frame that references the value contain a reference to the cells from outer scopes which also use that variable. When the value is accessed, the value contained in the cell is used instead of the cell object itself. This de-referencing of the cell object requires support from the generated byte-code; these are not automatically de-referenced when accessed. Cell objects are not likely to be useful elsewhere.\n-\ntype PyCellObject\u00b6\nThe C structure used for cell objects.\n-\nPyTypeObject PyCell_Type\u00b6\nThe type object corresponding to cell objects.\n-\nint PyCell_Check(PyObject *ob)\u00b6\nReturn true if ob is a cell object; ob must not be\nNULL\n. This function always succeeds.\n-\nPyObject *PyCell_New(PyObject *ob)\u00b6\n- Return value: New reference.\nCreate and return a new cell object containing the value ob. The parameter may be\nNULL\n.\n-\nPyObject *PyCell_Get(PyObject *cell)\u00b6\n- Return value: New reference.\nReturn the contents of the cell cell, which can be\nNULL\n. 
If cell is not a cell object, returns NULL\nwith an exception set.\n-\nPyObject *PyCell_GET(PyObject *cell)\u00b6\n- Return value: Borrowed reference.\nReturn the contents of the cell cell, but without checking that cell is non-\nNULL\nand a cell object.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 348}
{"url": "https://docs.python.org/3/c-api/method.html", "title": "Instance Method Objects", "content": "Instance Method Objects\u00b6\nAn instance method is a wrapper for a PyCFunction\nand the new way\nto bind a PyCFunction\nto a class object. It replaces the former call\nPyMethod_New(func, NULL, class)\n.\n-\nPyTypeObject PyInstanceMethod_Type\u00b6\nThis instance of\nPyTypeObject\nrepresents the Python instance method type. It is not exposed to Python programs.\n-\nint PyInstanceMethod_Check(PyObject *o)\u00b6\nReturn true if o is an instance method object (has type\nPyInstanceMethod_Type\n). The parameter must not be NULL\n. This function always succeeds.\n-\nPyObject *PyInstanceMethod_New(PyObject *func)\u00b6\n- Return value: New reference.\nReturn a new instance method object, with func being any callable object. func is the function that will be called when the instance method is called.\n-\nPyObject *PyInstanceMethod_Function(PyObject *im)\u00b6\n- Return value: Borrowed reference.\nReturn the function object associated with the instance method im.\n-\nPyObject *PyInstanceMethod_GET_FUNCTION(PyObject *im)\u00b6\n- Return value: Borrowed reference.\nMacro version of\nPyInstanceMethod_Function()\nwhich avoids error checking.\nMethod Objects\u00b6\nMethods are bound function objects. Methods are always bound to an instance of a user-defined class. Unbound methods (methods bound to a class object) are no longer available.\n-\nPyTypeObject PyMethod_Type\u00b6\nThis instance of\nPyTypeObject\nrepresents the Python method type. 
This is exposed to Python programs astypes.MethodType\n.\n-\nint PyMethod_Check(PyObject *o)\u00b6\nReturn true if o is a method object (has type\nPyMethod_Type\n). The parameter must not beNULL\n. This function always succeeds.\n-\nPyObject *PyMethod_New(PyObject *func, PyObject *self)\u00b6\n- Return value: New reference.\nReturn a new method object, with func being any callable object and self the instance the method should be bound. func is the function that will be called when the method is called. self must not be\nNULL\n.\n-\nPyObject *PyMethod_Function(PyObject *meth)\u00b6\n- Return value: Borrowed reference.\nReturn the function object associated with the method meth.\n-\nPyObject *PyMethod_GET_FUNCTION(PyObject *meth)\u00b6\n- Return value: Borrowed reference.\nMacro version of\nPyMethod_Function()\nwhich avoids error checking.\n-\nPyObject *PyMethod_Self(PyObject *meth)\u00b6\n- Return value: Borrowed reference.\nReturn the instance associated with the method meth.\n-\nPyObject *PyMethod_GET_SELF(PyObject *meth)\u00b6\n- Return value: Borrowed reference.\nMacro version of\nPyMethod_Self()\nwhich avoids error checking.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 614} +{"url": "https://docs.python.org/3/c-api/function.html", "title": "Function Objects", "content": "Function Objects\u00b6\nThere are a few functions specific to Python functions.\n-\ntype PyFunctionObject\u00b6\nThe C structure used for functions.\n-\nPyTypeObject PyFunction_Type\u00b6\nThis is an instance of\nPyTypeObject\nand represents the Python function type. It is exposed to Python programmers astypes.FunctionType\n.\n-\nint PyFunction_Check(PyObject *o)\u00b6\nReturn true if o is a function object (has type\nPyFunction_Type\n). The parameter must not beNULL\n. 
This function always succeeds.\n-\nPyObject *PyFunction_New(PyObject *code, PyObject *globals)\u00b6\n- Return value: New reference.\nReturn a new function object associated with the code object code. globals must be a dictionary with the global variables accessible to the function.\nThe function\u2019s docstring and name are retrieved from the code object.\n__module__\nis retrieved from globals. The argument defaults, annotations and closure are set toNULL\n.__qualname__\nis set to the same value as the code object\u2019sco_qualname\nfield.\n-\nPyObject *PyFunction_NewWithQualName(PyObject *code, PyObject *globals, PyObject *qualname)\u00b6\n- Return value: New reference.\nAs\nPyFunction_New()\n, but also allows setting the function object\u2019s__qualname__\nattribute. qualname should be a unicode object orNULL\n; ifNULL\n, the__qualname__\nattribute is set to the same value as the code object\u2019sco_qualname\nfield.Added in version 3.3.\n-\nPyObject *PyFunction_GetCode(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nReturn the code object associated with the function object op.\n-\nPyObject *PyFunction_GetGlobals(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nReturn the globals dictionary associated with the function object op.\n-\nPyObject *PyFunction_GetModule(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nReturn a borrowed reference to the\n__module__\nattribute of the function object op. It can be NULL.This is normally a\nstring\ncontaining the module name, but can be set to any other object by Python code.\n-\nPyObject *PyFunction_GetDefaults(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nReturn the argument default values of the function object op. This can be a tuple of arguments or\nNULL\n.\n-\nint PyFunction_SetDefaults(PyObject *op, PyObject *defaults)\u00b6\nSet the argument default values for the function object op. 
defaults must be\nPy_None\nor a tuple.Raises\nSystemError\nand returns-1\non failure.\n-\nvoid PyFunction_SetVectorcall(PyFunctionObject *func, vectorcallfunc vectorcall)\u00b6\nSet the vectorcall field of a given function object func.\nWarning: extensions using this API must preserve the behavior of the unaltered (default) vectorcall function!\nAdded in version 3.12.\n-\nPyObject *PyFunction_GetKwDefaults(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nReturn the keyword-only argument default values of the function object op. This can be a dictionary of arguments or\nNULL\n.\n-\nint PyFunction_SetKwDefaults(PyObject *op, PyObject *defaults)\u00b6\nSet the keyword-only argument default values of the function object op. defaults must be a dictionary of keyword-only arguments or\nPy_None\n.This function returns\n0\non success, and returns-1\nwith an exception set on failure.\n-\nPyObject *PyFunction_GetClosure(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nReturn the closure associated with the function object op. This can be\nNULL\nor a tuple of cell objects.\n-\nint PyFunction_SetClosure(PyObject *op, PyObject *closure)\u00b6\nSet the closure associated with the function object op. closure must be\nPy_None\nor a tuple of cell objects.Raises\nSystemError\nand returns-1\non failure.\n-\nPyObject *PyFunction_GetAnnotations(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nReturn the annotations of the function object op. This can be a mutable dictionary or\nNULL\n.\n-\nint PyFunction_SetAnnotations(PyObject *op, PyObject *annotations)\u00b6\nSet the annotations for the function object op. 
annotations must be a dictionary or\nPy_None\n.Raises\nSystemError\nand returns-1\non failure.\n-\nPyObject *PyFunction_GET_CODE(PyObject *op)\u00b6\n-\nPyObject *PyFunction_GET_GLOBALS(PyObject *op)\u00b6\n-\nPyObject *PyFunction_GET_MODULE(PyObject *op)\u00b6\n-\nPyObject *PyFunction_GET_DEFAULTS(PyObject *op)\u00b6\n-\nPyObject *PyFunction_GET_KW_DEFAULTS(PyObject *op)\u00b6\n-\nPyObject *PyFunction_GET_CLOSURE(PyObject *op)\u00b6\n-\nPyObject *PyFunction_GET_ANNOTATIONS(PyObject *op)\u00b6\n- Return value: Borrowed reference.\nThese functions are similar to their\nPyFunction_Get*\ncounterparts, but do not do type checking. Passing anything other than an instance ofPyFunction_Type\nis undefined behavior.\n-\nint PyFunction_AddWatcher(PyFunction_WatchCallback callback)\u00b6\nRegister callback as a function watcher for the current interpreter. Return an ID which may be passed to\nPyFunction_ClearWatcher()\n. In case of error (e.g. no more watcher IDs available), return-1\nand set an exception.Added in version 3.12.\n-\nint PyFunction_ClearWatcher(int watcher_id)\u00b6\nClear watcher identified by watcher_id previously returned from\nPyFunction_AddWatcher()\nfor the current interpreter. Return0\non success, or-1\nand set an exception on error (e.g. if the given watcher_id was never registered.)Added in version 3.12.\n-\ntype PyFunction_WatchEvent\u00b6\nEnumeration of possible function watcher events:\nPyFunction_EVENT_CREATE\nPyFunction_EVENT_DESTROY\nPyFunction_EVENT_MODIFY_CODE\nPyFunction_EVENT_MODIFY_DEFAULTS\nPyFunction_EVENT_MODIFY_KWDEFAULTS\nAdded in version 3.12.\n-\ntypedef int (*PyFunction_WatchCallback)(PyFunction_WatchEvent event, PyFunctionObject *func, PyObject *new_value)\u00b6\nType of a function watcher callback function.\nIf event is\nPyFunction_EVENT_CREATE\norPyFunction_EVENT_DESTROY\nthen new_value will beNULL\n. 
Otherwise, new_value will hold a borrowed reference to the new value that is about to be stored in func for the attribute that is being modified.The callback may inspect but must not modify func; doing so could have unpredictable effects, including infinite recursion.\nIf event is\nPyFunction_EVENT_CREATE\n, then the callback is invoked after func has been fully initialized. Otherwise, the callback is invoked before the modification to func takes place, so the prior state of func can be inspected. The runtime is permitted to optimize away the creation of function objects when possible. In such cases no event will be emitted. Although this creates the possibility of an observable difference of runtime behavior depending on optimization decisions, it does not change the semantics of the Python code being executed.If event is\nPyFunction_EVENT_DESTROY\n, taking a reference in the callback to the about-to-be-destroyed function will resurrect it, preventing it from being freed at this time. When the resurrected object is destroyed later, any watcher callbacks active at that time will be called again.If the callback sets an exception, it must return\n-1\n; this exception will be printed as an unraisable exception usingPyErr_WriteUnraisable()\n. Otherwise it should return0\n.There may already be a pending exception set on entry to the callback. In this case, the callback should return\n0\nwith the same exception still set. This means the callback may not call any other API that can set an exception unless it saves and clears the exception state first, and restores it before returning.Added in version 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1795}
+{"url": "https://docs.python.org/3/library/asyncio-llapi-index.html", "title": "Low-level API Index", "content": "Low-level API Index\u00b6\nThis page lists all low-level asyncio APIs.\nObtaining the Event Loop\u00b6\nThe preferred function to get the running event loop. 
|\n|\nGet an event loop instance (running or current via the current policy). |\n|\nSet the event loop as current via the current policy. |\n|\nCreate a new event loop. |\nExamples\nEvent Loop Methods\u00b6\nSee also the main documentation section about the Event Loop Methods.\nLifecycle\nRun a Future/Task/awaitable until complete. |\n|\nRun the event loop forever. |\n|\nStop the event loop. |\n|\nClose the event loop. |\n|\nReturn |\n|\nReturn |\n|\nClose asynchronous generators. |\nDebugging\nEnable or disable the debug mode. |\n|\nGet the current debug mode. |\nScheduling Callbacks\nInvoke a callback soon. |\n|\nA thread-safe variant of |\n|\nInvoke a callback after the given time. |\n|\nInvoke a callback at the given time. |\nThread/Interpreter/Process Pool\n|\nRun a CPU-bound or other blocking function in\na |\nSet the default executor for |\nTasks and Futures\nCreate a |\n|\nSchedule coroutine as a |\n|\nSet a factory used by |\n|\nGet the factory |\nDNS\n|\nAsynchronous version of |\n|\nAsynchronous version of |\nNetworking and IPC\n|\nOpen a TCP connection. |\n|\nCreate a TCP server. |\nOpen a Unix socket connection. |\n|\nCreate a Unix socket server. |\n|\nWrap a |\n|\nOpen a datagram (UDP) connection. |\n|\n|\nSend a file over a transport. |\n|\nUpgrade an existing connection to TLS. |\n|\nWrap a read end of a pipe into a |\nWrap a write end of a pipe into a |\nSockets\n|\nReceive data from the |\n|\nReceive data from the |\n|\nReceive a datagram from the |\nReceive a datagram from the |\n|\n|\nSend data to the |\n|\nSend a datagram via the |\n|\nConnect the |\n|\nAccept a |\n|\nSend a file over the |\nStart watching a file descriptor for read availability. |\n|\nStop watching a file descriptor for read availability. |\n|\nStart watching a file descriptor for write availability. |\n|\nStop watching a file descriptor for write availability. |\nUnix Signals\nAdd a handler for a |\n|\nRemove a handler for a |\nSubprocesses\nSpawn a subprocess. 
|\n|\nSpawn a subprocess from a shell command. |\nError Handling\nCall the exception handler. |\n|\nSet a new exception handler. |\n|\nGet the current exception handler. |\n|\nThe default exception handler implementation. |\nExamples\nUsing\nloop.create_connection()\nto implement an echo-client.Using\nloop.create_connection()\nto connect a socket.\nTransports\u00b6\nAll transports implement the following methods:\nClose the transport. |\n|\nReturn |\n|\nRequest for information about the transport. |\n|\nSet a new protocol. |\n|\nReturn the current protocol. |\nTransports that can receive data (TCP and Unix connections,\npipes, etc). Returned from methods like\nloop.create_connection()\n, loop.create_unix_connection()\n,\nloop.connect_read_pipe()\n, etc:\nRead Transports\nReturn |\n|\nPause receiving. |\n|\nResume receiving. |\nTransports that can Send data (TCP and Unix connections,\npipes, etc). Returned from methods like\nloop.create_connection()\n, loop.create_unix_connection()\n,\nloop.connect_write_pipe()\n, etc:\nWrite Transports\nWrite data to the transport. |\n|\nWrite buffers to the transport. |\n|\nReturn |\n|\nClose and send EOF after flushing buffered data. |\n|\nClose the transport immediately. |\n|\nReturn the current size of the output buffer. |\n|\nReturn high and low water marks for write flow control. |\n|\nSet new high and low water marks for write flow control. |\nTransports returned by loop.create_datagram_endpoint()\n:\nDatagram Transports\nSend data to the remote peer. |\n|\nClose the transport immediately. |\nLow-level transport abstraction over subprocesses.\nReturned by loop.subprocess_exec()\nand\nloop.subprocess_shell()\n:\nSubprocess Transports\nReturn the subprocess process id. |\n|\nReturn the transport for the requested communication pipe (stdin, stdout, or stderr). |\n|\nReturn the subprocess return code. |\n|\nKill the subprocess. |\n|\nSend a signal to the subprocess. |\n|\nStop the subprocess. 
|\n|\nKill the subprocess and close all pipes. |\nProtocols\u00b6\nProtocol classes can implement the following callback methods:\n|\nCalled when a connection is made. |\n|\nCalled when the connection is lost or closed. |\n|\nCalled when the transport\u2019s buffer goes over the high water mark. |\n|\nCalled when the transport\u2019s buffer drains below the low water mark. |\nStreaming Protocols (TCP, Unix Sockets, Pipes)\n|\nCalled when some data is received. |\n|\nCalled when an EOF is received. |\nBuffered Streaming Protocols\n|\nCalled to allocate a new receive buffer. |\n|\nCalled when the buffer was updated with the received data. |\n|\nCalled when an EOF is received. |\nDatagram Protocols\n|\nCalled when a datagram is received. |\n|\nCalled when a previous send or receive operation raises an\n|\nSubprocess Protocols\n|\nCalled when the child process writes data into its stdout or stderr pipe. |\n|\nCalled when one of the pipes communicating with the child process is closed. |\n|\nCalled when the child process has exited. It can be called before\n|\nEvent Loop Policies\u00b6\nPolicies are a low-level mechanism to alter the behavior of\nfunctions like asyncio.get_event_loop()\n. See also\nthe main policies section for more\ndetails.\nAccessing Policies\nReturn the current process-wide policy. |\n|\nSet a new process-wide policy. |\n|\nBase class for policy objects. 
|", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1304} +{"url": "https://docs.python.org/3/library/asyncio-platforms.html", "title": "Platform Support", "content": "Platform Support\u00b6\nThe asyncio\nmodule is designed to be portable,\nbut some platforms have subtle differences and limitations\ndue to the platforms\u2019 underlying architecture and capabilities.\nAll Platforms\u00b6\nloop.add_reader()\nandloop.add_writer()\ncannot be used to monitor file I/O.\nWindows\u00b6\nSource code: Lib/asyncio/proactor_events.py, Lib/asyncio/windows_events.py, Lib/asyncio/windows_utils.py\nChanged in version 3.8: On Windows, ProactorEventLoop\nis now the default event loop.\nAll event loops on Windows do not support the following methods:\nloop.create_unix_connection()\nandloop.create_unix_server()\nare not supported. Thesocket.AF_UNIX\nsocket family is specific to Unix.loop.add_signal_handler()\nandloop.remove_signal_handler()\nare not supported.\nSelectorEventLoop\nhas the following limitations:\nSelectSelector\nis used to wait on socket events: it supports sockets and is limited to 512 sockets.loop.add_reader()\nandloop.add_writer()\nonly accept socket handles (e.g. pipe file descriptors are not supported).Pipes are not supported, so the\nloop.connect_read_pipe()\nandloop.connect_write_pipe()\nmethods are not implemented.Subprocesses are not supported, i.e.\nloop.subprocess_exec()\nandloop.subprocess_shell()\nmethods are not implemented.\nProactorEventLoop\nhas the following limitations:\nThe\nloop.add_reader()\nandloop.add_writer()\nmethods are not supported.\nThe resolution of the monotonic clock on Windows is usually around 15.6 milliseconds. The best resolution is 0.5 milliseconds. 
The resolution depends on the hardware (availability of HPET) and on the Windows configuration.\nSubprocess Support on Windows\u00b6\nOn Windows, the default event loop ProactorEventLoop\nsupports\nsubprocesses, whereas SelectorEventLoop\ndoes not.\nmacOS\u00b6\nModern macOS versions are fully supported.\nmacOS <= 10.8\nOn macOS 10.6, 10.7 and 10.8, the default event loop\nuses selectors.KqueueSelector\n, which does not support\ncharacter devices on these versions. The SelectorEventLoop\ncan be manually configured to use SelectSelector\nor PollSelector\nto support character devices on\nthese older versions of macOS. Example:\nimport asyncio\nimport selectors\nselector = selectors.SelectSelector()\nloop = asyncio.SelectorEventLoop(selector)\nasyncio.set_event_loop(loop)", "code_snippets": ["\n", "\n\n", " ", " ", "\n", " ", " ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 558} +{"url": "https://docs.python.org/3/c-api/complex.html", "title": "Complex Number Objects", "content": "Complex Number Objects\u00b6\nPython\u2019s complex number objects are implemented as two distinct types when viewed from the C API: one is the Python object exposed to Python programs, and the other is a C structure which represents the actual complex number value. The API provides functions for working with both.\nComplex Numbers as C Structures\u00b6\nNote that the functions which accept these structures as parameters and return them as results do so by value rather than dereferencing them through pointers. This is consistent throughout the API.\n-\ntype Py_complex\u00b6\nThe C structure which corresponds to the value portion of a Python complex number object. 
Most of the functions for dealing with complex number objects use structures of this type as input or output values, as appropriate.\nThe structure is defined as:\ntypedef struct { double real; double imag; } Py_complex;\n-\nPy_complex _Py_c_sum(Py_complex left, Py_complex right)\u00b6\nReturn the sum of two complex numbers, using the C\nPy_complex\nrepresentation.\n-\nPy_complex _Py_c_diff(Py_complex left, Py_complex right)\u00b6\nReturn the difference between two complex numbers, using the C\nPy_complex\nrepresentation.\n-\nPy_complex _Py_c_neg(Py_complex num)\u00b6\nReturn the negation of the complex number num, using the C\nPy_complex\nrepresentation.\n-\nPy_complex _Py_c_prod(Py_complex left, Py_complex right)\u00b6\nReturn the product of two complex numbers, using the C\nPy_complex\nrepresentation.\n-\nPy_complex _Py_c_quot(Py_complex dividend, Py_complex divisor)\u00b6\nReturn the quotient of two complex numbers, using the C\nPy_complex\nrepresentation.If divisor is null, this method returns zero and sets\nerrno\ntoEDOM\n.\n-\nPy_complex _Py_c_pow(Py_complex num, Py_complex exp)\u00b6\nReturn the exponentiation of num by exp, using the C\nPy_complex\nrepresentation.If num is null and exp is not a positive real number, this method returns zero and sets\nerrno\ntoEDOM\n.Set\nerrno\ntoERANGE\non overflows.\nComplex Numbers as Python Objects\u00b6\n-\nPyTypeObject PyComplex_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python complex number type. It is the same object ascomplex\nin the Python layer.\n-\nint PyComplex_Check(PyObject *p)\u00b6\nReturn true if its argument is a\nPyComplexObject\nor a subtype ofPyComplexObject\n. This function always succeeds.\n-\nint PyComplex_CheckExact(PyObject *p)\u00b6\nReturn true if its argument is a\nPyComplexObject\n, but not a subtype ofPyComplexObject\n. 
This function always succeeds.\n-\nPyObject *PyComplex_FromCComplex(Py_complex v)\u00b6\n- Return value: New reference.\nCreate a new Python complex number object from a C\nPy_complex\nvalue. ReturnNULL\nwith an exception set on error.\n-\nPyObject *PyComplex_FromDoubles(double real, double imag)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nPyComplexObject\nobject from real and imag. ReturnNULL\nwith an exception set on error.\n-\ndouble PyComplex_RealAsDouble(PyObject *op)\u00b6\n- Part of the Stable ABI.\nReturn the real part of op as a C double.\nIf op is not a Python complex number object but has a\n__complex__()\nmethod, this method will first be called to convert op to a Python complex number object. If__complex__()\nis not defined then it falls back to callPyFloat_AsDouble()\nand returns its result.Upon failure, this method returns\n-1.0\nwith an exception set, so one should callPyErr_Occurred()\nto check for errors.Changed in version 3.13: Use\n__complex__()\nif available.\n-\ndouble PyComplex_ImagAsDouble(PyObject *op)\u00b6\n- Part of the Stable ABI.\nReturn the imaginary part of op as a C double.\nIf op is not a Python complex number object but has a\n__complex__()\nmethod, this method will first be called to convert op to a Python complex number object. If__complex__()\nis not defined then it falls back to callPyFloat_AsDouble()\nand returns0.0\non success.Upon failure, this method returns\n-1.0\nwith an exception set, so one should callPyErr_Occurred()\nto check for errors.Changed in version 3.13: Use\n__complex__()\nif available.\n-\nPy_complex PyComplex_AsCComplex(PyObject *op)\u00b6\nReturn the\nPy_complex\nvalue of the complex number op.If op is not a Python complex number object but has a\n__complex__()\nmethod, this method will first be called to convert op to a Python complex number object. If__complex__()\nis not defined then it falls back to__float__()\n. 
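The `__complex__`/`__float__` fallback chain used by these conversion functions can be observed from pure Python; a small illustrative sketch (the class names below are made up for the example, not part of the C API):

```python
# Sketch of the conversion fallbacks described for PyComplex_RealAsDouble
# and friends: __complex__ is preferred, then __float__.

class HasComplex:
    def __complex__(self):
        return 3.0 + 4.0j

class HasFloat:
    def __float__(self):
        return 2.5

print(complex(HasComplex()))   # __complex__ is used -> (3+4j)
print(complex(HasFloat()))     # falls back to __float__ -> (2.5+0j)

# The Py_complex helpers (_Py_c_sum, _Py_c_quot, ...) correspond to
# ordinary complex arithmetic at the Python level:
a, b = 1 + 2j, 3 - 1j
print(a + b)                   # (4+1j)
print(a / b)
```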
If__float__()\nis not defined then it falls back to__index__()\n.Upon failure, this method returns\nPy_complex\nwithreal\nset to-1.0\nand with an exception set, so one should callPyErr_Occurred()\nto check for errors.Changed in version 3.8: Use\n__index__()\nif available.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1133} +{"url": "https://docs.python.org/3/c-api/float.html", "title": "Floating-Point Objects", "content": "Floating-Point Objects\u00b6\n-\nPyTypeObject PyFloat_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python floating-point type. This is the same object asfloat\nin the Python layer.\n-\nint PyFloat_Check(PyObject *p)\u00b6\nReturn true if its argument is a\nPyFloatObject\nor a subtype ofPyFloatObject\n. This function always succeeds.\n-\nint PyFloat_CheckExact(PyObject *p)\u00b6\nReturn true if its argument is a\nPyFloatObject\n, but not a subtype ofPyFloatObject\n. This function always succeeds.\n-\nPyObject *PyFloat_FromString(PyObject *str)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a\nPyFloatObject\nobject based on the string value in str, orNULL\non failure.\n-\nPyObject *PyFloat_FromDouble(double v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a\nPyFloatObject\nobject from v, orNULL\non failure.\n-\ndouble PyFloat_AsDouble(PyObject *pyfloat)\u00b6\n- Part of the Stable ABI.\nReturn a C double representation of the contents of pyfloat. If pyfloat is not a Python floating-point object but has a\n__float__()\nmethod, this method will first be called to convert pyfloat into a float. If__float__()\nis not defined then it falls back to__index__()\n. 
This method returns-1.0\nupon failure, so one should callPyErr_Occurred()\nto check for errors.Changed in version 3.8: Use\n__index__()\nif available.\n-\ndouble PyFloat_AS_DOUBLE(PyObject *pyfloat)\u00b6\nReturn a C double representation of the contents of pyfloat, but without error checking.\n-\nPyObject *PyFloat_GetInfo(void)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a structseq instance which contains information about the precision, minimum and maximum values of a float. It\u2019s a thin wrapper around the header file\nfloat.h\n.\n-\ndouble PyFloat_GetMax()\u00b6\n- Part of the Stable ABI.\nReturn the maximum representable finite float DBL_MAX as C double.\n-\ndouble PyFloat_GetMin()\u00b6\n- Part of the Stable ABI.\nReturn the minimum normalized positive float DBL_MIN as C double.\n-\nPy_INFINITY\u00b6\nThis macro expands to a constant expression of type double, that represents the positive infinity.\nOn most platforms, this is equivalent to the\nINFINITY\nmacro from the C11 standard\nheader.\n-\nPy_NAN\u00b6\nThis macro expands to a constant expression of type double, that represents a quiet not-a-number (qNaN) value.\nOn most platforms, this is equivalent to the\nNAN\nmacro from the C11 standard\nheader.\n-\nPy_HUGE_VAL\u00b6\nEquivalent to\nINFINITY\n.Deprecated since version 3.14: The macro is soft deprecated.\n-\nPy_MATH_TAU\u00b6\nThe definition (accurate for a double type) of the\nmath.tau\nconstant.Added in version 3.6.\n-\nPy_RETURN_NAN\u00b6\nReturn\nmath.nan\nfrom a function.On most platforms, this is equivalent to\nreturn PyFloat_FromDouble(NAN)\n.\n-\nPy_RETURN_INF(sign)\u00b6\nReturn\nmath.inf\nor-math.inf\nfrom a function, depending on the sign of sign.On most platforms, this is equivalent to the following:\nreturn PyFloat_FromDouble(copysign(INFINITY, sign));\n-\nPy_IS_FINITE(X)\u00b6\nReturn\n1\nif the given floating-point number X is finite, that is, it is normal, subnormal or zero, but not infinite or NaN. 
Return0\notherwise.Deprecated since version 3.14: The macro is soft deprecated. Use\nisfinite\ninstead.\n-\nPy_IS_INFINITY(X)\u00b6\nReturn\n1\nif the given floating-point number X is positive or negative infinity. Return0\notherwise.Deprecated since version 3.14: The macro is soft deprecated. Use\nisinf\ninstead.\n-\nPy_IS_NAN(X)\u00b6\nReturn\n1\nif the given floating-point number X is a not-a-number (NaN) value. Return0\notherwise.Deprecated since version 3.14: The macro is soft deprecated. Use\nisnan\ninstead.\nPack and Unpack functions\u00b6\nThe pack and unpack functions provide an efficient platform-independent way to store floating-point values as byte strings. The Pack routines produce a bytes string from a C double, and the Unpack routines produce a C double from such a bytes string. The suffix (2, 4 or 8) specifies the number of bytes in the bytes string.\nOn platforms that appear to use IEEE 754 formats these functions work by copying bits. On other platforms, the 2-byte format is identical to the IEEE 754 binary16 half-precision format, the 4-byte format (32-bit) is identical to the IEEE 754 binary32 single precision format, and the 8-byte format to the IEEE 754 binary64 double precision format, although the packing of INFs and NaNs (if such things exist on the platform) isn\u2019t handled correctly, and attempting to unpack a bytes string containing an IEEE INF or NaN will raise an exception.\nNote that the NaN type may not be preserved on IEEE platforms (a signaling NaN becomes a quiet NaN), for example on x86 systems in 32-bit mode.\nOn non-IEEE platforms with more precision, or larger dynamic range, than IEEE 754 supports, not all values can be packed; on non-IEEE platforms with less precision, or smaller dynamic range, not all values can be unpacked. What happens in such cases is partly accidental (alas).\nAdded in version 3.11.\nPack functions\u00b6\nThe pack routines write 2, 4 or 8 bytes, starting at p. 
le is an\nint argument, non-zero if you want the bytes string in little-endian\nformat (exponent last, at p+1\n, p+3\n, or p+6\nand p+7\n), zero if you\nwant big-endian format (exponent first, at p). The PY_BIG_ENDIAN\nconstant can be used to use the native endian: it is equal to 1\non big\nendian processor, or 0\non little endian processor.\nReturn value: 0\nif all is OK, -1\nif error (and an exception is set,\nmost likely OverflowError\n).\nThere are two problems on non-IEEE platforms:\nWhat this does is undefined if x is a NaN or infinity.\n-0.0\nand+0.0\nproduce the same bytes string.\n-\nint PyFloat_Pack2(double x, char *p, int le)\u00b6\nPack a C double as the IEEE 754 binary16 half-precision format.\n-\nint PyFloat_Pack4(double x, char *p, int le)\u00b6\nPack a C double as the IEEE 754 binary32 single precision format.\n-\nint PyFloat_Pack8(double x, char *p, int le)\u00b6\nPack a C double as the IEEE 754 binary64 double precision format.\nUnpack functions\u00b6\nThe unpack routines read 2, 4 or 8 bytes, starting at p. le is an\nint argument, non-zero if the bytes string is in little-endian format\n(exponent last, at p+1\n, p+3\nor p+6\nand p+7\n), zero if big-endian\n(exponent first, at p). The PY_BIG_ENDIAN\nconstant can be used to\nuse the native endian: it is equal to 1\non big endian processor, or 0\non little endian processor.\nReturn value: The unpacked double. 
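For reference, the byte layouts described here match Python's struct format codes 'e', 'f' and 'd'; a Python-level sketch of the analogous little-endian round trip (this uses the struct module to illustrate the layouts, not the C Pack/Unpack routines themselves):

```python
import struct

# Little-endian packing analogous to PyFloat_Pack2/4/8 with le != 0.
# '<e' = IEEE 754 binary16, '<f' = binary32, '<d' = binary64.
half = struct.pack('<e', 1.5)
single = struct.pack('<f', 1.5)
double = struct.pack('<d', 1.5)
print(len(half), len(single), len(double))   # 2 4 8

# Unpacking recovers the C-double value, like PyFloat_Unpack*.
print(struct.unpack('<d', double)[0])        # 1.5

# '>d' gives the big-endian layout (le == 0): the exponent byte comes first.
assert struct.pack('>d', 1.5) == double[::-1]
```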
On error, this is -1.0\nand\nPyErr_Occurred()\nis true (and an exception is set, most likely\nOverflowError\n).\nNote that on a non-IEEE platform this will refuse to unpack a bytes string that represents a NaN or infinity.\n-\ndouble PyFloat_Unpack2(const char *p, int le)\u00b6\nUnpack the IEEE 754 binary16 half-precision format as a C double.\n-\ndouble PyFloat_Unpack4(const char *p, int le)\u00b6\nUnpack the IEEE 754 binary32 single precision format as a C double.\n-\ndouble PyFloat_Unpack8(const char *p, int le)\u00b6\nUnpack the IEEE 754 binary64 double precision format as a C double.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1715} +{"url": "https://docs.python.org/3/c-api/bytearray.html", "title": "Byte Array Objects", "content": "Byte Array Objects\u00b6\n-\nPyTypeObject PyByteArray_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python bytearray type; it is the same object asbytearray\nin the Python layer.\nType check macros\u00b6\nDirect API functions\u00b6\n-\nPyObject *PyByteArray_FromObject(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new bytearray object from any object, o, that implements the buffer protocol.\nOn failure, return\nNULL\nwith an exception set.\n-\nPyObject *PyByteArray_FromStringAndSize(const char *string, Py_ssize_t len)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a new bytearray object from string and its length, len.\nOn failure, return\nNULL\nwith an exception set.\n-\nPyObject *PyByteArray_Concat(PyObject *a, PyObject *b)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nConcat bytearrays a and b and return a new bytearray with the result.\nOn failure, return\nNULL\nwith an exception set.\n-\nPy_ssize_t PyByteArray_Size(PyObject *bytearray)\u00b6\n- Part of the Stable ABI.\nReturn the size of bytearray after checking for a\nNULL\npointer.\n-\nchar *PyByteArray_AsString(PyObject *bytearray)\u00b6\n- Part of the Stable ABI.\nReturn the contents of bytearray as a char array after checking for a\nNULL\npointer. The returned array always has an extra null byte appended.\n-\nint PyByteArray_Resize(PyObject *bytearray, Py_ssize_t len)\u00b6\n- Part of the Stable ABI.\nResize the internal buffer of bytearray to len. Failure is a\n-1\nreturn with an exception set.Changed in version 3.14: A negative len will now result in an exception being set and -1 returned.\nMacros\u00b6\nThese macros trade safety for speed and they don\u2019t check pointers.\n-\nchar *PyByteArray_AS_STRING(PyObject *bytearray)\u00b6\nSimilar to\nPyByteArray_AsString()\n, but without error checking.\n-\nPy_ssize_t PyByteArray_GET_SIZE(PyObject *bytearray)\u00b6\nSimilar to\nPyByteArray_Size()\n, but without error checking.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 476} +{"url": "https://docs.python.org/3/c-api/conversion.html", "title": "String conversion and formatting", "content": "String conversion and formatting\u00b6\nFunctions for number conversion and formatted string output.\n-\nint PyOS_snprintf(char *str, size_t size, const char *format, ...)\u00b6\n- Part of the Stable ABI.\nOutput not more than size bytes to str according to the format string format and the extra arguments. See the Unix man page snprintf(3).\n-\nint PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va)\u00b6\n- Part of the Stable ABI.\nOutput not more than size bytes to str according to the format string format and the variable argument list va. 
See the Unix man page vsnprintf(3).\nPyOS_snprintf()\nand PyOS_vsnprintf()\nwrap the Standard C library\nfunctions snprintf()\nand vsnprintf()\n. Their purpose is to\nguarantee consistent behavior in corner cases, which the Standard C functions do\nnot.\nThe wrappers ensure that str[size-1]\nis always '\\0'\nupon return. They\nnever write more than size bytes (including the trailing '\\0'\n) into str.\nBoth functions require that str != NULL\n, size > 0\n, format != NULL\nand size < INT_MAX\n. Note that this means there is no equivalent to the C99\nn = snprintf(NULL, 0, ...)\nwhich would determine the necessary buffer size.\nThe return value (rv) for these functions should be interpreted as follows:\nWhen\n0 <= rv < size\n, the output conversion was successful and rv characters were written to str (excluding the trailing'\\0'\nbyte atstr[rv]\n).When\nrv >= size\n, the output conversion was truncated and a buffer withrv + 1\nbytes would have been needed to succeed.str[size-1]\nis'\\0'\nin this case.When\nrv < 0\n, the output conversion failed andstr[size-1]\nis'\\0'\nin this case too, but the rest of str is undefined. The exact cause of the error depends on the underlying platform.\nThe following functions provide locale-independent string to number conversions.\n-\nunsigned long PyOS_strtoul(const char *str, char **ptr, int base)\u00b6\n- Part of the Stable ABI.\nConvert the initial part of the string in\nstr\nto an unsigned long value according to the given\nbase\n, which must be between2\nand36\ninclusive, or be the special value0\n.Leading white space and case of characters are ignored. If\nbase\nis zero it looks for a leading0b\n,0o\nor0x\nto tell which base. If these are absent it defaults to10\n. Base must be 0 or between 2 and 36 (inclusive). Ifptr\nis non-NULL\nit will contain a pointer to the end of the scan.If the converted value falls out of range of corresponding return type, range error occurs (\nerrno\nis set toERANGE\n) andULONG_MAX\nis returned. 
If no conversion can be performed,0\nis returned.See also the Unix man page strtoul(3).\nAdded in version 3.2.\n-\nlong PyOS_strtol(const char *str, char **ptr, int base)\u00b6\n- Part of the Stable ABI.\nConvert the initial part of the string in\nstr\nto a long value according to the given\nbase\n, which must be between2\nand36\ninclusive, or be the special value0\n.Same as\nPyOS_strtoul()\n, but return a long value instead andLONG_MAX\non overflows.See also the Unix man page strtol(3).\nAdded in version 3.2.\n-\ndouble PyOS_string_to_double(const char *s, char **endptr, PyObject *overflow_exception)\u00b6\n- Part of the Stable ABI.\nConvert a string\ns\nto a double, raising a Python exception on failure. The set of accepted strings corresponds to the set of strings accepted by Python\u2019s\nfloat()\nconstructor, except that\ns\nmust not have leading or trailing whitespace. The conversion is independent of the current locale.If\nendptr\nisNULL\n, convert the whole string. RaiseValueError\nand return-1.0\nif the string is not a valid representation of a floating-point number.If endptr is not\nNULL\n, convert as much of the string as possible and set*endptr\nto point to the first unconverted character. If no initial segment of the string is the valid representation of a floating-point number, set*endptr\nto point to the beginning of the string, raise ValueError, and return-1.0\n.If\ns\nrepresents a value that is too large to store in a float (for example,\"1e500\"\nis such a string on many platforms) then ifoverflow_exception\nisNULL\nreturnPy_INFINITY\n(with an appropriate sign) and don\u2019t set any exception. Otherwise,overflow_exception\nmust point to a Python exception object; raise that exception and return-1.0\n. 
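At the Python level, the built-in int() and float() expose closely related conversion behavior; an illustrative sketch (these are the Python built-ins, not the C functions):

```python
import math

# int(s, 0) auto-detects the base from a 0b/0o/0x prefix, like
# PyOS_strtoul with base == 0 (no prefix defaults to base 10).
print(int("0x1f", 0), int("0o17", 0), int("0b101", 0), int("42", 0))  # 31 15 5 42

# float() accepts essentially the same strings as PyOS_string_to_double
# (float() additionally tolerates surrounding whitespace).
print(float("1.5e3"))   # 1500.0

# An overflowing decimal string yields an infinity, mirroring the
# behavior described for a NULL overflow_exception.
assert math.isinf(float("1e500"))
```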
In both cases, set*endptr\nto point to the first character after the converted value.If any other error occurs during the conversion (for example an out-of-memory error), set the appropriate Python exception and return\n-1.0\n.Added in version 3.1.\n-\nchar *PyOS_double_to_string(double val, char format_code, int precision, int flags, int *ptype)\u00b6\n- Part of the Stable ABI.\nConvert a double val to a string using supplied format_code, precision, and flags.\nformat_code must be one of\n'e'\n,'E'\n,'f'\n,'F'\n,'g'\n,'G'\nor'r'\n. For'r'\n, the supplied precision must be 0 and is ignored. The'r'\nformat code specifies the standardrepr()\nformat.flags can be zero or more of the following values or-ed together:\n-\nPy_DTSF_SIGN\u00b6\nAlways precede the returned string with a sign character, even if val is non-negative.\n-\nPy_DTSF_ADD_DOT_0\u00b6\nEnsure that the returned string will not look like an integer.\n-\nPy_DTSF_ALT\u00b6\nApply \u201calternate\u201d formatting rules. See the documentation for the\nPyOS_snprintf()\n'#'\nspecifier for details.\n-\nPy_DTSF_NO_NEG_0\u00b6\nNegative zero is converted to positive zero.\nAdded in version 3.11.\nIf ptype is non-\nNULL\n, then the value it points to will be set to one of the following constants depending on the type of val:*ptype\ntype of val\n-\nPy_DTST_FINITE\u00b6\nfinite number\n-\nPy_DTST_INFINITE\u00b6\ninfinite number\n-\nPy_DTST_NAN\u00b6\nnot a number\nThe return value is a pointer to buffer with the converted string or\nNULL\nif the conversion failed. The caller is responsible for freeing the returned string by callingPyMem_Free()\n.Added in version 3.1.\n-\nPy_DTSF_SIGN\u00b6\n-\nint PyOS_mystricmp(const char *str1, const char *str2)\u00b6\n-\nint PyOS_mystrnicmp(const char *str1, const char *str2, Py_ssize_t size)\u00b6\n- Part of the Stable ABI.\nCase insensitive comparison of strings. 
These functions work almost identically to\nstrcmp()\nandstrncmp()\n(respectively), except that they ignore the case of ASCII characters.Return\n0\nif the strings are equal, a negative value if str1 sorts lexicographically before str2, or a positive value if it sorts after.In the str1 or str2 arguments, a NUL byte marks the end of the string. For\nPyOS_mystrnicmp()\n, the size argument gives the maximum size of the string, as if NUL was present at the index given by size.These functions do not use the locale.\n-\nint PyOS_stricmp(const char *str1, const char *str2)\u00b6\n-\nint PyOS_strnicmp(const char *str1, const char *str2, Py_ssize_t size)\u00b6\nCase insensitive comparison of strings.\nOn Windows, these are aliases of\nstricmp()\nandstrnicmp()\n, respectively.On other platforms, they are aliases of\nPyOS_mystricmp()\nandPyOS_mystrnicmp()\n, respectively.\nCharacter classification and conversion\u00b6\nThe following macros provide locale-independent (unlike the C standard library\nctype.h\n) character classification and conversion.\nThe argument must be a signed or unsigned char.\n-\nPy_ISALNUM(c)\u00b6\nReturn true if the character c is an alphanumeric character.\n-\nPy_ISALPHA(c)\u00b6\nReturn true if the character c is an alphabetic character (\na-z\nandA-Z\n).\n-\nPy_ISDIGIT(c)\u00b6\nReturn true if the character c is a decimal digit (\n0-9\n).\n-\nPy_ISLOWER(c)\u00b6\nReturn true if the character c is a lowercase ASCII letter (\na-z\n).\n-\nPy_ISUPPER(c)\u00b6\nReturn true if the character c is an uppercase ASCII letter (\nA-Z\n).\n-\nPy_ISSPACE(c)\u00b6\nReturn true if the character c is a whitespace character (space, tab, carriage return, newline, vertical tab, or form feed).\n-\nPy_ISXDIGIT(c)\u00b6\nReturn true if the character c is a hexadecimal digit (\n0-9\n,a-f\n, andA-F\n).\n-\nPy_TOLOWER(c)\u00b6\nReturn the lowercase equivalent of the character c.\n-\nPy_TOUPPER(c)\u00b6\nReturn the uppercase equivalent of the character c.", "code_snippets": 
[], "language": "Python", "source": "python.org", "token_count": 1939} +{"url": "https://docs.python.org/3/c-api/marshal.html", "title": "Data marshalling support", "content": "Data marshalling support\u00b6\nThese routines allow C code to work with serialized objects using the same\ndata format as the marshal\nmodule. There are functions to write data\ninto the serialization format, and additional functions that can be used to\nread the data back. Files used to store marshalled data must be opened in\nbinary mode.\nNumeric values are stored with the least significant byte first.\nThe module supports several versions of the data format; see\nthe Python module documentation\nfor details.\n-\nPy_MARSHAL_VERSION\u00b6\nThe current format version. See\nmarshal.version\n.\n-\nvoid PyMarshal_WriteLongToFile(long value, FILE *file, int version)\u00b6\nMarshal a long integer, value, to file. This will only write the least-significant 32 bits of value; regardless of the size of the native long type. version indicates the file format.\nThis function can fail, in which case it sets the error indicator. Use\nPyErr_Occurred()\nto check for that.\n-\nvoid PyMarshal_WriteObjectToFile(PyObject *value, FILE *file, int version)\u00b6\nMarshal a Python object, value, to file. version indicates the file format.\nThis function can fail, in which case it sets the error indicator. Use\nPyErr_Occurred()\nto check for that.\n-\nPyObject *PyMarshal_WriteObjectToString(PyObject *value, int version)\u00b6\n- Return value: New reference.\nReturn a bytes object containing the marshalled representation of value. version indicates the file format.\nThe following functions allow marshalled values to be read back in.\n-\nlong PyMarshal_ReadLongFromFile(FILE *file)\u00b6\nReturn a C long from the data stream in a FILE* opened for reading. 
Only a 32-bit value can be read in using this function, regardless of the native size of long.\nOn error, sets the appropriate exception (\nEOFError\n) and returns-1\n.\n-\nint PyMarshal_ReadShortFromFile(FILE *file)\u00b6\nReturn a C short from the data stream in a FILE* opened for reading. Only a 16-bit value can be read in using this function, regardless of the native size of short.\nOn error, sets the appropriate exception (\nEOFError\n) and returns-1\n.\n-\nPyObject *PyMarshal_ReadObjectFromFile(FILE *file)\u00b6\n- Return value: New reference.\nReturn a Python object from the data stream in a FILE* opened for reading.\nOn error, sets the appropriate exception (\nEOFError\n,ValueError\norTypeError\n) and returnsNULL\n.\n-\nPyObject *PyMarshal_ReadLastObjectFromFile(FILE *file)\u00b6\n- Return value: New reference.\nReturn a Python object from the data stream in a FILE* opened for reading. Unlike\nPyMarshal_ReadObjectFromFile()\n, this function assumes that no further objects will be read from the file, allowing it to aggressively load file data into memory so that the de-serialization can operate from data in memory rather than reading a byte at a time from the file. 
Only use this variant if you are certain that you won\u2019t be reading anything else from the file.On error, sets the appropriate exception (\nEOFError\n,ValueError\norTypeError\n) and returnsNULL\n.\n-\nPyObject *PyMarshal_ReadObjectFromString(const char *data, Py_ssize_t len)\u00b6\n- Return value: New reference.\nReturn a Python object from the data stream in a byte buffer containing len bytes pointed to by data.\nOn error, sets the appropriate exception (\nEOFError\n,ValueError\norTypeError\n) and returnsNULL\n.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 810} +{"url": "https://docs.python.org/3/library/email.header.html", "title": ": Internationalized headers", "content": "email.header\n: Internationalized headers\u00b6\nSource code: Lib/email/header.py\nThis module is part of the legacy (Compat32\n) email API. In the current API\nencoding and decoding of headers is handled transparently by the\ndictionary-like API of the EmailMessage\nclass. In\naddition to uses in legacy code, this module can be useful in applications that\nneed to completely control the character sets used when encoding headers.\nThe remaining text in this section is the original documentation of the module.\nRFC 2822 is the base standard that describes the format of email messages. It derives from the older RFC 822 standard which came into widespread use at a time when most email was composed of ASCII characters only. RFC 2822 is a specification written assuming email contains only 7-bit ASCII characters.\nOf course, as email has been deployed worldwide, it has become\ninternationalized, such that language specific character sets can now be used in\nemail messages. The base standard still requires email messages to be\ntransferred using only 7-bit ASCII characters, so a slew of RFCs have been\nwritten describing how to encode email containing non-ASCII characters into\nRFC 2822-compliant format. 
These RFCs include RFC 2045, RFC 2046,\nRFC 2047, and RFC 2231. The email\npackage supports these standards\nin its email.header\nand email.charset\nmodules.\nIf you want to include non-ASCII characters in your email headers, say in the\nSubject or To fields, you should use the\nHeader\nclass and assign the field in the Message\nobject to an instance of Header\ninstead of using a string for the header\nvalue. Import the Header\nclass from the email.header\nmodule.\nFor example:\n>>> from email.message import Message\n>>> from email.header import Header\n>>> msg = Message()\n>>> h = Header('p\\xf6stal', 'iso-8859-1')\n>>> msg['Subject'] = h\n>>> msg.as_string()\n'Subject: =?iso-8859-1?q?p=F6stal?=\\n\\n'\nNotice here how we wanted the Subject field to contain a non-ASCII\ncharacter? We did this by creating a Header\ninstance and passing in\nthe character set that the byte string was encoded in. When the subsequent\nMessage\ninstance was flattened, the Subject\nfield was properly RFC 2047 encoded. MIME-aware mail readers would show this\nheader using the embedded ISO-8859-1 character.\nHere is the Header\nclass description:\n- class email.header.Header(s=None, charset=None, maxlinelen=None, header_name=None, continuation_ws=' ', errors='strict')\u00b6\nCreate a MIME-compliant header that can contain strings in different character sets.\nOptional s is the initial header value. If\nNone\n(the default), the initial header value is not set. You can later append to the header withappend()\nmethod calls. s may be an instance ofbytes\norstr\n, but see theappend()\ndocumentation for semantics.Optional charset serves two purposes: it has the same meaning as the charset argument to the\nappend()\nmethod. It also sets the default character set for all subsequentappend()\ncalls that omit the charset argument. 
If charset is not provided in the constructor (the default), theus-ascii\ncharacter set is used both as s\u2019s initial charset and as the default for subsequentappend()\ncalls.The maximum line length can be specified explicitly via maxlinelen. For splitting the first line to a shorter value (to account for the field header which isn\u2019t included in s, e.g. Subject) pass in the name of the field in header_name. The default maxlinelen is 78, and the default value for header_name is\nNone\n, meaning it is not taken into account for the first line of a long, split header.Optional continuation_ws must be RFC 2822-compliant folding whitespace, and is usually either a space or a hard tab character. This character will be prepended to continuation lines. continuation_ws defaults to a single space character.\nOptional errors is passed straight through to the\nappend()\nmethod.- append(s, charset=None, errors='strict')\u00b6\nAppend the string s to the MIME header.\nOptional charset, if given, should be a\nCharset\ninstance (seeemail.charset\n) or the name of a character set, which will be converted to aCharset\ninstance. A value ofNone\n(the default) means that the charset given in the constructor is used.s may be an instance of\nbytes\norstr\n. If it is an instance ofbytes\n, then charset is the encoding of that byte string, and aUnicodeError\nwill be raised if the string cannot be decoded with that character set.If s is an instance of\nstr\n, then charset is a hint specifying the character set of the characters in the string.In either case, when producing an RFC 2822-compliant header using RFC 2047 rules, the string will be encoded using the output codec of the charset. 
If the string cannot be encoded using the output codec, a UnicodeError will be raised.\nOptional errors is passed as the errors argument to the decode call if s is a byte string.\n- encode(splitchars=';, \\t', maxlinelen=None, linesep='\\n')\u00b6\nEncode a message header into an RFC-compliant format, possibly wrapping long lines and encapsulating non-ASCII parts in base64 or quoted-printable encodings.\nOptional splitchars is a string containing characters which should be given extra weight by the splitting algorithm during normal header wrapping. This is in very rough support of RFC 2822's \u2018higher level syntactic breaks\u2019: split points preceded by a splitchar are preferred during line splitting, with the characters preferred in the order in which they appear in the string. Space and tab may be included in the string to indicate whether preference should be given to one over the other as a split point when other split chars do not appear in the line being split. Splitchars does not affect RFC 2047 encoded lines.\nmaxlinelen, if given, overrides the instance\u2019s value for the maximum line length.\nlinesep specifies the characters used to separate the lines of the folded header. It defaults to the most useful value for Python application code (\n\\n\n), but\\r\\n\ncan be specified in order to produce headers with RFC-compliant line separators.Changed in version 3.2: Added the linesep argument.\nThe\nHeader\nclass also provides a number of methods to support standard operators and built-in functions.- __str__()\u00b6\nReturns an approximation of the\nHeader\nas a string, using an unlimited line length. All pieces are converted to unicode using the specified encoding and joined together appropriately. 
Any pieces with a charset of'unknown-8bit'\nare decoded as ASCII using the'replace'\nerror handler.Changed in version 3.2: Added handling for the\n'unknown-8bit'\ncharset.\nThe email.header\nmodule also provides the following convenient functions.\n- email.header.decode_header(header)\u00b6\nDecode a message header value without converting the character set. The header value is in header.\nFor historical reasons, this function may return either:\nA list of pairs containing each of the decoded parts of the header,\n(decoded_bytes, charset)\n, where decoded_bytes is always an instance ofbytes\n, and charset is either:A lower case string containing the name of the character set specified.\nNone\nfor non-encoded parts of the header.\nA list of length 1 containing a pair\n(string, None)\n, where string is always an instance ofstr\n.\nAn\nemail.errors.HeaderParseError\nmay be raised when certain decoding errors occur (e.g. a base64 decoding exception).Here are examples:\n>>> from email.header import decode_header >>> decode_header('=?iso-8859-1?q?p=F6stal?=') [(b'p\\xf6stal', 'iso-8859-1')] >>> decode_header('unencoded_string') [('unencoded_string', None)] >>> decode_header('bar =?utf-8?B?ZsOzbw==?=') [(b'bar ', None), (b'f\\xc3\\xb3o', 'utf-8')]\nNote\nThis function exists for backwards compatibility only. For new code, we recommend using\nemail.headerregistry.HeaderRegistry\n.\n- email.header.make_header(decoded_seq, maxlinelen=None, header_name=None, continuation_ws=' ')\u00b6\nCreate a\nHeader\ninstance from a sequence of pairs as returned bydecode_header()\n.decode_header()\ntakes a header value string and returns a sequence of pairs of the format(decoded_string, charset)\nwhere charset is the name of the character set.This function takes one of those sequence of pairs and returns a\nHeader\ninstance. 
Optional maxlinelen, header_name, and continuation_ws are as in theHeader\nconstructor.Note\nThis function exists for backwards compatibility only, and is not recommended for use in new code.", "code_snippets": [" ", "\n", " ", "\n", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", "\n", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 2074} +{"url": "https://docs.python.org/3/whatsnew/3.2.html", "title": "What\u2019s New In Python 3.2", "content": "What\u2019s New In Python 3.2\u00b6\n- Author:\nRaymond Hettinger\nThis article explains the new features in Python 3.2 as compared to 3.1. Python 3.2 was released on February 20, 2011. It focuses on a few highlights and gives a few examples. For full details, see the Misc/NEWS file.\nSee also\nPEP 392 - Python 3.2 Release Schedule\nPEP 384: Defining a Stable ABI\u00b6\nIn the past, extension modules built for one Python version were often not usable with other Python versions. Particularly on Windows, every feature release of Python required rebuilding all extension modules that one wanted to use. This requirement was the result of the free access to Python interpreter internals that extension modules could use.\nWith Python 3.2, an alternative approach becomes available: extension modules which restrict themselves to a limited API (by defining Py_LIMITED_API) cannot use many of the internals, but are constrained to a set of API functions that are promised to be stable for several releases. As a consequence, extension modules built for 3.2 in that mode will also work with 3.3, 3.4, and so on. 
Extension modules that make use of details of memory structures can still be built, but will need to be recompiled for every feature release.\nSee also\n- PEP 384 - Defining a Stable ABI\nPEP written by Martin von L\u00f6wis.\nPEP 389: Argparse Command Line Parsing Module\u00b6\nA new module for command line parsing, argparse\n, was introduced to\novercome the limitations of optparse\nwhich did not provide support for\npositional arguments (not just options), subcommands, required options and other\ncommon patterns of specifying and validating options.\nThis module has already had widespread success in the community as a\nthird-party module. Being more fully featured than its predecessor, the\nargparse\nmodule is now the preferred module for command-line processing.\nThe older module is still being kept available because of the substantial amount\nof legacy code that depends on it.\nHere\u2019s an annotated example parser showing features like limiting results to a set of choices, specifying a metavar in the help screen, validating that one or more positional arguments is present, and making a required option:\nimport argparse\nparser = argparse.ArgumentParser(\ndescription = 'Manage servers', # main description for help\nepilog = 'Tested on Solaris and Linux') # displayed after help\nparser.add_argument('action', # argument name\nchoices = ['deploy', 'start', 'stop'], # three allowed values\nhelp = 'action on each target') # help msg\nparser.add_argument('targets',\nmetavar = 'HOSTNAME', # var name used in help msg\nnargs = '+', # require one or more targets\nhelp = 'url for target machines') # help msg explanation\nparser.add_argument('-u', '--user', # -u or --user option\nrequired = True, # make it a required argument\nhelp = 'login as user')\nExample of calling the parser on a command string:\n>>> cmd = 'deploy sneezy.example.com sleepy.example.com -u skycaptain'\n>>> result = parser.parse_args(cmd.split())\n>>> result.action\n'deploy'\n>>> 
result.targets\n['sneezy.example.com', 'sleepy.example.com']\n>>> result.user\n'skycaptain'\nExample of the parser\u2019s automatically generated help:\n>>> parser.parse_args('-h'.split())\nusage: manage_cloud.py [-h] -u USER\n{deploy,start,stop} HOSTNAME [HOSTNAME ...]\nManage servers\npositional arguments:\n{deploy,start,stop} action on each target\nHOSTNAME url for target machines\noptional arguments:\n-h, --help show this help message and exit\n-u USER, --user USER login as user\nTested on Solaris and Linux\nAn especially nice argparse\nfeature is the ability to define subparsers,\neach with their own argument patterns and help displays:\nimport argparse\nparser = argparse.ArgumentParser(prog='HELM')\nsubparsers = parser.add_subparsers()\nparser_l = subparsers.add_parser('launch', help='Launch Control') # first subgroup\nparser_l.add_argument('-m', '--missiles', action='store_true')\nparser_l.add_argument('-t', '--torpedos', action='store_true')\nparser_m = subparsers.add_parser('move', help='Move Vessel', # second subgroup\naliases=('steer', 'turn')) # equivalent names\nparser_m.add_argument('-c', '--course', type=int, required=True)\nparser_m.add_argument('-s', '--speed', type=int, default=0)\n$ ./helm.py --help # top level help (launch and move)\n$ ./helm.py launch --help # help for launch options\n$ ./helm.py launch --missiles # set missiles=True and torpedos=False\n$ ./helm.py steer --course 180 --speed 5 # set movement parameters\nSee also\n- PEP 389 - New Command Line Parsing Module\nPEP written by Steven Bethard.\nMigrating optparse code to argparse for details on the differences from optparse\n.\nPEP 391: Dictionary Based Configuration for Logging\u00b6\nThe logging\nmodule provided two kinds of configuration, one style with\nfunction calls for each option or another style driven by an external file saved\nin a configparser\nformat. 
Those options did not provide the flexibility\nto create configurations from JSON or YAML files, nor did they support\nincremental configuration, which is needed for specifying logger options from a\ncommand line.\nTo support a more flexible style, the module now offers\nlogging.config.dictConfig()\nfor specifying logging configuration with\nplain Python dictionaries. The configuration options include formatters,\nhandlers, filters, and loggers. Here\u2019s a working example of a configuration\ndictionary:\n{\"version\": 1,\n\"formatters\": {\"brief\": {\"format\": \"%(levelname)-8s: %(name)-15s: %(message)s\"},\n\"full\": {\"format\": \"%(asctime)s %(name)-15s %(levelname)-8s %(message)s\"}\n},\n\"handlers\": {\"console\": {\n\"class\": \"logging.StreamHandler\",\n\"formatter\": \"brief\",\n\"level\": \"INFO\",\n\"stream\": \"ext://sys.stdout\"},\n\"console_priority\": {\n\"class\": \"logging.StreamHandler\",\n\"formatter\": \"full\",\n\"level\": \"ERROR\",\n\"stream\": \"ext://sys.stderr\"}\n},\n\"root\": {\"level\": \"DEBUG\", \"handlers\": [\"console\", \"console_priority\"]}}\nIf that dictionary is stored in a file called conf.json\n, it can be\nloaded and called with code like this:\n>>> import json, logging.config\n>>> with open('conf.json') as f:\n... conf = json.load(f)\n...\n>>> logging.config.dictConfig(conf)\n>>> logging.info(\"Transaction completed normally\")\nINFO : root : Transaction completed normally\n>>> logging.critical(\"Abnormal termination\")\n2011-02-17 11:14:36,694 root CRITICAL Abnormal termination\nSee also\n- PEP 391 - Dictionary Based Configuration for Logging\nPEP written by Vinay Sajip.\nPEP 3148: The concurrent.futures\nmodule\u00b6\nCode for creating and managing concurrency is being collected in a new top-level namespace, concurrent. 
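A minimal sketch of scheduling one call through that namespace and reading its result back from a Future (the function and names here are illustrative, not from the PEP):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

with ThreadPoolExecutor(max_workers=2) as executor:
    future = executor.submit(square, 7)   # schedule the call, get a Future back
    # The Future supports status checks, timeouts, and completion callbacks.
    future.add_done_callback(lambda f: print('finished with', f.result()))
    result = future.result(timeout=5)     # block until the call completes

print(result)  # 49
```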
Its first member is a futures package which provides a uniform high-level interface for managing threads and processes.\nThe design for concurrent.futures\nwas inspired by the\njava.util.concurrent package. In that model, a running call and its result\nare represented by a Future\nobject that abstracts\nfeatures common to threads, processes, and remote procedure calls. That object\nsupports status checks (running or done), timeouts, cancellations, adding\ncallbacks, and access to results or exceptions.\nThe primary offering of the new module is a pair of executor classes for launching and managing calls. The goal of the executors is to make it easier to use existing tools for making parallel calls. They save the effort needed to setup a pool of resources, launch the calls, create a results queue, add time-out handling, and limit the total number of threads, processes, or remote procedure calls.\nIdeally, each application should share a single executor across multiple components so that process and thread limits can be centrally managed. This solves the design challenge that arises when each component has its own competing strategy for resource management.\nBoth classes share a common interface with three methods:\nsubmit()\nfor scheduling a callable and\nreturning a Future\nobject;\nmap()\nfor scheduling many asynchronous calls\nat a time, and shutdown()\nfor freeing\nresources. 
The class is a context manager and can be used in a\nwith\nstatement to assure that resources are automatically released\nwhen currently pending futures are done executing.\nA simple example of ThreadPoolExecutor\nis a\nlaunch of four parallel threads for copying files:\nimport concurrent.futures, shutil\nwith concurrent.futures.ThreadPoolExecutor(max_workers=4) as e:\ne.submit(shutil.copy, 'src1.txt', 'dest1.txt')\ne.submit(shutil.copy, 'src2.txt', 'dest2.txt')\ne.submit(shutil.copy, 'src3.txt', 'dest3.txt')\ne.submit(shutil.copy, 'src4.txt', 'dest4.txt')\nSee also\n- PEP 3148 - Futures \u2013 Execute Computations Asynchronously\nPEP written by Brian Quinlan.\nCode for Threaded Parallel URL reads, an example using threads to fetch multiple web pages in parallel.\nCode for computing prime numbers in\nparallel, an example demonstrating\nProcessPoolExecutor\n.\nPEP 3147: PYC Repository Directories\u00b6\nPython\u2019s scheme for caching bytecode in .pyc files did not work well in environments with multiple Python interpreters. If one interpreter encountered a cached file created by another interpreter, it would recompile the source and overwrite the cached file, thus losing the benefits of caching.\nThe issue of \u201cpyc fights\u201d has become more pronounced as it has become commonplace for Linux distributions to ship with multiple versions of Python. These conflicts also arise with CPython alternatives such as Unladen Swallow.\nTo solve this problem, Python\u2019s import machinery has been extended to use distinct filenames for each interpreter. Instead of Python 3.2 and Python 3.3 and Unladen Swallow each competing for a file called \u201cmymodule.pyc\u201d, they will now look for \u201cmymodule.cpython-32.pyc\u201d, \u201cmymodule.cpython-33.pyc\u201d, and \u201cmymodule.unladen10.pyc\u201d. 
And to prevent all of these new files from cluttering source directories, the pyc files are now collected in a \u201c__pycache__\u201d directory stored under the package directory.\nAside from the filenames and target directories, the new scheme has a few aspects that are visible to the programmer:\nImported modules now have a\n__cached__\nattribute which stores the name of the actual file that was imported:>>> import collections >>> collections.__cached__ 'c:/py32/lib/__pycache__/collections.cpython-32.pyc'\nThe tag that is unique to each interpreter is accessible from the\nimp\nmodule:>>> import imp >>> imp.get_tag() 'cpython-32'\nScripts that try to deduce source filename from the imported file now need to be smarter. It is no longer sufficient to simply strip the \u201cc\u201d from a \u201c.pyc\u201d filename. Instead, use the new functions in the\nimp\nmodule:>>> imp.source_from_cache('c:/py32/lib/__pycache__/collections.cpython-32.pyc') 'c:/py32/lib/collections.py' >>> imp.cache_from_source('c:/py32/lib/collections.py') 'c:/py32/lib/__pycache__/collections.cpython-32.pyc'\nThe\npy_compile\nandcompileall\nmodules have been updated to reflect the new naming convention and target directory. The command-line invocation of compileall has new options:-i\nfor specifying a list of files and directories to compile and-b\nwhich causes bytecode files to be written to their legacy location rather than __pycache__.The\nimportlib.abc\nmodule has been updated with new abstract base classes for loading bytecode files. The obsolete ABCs,PyLoader\nandPyPycLoader\n, have been deprecated (instructions on how to stay Python 3.1 compatible are included with the documentation).\nSee also\n- PEP 3147 - PYC Repository Directories\nPEP written by Barry Warsaw.\nPEP 3149: ABI Version Tagged .so Files\u00b6\nThe PYC repository directory allows multiple bytecode cache files to be co-located. 
This PEP implements a similar mechanism for shared object files by giving them a common directory and distinct names for each version.\nThe common directory is \u201cpyshared\u201d and the file names are made distinct by identifying the Python implementation (such as CPython, PyPy, Jython, etc.), the major and minor version numbers, and optional build flags (such as \u201cd\u201d for debug, \u201cm\u201d for pymalloc, \u201cu\u201d for wide-unicode). For an arbitrary package \u201cfoo\u201d, you may see these files when the distribution package is installed:\n/usr/share/pyshared/foo.cpython-32m.so\n/usr/share/pyshared/foo.cpython-33md.so\nIn Python itself, the tags are accessible from functions in the sysconfig\nmodule:\n>>> import sysconfig\n>>> sysconfig.get_config_var('SOABI') # find the version tag\n'cpython-32mu'\n>>> sysconfig.get_config_var('EXT_SUFFIX') # find the full filename extension\n'.cpython-32mu.so'\nSee also\n- PEP 3149 - ABI Version Tagged .so Files\nPEP written by Barry Warsaw.\nPEP 3333: Python Web Server Gateway Interface v1.0.1\u00b6\nThis informational PEP clarifies how bytes/text issues are to be handled by the\nWSGI protocol. The challenge is that string handling in Python 3 is most\nconveniently handled with the str\ntype even though the HTTP protocol\nis itself bytes oriented.\nThe PEP differentiates so-called native strings that are used for request/response headers and metadata versus byte strings which are used for the bodies of requests and responses.\nThe native strings are always of type str\nbut are restricted to code\npoints between U+0000 through U+00FF which are translatable to bytes using\nLatin-1 encoding. These strings are used for the keys and values in the\nenvironment dictionary and for response headers and statuses in the\nstart_response()\nfunction. They must follow RFC 2616 with respect to\nencoding. 
That is, they must either be ISO-8859-1 characters or use\nRFC 2047 MIME encoding.\nFor developers porting WSGI applications from Python 2, here are the salient points:\nIf the app already used strings for headers in Python 2, no change is needed.\nIf instead, the app encoded output headers or decoded input headers, then the headers will need to be re-encoded to Latin-1. For example, an output header that was encoded in utf-8 with h.encode('utf-8') now needs to be converted from bytes to a native string with h.encode('utf-8').decode('latin-1').\nValues yielded by an application or sent using the write() method must be byte strings. The start_response() function and environ must use native strings. The two cannot be mixed.\nFor server implementers writing CGI-to-WSGI pathways or other CGI-style protocols, users must be able to access the environment using native strings even though the underlying platform may have a different convention. To bridge this gap, the wsgiref module has a new function, wsgiref.handlers.read_environ(), for transcoding CGI variables from os.environ into native strings and returning a new dictionary.\nSee also\n- PEP 3333 - Python Web Server Gateway Interface v1.0.1\nPEP written by Phillip Eby.\nOther Language Changes\u00b6\nSome smaller changes made to the core Python language are:\nString formatting for format() and str.format() gained new capabilities for the format character #. Previously, for integers in binary, octal, or hexadecimal, it caused the output to be prefixed with \u20180b\u2019, \u20180o\u2019, or \u20180x\u2019 respectively. 
Now it can also handle floats, complex, and Decimal, causing the output to always have a decimal point even when no digits follow it.>>> format(20, '#o') '0o24' >>> format(12.34, '#5.0f') ' 12.'\n(Suggested by Mark Dickinson and implemented by Eric Smith in bpo-7094.)\nThere is also a new\nstr.format_map()\nmethod that extends the capabilities of the existingstr.format()\nmethod by accepting arbitrary mapping objects. This new method makes it possible to use string formatting with any of Python\u2019s many dictionary-like objects such asdefaultdict\n,Shelf\n,ConfigParser\n, ordbm\n. It is also useful with customdict\nsubclasses that normalize keys before look-up or that supply a__missing__()\nmethod for unknown keys:>>> import shelve >>> d = shelve.open('tmp.shl') >>> 'The {project_name} status is {status} as of {date}'.format_map(d) 'The testing project status is green as of February 15, 2011' >>> class LowerCasedDict(dict): ... def __getitem__(self, key): ... return dict.__getitem__(self, key.lower()) ... >>> lcd = LowerCasedDict(part='widgets', quantity=10) >>> 'There are {QUANTITY} {Part} in stock'.format_map(lcd) 'There are 10 widgets in stock' >>> class PlaceholderDict(dict): ... def __missing__(self, key): ... return '<{}>'.format(key) ... >>> 'Hello {name}, welcome to {location}'.format_map(PlaceholderDict()) 'Hello , welcome to '\n(Suggested by Raymond Hettinger and implemented by Eric Smith in bpo-6081.)\nThe interpreter can now be started with a quiet option,\n-q\n, to prevent the copyright and version information from being displayed in the interactive mode. 
The option can be introspected using thesys.flags\nattribute:$ python -q >>> sys.flags sys.flags(debug=0, division_warning=0, inspect=0, interactive=0, optimize=0, dont_write_bytecode=0, no_user_site=0, no_site=0, ignore_environment=0, verbose=0, bytes_warning=0, quiet=1)\n(Contributed by Marcin Wojdyr in bpo-1772833).\nThe\nhasattr()\nfunction works by callinggetattr()\nand detecting whether an exception is raised. This technique allows it to detect methods created dynamically by__getattr__()\nor__getattribute__()\nwhich would otherwise be absent from the class dictionary. Formerly, hasattr would catch any exception, possibly masking genuine errors. Now, hasattr has been tightened to only catchAttributeError\nand let other exceptions pass through:>>> class A: ... @property ... def f(self): ... return 1 // 0 ... >>> a = A() >>> hasattr(a, 'f') Traceback (most recent call last): ... ZeroDivisionError: integer division or modulo by zero\n(Discovered by Yury Selivanov and fixed by Benjamin Peterson; bpo-9666.)\nThe\nstr()\nof a float or complex number is now the same as itsrepr()\n. Previously, thestr()\nform was shorter but that just caused confusion and is no longer needed now that the shortest possiblerepr()\nis displayed by default:>>> import math >>> repr(math.pi) '3.141592653589793' >>> str(math.pi) '3.141592653589793'\n(Proposed and implemented by Mark Dickinson; bpo-9337.)\nmemoryview\nobjects now have arelease()\nmethod and they also now support the context management protocol. This allows timely release of any resources that were acquired when requesting a buffer from the original object.>>> with memoryview(b'abcdefgh') as v: ... print(v.tolist()) [97, 98, 99, 100, 101, 102, 103, 104]\n(Added by Antoine Pitrou; bpo-9757.)\nPreviously it was illegal to delete a name from the local namespace if it occurs as a free variable in a nested block:\ndef outer(x): def inner(): return x inner() del x\nThis is now allowed. 
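The relaxed deletion rule can be seen in a short runnable variant (hypothetical names); the closure is called while x is still bound, and only then is the name deleted:

```python
def outer(x):
    def inner():
        return x       # x is a free variable here
    value = inner()    # call while x is still bound
    del x              # deleting a closed-over name: a SyntaxError before 3.2
    return value

print(outer(10))  # 10
```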
Remember that the target of an\nexcept\nclause is cleared, so this code which used to work with Python 2.6, raised aSyntaxError\nwith Python 3.1 and now works again:def f(): def print_error(): print(e) try: something except Exception as e: print_error() # implicit \"del e\" here\n(See bpo-4617.)\nStruct sequence types are now subclasses of tuple. This means that C structures like those returned by\nos.stat()\n,time.gmtime()\n, andsys.version_info\nnow work like a named tuple and now work with functions and methods that expect a tuple as an argument. This is a big step forward in making the C structures as flexible as their pure Python counterparts:>>> import sys >>> isinstance(sys.version_info, tuple) True >>> 'Version %d.%d.%d %s(%d)' % sys.version_info 'Version 3.2.0 final(0)'\n(Suggested by Arfrever Frehtes Taifersar Arahesis and implemented by Benjamin Peterson in bpo-8413.)\nWarnings are now easier to control using the\nPYTHONWARNINGS\nenvironment variable as an alternative to using-W\nat the command line:$ export PYTHONWARNINGS='ignore::RuntimeWarning::,once::UnicodeWarning::'\n(Suggested by Barry Warsaw and implemented by Philip Jenvey in bpo-7301.)\nA new warning category,\nResourceWarning\n, has been added. It is emitted when potential issues with resource consumption or cleanup are detected. It is silenced by default in normal release builds but can be enabled through the means provided by thewarnings\nmodule, or on the command line.A\nResourceWarning\nis issued at interpreter shutdown if thegc.garbage\nlist isn\u2019t empty, and ifgc.DEBUG_UNCOLLECTABLE\nis set, all uncollectable objects are printed. This is meant to make the programmer aware that their code contains object finalization issues.A\nResourceWarning\nis also issued when a file object is destroyed without having been explicitly closed. 
While the deallocator for such an object ensures it closes the underlying operating system resource (usually, a file descriptor), the delay in deallocating the object could produce various issues, especially under Windows. Here is an example of enabling the warning from the command line:$ python -q -Wdefault >>> f = open(\"foo\", \"wb\") >>> del f __main__:1: ResourceWarning: unclosed file <_io.BufferedWriter name='foo'>\n(Added by Antoine Pitrou and Georg Brandl in bpo-10093 and bpo-477863.)\nrange\nobjects now support index and count methods. This is part of an effort to make more objects fully implement the collections.Sequence\nabstract base class. As a result, the language will have a more uniform API. In addition, range\nobjects now support slicing and negative indices, even with values larger than sys.maxsize\n. This makes range more interoperable with lists:>>> range(0, 100, 2).count(10) 1 >>> range(0, 100, 2).index(10) 5 >>> range(0, 100, 2)[5] 10 >>> range(0, 100, 2)[0:5] range(0, 10, 2)\n(Contributed by Daniel Stutzbach in bpo-9213, by Alexander Belopolsky in bpo-2690, and by Nick Coghlan in bpo-10889.)\nThe\ncallable()\nbuiltin function from Py2.x was resurrected. It provides a concise, readable alternative to using an abstract base class in an expression like isinstance(x, collections.Callable)\n:>>> callable(max) True >>> callable(20) False\n(See bpo-10518.)\nPython\u2019s import mechanism can now load modules installed in directories with non-ASCII characters in the path name. This solved an aggravating problem with home directories for users with non-ASCII characters in their usernames.\n(Required extensive work by Victor Stinner in bpo-9425.)\nNew, Improved, and Deprecated Modules\u00b6\nPython\u2019s standard library has undergone significant maintenance efforts and quality improvements.\nThe biggest news for Python 3.2 is that the email\npackage, mailbox\nmodule, and nntplib\nmodules now work correctly with the bytes/text model\nin Python 3. 
For the first time, there is correct handling of messages with\nmixed encodings.\nThroughout the standard library, there has been more careful attention to encodings and text versus bytes issues. In particular, interactions with the operating system are now better able to exchange non-ASCII data using the Windows MBCS encoding, locale-aware encodings, or UTF-8.\nAnother significant win is the addition of substantially better support for SSL connections and security certificates.\nIn addition, more classes now implement a context manager to support\nconvenient and reliable resource clean-up using a with\nstatement.\nemail\u00b6\nThe usability of the email\npackage in Python 3 has been mostly fixed by\nthe extensive efforts of R. David Murray. The problem was that emails are\ntypically read and stored in the form of bytes\nrather than str\ntext, and they may contain multiple encodings within a single email. So, the\nemail package had to be extended to parse and generate email messages in bytes\nformat.\nNew functions\nmessage_from_bytes()\nandmessage_from_binary_file()\n, and new classesBytesFeedParser\nandBytesParser\nallow binary message data to be parsed into model objects.Given bytes input to the model,\nget_payload()\nwill by default decode a message body that has a Content-Transfer-Encoding of 8bit using the charset specified in the MIME headers and return the resulting string.Given bytes input to the model,\nGenerator\nwill convert message bodies that have a Content-Transfer-Encoding of 8bit to instead have a 7bit Content-Transfer-Encoding.Headers with unencoded non-ASCII bytes are deemed to be RFC 2047-encoded using the unknown-8bit character set.\nA new class\nBytesGenerator\nproduces bytes as output, preserving any unchanged non-ASCII data that was present in the input used to build the model, including message bodies with a Content-Transfer-Encoding of 8bit.The\nsmtplib\nSMTP\nclass now accepts a byte string for the msg argument to thesendmail()\nmethod, 
and a new method, send_message()\n, accepts a Message\nobject and can optionally obtain the from_addr and to_addrs addresses directly from the object.\n(Proposed and implemented by R. David Murray, bpo-4661 and bpo-10321.)\nelementtree\u00b6\nThe xml.etree.ElementTree\npackage and its xml.etree.cElementTree\ncounterpart have been updated to version 1.3.\nSeveral new and useful functions and methods have been added:\nxml.etree.ElementTree.fromstringlist()\nwhich builds an XML document from a sequence of fragments\nxml.etree.ElementTree.register_namespace()\nfor registering a global namespace prefix\nxml.etree.ElementTree.tostringlist()\nfor string representation including all sublists\nxml.etree.ElementTree.Element.extend()\nfor appending a sequence of zero or more elements\nxml.etree.ElementTree.Element.iterfind()\nsearches an element and subelements\nxml.etree.ElementTree.Element.itertext()\ncreates a text iterator over an element and its subelements\nxml.etree.ElementTree.TreeBuilder.end()\ncloses the current element\nxml.etree.ElementTree.TreeBuilder.doctype()\nhandles a doctype declaration\nTwo methods have been deprecated:\nxml.etree.ElementTree.getchildren()\nuse list(elem)\ninstead.\nxml.etree.ElementTree.getiterator()\nuse Element.iter\ninstead.\nFor details of the update, see Introducing ElementTree on Fredrik Lundh\u2019s website.\n(Contributed by Florent Xicluna and Fredrik Lundh, bpo-6472.)\nfunctools\u00b6\nThe\nfunctools\nmodule includes a new decorator for caching function calls. functools.lru_cache()\ncan save repeated queries to an external resource whenever the results are expected to be the same.For example, adding a caching decorator to a database query function can save database accesses for popular searches:\n>>> import functools >>> @functools.lru_cache(maxsize=300) ... def get_phone_number(name): ... c = conn.cursor() ... c.execute('SELECT phonenumber FROM phonelist WHERE name=?', (name,)) ... return c.fetchone()[0]\n>>> for name in user_requests: ... 
get_phone_number(name) # cached lookup\nTo help with choosing an effective cache size, the wrapped function is instrumented for tracking cache statistics:\n>>> get_phone_number.cache_info() CacheInfo(hits=4805, misses=980, maxsize=300, currsize=300)\nIf the phonelist table gets updated, the outdated contents of the cache can be cleared with:\n>>> get_phone_number.cache_clear()\n(Contributed by Raymond Hettinger and incorporating design ideas from Jim Baker, Miki Tebeka, and Nick Coghlan; see recipe 498245, recipe 577479, bpo-10586, and bpo-10593.)\nThe\nfunctools.wraps()\ndecorator now adds a__wrapped__\nattribute pointing to the original callable function. This allows wrapped functions to be introspected. It also copies__annotations__\nif defined. And now it also gracefully skips over missing attributes such as__doc__\nwhich might not be defined for the wrapped callable.In the above example, the cache can be removed by recovering the original function:\n>>> get_phone_number = get_phone_number.__wrapped__ # uncached function\n(By Nick Coghlan and Terrence Cole; bpo-9567, bpo-3445, and bpo-8814.)\nTo help write classes with rich comparison methods, a new decorator\nfunctools.total_ordering()\nwill use existing equality and inequality methods to fill in the remaining methods.For example, supplying __eq__ and __lt__ will enable\ntotal_ordering()\nto fill-in __le__, __gt__ and __ge__:@total_ordering class Student: def __eq__(self, other): return ((self.lastname.lower(), self.firstname.lower()) == (other.lastname.lower(), other.firstname.lower())) def __lt__(self, other): return ((self.lastname.lower(), self.firstname.lower()) < (other.lastname.lower(), other.firstname.lower()))\nWith the total_ordering decorator, the remaining comparison methods are filled in automatically.\n(Contributed by Raymond Hettinger.)\nTo aid in porting programs from Python 2, the\nfunctools.cmp_to_key()\nfunction converts an old-style comparison function to modern key function:>>> # 
locale-aware sort order >>> sorted(iterable, key=cmp_to_key(locale.strcoll))\nFor sorting examples and a brief sorting tutorial, see the Sorting HowTo tutorial.\n(Contributed by Raymond Hettinger.)\nitertools\u00b6\nThe\nitertools\nmodule has a new accumulate()\nfunction modeled on APL\u2019s scan operator and Numpy\u2019s accumulate function:>>> from itertools import accumulate >>> list(accumulate([8, 2, 50])) [8, 10, 60]\n>>> prob_dist = [0.1, 0.4, 0.2, 0.3] >>> list(accumulate(prob_dist)) # cumulative probability distribution [0.1, 0.5, 0.7, 1.0]\nFor an example using\naccumulate()\n, see the examples for the random module.(Contributed by Raymond Hettinger and incorporating design suggestions from Mark Dickinson.)\ncollections\u00b6\nThe\ncollections.Counter\nclass now has two forms of in-place subtraction, the existing -= operator for saturating subtraction and the new subtract()\nmethod for regular subtraction. The former is suitable for multisets which only have positive counts, and the latter is more suitable for use cases that allow negative counts:>>> from collections import Counter >>> tally = Counter(dogs=5, cats=3) >>> tally -= Counter(dogs=2, cats=8) # saturating subtraction >>> tally Counter({'dogs': 3})\n>>> tally = Counter(dogs=5, cats=3) >>> tally.subtract(dogs=2, cats=8) # regular subtraction >>> tally Counter({'dogs': 3, 'cats': -5})\n(Contributed by Raymond Hettinger.)\nThe\ncollections.OrderedDict\nclass has a new method move_to_end()\nwhich takes an existing key and moves it to either the first or last position in the ordered sequence.The default is to move an item to the last position. This is equivalent to renewing an entry with\nod[k] = od.pop(k)\n.A fast move-to-end operation is useful for resequencing entries. 
For example, an ordered dictionary can be used to track order of access by aging entries from the oldest to the most recently accessed.\n>>> from collections import OrderedDict >>> d = OrderedDict.fromkeys(['a', 'b', 'X', 'd', 'e']) >>> list(d) ['a', 'b', 'X', 'd', 'e'] >>> d.move_to_end('X') >>> list(d) ['a', 'b', 'd', 'e', 'X']\n(Contributed by Raymond Hettinger.)\nThe\ncollections.deque\nclass grew two new methodscount()\nandreverse()\nthat make them more substitutable forlist\nobjects:>>> from collections import deque >>> d = deque('simsalabim') >>> d.count('s') 2 >>> d.reverse() >>> d deque(['m', 'i', 'b', 'a', 'l', 'a', 's', 'm', 'i', 's'])\n(Contributed by Raymond Hettinger.)\nthreading\u00b6\nThe threading\nmodule has a new Barrier\nsynchronization class for making multiple threads wait until all of them have\nreached a common barrier point. Barriers are useful for making sure that a task\nwith multiple preconditions does not run until all of the predecessor tasks are\ncomplete.\nBarriers can work with an arbitrary number of threads. This is a generalization of a Rendezvous which is defined for only two threads.\nImplemented as a two-phase cyclic barrier, Barrier\nobjects\nare suitable for use in loops. The separate filling and draining phases\nassure that all threads get released (drained) before any one of them can loop\nback and re-enter the barrier. The barrier fully resets after each cycle.\nExample of using barriers:\nfrom threading import Barrier, Thread\ndef get_votes(site):\nballots = conduct_election(site)\nall_polls_closed.wait() # do not count until all polls are closed\ntotals = summarize(ballots)\npublish(site, totals)\nall_polls_closed = Barrier(len(sites))\nfor site in sites:\nThread(target=get_votes, args=(site,)).start()\nIn this example, the barrier enforces a rule that votes cannot be counted at any\npolling site until all polls are closed. 
Notice how a solution with a barrier\nis similar to one with threading.Thread.join()\n, but the threads stay alive\nand continue to do work (summarizing ballots) after the barrier point is\ncrossed.\nIf any of the predecessor tasks can hang or be delayed, a barrier can be created\nwith an optional timeout parameter. Then if the timeout period elapses before\nall the predecessor tasks reach the barrier point, all waiting threads are\nreleased and a BrokenBarrierError\nexception is raised:\ndef get_votes(site):\nballots = conduct_election(site)\ntry:\nall_polls_closed.wait(timeout=midnight - time.now())\nexcept BrokenBarrierError:\nlockbox = seal_ballots(ballots)\nqueue.put(lockbox)\nelse:\ntotals = summarize(ballots)\npublish(site, totals)\nIn this example, the barrier enforces a more robust rule. If some election sites do not finish before midnight, the barrier times-out and the ballots are sealed and deposited in a queue for later handling.\nSee Barrier Synchronization Patterns for more examples of how barriers can be used in parallel computing. Also, there is a simple but thorough explanation of barriers in The Little Book of Semaphores, section 3.6.\n(Contributed by Kristj\u00e1n Valur J\u00f3nsson with an API review by Jeffrey Yasskin in bpo-8777.)\ndatetime and time\u00b6\nThe\ndatetime\nmodule has a new typetimezone\nthat implements thetzinfo\ninterface by returning a fixed UTC offset and timezone name. This makes it easier to create timezone-aware datetime objects:>>> from datetime import datetime, timezone >>> datetime.now(timezone.utc) datetime.datetime(2010, 12, 8, 21, 4, 2, 923754, tzinfo=datetime.timezone.utc) >>> datetime.strptime(\"01/01/2000 12:00 +0000\", \"%m/%d/%Y %H:%M %z\") datetime.datetime(2000, 1, 1, 12, 0, tzinfo=datetime.timezone.utc)\nAlso,\ntimedelta\nobjects can now be multiplied byfloat\nand divided byfloat\nandint\nobjects. 
And timedelta\nobjects can now divide one another.The\ndatetime.date.strftime()\nmethod is no longer restricted to years after 1900. The new supported year range is from 1000 to 9999 inclusive.Whenever a two-digit year is used in a time tuple, the interpretation has been governed by\ntime.accept2dyear\n. The default is True\nwhich means that for a two-digit year, the century is guessed according to the POSIX rules governing the\n%y\nstrptime format.Starting with Py3.2, use of the century guessing heuristic will emit a\nDeprecationWarning\n. Instead, it is recommended that time.accept2dyear\nbe set to False\nso that large date ranges can be used without guesswork:>>> import time, warnings >>> warnings.resetwarnings() # remove the default warning filters >>> time.accept2dyear = True # guess whether 11 means 11 or 2011 >>> time.asctime((11, 1, 1, 12, 34, 56, 4, 1, 0)) Warning (from warnings module): ... DeprecationWarning: Century info guessed for a 2-digit year. 'Fri Jan 1 12:34:56 2011' >>> time.accept2dyear = False # use the full range of allowable dates >>> time.asctime((11, 1, 1, 12, 34, 56, 4, 1, 0)) 'Fri Jan 1 12:34:56 11'\nSeveral functions now have significantly expanded date ranges. When\ntime.accept2dyear\nis false, the time.asctime()\nfunction will accept any year that fits in a C int, while the time.mktime()\nand time.strftime()\nfunctions will accept the full range supported by the corresponding operating system functions.\n(Contributed by Alexander Belopolsky and Victor Stinner in bpo-1289118, bpo-5094, bpo-6641, bpo-2706, bpo-1777412, bpo-8013, and bpo-10827.)\nmath\u00b6\nThe math\nmodule has been updated with six new functions inspired by the\nC99 standard.\nThe isfinite()\nfunction provides a reliable and fast way to detect\nspecial values. 
It returns True\nfor regular numbers and False\nfor NaN or\ninfinity:\n>>> from math import isfinite\n>>> [isfinite(x) for x in (123, 4.56, float('Nan'), float('Inf'))]\n[True, True, False, False]\nThe expm1()\nfunction computes e**x-1\nfor small values of x\nwithout incurring the loss of precision that usually accompanies the subtraction\nof nearly equal quantities:\n>>> from math import expm1\n>>> expm1(0.013671875) # more accurate way to compute e**x-1 for a small x\n0.013765762467652909\nThe erf()\nfunction computes a probability integral or Gaussian\nerror function. The\ncomplementary error function, erfc()\n, is 1 - erf(x)\n:\n>>> from math import erf, erfc, sqrt\n>>> erf(1.0/sqrt(2.0)) # portion of normal distribution within 1 standard deviation\n0.682689492137086\n>>> erfc(1.0/sqrt(2.0)) # portion of normal distribution outside 1 standard deviation\n0.31731050786291404\n>>> erf(1.0/sqrt(2.0)) + erfc(1.0/sqrt(2.0))\n1.0\nThe gamma()\nfunction is a continuous extension of the factorial\nfunction. See https://en.wikipedia.org/wiki/Gamma_function for details. 
Because\nthe function is related to factorials, it grows large even for small values of\nx, so there is also a lgamma()\nfunction for computing the natural\nlogarithm of the gamma function:\n>>> from math import gamma, lgamma\n>>> gamma(7.0) # six factorial\n720.0\n>>> lgamma(801.0) # log(800 factorial)\n4551.950730698041\n(Contributed by Mark Dickinson.)\nabc\u00b6\nThe abc\nmodule now supports abstractclassmethod()\nand\nabstractstaticmethod()\n.\nThese tools make it possible to define an abstract base class that\nrequires a particular classmethod()\nor staticmethod()\nto be\nimplemented:\nclass Temperature(metaclass=abc.ABCMeta):\n@abc.abstractclassmethod\ndef from_fahrenheit(cls, t):\n...\n@abc.abstractclassmethod\ndef from_celsius(cls, t):\n...\n(Patch submitted by Daniel Urban; bpo-5867.)\nio\u00b6\nThe io.BytesIO\nhas a new method, getbuffer()\n, which\nprovides functionality similar to memoryview()\n. It creates an editable\nview of the data without making a copy. The buffer\u2019s random access and support\nfor slice notation are well-suited to in-place editing:\n>>> REC_LEN, LOC_START, LOC_LEN = 34, 7, 11\n>>> def change_location(buffer, record_number, location):\n... start = record_number * REC_LEN + LOC_START\n... buffer[start: start+LOC_LEN] = location\n>>> import io\n>>> byte_stream = io.BytesIO(\n... b'G3805 storeroom Main chassis '\n... b'X7899 shipping Reserve cog '\n... b'L6988 receiving Primary sprocket'\n... 
)\n>>> buffer = byte_stream.getbuffer()\n>>> change_location(buffer, 1, b'warehouse ')\n>>> change_location(buffer, 0, b'showroom ')\n>>> print(byte_stream.getvalue())\nb'G3805 showroom Main chassis '\nb'X7899 warehouse Reserve cog '\nb'L6988 receiving Primary sprocket'\n(Contributed by Antoine Pitrou in bpo-5506.)\nreprlib\u00b6\nWhen writing a __repr__()\nmethod for a custom container, it is easy to\nforget to handle the case where a member refers back to the container itself.\nPython\u2019s builtin objects such as list\nand set\nhandle\nself-reference by displaying \u201c\u2026\u201d in the recursive part of the representation\nstring.\nTo help write such __repr__()\nmethods, the reprlib\nmodule has a new\ndecorator, recursive_repr()\n, for detecting recursive calls to\n__repr__()\nand substituting a placeholder string instead:\n>>> class MyList(list):\n... @recursive_repr()\n... def __repr__(self):\n... return '<' + '|'.join(map(repr, self)) + '>'\n...\n>>> m = MyList('abc')\n>>> m.append(m)\n>>> m.append('x')\n>>> print(m)\n<'a'|'b'|'c'|...|'x'>\n(Contributed by Raymond Hettinger in bpo-9826 and bpo-9840.)\nlogging\u00b6\nIn addition to dictionary-based configuration described above, the\nlogging\npackage has many other improvements.\nThe logging documentation has been augmented by a basic tutorial, an advanced tutorial, and a cookbook of logging recipes. These documents are the fastest way to learn about logging.\nThe logging.basicConfig()\nset-up function gained a style argument to\nsupport three different types of string formatting. It defaults to \u201c%\u201d for\ntraditional %-formatting, can be set to \u201c{\u201d for the new str.format()\nstyle, or\ncan be set to \u201c$\u201d for the shell-style formatting provided by\nstring.Template\n. 
The following three configurations are equivalent:\n>>> from logging import basicConfig\n>>> basicConfig(style='%', format=\"%(name)s -> %(levelname)s: %(message)s\")\n>>> basicConfig(style='{', format=\"{name} -> {levelname} {message}\")\n>>> basicConfig(style='$', format=\"$name -> $levelname: $message\")\nIf no configuration is set-up before a logging event occurs, there is now a\ndefault configuration using a StreamHandler\ndirected to\nsys.stderr\nfor events of WARNING\nlevel or higher. Formerly, an\nevent occurring before a configuration was set-up would either raise an\nexception or silently drop the event depending on the value of\nlogging.raiseExceptions\n. The new default handler is stored in\nlogging.lastResort\n.\nThe use of filters has been simplified. Instead of creating a\nFilter\nobject, the predicate can be any Python callable that\nreturns True\nor False\n.\nThere were a number of other improvements that add flexibility and simplify configuration. See the module documentation for a full listing of changes in Python 3.2.\ncsv\u00b6\nThe csv\nmodule now supports a new dialect, unix_dialect\n,\nwhich applies quoting for all fields and a traditional Unix style with '\\n'\nas\nthe line terminator. The registered dialect name is unix\n.\nThe csv.DictWriter\nhas a new method,\nwriteheader()\nfor writing-out an initial row to document\nthe field names:\n>>> import csv, sys\n>>> w = csv.DictWriter(sys.stdout, ['name', 'dept'], dialect='unix')\n>>> w.writeheader()\n\"name\",\"dept\"\n>>> w.writerows([\n... {'name': 'tom', 'dept': 'accounting'},\n... 
{'name': 'susan', 'dept': 'sales'}])\n\"tom\",\"accounting\"\n\"susan\",\"sales\"\n(New dialect suggested by Jay Talbot in bpo-5975, and the new method suggested by Ed Abraham in bpo-1537721.)\ncontextlib\u00b6\nThere is a new and slightly mind-blowing tool\nContextDecorator\nthat is helpful for creating a\ncontext manager that does double duty as a function decorator.\nAs a convenience, this new functionality is used by\ncontextmanager()\nso that no extra effort is needed to support\nboth roles.\nThe basic idea is that both context managers and function decorators can be used\nfor pre-action and post-action wrappers. Context managers wrap a group of\nstatements using a with\nstatement, and function decorators wrap a\ngroup of statements enclosed in a function. So, occasionally there is a need to\nwrite a pre-action or post-action wrapper that can be used in either role.\nFor example, it is sometimes useful to wrap functions or groups of statements\nwith a logger that can track the time of entry and time of exit. 
Rather than\nwriting both a function decorator and a context manager for the task, the\ncontextmanager()\nprovides both capabilities in a single\ndefinition:\nfrom contextlib import contextmanager\nimport logging\nlogging.basicConfig(level=logging.INFO)\n@contextmanager\ndef track_entry_and_exit(name):\nlogging.info('Entering: %s', name)\nyield\nlogging.info('Exiting: %s', name)\nFormerly, this would have only been usable as a context manager:\nwith track_entry_and_exit('widget loader'):\nprint('Some time consuming activity goes here')\nload_widget()\nNow, it can be used as a decorator as well:\n@track_entry_and_exit('widget loader')\ndef activity():\nprint('Some time consuming activity goes here')\nload_widget()\nTrying to fulfill two roles at once places some limitations on the technique.\nContext managers normally have the flexibility to return an argument usable by\na with\nstatement, but there is no parallel for function decorators.\nIn the above example, there is not a clean way for the track_entry_and_exit context manager to return a logging instance for use in the body of enclosed statements.\n(Contributed by Michael Foord in bpo-9110.)\ndecimal and fractions\u00b6\nMark Dickinson crafted an elegant and efficient scheme for assuring that different numeric datatypes will have the same hash value whenever their actual values are equal (bpo-8188):\nassert hash(Fraction(3, 2)) == hash(1.5) == \\\nhash(Decimal(\"1.5\")) == hash(complex(1.5, 0))\nSome of the hashing details are exposed through a new attribute,\nsys.hash_info\n, which describes the bit width of the hash value, the\nprime modulus, the hash values for infinity and nan, and the multiplier\nused for the imaginary part of a number:\n>>> sys.hash_info\nsys.hash_info(width=64, modulus=2305843009213693951, inf=314159, nan=0, imag=1000003)\nAn early decision to limit the interoperability of various numeric types has\nbeen relaxed. 
It is still unsupported (and ill-advised) to have implicit\nmixing in arithmetic expressions such as Decimal('1.1') + float('1.1')\nbecause the latter loses information in the process of constructing the binary\nfloat. However, since an existing floating-point value can be converted losslessly\nto either a decimal or rational representation, it makes sense to add them to\nthe constructor and to support mixed-type comparisons.\nThe\ndecimal.Decimal\nconstructor now accepts float\nobjects directly so there is no longer a need to use the from_float()\nmethod (bpo-8257).Mixed type comparisons are now fully supported so that\nDecimal\nobjects can be directly compared with float\nand fractions.Fraction\n(bpo-2531 and bpo-8188).\nSimilar changes were made to fractions.Fraction\nso that the\nfrom_float()\nand from_decimal()\nmethods are no longer needed (bpo-8294):\n>>> from decimal import Decimal\n>>> from fractions import Fraction\n>>> Decimal(1.1)\nDecimal('1.100000000000000088817841970012523233890533447265625')\n>>> Fraction(1.1)\nFraction(2476979795053773, 2251799813685248)\nAnother useful change for the decimal\nmodule is that the\nContext.clamp\nattribute is now public. 
This is useful in creating\ncontexts that correspond to the decimal interchange formats specified in IEEE\n754 (see bpo-8540).\n(Contributed by Mark Dickinson and Raymond Hettinger.)\nftp\u00b6\nThe ftplib.FTP\nclass now supports the context management protocol to\nunconditionally consume socket.error\nexceptions and to close the FTP\nconnection when done:\n>>> from ftplib import FTP\n>>> with FTP(\"ftp1.at.proftpd.org\") as ftp:\nftp.login()\nftp.dir()\n'230 Anonymous login ok, restrictions apply.'\ndr-xr-xr-x 9 ftp ftp 154 May 6 10:43 .\ndr-xr-xr-x 9 ftp ftp 154 May 6 10:43 ..\ndr-xr-xr-x 5 ftp ftp 4096 May 6 10:43 CentOS\ndr-xr-xr-x 3 ftp ftp 18 Jul 10 2008 Fedora\nOther file-like objects such as mmap.mmap\nand fileinput.input()\nalso grew auto-closing context managers:\nwith fileinput.input(files=('log1.txt', 'log2.txt')) as f:\nfor line in f:\nprocess(line)\n(Contributed by Tarek Ziad\u00e9 and Giampaolo Rodol\u00e0 in bpo-4972, and by Georg Brandl in bpo-8046 and bpo-1286.)\nThe FTP_TLS\nclass now accepts a context parameter, which is a\nssl.SSLContext\nobject allowing bundling SSL configuration options,\ncertificates and private keys into a single (potentially long-lived) structure.\n(Contributed by Giampaolo Rodol\u00e0; bpo-8806.)\npopen\u00b6\nThe os.popen()\nand subprocess.Popen()\nfunctions now support\nwith\nstatements for auto-closing of the file descriptors.\n(Contributed by Antoine Pitrou and Brian Curtin in bpo-7461 and bpo-10554.)\nselect\u00b6\nThe select\nmodule now exposes a new, constant attribute,\nPIPE_BUF\n, which gives the minimum number of bytes which are\nguaranteed not to block when select.select()\nsays a pipe is ready\nfor writing.\n>>> import select\n>>> select.PIPE_BUF\n512\n(Available on Unix systems. Patch by S\u00e9bastien Sabl\u00e9 in bpo-9862)\ngzip and zipfile\u00b6\ngzip.GzipFile\nnow implements the io.BufferedIOBase\nabstract base class (except for truncate()\n). 
It also has a\npeek()\nmethod and supports unseekable as well as\nzero-padded file objects.\nThe gzip\nmodule also gains the compress()\nand\ndecompress()\nfunctions for easier in-memory compression and\ndecompression. Keep in mind that text needs to be encoded as bytes\nbefore compressing and decompressing:\n>>> import gzip\n>>> s = 'Three shall be the number thou shalt count, '\n>>> s += 'and the number of the counting shall be three'\n>>> b = s.encode() # convert to utf-8\n>>> len(b)\n89\n>>> c = gzip.compress(b)\n>>> len(c)\n77\n>>> gzip.decompress(c).decode()[:42] # decompress and convert to text\n'Three shall be the number thou shalt count'\n(Contributed by Anand B. Pillai in bpo-3488; and by Antoine Pitrou, Nir Aides and Brian Curtin in bpo-9962, bpo-1675951, bpo-7471 and bpo-2846.)\nAlso, the zipfile.ZipExtFile\nclass was reworked internally to represent\nfiles stored inside an archive. The new implementation is significantly faster\nand can be wrapped in an io.BufferedReader\nobject for more speedups. It\nalso solves an issue where interleaved calls to read and readline gave the\nwrong results.\n(Patch submitted by Nir Aides in bpo-7610.)\ntarfile\u00b6\nThe TarFile\nclass can now be used as a context manager. In\naddition, its add()\nmethod has a new option, filter,\nthat controls which files are added to the archive and allows the file metadata\nto be edited.\nThe new filter option replaces the older, less flexible exclude parameter\nwhich is now deprecated. If specified, the optional filter parameter needs to\nbe a keyword argument. The user-supplied filter function accepts a\nTarInfo\nobject and returns an updated\nTarInfo\nobject, or if it wants the file to be excluded, the\nfunction can return None\n:\n>>> import tarfile, glob\n>>> def myfilter(tarinfo):\n... if tarinfo.isfile(): # only save real files\n... tarinfo.uname = 'monty' # redact the user name\n... return tarinfo\n>>> with tarfile.open(name='myarchive.tar.gz', mode='w:gz') as tf:\n... 
for filename in glob.glob('*.txt'):\n... tf.add(filename, filter=myfilter)\n... tf.list()\n-rw-r--r-- monty/501 902 2011-01-26 17:59:11 annotations.txt\n-rw-r--r-- monty/501 123 2011-01-26 17:59:11 general_questions.txt\n-rw-r--r-- monty/501 3514 2011-01-26 17:59:11 prion.txt\n-rw-r--r-- monty/501 124 2011-01-26 17:59:11 py_todo.txt\n-rw-r--r-- monty/501 1399 2011-01-26 17:59:11 semaphore_notes.txt\n(Proposed by Tarek Ziad\u00e9 and implemented by Lars Gust\u00e4bel in bpo-6856.)\nhashlib\u00b6\nThe hashlib\nmodule has two new constant attributes listing the hashing\nalgorithms guaranteed to be present in all implementations and those available\non the current implementation:\n>>> import hashlib\n>>> hashlib.algorithms_guaranteed\n{'sha1', 'sha224', 'sha384', 'sha256', 'sha512', 'md5'}\n>>> hashlib.algorithms_available\n{'md2', 'SHA256', 'SHA512', 'dsaWithSHA', 'mdc2', 'SHA224', 'MD4', 'sha256',\n'sha512', 'ripemd160', 'SHA1', 'MDC2', 'SHA', 'SHA384', 'MD2',\n'ecdsa-with-SHA1','md4', 'md5', 'sha1', 'DSA-SHA', 'sha224',\n'dsaEncryption', 'DSA', 'RIPEMD160', 'sha', 'MD5', 'sha384'}\n(Suggested by Carl Chenet in bpo-7418.)\nast\u00b6\nThe ast\nmodule has a wonderful general-purpose tool for safely\nevaluating expression strings using the Python literal\nsyntax. The ast.literal_eval()\nfunction serves as a secure alternative to\nthe builtin eval()\nfunction which is easily abused. 
Python 3.2 adds\nbytes\nand set\nliterals to the list of supported types:\nstrings, bytes, numbers, tuples, lists, dicts, sets, booleans, and None\n.\n>>> from ast import literal_eval\n>>> request = \"{'req': 3, 'func': 'pow', 'args': (2, 0.5)}\"\n>>> literal_eval(request)\n{'args': (2, 0.5), 'req': 3, 'func': 'pow'}\n>>> request = \"os.system('do something harmful')\"\n>>> literal_eval(request)\nTraceback (most recent call last):\n...\nValueError: malformed node or string: <_ast.Call object at 0x101739a10>\n(Implemented by Benjamin Peterson and Georg Brandl.)\nos\u00b6\nDifferent operating systems use various encodings for filenames and environment\nvariables. The os\nmodule provides two new functions,\nfsencode()\nand fsdecode()\n, for encoding and decoding\nfilenames:\n>>> import os\n>>> filename = 'Sehensw\u00fcrdigkeiten'\n>>> os.fsencode(filename)\nb'Sehensw\\xc3\\xbcrdigkeiten'\nSome operating systems allow direct access to encoded bytes in the\nenvironment. If so, the os.supports_bytes_environ\nconstant will be\ntrue.\nFor direct access to encoded environment variables (if available),\nuse the new os.getenvb()\nfunction or use os.environb\nwhich is a bytes version of os.environ\n.\n(Contributed by Victor Stinner.)\nshutil\u00b6\nThe shutil.copytree()\nfunction has two new options:\nignore_dangling_symlinks: when\nsymlinks=False\nso that the function copies a file pointed to by a symlink, not the symlink itself. This option will silence the error raised if the file doesn\u2019t exist.copy_function: is a callable that will be used to copy files.\nshutil.copy2()\nis used by default.\n(Contributed by Tarek Ziad\u00e9.)\nIn addition, the shutil\nmodule now supports archiving operations for zipfiles, uncompressed tarfiles, gzipped tarfiles,\nand bzipped tarfiles. And there are functions for registering additional\narchiving file formats (such as xz compressed tarfiles or custom formats).\nThe principal functions are make_archive()\nand\nunpack_archive()\n. 
By default, both operate on the current\ndirectory (which can be set by os.chdir()\n) and on any sub-directories.\nThe archive filename needs to be specified with a full pathname. The archiving\nstep is non-destructive (the original files are left unchanged).\n>>> import shutil, pprint\n>>> os.chdir('mydata') # change to the source directory\n>>> f = shutil.make_archive('/var/backup/mydata',\n... 'zip') # archive the current directory\n>>> f # show the name of the archive\n'/var/backup/mydata.zip'\n>>> os.chdir('tmp') # change to an unpacking directory\n>>> shutil.unpack_archive('/var/backup/mydata.zip') # recover the data\n>>> pprint.pprint(shutil.get_archive_formats()) # display known formats\n[('bztar', \"bzip2'ed tar-file\"),\n('gztar', \"gzip'ed tar-file\"),\n('tar', 'uncompressed tar file'),\n('zip', 'ZIP file')]\n>>> shutil.register_archive_format( # register a new archive format\n... name='xz',\n... function=xz.compress, # callable archiving function\n... extra_args=[('level', 8)], # arguments to the function\n... description='xz compression'\n... )\n(Contributed by Tarek Ziad\u00e9.)\nsqlite3\u00b6\nThe sqlite3\nmodule was updated to pysqlite version 2.6.0. It has two new capabilities.\nThe\nsqlite3.Connection.in_transaction\nattribute is true if there is an active transaction for uncommitted changes.\nThe\nsqlite3.Connection.enable_load_extension()\nand sqlite3.Connection.load_extension()\nmethods allow you to load SQLite extensions from \u201c.so\u201d files. One well-known extension is the fulltext-search extension distributed with SQLite.\n(Contributed by R.
David Murray and Shashwat Anand; bpo-8845.)\nhtml\u00b6\nA new html\nmodule was introduced with only a single function,\nescape()\n, which is used for escaping reserved characters from HTML\nmarkup:\n>>> import html\n>>> html.escape('x > 2 && x < 7')\n'x &gt; 2 &amp;&amp; x &lt; 7'\nsocket\u00b6\nThe socket\nmodule has two new improvements.\nSocket objects now have a\ndetach()\nmethod which puts the socket into a closed state without actually closing the underlying file descriptor. The latter can then be reused for other purposes. (Added by Antoine Pitrou; bpo-8524.)\nsocket.create_connection()\nnow supports the context management protocol to unconditionally consume socket.error\nexceptions and to close the socket when done. (Contributed by Giampaolo Rodol\u00e0; bpo-9794.)\nssl\u00b6\nThe ssl\nmodule added a number of features to satisfy common requirements\nfor secure (encrypted, authenticated) internet connections:\nA new class,\nSSLContext\n, serves as a container for persistent SSL data, such as protocol settings, certificates, private keys, and various other options. It includes a wrap_socket()\nmethod for creating an SSL socket from an SSL context.\nA new function,\nssl.match_hostname()\n, supports server identity verification for higher-level protocols by implementing the rules of HTTPS (from RFC 2818) which are also suitable for other protocols.\nThe\nssl.wrap_socket()\nconstructor function now takes a ciphers argument. The ciphers string lists the allowed encryption algorithms using the format described in the OpenSSL documentation.\nWhen linked against recent versions of OpenSSL, the\nssl\nmodule now supports the Server Name Indication extension to the TLS protocol, allowing multiple \u201cvirtual hosts\u201d using different certificates on a single IP port.
This extension is only supported in client mode, and is activated by passing the server_hostname argument tossl.SSLContext.wrap_socket()\n.Various options have been added to the\nssl\nmodule, such asOP_NO_SSLv2\nwhich disables the insecure and obsolete SSLv2 protocol.The extension now loads all the OpenSSL ciphers and digest algorithms. If some SSL certificates cannot be verified, they are reported as an \u201cunknown algorithm\u201d error.\nThe version of OpenSSL being used is now accessible using the module attributes\nssl.OPENSSL_VERSION\n(a string),ssl.OPENSSL_VERSION_INFO\n(a 5-tuple), andssl.OPENSSL_VERSION_NUMBER\n(an integer).\n(Contributed by Antoine Pitrou in bpo-8850, bpo-1589, bpo-8322, bpo-5639, bpo-4870, bpo-8484, and bpo-8321.)\nnntp\u00b6\nThe nntplib\nmodule has a revamped implementation with better bytes and\ntext semantics as well as more practical APIs. These improvements break\ncompatibility with the nntplib version in Python 3.1, which was partly\ndysfunctional in itself.\nSupport for secure connections through both implicit (using\nnntplib.NNTP_SSL\n) and explicit (using nntplib.NNTP.starttls()\n)\nTLS has also been added.\n(Contributed by Antoine Pitrou in bpo-9360 and Andrew Vant in bpo-1926.)\ncertificates\u00b6\nhttp.client.HTTPSConnection\n, urllib.request.HTTPSHandler\nand urllib.request.urlopen()\nnow take optional arguments to allow for\nserver certificate checking against a set of Certificate Authorities,\nas recommended in public uses of HTTPS.\n(Added by Antoine Pitrou, bpo-9003.)\nimaplib\u00b6\nSupport for explicit TLS on standard IMAP4 connections has been added through\nthe new imaplib.IMAP4.starttls\nmethod.\n(Contributed by Lorenzo M. 
Catucci and Antoine Pitrou, bpo-4471.)\nhttp.client\u00b6\nThere were a number of small API improvements in the http.client\nmodule.\nThe old-style HTTP 0.9 simple responses are no longer supported and the strict\nparameter is deprecated in all classes.\nThe HTTPConnection\nand\nHTTPSConnection\nclasses now have a source_address\nparameter for a (host, port) tuple indicating where the HTTP connection is made\nfrom.\nSupport for certificate checking and HTTPS virtual hosts was added to\nHTTPSConnection\n.\nThe request()\nmethod on connection objects has allowed an optional body argument so that a file object could be used\nto supply the content of the request. Conveniently, the body argument now\nalso accepts an iterable object so long as it includes an explicit\nContent-Length\nheader. This extended interface is much more flexible than\nbefore.\nTo establish an HTTPS connection through a proxy server, there is a new\nset_tunnel()\nmethod that sets the host and\nport for HTTP Connect tunneling.\nTo match the behavior of http.server\n, the HTTP client library now also\nencodes headers with ISO-8859-1 (Latin-1) encoding. It was already doing that\nfor incoming headers, so now the behavior is consistent for both incoming and\noutgoing traffic. (See work by Armin Ronacher in bpo-10980.)\nunittest\u00b6\nThe unittest module has a number of improvements supporting test discovery for packages, easier experimentation at the interactive prompt, new test case methods, improved diagnostic messages for test failures, and better method names.\nThe command-line call\npython -m unittest\ncan now accept file paths instead of module names for running specific tests (bpo-10620). The new test discovery can find tests within packages, locating any test importable from the top-level directory.
The top-level directory can be specified with the -t\noption, a pattern for matching files with -p\n, and a directory to start discovery with -s\n:\n$ python -m unittest discover -s my_proj_dir -p _test.py\n(Contributed by Michael Foord.)\nExperimentation at the interactive prompt is now easier because the\nunittest.TestCase\nclass can now be instantiated without arguments:\n>>> from unittest import TestCase\n>>> TestCase().assertEqual(pow(2, 3), 8)\n(Contributed by Michael Foord.)\nThe\nunittest\nmodule has two new methods,\nassertWarns()\nand assertWarnsRegex()\n, to verify that a given warning type is triggered by the code under test:\nwith self.assertWarns(DeprecationWarning):\n    legacy_function('XYZ')\n(Contributed by Antoine Pitrou, bpo-9754.)\nAnother new method,\nassertCountEqual()\n, is used to compare two iterables to determine if their element counts are equal (whether the same elements are present with the same number of occurrences regardless of order):\ndef test_anagram(self):\n    self.assertCountEqual('algorithm', 'logarithm')\n(Contributed by Raymond Hettinger.)\nA principal feature of the unittest module is an effort to produce meaningful diagnostics when a test fails. When possible, the failure is recorded along with a diff of the output. This is especially helpful for analyzing log files of failed test runs. However, since diffs can sometimes be voluminous, there is a new\nmaxDiff\nattribute that sets the maximum length of diffs displayed.\nIn addition, the method names in the module have undergone a number of clean-ups.\nFor example,\nassertRegex()\nis the new name for assertRegexpMatches()\n, which was misnamed because the test uses re.search()\n, not re.match()\n.
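The re.search() semantics behind the rename can be seen directly in a small sketch; the RenameDemo test case below is purely illustrative, not part of the stdlib:

```python
import unittest

class RenameDemo(unittest.TestCase):
    def test_search_semantics(self):
        # assertRegex passes when the pattern matches *anywhere* in the
        # string (re.search semantics), not only at the start (re.match).
        self.assertRegex('error: disk full', 'disk')
        # A pattern that appears nowhere fails the assertion.
        with self.assertRaises(AssertionError):
            self.assertRegex('error: disk full', 'floppy')

# Running the case directly returns a TestResult we can inspect.
result = RenameDemo('test_search_semantics').run()
```

Had the method really used re.match(), the first assertion would have failed, since 'disk' does not occur at the start of the string.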
Other methods using regular expressions are now named using the short form \u201cRegex\u201d in preference to \u201cRegexp\u201d \u2013 this matches the names used in other unittest implementations, matches Python\u2019s old name for the re\nmodule, and it has unambiguous camel-casing.\n(Contributed by Raymond Hettinger and implemented by Ezio Melotti.)\nTo improve consistency, some long-standing method aliases are being deprecated in favor of the preferred names:\nOld Name Preferred Name\nassert_() assertTrue()\nassertEquals() assertEqual()\nassertNotEquals() assertNotEqual()\nassertAlmostEquals() assertAlmostEqual()\nassertNotAlmostEquals() assertNotAlmostEqual()\nLikewise, the\nTestCase.fail*\nmethods deprecated in Python 3.1 are expected to be removed in Python 3.3.\n(Contributed by Ezio Melotti; bpo-9424.)\nThe\nassertDictContainsSubset()\nmethod was deprecated because it was misimplemented with the arguments in the wrong order. This created hard-to-debug optical illusions where tests like TestCase().assertDictContainsSubset({'a':1, 'b':2}, {'a':1})\nwould fail.\n(Contributed by Raymond Hettinger.)\nrandom\u00b6\nThe integer methods in the random\nmodule now do a better job of producing\nuniform distributions. Previously, they computed selections with\nint(n*random())\n, which had a slight bias whenever n was not a power of two.\nNow, multiple selections are made from a range up to the next power of two and a\nselection is kept only when it falls within the range 0 <= x < n.
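The power-of-two rejection technique described above can be sketched in pure Python; this is a simplified illustration of the idea, not CPython's actual implementation:

```python
import random

def randbelow(n):
    """Return a uniform int in [0, n) by rejection sampling."""
    k = n.bit_length()             # bits needed to cover [0, n)
    r = random.getrandbits(k)      # uniform over [0, 2**k)
    while r >= n:                  # discard draws outside [0, n)
        r = random.getrandbits(k)
    return r

# Unlike int(n * random()), every value in the range is equally likely:
samples = [randbelow(6) for _ in range(10_000)]
assert set(samples) == {0, 1, 2, 3, 4, 5}
```

Since 2**k is always less than 2*n, the loop rejects fewer than half of all draws on average, so the expected number of iterations is below two.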
The\nfunctions and methods affected are randrange()\n,\nrandint()\n, choice()\n, shuffle()\nand\nsample()\n.\n(Contributed by Raymond Hettinger; bpo-9025.)\npoplib\u00b6\nThe POP3_SSL\nclass now accepts a context parameter, which is an\nssl.SSLContext\nobject that allows bundling SSL configuration options,\ncertificates and private keys into a single (potentially long-lived)\nstructure.\n(Contributed by Giampaolo Rodol\u00e0; bpo-8807.)\nasyncore\u00b6\nasyncore.dispatcher\nnow provides a\nhandle_accepted()\nmethod\nreturning a (sock, addr)\npair which is called when a connection has actually\nbeen established with a new remote endpoint. It is intended as a\nreplacement for the old handle_accept()\nand saves the user from having to call accept()\ndirectly.\n(Contributed by Giampaolo Rodol\u00e0; bpo-6706.)\ntempfile\u00b6\nThe tempfile\nmodule has a new context manager,\nTemporaryDirectory\n, which provides easy deterministic\ncleanup of temporary directories:\nwith tempfile.TemporaryDirectory() as tmpdirname:\n    print('created temporary dir:', tmpdirname)\n(Contributed by Neil Schemenauer and Nick Coghlan; bpo-5178.)\ninspect\u00b6\nThe\ninspect\nmodule has a new function getgeneratorstate()\nto easily identify the current state of a generator-iterator:\n>>> from inspect import getgeneratorstate\n>>> def gen():\n...     yield 'demo'\n...\n>>> g = gen()\n>>> getgeneratorstate(g)\n'GEN_CREATED'\n>>> next(g)\n'demo'\n>>> getgeneratorstate(g)\n'GEN_SUSPENDED'\n>>> next(g, None)\n>>> getgeneratorstate(g)\n'GEN_CLOSED'\n(Contributed by Rodolpho Eckhardt and Nick Coghlan, bpo-10220.)\nTo support lookups without the possibility of activating a dynamic attribute, the\ninspect\nmodule has a new function,\ngetattr_static()\n. Unlike hasattr()\n, this is a true read-only search, guaranteed not to change state while it is searching:\n>>> class A:\n...     @property\n...     def f(self):\n...         print('Running')\n...         return 10\n...
>>> a = A()\n>>> getattr(a, 'f')\nRunning\n10\n>>> inspect.getattr_static(a, 'f')\n(Contributed by Michael Foord.)\npydoc\u00b6\nThe pydoc\nmodule now provides a much-improved web server interface, as\nwell as a new command-line option -b\nto automatically open a browser window\nto display that server:\n$ pydoc3.2 -b\n(Contributed by Ron Adam; bpo-2001.)\ndis\u00b6\nThe dis\nmodule gained two new functions for inspecting code,\ncode_info()\nand show_code()\n. Both provide detailed code\nobject information for the supplied function, method, source code string or code\nobject. The former returns a string and the latter prints it:\n>>> import dis, random\n>>> dis.show_code(random.choice)\nName: choice\nFilename: /Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/random.py\nArgument count: 2\nKw-only arguments: 0\nNumber of locals: 3\nStack size: 11\nFlags: OPTIMIZED, NEWLOCALS, NOFREE\nConstants:\n0: 'Choose a random element from a non-empty sequence.'\n1: 'Cannot choose from an empty sequence'\nNames:\n0: _randbelow\n1: len\n2: ValueError\n3: IndexError\nVariable names:\n0: self\n1: seq\n2: i\nIn addition, the dis()\nfunction now accepts string arguments\nso that the common idiom dis(compile(s, '', 'eval'))\ncan be shortened\nto dis(s)\n:\n>>> dis('3*x+1 if x%2==1 else x//2')\n1 0 LOAD_NAME 0 (x)\n3 LOAD_CONST 0 (2)\n6 BINARY_MODULO\n7 LOAD_CONST 1 (1)\n10 COMPARE_OP 2 (==)\n13 POP_JUMP_IF_FALSE 28\n16 LOAD_CONST 2 (3)\n19 LOAD_NAME 0 (x)\n22 BINARY_MULTIPLY\n23 LOAD_CONST 1 (1)\n26 BINARY_ADD\n27 RETURN_VALUE\n>> 28 LOAD_NAME 0 (x)\n31 LOAD_CONST 0 (2)\n34 BINARY_FLOOR_DIVIDE\n35 RETURN_VALUE\nTaken together, these improvements make it easier to explore how CPython is implemented and to see for yourself what the language syntax does under the hood.\n(Contributed by Nick Coghlan in bpo-9147.)\ndbm\u00b6\nAll database modules now support the get()\nand setdefault()\nmethods.\n(Suggested by Ray Allen in bpo-9523.)\nctypes\u00b6\nA new type,
ctypes.c_ssize_t\nrepresents the C ssize_t\ndatatype.\nsite\u00b6\nThe site\nmodule has three new functions useful for reporting on the\ndetails of a given Python installation.\ngetsitepackages()\nlists all global site-packages directories.\ngetuserbase()\nreports on the user\u2019s base directory where data can be stored.\ngetusersitepackages()\nreveals the user-specific site-packages directory path.\n>>> import site\n>>> site.getsitepackages()\n['/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/site-packages',\n'/Library/Frameworks/Python.framework/Versions/3.2/lib/site-python',\n'/Library/Python/3.2/site-packages']\n>>> site.getuserbase()\n'/Users/raymondhettinger/Library/Python/3.2'\n>>> site.getusersitepackages()\n'/Users/raymondhettinger/Library/Python/3.2/lib/python/site-packages'\nConveniently, some of site\u2019s functionality is accessible directly from the command-line:\n$ python -m site --user-base\n/Users/raymondhettinger/.local\n$ python -m site --user-site\n/Users/raymondhettinger/.local/lib/python3.2/site-packages\n(Contributed by Tarek Ziad\u00e9 in bpo-6693.)\nsysconfig\u00b6\nThe new sysconfig\nmodule makes it straightforward to discover\ninstallation paths and configuration variables that vary across platforms and\ninstallations.\nThe module offers simple access functions for platform and version information:\nget_platform()\nreturns values like linux-i586 or macosx-10.6-ppc.\nget_python_version()\nreturns a Python version string such as \u201c3.2\u201d.\nIt also provides access to the paths and variables corresponding to one of\nseven named schemes used by distutils\n.
Those include posix_prefix,\nposix_home, posix_user, nt, nt_user, os2, os2_home:\nget_paths()\nmakes a dictionary containing installation paths for the current installation scheme.get_config_vars()\nreturns a dictionary of platform specific variables.\nThere is also a convenient command-line interface:\nC:\\Python32>python -m sysconfig\nPlatform: \"win32\"\nPython version: \"3.2\"\nCurrent installation scheme: \"nt\"\nPaths:\ndata = \"C:\\Python32\"\ninclude = \"C:\\Python32\\Include\"\nplatinclude = \"C:\\Python32\\Include\"\nplatlib = \"C:\\Python32\\Lib\\site-packages\"\nplatstdlib = \"C:\\Python32\\Lib\"\npurelib = \"C:\\Python32\\Lib\\site-packages\"\nscripts = \"C:\\Python32\\Scripts\"\nstdlib = \"C:\\Python32\\Lib\"\nVariables:\nBINDIR = \"C:\\Python32\"\nBINLIBDEST = \"C:\\Python32\\Lib\"\nEXE = \".exe\"\nINCLUDEPY = \"C:\\Python32\\Include\"\nLIBDEST = \"C:\\Python32\\Lib\"\nSO = \".pyd\"\nVERSION = \"32\"\nabiflags = \"\"\nbase = \"C:\\Python32\"\nexec_prefix = \"C:\\Python32\"\nplatbase = \"C:\\Python32\"\nprefix = \"C:\\Python32\"\nprojectbase = \"C:\\Python32\"\npy_version = \"3.2\"\npy_version_nodot = \"32\"\npy_version_short = \"3.2\"\nsrcdir = \"C:\\Python32\"\nuserbase = \"C:\\Documents and Settings\\Raymond\\Application Data\\Python\"\n(Moved out of Distutils by Tarek Ziad\u00e9.)\npdb\u00b6\nThe pdb\ndebugger module gained a number of usability improvements:\npdb.py\nnow has a-c\noption that executes commands as given in a.pdbrc\nscript file.A\n.pdbrc\nscript file can containcontinue\nandnext\ncommands that continue debugging.The\nPdb\nclass constructor now accepts a nosigint argument.New commands:\nl(list)\n,ll(long list)\nandsource\nfor listing source code.New commands:\ndisplay\nandundisplay\nfor showing or hiding the value of an expression if it has changed.New command:\ninteract\nfor starting an interactive interpreter containing the global and local names found in the current scope.Breakpoints can be cleared by breakpoint 
number.\n(Contributed by Georg Brandl, Antonio Cuni and Ilya Sandler.)\nconfigparser\u00b6\nThe configparser\nmodule was modified to improve usability and\npredictability of the default parser and its supported INI syntax. The old\nConfigParser\nclass was removed in favor of SafeConfigParser\nwhich has in turn been renamed to ConfigParser\n. Support\nfor inline comments is now turned off by default and section or option\nduplicates are not allowed in a single configuration source.\nConfig parsers gained a new API based on the mapping protocol:\n>>> parser = ConfigParser()\n>>> parser.read_string(\"\"\"\n... [DEFAULT]\n... location = upper left\n... visible = yes\n... editable = no\n... color = blue\n...\n... [main]\n... title = Main Menu\n... color = green\n...\n... [options]\n... title = Options\n... \"\"\")\n>>> parser['main']['color']\n'green'\n>>> parser['main']['editable']\n'no'\n>>> section = parser['options']\n>>> section['title']\n'Options'\n>>> section['title'] = 'Options (editable: %(editable)s)'\n>>> section['title']\n'Options (editable: no)'\nThe new API is implemented on top of the classical API, so custom parser subclasses should be able to use it without modifications.\nThe INI file structure accepted by config parsers can now be customized. Users can specify alternative option/value delimiters and comment prefixes, change the name of the DEFAULT section or switch the interpolation syntax.\nThere is support for pluggable interpolation including an additional interpolation\nhandler ExtendedInterpolation\n:\n>>> parser = ConfigParser(interpolation=ExtendedInterpolation())\n>>> parser.read_dict({'buildout': {'directory': '/home/ambv/zope9'},\n... 'custom': {'prefix': '/usr/local'}})\n>>> parser.read_string(\"\"\"\n... [buildout]\n... parts =\n... zope9\n... instance\n... find-links =\n... ${buildout:directory}/downloads/dist\n...\n... [zope9]\n... recipe = plone.recipe.zope9install\n... location = /opt/zope\n...\n... [instance]\n... 
recipe = plone.recipe.zope9instance\n... zope9-location = ${zope9:location}\n... zope-conf = ${custom:prefix}/etc/zope.conf\n... \"\"\")\n>>> parser['buildout']['find-links']\n'\\n/home/ambv/zope9/downloads/dist'\n>>> parser['instance']['zope-conf']\n'/usr/local/etc/zope.conf'\n>>> instance = parser['instance']\n>>> instance['zope-conf']\n'/usr/local/etc/zope.conf'\n>>> instance['zope9-location']\n'/opt/zope'\nA number of smaller features were also introduced, like support for specifying encoding in read operations, specifying fallback values for get-functions, or reading directly from dictionaries and strings.\n(All changes contributed by \u0141ukasz Langa.)\nurllib.parse\u00b6\nA number of usability improvements were made for the urllib.parse\nmodule.\nThe urlparse()\nfunction now supports IPv6 addresses as described in RFC 2732:\n>>> import urllib.parse\n>>> urllib.parse.urlparse('http://[dead:beef:cafe:5417:affe:8FA3:deaf:feed]/foo/')\nParseResult(scheme='http',\nnetloc='[dead:beef:cafe:5417:affe:8FA3:deaf:feed]',\npath='/foo/',\nparams='',\nquery='',\nfragment='')\nThe urldefrag()\nfunction now returns a named tuple:\n>>> r = urllib.parse.urldefrag('http://python.org/about/#target')\n>>> r\nDefragResult(url='http://python.org/about/', fragment='target')\n>>> r[0]\n'http://python.org/about/'\n>>> r.fragment\n'target'\nAnd, the urlencode()\nfunction is now much more flexible,\naccepting either a string or bytes type for the query argument. If it is a\nstring, then the safe, encoding, and error parameters are sent to\nquote_plus()\nfor encoding:\n>>> urllib.parse.urlencode([\n... ('type', 'telenovela'),\n... ('name', '\u00bfD\u00f3nde Est\u00e1 Elisa?')],\n... encoding='latin-1')\n'type=telenovela&name=%BFD%F3nde+Est%E1+Elisa%3F'\nAs detailed in Parsing ASCII Encoded Bytes, all the urllib.parse\nfunctions now accept ASCII-encoded byte strings as input, so long as they are\nnot mixed with regular strings. 
If ASCII-encoded byte strings are given as\nparameters, the return types will also be ASCII-encoded byte strings:\n>>> urllib.parse.urlparse(b'http://www.python.org:80/about/')\nParseResultBytes(scheme=b'http', netloc=b'www.python.org:80',\npath=b'/about/', params=b'', query=b'', fragment=b'')\n(Work by Nick Coghlan, Dan Mahn, and Senthil Kumaran in bpo-2987, bpo-5468, and bpo-9873.)\nmailbox\u00b6\nThanks to a concerted effort by R. David Murray, the mailbox\nmodule has\nbeen fixed for Python 3.2. The challenge was that mailbox had been originally\ndesigned with a text interface, but email messages are best represented with\nbytes\nbecause various parts of a message may have different encodings.\nThe solution harnessed the email\npackage\u2019s binary support for parsing\narbitrary email messages. In addition, the solution required a number of API\nchanges.\nAs expected, the add()\nmethod for\nmailbox.Mailbox\nobjects now accepts binary input.\nStringIO\nand text file input are deprecated. Also, string input\nwill fail early if non-ASCII characters are used. Previously it would fail when\nthe email was processed in a later step.\nThere is also support for binary output. The get_file()\nmethod now returns a file in the binary mode (where it used to incorrectly set\nthe file to text-mode). There is also a new get_bytes()\nmethod that returns a bytes\nrepresentation of a message corresponding\nto a given key.\nIt is still possible to get non-binary output using the old API\u2019s\nget_string()\nmethod, but that approach\nis not very useful. Instead, it is best to extract messages from\na Message\nobject or to load them from binary input.\n(Contributed by R. David Murray, with efforts from Steffen Daode Nurpmeso and an initial patch by Victor Stinner in bpo-9124.)\nturtledemo\u00b6\nThe demonstration code for the turtle\nmodule was moved from the Demo\ndirectory to the main library. It includes over a dozen sample scripts with\nlively displays.
Being on sys.path\n, it can now be run directly\nfrom the command-line:\n$ python -m turtledemo\n(Moved from the Demo directory by Alexander Belopolsky in bpo-10199.)\nMulti-threading\u00b6\nThe mechanism for serializing execution of concurrently running Python threads (generally known as the GIL or Global Interpreter Lock) has been rewritten. Among the objectives were more predictable switching intervals and reduced overhead due to lock contention and the number of ensuing system calls. The notion of a \u201ccheck interval\u201d to allow thread switches has been abandoned and replaced by an absolute duration expressed in seconds. This parameter is tunable through\nsys.setswitchinterval()\n. It currently defaults to 5 milliseconds.\nAdditional details about the implementation can be read from a python-dev mailing-list message (however, \u201cpriority requests\u201d as exposed in this message have not been kept for inclusion).\n(Contributed by Antoine Pitrou.)\nRegular and recursive locks now accept an optional timeout argument to their\nacquire()\nmethod. (Contributed by Antoine Pitrou; bpo-7316.)\nSimilarly,\nthreading.Semaphore.acquire()\nalso gained a timeout argument. (Contributed by Torsten Landschoff; bpo-850728.)\nRegular and recursive lock acquisitions can now be interrupted by signals on platforms using Pthreads. This means that Python programs that deadlock while acquiring locks can be successfully killed by repeatedly sending SIGINT to the process (by pressing Ctrl+C in most shells). (Contributed by Reid Kleckner; bpo-8844.)\nOptimizations\u00b6\nA number of small performance enhancements have been added:\nPython\u2019s peephole optimizer now recognizes patterns such as\nx in {1, 2, 3}\nas being a test for membership in a set of constants. The optimizer recasts the set\nas a frozenset\nand stores the pre-built constant.\nNow that the speed penalty is gone, it is practical to start writing membership tests using set-notation.
This style is both semantically clear and operationally fast:\nextension = name.rpartition('.')[2]\nif extension in {'xml', 'html', 'xhtml', 'css'}:\n    handle(name)\n(Patch and additional tests contributed by Dave Malcolm; bpo-6690).\nSerializing and unserializing data using the\npickle\nmodule is now several times faster.\n(Contributed by Alexandre Vassalotti, Antoine Pitrou and the Unladen Swallow team in bpo-9410 and bpo-3873.)\nThe Timsort algorithm used in\nlist.sort()\nand sorted()\nnow runs faster and uses less memory when called with a key function. Previously, every element of a list was wrapped with a temporary object that remembered the key value associated with each element. Now, two arrays of keys and values are sorted in parallel. This saves the memory consumed by the sort wrappers, and it saves time lost to delegating comparisons.\n(Patch by Daniel Stutzbach in bpo-9915.)\nJSON decoding performance is improved and memory consumption is reduced whenever the same string is repeated for multiple keys. Also, JSON encoding now uses the C speedups when the\nsort_keys\nargument is true.\n(Contributed by Antoine Pitrou in bpo-7451 and by Raymond Hettinger and Antoine Pitrou in bpo-10314.)\nRecursive locks (created with the\nthreading.RLock()\nAPI) now benefit from a C implementation which makes them as fast as regular locks, and between 10x and 15x faster than their previous pure Python implementation.\n(Contributed by Antoine Pitrou; bpo-3001.)\nThe fast-search algorithm in stringlib is now used by the\nsplit()\n, rsplit()\n, splitlines()\nand replace()\nmethods on bytes\n, bytearray\nand str\nobjects. Likewise, the algorithm is also used by rfind()\n, rindex()\n, rsplit()\nand rpartition()\n.\nInteger to string conversions now work two \u201cdigits\u201d at a time, reducing the number of division and modulo operations.\n(bpo-6713 by Gawain Bolton, Mark Dickinson, and Victor Stinner.)\nThere were several other minor optimizations.
Set differencing now runs faster\nwhen one operand is much larger than the other (patch by Andress Bennetts in\nbpo-8685). The array.repeat()\nmethod has a faster implementation\n(bpo-1569291 by Alexander Belopolsky). The BaseHTTPRequestHandler\nhas more efficient buffering (bpo-3709 by Andrew Schaaf). The\noperator.attrgetter()\nfunction has been sped-up (bpo-10160 by\nChristos Georgiou). And ConfigParser\nloads multi-line arguments a bit\nfaster (bpo-7113 by \u0141ukasz Langa).\nUnicode\u00b6\nPython has been updated to Unicode 6.0.0. The update to the standard adds over 2,000 new characters including emoji symbols which are important for mobile phones.\nIn addition, the updated standard has altered the character properties for two Kannada characters (U+0CF1, U+0CF2) and one New Tai Lue numeric character (U+19DA), making the former eligible for use in identifiers while disqualifying the latter. For more information, see Unicode Character Database Changes.\nCodecs\u00b6\nSupport was added for cp720 Arabic DOS encoding (bpo-1616979).\nMBCS encoding no longer ignores the error handler argument. In the default\nstrict mode, it raises an UnicodeDecodeError\nwhen it encounters an\nundecodable byte sequence and an UnicodeEncodeError\nfor an unencodable\ncharacter.\nThe MBCS codec supports 'strict'\nand 'ignore'\nerror handlers for\ndecoding, and 'strict'\nand 'replace'\nfor encoding.\nTo emulate Python3.1 MBCS encoding, select the 'ignore'\nhandler for decoding\nand the 'replace'\nhandler for encoding.\nOn Mac OS X, Python decodes command line arguments with 'utf-8'\nrather than\nthe locale encoding.\nBy default, tarfile\nuses 'utf-8'\nencoding on Windows (instead of\n'mbcs'\n) and the 'surrogateescape'\nerror handler on all operating\nsystems.\nDocumentation\u00b6\nThe documentation continues to be improved.\nA table of quick links has been added to the top of lengthy sections such as Built-in Functions. 
In the case of\nitertools\n, the links are accompanied by tables of cheatsheet-style summaries to provide an overview and memory jog without having to read all of the docs.\nIn some cases, the pure Python source code can be a helpful adjunct to the documentation, so many modules now feature quick links to the latest version of the source code. For example, the\nfunctools\nmodule documentation has a quick link at the top labeled: Source code Lib/functools.py.\n(Contributed by Raymond Hettinger; see rationale.)\nThe docs now contain more examples and recipes. In particular, the\nre\nmodule has an extensive section, Regular Expression Examples. Likewise, the itertools\nmodule continues to be updated with new Itertools Recipes.\nThe\ndatetime\nmodule now has an auxiliary implementation in pure Python. No functionality was changed. This just provides an easier-to-read alternate implementation.\n(Contributed by Alexander Belopolsky in bpo-9528.)\nThe unmaintained\nDemo\ndirectory has been removed. Some demos were integrated into the documentation, some were moved to the Tools/demo\ndirectory, and others were removed altogether.\n(Contributed by Georg Brandl in bpo-7962.)\nIDLE\u00b6\nCode Repository\u00b6\nIn addition to the existing Subversion code repository at https://svn.python.org there is now a Mercurial repository at https://hg.python.org/.\nAfter the 3.2 release, there are plans to switch to Mercurial as the primary repository. This distributed version control system should make it easier for members of the community to create and share external changesets.
See PEP 385 for details.\nTo learn to use the new version control system, see the Quick Start or the Guide to Mercurial Workflows.\nBuild and C API Changes\u00b6\nChanges to Python\u2019s build process and to the C API include:\nThe idle, pydoc and 2to3 scripts are now installed with a version-specific suffix on\nmake altinstall\n(bpo-10679).The C functions that access the Unicode Database now accept and return characters from the full Unicode range, even on narrow unicode builds (Py_UNICODE_TOLOWER, Py_UNICODE_ISDECIMAL, and others). A visible difference in Python is that\nunicodedata.numeric()\nnow returns the correct value for large code points, andrepr()\nmay consider more characters as printable.(Reported by Bupjoe Lee and fixed by Amaury Forgeot D\u2019Arc; bpo-5127.)\nComputed gotos are now enabled by default on supported compilers (which are detected by the configure script). They can still be disabled selectively by specifying\n--without-computed-gotos\n.(Contributed by Antoine Pitrou; bpo-9203.)\nThe option\n--with-wctype-functions\nwas removed. The built-in unicode database is now used for all functions.(Contributed by Amaury Forgeot D\u2019Arc; bpo-9210.)\nHash values are now values of a new type,\nPy_hash_t\n, which is defined to be the same size as a pointer. Previously they were of type long, which on some 64-bit operating systems is still only 32 bits long. As a result of this fix,set\nanddict\ncan now hold more than2**32\nentries on builds with 64-bit pointers (previously, they could grow to that size but their performance degraded catastrophically).(Suggested by Raymond Hettinger and implemented by Benjamin Peterson; bpo-9778.)\nA new macro\nPy_VA_COPY\ncopies the state of the variable argument list. 
It is equivalent to C99 va_copy but available on all Python platforms (bpo-2443).\nA new C API function\nPySys_SetArgvEx()\nallows an embedded interpreter to set sys.argv\nwithout also modifying sys.path\n(bpo-5753).\nPyEval_CallObject()\nis now only available in macro form. The function declaration, which was kept for backwards compatibility reasons, is now removed \u2013 the macro was introduced in 1997 (bpo-8276).\nThere is a new function\nPyLong_AsLongLongAndOverflow()\nwhich is analogous to PyLong_AsLongAndOverflow()\n. They both serve to convert Python int\ninto a native fixed-width type while providing detection of cases where the conversion won\u2019t fit (bpo-7767).\nThe\nPyUnicode_CompareWithASCIIString()\nfunction now returns not equal if the Python string is NUL terminated.\nThere is a new function\nPyErr_NewExceptionWithDoc()\nthat is like PyErr_NewException()\nbut allows a docstring to be specified. This lets C exceptions have the same self-documenting capabilities as their pure Python counterparts (bpo-7033).\nWhen compiled with the\n--with-valgrind\noption, the pymalloc allocator will be automatically disabled when running under Valgrind. This gives improved memory leak detection when running under Valgrind, while taking advantage of pymalloc at other times (bpo-2422).\nRemoved the\nO?\nformat from the PyArg_Parse functions. The format is no longer used and it had never been documented (bpo-8837).\nThere were a number of other small changes to the C-API. See the Misc/NEWS file for a complete list.\nAlso, there were a number of updates to the Mac OS X build, see Mac/BuildScript/README.txt for details. For users running a 32/64-bit build, there is a known problem with the default Tcl/Tk on Mac OS X 10.6. Accordingly, we recommend installing an updated alternative such as ActiveState Tcl/Tk 8.5.9. 
See https://www.python.org/download/mac/tcltk/ for additional details.\nPorting to Python 3.2\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code:\nThe\nconfigparser\nmodule has a number of clean-ups. The major change is to replace the old ConfigParser\nclass with the long-standing preferred alternative SafeConfigParser\n. In addition there are a number of smaller incompatibilities:\nThe interpolation syntax is now validated on\nget()\nand set()\noperations. In the default interpolation scheme, only two tokens with percent signs are valid: %(name)s\nand %%\n, the latter being an escaped percent sign.\nThe\nset()\nand add_section()\nmethods now verify that values are actual strings. Formerly, unsupported types could be introduced unintentionally.\nDuplicate sections or options from a single source now raise either\nDuplicateSectionError\nor DuplicateOptionError\n. Formerly, duplicates would silently overwrite a previous entry.\nInline comments are now disabled by default so now the ; character can be safely used in values.\nComments now can be indented. Consequently, for ; or # to appear at the start of a line in multiline values, it has to be interpolated. This keeps comment prefix characters in values from being mistaken as comments.\n\"\"\nis now a valid value and is no longer automatically converted to an empty string. For empty strings, use \"option =\"\nin a line.\nThe\nnntplib\nmodule was reworked extensively, meaning that its APIs are often incompatible with the 3.1 APIs.\nbytearray\nobjects can no longer be used as filenames; instead, they should be converted to bytes\n.\nThe\narray.tostring()\nand array.fromstring()\nhave been renamed to array.tobytes()\nand array.frombytes()\nfor clarity. The old names have been deprecated. 
(See bpo-8990.)\nPyArg_Parse*()\nfunctions:\n\u201ct#\u201d format has been removed: use \u201cs#\u201d or \u201cs*\u201d instead\n\u201cw\u201d and \u201cw#\u201d formats have been removed: use \u201cw*\u201d instead\nThe\nPyCObject\ntype, deprecated in 3.1, has been removed. To wrap opaque C pointers in Python objects, the PyCapsule\nAPI should be used instead; the new type has a well-defined interface for passing typing safety information and a less complicated signature for calling a destructor.\nThe\nsys.setfilesystemencoding()\nfunction was removed because it had a flawed design.\nThe\nrandom.seed()\nfunction and method now salt string seeds with an sha512 hash function. To access the previous version of seed in order to reproduce Python 3.1 sequences, set the version argument to 1, random.seed(s, version=1)\n.\nThe previously deprecated\nstring.maketrans()\nfunction has been removed in favor of the static methods bytes.maketrans()\nand bytearray.maketrans()\n. This change solves the confusion around which types were supported by the\nstring\nmodule. Now, str\n, bytes\n, and bytearray\neach have their own maketrans and translate methods with intermediate translation tables of the appropriate type. (Contributed by Georg Brandl; bpo-5675.)\nThe previously deprecated\ncontextlib.nested()\nfunction has been removed in favor of a plain\nwith\nstatement which can accept multiple context managers. The latter technique is faster (because it is built-in), and it does a better job finalizing multiple context managers when one of them raises an exception:\nwith open('mylog.txt') as infile, open('a.out', 'w') as outfile:\n    for line in infile:\n        if '<critical>' in line:\n            outfile.write(line)\n(Contributed by Georg Brandl and Mattias Br\u00e4ndstr\u00f6m; appspot issue 53094.)\nstruct.pack()\nnow only allows bytes for the\ns\nstring pack code. Formerly, it would accept text arguments and implicitly encode them to bytes using UTF-8. 
This was problematic because it made assumptions about the correct encoding and because a variable-length encoding can fail when writing to a fixed-length segment of a structure.\nCode such as\nstruct.pack('<6sHHBBB', 'GIF87a', x, y)\nshould be rewritten to use bytes instead of text, struct.pack('<6sHHBBB', b'GIF87a', x, y)\n. (Discovered by David Beazley and fixed by Victor Stinner; bpo-10783.)\nThe\nxml.etree.ElementTree\nclass now raises an xml.etree.ElementTree.ParseError\nwhen a parse fails. Previously it raised an xml.parsers.expat.ExpatError\n.\nThe new, longer\nstr()\nvalue on floats may break doctests which rely on the old output format.\nIn\nsubprocess.Popen\n, the default value for close_fds is now True\nunder Unix; under Windows, it is True\nif the three standard streams are set to None\n, False\notherwise. Previously, close_fds was always False\nby default, which produced difficult-to-solve bugs or race conditions when open file descriptors would leak into the child process.\nSupport for legacy HTTP 0.9 has been removed from\nurllib.request\nand http.client\n. Such support is still present on the server side (in http.server\n). (Contributed by Antoine Pitrou, bpo-10711.)\nSSL sockets in timeout mode now raise\nsocket.timeout\nwhen a timeout occurs, rather than a generic SSLError\n. (Contributed by Antoine Pitrou, bpo-10272.)\nThe misleading functions\nPyEval_AcquireLock()\nand PyEval_ReleaseLock()\nhave been officially deprecated. 
The thread-state aware APIs (such as PyEval_SaveThread()\nand PyEval_RestoreThread()\n) should be used instead.\nDue to security risks,\nasyncore.handle_accept()\nhas been deprecated, and a new function,\nasyncore.handle_accepted()\n, was added to replace it. (Contributed by Giampaolo Rodola in bpo-6706.)\nDue to the new GIL implementation,\nPyEval_InitThreads()\ncannot be called before Py_Initialize()\nanymore.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 22777} +{"url": "https://docs.python.org/3/whatsnew/3.4.html", "title": "What\u2019s New In Python 3.4", "content": "What\u2019s New In Python 3.4\u00b6\n- Author:\nR. David Murray (Editor)\nThis article explains the new features in Python 3.4, compared to 3.3. Python 3.4 was released on March 16, 2014. 
For full details, see the changelog.\nSee also\nPEP 429 \u2013 Python 3.4 Release Schedule\nSummary \u2013 Release Highlights\u00b6\nNew syntax features:\nNo new syntax features were added in Python 3.4.\nOther new features:\nNewly created file descriptors are non-inheritable (PEP 446).\nNew command line option for isolated mode (bpo-16499).\nImprovements in the handling of codecs that are not text encodings (multiple issues).\nA ModuleSpec Type for the Import System (PEP 451). (Affects importer authors.)\nThe\nmarshal\nformat has been made more compact and efficient (bpo-16475).\nNew library modules:\nasyncio\n: New provisional API for asynchronous IO (PEP 3156).\nselectors\n: High-level and efficient I/O multiplexing, built upon the\nselect\nmodule primitives (part of PEP 3156).\nstatistics\n: A basic numerically stable statistics library (PEP 450).\nSignificantly improved library modules:\nNew\npickle\nprotocol 4 (PEP 3154).\nmultiprocessing\nnow has an option to avoid using os.fork on Unix (bpo-8713).\nemail\nhas a new submodule,\ncontentmanager\n, and a new\nMessage\nsubclass (EmailMessage\n) that simplify MIME handling (bpo-18891).\nThe\ninspect\nand pydoc\nmodules are now capable of correct introspection of a much wider variety of callable objects, which improves the output of the Python help()\nsystem.\nThe\nipaddress\nmodule API has been declared stable.\nSecurity improvements:\nMake newly created file descriptors non-inheritable (PEP 446) to avoid leaking file descriptors to child processes.\nNew command line option for isolated mode (bpo-16499).\nmultiprocessing\nnow has an option to avoid using os.fork on Unix. 
spawn and forkserver are more secure because they avoid sharing data with child processes.\nmultiprocessing\nchild processes on Windows no longer inherit all of the parent\u2019s inheritable handles, only the necessary ones.\nA new\nhashlib.pbkdf2_hmac()\nfunction provides the PKCS#5 password-based key derivation function 2.\nRetrieving certificates from the Windows system cert store is now supported for\nssl\n.\nThe\nssl.SSLContext\nclass has a lot of improvements.\nAll modules in the standard library that support SSL now support server certificate verification, including hostname matching (\nssl.match_hostname()\n) and CRLs (Certificate Revocation lists, see ssl.SSLContext.load_verify_locations()\n).\nCPython implementation improvements:\nLeveraging PEP 442, in most cases module globals are no longer set to None during finalization (bpo-18214).\nPlease read on for a comprehensive list of user-facing changes, including many other smaller improvements, CPython optimizations, deprecations, and potential porting issues.\nNew Features\u00b6\nPEP 453: Explicit Bootstrapping of PIP in Python Installations\u00b6\nBootstrapping pip By Default\u00b6\nThe new ensurepip\nmodule (defined in PEP 453) provides a standard\ncross-platform mechanism to bootstrap the pip installer into Python\ninstallations and virtual environments. The version of pip\nincluded\nwith Python 3.4.0 is pip\n1.5.4, and future 3.4.x maintenance releases\nwill update the bundled version to the latest version of pip\nthat is\navailable at the time of creating the release candidate.\nBy default, the commands pipX\nand pipX.Y\nwill be installed on all\nplatforms (where X.Y stands for the version of the Python installation),\nalong with the pip\nPython package and its dependencies. On Windows and\nin virtual environments on all platforms, the unversioned pip\ncommand\nwill also be installed. 
On other platforms, the system wide unversioned\npip\ncommand typically refers to the separately installed Python 2\nversion.\nThe pyvenv\ncommand line utility and the venv\nmodule make use of the ensurepip\nmodule to make pip\nreadily\navailable in virtual environments. When using the command line utility,\npip\nis installed by default, while when using the venv\nmodule\nAPI installation of pip\nmust be requested explicitly.\nFor CPython source builds on POSIX systems,\nthe make install\nand make altinstall\ncommands bootstrap pip\nby\ndefault. This behaviour can be controlled through configure options, and\noverridden through Makefile options.\nOn Windows and Mac OS X, the CPython installers now default to installing\npip\nalong with CPython itself (users may opt out of installing it\nduring the installation process). Windows users will need to opt in to the\nautomatic PATH\nmodifications to have pip\navailable from the command\nline by default, otherwise it can still be accessed through the Python\nlauncher for Windows as py -m pip\n.\nAs discussed in the PEP, platform packagers may choose not to install these commands by default, as long as, when invoked, they provide clear and simple directions on how to install them on that platform (usually using the system package manager).\nNote\nTo avoid conflicts between parallel Python 2 and Python 3 installations,\nonly the versioned pip3\nand pip3.4\ncommands are bootstrapped by\ndefault when ensurepip\nis invoked directly - the --default-pip\noption is needed to also request the unversioned pip\ncommand.\npyvenv\nand the Windows installer ensure that the unqualified pip\ncommand is made available in those environments, and pip\ncan always be\ninvoked via the -m\nswitch rather than directly to avoid ambiguity on\nsystems with multiple Python installations.\nDocumentation Changes\u00b6\nAs part of this change, the Installing Python Modules and Distributing Python Modules sections of the documentation have been completely
redesigned as short getting started and FAQ documents. Most packaging documentation has now been moved out to the Python Packaging Authority maintained Python Packaging User Guide and the documentation of the individual projects.\nHowever, as this migration is currently still incomplete, the legacy versions of those guides remain available as Building C and C++ Extensions with setuptools.\nSee also\n- PEP 453 \u2013 Explicit bootstrapping of pip in Python installations\nPEP written by Donald Stufft and Nick Coghlan, implemented by Donald Stufft, Nick Coghlan, Martin von L\u00f6wis and Ned Deily.\nPEP 446: Newly Created File Descriptors Are Non-Inheritable\u00b6\nPEP 446 makes newly created file descriptors non-inheritable. In general, this is the behavior an application will want: when launching a new process, having currently open files also open in the new process can lead to all sorts of hard to find bugs, and potentially to security issues.\nHowever, there are occasions when inheritance is desired. To support these cases, the following new functions and methods are available:\nSee also\n- PEP 446 \u2013 Make newly created file descriptors non-inheritable\nPEP written and implemented by Victor Stinner.\nImprovements to Codec Handling\u00b6\nSince it was first introduced, the codecs\nmodule has always been\nintended to operate as a type-neutral dynamic encoding and decoding\nsystem. However, its close coupling with the Python text model, especially\nthe type restricted convenience methods on the builtin str\n,\nbytes\nand bytearray\ntypes, has historically obscured that\nfact.\nAs a key step in clarifying the situation, the codecs.encode()\nand\ncodecs.decode()\nconvenience functions are now properly documented in\nPython 2.7, 3.3 and 3.4. 
These functions have existed in the codecs\nmodule (and have been covered by the regression test suite) since Python 2.4,\nbut were previously only discoverable through runtime introspection.\nUnlike the convenience methods on str\n, bytes\nand\nbytearray\n, the codecs\nconvenience functions support arbitrary\ncodecs in both Python 2 and Python 3, rather than being limited to Unicode text\nencodings (in Python 3) or basestring\n<-> basestring\nconversions (in\nPython 2).\nIn Python 3.4, the interpreter is able to identify the known non-text encodings provided in the standard library and direct users towards these general purpose convenience functions when appropriate:\n>>> b\"abcdef\".decode(\"hex\")\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nLookupError: 'hex' is not a text encoding; use codecs.decode() to handle arbitrary codecs\n>>> \"hello\".encode(\"rot13\")\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nLookupError: 'rot13' is not a text encoding; use codecs.encode() to handle arbitrary codecs\n>>> open(\"foo.txt\", encoding=\"hex\")\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nLookupError: 'hex' is not a text encoding; use codecs.open() to handle arbitrary codecs\nIn a related change, whenever it is feasible without breaking backwards compatibility, exceptions raised during encoding and decoding operations are wrapped in a chained exception of the same type that mentions the name of the codec responsible for producing the error:\n>>> import codecs\n>>> codecs.decode(b\"abcdefgh\", \"hex\")\nTraceback (most recent call last):\nFile \"/usr/lib/python3.4/encodings/hex_codec.py\", line 20, in hex_decode\nreturn (binascii.a2b_hex(input), len(input))\nbinascii.Error: Non-hexadecimal digit found\nThe above exception was the direct cause of the following exception:\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nbinascii.Error: decoding with 'hex' codec failed (Error: Non-hexadecimal digit found)\n>>> 
codecs.encode(\"hello\", \"bz2\")\nTraceback (most recent call last):\nFile \"/usr/lib/python3.4/encodings/bz2_codec.py\", line 17, in bz2_encode\nreturn (bz2.compress(input), len(input))\nFile \"/usr/lib/python3.4/bz2.py\", line 498, in compress\nreturn comp.compress(data) + comp.flush()\nTypeError: 'str' does not support the buffer interface\nThe above exception was the direct cause of the following exception:\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nTypeError: encoding with 'bz2' codec failed (TypeError: 'str' does not support the buffer interface)\nFinally, as the examples above show, these improvements have permitted the restoration of the convenience aliases for the non-Unicode codecs that were themselves restored in Python 3.2. This means that encoding binary data to and from its hexadecimal representation (for example) can now be written as:\n>>> from codecs import encode, decode\n>>> encode(b\"hello\", \"hex\")\nb'68656c6c6f'\n>>> decode(b\"68656c6c6f\", \"hex\")\nb'hello'\nThe binary and text transforms provided in the standard library are detailed in Binary Transforms and Text Transforms.\n(Contributed by Nick Coghlan in bpo-7475, bpo-17827, bpo-17828 and bpo-19619.)\nPEP 451: A ModuleSpec Type for the Import System\u00b6\nPEP 451 provides an encapsulation of the information about a module that the import machinery will use to load it (that is, a module specification). This helps simplify both the import implementation and several import-related APIs. The change is also a stepping stone for several future import-related improvements.\nThe public-facing changes from the PEP are entirely backward-compatible. Furthermore, they should be transparent to everyone but importer authors. Key finder and loader methods have been deprecated, but they will continue working. New importers should use the new methods described in the PEP. Existing importers should be updated to implement the new methods. 
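A minimal sketch of inspecting one of these module specifications from Python code (the module name "json" is just an illustrative choice, not something mandated by the PEP):

```python
import importlib.util

# find_spec() returns the ModuleSpec that the import machinery would use
# to load the named module, without actually importing it.
spec = importlib.util.find_spec("json")

print(spec.name)    # the module's fully qualified name
print(spec.origin)  # where the module comes from (e.g. a filesystem path)
print(spec.loader)  # the loader the import system selected for it
```

The same ModuleSpec attributes (name, origin, loader, submodule_search_locations) are what the deprecated finder/loader methods are being replaced with.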
See the Deprecated section for a list of methods that should be replaced and their replacements.\nOther Language Changes\u00b6\nSome smaller changes made to the core Python language are:\nUnicode database updated to UCD version 6.3.\nmin()\nand max()\nnow accept a default keyword-only argument that can be used to specify the value they return if the iterable they are evaluating has no elements. (Contributed by Julian Berman in bpo-18111.)\nModule objects are now weakly referenceable.\nModule\n__file__\nattributes (and related values) should now always contain absolute paths by default, with the sole exception of __main__.__file__\nwhen a script has been executed directly using a relative path. (Contributed by Brett Cannon in bpo-18416.)\nAll the UTF-* codecs (except UTF-7) now reject surrogates during both encoding and decoding unless the\nsurrogatepass\nerror handler is used, with the exception of the UTF-16 decoder (which accepts valid surrogate pairs) and the UTF-16 encoder (which produces them while encoding non-BMP characters). (Contributed by Victor Stinner, Kang-Hao (Kenny) Lu and Serhiy Storchaka in bpo-12892.)\nNew German EBCDIC codec\ncp273\n. (Contributed by Michael Bierenfeld and Andrew Kuchling in bpo-1097797.)\nNew Ukrainian codec\ncp1125\n. (Contributed by Serhiy Storchaka in bpo-19668.)\nbytes\n.join() and bytearray\n.join() now accept arbitrary buffer objects as arguments. (Contributed by Antoine Pitrou in bpo-15958.)\nThe\nint\nconstructor now accepts any object that has an\n__index__\nmethod for its base argument. (Contributed by Mark Dickinson in bpo-16772.)\nFrame objects now have a\nclear()\nmethod that clears all references to local variables from the frame. (Contributed by Antoine Pitrou in bpo-17934.)\nmemoryview\nis now registered as a\nSequence\n, and supports the\nreversed()\nbuiltin. 
(Contributed by Nick Coghlan and Claudiu Popa in bpo-18690 and bpo-19078.)\nSignatures reported by\nhelp()\nhave been modified and improved in several cases as a result of the introduction of Argument Clinic and other changes to the\ninspect\nand pydoc\nmodules.\n__length_hint__()\nis now part of the formal language specification (see PEP 424). (Contributed by Armin Ronacher in bpo-16148.)\nNew Modules\u00b6\nasyncio\u00b6\nThe new asyncio\nmodule (defined in PEP 3156) provides a standard\npluggable event loop model for Python, providing solid asynchronous IO\nsupport in the standard library, and making it easier for other event loop\nimplementations to interoperate with the standard library and each other.\nFor Python 3.4, this module is considered a provisional API.\nSee also\n- PEP 3156 \u2013 Asynchronous IO Support Rebooted: the \u201casyncio\u201d Module\nPEP written and implementation led by Guido van Rossum.\nensurepip\u00b6\nThe new ensurepip\nmodule is the primary infrastructure for the\nPEP 453 implementation. In the normal course of events end users will not\nneed to interact with this module, but it can be used to manually bootstrap\npip\nif the automated bootstrapping into an installation or virtual\nenvironment was declined.\nensurepip\nincludes a bundled copy of pip\n, up-to-date as of the first\nrelease candidate of the release of CPython with which it ships (this applies\nto both maintenance releases and feature releases). ensurepip\ndoes not\naccess the internet. If the installation has internet access, after\nensurepip\nis run the bundled pip\ncan be used to upgrade pip\nto a\nmore recent release than the bundled one. (Note that such an upgraded version\nof pip\nis considered to be a separately installed package and will not be\nremoved if Python is uninstalled.)\nThe module is named ensurepip because if called when pip\nis already\ninstalled, it does nothing. 
It also has an --upgrade\noption that will\ncause it to install the bundled copy of pip\nif the existing installed\nversion of pip\nis older than the bundled copy.\nenum\u00b6\nThe new enum\nmodule (defined in PEP 435) provides a standard\nimplementation of enumeration types, allowing other modules (such as\nsocket\n) to provide more informative error messages and better\ndebugging support by replacing opaque integer constants with backwards\ncompatible enumeration values.\nSee also\n- PEP 435 \u2013 Adding an Enum type to the Python standard library\nPEP written by Barry Warsaw, Eli Bendersky and Ethan Furman, implemented by Ethan Furman.\npathlib\u00b6\nThe new pathlib\nmodule offers classes representing filesystem paths\nwith semantics appropriate for different operating systems. Path classes are\ndivided between pure paths, which provide purely computational operations\nwithout I/O, and concrete paths, which inherit from pure paths but also\nprovide I/O operations.\nFor Python 3.4, this module is considered a provisional API.\nSee also\n- PEP 428 \u2013 The pathlib module \u2013 object-oriented filesystem paths\nPEP written and implemented by Antoine Pitrou.\nselectors\u00b6\nThe new selectors\nmodule (created as part of implementing PEP 3156)\nallows high-level and efficient I/O multiplexing, built upon the\nselect\nmodule primitives.\nstatistics\u00b6\nThe new statistics\nmodule (defined in PEP 450) offers some core\nstatistics functionality directly in the standard library. This module\nsupports calculation of the mean, median, mode, variance and standard\ndeviation of a data series.\nSee also\n- PEP 450 \u2013 Adding A Statistics Module To The Standard Library\nPEP written and implemented by Steven D\u2019Aprano\ntracemalloc\u00b6\nThe new tracemalloc\nmodule (defined in PEP 454) is a debug tool to\ntrace memory blocks allocated by Python. 
It provides the following information:\nTrace where an object was allocated\nStatistics on allocated memory blocks per filename and per line number: total size, number and average size of allocated memory blocks\nCompute the differences between two snapshots to detect memory leaks\nSee also\n- PEP 454 \u2013 Add a new tracemalloc module to trace Python memory allocations\nPEP written and implemented by Victor Stinner\nImproved Modules\u00b6\nabc\u00b6\nNew function abc.get_cache_token()\ncan be used to know when to invalidate\ncaches that are affected by changes in the object graph. (Contributed\nby \u0141ukasz Langa in bpo-16832.)\nNew class ABC\nhas ABCMeta\nas its meta class.\nUsing ABC\nas a base class has essentially the same effect as specifying\nmetaclass=abc.ABCMeta\n, but is simpler to type and easier to read.\n(Contributed by Bruno Dupuis in bpo-16049.)\naifc\u00b6\nThe getparams()\nmethod now returns a namedtuple rather than a\nplain tuple. (Contributed by Claudiu Popa in bpo-17818.)\naifc.open()\nnow supports the context management protocol: when used in a\nwith\nblock, the close()\nmethod of the returned\nobject will be called automatically at the end of the block. (Contributed by\nSerhiy Storchaka in bpo-16486.)\nThe writeframesraw()\nand writeframes()\nmethods now accept any bytes-like object. (Contributed by Serhiy\nStorchaka in bpo-8311.)\nargparse\u00b6\nThe FileType\nclass now accepts encoding and\nerrors arguments, which are passed through to open()\n. (Contributed\nby Lucas Maystre in bpo-11175.)\naudioop\u00b6\naudioop\nnow supports 24-bit samples. (Contributed by Serhiy Storchaka\nin bpo-12866.)\nNew byteswap()\nfunction converts big-endian samples to\nlittle-endian and vice versa. (Contributed by Serhiy Storchaka in\nbpo-19641.)\nAll audioop\nfunctions now accept any bytes-like object. 
Strings\nare not accepted: they didn\u2019t work before, now they raise an error right away.\n(Contributed by Serhiy Storchaka in bpo-16685.)\nbase64\u00b6\nThe encoding and decoding functions in base64\nnow accept any\nbytes-like object in cases where it previously required a\nbytes\nor bytearray\ninstance. (Contributed by Nick Coghlan in\nbpo-17839.)\nNew functions a85encode()\n, a85decode()\n,\nb85encode()\n, and b85decode()\nprovide the ability to\nencode and decode binary data from and to Ascii85\nand the git/mercurial\nBase85\nformats, respectively. The a85\nfunctions have options that can\nbe used to make them compatible with the variants of the Ascii85\nencoding,\nincluding the Adobe variant. (Contributed by Martin Morrison, the Mercurial\nproject, Serhiy Storchaka, and Antoine Pitrou in bpo-17618.)\ncollections\u00b6\nThe ChainMap.new_child()\nmethod now accepts an m argument specifying\nthe child map to add to the chain. This allows an existing mapping and/or a\ncustom mapping type to be used for the child. (Contributed by Vinay Sajip in\nbpo-16613.)\ncolorsys\u00b6\nThe number of digits in the coefficients for the RGB \u2014 YIQ conversions has been expanded so that they match the FCC NTSC versions. The change in results should be less than 1% and may better match results found elsewhere. (Contributed by Brian Landers and Serhiy Storchaka in bpo-14323.)\ncontextlib\u00b6\nThe new contextlib.suppress\ncontext manager helps to clarify the\nintent of code that deliberately suppresses exceptions from a single\nstatement. (Contributed by Raymond Hettinger in bpo-15806 and\nZero Piraeus in bpo-19266.)\nThe new contextlib.redirect_stdout()\ncontext manager makes it easier\nfor utility scripts to handle inflexible APIs that write their output to\nsys.stdout\nand don\u2019t provide any options to redirect it. 
Using the\ncontext manager, the sys.stdout\noutput can be redirected to any\nother stream or, in conjunction with io.StringIO\n, to a string.\nThe latter can be especially useful, for example, to capture output\nfrom a function that was written to implement a command line interface.\nIt is recommended only for utility scripts because it affects the\nglobal state of sys.stdout\n. (Contributed by Raymond Hettinger\nin bpo-15805.)\nThe contextlib\ndocumentation has also been updated to include a\ndiscussion of the\ndifferences between single use, reusable and reentrant context managers.\ndbm\u00b6\ndbm.open()\nobjects now support the context management protocol. When\nused in a with\nstatement, the close\nmethod of the database\nobject will be called automatically at the end of the block. (Contributed by\nClaudiu Popa and Nick Coghlan in bpo-19282.)\ndis\u00b6\nFunctions show_code()\n, dis()\n, distb()\n, and\ndisassemble()\nnow accept a keyword-only file argument that\ncontrols where they write their output.\nThe dis\nmodule is now built around an Instruction\nclass\nthat provides object oriented access to the details of each individual bytecode\noperation.\nA new method, get_instructions()\n, provides an iterator that emits\nthe Instruction stream for a given piece of Python code. Thus it is now\npossible to write a program that inspects and manipulates a bytecode\nobject in ways different from those provided by the dis\nmodule\nitself. For example:\n>>> import dis\n>>> for instr in dis.get_instructions(lambda x: x + 1):\n... print(instr.opname)\nLOAD_FAST\nLOAD_CONST\nBINARY_ADD\nRETURN_VALUE\nThe various display tools in the dis\nmodule have been rewritten to use\nthese new components.\nIn addition, a new application-friendly class Bytecode\nprovides\nan object-oriented API for inspecting bytecode both in human-readable form\nand for iterating over instructions. 
The Bytecode\nconstructor\ntakes the same arguments that get_instructions()\ndoes (plus an\noptional current_offset), and the resulting object can be iterated to produce\nInstruction\nobjects. But it also has a dis\nmethod, equivalent to calling dis\non the constructor argument, but\nreturned as a multi-line string:\n>>> bytecode = dis.Bytecode(lambda x: x + 1, current_offset=3)\n>>> for instr in bytecode:\n... print('{} ({})'.format(instr.opname, instr.opcode))\nLOAD_FAST (124)\nLOAD_CONST (100)\nBINARY_ADD (23)\nRETURN_VALUE (83)\n>>> bytecode.dis().splitlines()\n[' 1 0 LOAD_FAST 0 (x)',\n' --> 3 LOAD_CONST 1 (1)',\n' 6 BINARY_ADD',\n' 7 RETURN_VALUE']\nBytecode\nalso has a class method,\nfrom_traceback()\n, that provides the ability to manipulate a\ntraceback (that is, print(Bytecode.from_traceback(tb).dis())\nis equivalent\nto distb(tb)\n).\n(Contributed by Nick Coghlan, Ryan Kelly and Thomas Kluyver in bpo-11816 and Claudiu Popa in bpo-17916.)\nNew function stack_effect()\ncomputes the effect on the Python stack\nof a given opcode and argument, information that is not otherwise available.\n(Contributed by Larry Hastings in bpo-19722.)\ndoctest\u00b6\nA new option flag, FAIL_FAST\n, halts\ntest running as soon as the first failure is detected. (Contributed by R.\nDavid Murray and Daniel Urban in bpo-16522.)\nThe doctest\ncommand line interface now uses argparse\n, and has two\nnew options, -o\nand -f\n. -o\nallows doctest options to be specified on the command line, and -f\nis a\nshorthand for -o FAIL_FAST\n(to parallel the similar option supported by the\nunittest\nCLI). (Contributed by R. David Murray in bpo-11390.)\ndoctest\nwill now find doctests in extension module __doc__\nstrings.\n(Contributed by Zachary Ware in bpo-3158.)\nemail\u00b6\nas_string()\nnow accepts a policy argument to\noverride the default policy of the message when generating a string\nrepresentation of it. 
This means that as_string\ncan now be used in more\ncircumstances, instead of having to create and use a generator\nin\norder to pass formatting parameters to its flatten\nmethod. (Contributed by\nR. David Murray in bpo-18600.)\nNew method as_bytes()\nhas been added to produce a bytes\nrepresentation of the message in a fashion similar to how as_string\nproduces a string representation. It does not accept the maxheaderlen\nargument, but does accept the unixfrom and policy arguments. The\nMessage\n__bytes__()\nmethod\ncalls it, meaning that bytes(mymsg)\nwill now produce the intuitive\nresult: a bytes object containing the fully formatted message. (Contributed\nby R. David Murray in bpo-18600.)\nThe Message.set_param()\nmethod now accepts a replace keyword argument.\nWhen specified, the associated header will be updated without changing\nits location in the list of headers. For backward compatibility, the default\nis False\n. (Contributed by R. David Murray in bpo-18891.)\nA pair of new subclasses of Message\nhave been added\n(EmailMessage\nand MIMEPart\n), along with a new sub-module,\ncontentmanager\nand a new policy\nattribute\ncontent_manager\n. All documentation is\ncurrently in the new module, which is being added as part of email\u2019s new\nprovisional API. These classes provide a number of new methods that\nmake extracting content from and inserting content into email messages much\neasier. For details, see the contentmanager\ndocumentation and\nthe email: Examples. These API additions complete the\nbulk of the work that was planned as part of the email6 project. The currently\nprovisional API is scheduled to become final in Python 3.5 (possibly with a few\nminor additions in the area of error handling). (Contributed by R. David\nMurray in bpo-18891.)\nfilecmp\u00b6\nA new clear_cache()\nfunction provides the ability to clear the\nfilecmp\ncomparison cache, which uses os.stat()\ninformation to\ndetermine if the file has changed since the last compare. 
This can be used,\nfor example, if the file might have been changed and re-checked in less time\nthan the resolution of a particular filesystem\u2019s file modification time field.\n(Contributed by Mark Levitt in bpo-18149.)\nNew module attribute DEFAULT_IGNORES\nprovides the list of\ndirectories that are used as the default value for the ignore parameter of\nthe dircmp()\nfunction. (Contributed by Eli Bendersky in\nbpo-15442.)\nfunctools\u00b6\nThe new partialmethod()\ndescriptor brings partial argument\napplication to descriptors, just as partial()\nprovides\nfor normal callables. The new descriptor also makes it easier to get\narbitrary callables (including partial()\ninstances)\nto behave like normal instance methods when included in a class definition.\n(Contributed by Alon Horev and Nick Coghlan in bpo-4331.)\nThe new singledispatch()\ndecorator brings support for\nsingle-dispatch generic functions to the Python standard library. Where\nobject oriented programming focuses on grouping multiple operations on a\ncommon set of data into a class, a generic function focuses on grouping\nmultiple implementations of an operation that allows it to work with\ndifferent kinds of data.\nSee also\n- PEP 443 \u2013 Single-dispatch generic functions\nPEP written and implemented by \u0141ukasz Langa.\ntotal_ordering()\nnow supports a return value of\nNotImplemented\nfrom the underlying comparison function. (Contributed\nby Katie Miller in bpo-10042.)\nA pure-python version of the partial()\nfunction is now in the\nstdlib; in CPython it is overridden by the C accelerated version, but it is\navailable for other implementations to use. 
(Contributed by Brian Thorne in\nbpo-12428.)\ngc\u00b6\nNew function get_stats()\nreturns a list of three per-generation\ndictionaries containing the collections statistics since interpreter startup.\n(Contributed by Antoine Pitrou in bpo-16351.)\nglob\u00b6\nA new function escape()\nprovides a way to escape special characters\nin a filename so that they do not become part of the globbing expansion but are\ninstead matched literally. (Contributed by Serhiy Storchaka in bpo-8402.)\nhashlib\u00b6\nA new hashlib.pbkdf2_hmac()\nfunction provides\nthe PKCS#5 password-based key derivation function 2. (Contributed by Christian\nHeimes in bpo-18582.)\nThe name\nattribute of hashlib\nhash objects is now\na formally supported interface. It has always existed in CPython\u2019s\nhashlib\n(although it did not return lower case names for all supported\nhashes), but it was not a public interface and so some other Python\nimplementations have not previously supported it. (Contributed by Jason R.\nCoombs in bpo-18532.)\nhmac\u00b6\nhmac\nnow accepts bytearray\nas well as bytes\nfor the key\nargument to the new()\nfunction, and the msg parameter to both the\nnew()\nfunction and the update()\nmethod now\naccepts any type supported by the hashlib\nmodule. (Contributed\nby Jonas Borgstr\u00f6m in bpo-18240.)\nThe digestmod argument to the hmac.new()\nfunction may now be any hash\ndigest name recognized by hashlib\n. In addition, the current behavior in\nwhich the value of digestmod defaults to MD5\nis deprecated: in a\nfuture version of Python there will be no default value. (Contributed by\nChristian Heimes in bpo-17276.)\nWith the addition of block_size\nand name\nattributes (and the formal documentation of the digest_size\nattribute), the hmac\nmodule now conforms fully to the PEP 247 API.\n(Contributed by Christian Heimes in bpo-18775.)\nhtml\u00b6\nNew function unescape()\nconverts HTML5 character references to\nthe corresponding Unicode characters. 
(Contributed by Ezio Melotti in\nbpo-2927.)\nHTMLParser\naccepts a new keyword argument\nconvert_charrefs that, when True\n, automatically converts all character\nreferences. For backward-compatibility, its value defaults to False\n, but\nit will change to True\nin a future version of Python, so you are invited to\nset it explicitly and update your code to use this new feature. (Contributed\nby Ezio Melotti in bpo-13633.)\nThe strict argument of HTMLParser\nis now deprecated.\n(Contributed by Ezio Melotti in bpo-15114.)\nhttp\u00b6\nsend_error()\nnow accepts an\noptional additional explain parameter which can be used to provide an\nextended error description, overriding the hardcoded default if there is one.\nThis extended error description will be formatted using the\nerror_message_format\nattribute\nand sent as the body of the error response.\n(Contributed by Karl Cow in bpo-12921.)\nThe http.server\ncommand line interface now has\na -b/--bind\noption that causes the server to listen on a specific address.\n(Contributed by Malte Swart in bpo-17764.)\nidlelib and IDLE\u00b6\nSince idlelib implements the IDLE shell and editor and is not intended for\nimport by other programs, it gets improvements with every release. See\nLib/idlelib/NEWS.txt\nfor a cumulative list of changes since 3.3.0,\nas well as changes made in future 3.4.x releases. This file is also available\nfrom the IDLE dialog.\nimportlib\u00b6\nThe InspectLoader\nABC defines a new method,\nsource_to_code()\nthat accepts source\ndata and a path and returns a code object. The default implementation\nis equivalent to compile(data, path, 'exec', dont_inherit=True)\n.\n(Contributed by Eric Snow and Brett Cannon in bpo-15627.)\nInspectLoader\nalso now has a default implementation\nfor the get_code()\nmethod. However,\nit will normally be desirable to override the default implementation\nfor performance reasons. 
(Contributed by Brett Cannon in bpo-18072.)\nThe reload()\nfunction has been moved from imp\nto\nimportlib\nas part of the imp\nmodule deprecation. (Contributed by\nBerker Peksag in bpo-18193.)\nimportlib.util\nnow has a MAGIC_NUMBER\nattribute\nproviding access to the bytecode version number. This replaces the\nget_magic()\nfunction in the deprecated imp\nmodule.\n(Contributed by Brett Cannon in bpo-18192.)\nNew importlib.util\nfunctions cache_from_source()\nand source_from_cache()\nreplace the same-named functions\nin the deprecated imp\nmodule. (Contributed by Brett Cannon in\nbpo-18194.)\nThe importlib\nbootstrap NamespaceLoader\nnow conforms to\nthe InspectLoader\nABC, which means that runpy\nand\npython -m\ncan now be used with namespace packages. (Contributed\nby Brett Cannon in bpo-18058.)\nimportlib.util\nhas a new function decode_source()\nthat decodes source from bytes using universal newline processing. This is\nuseful for implementing InspectLoader.get_source()\nmethods.\nimportlib.machinery.ExtensionFileLoader\nnow has a\nget_filename()\nmethod. This was\ninadvertently omitted in the original implementation. (Contributed by Eric\nSnow in bpo-19152.)\ninspect\u00b6\nThe inspect\nmodule now offers a basic command line interface to quickly display source code and other\ninformation for modules, classes and functions. (Contributed by Claudiu Popa\nand Nick Coghlan in bpo-18626.)\nunwrap()\nmakes it easy to unravel wrapper function chains\ncreated by functools.wraps()\n(and any other API that sets the\n__wrapped__\nattribute on a wrapper function). (Contributed by\nDaniel Urban, Aaron Iles and Nick Coghlan in bpo-13266.)\nAs part of the implementation of the new enum\nmodule, the\ninspect\nmodule now has substantially better support for custom\n__dir__\nmethods and dynamic class attributes provided through\nmetaclasses. (Contributed by Ethan Furman in bpo-18929 and\nbpo-19030.)\ngetfullargspec()\nand getargspec()\nnow use the signature()\nAPI. 
This allows them to\nsupport a much broader range of callables, including those with\n__signature__\nattributes, those with metadata provided by argument\nclinic, functools.partial()\nobjects and more. Note that, unlike\nsignature()\n, these functions still ignore __wrapped__\nattributes, and report the already bound first argument for bound methods,\nso it is still necessary to update your code to use\nsignature()\ndirectly if those features are desired.\n(Contributed by Yury Selivanov in bpo-17481.)\nsignature()\nnow supports duck types of CPython functions,\nwhich adds support for functions compiled with Cython. (Contributed\nby Stefan Behnel and Yury Selivanov in bpo-17159.)\nipaddress\u00b6\nipaddress\nwas added to the standard library in Python 3.3 as a\nprovisional API. With the release of Python 3.4, this qualification\nhas been removed: ipaddress\nis now considered a stable API, covered\nby the normal standard library requirements to maintain backwards\ncompatibility.\nA new is_global\nproperty is True\nif\nan address is globally routeable. (Contributed by Peter Moody in\nbpo-17400.)\nlogging\u00b6\nThe TimedRotatingFileHandler\nhas a new atTime\nparameter that can be used to specify the time of day when rollover should\nhappen. (Contributed by Ronald Oussoren in bpo-9556.)\nSocketHandler\nand\nDatagramHandler\nnow support Unix domain sockets (by\nsetting port to None\n). (Contributed by Vinay Sajip in commit\nce46195b56a9.)\nfileConfig()\nnow accepts a\nconfigparser.RawConfigParser\nsubclass instance for the fname\nparameter. This facilitates using a configuration file when logging\nconfiguration is just a part of the overall application configuration, or where\nthe application modifies the configuration before passing it to\nfileConfig()\n. 
(Contributed by Vinay Sajip in\nbpo-16110.)\nLogging configuration data received from a socket via the\nlogging.config.listen()\nfunction can now be validated before being\nprocessed by supplying a verification function as the argument to the new\nverify keyword argument. (Contributed by Vinay Sajip in bpo-15452.)\nmarshal\u00b6\nThe default marshal\nversion has been bumped to 3. The code implementing\nthe new version restores the Python2 behavior of recording only one copy of\ninterned strings and preserving the interning on deserialization, and extends\nthis \u201cone copy\u201d ability to any object type (including handling recursive\nreferences). This reduces both the size of .pyc\nfiles and the amount of\nmemory a module occupies in memory when it is loaded from a .pyc\n(or\n.pyo\n) file. (Contributed by Kristj\u00e1n Valur J\u00f3nsson in bpo-16475,\nwith additional speedups by Antoine Pitrou in bpo-19219.)\nmmap\u00b6\nmmap objects are now weakly referenceable. (Contributed by Valerie Lambert in bpo-4885.)\nmultiprocessing\u00b6\nOn Unix two new start methods,\nspawn\nand forkserver\n, have been added for starting processes using\nmultiprocessing\n. These make the mixing of processes with threads more\nrobust, and the spawn\nmethod matches the semantics that multiprocessing has\nalways used on Windows. New function\nget_all_start_methods()\nreports all start methods\navailable on the platform, get_start_method()\nreports\nthe current start method, and set_start_method()\nsets\nthe start method. (Contributed by Richard Oudkerk in bpo-8713.)\nmultiprocessing\nalso now has the concept of a context\n, which\ndetermines how child processes are created. New function\nget_context()\nreturns a context that uses a specified\nstart method. It has the same API as the multiprocessing\nmodule itself,\nso you can use it to create Pool\ns and other\nobjects that will operate within that context. 
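As a rough sketch of how a context can stand in for the module itself (the worker function and pool size here are illustrative, not from the original text):

```python
import multiprocessing

def square(x):
    return x * x

if __name__ == "__main__":
    # A context exposes the same API as the multiprocessing module,
    # but is pinned to one start method ("spawn" here).
    ctx = multiprocessing.get_context("spawn")
    with ctx.Pool(2) as pool:
        print(pool.map(square, range(5)))  # [0, 1, 4, 9, 16]
```

Because the context is an ordinary object that can be passed around, two libraries in the same process can each pin a different start method without touching the global default.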
This allows a framework and an\napplication or different parts of the same application to use multiprocessing\nwithout interfering with each other. (Contributed by Richard Oudkerk in\nbpo-18999.)\nExcept when using the old fork start method, child processes no longer inherit unneeded handles/file descriptors from their parents (part of bpo-8713).\nmultiprocessing\nnow relies on runpy\n(which implements the\n-m\nswitch) to initialise __main__\nappropriately in child processes\nwhen using the spawn\nor forkserver\nstart methods. This resolves some\nedge cases where combining multiprocessing, the -m\ncommand line switch,\nand explicit relative imports could cause obscure failures in child\nprocesses. (Contributed by Nick Coghlan in bpo-19946.)\noperator\u00b6\nNew function length_hint()\nprovides an implementation of the\nspecification for how the __length_hint__()\nspecial method should\nbe used, as part of the PEP 424 formal specification of this language\nfeature. (Contributed by Armin Ronacher in bpo-16148.)\nThere is now a pure-python version of the operator\nmodule available for\nreference and for use by alternate implementations of Python. (Contributed by\nZachary Ware in bpo-16694.)\nos\u00b6\nThere are new functions to get and set the inheritable flag of a file descriptor (os.get_inheritable()\n,\nos.set_inheritable()\n) or a Windows handle\n(os.get_handle_inheritable()\n, os.set_handle_inheritable()\n).\nNew function cpu_count()\nreports the number of CPUs available on the\nplatform on which Python is running (or None\nif the count can\u2019t be\ndetermined). The multiprocessing.cpu_count()\nfunction is now implemented\nin terms of this function. (Contributed by Trent Nelson, Yogesh Chaudhari,\nVictor Stinner, and Charles-Fran\u00e7ois Natali in bpo-17914.)\nos.path.samestat()\nis now available on the Windows platform (and the\nos.path.samefile()\nimplementation is now shared between Unix and\nWindows). 
(Contributed by Brian Curtin in bpo-11939.)\nos.path.ismount()\nnow recognizes volumes mounted below a drive\nroot on Windows. (Contributed by Tim Golden in bpo-9035.)\nos.open()\nsupports two new flags on platforms that provide them,\nO_PATH\n(un-opened file descriptor), and O_TMPFILE\n(unnamed temporary file; as of 3.4.0 release available only on Linux systems\nwith a kernel version of 3.11 or newer that have uapi headers). (Contributed\nby Christian Heimes in bpo-18673 and Benjamin Peterson, respectively.)\npdb\u00b6\npdb\nhas been enhanced to handle generators, yield\n, and\nyield from\nin a more useful fashion. This is especially helpful when\ndebugging asyncio\nbased programs. (Contributed by Andrew Svetlov and\nXavier de Gaye in bpo-16596.)\nThe print\ncommand has been removed from pdb\n, restoring access to the\nPython print()\nfunction from the pdb command line. Python2\u2019s pdb\ndid\nnot have a print\ncommand; instead, entering print\nexecuted the\nprint\nstatement. In Python3 print\nwas mistakenly made an alias for the\npdb p\ncommand. p\n, however, prints the repr\nof its argument,\nnot the str\nlike the Python2 print\ncommand did. Worse, the Python3\npdb print\ncommand shadowed the Python3 print\nfunction, making it\ninaccessible at the pdb\nprompt. (Contributed by Connor Osborn in\nbpo-18764.)\npickle\u00b6\npickle\nnow supports (but does not use by default) a new pickle protocol,\nprotocol 4. This new protocol addresses a number of issues that were present\nin previous protocols, such as the serialization of nested classes, very large\nstrings and containers, and classes whose __new__()\nmethod takes\nkeyword-only arguments. 
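For instance, protocol 4's support for nested classes can be sketched like this (the class names are invented for illustration):

```python
import pickle

class Outer:
    # Before protocol 4, classes were pickled by bare name, so a nested
    # class like this could not be serialized by reference; protocol 4
    # records the qualified name "Outer.Inner" instead.
    class Inner:
        pass

# Protocol 4 is opt-in in 3.4; the default protocol is unchanged.
blob = pickle.dumps(Outer.Inner, protocol=4)
print(pickle.loads(blob) is Outer.Inner)  # True
```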
It also provides some efficiency improvements.\nSee also\n- PEP 3154 \u2013 Pickle protocol 4\nPEP written by Antoine Pitrou and implemented by Alexandre Vassalotti.\nplistlib\u00b6\nplistlib\nnow has an API that is similar to the standard pattern for\nstdlib serialization protocols, with new load()\n,\ndump()\n, loads()\n, and dumps()\nfunctions. (The older API is now deprecated.) In addition to the already\nsupported XML plist format (FMT_XML\n), it also now supports\nthe binary plist format (FMT_BINARY\n). (Contributed by Ronald\nOussoren and others in bpo-14455.)\npoplib\u00b6\nTwo new methods have been added to poplib\n: capa()\n,\nwhich returns the list of capabilities advertised by the POP server, and\nstls()\n, which switches a clear-text POP3 session into an\nencrypted POP3 session if the POP server supports it. (Contributed by Lorenzo\nCatucci in bpo-4473.)\npprint\u00b6\nThe pprint\nmodule\u2019s PrettyPrinter\nclass and its\npformat()\n, and pprint()\nfunctions have a new\noption, compact, that controls how the output is formatted. Currently\nsetting compact to True\nmeans that sequences will be printed with as many\nsequence elements as will fit within width on each (indented) line.\n(Contributed by Serhiy Storchaka in bpo-19132.)\nLong strings are now wrapped using Python\u2019s normal line continuation syntax. (Contributed by Antoine Pitrou in bpo-17150.)\npty\u00b6\npty.spawn()\nnow returns the status value from os.waitpid()\non\nthe child process, instead of None\n. (Contributed by Gregory P. Smith.)\npydoc\u00b6\nThe pydoc\nmodule is now based directly on the inspect.signature()\nintrospection API, allowing it to provide signature information for a wider\nvariety of callable objects. This change also means that __wrapped__\nattributes are now taken into account when displaying help information.\n(Contributed by Larry Hastings in bpo-19674.)\nThe pydoc\nmodule no longer displays the self\nparameter for\nalready bound methods. 
Instead, it aims to always display the exact current\nsignature of the supplied callable. (Contributed by Larry Hastings in\nbpo-20710.)\nIn addition to the changes that have been made to pydoc\ndirectly,\nits handling of custom __dir__\nmethods and various descriptor\nbehaviours has also been improved substantially by the underlying changes in\nthe inspect\nmodule.\nAs the help()\nbuiltin is based on pydoc\n, the above changes also\naffect the behaviour of help()\n.\nre\u00b6\nNew fullmatch()\nfunction and Pattern.fullmatch()\nmethod anchor\nthe pattern at both ends of the string to match. This provides a way to be\nexplicit about the goal of the match, which avoids a class of subtle bugs where\n$\ncharacters get lost during code changes or the addition of alternatives\nto an existing regular expression. (Contributed by Matthew Barnett in\nbpo-16203.)\nThe repr of regex objects now includes the pattern and the flags; the repr of match objects now includes the start, end, and the part of the string that matched. (Contributed by Hugo Lopes Tavares and Serhiy Storchaka in bpo-13592 and bpo-17087.)\nresource\u00b6\nNew prlimit()\nfunction, available on Linux platforms with a\nkernel version of 2.6.36 or later and glibc of 2.13 or later, provides the\nability to query or set the resource limits for processes other than the one\nmaking the call. (Contributed by Christian Heimes in bpo-16595.)\nOn Linux kernel version 2.6.36 or later, there are also some new\nLinux specific constants: RLIMIT_MSGQUEUE\n,\nRLIMIT_NICE\n, RLIMIT_RTPRIO\n,\nRLIMIT_RTTIME\n, and RLIMIT_SIGPENDING\n.\n(Contributed by Christian Heimes in bpo-19324.)\nOn FreeBSD version 9 and later, there are some new FreeBSD specific constants:\nRLIMIT_SBSIZE\n, RLIMIT_SWAP\n, and\nRLIMIT_NPTS\n. 
(Contributed by Claudiu Popa in\nbpo-19343.)\nselect\u00b6\nepoll\nobjects now support the context management protocol.\nWhen used in a with\nstatement, the close()\nmethod will be called automatically at the end of the block. (Contributed\nby Serhiy Storchaka in bpo-16488.)\ndevpoll\nobjects now have fileno()\nand\nclose()\nmethods, as well as a new attribute\nclosed\n. (Contributed by Victor Stinner in\nbpo-18794.)\nshelve\u00b6\nShelf\ninstances may now be used in with\nstatements,\nand will be automatically closed at the end of the with\nblock.\n(Contributed by Filip Gruszczy\u0144ski in bpo-13896.)\nshutil\u00b6\ncopyfile()\nnow raises a specific Error\nsubclass,\nSameFileError\n, when the source and destination are the same\nfile, which allows an application to take appropriate action on this specific\nerror. (Contributed by Atsuo Ishimoto and Hynek Schlawack in\nbpo-1492704.)\nsmtpd\u00b6\nThe SMTPServer\nand SMTPChannel\nclasses now\naccept a map keyword argument which, if specified, is passed in to\nasynchat.async_chat\nas its map argument. This allows an application\nto avoid affecting the global socket map. (Contributed by Vinay Sajip in\nbpo-11959.)\nsmtplib\u00b6\nSMTPException\nis now a subclass of OSError\n, which allows\nboth socket level errors and SMTP protocol level errors to be caught in one\ntry/except statement by code that only cares whether or not an error occurred.\n(Contributed by Ned Jackson Lovely in bpo-2118.)\nsocket\u00b6\nThe socket module now supports the CAN_BCM\nprotocol on\nplatforms that support it. (Contributed by Brian Thorne in bpo-15359.)\nSocket objects have new methods to get or set their inheritable flag, get_inheritable()\nand\nset_inheritable()\n.\nThe socket.AF_*\nand socket.SOCK_*\nconstants are now enumeration values\nusing the new enum\nmodule. 
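A small sketch of what the enum-backed constants look like in practice:

```python
import socket

# Since 3.4, AF_* and SOCK_* are members of enums (socket.AddressFamily
# and socket.SocketKind), so their repr carries a readable name.
fam = socket.AF_INET
print(repr(fam))   # e.g. <AddressFamily.AF_INET: 2>
print(fam.name)    # AF_INET

# They remain integers, so existing code is unaffected.
assert isinstance(fam, int)
```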
This allows meaningful names to be printed\nduring debugging, instead of integer \u201cmagic numbers\u201d.\nThe AF_LINK\nconstant is now available on BSD and OSX.\ninet_pton()\nand inet_ntop()\nare now supported\non Windows. (Contributed by Atsuo Ishimoto in bpo-7171.)\nsqlite3\u00b6\nA new boolean parameter to the connect()\nfunction, uri, can be\nused to indicate that the database parameter is a uri\n(see the SQLite\nURI documentation). (Contributed by poq in\nbpo-13773.)\nssl\u00b6\nPROTOCOL_TLSv1_1\nand PROTOCOL_TLSv1_2\n(TLSv1.1 and\nTLSv1.2 support) have been added; support for these protocols is only available if\nPython is linked with OpenSSL 1.0.1 or later. (Contributed by Michele Orr\u00f9 and\nAntoine Pitrou in bpo-16692.)\nNew function create_default_context()\nprovides a standard way to\nobtain an SSLContext\nwhose settings are intended to be a\nreasonable balance between compatibility and security. These settings are\nmore stringent than the defaults provided by the SSLContext\nconstructor, and may be adjusted in the future, without prior deprecation, if\nbest-practice security requirements change. The new recommended best\npractice for using stdlib libraries that support SSL is to use\ncreate_default_context()\nto obtain an SSLContext\nobject, modify it if needed, and then pass it as the context argument\nof the appropriate stdlib API. (Contributed by Christian Heimes\nin bpo-19689.)\nSSLContext\nmethod load_verify_locations()\naccepts a new optional argument cadata, which can be used to provide PEM or\nDER encoded certificates directly via strings or bytes, respectively.\n(Contributed by Christian Heimes in bpo-18138.)\nNew function get_default_verify_paths()\nreturns\na named tuple of the paths and environment variables that the\nset_default_verify_paths()\nmethod uses to set\nOpenSSL\u2019s default cafile\nand capath\n. This can be an aid in\ndebugging default verification issues. 
(Contributed by Christian Heimes\nin bpo-18143.)\nSSLContext\nhas a new method,\ncert_store_stats()\n, that reports the number of loaded\nX.509\ncerts, X.509 CA\ncerts, and certificate revocation lists\n(crl\ns), as well as a get_ca_certs()\nmethod that\nreturns a list of the loaded CA\ncertificates. (Contributed by Christian\nHeimes in bpo-18147.)\nIf OpenSSL 0.9.8 or later is available, SSLContext\nhas a new\nattribute verify_flags\nthat can be used to control the\ncertificate verification process by setting it to some combination of the new\nconstants VERIFY_DEFAULT\n, VERIFY_CRL_CHECK_LEAF\n,\nVERIFY_CRL_CHECK_CHAIN\n, or VERIFY_X509_STRICT\n.\nOpenSSL does not do any CRL verification by default. (Contributed by\nChristian Heimes in bpo-8813.)\nNew SSLContext\nmethod load_default_certs()\nloads a set of default \u201ccertificate authority\u201d (CA) certificates from default\nlocations, which vary according to the platform. It can be used to load both\nTLS web server authentication certificates\n(purpose=\nSERVER_AUTH\n) for a client to use to verify a\nserver, and certificates for a server to use in verifying client certificates\n(purpose=\nCLIENT_AUTH\n). (Contributed by Christian\nHeimes in bpo-19292.)\nTwo new Windows-only functions, enum_certificates()\nand\nenum_crls()\nprovide the ability to retrieve certificates,\ncertificate information, and CRLs from the Windows cert store. (Contributed\nby Christian Heimes in bpo-17134.)\nSupport for server-side SNI (Server Name Indication) has been added via the new\nssl.SSLContext.set_servername_callback()\nmethod.\n(Contributed by Daniel Black in bpo-8109.)\nThe dictionary returned by SSLSocket.getpeercert()\ncontains additional\nX509v3\nextension items: crlDistributionPoints\n, caIssuers\n, and\nOCSP\nURIs. (Contributed by Christian Heimes in bpo-18379.)\nstat\u00b6\nThe stat\nmodule is now backed by a C implementation in _stat\n. 
A C\nimplementation is required as most of the values aren\u2019t standardized and\nare platform-dependent. (Contributed by Christian Heimes in bpo-11016.)\nThe module supports new ST_MODE\nflags, S_IFDOOR\n,\nS_IFPORT\n, and S_IFWHT\n. (Contributed by\nChristian Heimes in bpo-11016.)\nstruct\u00b6\nNew function iter_unpack\nand a new\nstruct.Struct.iter_unpack()\nmethod on compiled formats provide streamed\nunpacking of a buffer containing repeated instances of a given format of data.\n(Contributed by Antoine Pitrou in bpo-17804.)\nsubprocess\u00b6\ncheck_output()\nnow accepts an input argument that can\nbe used to provide the contents of stdin\nfor the command that is run.\n(Contributed by Zack Weinberg in bpo-16624.)\ngetoutput()\nand getstatusoutput()\nnow\nwork on Windows. This change was actually inadvertently made in 3.3.4.\n(Contributed by Tim Golden in bpo-10197.)\nsunau\u00b6\nThe getparams()\nmethod now returns a namedtuple rather than a\nplain tuple. (Contributed by Claudiu Popa in bpo-18901.)\nsunau.open()\nnow supports the context management protocol: when used in a\nwith\nblock, the close\nmethod of the returned object will be\ncalled automatically at the end of the block. (Contributed by Serhiy Storchaka\nin bpo-18878.)\nAU_write.setsampwidth()\nnow supports 24 bit samples, thus adding\nsupport for writing 24 bit samples using the module. (Contributed by\nSerhiy Storchaka in bpo-19261.)\nThe writeframesraw()\nand\nwriteframes()\nmethods now accept any bytes-like\nobject. (Contributed by Serhiy Storchaka in bpo-8311.)\nsys\u00b6\nNew function sys.getallocatedblocks()\nreturns the current number of\nblocks allocated by the interpreter. (In CPython with the default\n--with-pymalloc\nsetting, this is allocations made through the\nPyObject_Malloc()\nAPI.) This can be useful for tracking memory leaks,\nespecially if automated via a test suite. 
(Contributed by Antoine Pitrou\nin bpo-13390.)\nWhen the Python interpreter starts in interactive mode, it checks for an __interactivehook__\nattribute\non the sys\nmodule. If the attribute exists, its value is called with no\narguments just before interactive mode is started. The check is made after the\nPYTHONSTARTUP\nfile is read, so it can be set there. The site\nmodule sets it to a function that enables tab\ncompletion and history saving (in ~/.python-history\n) if the platform\nsupports readline\n. If you do not want this (new) behavior, you can\noverride it in PYTHONSTARTUP\n, sitecustomize\n, or\nusercustomize\nby deleting this attribute from sys\n(or setting it\nto some other callable). (Contributed by \u00c9ric Araujo and Antoine Pitrou in\nbpo-5845.)\ntarfile\u00b6\nThe tarfile\nmodule now supports a simple Command-Line Interface when\ncalled as a script directly or via -m\n. This can be used to create and\nextract tarfile archives. (Contributed by Berker Peksag in bpo-13477.)\ntextwrap\u00b6\nThe TextWrapper\nclass has two new attributes/constructor\narguments: max_lines\n, which limits the number of\nlines in the output, and placeholder\n, which is a\nstring that will appear at the end of the output if it has been truncated\nbecause of max_lines. Building on these capabilities, a new convenience\nfunction shorten()\ncollapses all of the whitespace in the input\nto single spaces and produces a single line of a given width that ends with\nthe placeholder (by default, [...]\n). (Contributed by Antoine Pitrou and\nSerhiy Storchaka in bpo-18585 and bpo-18725.)\nthreading\u00b6\nThe Thread\nobject representing the main thread can be\nobtained from the new main_thread()\nfunction. In normal\nconditions this will be the thread from which the Python interpreter was\nstarted. 
(Contributed by Andrew Svetlov in bpo-18882.)\ntraceback\u00b6\nA new traceback.clear_frames()\nfunction takes a traceback object\nand clears the local variables in all of the frames it references,\nreducing the amount of memory consumed. (Contributed by Andrew Kuchling in\nbpo-1565525.)\ntypes\u00b6\nA new DynamicClassAttribute()\ndescriptor provides a way to define\nan attribute that acts normally when looked up through an instance object, but\nwhich is routed to the class __getattr__\nwhen looked up through the\nclass. This allows one to have properties active on a class, and have virtual\nattributes on the class with the same name (see enum\nfor an example).\n(Contributed by Ethan Furman in bpo-19030.)\nurllib\u00b6\nurllib.request\nnow supports data:\nURLs via the\nDataHandler\nclass. (Contributed by Mathias Panzenb\u00f6ck\nin bpo-16423.)\nThe http method that will be used by a Request\nclass\ncan now be specified by setting a method\nclass attribute on the subclass. (Contributed by Jason R Coombs in\nbpo-18978.)\nRequest\nobjects are now reusable: if the\nfull_url\nor data\nattributes are modified, all relevant internal properties are updated. This\nmeans, for example, that it is now possible to use the same\nRequest\nobject in more than one\nOpenerDirector.open()\ncall with different data arguments, or to\nmodify a Request\n\u2018s url\nrather than recomputing it\nfrom scratch. There is also a new\nremove_header()\nmethod that can be used to remove\nheaders from a Request\n. (Contributed by Alexey\nKachayev in bpo-16464, Daniel Wozniak in bpo-17485, and Damien Brecht\nand Senthil Kumaran in bpo-17272.)\nHTTPError\nobjects now have a\nheaders\nattribute that provides access to the\nHTTP response headers associated with the error. (Contributed by\nBerker Peksag in bpo-15701.)\nunittest\u00b6\nThe TestCase\nclass has a new method,\nsubTest()\n, that produces a context manager whose\nwith\nblock becomes a \u201csub-test\u201d. 
This context manager allows a test\nmethod to dynamically generate subtests by, say, calling the subTest\ncontext manager inside a loop. A single test method can thereby produce an\nindefinite number of separately identified and separately counted tests, all of\nwhich will run even if one or more of them fail. For example:\nclass NumbersTest(unittest.TestCase):\ndef test_even(self):\nfor i in range(6):\nwith self.subTest(i=i):\nself.assertEqual(i % 2, 0)\nwill result in six subtests, each identified in the unittest verbose output\nwith a label consisting of the variable name i\nand a particular value for\nthat variable (i=0\n, i=1\n, etc). See Distinguishing test iterations using subtests for the full\nversion of this example. (Contributed by Antoine Pitrou in bpo-16997.)\nunittest.main()\nnow accepts an iterable of test names for\ndefaultTest, where previously it only accepted a single test name as a\nstring. (Contributed by Jyrki Pulliainen in bpo-15132.)\nIf SkipTest\nis raised during test discovery (that is, at the\nmodule level in the test file), it is now reported as a skip instead of an\nerror. (Contributed by Zach Ware in bpo-16935.)\ndiscover()\nnow sorts the discovered files to provide\nconsistent test ordering. (Contributed by Martin Melin and Jeff Ramnani in\nbpo-16709.)\nTestSuite\nnow drops references to tests as soon as the test\nhas been run, if the test is successful. On Python interpreters that do\ngarbage collection, this allows the tests to be garbage collected if nothing\nelse is holding a reference to the test. It is possible to override this\nbehavior by creating a TestSuite\nsubclass that defines a\ncustom _removeTestAtIndex\nmethod. (Contributed by Tom Wardill, Matt\nMcClure, and Andrew Svetlov in bpo-11798.)\nA new test assertion context-manager, assertLogs()\n,\nwill ensure that a given block of code emits a log message using the\nlogging\nmodule. 
By default the message can come from any logger and\nhave a priority of INFO\nor higher, but both the logger name and an\nalternative minimum logging level may be specified. The object returned by the\ncontext manager can be queried for the LogRecord\ns and/or\nformatted messages that were logged. (Contributed by Antoine Pitrou in\nbpo-18937.)\nTest discovery now works with namespace packages (Contributed by Claudiu Popa in bpo-17457.)\nunittest.mock\nobjects now inspect their specification signatures when\nmatching calls, which means an argument can now be matched by either position\nor name, instead of only by position. (Contributed by Antoine Pitrou in\nbpo-17015.)\nmock_open()\nobjects now have readline\nand readlines\nmethods. (Contributed by Toshio Kuratomi in bpo-17467.)\nvenv\u00b6\nvenv\nnow includes activation scripts for the csh\nand fish\nshells. (Contributed by Andrew Svetlov in bpo-15417.)\nEnvBuilder\nand the create()\nconvenience function\ntake a new keyword argument with_pip, which defaults to False\n, that\ncontrols whether or not EnvBuilder\nensures that pip\nis\ninstalled in the virtual environment. (Contributed by Nick Coghlan in\nbpo-19552 as part of the PEP 453 implementation.)\nwave\u00b6\nThe getparams()\nmethod now returns a namedtuple rather\nthan a plain tuple. (Contributed by Claudiu Popa in bpo-17487.)\nwave.open()\nnow supports the context management protocol. (Contributed\nby Claudiu Popa in bpo-17616.)\nwave\ncan now write output to unseekable files. (Contributed by David Jones, Guilherme Polo, and Serhiy\nStorchaka in bpo-5202.)\nThe writeframesraw()\nand\nwriteframes()\nmethods now accept any bytes-like\nobject. (Contributed by Serhiy Storchaka in bpo-8311.)\nweakref\u00b6\nNew WeakMethod\nclass simulates weak references to bound\nmethods. 
(Contributed by Antoine Pitrou in bpo-14631.)
New finalize class makes it possible to register a callback to be invoked when an object is garbage collected, without needing to carefully manage the lifecycle of the weak reference itself. (Contributed by Richard Oudkerk in bpo-15528.)
The callback, if any, associated with a ref is now exposed via the __callback__ attribute. (Contributed by Mark Dickinson in bpo-17643.)
xml.etree¶
A new parser, XMLPullParser, allows non-blocking applications to parse XML documents. An example can be seen at Pull API for non-blocking parsing. (Contributed by Antoine Pitrou in bpo-17741.)
The xml.etree.ElementTree tostring() and tostringlist() functions, and the ElementTree write() method, now have a short_empty_elements keyword-only parameter providing control over whether elements with no content are written in abbreviated (<tag />) or expanded (<tag></tag>) form. (Contributed by Ariel Poliak and Serhiy Storchaka in bpo-14377.)
zipfile¶
The writepy() method of the PyZipFile class has a new filterfunc option that can be used to control which directories and files are added to the archive. For example, this could be used to exclude test files from the archive. (Contributed by Christian Tismer in bpo-19274.)
The allowZip64 parameter to ZipFile and PyZipFile is now True by default. (Contributed by William Mallard in bpo-17201.)
CPython Implementation Changes¶
PEP 445: Customization of CPython Memory Allocators¶
PEP 445 adds new C level interfaces to customize memory allocation in the CPython interpreter.
See also
- PEP 445 – Add new APIs to customize Python memory allocators
PEP written and implemented by Victor Stinner.
PEP 442: Safe Object Finalization¶
PEP 442 removes the current limitations and quirks of object finalization in CPython.
With it, objects with __del__()\nmethods, as well as\ngenerators with finally\nclauses, can be finalized when they are\npart of a reference cycle.\nAs part of this change, module globals are no longer forcibly set to\nNone\nduring interpreter shutdown in most cases, instead relying\non the normal operation of the cyclic garbage collector. This avoids a\nwhole class of interpreter-shutdown-time errors, usually involving\n__del__\nmethods, that have plagued Python since the cyclic GC\nwas first introduced.\nSee also\n- PEP 442 \u2013 Safe object finalization\nPEP written and implemented by Antoine Pitrou.\nPEP 456: Secure and Interchangeable Hash Algorithm\u00b6\nPEP 456 follows up on earlier security fix work done on Python\u2019s hash algorithm to address certain DOS attacks to which public facing APIs backed by dictionary lookups may be subject. (See bpo-14621 for the start of the current round of improvements.) The PEP unifies CPython\u2019s hash code to make it easier for a packager to substitute a different hash algorithm, and switches Python\u2019s default implementation to a SipHash implementation on platforms that have a 64 bit data type. Any performance differences in comparison with the older FNV algorithm are trivial.\nThe PEP adds additional fields to the sys.hash_info\nnamed tuple to\ndescribe the hash algorithm in use by the currently executing binary. 
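The new fields can be inspected at runtime; a small sketch (the algorithm name and bit width depend on the platform and build):

```python
import sys

info = sys.hash_info

# 'algorithm' names the string/bytes hash compiled into the binary; on
# platforms with a 64 bit data type this is typically a SipHash variant,
# otherwise the older FNV algorithm.
print(info.algorithm)
print(info.hash_bits, info.seed_bits)

assert isinstance(info.algorithm, str)
assert info.hash_bits in (32, 64)
```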
Otherwise,\nthe PEP does not alter any existing CPython APIs.\nPEP 436: Argument Clinic\u00b6\n\u201cArgument Clinic\u201d (PEP 436) is now part of the CPython build process and can be used to simplify the process of defining and maintaining accurate signatures for builtins and standard library extension modules implemented in C.\nSome standard library extension modules have been converted to use Argument\nClinic in Python 3.4, and pydoc\nand inspect\nhave been updated\naccordingly.\nIt is expected that signature metadata for programmatic introspection will be added to additional callables implemented in C as part of Python 3.4 maintenance releases.\nNote\nThe Argument Clinic PEP is not fully up to date with the state of the implementation. This has been deemed acceptable by the release manager and core development team in this case, as Argument Clinic will not be made available as a public API for third party use in Python 3.4.\nSee also\n- PEP 436 \u2013 The Argument Clinic DSL\nPEP written and implemented by Larry Hastings.\nOther Build and C API Changes\u00b6\nThe new\nPyType_GetSlot()\nfunction has been added to the stable ABI, allowing retrieval of function pointers from named type slots when using the limited API. (Contributed by Martin von L\u00f6wis in bpo-17162.)The new\nPy_SetStandardStreamEncoding()\npre-initialization API allows applications embedding the CPython interpreter to reliably force a particular encoding and error handler for the standard streams. (Contributed by Bastien Montagne and Nick Coghlan in bpo-16129.)Most Python C APIs that don\u2019t mutate string arguments are now correctly marked as accepting\nconst char *\nrather thanchar *\n. (Contributed by Serhiy Storchaka in bpo-1772673.)A new shell version of\npython-config\ncan be used even when a python interpreter is not available (for example, in cross compilation scenarios).PyUnicode_FromFormat()\nnow supports width and precision specifications for%s\n,%A\n,%U\n,%V\n,%S\n, and%R\n. 
(Contributed by Ysj Ray and Victor Stinner in bpo-7330.)New function\nPyStructSequence_InitType2()\nsupplements the existingPyStructSequence_InitType()\nfunction. The difference is that it returns0\non success and-1\non failure.The CPython source can now be compiled using the address sanity checking features of recent versions of GCC and clang: the false alarms in the small object allocator have been silenced. (Contributed by Dhiru Kholia in bpo-18596.)\nThe Windows build now uses Address Space Layout Randomization and Data Execution Prevention. (Contributed by Christian Heimes in bpo-16632.)\nNew function\nPyObject_LengthHint()\nis the C API equivalent ofoperator.length_hint()\n. (Contributed by Armin Ronacher in bpo-16148.)\nOther Improvements\u00b6\nThe python command has a new option,\n-I\n, which causes it to run in \u201cisolated mode\u201d, which means thatsys.path\ncontains neither the script\u2019s directory nor the user\u2019ssite-packages\ndirectory, and allPYTHON*\nenvironment variables are ignored (it implies both-s\nand-E\n). Other restrictions may also be applied in the future, with the goal being to isolate the execution of a script from the user\u2019s environment. This is appropriate, for example, when Python is used to run a system script. On most POSIX systems it can and should be used in the#!\nline of system scripts. (Contributed by Christian Heimes in bpo-16499.)Tab-completion is now enabled by default in the interactive interpreter on systems that support\nreadline\n. History is also enabled by default, and is written to (and read from) the file~/.python-history\n. (Contributed by Antoine Pitrou and \u00c9ric Araujo in bpo-5845.)Invoking the Python interpreter with\n--version\nnow outputs the version to standard output instead of standard error (bpo-18338). 
Similar changes were made toargparse\n(bpo-18920) and other modules that have script-like invocation capabilities (bpo-18922).The CPython Windows installer now adds\n.py\nto thePATHEXT\nvariable when extensions are registered, allowing users to run a python script at the windows command prompt by just typing its name without the.py\nextension. (Contributed by Paul Moore in bpo-18569.)A new\nmake\ntarget coverage-report will build python, run the test suite, and generate an HTML coverage report for the C codebase usinggcov\nand lcov.The\n-R\noption to the python regression test suite now also checks for memory allocation leaks, usingsys.getallocatedblocks()\n. (Contributed by Antoine Pitrou in bpo-13390.)python -m\nnow works with namespace packages.The\nstat\nmodule is now implemented in C, which means it gets the values for its constants from the C header files, instead of having the values hard-coded in the python module as was previously the case.Loading multiple python modules from a single OS module (\n.so\n,.dll\n) now works correctly (previously it silently returned the first python module in the file). (Contributed by V\u00e1clav \u0160milauer in bpo-16421.)A new opcode,\nLOAD_CLASSDEREF\n, has been added to fix a bug in the loading of free variables in class bodies that could be triggered by certain uses of __prepare__. (Contributed by Benjamin Peterson in bpo-17853.)A number of MemoryError-related crashes were identified and fixed by Victor Stinner using his PEP 445-based\npyfailmalloc\ntool (bpo-18408, bpo-18520).The\npyvenv\ncommand now accepts a--copies\noption to use copies rather than symlinks even on systems where symlinks are the default. (Contributed by Vinay Sajip in bpo-18807.)The\npyvenv\ncommand also accepts a--without-pip\noption to suppress the otherwise-automatic bootstrapping of pip into the virtual environment. 
(Contributed by Nick Coghlan in bpo-19552 as part of the PEP 453 implementation.)The encoding name is now optional in the value set for the\nPYTHONIOENCODING\nenvironment variable. This makes it possible to set just the error handler, without changing the default encoding. (Contributed by Serhiy Storchaka in bpo-18818.)The\nbz2\n,lzma\n, andgzip\nmoduleopen\nfunctions now supportx\n(exclusive creation) mode. (Contributed by Tim Heaney and Vajrasky Kok in bpo-19201, bpo-19222, and bpo-19223.)\nSignificant Optimizations\u00b6\nThe UTF-32 decoder is now 3x to 4x faster. (Contributed by Serhiy Storchaka in bpo-14625.)\nThe cost of hash collisions for sets is now reduced. Each hash table probe now checks a series of consecutive, adjacent key/hash pairs before continuing to make random probes through the hash table. This exploits cache locality to make collision resolution less expensive. The collision resolution scheme can be described as a hybrid of linear probing and open addressing. The number of additional linear probes defaults to nine. This can be changed at compile-time by defining LINEAR_PROBES to be any value. Set LINEAR_PROBES=0 to turn-off linear probing entirely. (Contributed by Raymond Hettinger in bpo-18771.)\nThe interpreter starts about 30% faster. A couple of measures lead to the speedup. The interpreter loads fewer modules on startup, e.g. the\nre\n,collections\nandlocale\nmodules and their dependencies are no longer imported by default. The marshal module has been improved to load compiled Python code faster. (Contributed by Antoine Pitrou, Christian Heimes and Victor Stinner in bpo-19219, bpo-19218, bpo-19209, bpo-19205 and bpo-9548.)bz2.BZ2File\nis now as fast or faster than the Python2 version for most cases.lzma.LZMAFile\nhas also been optimized. (Contributed by Serhiy Storchaka and Nadeem Vawda in bpo-16034.)random.getrandbits()\nis 20%-40% faster for small integers (the most common use case). 
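The speedup applies to calls like the following; a quick sketch (seed chosen arbitrarily for reproducibility):

```python
import random

rng = random.Random(12345)  # seeded so repeated runs match

# getrandbits(k) returns a non-negative integer with k random bits;
# small k (e.g. simulating dice rolls or bytes) is the fast common case.
n = rng.getrandbits(8)
assert 0 <= n < 2 ** 8

big = rng.getrandbits(128)  # large k works too, just without the speedup
assert 0 <= big < 2 ** 128
```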
(Contributed by Serhiy Storchaka in bpo-16674.)By taking advantage of the new storage format for strings, pickling of strings is now significantly faster. (Contributed by Victor Stinner and Antoine Pitrou in bpo-15596.)\nA performance issue in\nio.FileIO.readall()\nhas been solved. This particularly affects Windows, and significantly speeds up the case of piping significant amounts of data throughsubprocess\n. (Contributed by Richard Oudkerk in bpo-15758.)html.escape()\nis now 10x faster. (Contributed by Matt Bryant in bpo-18020.)On Windows, the native\nVirtualAlloc\nis now used instead of the CRTmalloc\ninobmalloc\n. Artificial benchmarks show about a 3% memory savings.os.urandom()\nnow uses a lazily opened persistent file descriptor so as to avoid using many file descriptors when run in parallel from multiple threads. (Contributed by Antoine Pitrou in bpo-18756.)\nDeprecated\u00b6\nThis section covers various APIs and other features that have been deprecated\nin Python 3.4, and will be removed in Python 3.5 or later. 
In most (but not\nall) cases, using the deprecated APIs will produce a DeprecationWarning\nwhen the interpreter is run with deprecation warnings enabled (for example, by\nusing -Wd\n).\nDeprecations in the Python API\u00b6\nAs mentioned in PEP 451: A ModuleSpec Type for the Import System, a number of\nimportlib\nmethods and functions are deprecated:importlib.find_loader()\nis replaced byimportlib.util.find_spec()\n;importlib.machinery.PathFinder.find_module()\nis replaced byimportlib.machinery.PathFinder.find_spec()\n;importlib.abc.MetaPathFinder.find_module()\nis replaced byimportlib.abc.MetaPathFinder.find_spec()\n;importlib.abc.PathEntryFinder.find_loader()\nandfind_module()\nare replaced byimportlib.abc.PathEntryFinder.find_spec()\n; all of thexxxLoader\nABCload_module\nmethods (importlib.abc.Loader.load_module()\n,importlib.abc.InspectLoader.load_module()\n,importlib.abc.FileLoader.load_module()\n,importlib.abc.SourceLoader.load_module()\n) should no longer be implemented, instead loaders should implement anexec_module\nmethod (importlib.abc.Loader.exec_module()\n,importlib.abc.InspectLoader.exec_module()\nimportlib.abc.SourceLoader.exec_module()\n) and let the import system take care of the rest; andimportlib.abc.Loader.module_repr()\n,importlib.util.module_for_loader()\n,importlib.util.set_loader()\n, andimportlib.util.set_package()\nare no longer needed because their functions are now handled automatically by the import system.The\nimp\nmodule is pending deprecation. To keep compatibility with Python 2/3 code bases, the module\u2019s removal is currently not scheduled.The\nformatter\nmodule is pending deprecation and is slated for removal in Python 3.6.MD5\nas the default digestmod for thehmac.new()\nfunction is deprecated. Python 3.6 will require an explicit digest name or constructor as digestmod argument.The internal\nNetrc\nclass in theftplib\nmodule has been documented as deprecated in its docstring for quite some time. 
It now emits a DeprecationWarning and will be removed completely in Python 3.5.
The undocumented endtime argument to subprocess.Popen.wait() should not have been exposed and is hopefully not in use; it is deprecated and will most likely be removed in Python 3.5.
The strict argument of HTMLParser is deprecated.
The plistlib readPlist(), writePlist(), readPlistFromBytes(), and writePlistToBytes() functions are deprecated in favor of the corresponding new functions load(), dump(), loads(), and dumps(). Data() is deprecated in favor of just using the bytes constructor.
The sysconfig key SO is deprecated; it has been replaced by EXT_SUFFIX.
The U mode accepted by various open functions is deprecated. In Python 3 it does not do anything useful, and should be replaced by appropriate uses of io.TextIOWrapper (if needed) and its newline argument.
The parser argument of xml.etree.ElementTree.iterparse() has been deprecated, as has the html argument of XMLParser(). To prepare for the removal of the latter, all arguments to XMLParser should be passed by keyword.
Deprecated Features¶
Running IDLE — Python editor and shell with the -n flag (no subprocess) is deprecated.
However, the feature will not be removed until bpo-18823 is resolved.The site module adding a \u201csite-python\u201d directory to sys.path, if it exists, is deprecated (bpo-19375).\nRemoved\u00b6\nOperating Systems No Longer Supported\u00b6\nSupport for the following operating systems has been removed from the source and build tools:\nAPI and Feature Removals\u00b6\nThe following obsolete and previously deprecated APIs and features have been removed:\nThe unmaintained\nMisc/TextMate\nandMisc/vim\ndirectories have been removed (see the devguide for suggestions on what to use instead).The\nSO\nmakefile macro is removed (it was replaced by theSHLIB_SUFFIX\nandEXT_SUFFIX\nmacros) (bpo-16754).The\nPyThreadState.tick_counter\nfield has been removed; its value has been meaningless since Python 3.2, when the \u201cnew GIL\u201d was introduced (bpo-19199).PyLoader\nandPyPycLoader\nhave been removed fromimportlib\n. (Contributed by Taras Lyapun in bpo-15641.)The strict argument to\nHTTPConnection\nandHTTPSConnection\nhas been removed. HTTP 0.9-style \u201cSimple Responses\u201d are no longer supported.The deprecated\nurllib.request.Request\ngetter and setter methodsadd_data\n,has_data\n,get_data\n,get_type\n,get_host\n,get_selector\n,set_proxy\n,get_origin_req_host\n, andis_unverifiable\nhave been removed (use direct attribute access instead).Support for loading the deprecated\nTYPE_INT64\nhas been removed frommarshal\n. (Contributed by Dan Riti in bpo-15480.)inspect.Signature\n: positional-only parameters are now required to have a valid name.object.__format__()\nno longer accepts non-empty format strings, it now raises aTypeError\ninstead. Using a non-empty string has been deprecated since Python 3.2. 
This change has been made to prevent a situation where previously working (but incorrect) code would start failing if an object gained a __format__ method, which means that your code may now raise aTypeError\nif you are using an's'\nformat code with objects that do not have a __format__ method that handles it. See bpo-7994 for background.difflib.SequenceMatcher.isbjunk()\nanddifflib.SequenceMatcher.isbpopular()\nwere deprecated in 3.2, and have now been removed: usex in sm.bjunk\nandx in sm.bpopular\n, where sm is aSequenceMatcher\nobject (bpo-13248).\nCode Cleanups\u00b6\nThe unused and undocumented internal\nScanner\nclass has been removed from thepydoc\nmodule.The private and effectively unused\n_gestalt\nmodule has been removed, along with the privateplatform\nfunctions_mac_ver_lookup\n,_mac_ver_gstalt\n, and_bcd2str\n, which would only have ever been called on badly broken OSX systems (see bpo-18393).The hardcoded copies of certain\nstat\nconstants that were included in thetarfile\nmodule namespace have been removed.\nPorting to Python 3.4\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in \u2018python\u2019 Command Behavior\u00b6\nIn a posix shell, setting the\nPATH\nenvironment variable to an empty value is equivalent to not setting it at all. However, settingPYTHONPATH\nto an empty value was not equivalent to not setting it at all: settingPYTHONPATH\nto an empty value was equivalent to setting it to.\n, which leads to confusion when reasoning by analogy to howPATH\nworks. The behavior now conforms to the posix convention forPATH\n.The [X refs, Y blocks] output of a debug (\n--with-pydebug\n) build of the CPython interpreter is now off by default. It can be re-enabled using the-X showrefcount\noption. 
(Contributed by Ezio Melotti in bpo-17323.)The python command and most stdlib scripts (as well as\nargparse\n) now output--version\ninformation tostdout\ninstead ofstderr\n(for issue list see Other Improvements above).\nChanges in the Python API\u00b6\nThe ABCs defined in\nimportlib.abc\nnow either raise the appropriate exception or return a default value instead of raisingNotImplementedError\nblindly. This will only affect code callingsuper()\nand falling through all the way to the ABCs. For compatibility, catch bothNotImplementedError\nor the appropriate exception as needed.The module type now initializes the\n__package__\nand__loader__\nattributes toNone\nby default. To determine if these attributes were set in a backwards-compatible fashion, use e.g.getattr(module, '__loader__', None) is not None\n. (bpo-17115.)importlib.util.module_for_loader()\nnow sets__loader__\nand__package__\nunconditionally to properly support reloading. If this is not desired then you will need to set these attributes manually. You can useimportlib.util.module_to_load()\nfor module management.Import now resets relevant attributes (e.g.\n__name__\n,__loader__\n,__package__\n,__file__\n,__cached__\n) unconditionally when reloading. Note that this restores a pre-3.3 behavior in that it means a module is re-found when re-loaded (bpo-19413).Frozen packages no longer set\n__path__\nto a list containing the package name, they now set it to an empty list. The previous behavior could cause the import system to do the wrong thing on submodule imports if there was also a directory with the same name as the frozen package. The correct way to determine if a module is a package or not is to usehasattr(module, '__path__')\n(bpo-18065).Frozen modules no longer define a\n__file__\nattribute. It\u2019s semantically incorrect for frozen modules to set the attribute as they are not loaded from any explicit location. 
If you must know that a module comes from frozen code then you can see if the module\u2019s__spec__.location\nis set to'frozen'\n, check if the loader is a subclass ofimportlib.machinery.FrozenImporter\n, or if Python 2 compatibility is necessary you can useimp.is_frozen()\n.py_compile.compile()\nnow raisesFileExistsError\nif the file path it would write to is a symlink or a non-regular file. This is to act as a warning that import will overwrite those files with a regular file regardless of what type of file path they were originally.importlib.abc.SourceLoader.get_source()\nno longer raisesImportError\nwhen the source code being loaded triggers aSyntaxError\norUnicodeDecodeError\n. AsImportError\nis meant to be raised only when source code cannot be found but it should, it was felt to be over-reaching/overloading of that meaning when the source code is found but improperly structured. If you were catching ImportError before and wish to continue to ignore syntax or decoding issues, catch all three exceptions now.functools.update_wrapper()\nandfunctools.wraps()\nnow correctly set the__wrapped__\nattribute to the function being wrapped, even if that function also had its__wrapped__\nattribute set. This means__wrapped__\nattributes now correctly link a stack of decorated functions rather than every__wrapped__\nattribute in the chain referring to the innermost function. Introspection libraries that assumed the previous behaviour was intentional can useinspect.unwrap()\nto access the first function in the chain that has no__wrapped__\nattribute.inspect.getfullargspec()\nhas been reimplemented on top ofinspect.signature()\nand hence handles a much wider variety of callable objects than it did in the past. It is expected that additional builtin and extension module callables will gain signature metadata over the course of the Python 3.4 series. 
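For example, a sketch showing getfullargspec() succeeding on a C-implemented builtin that carries signature metadata (which builtins carry metadata varies by version and build; the function `f` is illustrative):

```python
import inspect

# len() is implemented in C; because getfullargspec() is now built on
# inspect.signature(), it can report its argument names instead of
# raising TypeError as it did for C callables in earlier releases.
spec = inspect.getfullargspec(len)
print(spec.args)  # the single positional argument of len()

# Plain Python callables work as before.
def f(a, b=1, *args, key=None):
    pass

assert inspect.getfullargspec(f).args == ["a", "b"]
assert inspect.getfullargspec(f).kwonlyargs == ["key"]
```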
Code that assumes that inspect.getfullargspec() will fail on non-Python callables may need to be adjusted accordingly.
importlib.machinery.PathFinder now passes on the current working directory to objects in sys.path_hooks for the empty string. This results in sys.path_importer_cache never containing '', thus iterating through sys.path_importer_cache based on sys.path will not find all keys. A module's __file__ when imported in the current working directory will also now have an absolute path, including when using -m with the interpreter (except for __main__.__file__ when a script has been executed directly using a relative path). (Contributed by Brett Cannon in bpo-18416.)
The removal of the strict argument to HTTPConnection and HTTPSConnection changes the meaning of the remaining arguments if you are specifying them positionally rather than by keyword. If you've been paying attention to deprecation warnings your code should already be specifying any additional arguments via keywords.
Strings between from __future__ import ... statements now always raise a SyntaxError. Previously if there was no leading docstring, an interstitial string would sometimes be ignored. This brings CPython into compliance with the language spec; Jython and PyPy already were. (bpo-17434.)
ssl.SSLSocket.getpeercert() and ssl.SSLSocket.do_handshake() now raise an OSError with ENOTCONN when the SSLSocket is not connected, instead of the previous behavior of raising an AttributeError. In addition, getpeercert() will raise a ValueError if the handshake has not yet been done.
base64.b32decode() now raises a binascii.Error when the input string contains non-b32-alphabet characters, instead of a TypeError. This particular TypeError was missed when the other TypeErrors were converted. (Contributed by Serhiy Storchaka in bpo-18011.)
Note: this change was also inadvertently applied in Python 3.3.3.The\nfile\nattribute is now automatically closed when the creatingcgi.FieldStorage\ninstance is garbage collected. If you were pulling the file object out separately from thecgi.FieldStorage\ninstance and not keeping the instance alive, then you should either store the entirecgi.FieldStorage\ninstance or read the contents of the file before thecgi.FieldStorage\ninstance is garbage collected.Calling\nread\norwrite\non a closed SSL socket now raises an informativeValueError\nrather than the previous more mysteriousAttributeError\n(bpo-9177).slice.indices()\nno longer produces anOverflowError\nfor huge values. As a consequence of this fix,slice.indices()\nnow raises aValueError\nif given a negative length; previously it returned nonsense values (bpo-14794).The\ncomplex\nconstructor, unlike thecmath\nfunctions, was incorrectly acceptingfloat\nvalues if an object\u2019s__complex__\nspecial method returned one. This now raises aTypeError\n. (bpo-16290.)The\nint\nconstructor in 3.2 and 3.3 erroneously acceptsfloat\nvalues for the base parameter. It is unlikely anyone was doing this, but if so, it will now raise aTypeError\n(bpo-16772).Defaults for keyword-only arguments are now evaluated after defaults for regular keyword arguments, instead of before. Hopefully no one wrote any code that depends on the previous buggy behavior (bpo-16967).\nStale thread states are now cleared after\nfork()\n. This may cause some system resources to be released that previously were incorrectly kept perpetually alive (for example, database connections kept in thread-local storage). (bpo-17094.)Parameter names in\n__annotations__\ndicts are now mangled properly, similarly to__kwdefaults__\n. (Contributed by Yury Selivanov in bpo-20625.)hashlib.hash.name\nnow always returns the identifier in lower case. 
Previously some builtin hashes had uppercase names, but now that it is a formal public interface the naming has been made consistent (bpo-18532).Because\nunittest.TestSuite\nnow drops references to tests after they are run, test harnesses that reuse aTestSuite\nto re-run a set of tests may fail. Test suites should not be re-used in this fashion since it means state is retained between test runs, breaking the test isolation thatunittest\nis designed to provide. However, if the lack of isolation is considered acceptable, the old behavior can be restored by creating aTestSuite\nsubclass that defines a_removeTestAtIndex\nmethod that does nothing (seeTestSuite.__iter__()\n) (bpo-11798).unittest\nnow usesargparse\nfor command line parsing. There are certain invalid command forms that used to work that are no longer allowed; in theory this should not cause backward compatibility issues since the disallowed command forms didn\u2019t make any sense and are unlikely to be in use.The\nre.split()\n,re.findall()\n, andre.sub()\nfunctions, and thegroup()\nandgroups()\nmethods ofmatch\nobjects now always return a bytes object when the string to be matched is a bytes-like object. Previously the return type matched the input type, so if your code was depending on the return value being, say, abytearray\n, you will need to change your code.audioop\nfunctions now raise an error immediately if passed string input, instead of failing randomly later on (bpo-16685).The new convert_charrefs argument to\nHTMLParser\ncurrently defaults toFalse\nfor backward compatibility, but will eventually be changed to default toTrue\n. 
It is recommended that you add this keyword, with the appropriate value, to any HTMLParser calls in your code (bpo-13633).\nSince the digestmod argument to the hmac.new() function will in the future have no default, all calls to hmac.new() should be changed to explicitly specify a digestmod (bpo-17276).\nCalling sysconfig.get_config_var() with the SO key, or looking SO up in the results of a call to sysconfig.get_config_vars(), is deprecated. This key should be replaced by EXT_SUFFIX or SHLIB_SUFFIX, depending on the context (bpo-19555).\nAny calls to open functions that specify U should be modified. U is ineffective in Python 3 and will eventually raise an error if used. Depending on the function, the equivalent of its old Python 2 behavior can be achieved using either a newline argument, or if necessary by wrapping the stream in TextIOWrapper to use its newline argument (bpo-15204).\nIf you use pyvenv in a script and desire that pip not be installed, you must add --without-pip to your command invocation.\nThe default behavior of json.dump() and json.dumps() when an indent is specified has changed: it no longer produces trailing spaces after the item-separating commas at the ends of lines. This will matter only if you have tests that are doing white-space-sensitive comparisons of such output (bpo-16333).\ndoctest now looks for doctests in extension module __doc__ strings, so if your doctest test discovery includes extension modules that have things that look like doctests in them, you may see test failures you\u2019ve never seen before when running your tests (bpo-3158).\nThe collections.abc module has been slightly refactored as part of the Python startup improvements. As a consequence of this, it is no longer the case that importing collections automatically imports collections.abc.
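The hmac.new() recommendation above can be sketched as follows (the key and message values are hypothetical; only the explicit digestmod argument matters):

```python
import hashlib
import hmac

key = b"secret-key"          # hypothetical key for illustration
msg = b"important message"   # hypothetical message

# Passing digestmod explicitly keeps the call valid once the
# implicit default is removed.
mac = hmac.new(key, msg, digestmod=hashlib.sha256)
digest = mac.hexdigest()

assert len(digest) == 64     # SHA-256 yields a 32-byte (64 hex char) digest
```

Any callable, module, or algorithm name accepted by hashlib may be passed as digestmod; the point is simply that it is no longer implied.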
If your program depended on the (undocumented) implicit import, you will need to add an explicit import collections.abc (bpo-20784).\nChanges in the C API\u00b6\nPyEval_EvalFrameEx(), PyObject_Repr(), and PyObject_Str(), along with some other internal C APIs, now include a debugging assertion that ensures they are not used in situations where they may silently discard a currently active exception. In cases where discarding the active exception is expected and desired (for example, because it has already been saved locally with PyErr_Fetch() or is being deliberately replaced with a different exception), an explicit PyErr_Clear() call will be needed to avoid triggering the assertion when invoking these operations (directly or indirectly) and running against a version of Python that is compiled with assertions enabled.\nPyErr_SetImportError() now sets TypeError when its msg argument is not set. Previously only NULL was returned with no exception set.\nThe result of the PyOS_ReadlineFunctionPointer callback must now be a string allocated by PyMem_RawMalloc() or PyMem_RawRealloc(), or NULL if an error occurred, instead of a string allocated by PyMem_Malloc() or PyMem_Realloc() (bpo-16742).\nPyThread_set_key_value() now always sets the value.
In Python 3.3, the function did nothing if the key already existed (that is, if the current value was a non-NULL pointer).\nThe f_tstate (thread state) field of the PyFrameObject structure has been removed to fix a bug: see bpo-14432 for the rationale.\nChanged in 3.4.3\u00b6\nPEP 476: Enabling certificate verification by default for stdlib http clients\u00b6\nhttp.client and modules which use it, such as urllib.request and xmlrpc.client, will now by default verify that the server presents a certificate which is signed by a CA in the platform trust store and whose hostname matches the hostname being requested, significantly improving security for many applications.\nApplications which require the previous behavior can pass an alternate context:\nimport urllib.request\nimport ssl\n# This disables all verification\ncontext = ssl._create_unverified_context()\n# This allows using a specific certificate for the host, which doesn't need\n# to be in the trust store\ncontext = ssl.create_default_context(cafile=\"/path/to/file.crt\")\nurllib.request.urlopen(\"https://invalid-cert\", context=context)", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 22351}
+{"url": "https://docs.python.org/3/library/asyncio-task.html", "title": "Coroutines and Tasks", "content": "Coroutines and Tasks\u00b6\nThis section outlines high-level asyncio APIs to work with coroutines and Tasks.\nCoroutines\u00b6\nSource code: Lib/asyncio/coroutines.py\nCoroutines declared with the async/await syntax are the preferred way of writing asyncio applications. For example, the following snippet of code prints \u201chello\u201d, waits 1 second, and then prints \u201cworld\u201d:\n>>> import asyncio\n>>> async def main():\n...     print('hello')\n...     await asyncio.sleep(1)\n...     print('world')\n>>> asyncio.run(main())\nhello\nworld\nNote that simply calling a coroutine will not schedule it to be executed:\n>>> main()\n<coroutine object main at 0x...>\nTo actually run a coroutine, asyncio provides the following mechanisms:\nThe asyncio.run() function to run the top-level entry point \u201cmain()\u201d function (see the above example).\nAwaiting on a coroutine.
The following snippet of code will print \u201chello\u201d after waiting for 1 second, and then print \u201cworld\u201d after waiting for another 2 seconds:\nimport asyncio\nimport time\n\nasync def say_after(delay, what):\n    await asyncio.sleep(delay)\n    print(what)\n\nasync def main():\n    print(f\"started at {time.strftime('%X')}\")\n    await say_after(1, 'hello')\n    await say_after(2, 'world')\n    print(f\"finished at {time.strftime('%X')}\")\n\nasyncio.run(main())\nExpected output:\nstarted at 17:13:52\nhello\nworld\nfinished at 17:13:55\nThe asyncio.create_task() function to run coroutines concurrently as asyncio Tasks.\nLet\u2019s modify the above example and run two say_after coroutines concurrently:\nasync def main():\n    task1 = asyncio.create_task(\n        say_after(1, 'hello'))\n    task2 = asyncio.create_task(\n        say_after(2, 'world'))\n\n    print(f\"started at {time.strftime('%X')}\")\n\n    # Wait until both tasks are completed (should take\n    # around 2 seconds.)\n    await task1\n    await task2\n\n    print(f\"finished at {time.strftime('%X')}\")\nNote that the expected output now shows that the snippet runs 1 second faster than before:\nstarted at 17:14:32\nhello\nworld\nfinished at 17:14:34\nThe asyncio.TaskGroup class provides a more modern alternative to create_task(). Using this API, the last example becomes:\nasync def main():\n    async with asyncio.TaskGroup() as tg:\n        task1 = tg.create_task(\n            say_after(1, 'hello'))\n        task2 = tg.create_task(\n            say_after(2, 'world'))\n\n        print(f\"started at {time.strftime('%X')}\")\n\n    # The await is implicit when the context manager exits.\n\n    print(f\"finished at {time.strftime('%X')}\")\nThe timing and output should be the same as for the previous version.\nAdded in version 3.11: asyncio.TaskGroup.\nAwaitables\u00b6\nWe say that an object is an awaitable object if it can be used in an await expression.
Many asyncio APIs are designed to accept awaitables.\nThere are three main types of awaitable objects: coroutines, Tasks, and Futures.\nCoroutines\nPython coroutines are awaitables and therefore can be awaited from other coroutines:\nimport asyncio\n\nasync def nested():\n    return 42\n\nasync def main():\n    # Nothing happens if we just call \"nested()\".\n    # A coroutine object is created but not awaited,\n    # so it *won't run at all*.\n    nested()  # will raise a \"RuntimeWarning\".\n\n    # Let's do it differently now and await it:\n    print(await nested())  # will print \"42\".\n\nasyncio.run(main())\nImportant\nIn this documentation the term \u201ccoroutine\u201d can be used for two closely related concepts:\na coroutine function: an async def function;\na coroutine object: an object returned by calling a coroutine function.\nTasks\nTasks are used to schedule coroutines concurrently.\nWhen a coroutine is wrapped into a Task with functions like asyncio.create_task(), the coroutine is automatically scheduled to run soon:\nimport asyncio\n\nasync def nested():\n    return 42\n\nasync def main():\n    # Schedule nested() to run soon concurrently\n    # with \"main()\".\n    task = asyncio.create_task(nested())\n\n    # \"task\" can now be used to cancel \"nested()\", or\n    # can simply be awaited to wait until it is complete:\n    await task\n\nasyncio.run(main())\nFutures\nA Future is a special low-level awaitable object that represents an eventual result of an asynchronous operation.\nWhen a Future object is awaited it means that the coroutine will wait until the Future is resolved in some other place.\nFuture objects in asyncio are needed to allow callback-based code to be used with async/await.\nNormally there is no need to create Future objects at the application level.\nFuture objects, sometimes exposed by libraries and some asyncio APIs, can be awaited:\nasync def main():\n    await function_that_returns_a_future_object()\n\n    # this is also valid:\n    await asyncio.gather(\n        function_that_returns_a_future_object(),\n        some_python_coroutine()\n    )\nA good example of a low-level function that returns a Future object is loop.run_in_executor().\nCreating Tasks\u00b6\nSource code: Lib/asyncio/tasks.py\n- asyncio.create_task(coro, *, name=None, context=None, eager_start=None, **kwargs)\u00b6\nWrap the coro coroutine into a Task and schedule its execution. Return the Task object.\nThe full function signature is largely the same as that of the Task constructor (or factory) - all of the keyword arguments to this function are passed through to that interface.\nAn optional keyword-only context argument allows specifying a custom contextvars.Context for the coro to run in. A copy of the current context is created when no context is provided.\nAn optional keyword-only eager_start argument allows specifying if the task should execute eagerly during the call to create_task, or be scheduled later. If eager_start is not passed, the mode set by loop.set_task_factory() will be used.\nThe task is executed in the loop returned by get_running_loop(); RuntimeError is raised if there is no running loop in the current thread.\nNote\nasyncio.TaskGroup.create_task() is a newer alternative leveraging structural concurrency; it allows waiting for a group of related tasks with strong safety guarantees.\nImportant\nSave a reference to the result of this function, to avoid a task disappearing mid-execution. The event loop only keeps weak references to tasks. A task that isn\u2019t referenced elsewhere may get garbage collected at any time, even before it\u2019s done. For reliable \u201cfire-and-forget\u201d background tasks, gather them in a collection:\nbackground_tasks = set()\n\nfor i in range(10):\n    task = asyncio.create_task(some_coro(param=i))\n\n    # Add task to the set. This creates a strong reference.\n    background_tasks.add(task)\n\n    # To prevent keeping references to finished tasks forever,\n    # make each task remove its own reference from the set after\n    # completion:\n    task.add_done_callback(background_tasks.discard)\nAdded in version 3.7.\nChanged in version 3.8: Added the name parameter.\nChanged in version 3.11: Added the context parameter.\nChanged in version 3.14: Added the eager_start parameter by passing on all kwargs.\nTask Cancellation\u00b6\nTasks can easily and safely be cancelled. When a task is cancelled, asyncio.CancelledError will be raised in the task at the next opportunity.\nIt is recommended that coroutines use try/finally blocks to robustly perform clean-up logic. In case asyncio.CancelledError is explicitly caught, it should generally be propagated when clean-up is complete. asyncio.CancelledError directly subclasses BaseException, so most code will not need to be aware of it.\nThe asyncio components that enable structured concurrency, like asyncio.TaskGroup and asyncio.timeout(), are implemented using cancellation internally and might misbehave if a coroutine swallows asyncio.CancelledError. Similarly, user code should not generally call uncancel().\nHowever, in cases when suppressing asyncio.CancelledError is truly desired, it is necessary to also call uncancel() to completely remove the cancellation state.\nTask Groups\u00b6\nTask groups combine a task creation API with a convenient and reliable way to wait for all tasks in the group to finish.\n- class asyncio.TaskGroup\u00b6\nAn asynchronous context manager holding a group of tasks. Tasks can be added to the group using create_task(). All tasks are awaited when the context manager exits.\nAdded in version 3.11.\n- create_task(coro, *, name=None, context=None, eager_start=None, **kwargs)\u00b6\nCreate a task in this task group. The signature matches that of asyncio.create_task(). If the task group is inactive (e.g.
not yet entered, already finished, or in the process of shutting down), the given coro will be closed.\nChanged in version 3.13: Close the given coroutine if the task group is not active.\nChanged in version 3.14: Passes on all kwargs to loop.create_task().\nExample:\nasync def main():\n    async with asyncio.TaskGroup() as tg:\n        task1 = tg.create_task(some_coro(...))\n        task2 = tg.create_task(another_coro(...))\n    print(f\"Both tasks have completed now: {task1.result()}, {task2.result()}\")\nThe async with statement will wait for all tasks in the group to finish. While waiting, new tasks may still be added to the group (for example, by passing tg into one of the coroutines and calling tg.create_task() in that coroutine). Once the last task has finished and the async with block is exited, no new tasks may be added to the group.\nThe first time any of the tasks belonging to the group fails with an exception other than asyncio.CancelledError, the remaining tasks in the group are cancelled. No further tasks can then be added to the group. At this point, if the body of the async with statement is still active (i.e., __aexit__() hasn\u2019t been called yet), the task directly containing the async with statement is also cancelled. The resulting asyncio.CancelledError will interrupt an await, but it will not bubble out of the containing async with statement.\nOnce all tasks have finished, if any tasks have failed with an exception other than asyncio.CancelledError, those exceptions are combined in an ExceptionGroup or BaseExceptionGroup (as appropriate; see their documentation) which is then raised.\nTwo base exceptions are treated specially: if any task fails with KeyboardInterrupt or SystemExit, the task group still cancels the remaining tasks and waits for them, but then the initial KeyboardInterrupt or SystemExit is re-raised instead of ExceptionGroup or BaseExceptionGroup.\nIf the body of the async with statement exits with an
exception\n(so __aexit__()\nis called with an exception set),\nthis is treated the same as if one of the tasks failed:\nthe remaining tasks are cancelled and then waited for,\nand non-cancellation exceptions are grouped into an\nexception group and raised.\nThe exception passed into __aexit__()\n,\nunless it is asyncio.CancelledError\n,\nis also included in the exception group.\nThe same special case is made for\nKeyboardInterrupt\nand SystemExit\nas in the previous paragraph.\nTask groups are careful not to mix up the internal cancellation used to\n\u201cwake up\u201d their __aexit__()\nwith cancellation requests\nfor the task in which they are running made by other parties.\nIn particular, when one task group is syntactically nested in another,\nand both experience an exception in one of their child tasks simultaneously,\nthe inner task group will process its exceptions, and then the outer task group\nwill receive another cancellation and process its own exceptions.\nIn the case where a task group is cancelled externally and also must\nraise an ExceptionGroup\n, it will call the parent task\u2019s\ncancel()\nmethod. 
This ensures that an asyncio.CancelledError will be raised at the next await, so the cancellation is not lost.\nTask groups preserve the cancellation count reported by asyncio.Task.cancelling().\nChanged in version 3.13: Improved handling of simultaneous internal and external cancellations and correct preservation of cancellation counts.\nTerminating a Task Group\u00b6\nWhile terminating a task group is not natively supported by the standard library, termination can be achieved by adding an exception-raising task to the task group and ignoring the raised exception:\nimport asyncio\nfrom asyncio import TaskGroup\n\nclass TerminateTaskGroup(Exception):\n    \"\"\"Exception raised to terminate a task group.\"\"\"\n\nasync def force_terminate_task_group():\n    \"\"\"Used to force termination of a task group.\"\"\"\n    raise TerminateTaskGroup()\n\nasync def job(task_id, sleep_time):\n    print(f'Task {task_id}: start')\n    await asyncio.sleep(sleep_time)\n    print(f'Task {task_id}: done')\n\nasync def main():\n    try:\n        async with TaskGroup() as group:\n            # spawn some tasks\n            group.create_task(job(1, 0.5))\n            group.create_task(job(2, 1.5))\n            # sleep for 1 second\n            await asyncio.sleep(1)\n            # add an exception-raising task to force the group to terminate\n            group.create_task(force_terminate_task_group())\n    except* TerminateTaskGroup:\n        pass\n\nasyncio.run(main())\nExpected output:\nTask 1: start\nTask 2: start\nTask 1: done\nSleeping\u00b6\n- async asyncio.sleep(delay, result=None)\u00b6\nBlock for delay seconds.\nIf result is provided, it is returned to the caller when the coroutine completes.\nsleep() always suspends the current task, allowing other tasks to run.\nSetting the delay to 0 provides an optimized path to allow other tasks to run.
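The sleep(0) yield point can be sketched as follows; the crunch coroutine is a hypothetical CPU-heavy loop, not from the documentation:

```python
import asyncio

async def crunch(n):
    # A long-running loop that periodically yields control so the
    # event loop can run other pending tasks between chunks of work.
    total = 0
    for i in range(n):
        total += i
        if i % 1000 == 0:
            await asyncio.sleep(0)   # optimized no-delay yield point
    return total

result = asyncio.run(crunch(10_000))
assert result == sum(range(10_000))
```

Without the `await asyncio.sleep(0)`, the loop would hold the event loop for its entire duration.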
This can be used by long-running functions to avoid blocking the event loop for the full duration of the function call.\nExample of a coroutine displaying the current date every second for 5 seconds:\nimport asyncio\nimport datetime\n\nasync def display_date():\n    loop = asyncio.get_running_loop()\n    end_time = loop.time() + 5.0\n    while True:\n        print(datetime.datetime.now())\n        if (loop.time() + 1.0) >= end_time:\n            break\n        await asyncio.sleep(1)\n\nasyncio.run(display_date())\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.13: Raises ValueError if delay is nan.\nRunning Tasks Concurrently\u00b6\n- awaitable asyncio.gather(*aws, return_exceptions=False)\u00b6\nRun awaitable objects in the aws sequence concurrently.\nIf any awaitable in aws is a coroutine, it is automatically scheduled as a Task.\nIf all awaitables are completed successfully, the result is an aggregate list of returned values. The order of result values corresponds to the order of awaitables in aws.\nIf return_exceptions is False (default), the first raised exception is immediately propagated to the task that awaits on gather(). Other awaitables in the aws sequence won\u2019t be cancelled and will continue to run.\nIf return_exceptions is True, exceptions are treated the same as successful results, and aggregated in the result list.\nIf gather() is cancelled, all submitted awaitables (that have not completed yet) are also cancelled.\nIf any Task or Future from the aws sequence is cancelled, it is treated as if it raised CancelledError \u2013 the gather() call is not cancelled in this case. This is to prevent the cancellation of one submitted Task/Future from causing other Tasks/Futures to be cancelled.\nNote\nA newer alternative to create and run tasks concurrently and wait for their completion is asyncio.TaskGroup.
TaskGroup provides stronger safety guarantees than gather for scheduling a nesting of subtasks: if a task (or a subtask, a task scheduled by a task) raises an exception, TaskGroup will cancel the remaining scheduled tasks, while gather will not.\nExample:\nimport asyncio\n\nasync def factorial(name, number):\n    f = 1\n    for i in range(2, number + 1):\n        print(f\"Task {name}: Compute factorial({number}), currently i={i}...\")\n        await asyncio.sleep(1)\n        f *= i\n    print(f\"Task {name}: factorial({number}) = {f}\")\n    return f\n\nasync def main():\n    # Schedule three calls *concurrently*:\n    L = await asyncio.gather(\n        factorial(\"A\", 2),\n        factorial(\"B\", 3),\n        factorial(\"C\", 4),\n    )\n    print(L)\n\nasyncio.run(main())\n\n# Expected output:\n#\n#     Task A: Compute factorial(2), currently i=2...\n#     Task B: Compute factorial(3), currently i=2...\n#     Task C: Compute factorial(4), currently i=2...\n#     Task A: factorial(2) = 2\n#     Task B: Compute factorial(3), currently i=3...\n#     Task C: Compute factorial(4), currently i=3...\n#     Task B: factorial(3) = 6\n#     Task C: Compute factorial(4), currently i=4...\n#     Task C: factorial(4) = 24\n#     [2, 6, 24]\nNote\nIf return_exceptions is false, cancelling gather() after it has been marked done won\u2019t cancel any submitted awaitables.
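The return_exceptions=True mode described above can be sketched with two hypothetical coroutines, one succeeding and one failing:

```python
import asyncio

async def ok():
    return "ok"

async def boom():
    raise ValueError("boom")

async def main():
    # With return_exceptions=True, the exception is delivered in the
    # result list in order, instead of being propagated to the caller.
    return await asyncio.gather(ok(), boom(), return_exceptions=True)

results = asyncio.run(main())
assert results[0] == "ok"
assert isinstance(results[1], ValueError)
```

With the default `return_exceptions=False`, the same `await` would instead raise the `ValueError` directly.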
For instance, gather can be marked done after propagating an exception to the caller; therefore, calling gather.cancel() after catching an exception (raised by one of the awaitables) from gather won\u2019t cancel any other awaitables.\nChanged in version 3.7: If the gather itself is cancelled, the cancellation is propagated regardless of return_exceptions.\nChanged in version 3.10: Removed the loop parameter.\nDeprecated since version 3.10: Deprecation warning is emitted if no positional arguments are provided or not all positional arguments are Future-like objects and there is no running event loop.\nEager Task Factory\u00b6\n- asyncio.eager_task_factory(loop, coro, *, name=None, context=None)\u00b6\nA task factory for eager task execution.\nWhen using this factory (via loop.set_task_factory(asyncio.eager_task_factory)), coroutines begin execution synchronously during Task construction. Tasks are only scheduled on the event loop if they block. This can be a performance improvement, as the overhead of loop scheduling is avoided for coroutines that complete synchronously.\nA common example where this is beneficial is coroutines which employ caching or memoization to avoid actual I/O when possible.\nNote\nImmediate execution of the coroutine is a semantic change. If the coroutine returns or raises, the task is never scheduled to the event loop. If the coroutine execution blocks, the task is scheduled to the event loop. This change may introduce behavior changes to existing applications. For example, the application\u2019s task execution order is likely to change.\nAdded in version 3.12.\n- asyncio.create_eager_task_factory(custom_task_constructor)\u00b6\nCreate an eager task factory, similar to eager_task_factory(), using the provided custom_task_constructor when creating a new task instead of the default Task.\ncustom_task_constructor must be a callable with a signature matching that of Task.__init__.
The callable must return an asyncio.Task-compatible object.\nThis function returns a callable intended to be used as a task factory of an event loop via loop.set_task_factory(factory).\nAdded in version 3.12.\nShielding From Cancellation\u00b6\n- awaitable asyncio.shield(aw)\u00b6\nProtect an awaitable object from being cancelled.\nIf aw is a coroutine it is automatically scheduled as a Task.\nThe statement:\ntask = asyncio.create_task(something())\nres = await shield(task)\nis equivalent to:\nres = await something()\nexcept that if the coroutine containing it is cancelled, the Task running in something() is not cancelled. From the point of view of something(), the cancellation did not happen. Its caller is still cancelled, however, so the \u201cawait\u201d expression still raises a CancelledError.\nIf something() is cancelled by other means (i.e. from within itself), that would also cancel shield().\nIf it is desired to completely ignore cancellation (not recommended), the shield() function should be combined with a try/except clause, as follows:\ntask = asyncio.create_task(something())\ntry:\n    res = await shield(task)\nexcept CancelledError:\n    res = None\nImportant\nSave a reference to tasks passed to this function, to avoid a task disappearing mid-execution. The event loop only keeps weak references to tasks. A task that isn\u2019t referenced elsewhere may get garbage collected at any time, even before it\u2019s done.\nChanged in version 3.10: Removed the loop parameter.\nDeprecated since version 3.10: Deprecation warning is emitted if aw is not a Future-like object and there is no running event loop.\nTimeouts\u00b6\n- asyncio.timeout(delay)\u00b6\nReturn an asynchronous context manager that can be used to limit the amount of time spent waiting on something.\ndelay can either be None, or a float/int number of seconds to wait.
If delay is None, no time limit will be applied; this can be useful if the delay is unknown when the context manager is created.\nIn either case, the context manager can be rescheduled after creation using Timeout.reschedule().\nExample:\nasync def main():\n    async with asyncio.timeout(10):\n        await long_running_task()\nIf long_running_task takes more than 10 seconds to complete, the context manager will cancel the current task and handle the resulting asyncio.CancelledError internally, transforming it into a TimeoutError which can be caught and handled.\nNote\nThe asyncio.timeout() context manager is what transforms the asyncio.CancelledError into a TimeoutError, which means the TimeoutError can only be caught outside of the context manager.\nExample of catching TimeoutError:\nasync def main():\n    try:\n        async with asyncio.timeout(10):\n            await long_running_task()\n    except TimeoutError:\n        print(\"The long operation timed out, but we've handled it.\")\n\n    print(\"This statement will run regardless.\")\nThe context manager produced by asyncio.timeout() can be rescheduled to a different deadline and inspected.\n- class asyncio.Timeout(when)\u00b6\nAn asynchronous context manager for cancelling overdue coroutines.\nPrefer using asyncio.timeout() or asyncio.timeout_at() rather than instantiating Timeout directly.\nwhen should be an absolute time at which the context should time out, as measured by the event loop\u2019s clock:\nIf when is None, the timeout will never trigger.\nIf when < loop.time(), the timeout will trigger on the next iteration of the event loop.\nExample:\nasync def main():\n    try:\n        # We do not know the timeout when starting, so we pass ``None``.\n        async with asyncio.timeout(None) as cm:\n            # We know the timeout now, so we reschedule it.\n            new_deadline = get_running_loop().time() + 10\n            cm.reschedule(new_deadline)\n\n            await long_running_task()\n    except TimeoutError:\n        pass\n\n    if cm.expired():\n        print(\"Looks like we haven't finished on time.\")\nTimeout context managers can be safely nested.\nAdded in version 3.11.\n- asyncio.timeout_at(when)\u00b6\nSimilar to asyncio.timeout(), except when is the absolute time to stop waiting, or None.\nExample:\nasync def main():\n    loop = get_running_loop()\n    deadline = loop.time() + 20\n    try:\n        async with asyncio.timeout_at(deadline):\n            await long_running_task()\n    except TimeoutError:\n        print(\"The long operation timed out, but we've handled it.\")\n\n    print(\"This statement will run regardless.\")\nAdded in version 3.11.\n- async asyncio.wait_for(aw, timeout)\u00b6\nWait for the aw awaitable to complete with a timeout.\nIf aw is a coroutine it is automatically scheduled as a Task.\ntimeout can either be None or a float or int number of seconds to wait for. If timeout is None, block until the future completes.\nIf a timeout occurs, it cancels the task and raises TimeoutError.\nTo avoid the task cancellation, wrap it in shield().\nThe function will wait until the future is actually cancelled, so the total wait time may exceed the timeout. If an exception happens during cancellation, it is propagated.\nIf the wait is cancelled, the future aw is also cancelled.\nExample:\nasync def eternity():\n    # Sleep for one hour\n    await asyncio.sleep(3600)\n    print('yay!')\n\nasync def main():\n    # Wait for at most 1 second\n    try:\n        await asyncio.wait_for(eternity(), timeout=1.0)\n    except TimeoutError:\n        print('timeout!')\n\nasyncio.run(main())\n\n# Expected output:\n#\n#     timeout!\nChanged in version 3.7: When aw is cancelled due to a timeout, wait_for waits for aw to be cancelled.
Previously, it raised TimeoutError immediately.\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Raises TimeoutError instead of asyncio.TimeoutError.\nWaiting Primitives\u00b6\n- async asyncio.wait(aws, *, timeout=None, return_when=ALL_COMPLETED)\u00b6\nRun Future and Task instances in the aws iterable concurrently and block until the condition specified by return_when is met.\nThe aws iterable must not be empty.\nReturns two sets of Tasks/Futures: (done, pending).\nUsage:\ndone, pending = await asyncio.wait(aws)\ntimeout (a float or int), if specified, can be used to control the maximum number of seconds to wait before returning.\nNote that this function does not raise TimeoutError. Futures or Tasks that aren\u2019t done when the timeout occurs are simply returned in the second set.\nreturn_when indicates when this function should return. It must be one of the following constants:\n- asyncio.FIRST_COMPLETED\u00b6\nThe function will return when any future finishes or is cancelled.\n- asyncio.FIRST_EXCEPTION\u00b6\nThe function will return when any future finishes by raising an exception. If no future raises an exception, it is equivalent to ALL_COMPLETED.\n- asyncio.ALL_COMPLETED\u00b6\nThe function will return when all futures finish or are cancelled.\nUnlike wait_for(), wait() does not cancel the futures when a timeout occurs.\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Passing coroutine objects to wait() directly is forbidden.\nChanged in version 3.12: Added support for generators yielding tasks.\n- asyncio.as_completed(aws, *, timeout=None)\u00b6\nRun awaitable objects in the aws iterable concurrently. The returned object can be iterated to obtain the results of the awaitables as they finish.\nThe object returned by as_completed() can be iterated as an asynchronous iterator or a plain iterator.
When asynchronous iteration is used, the originally-supplied awaitables are yielded if they are tasks or futures. This makes it easy to correlate previously-scheduled tasks with their results. Example:\nipv4_connect = create_task(open_connection(\"127.0.0.1\", 80))\nipv6_connect = create_task(open_connection(\"::1\", 80))\ntasks = [ipv4_connect, ipv6_connect]\nasync for earliest_connect in as_completed(tasks):\n    # earliest_connect is done. The result can be obtained by\n    # awaiting it or calling earliest_connect.result()\n    reader, writer = await earliest_connect\n    if earliest_connect is ipv6_connect:\n        print(\"IPv6 connection established.\")\n    else:\n        print(\"IPv4 connection established.\")\nDuring asynchronous iteration, implicitly-created tasks will be yielded for supplied awaitables that aren\u2019t tasks or futures.\nWhen used as a plain iterator, each iteration yields a new coroutine that returns the result or raises the exception of the next completed awaitable. This pattern is compatible with Python versions older than 3.13:\nipv4_connect = create_task(open_connection(\"127.0.0.1\", 80))\nipv6_connect = create_task(open_connection(\"::1\", 80))\ntasks = [ipv4_connect, ipv6_connect]\nfor next_connect in as_completed(tasks):\n    # next_connect is not one of the original task objects. It must be\n    # awaited to obtain the result value or raise the exception of the\n    # awaitable that finishes next.\n    reader, writer = await next_connect\nA TimeoutError is raised if the timeout occurs before all awaitables are done.
This is raised by the async for\nloop during asynchronous iteration or by the coroutines yielded during plain iteration.\nChanged in version 3.10: Removed the loop parameter.\nDeprecated since version 3.10: Deprecation warning is emitted if not all awaitable objects in the aws iterable are Future-like objects and there is no running event loop.\nChanged in version 3.12: Added support for generators yielding tasks.\nChanged in version 3.13: The result can now be used as either an asynchronous iterator or as a plain iterator (previously it was only a plain iterator).\nRunning in Threads\u00b6\n- async asyncio.to_thread(func, /, *args, **kwargs)\u00b6\nAsynchronously run function func in a separate thread.\nAny *args and **kwargs supplied for this function are directly passed to func. Also, the current\ncontextvars.Context\nis propagated, allowing context variables from the event loop thread to be accessed in the separate thread.\nReturn a coroutine that can be awaited to get the eventual result of func.\nThis coroutine function is primarily intended to be used for executing IO-bound functions/methods that would otherwise block the event loop if they were run in the main thread. For example:\ndef blocking_io():\n    print(f\"start blocking_io at {time.strftime('%X')}\")\n    # Note that time.sleep() can be replaced with any blocking\n    # IO-bound operation, such as file operations.\n    time.sleep(1)\n    print(f\"blocking_io complete at {time.strftime('%X')}\")\n\nasync def main():\n    print(f\"started main at {time.strftime('%X')}\")\n    await asyncio.gather(\n        asyncio.to_thread(blocking_io),\n        asyncio.sleep(1))\n    print(f\"finished main at {time.strftime('%X')}\")\n\nasyncio.run(main())\n# Expected output:\n#\n# started main at 19:50:53\n# start blocking_io at 19:50:53\n# blocking_io complete at 19:50:54\n# finished main at 19:50:54\nDirectly calling\nblocking_io()\nin any coroutine would block the event loop for its duration, resulting in an additional 1 second of run time. 
Instead, by using asyncio.to_thread()\n, we can run it in a separate thread without blocking the event loop.\nNote\nDue to the GIL,\nasyncio.to_thread()\ncan typically only be used to make IO-bound functions non-blocking. However, for extension modules that release the GIL or alternative Python implementations that don\u2019t have one, asyncio.to_thread()\ncan also be used for CPU-bound functions.\nAdded in version 3.9.\nScheduling From Other Threads\u00b6\n- asyncio.run_coroutine_threadsafe(coro, loop)\u00b6\nSubmit a coroutine to the given event loop. Thread-safe.\nReturn a\nconcurrent.futures.Future\nto wait for the result from another OS thread.\nThis function is meant to be called from a different OS thread than the one where the event loop is running. Example:\ndef in_thread(loop: asyncio.AbstractEventLoop) -> None:\n    # Run some blocking IO\n    pathlib.Path(\"example.txt\").write_text(\"hello world\", encoding=\"utf8\")\n\n    # Create a coroutine\n    coro = asyncio.sleep(1, result=3)\n\n    # Submit the coroutine to a given loop\n    future = asyncio.run_coroutine_threadsafe(coro, loop)\n\n    # Wait for the result with an optional timeout argument\n    assert future.result(timeout=2) == 3\n\nasync def amain() -> None:\n    # Get the running loop\n    loop = asyncio.get_running_loop()\n\n    # Run something in a thread\n    await asyncio.to_thread(in_thread, loop)\nIt\u2019s also possible to run the other way around. 
Example:\n@contextlib.contextmanager def loop_in_thread() -> Generator[asyncio.AbstractEventLoop]: loop_fut = concurrent.futures.Future[asyncio.AbstractEventLoop]() stop_event = asyncio.Event() async def main() -> None: loop_fut.set_result(asyncio.get_running_loop()) await stop_event.wait() with concurrent.futures.ThreadPoolExecutor(1) as tpe: complete_fut = tpe.submit(asyncio.run, main()) for fut in concurrent.futures.as_completed((loop_fut, complete_fut)): if fut is loop_fut: loop = loop_fut.result() try: yield loop finally: loop.call_soon_threadsafe(stop_event.set) else: fut.result() # Create a loop in another thread with loop_in_thread() as loop: # Create a coroutine coro = asyncio.sleep(1, result=3) # Submit the coroutine to a given loop future = asyncio.run_coroutine_threadsafe(coro, loop) # Wait for the result with an optional timeout argument assert future.result(timeout=2) == 3\nIf an exception is raised in the coroutine, the returned Future will be notified. It can also be used to cancel the task in the event loop:\ntry: result = future.result(timeout) except TimeoutError: print('The coroutine took too long, cancelling the task...') future.cancel() except Exception as exc: print(f'The coroutine raised an exception: {exc!r}') else: print(f'The coroutine returned: {result!r}')\nSee the concurrency and multithreading section of the documentation.\nUnlike other asyncio functions this function requires the loop argument to be passed explicitly.\nAdded in version 3.5.1.\nIntrospection\u00b6\n- asyncio.current_task(loop=None)\u00b6\nReturn the currently running\nTask\ninstance, orNone\nif no task is running.If loop is\nNone\nget_running_loop()\nis used to get the current loop.Added in version 3.7.\n- asyncio.all_tasks(loop=None)\u00b6\nReturn a set of not yet finished\nTask\nobjects run by the loop.If loop is\nNone\n,get_running_loop()\nis used for getting current loop.Added in version 3.7.\n- asyncio.iscoroutine(obj)\u00b6\nReturn\nTrue\nif obj is a coroutine 
object.Added in version 3.4.\nTask Object\u00b6\n- class asyncio.Task(coro, *, loop=None, name=None, context=None, eager_start=False)\u00b6\nA\nFuture-like\nobject that runs a Python coroutine. Not thread-safe.Tasks are used to run coroutines in event loops. If a coroutine awaits on a Future, the Task suspends the execution of the coroutine and waits for the completion of the Future. When the Future is done, the execution of the wrapped coroutine resumes.\nEvent loops use cooperative scheduling: an event loop runs one Task at a time. While a Task awaits for the completion of a Future, the event loop runs other Tasks, callbacks, or performs IO operations.\nUse the high-level\nasyncio.create_task()\nfunction to create Tasks, or the low-levelloop.create_task()\norensure_future()\nfunctions. Manual instantiation of Tasks is discouraged.To cancel a running Task use the\ncancel()\nmethod. Calling it will cause the Task to throw aCancelledError\nexception into the wrapped coroutine. If a coroutine is awaiting on a Future object during cancellation, the Future object will be cancelled.cancelled()\ncan be used to check if the Task was cancelled. The method returnsTrue\nif the wrapped coroutine did not suppress theCancelledError\nexception and was actually cancelled.asyncio.Task\ninherits fromFuture\nall of its APIs exceptFuture.set_result()\nandFuture.set_exception()\n.An optional keyword-only context argument allows specifying a custom\ncontextvars.Context\nfor the coro to run in. If no context is provided, the Task copies the current context and later runs its coroutine in the copied context.An optional keyword-only eager_start argument allows eagerly starting the execution of the\nasyncio.Task\nat task creation time. If set toTrue\nand the event loop is running, the task will start executing the coroutine immediately, until the first time the coroutine blocks. 
If the coroutine returns or raises without blocking, the task will be finished eagerly and will skip scheduling to the event loop.Changed in version 3.7: Added support for the\ncontextvars\nmodule.Changed in version 3.8: Added the name parameter.\nDeprecated since version 3.10: Deprecation warning is emitted if loop is not specified and there is no running event loop.\nChanged in version 3.11: Added the context parameter.\nChanged in version 3.12: Added the eager_start parameter.\n- done()\u00b6\nReturn\nTrue\nif the Task is done.A Task is done when the wrapped coroutine either returned a value, raised an exception, or the Task was cancelled.\n- result()\u00b6\nReturn the result of the Task.\nIf the Task is done, the result of the wrapped coroutine is returned (or if the coroutine raised an exception, that exception is re-raised.)\nIf the Task has been cancelled, this method raises a\nCancelledError\nexception.If the Task\u2019s result isn\u2019t yet available, this method raises an\nInvalidStateError\nexception.\n- exception()\u00b6\nReturn the exception of the Task.\nIf the wrapped coroutine raised an exception that exception is returned. 
If the wrapped coroutine returned normally this method returns\nNone\n.If the Task has been cancelled, this method raises a\nCancelledError\nexception.If the Task isn\u2019t done yet, this method raises an\nInvalidStateError\nexception.\n- add_done_callback(callback, *, context=None)\u00b6\nAdd a callback to be run when the Task is done.\nThis method should only be used in low-level callback-based code.\nSee the documentation of\nFuture.add_done_callback()\nfor more details.\n- remove_done_callback(callback)\u00b6\nRemove callback from the callbacks list.\nThis method should only be used in low-level callback-based code.\nSee the documentation of\nFuture.remove_done_callback()\nfor more details.\n- get_stack(*, limit=None)\u00b6\nReturn the list of stack frames for this Task.\nIf the wrapped coroutine is not done, this returns the stack where it is suspended. If the coroutine has completed successfully or was cancelled, this returns an empty list. If the coroutine was terminated by an exception, this returns the list of traceback frames.\nThe frames are always ordered from oldest to newest.\nOnly one stack frame is returned for a suspended coroutine.\nThe optional limit argument sets the maximum number of frames to return; by default all available frames are returned. The ordering of the returned list differs depending on whether a stack or a traceback is returned: the newest frames of a stack are returned, but the oldest frames of a traceback are returned. 
(This matches the behavior of the traceback module.)\n- print_stack(*, limit=None, file=None)\u00b6\nPrint the stack or traceback for this Task.\nThis produces output similar to that of the traceback module for the frames retrieved by\nget_stack()\n.The limit argument is passed to\nget_stack()\ndirectly.The file argument is an I/O stream to which the output is written; by default output is written to\nsys.stdout\n.\n- get_coro()\u00b6\nReturn the coroutine object wrapped by the\nTask\n.Note\nThis will return\nNone\nfor Tasks which have already completed eagerly. See the Eager Task Factory.Added in version 3.8.\nChanged in version 3.12: Newly added eager task execution means result may be\nNone\n.\n- get_context()\u00b6\nReturn the\ncontextvars.Context\nobject associated with the task.Added in version 3.12.\n- get_name()\u00b6\nReturn the name of the Task.\nIf no name has been explicitly assigned to the Task, the default asyncio Task implementation generates a default name during instantiation.\nAdded in version 3.8.\n- set_name(value)\u00b6\nSet the name of the Task.\nThe value argument can be any object, which is then converted to a string.\nIn the default Task implementation, the name will be visible in the\nrepr()\noutput of a task object.Added in version 3.8.\n- cancel(msg=None)\u00b6\nRequest the Task to be cancelled.\nIf the Task is already done or cancelled, return\nFalse\n, otherwise, returnTrue\n.The method arranges for a\nCancelledError\nexception to be thrown into the wrapped coroutine on the next cycle of the event loop.The coroutine then has a chance to clean up or even deny the request by suppressing the exception with a\ntry\n\u2026 \u2026except CancelledError\n\u2026finally\nblock. Therefore, unlikeFuture.cancel()\n,Task.cancel()\ndoes not guarantee that the Task will be cancelled, although suppressing cancellation completely is not common and is actively discouraged. 
Should the coroutine nevertheless decide to suppress the cancellation, it needs to call Task.uncancel()\nin addition to catching the exception.\nChanged in version 3.9: Added the msg parameter.\nChanged in version 3.11: The\nmsg\nparameter is propagated from cancelled task to its awaiter.\nThe following example illustrates how coroutines can intercept the cancellation request:\nasync def cancel_me():\n    print('cancel_me(): before sleep')\n    try:\n        # Wait for 1 hour\n        await asyncio.sleep(3600)\n    except asyncio.CancelledError:\n        print('cancel_me(): cancel sleep')\n        raise\n    finally:\n        print('cancel_me(): after sleep')\n\nasync def main():\n    # Create a \"cancel_me\" Task\n    task = asyncio.create_task(cancel_me())\n\n    # Wait for 1 second\n    await asyncio.sleep(1)\n\n    task.cancel()\n    try:\n        await task\n    except asyncio.CancelledError:\n        print(\"main(): cancel_me is cancelled now\")\n\nasyncio.run(main())\n# Expected output:\n#\n# cancel_me(): before sleep\n# cancel_me(): cancel sleep\n# cancel_me(): after sleep\n# main(): cancel_me is cancelled now\n- cancelled()\u00b6\nReturn\nTrue\nif the Task is cancelled.\nThe Task is cancelled when the cancellation was requested with\ncancel()\nand the wrapped coroutine propagated the\nCancelledError\nexception thrown into it.\n- uncancel()\u00b6\nDecrement the count of cancellation requests to this Task.\nReturns the remaining number of cancellation requests.\nNote that once execution of a cancelled task has completed, further calls to\nuncancel()\nare ineffective.\nAdded in version 3.11.\nThis method is used by asyncio\u2019s internals and isn\u2019t expected to be used by end-user code. In particular, if a Task gets successfully uncancelled, this allows for elements of structured concurrency like Task Groups and\nasyncio.timeout()\nto continue running, isolating cancellation to the respective structured block. 
For example:async def make_request_with_timeout(): try: async with asyncio.timeout(1): # Structured block affected by the timeout: await make_request() await make_another_request() except TimeoutError: log(\"There was a timeout\") # Outer code not affected by the timeout: await unrelated_code()\nWhile the block with\nmake_request()\nandmake_another_request()\nmight get cancelled due to the timeout,unrelated_code()\nshould continue running even in case of the timeout. This is implemented withuncancel()\n.TaskGroup\ncontext managers useuncancel()\nin a similar fashion.If end-user code is, for some reason, suppressing cancellation by catching\nCancelledError\n, it needs to call this method to remove the cancellation state.When this method decrements the cancellation count to zero, the method checks if a previous\ncancel()\ncall had arranged forCancelledError\nto be thrown into the task. If it hasn\u2019t been thrown yet, that arrangement will be rescinded (by resetting the internal_must_cancel\nflag).\nChanged in version 3.13: Changed to rescind pending cancellation requests upon reaching zero.\n- cancelling()\u00b6\nReturn the number of pending cancellation requests to this Task, i.e., the number of calls to\ncancel()\nless the number ofuncancel()\ncalls.Note that if this number is greater than zero but the Task is still executing,\ncancelled()\nwill still returnFalse\n. This is because this number can be lowered by callinguncancel()\n, which can lead to the task not being cancelled after all if the cancellation requests go down to zero.This method is used by asyncio\u2019s internals and isn\u2019t expected to be used by end-user code. 
See\nuncancel()\nfor more details.Added in version 3.11.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 10258}
+{"url": "https://docs.python.org/3/library/asyncio-protocol.html", "title": "Transports and Protocols", "content": "Transports and Protocols\u00b6\nPreface\nTransports and Protocols are used by the low-level event loop\nAPIs such as 
loop.create_connection()\n. They use\ncallback-based programming style and enable high-performance\nimplementations of network or IPC protocols (e.g. HTTP).\nEssentially, transports and protocols should only be used in libraries and frameworks and never in high-level asyncio applications.\nThis documentation page covers both Transports and Protocols.\nIntroduction\nAt the highest level, the transport is concerned with how bytes are transmitted, while the protocol determines which bytes to transmit (and to some extent when).\nA different way of saying the same thing: a transport is an abstraction for a socket (or similar I/O endpoint) while a protocol is an abstraction for an application, from the transport\u2019s point of view.\nYet another view is the transport and protocol interfaces together define an abstract interface for using network I/O and interprocess I/O.\nThere is always a 1:1 relationship between transport and protocol objects: the protocol calls transport methods to send data, while the transport calls protocol methods to pass it data that has been received.\nMost of connection oriented event loop methods\n(such as loop.create_connection()\n) usually accept a\nprotocol_factory argument used to create a Protocol object\nfor an accepted connection, represented by a Transport object.\nSuch methods usually return a tuple of (transport, protocol)\n.\nContents\nThis documentation page contains the following sections:\nThe Transports section documents asyncio\nBaseTransport\n,ReadTransport\n,WriteTransport\n,Transport\n,DatagramTransport\n, andSubprocessTransport\nclasses.The Protocols section documents asyncio\nBaseProtocol\n,Protocol\n,BufferedProtocol\n,DatagramProtocol\n, andSubprocessProtocol\nclasses.The Examples section showcases how to work with transports, protocols, and low-level event loop APIs.\nTransports\u00b6\nSource code: Lib/asyncio/transports.py\nTransports are classes provided by asyncio\nin order to abstract\nvarious kinds of 
communication channels.\nTransport objects are always instantiated by an asyncio event loop.\nasyncio implements transports for TCP, UDP, SSL, and subprocess pipes. The methods available on a transport depend on the transport\u2019s kind.\nThe transport classes are not thread safe.\nTransports Hierarchy\u00b6\n- class asyncio.BaseTransport\u00b6\nBase class for all transports. Contains methods that all asyncio transports share.\n- class asyncio.WriteTransport(BaseTransport)\u00b6\nA base transport for write-only connections.\nInstances of the WriteTransport class are returned from the\nloop.connect_write_pipe()\nevent loop method and are also used by subprocess-related methods likeloop.subprocess_exec()\n.\n- class asyncio.ReadTransport(BaseTransport)\u00b6\nA base transport for read-only connections.\nInstances of the ReadTransport class are returned from the\nloop.connect_read_pipe()\nevent loop method and are also used by subprocess-related methods likeloop.subprocess_exec()\n.\n- class asyncio.Transport(WriteTransport, ReadTransport)\u00b6\nInterface representing a bidirectional transport, such as a TCP connection.\nThe user does not instantiate a transport directly; they call a utility function, passing it a protocol factory and other information necessary to create the transport and protocol.\nInstances of the Transport class are returned from or used by event loop methods like\nloop.create_connection()\n,loop.create_unix_connection()\n,loop.create_server()\n,loop.sendfile()\n, etc.\n- class asyncio.DatagramTransport(BaseTransport)\u00b6\nA transport for datagram (UDP) connections.\nInstances of the DatagramTransport class are returned from the\nloop.create_datagram_endpoint()\nevent loop method.\n- class asyncio.SubprocessTransport(BaseTransport)\u00b6\nAn abstraction to represent a connection between a parent and its child OS process.\nInstances of the SubprocessTransport class are returned from event loop 
methods\nloop.subprocess_shell()\nandloop.subprocess_exec()\n.\nBase Transport\u00b6\n- BaseTransport.close()\u00b6\nClose the transport.\nIf the transport has a buffer for outgoing data, buffered data will be flushed asynchronously. No more data will be received. After all buffered data is flushed, the protocol\u2019s\nprotocol.connection_lost()\nmethod will be called withNone\nas its argument. The transport should not be used once it is closed.\n- BaseTransport.is_closing()\u00b6\nReturn\nTrue\nif the transport is closing or is closed.\n- BaseTransport.get_extra_info(name, default=None)\u00b6\nReturn information about the transport or underlying resources it uses.\nname is a string representing the piece of transport-specific information to get.\ndefault is the value to return if the information is not available, or if the transport does not support querying it with the given third-party event loop implementation or on the current platform.\nFor example, the following code attempts to get the underlying socket object of the transport:\nsock = transport.get_extra_info('socket') if sock is not None: print(sock.getsockopt(...))\nCategories of information that can be queried on some transports:\nsocket:\n'peername'\n: the remote address to which the socket is connected, result ofsocket.socket.getpeername()\n(None\non error)'socket'\n:socket.socket\ninstance'sockname'\n: the socket\u2019s own address, result ofsocket.socket.getsockname()\nSSL socket:\n'compression'\n: the compression algorithm being used as a string, orNone\nif the connection isn\u2019t compressed; result ofssl.SSLSocket.compression()\n'cipher'\n: a three-value tuple containing the name of the cipher being used, the version of the SSL protocol that defines its use, and the number of secret bits being used; result ofssl.SSLSocket.cipher()\n'peercert'\n: peer certificate; result 
ofssl.SSLSocket.getpeercert()\n'sslcontext'\n:ssl.SSLContext\ninstance'ssl_object'\n:ssl.SSLObject\norssl.SSLSocket\ninstance\npipe:\n'pipe'\n: pipe object\nsubprocess:\n'subprocess'\n:subprocess.Popen\ninstance\n- BaseTransport.set_protocol(protocol)\u00b6\nSet a new protocol.\nSwitching protocol should only be done when both protocols are documented to support the switch.\n- BaseTransport.get_protocol()\u00b6\nReturn the current protocol.\nRead-only Transports\u00b6\n- ReadTransport.is_reading()\u00b6\nReturn\nTrue\nif the transport is receiving new data.Added in version 3.7.\n- ReadTransport.pause_reading()\u00b6\nPause the receiving end of the transport. No data will be passed to the protocol\u2019s\nprotocol.data_received()\nmethod untilresume_reading()\nis called.Changed in version 3.7: The method is idempotent, i.e. it can be called when the transport is already paused or closed.\n- ReadTransport.resume_reading()\u00b6\nResume the receiving end. The protocol\u2019s\nprotocol.data_received()\nmethod will be called once again if some data is available for reading.Changed in version 3.7: The method is idempotent, i.e. it can be called when the transport is already reading.\nWrite-only Transports\u00b6\n- WriteTransport.abort()\u00b6\nClose the transport immediately, without waiting for pending operations to complete. Buffered data will be lost. No more data will be received. The protocol\u2019s\nprotocol.connection_lost()\nmethod will eventually be called withNone\nas its argument.\n- WriteTransport.can_write_eof()\u00b6\nReturn\nTrue\nif the transport supportswrite_eof()\n,False\nif not.\n- WriteTransport.get_write_buffer_size()\u00b6\nReturn the current size of the output buffer used by the transport.\n- WriteTransport.get_write_buffer_limits()\u00b6\nGet the high and low watermarks for write flow control. 
Return a tuple\n(low, high)\nwhere low and high are positive number of bytes.Use\nset_write_buffer_limits()\nto set the limits.Added in version 3.4.2.\n- WriteTransport.set_write_buffer_limits(high=None, low=None)\u00b6\nSet the high and low watermarks for write flow control.\nThese two values (measured in number of bytes) control when the protocol\u2019s\nprotocol.pause_writing()\nandprotocol.resume_writing()\nmethods are called. If specified, the low watermark must be less than or equal to the high watermark. Neither high nor low can be negative.pause_writing()\nis called when the buffer size becomes greater than or equal to the high value. If writing has been paused,resume_writing()\nis called when the buffer size becomes less than or equal to the low value.The defaults are implementation-specific. If only the high watermark is given, the low watermark defaults to an implementation-specific value less than or equal to the high watermark. Setting high to zero forces low to zero as well, and causes\npause_writing()\nto be called whenever the buffer becomes non-empty. Setting low to zero causesresume_writing()\nto be called only once the buffer is empty. Use of zero for either limit is generally sub-optimal as it reduces opportunities for doing I/O and computation concurrently.Use\nget_write_buffer_limits()\nto get the limits.\n- WriteTransport.write(data)\u00b6\nWrite some data bytes to the transport.\nThis method does not block; it buffers the data and arranges for it to be sent out asynchronously.\n- WriteTransport.writelines(list_of_data)\u00b6\nWrite a list (or any iterable) of data bytes to the transport. This is functionally equivalent to calling\nwrite()\non each element yielded by the iterable, but may be implemented more efficiently.\n- WriteTransport.write_eof()\u00b6\nClose the write end of the transport after flushing all buffered data. Data may still be received.\nThis method can raise\nNotImplementedError\nif the transport (e.g. 
SSL) doesn\u2019t support half-closed connections.\nDatagram Transports\u00b6\n- DatagramTransport.sendto(data, addr=None)\u00b6\nSend the data bytes to the remote peer given by addr (a transport-dependent target address). If addr is\nNone\n, the data is sent to the target address given on transport creation.This method does not block; it buffers the data and arranges for it to be sent out asynchronously.\nChanged in version 3.13: This method can be called with an empty bytes object to send a zero-length datagram. The buffer size calculation used for flow control is also updated to account for the datagram header.\n- DatagramTransport.abort()\u00b6\nClose the transport immediately, without waiting for pending operations to complete. Buffered data will be lost. No more data will be received. The protocol\u2019s\nprotocol.connection_lost()\nmethod will eventually be called withNone\nas its argument.\nSubprocess Transports\u00b6\n- SubprocessTransport.get_pid()\u00b6\nReturn the subprocess process id as an integer.\n- SubprocessTransport.get_pipe_transport(fd)\u00b6\nReturn the transport for the communication pipe corresponding to the integer file descriptor fd:\n0\n: writable streaming transport of the standard input (stdin), orNone\nif the subprocess was not created withstdin=PIPE\n1\n: readable streaming transport of the standard output (stdout), orNone\nif the subprocess was not created withstdout=PIPE\n2\n: readable streaming transport of the standard error (stderr), orNone\nif the subprocess was not created withstderr=PIPE\nother fd:\nNone\n- SubprocessTransport.get_returncode()\u00b6\nReturn the subprocess return code as an integer or\nNone\nif it hasn\u2019t returned, which is similar to thesubprocess.Popen.returncode\nattribute.\n- SubprocessTransport.kill()\u00b6\nKill the subprocess.\nOn POSIX systems, the function sends SIGKILL to the subprocess. 
On Windows, this method is an alias for\nterminate()\n.See also\nsubprocess.Popen.kill()\n.\n- SubprocessTransport.send_signal(signal)\u00b6\nSend the signal number to the subprocess, as in\nsubprocess.Popen.send_signal()\n.\n- SubprocessTransport.terminate()\u00b6\nStop the subprocess.\nOn POSIX systems, this method sends\nSIGTERM\nto the subprocess. On Windows, the Windows API functionTerminateProcess()\nis called to stop the subprocess.See also\nsubprocess.Popen.terminate()\n.\nProtocols\u00b6\nSource code: Lib/asyncio/protocols.py\nasyncio provides a set of abstract base classes that should be used to implement network protocols. Those classes are meant to be used together with transports.\nSubclasses of abstract base protocol classes may implement some or all methods. All these methods are callbacks: they are called by transports on certain events, for example when some data is received. A base protocol method should be called by the corresponding transport.\nBase Protocols\u00b6\n- class asyncio.BaseProtocol\u00b6\nBase protocol with methods that all protocols share.\n- class asyncio.Protocol(BaseProtocol)\u00b6\nThe base class for implementing streaming protocols (TCP, Unix sockets, etc).\n- class asyncio.BufferedProtocol(BaseProtocol)\u00b6\nA base class for implementing streaming protocols with manual control of the receive buffer.\n- class asyncio.DatagramProtocol(BaseProtocol)\u00b6\nThe base class for implementing datagram (UDP) protocols.\n- class asyncio.SubprocessProtocol(BaseProtocol)\u00b6\nThe base class for implementing protocols communicating with child processes (unidirectional pipes).\nBase Protocol\u00b6\nAll asyncio protocols can implement Base Protocol callbacks.\nConnection Callbacks\nConnection callbacks are called on all protocols, exactly once per a successful connection. 
All other protocol callbacks can only be called between those two methods.\n- BaseProtocol.connection_made(transport)\u00b6\nCalled when a connection is made.\nThe transport argument is the transport representing the connection. The protocol is responsible for storing the reference to its transport.\n- BaseProtocol.connection_lost(exc)\u00b6\nCalled when the connection is lost or closed.\nThe argument is either an exception object or\nNone\n. The latter means a regular EOF is received, or the connection was aborted or closed by this side of the connection.\nFlow Control Callbacks\nFlow control callbacks can be called by transports to pause or resume writing performed by the protocol.\nSee the documentation of the set_write_buffer_limits()\nmethod for more details.\n- BaseProtocol.pause_writing()\u00b6\nCalled when the transport\u2019s buffer goes over the high watermark.\n- BaseProtocol.resume_writing()\u00b6\nCalled when the transport\u2019s buffer drains below the low watermark.\nIf the buffer size equals the high watermark,\npause_writing()\nis not called: the buffer size must\ngo strictly over.\nConversely, resume_writing()\nis called when the\nbuffer size is equal or lower than the low watermark. These end\nconditions are important to ensure that things go as expected when\neither mark is zero.\nStreaming Protocols\u00b6\nEvent methods, such as loop.create_server()\n,\nloop.create_unix_server()\n, loop.create_connection()\n,\nloop.create_unix_connection()\n, loop.connect_accepted_socket()\n,\nloop.connect_read_pipe()\n, and loop.connect_write_pipe()\naccept factories that return streaming protocols.\n- Protocol.data_received(data)\u00b6\nCalled when some data is received. data is a non-empty bytes object containing the incoming data.\nWhether the data is buffered, chunked or reassembled depends on the transport. In general, you shouldn\u2019t rely on specific semantics and instead make your parsing generic and flexible. 
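The advice above, to keep parsing generic and flexible, can be illustrated with a small hypothetical protocol (not part of the documented API) that accumulates whatever chunks the transport delivers and extracts complete length-prefixed frames:

```python
import asyncio
import struct

class FrameProtocol(asyncio.Protocol):
    """Hypothetical sketch: reassemble 4-byte length-prefixed frames,
    regardless of how the transport chunks the incoming data."""

    def __init__(self):
        self._buffer = bytearray()

    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        # data may be any fragment of the stream; buffer it and pull
        # out as many complete frames as are available.
        self._buffer.extend(data)
        while len(self._buffer) >= 4:
            (length,) = struct.unpack_from("!I", self._buffer)
            if len(self._buffer) < 4 + length:
                break  # wait for the rest of the frame
            frame = bytes(self._buffer[4:4 + length])
            del self._buffer[:4 + length]
            self.frame_received(frame)

    def frame_received(self, frame):
        # Hypothetical hook: called once per complete frame.
        print("frame:", frame)
```

Because framing happens against the internal buffer, the protocol behaves identically whether a frame arrives in one chunk or split across many `data_received()` calls.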
However, data is always received in the correct order.\nThe method can be called an arbitrary number of times while a connection is open.\nHowever,\nprotocol.eof_received()\nis called at most once. Once\neof_received()\nis called,\ndata_received()\nis not called anymore.\n- Protocol.eof_received()\u00b6\nCalled when the other end signals it won\u2019t send any more data (for example by calling\ntransport.write_eof()\n, if the other end also uses asyncio). This method may return a false value (including\nNone\n), in which case the transport will close itself. Conversely, if this method returns a true value, it is up to the protocol to determine whether to close the transport. Since the default implementation returns\nNone\n, it implicitly closes the connection.\nSome transports, including SSL, don\u2019t support half-closed connections, in which case returning true from this method will result in the connection being closed.\nState machine:\nstart -> connection_made\n[-> data_received]*\n[-> eof_received]?\n-> connection_lost -> end\nBuffered Streaming Protocols\u00b6\nAdded in version 3.7.\nBuffered Protocols can be used with any event loop method that supports Streaming Protocols.\nBufferedProtocol\nimplementations allow explicit manual allocation\nand control of the receive buffer. Event loops can then use the buffer\nprovided by the protocol to avoid unnecessary data copies. This\ncan result in noticeable performance improvement for protocols that\nreceive big amounts of data. Sophisticated protocol implementations\ncan significantly reduce the number of buffer allocations.\nThe following callbacks are called on BufferedProtocol\ninstances:\n- BufferedProtocol.get_buffer(sizehint)\u00b6\nCalled to allocate a new receive buffer.\nsizehint is the recommended minimum size for the returned buffer. It is acceptable to return smaller or larger buffers than what sizehint suggests. When set to -1, the buffer size can be arbitrary. 
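A minimal sketch of the get_buffer()/buffer_updated() contract (a hypothetical class; a single fixed-size bytearray is assumed to be sufficient for the use case):

```python
import asyncio

class SimpleBufferedProtocol(asyncio.BufferedProtocol):
    """Hypothetical sketch: hand the transport a reusable bytearray
    and consume whatever it wrote into it."""

    def __init__(self):
        self._buffer = bytearray(65536)   # reused for every read
        self._received = bytearray()      # bytes consumed so far

    def connection_made(self, transport):
        self.transport = transport

    def get_buffer(self, sizehint):
        # sizehint is only a recommendation; returning a larger
        # (or smaller, but never zero-sized) buffer is acceptable.
        return self._buffer

    def buffer_updated(self, nbytes):
        # The first nbytes of the buffer now hold received data.
        self._received += self._buffer[:nbytes]

    def eof_received(self):
        return None   # a false value lets the transport close itself
```

Reusing one buffer across reads is what lets the event loop avoid the per-read allocation and copy that a plain `Protocol.data_received()` implies.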
It is an error to return a buffer with a zero size.\nget_buffer()\nmust return an object implementing the buffer protocol.\n- BufferedProtocol.buffer_updated(nbytes)\u00b6\nCalled when the buffer was updated with the received data.\nnbytes is the total number of bytes that were written to the buffer.\n- BufferedProtocol.eof_received()\u00b6\nSee the documentation of the\nprotocol.eof_received()\nmethod.\nget_buffer()\ncan be called an arbitrary number\nof times during a connection. However, protocol.eof_received()\nis called at most once\nand, if called, get_buffer()\nand\nbuffer_updated()\nwon\u2019t be called after it.\nState machine:\nstart -> connection_made\n[-> get_buffer\n[-> buffer_updated]?\n]*\n[-> eof_received]?\n-> connection_lost -> end\nDatagram Protocols\u00b6\nDatagram Protocol instances should be constructed by protocol\nfactories passed to the loop.create_datagram_endpoint()\nmethod.\n- DatagramProtocol.datagram_received(data, addr)\u00b6\nCalled when a datagram is received. data is a bytes object containing the incoming data. addr is the address of the peer sending the data; the exact format depends on the transport.\n- DatagramProtocol.error_received(exc)\u00b6\nCalled when a previous send or receive operation raises an\nOSError\n. exc is the\nOSError\ninstance. This method is called in rare conditions, when the transport (e.g. UDP) detects that a datagram could not be delivered to its recipient. In many conditions though, undeliverable datagrams will be silently dropped.\nNote\nOn BSD systems (macOS, FreeBSD, etc.) flow control is not supported for datagram protocols, because there is no reliable way to detect send failures caused by writing too many packets.\nThe socket always appears \u2018ready\u2019 and excess packets are dropped. 
An\nOSError\nwith errno\nset to errno.ENOBUFS\nmay\nor may not be raised; if it is raised, it will be reported to\nDatagramProtocol.error_received()\nbut otherwise ignored.\nSubprocess Protocols\u00b6\nSubprocess Protocol instances should be constructed by protocol\nfactories passed to the loop.subprocess_exec()\nand\nloop.subprocess_shell()\nmethods.\n- SubprocessProtocol.pipe_data_received(fd, data)\u00b6\nCalled when the child process writes data into its stdout or stderr pipe.\nfd is the integer file descriptor of the pipe.\ndata is a non-empty bytes object containing the received data.\n- SubprocessProtocol.pipe_connection_lost(fd, exc)\u00b6\nCalled when one of the pipes communicating with the child process is closed.\nfd is the integer file descriptor that was closed.\n- SubprocessProtocol.process_exited()\u00b6\nCalled when the child process has exited.\nIt can be called before\npipe_data_received()\nandpipe_connection_lost()\nmethods.\nExamples\u00b6\nTCP Echo Server\u00b6\nCreate a TCP echo server using the loop.create_server()\nmethod, send back\nreceived data, and close the connection:\nimport asyncio\nclass EchoServerProtocol(asyncio.Protocol):\ndef connection_made(self, transport):\npeername = transport.get_extra_info('peername')\nprint('Connection from {}'.format(peername))\nself.transport = transport\ndef data_received(self, data):\nmessage = data.decode()\nprint('Data received: {!r}'.format(message))\nprint('Send: {!r}'.format(message))\nself.transport.write(data)\nprint('Close the client socket')\nself.transport.close()\nasync def main():\n# Get a reference to the event loop as we plan to use\n# low-level APIs.\nloop = asyncio.get_running_loop()\nserver = await loop.create_server(\nEchoServerProtocol,\n'127.0.0.1', 8888)\nasync with server:\nawait server.serve_forever()\nasyncio.run(main())\nSee also\nThe TCP echo server using streams\nexample uses the high-level asyncio.start_server()\nfunction.\nTCP Echo Client\u00b6\nA TCP echo client using the 
loop.create_connection()\nmethod, sends\ndata, and waits until the connection is closed:\nimport asyncio\nclass EchoClientProtocol(asyncio.Protocol):\ndef __init__(self, message, on_con_lost):\nself.message = message\nself.on_con_lost = on_con_lost\ndef connection_made(self, transport):\ntransport.write(self.message.encode())\nprint('Data sent: {!r}'.format(self.message))\ndef data_received(self, data):\nprint('Data received: {!r}'.format(data.decode()))\ndef connection_lost(self, exc):\nprint('The server closed the connection')\nself.on_con_lost.set_result(True)\nasync def main():\n# Get a reference to the event loop as we plan to use\n# low-level APIs.\nloop = asyncio.get_running_loop()\non_con_lost = loop.create_future()\nmessage = 'Hello World!'\ntransport, protocol = await loop.create_connection(\nlambda: EchoClientProtocol(message, on_con_lost),\n'127.0.0.1', 8888)\n# Wait until the protocol signals that the connection\n# is lost and close the transport.\ntry:\nawait on_con_lost\nfinally:\ntransport.close()\nasyncio.run(main())\nSee also\nThe TCP echo client using streams\nexample uses the high-level asyncio.open_connection()\nfunction.\nUDP Echo Server\u00b6\nA UDP echo server, using the loop.create_datagram_endpoint()\nmethod, sends back received data:\nimport asyncio\nclass EchoServerProtocol:\ndef connection_made(self, transport):\nself.transport = transport\ndef datagram_received(self, data, addr):\nmessage = data.decode()\nprint('Received %r from %s' % (message, addr))\nprint('Send %r to %s' % (message, addr))\nself.transport.sendto(data, addr)\nasync def main():\nprint(\"Starting UDP server\")\n# Get a reference to the event loop as we plan to use\n# low-level APIs.\nloop = asyncio.get_running_loop()\n# One protocol instance will be created to serve all\n# client requests.\ntransport, protocol = await loop.create_datagram_endpoint(\nEchoServerProtocol,\nlocal_addr=('127.0.0.1', 9999))\ntry:\nawait asyncio.sleep(3600) # Serve for 1 
hour.\nfinally:\ntransport.close()\nasyncio.run(main())\nUDP Echo Client\u00b6\nA UDP echo client, using the loop.create_datagram_endpoint()\nmethod, sends data and closes the transport when it receives the answer:\nimport asyncio\nclass EchoClientProtocol:\ndef __init__(self, message, on_con_lost):\nself.message = message\nself.on_con_lost = on_con_lost\nself.transport = None\ndef connection_made(self, transport):\nself.transport = transport\nprint('Send:', self.message)\nself.transport.sendto(self.message.encode())\ndef datagram_received(self, data, addr):\nprint(\"Received:\", data.decode())\nprint(\"Close the socket\")\nself.transport.close()\ndef error_received(self, exc):\nprint('Error received:', exc)\ndef connection_lost(self, exc):\nprint(\"Connection closed\")\nself.on_con_lost.set_result(True)\nasync def main():\n# Get a reference to the event loop as we plan to use\n# low-level APIs.\nloop = asyncio.get_running_loop()\non_con_lost = loop.create_future()\nmessage = \"Hello World!\"\ntransport, protocol = await loop.create_datagram_endpoint(\nlambda: EchoClientProtocol(message, on_con_lost),\nremote_addr=('127.0.0.1', 9999))\ntry:\nawait on_con_lost\nfinally:\ntransport.close()\nasyncio.run(main())\nConnecting Existing Sockets\u00b6\nWait until a socket receives data using the\nloop.create_connection()\nmethod with a protocol:\nimport asyncio\nimport socket\nclass MyProtocol(asyncio.Protocol):\ndef __init__(self, on_con_lost):\nself.transport = None\nself.on_con_lost = on_con_lost\ndef connection_made(self, transport):\nself.transport = transport\ndef data_received(self, data):\nprint(\"Received:\", data.decode())\n# We are done: close the transport;\n# connection_lost() will be called automatically.\nself.transport.close()\ndef connection_lost(self, exc):\n# The socket has been closed\nself.on_con_lost.set_result(True)\nasync def main():\n# Get a reference to the event loop as we plan to use\n# low-level APIs.\nloop = 
asyncio.get_running_loop()\non_con_lost = loop.create_future()\n# Create a pair of connected sockets\nrsock, wsock = socket.socketpair()\n# Register the socket to wait for data.\ntransport, protocol = await loop.create_connection(\nlambda: MyProtocol(on_con_lost), sock=rsock)\n# Simulate the reception of data from the network.\nloop.call_soon(wsock.send, 'abc'.encode())\ntry:\nawait protocol.on_con_lost\nfinally:\ntransport.close()\nwsock.close()\nasyncio.run(main())\nSee also\nThe watch a file descriptor for read events example uses the low-level\nloop.add_reader()\nmethod to register an FD.\nThe register an open socket to wait for data using streams example uses high-level streams\ncreated by the open_connection()\nfunction in a coroutine.\nloop.subprocess_exec() and SubprocessProtocol\u00b6\nAn example of a subprocess protocol used to get the output of a subprocess and to wait for the subprocess exit.\nThe subprocess is created by the loop.subprocess_exec()\nmethod:\nimport asyncio\nimport sys\nclass DateProtocol(asyncio.SubprocessProtocol):\ndef __init__(self, exit_future):\nself.exit_future = exit_future\nself.output = bytearray()\nself.pipe_closed = False\nself.exited = False\ndef pipe_connection_lost(self, fd, exc):\nself.pipe_closed = True\nself.check_for_exit()\ndef pipe_data_received(self, fd, data):\nself.output.extend(data)\ndef process_exited(self):\nself.exited = True\n# process_exited() method can be called before\n# pipe_connection_lost() method: wait until both methods are\n# called.\nself.check_for_exit()\ndef check_for_exit(self):\nif self.pipe_closed and self.exited:\nself.exit_future.set_result(True)\nasync def get_date():\n# Get a reference to the event loop as we plan to use\n# low-level APIs.\nloop = asyncio.get_running_loop()\ncode = 'import datetime; print(datetime.datetime.now())'\nexit_future = asyncio.Future(loop=loop)\n# Create the subprocess controlled by DateProtocol;\n# redirect the standard output into a pipe.\ntransport, protocol 
= await loop.subprocess_exec(\nlambda: DateProtocol(exit_future),\nsys.executable, '-c', code,\nstdin=None, stderr=None)\n# Wait for the subprocess exit using the process_exited()\n# method of the protocol.\nawait exit_future\n# Close the stdout pipe.\ntransport.close()\n# Read the output which was collected by the\n# pipe_data_received() method of the protocol.\ndata = bytes(protocol.output)\nreturn data.decode('ascii').rstrip()\ndate = asyncio.run(get_date())\nprint(f\"Current date: {date}\")\nSee also the same example written using high-level APIs.", "code_snippets": [" ", " ", "\n", " ", " ", " ", " ", "\n ", "\n", "\n\n\n", "\n ", " ", "\n ", " ", " ", "\n ", "\n ", " ", " ", "\n\n ", " ", "\n ", " ", " ", "\n ", "\n\n ", "\n ", "\n\n ", "\n ", "\n\n\n", " ", "\n ", "\n ", "\n ", " ", " ", "\n\n ", " ", " ", " ", "\n ", "\n ", " ", "\n\n ", " ", " ", "\n ", " ", "\n\n\n", "\n", "\n\n\n", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n\n ", " ", "\n ", "\n ", "\n\n ", " ", "\n ", "\n\n ", " ", "\n ", "\n ", "\n\n\n", " ", "\n ", "\n ", "\n ", " ", " ", "\n\n ", " ", " ", "\n ", " ", " ", "\n\n ", " ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", "\n\n ", "\n ", "\n ", "\n ", " ", "\n ", "\n ", "\n\n\n", "\n", "\n\n\n", "\n ", " ", "\n ", " ", " ", "\n\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n\n\n", " ", "\n ", "\n\n ", "\n ", "\n ", " ", " ", "\n\n ", "\n ", "\n ", " ", " ", " ", " ", "\n ", "\n ", " ", "\n\n ", "\n ", " ", " ", "\n ", "\n ", "\n\n\n", "\n", "\n\n\n", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n\n ", " ", "\n ", " ", " ", "\n ", " ", "\n ", "\n\n ", " ", " ", "\n ", " ", "\n\n ", "\n ", "\n\n ", " ", "\n ", " ", "\n\n ", " ", "\n ", "\n ", "\n\n\n", " ", "\n ", "\n ", "\n ", " ", " ", "\n\n ", " ", " ", "\n ", " ", " ", "\n\n ", " ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", "\n\n ", "\n ", " ", "\n ", "\n ", "\n\n\n", "\n", "\n", "\n\n\n", "\n\n ", " ", "\n ", 
" ", " ", "\n ", " ", " ", "\n\n ", " ", "\n ", " ", " ", "\n\n ", " ", "\n ", " ", "\n\n ", "\n ", "\n ", "\n\n ", " ", "\n ", "\n ", "\n\n\n", " ", "\n ", "\n ", "\n ", " ", " ", "\n ", " ", " ", "\n\n ", "\n ", " ", " ", " ", "\n\n ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", "\n\n ", "\n ", " ", "\n\n ", "\n ", " ", "\n ", "\n ", "\n ", "\n\n", "\n", "\n", "\n\n", "\n ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n\n ", " ", " ", "\n ", " ", " ", "\n ", "\n\n ", " ", " ", "\n ", "\n\n ", "\n ", " ", " ", "\n ", "\n ", "\n ", "\n ", "\n\n ", "\n ", " ", " ", " ", "\n ", "\n\n", " ", "\n ", "\n ", "\n ", " ", " ", "\n\n ", " ", " ", "\n ", " ", " ", "\n\n ", "\n ", "\n ", " ", " ", " ", " ", "\n ", " ", "\n ", " ", " ", "\n ", " ", "\n\n ", "\n ", "\n ", " ", "\n\n ", "\n ", "\n\n ", "\n ", "\n ", " ", " ", "\n ", " ", "\n\n", " ", " ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 6639} +{"url": "https://docs.python.org/3/library/asyncio-eventloop.html", "title": "Event Loop", "content": "Event Loop\u00b6\nSource code: Lib/asyncio/events.py, Lib/asyncio/base_events.py\nPreface\nThe event loop is the core of every asyncio application. Event loops run asynchronous tasks and callbacks, perform network IO operations, and run subprocesses.\nApplication developers should typically use the high-level asyncio functions,\nsuch as asyncio.run()\n, and should rarely need to reference the loop\nobject or call its methods. 
This section is intended mostly for authors\nof lower-level code, libraries, and frameworks, who need finer control over\nthe event loop behavior.\nObtaining the Event Loop\nThe following low-level functions can be used to get, set, or create an event loop:\n- asyncio.get_running_loop()\u00b6\nReturn the running event loop in the current OS thread.\nRaise a\nRuntimeError\nif there is no running event loop. This function can only be called from a coroutine or a callback.\nAdded in version 3.7.\n- asyncio.get_event_loop()\u00b6\nGet the current event loop.\nWhen called from a coroutine or a callback (e.g. scheduled with call_soon or similar API), this function will always return the running event loop.\nIf there is no running event loop set, the function will return the result of the\nget_event_loop_policy().get_event_loop()\ncall. Because this function has rather complex behavior (especially when custom event loop policies are in use), using the\nget_running_loop()\nfunction is preferred to\nget_event_loop()\nin coroutines and callbacks. As noted above, consider using the higher-level\nasyncio.run()\nfunction, instead of using these lower level functions to manually create and close an event loop.\nChanged in version 3.14: Raises a\nRuntimeError\nif there is no current event loop.\nNote\nThe\nasyncio\npolicy system is deprecated and will be removed in Python 3.16; from there on, this function will return the current running event loop if present, else it will return the loop set by\nset_event_loop()\n.\n- asyncio.set_event_loop(loop)\u00b6\nSet loop as the current event loop for the current OS thread.\n- asyncio.new_event_loop()\u00b6\nCreate and return a new event loop object.\nNote that the behaviour of get_event_loop()\n, set_event_loop()\n,\nand new_event_loop()\nfunctions can be altered by\nsetting a custom event loop policy.\nContents\nThis documentation page contains the following sections:\nThe Event Loop Methods section is the reference documentation of the event loop 
APIs;\nThe Callback Handles section documents the\nHandle\nand\nTimerHandle\ninstances which are returned from scheduling methods such as\nloop.call_soon()\nand\nloop.call_later()\n;\nThe Server Objects section documents types returned from event loop methods like\nloop.create_server()\n;\nThe Event Loop Implementations section documents the\nSelectorEventLoop\nand\nProactorEventLoop\nclasses;\nThe Examples section showcases how to work with some event loop APIs.\nEvent Loop Methods\u00b6\nEvent loops have low-level APIs for the following:\nRunning and stopping the loop\u00b6\n- loop.run_until_complete(future)\u00b6\nRun until the future (an instance of\nFuture\n) has completed.\nIf the argument is a coroutine object it is implicitly scheduled to run as an\nasyncio.Task\n.\nReturn the Future\u2019s result or raise its exception.\n- loop.run_forever()\u00b6\nRun the event loop until\nstop()\nis called.\nIf\nstop()\nis called before\nrun_forever()\nis called, the loop will poll the I/O selector once with a timeout of zero, run all callbacks scheduled in response to I/O events (and those that were already scheduled), and then exit.\nIf\nstop()\nis called while\nrun_forever()\nis running, the loop will run the current batch of callbacks and then exit. Note that new callbacks scheduled by callbacks will not run in this case; instead, they will run the next time\nrun_forever()\nor\nrun_until_complete()\nis called.\n- loop.stop()\u00b6\nStop the event loop.\n- loop.is_running()\u00b6\nReturn\nTrue\nif the event loop is currently running.\n- loop.is_closed()\u00b6\nReturn\nTrue\nif the event loop was closed.\n- loop.close()\u00b6\nClose the event loop.\nThe loop must not be running when this function is called. Any pending callbacks will be discarded.\nThis method clears all queues and shuts down the executor, but does not wait for the executor to finish.\nThis method is idempotent and irreversible. 
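As a sketch of how run_forever(), stop(), is_running(), close() and is_closed() fit together (an illustrative pattern; the loop variable and callback are invented for this example):

```python
import asyncio

# Hypothetical illustration: schedule a callback that stops the loop,
# run until stopped, then close the loop.
loop = asyncio.new_event_loop()
results = []

def work():
    results.append("ran")
    loop.stop()          # run_forever() returns after this batch

loop.call_soon(work)
loop.run_forever()       # returns once stop() has been processed
assert not loop.is_running()

loop.close()             # idempotent; the loop is no longer usable
assert loop.is_closed()
print(results)           # ['ran']
```

In real applications this manual run/stop/close dance is exactly what `asyncio.run()` wraps up for you.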
No other methods should be called after the event loop is closed.\n- async loop.shutdown_asyncgens()\u00b6\nSchedule all currently open asynchronous generator objects to close with an\naclose()\ncall. After calling this method, the event loop will issue a warning if a new asynchronous generator is iterated. This should be used to reliably finalize all scheduled asynchronous generators.Note that there is no need to call this function when\nasyncio.run()\nis used.Example:\ntry: loop.run_forever() finally: loop.run_until_complete(loop.shutdown_asyncgens()) loop.close()\nAdded in version 3.6.\n- async loop.shutdown_default_executor(timeout=None)\u00b6\nSchedule the closure of the default executor and wait for it to join all of the threads in the\nThreadPoolExecutor\n. Once this method has been called, using the default executor withloop.run_in_executor()\nwill raise aRuntimeError\n.The timeout parameter specifies the amount of time (in\nfloat\nseconds) the executor will be given to finish joining. With the default,None\n, the executor is allowed an unlimited amount of time.If the timeout is reached, a\nRuntimeWarning\nis emitted and the default executor is terminated without waiting for its threads to finish joining.Note\nDo not call this method when using\nasyncio.run()\n, as the latter handles default executor shutdown automatically.Added in version 3.9.\nChanged in version 3.12: Added the timeout parameter.\nScheduling callbacks\u00b6\n- loop.call_soon(callback, *args, context=None)\u00b6\nSchedule the callback callback to be called with args arguments at the next iteration of the event loop.\nReturn an instance of\nasyncio.Handle\n, which can be used later to cancel the callback.Callbacks are called in the order in which they are registered. Each callback will be called exactly once.\nThe optional keyword-only context argument specifies a custom\ncontextvars.Context\nfor the callback to run in. 
Callbacks use the current context when no context is provided.Unlike\ncall_soon_threadsafe()\n, this method is not thread-safe.\n- loop.call_soon_threadsafe(callback, *args, context=None)\u00b6\nA thread-safe variant of\ncall_soon()\n. When scheduling callbacks from another thread, this function must be used, sincecall_soon()\nis not thread-safe.This function is safe to be called from a reentrant context or signal handler, however, it is not safe or fruitful to use the returned handle in such contexts.\nRaises\nRuntimeError\nif called on a loop that\u2019s been closed. This can happen on a secondary thread when the main application is shutting down.See the concurrency and multithreading section of the documentation.\nChanged in version 3.7: The context keyword-only parameter was added. See PEP 567 for more details.\nNote\nMost asyncio\nscheduling functions don\u2019t allow passing\nkeyword arguments. To do that, use functools.partial()\n:\n# will schedule \"print(\"Hello\", flush=True)\"\nloop.call_soon(\nfunctools.partial(print, \"Hello\", flush=True))\nUsing partial objects is usually more convenient than using lambdas, as asyncio can render partial objects better in debug and error messages.\nScheduling delayed callbacks\u00b6\nEvent loop provides mechanisms to schedule callback functions to be called at some point in the future. Event loop uses monotonic clocks to track time.\n- loop.call_later(delay, callback, *args, context=None)\u00b6\nSchedule callback to be called after the given delay number of seconds (can be either an int or a float).\nAn instance of\nasyncio.TimerHandle\nis returned which can be used to cancel the callback.callback will be called exactly once. If two callbacks are scheduled for exactly the same time, the order in which they are called is undefined.\nThe optional positional args will be passed to the callback when it is called. 
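A minimal illustration of call_later() with positional args and TimerHandle cancellation (a hypothetical sketch using a throwaway coroutine):

```python
import asyncio

# Hypothetical example: one delayed callback fires, a second one is
# cancelled via its TimerHandle before it can run.
calls = []

async def main():
    loop = asyncio.get_running_loop()
    loop.call_later(0.01, calls.append, "fired")      # positional arg
    handle = loop.call_later(0.01, calls.append, "never")
    handle.cancel()              # the TimerHandle cancels the callback
    await asyncio.sleep(0.05)    # let the first timer fire

asyncio.run(main())
print(calls)  # ['fired']
```

Note the positional argument style: keyword arguments would need `functools.partial()`, as the surrounding text explains.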
Use\nfunctools.partial()\nto pass keyword arguments to callback.An optional keyword-only context argument allows specifying a custom\ncontextvars.Context\nfor the callback to run in. The current context is used when no context is provided.Note\nFor performance, callbacks scheduled with\nloop.call_later()\nmay run up to one clock-resolution early (seetime.get_clock_info('monotonic').resolution\n).Changed in version 3.7: The context keyword-only parameter was added. See PEP 567 for more details.\nChanged in version 3.8: In Python 3.7 and earlier with the default event loop implementation, the delay could not exceed one day. This has been fixed in Python 3.8.\n- loop.call_at(when, callback, *args, context=None)\u00b6\nSchedule callback to be called at the given absolute timestamp when (an int or a float), using the same time reference as\nloop.time()\n.This method\u2019s behavior is the same as\ncall_later()\n.An instance of\nasyncio.TimerHandle\nis returned which can be used to cancel the callback.Note\nFor performance, callbacks scheduled with\nloop.call_at()\nmay run up to one clock-resolution early (seetime.get_clock_info('monotonic').resolution\n).Changed in version 3.7: The context keyword-only parameter was added. See PEP 567 for more details.\nChanged in version 3.8: In Python 3.7 and earlier with the default event loop implementation, the difference between when and the current time could not exceed one day. This has been fixed in Python 3.8.\n- loop.time()\u00b6\nReturn the current time, as a\nfloat\nvalue, according to the event loop\u2019s internal monotonic clock.\nNote\nChanged in version 3.8: In Python 3.7 and earlier timeouts (relative delay or absolute when) should not exceed one day. This has been fixed in Python 3.8.\nSee also\nThe asyncio.sleep()\nfunction.\nCreating Futures and Tasks\u00b6\n- loop.create_future()\u00b6\nCreate an\nasyncio.Future\nobject attached to the event loop.This is the preferred way to create Futures in asyncio. 
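A short sketch combining loop.create_future() and loop.create_task() (the producer coroutine and the task name are invented for illustration):

```python
import asyncio

# Hypothetical sketch: a task computes a value and delivers it
# through a Future created with loop.create_future().
async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()

    async def producer():
        await asyncio.sleep(0)       # stand-in for real work
        fut.set_result(42)

    task = loop.create_task(producer(), name="producer")
    result = await fut               # resumes once the task sets it
    await task
    return result, task.get_name()

print(asyncio.run(main()))  # (42, 'producer')
```

Going through `loop.create_future()` rather than instantiating `asyncio.Future` directly is what allows an alternative event loop to substitute its own Future implementation.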
This lets third-party event loops provide alternative implementations of the Future object (with better performance or instrumentation).\nAdded in version 3.5.2.\n- loop.create_task(coro, *, name=None, context=None, eager_start=None, **kwargs)\u00b6\nSchedule the execution of coroutine coro. Return a\nTask\nobject.Third-party event loops can use their own subclass of\nTask\nfor interoperability. In this case, the result type is a subclass ofTask\n.The full function signature is largely the same as that of the\nTask\nconstructor (or factory) - all of the keyword arguments to this function are passed through to that interface.If the name argument is provided and not\nNone\n, it is set as the name of the task usingTask.set_name()\n.An optional keyword-only context argument allows specifying a custom\ncontextvars.Context\nfor the coro to run in. The current context copy is created when no context is provided.An optional keyword-only eager_start argument allows specifying if the task should execute eagerly during the call to create_task, or be scheduled later. If eager_start is not passed the mode set by\nloop.set_task_factory()\nwill be used.Changed in version 3.8: Added the name parameter.\nChanged in version 3.11: Added the context parameter.\nChanged in version 3.13.3: Added\nkwargs\nwhich passes on arbitrary extra parameters, includingname\nandcontext\n.Changed in version 3.13.4: Rolled back the change that passes on name and context (if it is None), while still passing on other arbitrary keyword arguments (to avoid breaking backwards compatibility with 3.13.3).\nChanged in version 3.14: All kwargs are now passed on. The eager_start parameter works with eager task factories.\n- loop.set_task_factory(factory)\u00b6\nSet a task factory that will be used by\nloop.create_task()\n.If factory is\nNone\nthe default task factory will be set. 
Otherwise, factory must be a callable with the signature matching(loop, coro, **kwargs)\n, where loop is a reference to the active event loop, and coro is a coroutine object. The callable must pass on all kwargs, and return aasyncio.Task\n-compatible object.Changed in version 3.13.3: Required that all kwargs are passed on to\nasyncio.Task\n.Changed in version 3.13.4: name is no longer passed to task factories. context is no longer passed to task factories if it is\nNone\n.Changed in version 3.14: name and context are now unconditionally passed on to task factories again.\n- loop.get_task_factory()\u00b6\nReturn a task factory or\nNone\nif the default one is in use.\nOpening network connections\u00b6\n- async loop.create_connection(protocol_factory, host=None, port=None, *, ssl=None, family=0, proto=0, flags=0, sock=None, local_addr=None, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, happy_eyeballs_delay=None, interleave=None, all_errors=False)\u00b6\nOpen a streaming transport connection to a given address specified by host and port.\nThe socket family can be either\nAF_INET\norAF_INET6\ndepending on host (or the family argument, if provided).The socket type will be\nSOCK_STREAM\n.protocol_factory must be a callable returning an asyncio protocol implementation.\nThis method will try to establish the connection in the background. 
When successful, it returns a\n(transport, protocol)\npair.The chronological synopsis of the underlying operation is as follows:\nThe connection is established and a transport is created for it.\nprotocol_factory is called without arguments and is expected to return a protocol instance.\nThe protocol instance is coupled with the transport by calling its\nconnection_made()\nmethod.A\n(transport, protocol)\ntuple is returned on success.\nThe created transport is an implementation-dependent bidirectional stream.\nOther arguments:\nssl: if given and not false, a SSL/TLS transport is created (by default a plain TCP transport is created). If ssl is a\nssl.SSLContext\nobject, this context is used to create the transport; if ssl isTrue\n, a default context returned fromssl.create_default_context()\nis used.See also\nserver_hostname sets or overrides the hostname that the target server\u2019s certificate will be matched against. Should only be passed if ssl is not\nNone\n. By default the value of the host argument is used. If host is empty, there is no default and you must pass a value for server_hostname. If server_hostname is an empty string, hostname matching is disabled (which is a serious security risk, allowing for potential man-in-the-middle attacks).family, proto, flags are the optional address family, protocol and flags to be passed through to getaddrinfo() for host resolution. If given, these should all be integers from the corresponding\nsocket\nmodule constants.happy_eyeballs_delay, if given, enables Happy Eyeballs for this connection. It should be a floating-point number representing the amount of time in seconds to wait for a connection attempt to complete, before starting the next attempt in parallel. This is the \u201cConnection Attempt Delay\u201d as defined in RFC 8305. A sensible default value recommended by the RFC is\n0.25\n(250 milliseconds).interleave controls address reordering when a host name resolves to multiple IP addresses. 
If 0 or unspecified, no reordering is done, and addresses are tried in the order returned by getaddrinfo(). If a positive integer is specified, the addresses are interleaved by address family, and the given integer is interpreted as "First Address Family Count" as defined in RFC 8305. The default is 0 if happy_eyeballs_delay is not specified, and 1 if it is.
sock, if given, should be an existing, already connected socket.socket object to be used by the transport. If sock is given, none of host, port, family, proto, flags, happy_eyeballs_delay, interleave and local_addr should be specified.
Note
The sock argument transfers ownership of the socket to the transport created. To close the socket, call the transport's close() method.
local_addr, if given, is a (local_host, local_port) tuple used to bind the socket locally. The local_host and local_port are looked up using getaddrinfo(), similarly to host and port.
ssl_handshake_timeout is (for a TLS connection) the time in seconds to wait for the TLS handshake to complete before aborting the connection. 60.0 seconds if None (default).
ssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection. 30.0 seconds if None (default).
all_errors determines what exceptions are raised when a connection cannot be created. By default, only a single Exception is raised: the first exception if there is only one or all errors have the same message, or a single OSError with the error messages combined.
When all_errors is True, an ExceptionGroup will be raised containing all exceptions (even if there is only one).
Changed in version 3.5: Added support for SSL/TLS in ProactorEventLoop.
Changed in version 3.6: The socket option socket.TCP_NODELAY is set by default for all TCP connections.
Changed in version 3.7: Added the ssl_handshake_timeout parameter.
Changed in version 3.8: Added the happy_eyeballs_delay and interleave parameters.
Happy Eyeballs Algorithm: Success with Dual-Stack Hosts. When a server's IPv4 path and protocol are working, but the server's IPv6 path and protocol are not working, a dual-stack client application experiences significant connection delay compared to an IPv4-only client. This is undesirable because it causes the dual-stack client to have a worse user experience. This document specifies requirements for algorithms that reduce this user-visible delay and provides an algorithm.
For more information: https://datatracker.ietf.org/doc/html/rfc6555
Changed in version 3.11: Added the ssl_shutdown_timeout parameter.
Changed in version 3.12: all_errors was added.
See also
The open_connection() function is a high-level alternative API. It returns a pair of (StreamReader, StreamWriter) that can be used directly in async/await code.

- async loop.create_datagram_endpoint(protocol_factory, local_addr=None, remote_addr=None, *, family=0, proto=0, flags=0, reuse_port=None, allow_broadcast=None, sock=None)¶
Create a datagram connection.
The socket family can be either AF_INET, AF_INET6, or AF_UNIX, depending on host (or the family argument, if provided).
The socket type will be SOCK_DGRAM.
protocol_factory must be a callable returning a protocol implementation.
A tuple of (transport, protocol) is returned on success.
Other arguments:
local_addr, if given, is a (local_host, local_port) tuple used to bind the socket locally.
The local_host and local_port are looked up using getaddrinfo().
Note
On Windows, when using the proactor event loop with local_addr=None, an OSError with errno.WSAEINVAL will be raised when running it.
remote_addr, if given, is a (remote_host, remote_port) tuple used to connect the socket to a remote address. The remote_host and remote_port are looked up using getaddrinfo().
family, proto, flags are the optional address family, protocol and flags to be passed through to getaddrinfo() for host resolution. If given, these should all be integers from the corresponding socket module constants.
reuse_port tells the kernel to allow this endpoint to be bound to the same port as other existing endpoints are bound to, so long as they all set this flag when being created. This option is not supported on Windows and some Unixes. If the socket.SO_REUSEPORT constant is not defined then this capability is unsupported.
allow_broadcast tells the kernel to allow this endpoint to send messages to the broadcast address.
sock can optionally be specified in order to use a preexisting, already connected, socket.socket object to be used by the transport. If specified, local_addr and remote_addr should be omitted (must be None).
Note
The sock argument transfers ownership of the socket to the transport created. To close the socket, call the transport's close() method.
See UDP echo client protocol and UDP echo server protocol examples.
Changed in version 3.4.4: The family, proto, flags, reuse_address, reuse_port, allow_broadcast, and sock parameters were added.
Changed in version 3.8: Added support for Windows.
Changed in version 3.8.1: The reuse_address parameter is no longer supported, as using socket.SO_REUSEADDR poses a significant security concern for UDP.
Explicitly passing reuse_address=True will raise an exception.
When multiple processes with differing UIDs assign sockets to an identical UDP socket address with SO_REUSEADDR, incoming packets can become randomly distributed among the sockets.
For supported platforms, reuse_port can be used as a replacement for similar functionality. With reuse_port, socket.SO_REUSEPORT is used instead, which specifically prevents processes with differing UIDs from assigning sockets to the same socket address.
Changed in version 3.11: The reuse_address parameter, disabled since Python 3.8.1, 3.7.6 and 3.6.10, has been entirely removed.

- async loop.create_unix_connection(protocol_factory, path=None, *, ssl=None, sock=None, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None)¶
Create a Unix connection.
The socket family will be AF_UNIX; socket type will be SOCK_STREAM.
A tuple of (transport, protocol) is returned on success.
path is the name of a Unix domain socket and is required, unless a sock parameter is specified. Abstract Unix sockets, str, bytes, and Path paths are supported.
See the documentation of the loop.create_connection() method for information about arguments to this method.
Availability: Unix.
Changed in version 3.7: Added the ssl_handshake_timeout parameter.
The path parameter can now be a path-like object.
Changed in version 3.11: Added the ssl_shutdown_timeout parameter.

Creating network servers¶

- async loop.create_server(protocol_factory, host=None, port=None, *, family=socket.AF_UNSPEC, flags=socket.AI_PASSIVE, sock=None, backlog=100, ssl=None, reuse_address=None, reuse_port=None, keep_alive=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, start_serving=True)¶
Create a TCP server (socket type SOCK_STREAM) listening on port of the host address.
Returns a Server object.
Arguments:
protocol_factory must be a callable returning a protocol implementation.
The host parameter can be set to several types which determine where the server would be listening:
If host is a string, the TCP server is bound to a single network interface specified by host.
If host is a sequence of strings, the TCP server is bound to all network interfaces specified by the sequence.
If host is an empty string or None, all interfaces are assumed and a list of multiple sockets will be returned (most likely one for IPv4 and another one for IPv6).
The port parameter can be set to specify which port the server should listen on. If 0 or None (the default), a random unused port will be selected (note that if host resolves to multiple network interfaces, a different random port will be selected for each interface).
family can be set to either socket.AF_INET or AF_INET6 to force the socket to use IPv4 or IPv6. If not set, the family will be determined from host name (defaults to AF_UNSPEC).
flags is a bitmask for getaddrinfo().
sock can optionally be specified in order to use a preexisting socket object. If specified, host and port must not be specified.
Note
The sock argument transfers ownership of the socket to the server created.
To close the socket, call the server's close() method.
backlog is the maximum number of queued connections passed to listen() (defaults to 100).
ssl can be set to an SSLContext instance to enable TLS over the accepted connections.
reuse_address tells the kernel to reuse a local socket in TIME_WAIT state, without waiting for its natural timeout to expire. If not specified, it will automatically be set to True on Unix.
reuse_port tells the kernel to allow this endpoint to be bound to the same port as other existing endpoints are bound to, so long as they all set this flag when being created. This option is not supported on Windows.
keep_alive set to True keeps connections active by enabling the periodic transmission of messages.
Changed in version 3.13: Added the keep_alive parameter.
ssl_handshake_timeout is (for a TLS server) the time in seconds to wait for the TLS handshake to complete before aborting the connection. 60.0 seconds if None (default).
ssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection. 30.0 seconds if None (default).
start_serving set to True (the default) causes the created server to start accepting connections immediately. When set to False, the user should await on Server.start_serving() or Server.serve_forever() to make the server start accepting connections.
Changed in version 3.5: Added support for SSL/TLS in ProactorEventLoop.
Changed in version 3.5.1: The host parameter can be a sequence of strings.
Changed in version 3.6: Added ssl_handshake_timeout and start_serving parameters.
The socket option socket.TCP_NODELAY is set by default for all TCP connections.
Changed in version 3.11: Added the ssl_shutdown_timeout parameter.
See also
The start_server() function is a higher-level alternative API that returns a pair of StreamReader and StreamWriter that can be used in async/await code.

- async loop.create_unix_server(protocol_factory, path=None, *, sock=None, backlog=100, ssl=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, start_serving=True, cleanup_socket=True)¶
Similar to loop.create_server() but works with the AF_UNIX socket family.
path is the name of a Unix domain socket, and is required, unless a sock argument is provided. Abstract Unix sockets, str, bytes, and Path paths are supported.
If cleanup_socket is true then the Unix socket will automatically be removed from the filesystem when the server is closed, unless the socket has been replaced after the server has been created.
See the documentation of the loop.create_server() method for information about arguments to this method.
Availability: Unix.
Changed in version 3.7: Added the ssl_handshake_timeout and start_serving parameters. The path parameter can now be a Path object.
Changed in version 3.11: Added the ssl_shutdown_timeout parameter.
Changed in version 3.13: Added the cleanup_socket parameter.

- async loop.connect_accepted_socket(protocol_factory, sock, *, ssl=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None)¶
Wrap an already accepted connection into a transport/protocol pair.
This method can be used by servers that accept connections outside of asyncio but that use asyncio to handle them.
Parameters:
protocol_factory must be a callable returning a protocol implementation.
sock is a preexisting socket object returned from socket.accept.
Note
The sock argument transfers ownership of the socket to the transport created.
To close the socket, call the transport's close() method.
ssl can be set to an SSLContext to enable SSL over the accepted connections.
ssl_handshake_timeout is (for an SSL connection) the time in seconds to wait for the SSL handshake to complete before aborting the connection. 60.0 seconds if None (default).
ssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection. 30.0 seconds if None (default).
Returns a (transport, protocol) pair.
Added in version 3.5.3.
Changed in version 3.7: Added the ssl_handshake_timeout parameter.
Changed in version 3.11: Added the ssl_shutdown_timeout parameter.

Transferring files¶

- async loop.sendfile(transport, file, offset=0, count=None, *, fallback=True)¶
Send a file over a transport. Return the total number of bytes sent.
The method uses high-performance os.sendfile() if available.
file must be a regular file object opened in binary mode.
offset tells from where to start reading the file. If specified, count is the total number of bytes to transmit as opposed to sending the file until EOF is reached. File position is always updated, even when this method raises an error, and file.tell() can be used to obtain the actual number of bytes sent.
fallback set to True makes asyncio manually read and send the file when the platform does not support the sendfile system call (e.g. Windows or SSL socket on Unix).
Raise SendfileNotAvailableError if the system does not support the sendfile syscall and fallback is False.
Added in version 3.7.

TLS Upgrade¶

- async loop.start_tls(transport, protocol, sslcontext, *, server_side=False, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None)¶
Upgrade an existing transport-based connection to TLS.
Create a TLS coder/decoder instance and insert it between the transport and the protocol.
The coder/decoder implements both transport-facing protocol and protocol-facing transport.
Return the created two-interface instance. After await, the protocol must stop using the original transport and communicate with the returned object only because the coder caches protocol-side data and sporadically exchanges extra TLS session packets with the transport.
In some situations (e.g. when the passed transport is already closing) this may return None.
Parameters:
transport and protocol instances that methods like create_server() and create_connection() return.
sslcontext: a configured instance of SSLContext.
server_side: pass True when a server-side connection is being upgraded (like the one created by create_server()).
server_hostname: sets or overrides the host name that the target server's certificate will be matched against.
ssl_handshake_timeout is (for a TLS connection) the time in seconds to wait for the TLS handshake to complete before aborting the connection. 60.0 seconds if None (default).
ssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection. 30.0 seconds if None (default).
Added in version 3.7.
Changed in version 3.11: Added the ssl_shutdown_timeout parameter.

Watching file descriptors¶

- loop.add_reader(fd, callback, *args)¶
Start monitoring the fd file descriptor for read availability and invoke callback with the specified arguments once fd is available for reading.
Any preexisting callback registered for fd is cancelled and replaced by callback.

- loop.remove_reader(fd)¶
Stop monitoring the fd file descriptor for read availability.
Returns True if fd was previously being monitored for reads.

- loop.add_writer(fd, callback, *args)¶
Start monitoring the fd file descriptor for write availability and invoke callback with the specified arguments once fd is available for writing.
Any preexisting callback registered for fd is cancelled and replaced by callback.
Use functools.partial() to pass keyword arguments to callback.

- loop.remove_writer(fd)¶
Stop monitoring the fd file descriptor for write availability. Returns True if fd was previously being monitored for writes.
See also the Platform Support section for some limitations of these methods.

Working with socket objects directly¶

In general, protocol implementations that use transport-based APIs such as loop.create_connection() and loop.create_server() are faster than implementations that work with sockets directly. However, there are some use cases when performance is not critical, and working with socket objects directly is more convenient.

- async loop.sock_recv(sock, nbytes)¶
Receive up to nbytes from sock. Asynchronous version of socket.recv().
Return the received data as a bytes object.
sock must be a non-blocking socket.
Changed in version 3.7: Even though this method was always documented as a coroutine method, releases before Python 3.7 returned a Future. Since Python 3.7 this is an async def method.

- async loop.sock_recv_into(sock, buf)¶
Receive data from sock into the buf buffer. Modeled after the blocking socket.recv_into() method.
Return the number of bytes written to the buffer.
sock must be a non-blocking socket.
Added in version 3.7.

- async loop.sock_recvfrom(sock, bufsize)¶
Receive a datagram of up to bufsize from sock.
Asynchronous version of socket.recvfrom().
Return a tuple of (received data, remote address).
sock must be a non-blocking socket.
Added in version 3.11.

- async loop.sock_recvfrom_into(sock, buf, nbytes=0)¶
Receive a datagram of up to nbytes from sock into buf. Asynchronous version of socket.recvfrom_into().
Return a tuple of (number of bytes received, remote address).
sock must be a non-blocking socket.
Added in version 3.11.

- async loop.sock_sendall(sock, data)¶
Send data to the sock socket. Asynchronous version of socket.sendall().
This method continues to send to the socket until either all data in data has been sent or an error occurs. None is returned on success. On error, an exception is raised. Additionally, there is no way to determine how much data, if any, was successfully processed by the receiving end of the connection.
sock must be a non-blocking socket.
Changed in version 3.7: Even though the method was always documented as a coroutine method, before Python 3.7 it returned a Future. Since Python 3.7, this is an async def method.

- async loop.sock_sendto(sock, data, address)¶
Send a datagram from sock to address. Asynchronous version of socket.sendto().
Return the number of bytes sent.
sock must be a non-blocking socket.
Added in version 3.11.

- async loop.sock_connect(sock, address)¶
Connect sock to a remote socket at address.
Asynchronous version of socket.connect().
sock must be a non-blocking socket.
Changed in version 3.5.2: address no longer needs to be resolved. sock_connect will try to check if the address is already resolved by calling socket.inet_pton(). If not, loop.getaddrinfo() will be used to resolve the address.

- async loop.sock_accept(sock)¶
Accept a connection. Modeled after the blocking socket.accept() method.
The socket must be bound to an address and listening for connections.
The return value is a pair (conn, address) where conn is a new socket object usable to send and receive data on the connection, and address is the address bound to the socket on the other end of the connection.
sock must be a non-blocking socket.
Changed in version 3.7: Even though the method was always documented as a coroutine method, before Python 3.7 it returned a Future. Since Python 3.7, this is an async def method.

- async loop.sock_sendfile(sock, file, offset=0, count=None, *, fallback=True)¶
Send a file using high-performance os.sendfile if possible. Return the total number of bytes sent.
Asynchronous version of socket.sendfile().
sock must be a non-blocking socket.SOCK_STREAM socket.
file must be a regular file object open in binary mode.
offset tells from where to start reading the file. If specified, count is the total number of bytes to transmit as opposed to sending the file until EOF is reached. File position is always updated, even when this method raises an error, and file.tell() can be used to obtain the actual number of bytes sent.
fallback, when set to True, makes asyncio manually read and send the file when the platform does not support the sendfile syscall (e.g. Windows or SSL socket on Unix).
Raise SendfileNotAvailableError if the system does not support the sendfile syscall and fallback is False.
sock must be a non-blocking socket.
Added in version 3.7.

DNS¶

- async loop.getaddrinfo(host, port, *, family=0, type=0, proto=0, flags=0)¶
Asynchronous version of socket.getaddrinfo().

- async loop.getnameinfo(sockaddr, flags=0)¶
Asynchronous version of socket.getnameinfo().
Note
Both getaddrinfo and getnameinfo internally utilize their synchronous versions through the loop's default thread pool executor. When this executor is saturated, these methods may experience delays, which higher-level networking libraries may report as increased timeouts.
To mitigate this, consider using a custom executor for other user tasks, or setting a default executor with a larger number of workers.
Changed in version 3.7: Both getaddrinfo and getnameinfo methods were always documented to return a coroutine, but prior to Python 3.7 they were, in fact, returning asyncio.Future objects. Starting with Python 3.7 both methods are coroutines.

Working with pipes¶

- async loop.connect_read_pipe(protocol_factory, pipe)¶
Register the read end of pipe in the event loop.
protocol_factory must be a callable returning an asyncio protocol implementation.
pipe is a file-like object.
Return pair (transport, protocol), where transport supports the ReadTransport interface and protocol is an object instantiated by the protocol_factory.
With the SelectorEventLoop event loop, the pipe is set to non-blocking mode.

- async loop.connect_write_pipe(protocol_factory, pipe)¶
Register the write end of pipe in the event loop.
protocol_factory must be a callable returning an asyncio protocol implementation.
pipe is a file-like object.
Return pair (transport, protocol), where transport supports the WriteTransport interface and protocol is an object instantiated by the protocol_factory.
With the SelectorEventLoop event loop, the pipe is set to non-blocking mode.
Note
SelectorEventLoop does not support the above methods on Windows. Use ProactorEventLoop instead for Windows.
See also
The loop.subprocess_exec() and loop.subprocess_shell() methods.

Unix signals¶

- loop.add_signal_handler(signum, callback, *args)¶
Set callback as the handler for the signum signal, passing args as positional arguments.
The callback will be invoked by loop, along with other queued callbacks and runnable coroutines of that event loop.
Unlike signal handlers registered using signal.signal(), a callback registered with this function is allowed to interact with the event loop.
Raise ValueError if the signal number is invalid or uncatchable. Raise RuntimeError if there is a problem setting up the handler.
Use functools.partial() to pass keyword arguments to callback.
Like signal.signal(), this function must be invoked in the main thread.

- loop.remove_signal_handler(sig)¶
Remove the handler for the sig signal.
Return True if the signal handler was removed, or False if no handler was set for the given signal.
Availability: Unix.
See also
The signal module.

Executing code in thread or process pools¶

- awaitable loop.run_in_executor(executor, func, *args)¶
Arrange for func to be called in the specified executor passing args as positional arguments.
The executor argument should be a concurrent.futures.Executor instance. The default executor is used if executor is None. The default executor can be set by loop.set_default_executor(); otherwise, a concurrent.futures.ThreadPoolExecutor will be lazily initialized and used by run_in_executor() if needed.
Example:

import asyncio
import concurrent.futures

def blocking_io():
    # File operations (such as logging) can block the
    # event loop: run them in a thread pool.
    with open('/dev/urandom', 'rb') as f:
        return f.read(100)

def cpu_bound():
    # CPU-bound operations will block the event loop:
    # in general it is preferable to run them in a
    # process pool.
    return sum(i * i for i in range(10 ** 7))

async def main():
    loop = asyncio.get_running_loop()

    ## Options:

    # 1. Run in the default loop's executor:
    result = await loop.run_in_executor(
        None, blocking_io)
    print('default thread pool', result)

    # 2. Run in a custom thread pool:
    with concurrent.futures.ThreadPoolExecutor() as pool:
        result = await loop.run_in_executor(
            pool, blocking_io)
        print('custom thread pool', result)

    # 3. Run in a custom process pool:
    with concurrent.futures.ProcessPoolExecutor() as pool:
        result = await loop.run_in_executor(
            pool, cpu_bound)
        print('custom process pool', result)

    # 4. Run in a custom interpreter pool:
    with concurrent.futures.InterpreterPoolExecutor() as pool:
        result = await loop.run_in_executor(
            pool, cpu_bound)
        print('custom interpreter pool', result)

if __name__ == '__main__':
    asyncio.run(main())

Note that the entry point guard (if __name__ == '__main__') is required for option 3 due to the peculiarities of multiprocessing, which is used by ProcessPoolExecutor. See Safe importing of main module.
This method returns an asyncio.Future object.
Use functools.partial() to pass keyword arguments to func.
Changed in version 3.5.3: loop.run_in_executor() no longer configures the max_workers of the thread pool executor it creates, instead leaving it up to the thread pool executor (ThreadPoolExecutor) to set the default.

- loop.set_default_executor(executor)¶
Set executor as the default executor used by run_in_executor(). executor must be an instance of ThreadPoolExecutor, which includes InterpreterPoolExecutor.
Changed in version 3.11: executor must be an instance of ThreadPoolExecutor.

Error Handling API¶

Allows customizing how exceptions are handled in the event loop.

- loop.set_exception_handler(handler)¶
Set handler as the new event loop exception handler.
If handler is None, the default exception handler will be set.
Otherwise, handler must be a callable with the signature matching (loop, context), where loop is a reference to the active event loop, and context is a dict object containing the details of the exception (see call_exception_handler() documentation for details about context).
If the handler is called on behalf of a Task or Handle, it is run in the contextvars.Context of that task or callback handle.
Changed in version 3.12: The handler may be called in the Context of the task or handle where the exception originated.

- loop.get_exception_handler()¶
Return the current exception handler, or None if no custom exception handler was set.
Added in version 3.5.2.

- loop.default_exception_handler(context)¶
Default exception handler.
This is called when an exception occurs and no exception handler is set. This can be called by a custom exception handler that wants to defer to the default handler behavior.
The context parameter has the same meaning as in call_exception_handler().

- loop.call_exception_handler(context)¶
Call the current event loop exception handler.
context is a dict object containing the following keys (new keys may be introduced in future Python versions):
'message': Error message;
'exception' (optional): Exception object;
'future' (optional): asyncio.Future instance;
'task' (optional): asyncio.Task instance;
'handle' (optional): asyncio.Handle instance;
'protocol' (optional): Protocol instance;
'transport' (optional): Transport instance;
'socket' (optional): socket.socket instance;
'source_traceback' (optional): Traceback of the source;
'handle_traceback' (optional): Traceback of the handle;
'asyncgen' (optional): Asynchronous generator that caused the exception.
Note
This method should not be overloaded in subclassed event loops.
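A minimal sketch of the handler API described above (the record_handler name and the reports list are illustrative, not part of the API): a callable with the (loop, context) signature is installed with set_exception_handler() and then invoked through call_exception_handler().

```python
import asyncio

reports = []

def record_handler(loop, context):
    # context is a dict with at least a 'message' key; an
    # 'exception' key is present only when an exception object
    # is attached to the report.
    exc = context.get('exception')
    reports.append((context['message'],
                    type(exc).__name__ if exc else None))

async def main():
    loop = asyncio.get_running_loop()
    loop.set_exception_handler(record_handler)
    assert loop.get_exception_handler() is record_handler
    # Invoke the handler directly, as the event loop itself would
    # do for an error that nothing else caught:
    loop.call_exception_handler(
        {'message': 'manual report', 'exception': RuntimeError('boom')})

asyncio.run(main())
print(reports)  # [('manual report', 'RuntimeError')]
```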
For custom exception handling, use the set_exception_handler() method.

Enabling debug mode¶

- loop.get_debug()¶
Get the debug mode (bool) of the event loop.
The default value is True if the environment variable PYTHONASYNCIODEBUG is set to a non-empty string, False otherwise.

- loop.set_debug(enabled: bool)¶
Set the debug mode of the event loop.
Changed in version 3.7: The new Python Development Mode can now also be used to enable the debug mode.

- loop.slow_callback_duration¶
This attribute can be used to set the minimum execution duration in seconds that is considered "slow". When debug mode is enabled, "slow" callbacks are logged.
The default value is 100 milliseconds.

Running Subprocesses¶

Methods described in this subsection are low-level. In regular async/await code consider using the high-level asyncio.create_subprocess_shell() and asyncio.create_subprocess_exec() convenience functions instead.
Note
On Windows, the default event loop ProactorEventLoop supports subprocesses, whereas SelectorEventLoop does not. See Subprocess Support on Windows for details.

- async loop.subprocess_exec(protocol_factory, *args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kwargs)¶
Create a subprocess from one or more string arguments specified by args.
args must be a list of strings represented by:
str;
or bytes, encoded to the filesystem encoding.
The first string specifies the program executable, and the remaining strings specify the arguments.
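As an illustrative sketch of driving subprocess_exec() (the CollectOutput protocol name is ours, not part of the API), a SubprocessProtocol can gather the child's stdout and signal completion once both the pipe has reached EOF and the process has exited:

```python
import asyncio
import sys

class CollectOutput(asyncio.SubprocessProtocol):
    # Accumulate stdout; resolve a Future when the child is done.
    def __init__(self, done):
        self.done = done
        self.buf = bytearray()
        self.exited = False
        self.eof = False

    def pipe_data_received(self, fd, data):
        if fd == 1:          # fd 1 is the child's stdout
            self.buf.extend(data)

    def pipe_connection_lost(self, fd, exc):
        if fd == 1:          # stdout closed: all data delivered
            self.eof = True
            self._maybe_finish()

    def process_exited(self):
        self.exited = True
        self._maybe_finish()

    def _maybe_finish(self):
        # Wait for both events; their relative order is not guaranteed.
        if self.exited and self.eof and not self.done.done():
            self.done.set_result(bytes(self.buf))

async def main():
    loop = asyncio.get_running_loop()
    done = loop.create_future()
    transport, protocol = await loop.subprocess_exec(
        lambda: CollectOutput(done),
        sys.executable, '-c', 'print("hi")')
    output = await done
    transport.close()
    return output

print(asyncio.run(main()).strip())  # b'hi'
```

Running the current interpreter (sys.executable) keeps the sketch portable; on Windows the raw output ends in b'\r\n', hence the strip().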
Together, string arguments form the argv of the program.
This is similar to the standard library subprocess.Popen class called with shell=False and the list of strings passed as the first argument; however, where Popen takes a single argument which is a list of strings, subprocess_exec takes multiple string arguments.
The protocol_factory must be a callable returning a subclass of the asyncio.SubprocessProtocol class.
Other parameters:
stdin can be any of these:
a file-like object
an existing file descriptor (a positive integer), for example those created with os.pipe()
the subprocess.PIPE constant (default) which will create a new pipe and connect it,
the value None which will make the subprocess inherit the file descriptor from this process
the subprocess.DEVNULL constant which indicates that the special os.devnull file will be used
stdout can be any of these:
a file-like object
the subprocess.PIPE constant (default) which will create a new pipe and connect it,
the value None which will make the subprocess inherit the file descriptor from this process
the subprocess.DEVNULL constant which indicates that the special os.devnull file will be used
stderr can be any of these:
a file-like object
the subprocess.PIPE constant (default) which will create a new pipe and connect it,
the value None which will make the subprocess inherit the file descriptor from this process
the subprocess.DEVNULL constant which indicates that the special os.devnull file will be used
the subprocess.STDOUT constant which will connect the standard error stream to the process' standard output stream
All other keyword arguments are passed to subprocess.Popen without interpretation, except for bufsize, universal_newlines, shell, text, encoding and errors, which should not be specified at all.
The asyncio subprocess API does not support decoding the streams as text. bytes.decode() can be used to convert the bytes returned from the stream to text.
If a file-like object passed as stdin, stdout or stderr represents a pipe, then the other side of this pipe should be registered with connect_write_pipe() or connect_read_pipe() for use with the event loop.
See the constructor of the subprocess.Popen class for documentation on other arguments.
Returns a pair of (transport, protocol), where transport conforms to the asyncio.SubprocessTransport base class and protocol is an object instantiated by the protocol_factory.
If the transport is closed or is garbage collected, the child process is killed if it is still running.

- async loop.subprocess_shell(protocol_factory, cmd, *, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kwargs)¶
Create a subprocess from cmd, which can be a str or a bytes string encoded to the filesystem encoding, using the platform's "shell" syntax.
This is similar to the standard library subprocess.Popen class called with shell=True.
The protocol_factory must be a callable returning a subclass of the SubprocessProtocol class.
See subprocess_exec() for more details about the remaining arguments.
Returns a pair of (transport, protocol), where transport conforms to the SubprocessTransport base class and protocol is an object instantiated by the protocol_factory.
If the transport is closed or is garbage collected, the child process is killed if it is still running.
Note
It is the application's responsibility to ensure that all whitespace and special characters are quoted appropriately to avoid shell injection vulnerabilities.
The shlex.quote()\nfunction can be used to\nproperly escape whitespace and special characters in strings that\nare going to be used to construct shell commands.\nCallback Handles\u00b6\n- class asyncio.Handle\u00b6\nA callback wrapper object returned by\nloop.call_soon()\n, loop.call_soon_threadsafe()\n.\n- get_context()\u00b6\nReturn the\ncontextvars.Context\nobject associated with the handle.\nAdded in version 3.12.\n- cancel()\u00b6\nCancel the callback. If the callback has already been canceled or executed, this method has no effect.\n- cancelled()\u00b6\nReturn\nTrue\nif the callback was cancelled.\nAdded in version 3.7.\n- class asyncio.TimerHandle\u00b6\nA callback wrapper object returned by\nloop.call_later()\n, and loop.call_at()\n.\nThis class is a subclass of\nHandle\n.\n- when()\u00b6\nReturn a scheduled callback time as\nfloat\nseconds.\nThe time is an absolute timestamp, using the same time reference as\nloop.time()\n.\nAdded in version 3.7.\nServer Objects\u00b6\nServer objects are created by loop.create_server()\n,\nloop.create_unix_server()\n, start_server()\n,\nand start_unix_server()\nfunctions.\nDo not instantiate the Server\nclass directly.\n- class asyncio.Server\u00b6\nServer objects are asynchronous context managers. When used in an\nasync with\nstatement, it\u2019s guaranteed that the Server object is closed and not accepting new connections when the async with\nstatement is completed:\nsrv = await loop.create_server(...)
async with srv:\n    # some code\n# At this point, srv is closed and no longer accepts new connections.\nChanged in version 3.7: Server object is an asynchronous context manager since Python 3.7.\nChanged in version 3.11: This class was exposed publicly as\nasyncio.Server\nin Python 3.9.11, 3.10.3 and 3.11.\n- close()\u00b6\nStop serving: close listening sockets and set the\nsockets\nattribute to None\n. The sockets that represent existing incoming client connections are left open.\nThe server is closed asynchronously; use the\nwait_closed()\ncoroutine to wait until the server is closed (and no more connections are active).\n- close_clients()\u00b6\nClose all existing incoming client connections.\nCalls\nclose()\non all associated transports. close()\nshould be called before close_clients()\nwhen closing the server to avoid races with new clients connecting.\nAdded in version 3.13.\n- abort_clients()\u00b6\nClose all existing incoming client connections immediately, without waiting for pending operations to complete.\nCalls\nabort()\non all associated transports. close()\nshould be called before abort_clients()\nwhen closing the server to avoid races with new clients connecting.\nAdded in version 3.13.\n- get_loop()\u00b6\nReturn the event loop associated with the server object.\nAdded in version 3.7.\n- async start_serving()\u00b6\nStart accepting connections.\nThis method is idempotent, so it can be called when the server is already serving.\nThe start_serving keyword-only parameter to\nloop.create_server()\nand asyncio.start_server()\nallows creating a Server object that is not accepting connections initially. In this case Server.start_serving()\n, or Server.serve_forever()\ncan be used to make the Server start accepting connections.\nAdded in version 3.7.\n- async serve_forever()\u00b6\nStart accepting connections until the coroutine is cancelled.
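The context-manager behaviour and is_serving() described above can be exercised with a short, self-contained sketch. It binds an ephemeral port on 127.0.0.1 by asking for port 0, and the trivial handler exists only for illustration.

```python
import asyncio

async def main():
    async def handle(reader, writer):
        writer.close()  # trivial handler, just for the sketch

    srv = await asyncio.start_server(handle, '127.0.0.1', 0)
    async with srv:
        # Inside the block the server is accepting connections.
        assert srv.is_serving()
    # Leaving the async with block closed the server.
    return srv.is_serving()

print(asyncio.run(main()))  # → False
```

Exiting the async with block is equivalent to calling close() and awaiting wait_closed(), which is why is_serving() reports False afterwards.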
Cancellation of the\nserve_forever\ntask causes the server to be closed. This method can be called if the server is already accepting connections. Only one\nserve_forever\ntask can exist per Server object. Example:\nasync def client_connected(reader, writer):\n    # Communicate with the client with\n    # reader/writer streams. For example:\n    await reader.readline()\n\nasync def main(host, port):\n    srv = await asyncio.start_server(\n        client_connected, host, port)\n    await srv.serve_forever()\n\nasyncio.run(main('127.0.0.1', 0))\nAdded in version 3.7.\n- is_serving()\u00b6\nReturn\nTrue\nif the server is accepting new connections.\nAdded in version 3.7.\n- async wait_closed()\u00b6\nWait until the\nclose()\nmethod completes and all active connections have finished.\n- sockets\u00b6\nList of socket-like objects,\nasyncio.trsock.TransportSocket\n, which the server is listening on.\nChanged in version 3.7: Prior to Python 3.7,\nServer.sockets\nused to return an internal list of server sockets directly. In 3.7 a copy of that list is returned.\nEvent Loop Implementations\u00b6\nasyncio ships with two different event loop implementations:\nSelectorEventLoop\nand ProactorEventLoop\n.\nBy default asyncio is configured to use EventLoop\n.\n- class asyncio.SelectorEventLoop\u00b6\nA subclass of\nAbstractEventLoop\nbased on the selectors\nmodule. Uses the most efficient selector available for the given platform. It is also possible to manually configure the exact selector implementation to be used:\nimport asyncio\nimport selectors\n\nasync def main():\n    ...
loop_factory = lambda: asyncio.SelectorEventLoop(selectors.SelectSelector())\nasyncio.run(main(), loop_factory=loop_factory)\nAvailability: Unix, Windows.\n- class asyncio.ProactorEventLoop\u00b6\nA subclass of\nAbstractEventLoop\nfor Windows that uses \u201cI/O Completion Ports\u201d (IOCP).\nAvailability: Windows.\n- class asyncio.EventLoop\u00b6\nAn alias to the most efficient available subclass of\nAbstractEventLoop\nfor the given platform. It is an alias to\nSelectorEventLoop\non Unix and ProactorEventLoop\non Windows.\nAdded in version 3.13.\n- class asyncio.AbstractEventLoop\u00b6\nAbstract base class for asyncio-compliant event loops.\nThe Event Loop Methods section lists all methods that an alternative implementation of\nAbstractEventLoop\nshould have defined.\nExamples\u00b6\nNote that all examples in this section purposefully show how\nto use the low-level event loop APIs, such as loop.run_forever()\nand loop.call_soon()\n. Modern asyncio applications rarely\nneed to be written this way; consider using the high-level functions\nlike asyncio.run()\n.\nHello World with call_soon()\u00b6\nAn example using the loop.call_soon()\nmethod to schedule a\ncallback. The callback displays \"Hello World\"\nand then stops the\nevent loop:\nimport asyncio\n\ndef hello_world(loop):\n    \"\"\"A callback to print 'Hello World' and stop the event loop\"\"\"\n    print('Hello World')\n    loop.stop()\n\nloop = asyncio.new_event_loop()\n\n# Schedule a call to hello_world()\nloop.call_soon(hello_world, loop)\n\n# Blocking call interrupted by loop.stop()\ntry:\n    loop.run_forever()\nfinally:\n    loop.close()\nSee also\nA similar Hello World\nexample created with a coroutine and the run()\nfunction.\nDisplay the current date with call_later()\u00b6\nAn example of a callback displaying the current date every second.
The\ncallback uses the loop.call_later()\nmethod to reschedule itself\nafter 5 seconds, and then stops the event loop:\nimport asyncio\nimport datetime\n\ndef display_date(end_time, loop):\n    print(datetime.datetime.now())\n    if (loop.time() + 1.0) < end_time:\n        loop.call_later(1, display_date, end_time, loop)\n    else:\n        loop.stop()\n\nloop = asyncio.new_event_loop()\n\n# Schedule the first call to display_date()\nend_time = loop.time() + 5.0\nloop.call_soon(display_date, end_time, loop)\n\n# Blocking call interrupted by loop.stop()\ntry:\n    loop.run_forever()\nfinally:\n    loop.close()\nSee also\nA similar current date example\ncreated with a coroutine and the run()\nfunction.\nWatch a file descriptor for read events\u00b6\nWait until a file descriptor received some data using the\nloop.add_reader()\nmethod and then close the event loop:\nimport asyncio\nfrom socket import socketpair\n\n# Create a pair of connected file descriptors\nrsock, wsock = socketpair()\n\nloop = asyncio.new_event_loop()\n\ndef reader():\n    data = rsock.recv(100)\n    print(\"Received:\", data.decode())\n    # We are done: unregister the file descriptor\n    loop.remove_reader(rsock)\n    # Stop the event loop\n    loop.stop()\n\n# Register the file descriptor for read event\nloop.add_reader(rsock, reader)\n\n# Simulate the reception of data from the network\nloop.call_soon(wsock.send, 'abc'.encode())\ntry:\n    # Run the event loop\n    loop.run_forever()\nfinally:\n    # We are done.
Close sockets and the event loop.\n    rsock.close()\n    wsock.close()\n    loop.close()\nSee also\nA similar example using transports, protocols, and the\nloop.create_connection()\nmethod. Another similar example using the high-level\nasyncio.open_connection()\nfunction and streams.\nSet signal handlers for SIGINT and SIGTERM\u00b6\n(This signals\nexample only works on Unix.)\nRegister handlers for signals SIGINT\nand SIGTERM\nusing the loop.add_signal_handler()\nmethod:\nimport asyncio\nimport functools\nimport os\nimport signal\n\ndef ask_exit(signame, loop):\n    print(\"got signal %s: exit\" % signame)\n    loop.stop()\n\nasync def main():\n    loop = asyncio.get_running_loop()\n    for signame in {'SIGINT', 'SIGTERM'}:\n        loop.add_signal_handler(\n            getattr(signal, signame),\n            functools.partial(ask_exit, signame, loop))\n    await asyncio.sleep(3600)\n\nprint(\"Event loop running for 1 hour, press Ctrl+C to interrupt.\")\nprint(f\"pid {os.getpid()}: send SIGINT or SIGTERM to exit.\")\nasyncio.run(main())", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 13964} +{"url": "https://docs.python.org/3/library/asyncio-extending.html", "title": "Extending", "content": "Extending\u00b6\nThe main direction for asyncio\nextending is writing custom event loop\nclasses. Asyncio has helpers that could be used to simplify this task.\nNote\nThird-parties should reuse existing asyncio code with caution, a new Python version is free to break backward compatibility in internal part of API.\nWriting a Custom Event Loop\u00b6\nasyncio.AbstractEventLoop\ndeclares very many methods.
Implementing all of them\nfrom scratch is a tedious job.\nA loop can get many common method implementations for free by inheriting from\nasyncio.BaseEventLoop\n.\nIn turn, the subclass should implement a number of private methods declared but not\nimplemented in asyncio.BaseEventLoop\n.\nFor example, loop.create_connection()\nchecks arguments, resolves DNS addresses, and\ncalls loop._make_socket_transport()\n, which should be implemented by the inheriting class.\nThe _make_socket_transport()\nmethod is not documented and is considered an\ninternal API.\nFuture and Task private constructors\u00b6\nasyncio.Future\nand asyncio.Task\nshould never be created directly;\nplease use the corresponding loop.create_future()\nand loop.create_task()\n,\nor asyncio.create_task()\nfactories instead.\nHowever, third-party event loops may reuse the built-in future and task implementations for the sake of getting complex and highly optimized code for free.\nFor this purpose, the following private constructors are listed:\n- Future.__init__(*, loop=None)\u00b6\nCreate a built-in future instance.\nloop is an optional event loop instance.\n- Task.__init__(coro, *, loop=None, name=None, context=None)\u00b6\nCreate a built-in task instance.\nloop is an optional event loop instance.
The rest of the arguments are described in the\nloop.create_task()\ndescription.\nChanged in version 3.11: The context argument was added.\nTask lifetime support\u00b6\nA third party task implementation should call the following functions to keep a task\nvisible to asyncio.all_tasks()\nand asyncio.current_task()\n:\n- asyncio._register_task(task)\u00b6\nRegister a new task as managed by asyncio.\nCall the function from a task constructor.\n- asyncio._unregister_task(task)\u00b6\nUnregister a task from asyncio internal structures.\nThe function should be called when a task is about to finish.\n- asyncio._enter_task(loop, task)\u00b6\nSwitch the current task to the task argument.\nCall the function just before executing a portion of the embedded coroutine (\ncoroutine.send()\nor coroutine.throw()\n).\n- asyncio._leave_task(loop, task)\u00b6\nSwitch the current task back from task to\nNone\n. Call the function just after\ncoroutine.send()\nor coroutine.throw()\nexecution.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 630} +{"url": "https://docs.python.org/3/library/nis.html", "title": " \u2014 Interface to Sun\u2019s NIS (Yellow Pages)", "content": "nis\n\u2014 Interface to Sun\u2019s NIS (Yellow Pages)\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the nis\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 85} +{"url": "https://docs.python.org/3/library/msilib.html", "title": " \u2014 Read and write Microsoft Installer files", "content": "msilib\n\u2014 Read and write Microsoft Installer files\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library.
It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the msilib\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 87} +{"url": "https://docs.python.org/3/library/mailcap.html", "title": " \u2014 Mailcap file handling", "content": "mailcap\n\u2014 Mailcap file handling\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the mailcap\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 83} +{"url": "https://docs.python.org/3/library/asyncio-policy.html", "title": "Policies", "content": "Policies\u00b6\nWarning\nPolicies are deprecated and will be removed in Python 3.16.\nUsers are encouraged to use the asyncio.run()\nfunction\nor the asyncio.Runner\nwith loop_factory to use\nthe desired loop implementation.\nAn event loop policy is a global object used to get and set the current event loop, as well as create new event loops. The default policy can be replaced with built-in alternatives to use different event loop implementations, or substituted by a custom policy that can override these behaviors.\nThe policy object gets and sets a separate event loop per context. 
This is per-thread by default, though custom policies could define context differently.\nCustom event loop policies can control the behavior of\nget_event_loop()\n, set_event_loop()\n, and new_event_loop()\n.\nPolicy objects should implement the APIs defined\nin the AbstractEventLoopPolicy\nabstract base class.\nGetting and Setting the Policy\u00b6\nThe following functions can be used to get and set the policy for the current process:\n- asyncio.get_event_loop_policy()\u00b6\nReturn the current process-wide policy.\nDeprecated since version 3.14: The\nget_event_loop_policy()\nfunction is deprecated and will be removed in Python 3.16.\n- asyncio.set_event_loop_policy(policy)\u00b6\nSet the current process-wide policy to policy.\nIf policy is set to\nNone\n, the default policy is restored.\nDeprecated since version 3.14: The\nset_event_loop_policy()\nfunction is deprecated and will be removed in Python 3.16.\nPolicy Objects\u00b6\nThe abstract event loop policy base class is defined as follows:\n- class asyncio.AbstractEventLoopPolicy\u00b6\nAn abstract base class for asyncio policies.\n- get_event_loop()\u00b6\nGet the event loop for the current context.\nReturn an event loop object implementing the\nAbstractEventLoop\ninterface. This method should never return\nNone\n.\nChanged in version 3.6.\n- set_event_loop(loop)\u00b6\nSet the event loop for the current context to loop.\n- new_event_loop()\u00b6\nCreate and return a new event loop object.\nThis method should never return\nNone\n.\nDeprecated since version 3.14: The\nAbstractEventLoopPolicy\nclass is deprecated and will be removed in Python 3.16.\nasyncio ships with the following built-in policies:\n- class asyncio.DefaultEventLoopPolicy\u00b6\nThe default asyncio policy. Uses\nSelectorEventLoop\non Unix and ProactorEventLoop\non Windows.\nThere is no need to install the default policy manually.
asyncio is configured to use the default policy automatically.\nChanged in version 3.8: On Windows,\nProactorEventLoop\nis now used by default.\nChanged in version 3.14: The\nget_event_loop()\nmethod of the default asyncio policy now raises a RuntimeError\nif there is no set event loop.\nDeprecated since version 3.14: The\nDefaultEventLoopPolicy\nclass is deprecated and will be removed in Python 3.16.\n- class asyncio.WindowsSelectorEventLoopPolicy\u00b6\nAn alternative event loop policy that uses the\nSelectorEventLoop\nevent loop implementation.\nAvailability: Windows.\nDeprecated since version 3.14: The\nWindowsSelectorEventLoopPolicy\nclass is deprecated and will be removed in Python 3.16.\n- class asyncio.WindowsProactorEventLoopPolicy\u00b6\nAn alternative event loop policy that uses the\nProactorEventLoop\nevent loop implementation.\nAvailability: Windows.\nDeprecated since version 3.14: The\nWindowsProactorEventLoopPolicy\nclass is deprecated and will be removed in Python 3.16.\nCustom Policies\u00b6\nTo implement a new event loop policy, it is recommended to subclass\nDefaultEventLoopPolicy\nand override the methods for which\ncustom behavior is wanted, e.g.:\nclass MyEventLoopPolicy(asyncio.DefaultEventLoopPolicy):\n    def get_event_loop(self):\n        \"\"\"Get the event loop.\n        This may be None or an instance of EventLoop.\n        \"\"\"\n        loop = super().get_event_loop()\n        # Do something with loop ...\n        return loop\n\nasyncio.set_event_loop_policy(MyEventLoopPolicy())", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 942} +{"url": "https://docs.python.org/3/library/imp.html", "title": " \u2014 Access the import internals", "content": "imp\n\u2014 Access the import internals\u00b6\nDeprecated since version 3.4, removed in version 3.12.\nThis module is no longer part of the Python standard library.
It was removed in Python 3.12 after being deprecated in Python 3.4.\nThe removal notice includes guidance for\nmigrating code from imp\nto importlib\n.\nThe last version of Python that provided the imp\nmodule was\nPython 3.11.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 93} +{"url": "https://docs.python.org/3/c-api/init_config.html", "title": "Python Initialization Configuration", "content": "Python Initialization Configuration\u00b6\nPyInitConfig C API\u00b6\nAdded in version 3.14.\nPython can be initialized with Py_InitializeFromInitConfig()\n.\nThe Py_RunMain()\nfunction can be used to write a customized Python\nprogram.\nSee also Initialization, Finalization, and Threads.\nSee also\nPEP 741 \u201cPython Configuration C API\u201d.\nExample\u00b6\nExample of customized Python always running with the Python Development\nMode enabled; return -1\non error:\nint init_python(void)\n{\nPyInitConfig *config = PyInitConfig_Create();\nif (config == NULL) {\nprintf(\"PYTHON INIT ERROR: memory allocation failed\\n\");\nreturn -1;\n}\n// Enable the Python Development Mode\nif (PyInitConfig_SetInt(config, \"dev_mode\", 1) < 0) {\ngoto error;\n}\n// Initialize Python with the configuration\nif (Py_InitializeFromInitConfig(config) < 0) {\ngoto error;\n}\nPyInitConfig_Free(config);\nreturn 0;\nerror:\n{\n// Display the error message.\n//\n// This uncommon braces style is used, because you cannot make\n// goto targets point to variable declarations.\nconst char *err_msg;\n(void)PyInitConfig_GetError(config, &err_msg);\nprintf(\"PYTHON INIT ERROR: %s\\n\", err_msg);\nPyInitConfig_Free(config);\nreturn -1;\n}\n}\nCreate Config\u00b6\n-\nstruct PyInitConfig\u00b6\nOpaque structure to configure the Python initialization.\n-\nPyInitConfig *PyInitConfig_Create(void)\u00b6\nCreate a new initialization configuration using Isolated Configuration default values.\nIt must be freed by\nPyInitConfig_Free()\n.Return\nNULL\non memory allocation 
failure.\n-\nvoid PyInitConfig_Free(PyInitConfig *config)\u00b6\nFree memory of the initialization configuration config.\nIf config is\nNULL\n, no operation is performed.\nError Handling\u00b6\n-\nint PyInitConfig_GetError(PyInitConfig *config, const char **err_msg)\u00b6\nGet the config error message.\nSet *err_msg and return\n1\nif an error is set. Set *err_msg to\nNULL\nand return\n0\notherwise.\nAn error message is a UTF-8 encoded string.\nIf config has an exit code, format the exit code as an error message.\nThe error message remains valid until another\nPyInitConfig\nfunction is called with config. The caller doesn\u2019t have to free the error message.\n-\nint PyInitConfig_GetExitCode(PyInitConfig *config, int *exitcode)\u00b6\nGet the config exit code.\nSet *exitcode and return\n1\nif config has an exit code set. Return\n0\nif config has no exit code set.\nOnly the\nPy_InitializeFromInitConfig()\nfunction can set an exit code if the parse_argv\noption is non-zero. An exit code can be set when parsing the command line failed (exit code\n2\n) or when a command line option asks to display the command line help (exit code\n0\n).\nGet Options\u00b6\nThe configuration option name parameter must be a non-NULL null-terminated UTF-8 encoded string.
See Configuration Options.\n-\nint PyInitConfig_HasOption(PyInitConfig *config, const char *name)\u00b6\nTest if the configuration has an option called name.\nReturn\n1\nif the option exists, or return\n0\notherwise.\n-\nint PyInitConfig_GetInt(PyInitConfig *config, const char *name, int64_t *value)\u00b6\nGet an integer configuration option.\nSet *value, and return\n0\non success. Set an error in config and return\n-1\non error.\n-\nint PyInitConfig_GetStr(PyInitConfig *config, const char *name, char **value)\u00b6\nGet a string configuration option as a null-terminated UTF-8 encoded string.\nSet *value, and return\n0\non success. Set an error in config and return\n-1\non error.\n*value can be set to\nNULL\nif the option is an optional string and the option is unset. On success, the string must be released with\nfree(value)\nif it\u2019s not\nNULL\n.\n-\nint PyInitConfig_GetStrList(PyInitConfig *config, const char *name, size_t *length, char ***items)\u00b6\nGet a string list configuration option as an array of null-terminated UTF-8 encoded strings.\nSet *length and *items, and return\n0\non success. Set an error in config and return\n-1\non error.\nOn success, the string list must be released with\nPyInitConfig_FreeStrList(length, items)\n.\n-\nvoid PyInitConfig_FreeStrList(size_t length, char **items)\u00b6\nFree memory of a string list created by\nPyInitConfig_GetStrList()\n.\nSet Options\u00b6\nThe configuration option name parameter must be a non-NULL null-terminated UTF-8 encoded string. See Configuration Options.\nSome configuration options have side effects on other options. This logic is\nonly implemented when Py_InitializeFromInitConfig()\nis called, not by the\n\u201cSet\u201d functions below.
For example, setting dev_mode\nto 1\ndoes not set\nfaulthandler\nto 1\n.\n-\nint PyInitConfig_SetInt(PyInitConfig *config, const char *name, int64_t value)\u00b6\nSet an integer configuration option.\nReturn\n0\non success. Set an error in config and return\n-1\non error.\n-\nint PyInitConfig_SetStr(PyInitConfig *config, const char *name, const char *value)\u00b6\nSet a string configuration option from a null-terminated UTF-8 encoded string. The string is copied.\nReturn\n0\non success. Set an error in config and return\n-1\non error.\n-\nint PyInitConfig_SetStrList(PyInitConfig *config, const char *name, size_t length, char *const *items)\u00b6\nSet a string list configuration option from an array of null-terminated UTF-8 encoded strings. The string list is copied.\nReturn\n0\non success. Set an error in config and return\n-1\non error.\nModule\u00b6\n-\nint PyInitConfig_AddModule(PyInitConfig *config, const char *name, PyObject *(*initfunc)(void))\u00b6\nAdd a built-in extension module to the table of built-in modules.\nThe new module can be imported by the name name, and uses the function initfunc as the initialization function called on the first attempted import.\nReturn\n0\non success. Set an error in config and return\n-1\non error.\nIf Python is initialized multiple times,\nPyInitConfig_AddModule()\nmust be called at each Python initialization.\nSimilar to the\nPyImport_AppendInittab()\nfunction.\nInitialize Python\u00b6\n-\nint Py_InitializeFromInitConfig(PyInitConfig *config)\u00b6\nInitialize Python from the initialization configuration.\nReturn\n0\non success. Set an error in config and return\n-1\non error. Set an exit code in config and return\n-1\nif Python wants to exit.\nSee\nPyInitConfig_GetExitCode()\nfor the exit code case.\nConfiguration Options\u00b6\n[Configuration options table elided: each row pairs an option name with its PyConfig/PyPreConfig member, its type, and its visibility (Public or Read-only); the option-name cells were lost in extraction.]\nVisibility:\nPublic: Can be retrieved by\nPyConfig_Get()\nand set by\nPyConfig_Set()\n.\nRead-only: Can be retrieved by\nPyConfig_Get()\n, but cannot be set by\nPyConfig_Set()\n.\nRuntime Python configuration API\u00b6\nAt runtime, it\u2019s possible to get and set configuration options using the\nPyConfig_Get()\nand PyConfig_Set()\nfunctions.\nThe configuration option name parameter must be a non-NULL null-terminated UTF-8 encoded string. See Configuration Options.\nSome options are read from the sys\nattributes.
For example, the option\n\"argv\"\nis read from sys.argv\n.\n-\nPyObject *PyConfig_Get(const char *name)\u00b6\nGet the current runtime value of a configuration option as a Python object.\nReturn a new reference on success.\nSet an exception and return\nNULL\non error.\nThe object type depends on the configuration option. It can be:\nbool\nint\nstr\nlist[str]\ndict[str, str]\nThe caller must have an attached thread state. The function cannot be called before Python initialization nor after Python finalization.\nAdded in version 3.14.\n-\nint PyConfig_GetInt(const char *name, int *value)\u00b6\nSimilar to\nPyConfig_Get()\n, but get the value as a C int.\nReturn\n0\non success. Set an exception and return\n-1\non error.\nAdded in version 3.14.\n-\nPyObject *PyConfig_Names(void)\u00b6\nGet all configuration option names as a\nfrozenset\n.\nReturn a new reference on success.\nSet an exception and return\nNULL\non error.\nThe caller must have an attached thread state. The function cannot be called before Python initialization nor after Python finalization.\nAdded in version 3.14.\n-\nint PyConfig_Set(const char *name, PyObject *value)\u00b6\nSet the current runtime value of a configuration option.\nRaise a\nValueError\nif there is no option name.\nRaise a\nValueError\nif value is an invalid value.\nRaise a\nValueError\nif the option is read-only (cannot be set).\nRaise a\nTypeError\nif value does not have the proper type.\nThe caller must have an attached thread state. The function cannot be called before Python initialization nor after Python finalization.\nRaises an auditing event\ncpython.PyConfig_Set\nwith arguments name\nand value\n.\nAdded in version 3.14.\nPyConfig C API\u00b6\nAdded in version 3.8.\nPython can be initialized with Py_InitializeFromConfig()\nand the\nPyConfig\nstructure.
It can be preinitialized with\nPy_PreInitialize()\nand the PyPreConfig\nstructure.\nThere are two kinds of configuration:\nThe Python Configuration can be used to build a customized Python which behaves as the regular Python. For example, environment variables and command line arguments are used to configure Python.\nThe Isolated Configuration can be used to embed Python into an application. It isolates Python from the system. For example, environment variables are ignored, the LC_CTYPE locale is left unchanged and no signal handler is registered.\nThe Py_RunMain()\nfunction can be used to write a customized Python\nprogram.\nSee also Initialization, Finalization, and Threads.\nSee also\nPEP 587 \u201cPython Initialization Configuration\u201d.\nExample\u00b6\nExample of customized Python always running in isolated mode:\nint main(int argc, char **argv)\n{\nPyStatus status;\nPyConfig config;\nPyConfig_InitPythonConfig(&config);\nconfig.isolated = 1;\n/* Decode command line arguments.\nImplicitly preinitialize Python (in isolated mode). 
*/\nstatus = PyConfig_SetBytesArgv(&config, argc, argv);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nstatus = Py_InitializeFromConfig(&config);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nPyConfig_Clear(&config);\nreturn Py_RunMain();\nexception:\nPyConfig_Clear(&config);\nif (PyStatus_IsExit(status)) {\nreturn status.exitcode;\n}\n/* Display the error message and exit the process with\nnon-zero exit code */\nPy_ExitStatusException(status);\n}\nPyWideStringList\u00b6\n-\ntype PyWideStringList\u00b6\nList of\nwchar_t*\nstrings.\nIf length is non-zero, items must be non-\nNULL\nand all strings must be non-\nNULL\n.\nMethods:\n-\nPyStatus PyWideStringList_Append(PyWideStringList *list, const wchar_t *item)\u00b6\nAppend item to list.\nPython must be preinitialized to call this function.\n-\nPyStatus PyWideStringList_Insert(PyWideStringList *list, Py_ssize_t index, const wchar_t *item)\u00b6\nInsert item into list at index.\nIf index is greater than or equal to list length, append item to list.\nindex must be greater than or equal to\n0\n. Python must be preinitialized to call this function.\nStructure fields:\n-\nPy_ssize_t length\u00b6\nList length.\n-\nwchar_t **items\u00b6\nList items.\nPyStatus\u00b6\n-\ntype PyStatus\u00b6\nStructure to store an initialization function status: success, error or exit.\nFor an error, it can store the C function name which created the error.\nStructure fields:\n-\nint exitcode\u00b6\nExit code. Argument passed to\nexit()\n.\n-\nconst char *err_msg\u00b6\nError message.\n-\nconst char *func\u00b6\nName of the function which created an error, can be\nNULL\n.\nFunctions to create a status:\n-\nPyStatus PyStatus_Error(const char *err_msg)\u00b6\nInitialization error with a message.\nerr_msg must not be\nNULL\n.\nFunctions to handle a status:\n-\nint PyStatus_Exception(PyStatus status)\u00b6\nIs the status an error or an exit?
If true, the exception must be handled, for example by calling Py_ExitStatusException().
Note
Internally, Python uses macros which set PyStatus.func, whereas functions to create a status set func to NULL.
Example:

PyStatus alloc(void **ptr, size_t size)
{
    *ptr = PyMem_RawMalloc(size);
    if (*ptr == NULL) {
        return PyStatus_NoMemory();
    }
    return PyStatus_Ok();
}

int main(int argc, char **argv)
{
    void *ptr;
    PyStatus status = alloc(&ptr, 16);
    if (PyStatus_Exception(status)) {
        Py_ExitStatusException(status);
    }
    PyMem_Free(ptr);
    return 0;
}

PyPreConfig¶
- type PyPreConfig¶
Structure used to preinitialize Python.
Functions to initialize a preconfiguration:
- void PyPreConfig_InitPythonConfig(PyPreConfig *preconfig)¶
Initialize the preconfiguration with the Python Configuration.
- void PyPreConfig_InitIsolatedConfig(PyPreConfig *preconfig)¶
Initialize the preconfiguration with the Isolated Configuration.
Structure fields:
- int allocator¶
Name of the Python memory allocators:
PYMEM_ALLOCATOR_NOT_SET (0): don't change memory allocators (use defaults).
PYMEM_ALLOCATOR_DEFAULT (1): default memory allocators.
PYMEM_ALLOCATOR_DEBUG (2): default memory allocators with debug hooks.
PYMEM_ALLOCATOR_MALLOC (3): use malloc() of the C library.
PYMEM_ALLOCATOR_MALLOC_DEBUG (4): force usage of malloc() with debug hooks.
PYMEM_ALLOCATOR_PYMALLOC (5): Python pymalloc memory allocator.
PYMEM_ALLOCATOR_PYMALLOC_DEBUG (6): Python pymalloc memory allocator with debug hooks.
PYMEM_ALLOCATOR_MIMALLOC (7): use mimalloc, a fast malloc replacement.
PYMEM_ALLOCATOR_MIMALLOC_DEBUG (8): use mimalloc, a fast malloc replacement with debug hooks.
PYMEM_ALLOCATOR_PYMALLOC and PYMEM_ALLOCATOR_PYMALLOC_DEBUG are not supported if Python is configured using --without-pymalloc.
PYMEM_ALLOCATOR_MIMALLOC and PYMEM_ALLOCATOR_MIMALLOC_DEBUG are not supported if Python is configured using
--without-mimalloc\nor if the underlying atomic support isn\u2019t available.See Memory Management.\nDefault:\nPYMEM_ALLOCATOR_NOT_SET\n.\n-\nint configure_locale\u00b6\nSet the LC_CTYPE locale to the user preferred locale.\nIf equals to\n0\n, setcoerce_c_locale\nandcoerce_c_locale_warn\nmembers to0\n.See the locale encoding.\nDefault:\n1\nin Python config,0\nin isolated config.\n-\nint coerce_c_locale\u00b6\nIf equals to\n2\n, coerce the C locale.If equals to\n1\n, read the LC_CTYPE locale to decide if it should be coerced.See the locale encoding.\nDefault:\n-1\nin Python config,0\nin isolated config.\n-\nint coerce_c_locale_warn\u00b6\nIf non-zero, emit a warning if the C locale is coerced.\nDefault:\n-1\nin Python config,0\nin isolated config.\n-\nint dev_mode\u00b6\nPython Development Mode: see\nPyConfig.dev_mode\n.Default:\n-1\nin Python mode,0\nin isolated mode.\n-\nint isolated\u00b6\nIsolated mode: see\nPyConfig.isolated\n.Default:\n0\nin Python mode,1\nin isolated mode.\n-\nint legacy_windows_fs_encoding\u00b6\nIf non-zero:\nSet\nPyPreConfig.utf8_mode\nto0\n,Set\nPyConfig.filesystem_encoding\nto\"mbcs\"\n,Set\nPyConfig.filesystem_errors\nto\"replace\"\n.\nInitialized from the\nPYTHONLEGACYWINDOWSFSENCODING\nenvironment variable value.Only available on Windows.\n#ifdef MS_WINDOWS\nmacro can be used for Windows specific code.Default:\n0\n.\n-\nint parse_argv\u00b6\nIf non-zero,\nPy_PreInitializeFromArgs()\nandPy_PreInitializeFromBytesArgs()\nparse theirargv\nargument the same way the regular Python parses command line arguments: see Command Line Arguments.Default:\n1\nin Python config,0\nin isolated config.\n-\nint use_environment\u00b6\nUse environment variables? 
See PyConfig.use_environment.
Default: 1 in Python config and 0 in isolated config.
- int utf8_mode¶
If non-zero, enable the Python UTF-8 Mode.
Set to 0 or 1 by the -X utf8 command line option and the PYTHONUTF8 environment variable.
Also set to 1 if the LC_CTYPE locale is C or POSIX.
Default: -1 in Python config and 0 in isolated config.
Preinitialize Python with PyPreConfig¶
The preinitialization of Python:
Set the Python memory allocators (PyPreConfig.allocator)
Configure the LC_CTYPE locale (locale encoding)
Set the Python UTF-8 Mode (PyPreConfig.utf8_mode)
The current preconfiguration (PyPreConfig type) is stored in _PyRuntime.preconfig.
Functions to preinitialize Python:
- PyStatus Py_PreInitialize(const PyPreConfig *preconfig)¶
Preinitialize Python from the preconfig preconfiguration.
preconfig must not be NULL.
- PyStatus Py_PreInitializeFromBytesArgs(const PyPreConfig *preconfig, int argc, char *const *argv)¶
Preinitialize Python from the preconfig preconfiguration.
Parse argv command line arguments (bytes strings) if parse_argv of preconfig is non-zero.
preconfig must not be NULL.
- PyStatus Py_PreInitializeFromArgs(const PyPreConfig *preconfig, int argc, wchar_t *const *argv)¶
Preinitialize Python from the preconfig preconfiguration.
Parse argv command line arguments (wide strings) if parse_argv of preconfig is non-zero.
preconfig must not be NULL.
The caller is responsible for handling exceptions (error or exit) using PyStatus_Exception() and Py_ExitStatusException().
For the Python Configuration (PyPreConfig_InitPythonConfig()), if Python is initialized with command line arguments, the command line arguments must also be passed to preinitialize Python, since they have an effect on the pre-configuration, such as the encodings.
For example, the -X utf8 command line option enables the Python UTF-8 Mode.
PyMem_SetAllocator() can be called after Py_PreInitialize() and before Py_InitializeFromConfig() to install a custom memory allocator. It can be called before Py_PreInitialize() if PyPreConfig.allocator is set to PYMEM_ALLOCATOR_NOT_SET.
Python memory allocation functions like PyMem_RawMalloc() must not be used before the Python preinitialization, whereas calling malloc() and free() directly is always safe. Py_DecodeLocale() must not be called before the Python preinitialization.
Example using the preinitialization to enable the Python UTF-8 Mode:

PyStatus status;
PyPreConfig preconfig;
PyPreConfig_InitPythonConfig(&preconfig);

preconfig.utf8_mode = 1;

status = Py_PreInitialize(&preconfig);
if (PyStatus_Exception(status)) {
    Py_ExitStatusException(status);
}

/* at this point, Python speaks UTF-8 */

Py_Initialize();
/* ... use Python API here ... */
Py_Finalize();

PyConfig¶
- type PyConfig¶
Structure containing most parameters to configure Python.
When done, the PyConfig_Clear() function must be used to release the configuration memory.
Structure methods:
- void PyConfig_InitPythonConfig(PyConfig *config)¶
Initialize configuration with the Python Configuration.
- void PyConfig_InitIsolatedConfig(PyConfig *config)¶
Initialize configuration with the Isolated Configuration.
- PyStatus PyConfig_SetString(PyConfig *config, wchar_t *const *config_str, const wchar_t *str)¶
Copy the wide character string str into *config_str.
Preinitialize Python if needed.
- PyStatus PyConfig_SetBytesString(PyConfig *config, wchar_t *const *config_str, const char *str)¶
Decode str using Py_DecodeLocale() and set the result into *config_str.
Preinitialize Python if needed.
- PyStatus PyConfig_SetArgv(PyConfig *config, int argc, wchar_t *const *argv)¶
Set command line arguments (argv member of config) from the argv list of
wide character strings.Preinitialize Python if needed.\n-\nPyStatus PyConfig_SetBytesArgv(PyConfig *config, int argc, char *const *argv)\u00b6\nSet command line arguments (\nargv\nmember of config) from the argv list of bytes strings. Decode bytes usingPy_DecodeLocale()\n.Preinitialize Python if needed.\n-\nPyStatus PyConfig_SetWideStringList(PyConfig *config, PyWideStringList *list, Py_ssize_t length, wchar_t **items)\u00b6\nSet the list of wide strings list to length and items.\nPreinitialize Python if needed.\n-\nPyStatus PyConfig_Read(PyConfig *config)\u00b6\nRead all Python configuration.\nFields which are already initialized are left unchanged.\nFields for path configuration are no longer calculated or modified when calling this function, as of Python 3.11.\nThe\nPyConfig_Read()\nfunction only parsesPyConfig.argv\narguments once:PyConfig.parse_argv\nis set to2\nafter arguments are parsed. Since Python arguments are stripped fromPyConfig.argv\n, parsing arguments twice would parse the application options as Python options.Preinitialize Python if needed.\nChanged in version 3.10: The\nPyConfig.argv\narguments are now only parsed once,PyConfig.parse_argv\nis set to2\nafter arguments are parsed, and arguments are only parsed ifPyConfig.parse_argv\nequals1\n.Changed in version 3.11:\nPyConfig_Read()\nno longer calculates all paths, and so fields listed under Python Path Configuration may no longer be updated untilPy_InitializeFromConfig()\nis called.\nMost\nPyConfig\nmethods preinitialize Python if needed. In that case, the Python preinitialization configuration (PyPreConfig\n) is based on thePyConfig\n. 
If configuration fields which are in common with PyPreConfig are tuned, they must be set before calling a PyConfig method.
Moreover, if PyConfig_SetArgv() or PyConfig_SetBytesArgv() is used, that method must be called before other methods, since the preinitialization configuration depends on command line arguments (if parse_argv is non-zero).
The caller of these methods is responsible for handling exceptions (error or exit) using PyStatus_Exception() and Py_ExitStatusException().
Structure fields:
- PyWideStringList argv¶
Set sys.argv command line arguments based on argv. These parameters are similar to those passed to the program's main() function, with the difference that the first entry should refer to the script file to be executed rather than the executable hosting the Python interpreter. If there isn't a script that will be run, the first entry in argv can be an empty string.
Set parse_argv to 1 to parse argv the same way the regular Python parses Python command line arguments, and then to strip Python arguments from argv.
If argv is empty, an empty string is added to ensure that sys.argv always exists and is never empty.
Default: NULL.
See also the orig_argv member.
- int safe_path¶
If equal to zero, Py_RunMain() prepends a potentially unsafe path to sys.path at startup:
If argv[0] is equal to L"-m" (python -m module), prepend the current working directory.
If running a script (python script.py), prepend the script's directory.
If it's a symbolic link, resolve symbolic links.
Otherwise (python -c code and python), prepend an empty string, which means the current working directory.
Set to 1 by the -P command line option and the PYTHONSAFEPATH environment variable.
Default: 0 in Python config, 1 in isolated config.
Added in version 3.11.
- wchar_t *base_exec_prefix¶
Python base exec prefix: sys.base_exec_prefix.
Default: NULL.
Part of the Python Path Configuration output.
See also PyConfig.exec_prefix.
- wchar_t *base_executable¶
Python base executable: sys._base_executable.
Set by the __PYVENV_LAUNCHER__ environment variable.
Set from PyConfig.executable if NULL.
Default: NULL.
Part of the Python Path Configuration output.
See also PyConfig.executable.
- wchar_t *base_prefix¶
Python base prefix: sys.base_prefix.
Default: NULL.
Part of the Python Path Configuration output.
See also PyConfig.prefix.
- int buffered_stdio¶
If equal to 0 and configure_c_stdio is non-zero, disable buffering on the C streams stdout and stderr.
Set to 0 by the -u command line option and the PYTHONUNBUFFERED environment variable.
stdin is always opened in buffered mode.
Default: 1.
- int bytes_warning¶
If equal to 1, issue a warning when comparing bytes or bytearray with str, or comparing bytes with int.
If greater than or equal to 2, raise a BytesWarning exception in these cases.
Incremented by the -b command line option.
Default: 0.
- int warn_default_encoding¶
If non-zero, emit an EncodingWarning warning when io.TextIOWrapper uses its default encoding. See Opt-in EncodingWarning for details.
Default: 0.
Added in version 3.10.
- int code_debug_ranges¶
If equal to 0, disables the inclusion of the end line and column mappings in code objects.
Also disables traceback printing carets to specific error locations.Set to\n0\nby thePYTHONNODEBUGRANGES\nenvironment variable and by the-X no_debug_ranges\ncommand line option.Default:\n1\n.Added in version 3.11.\n-\nwchar_t *check_hash_pycs_mode\u00b6\nControl the validation behavior of hash-based\n.pyc\nfiles: value of the--check-hash-based-pycs\ncommand line option.Valid values:\nL\"always\"\n: Hash the source file for invalidation regardless of value of the \u2018check_source\u2019 flag.L\"never\"\n: Assume that hash-based pycs always are valid.L\"default\"\n: The \u2018check_source\u2019 flag in hash-based pycs determines invalidation.\nDefault:\nL\"default\"\n.See also PEP 552 \u201cDeterministic pycs\u201d.\n-\nint configure_c_stdio\u00b6\nIf non-zero, configure C standard streams:\nOn Windows, set the binary mode (\nO_BINARY\n) on stdin, stdout and stderr.If\nbuffered_stdio\nequals zero, disable buffering of stdin, stdout and stderr streams.If\ninteractive\nis non-zero, enable stream buffering on stdin and stdout (only stdout on Windows).\nDefault:\n1\nin Python config,0\nin isolated config.\n-\nint dev_mode\u00b6\nIf non-zero, enable the Python Development Mode.\nSet to\n1\nby the-X dev\noption and thePYTHONDEVMODE\nenvironment variable.Default:\n-1\nin Python mode,0\nin isolated mode.\n-\nint dump_refs\u00b6\nDump Python references?\nIf non-zero, dump all objects which are still alive at exit.\nSet to\n1\nby thePYTHONDUMPREFS\nenvironment variable.Needs a special build of Python with the\nPy_TRACE_REFS\nmacro defined: see theconfigure --with-trace-refs option\n.Default:\n0\n.\n-\nwchar_t *dump_refs_file\u00b6\nFilename where to dump Python references.\nSet by the\nPYTHONDUMPREFSFILE\nenvironment variable.Default:\nNULL\n.Added in version 3.11.\n-\nwchar_t *exec_prefix\u00b6\nThe site-specific directory prefix where the platform-dependent Python files are installed:\nsys.exec_prefix\n.Default:\nNULL\n.Part of the Python Path Configuration output.\nSee 
also\nPyConfig.base_exec_prefix\n.\n-\nwchar_t *executable\u00b6\nThe absolute path of the executable binary for the Python interpreter:\nsys.executable\n.Default:\nNULL\n.Part of the Python Path Configuration output.\nSee also\nPyConfig.base_executable\n.\n-\nint faulthandler\u00b6\nEnable faulthandler?\nIf non-zero, call\nfaulthandler.enable()\nat startup.Set to\n1\nby-X faulthandler\nand thePYTHONFAULTHANDLER\nenvironment variable.Default:\n-1\nin Python mode,0\nin isolated mode.\n-\nwchar_t *filesystem_encoding\u00b6\nFilesystem encoding:\nsys.getfilesystemencoding()\n.On macOS, Android and VxWorks: use\n\"utf-8\"\nby default.On Windows: use\n\"utf-8\"\nby default, or\"mbcs\"\niflegacy_windows_fs_encoding\nofPyPreConfig\nis non-zero.Default encoding on other platforms:\n\"utf-8\"\nifPyPreConfig.utf8_mode\nis non-zero.\"ascii\"\nif Python detects thatnl_langinfo(CODESET)\nannounces the ASCII encoding, whereas thembstowcs()\nfunction decodes from a different encoding (usually Latin1).\"utf-8\"\nifnl_langinfo(CODESET)\nreturns an empty string.Otherwise, use the locale encoding:\nnl_langinfo(CODESET)\nresult.\nAt Python startup, the encoding name is normalized to the Python codec name. 
For example,\n\"ANSI_X3.4-1968\"\nis replaced with\"ascii\"\n.See also the\nfilesystem_errors\nmember.\n-\nwchar_t *filesystem_errors\u00b6\nFilesystem error handler:\nsys.getfilesystemencodeerrors()\n.On Windows: use\n\"surrogatepass\"\nby default, or\"replace\"\niflegacy_windows_fs_encoding\nofPyPreConfig\nis non-zero.On other platforms: use\n\"surrogateescape\"\nby default.Supported error handlers:\n\"strict\"\n\"surrogateescape\"\n\"surrogatepass\"\n(only supported with the UTF-8 encoding)\nSee also the\nfilesystem_encoding\nmember.\n-\nint use_frozen_modules\u00b6\nIf non-zero, use frozen modules.\nSet by the\nPYTHON_FROZEN_MODULES\nenvironment variable.Default:\n1\nin a release build, or0\nin a debug build.\n-\nunsigned long hash_seed\u00b6\n-\nint use_hash_seed\u00b6\nRandomized hash function seed.\nIf\nuse_hash_seed\nis zero, a seed is chosen randomly at Python startup, andhash_seed\nis ignored.Set by the\nPYTHONHASHSEED\nenvironment variable.Default use_hash_seed value:\n-1\nin Python mode,0\nin isolated mode.\n-\nwchar_t *home\u00b6\nSet the default Python \u201chome\u201d directory, that is, the location of the standard Python libraries (see\nPYTHONHOME\n).Set by the\nPYTHONHOME\nenvironment variable.Default:\nNULL\n.Part of the Python Path Configuration input.\n-\nint import_time\u00b6\nIf\n1\n, profile import time. If2\n, include additional output that indicates when an imported module has already been loaded.Set by the\n-X importtime\noption and thePYTHONPROFILEIMPORTTIME\nenvironment variable.Default:\n0\n.Changed in version 3.14: Added support for\nimport_time = 2\n-\nint inspect\u00b6\nEnter interactive mode after executing a script or a command.\nIf greater than\n0\n, enable inspect: when a script is passed as first argument or the -c option is used, enter interactive mode after executing the script or the command, even whensys.stdin\ndoes not appear to be a terminal.Incremented by the\n-i\ncommand line option. 
Set to1\nif thePYTHONINSPECT\nenvironment variable is non-empty.Default:\n0\n.\n-\nint install_signal_handlers\u00b6\nInstall Python signal handlers?\nDefault:\n1\nin Python mode,0\nin isolated mode.\n-\nint interactive\u00b6\nIf greater than\n0\n, enable the interactive mode (REPL).Incremented by the\n-i\ncommand line option.Default:\n0\n.\n-\nint int_max_str_digits\u00b6\nConfigures the integer string conversion length limitation. An initial value of\n-1\nmeans the value will be taken from the command line or environment or otherwise default to 4300 (sys.int_info.default_max_str_digits\n). A value of0\ndisables the limitation. Values greater than zero but less than 640 (sys.int_info.str_digits_check_threshold\n) are unsupported and will produce an error.Configured by the\n-X int_max_str_digits\ncommand line flag or thePYTHONINTMAXSTRDIGITS\nenvironment variable.Default:\n-1\nin Python mode. 4300 (sys.int_info.default_max_str_digits\n) in isolated mode.Added in version 3.12.\n-\nint cpu_count\u00b6\nIf the value of\ncpu_count\nis not-1\nthen it will override the return values ofos.cpu_count()\n,os.process_cpu_count()\n, andmultiprocessing.cpu_count()\n.Configured by the\n-X cpu_count=n|default\ncommand line flag or thePYTHON_CPU_COUNT\nenvironment variable.Default:\n-1\n.Added in version 3.13.\n-\nint isolated\u00b6\nIf greater than\n0\n, enable isolated mode:Set\nsafe_path\nto1\n: don\u2019t prepend a potentially unsafe path tosys.path\nat Python startup, such as the current directory, the script\u2019s directory or an empty string.Set\nuse_environment\nto0\n: ignorePYTHON\nenvironment variables.Set\nuser_site_directory\nto0\n: don\u2019t add the user site directory tosys.path\n.Python REPL doesn\u2019t import\nreadline\nnor enable default readline configuration on interactive prompts.\nSet to\n1\nby the-I\ncommand line option.Default:\n0\nin Python mode,1\nin isolated mode.See also the Isolated Configuration and\nPyPreConfig.isolated\n.\n-\nint 
legacy_windows_stdio\u00b6\nIf non-zero, use\nio.FileIO\ninstead ofio._WindowsConsoleIO\nforsys.stdin\n,sys.stdout\nandsys.stderr\n.Set to\n1\nif thePYTHONLEGACYWINDOWSSTDIO\nenvironment variable is set to a non-empty string.Only available on Windows.\n#ifdef MS_WINDOWS\nmacro can be used for Windows specific code.Default:\n0\n.See also the PEP 528 (Change Windows console encoding to UTF-8).\n-\nint malloc_stats\u00b6\nIf non-zero, dump statistics on Python pymalloc memory allocator at exit.\nSet to\n1\nby thePYTHONMALLOCSTATS\nenvironment variable.The option is ignored if Python is\nconfigured using the --without-pymalloc option\n.Default:\n0\n.\n-\nwchar_t *platlibdir\u00b6\nPlatform library directory name:\nsys.platlibdir\n.Set by the\nPYTHONPLATLIBDIR\nenvironment variable.Default: value of the\nPLATLIBDIR\nmacro which is set by theconfigure --with-platlibdir option\n(default:\"lib\"\n, or\"DLLs\"\non Windows).Part of the Python Path Configuration input.\nAdded in version 3.9.\nChanged in version 3.11: This macro is now used on Windows to locate the standard library extension modules, typically under\nDLLs\n. 
However, for compatibility, note that this value is ignored for any non-standard layouts, including in-tree builds and virtual environments.\n-\nwchar_t *pythonpath_env\u00b6\nModule search paths (\nsys.path\n) as a string separated byDELIM\n(os.pathsep\n).Set by the\nPYTHONPATH\nenvironment variable.Default:\nNULL\n.Part of the Python Path Configuration input.\n-\nPyWideStringList module_search_paths\u00b6\n-\nint module_search_paths_set\u00b6\nModule search paths:\nsys.path\n.If\nmodule_search_paths_set\nis equal to0\n,Py_InitializeFromConfig()\nwill replacemodule_search_paths\nand setsmodule_search_paths_set\nto1\n.Default: empty list (\nmodule_search_paths\n) and0\n(module_search_paths_set\n).Part of the Python Path Configuration output.\n-\nint optimization_level\u00b6\nCompilation optimization level:\n0\n: Peephole optimizer, set__debug__\ntoTrue\n.1\n: Level 0, remove assertions, set__debug__\ntoFalse\n.2\n: Level 1, strip docstrings.\nIncremented by the\n-O\ncommand line option. Set to thePYTHONOPTIMIZE\nenvironment variable value.Default:\n0\n.\n-\nPyWideStringList orig_argv\u00b6\nThe list of the original command line arguments passed to the Python executable:\nsys.orig_argv\n.If\norig_argv\nlist is empty andargv\nis not a list only containing an empty string,PyConfig_Read()\ncopiesargv\nintoorig_argv\nbefore modifyingargv\n(ifparse_argv\nis non-zero).See also the\nargv\nmember and thePy_GetArgcArgv()\nfunction.Default: empty list.\nAdded in version 3.10.\n-\nint parse_argv\u00b6\nParse command line arguments?\nIf equals to\n1\n, parseargv\nthe same way the regular Python parses command line arguments, and strip Python arguments fromargv\n.The\nPyConfig_Read()\nfunction only parsesPyConfig.argv\narguments once:PyConfig.parse_argv\nis set to2\nafter arguments are parsed. 
Since Python arguments are stripped fromPyConfig.argv\n, parsing arguments twice would parse the application options as Python options.Default:\n1\nin Python mode,0\nin isolated mode.Changed in version 3.10: The\nPyConfig.argv\narguments are now only parsed ifPyConfig.parse_argv\nequals to1\n.\n-\nint parser_debug\u00b6\nParser debug mode. If greater than\n0\n, turn on parser debugging output (for expert only, depending on compilation options).Incremented by the\n-d\ncommand line option. Set to thePYTHONDEBUG\nenvironment variable value.Needs a debug build of Python (the\nPy_DEBUG\nmacro must be defined).Default:\n0\n.\n-\nint pathconfig_warnings\u00b6\nIf non-zero, calculation of path configuration is allowed to log warnings into\nstderr\n. If equals to0\n, suppress these warnings.Default:\n1\nin Python mode,0\nin isolated mode.Part of the Python Path Configuration input.\nChanged in version 3.11: Now also applies on Windows.\n-\nwchar_t *prefix\u00b6\nThe site-specific directory prefix where the platform independent Python files are installed:\nsys.prefix\n.Default:\nNULL\n.Part of the Python Path Configuration output.\nSee also\nPyConfig.base_prefix\n.\n-\nwchar_t *program_name\u00b6\nProgram name used to initialize\nexecutable\nand in early error messages during Python initialization.On macOS, use\nPYTHONEXECUTABLE\nenvironment variable if set.If the\nWITH_NEXT_FRAMEWORK\nmacro is defined, use__PYVENV_LAUNCHER__\nenvironment variable if set.Use\nargv[0]\nofargv\nif available and non-empty.Otherwise, use\nL\"python\"\non Windows, orL\"python3\"\non other platforms.\nDefault:\nNULL\n.Part of the Python Path Configuration input.\n-\nwchar_t *pycache_prefix\u00b6\nDirectory where cached\n.pyc\nfiles are written:sys.pycache_prefix\n.Set by the\n-X pycache_prefix=PATH\ncommand line option and thePYTHONPYCACHEPREFIX\nenvironment variable. 
The command-line option takes precedence.If\nNULL\n,sys.pycache_prefix\nis set toNone\n.Default:\nNULL\n.\n-\nint quiet\u00b6\nQuiet mode. If greater than\n0\n, don\u2019t display the copyright and version at Python startup in interactive mode.Incremented by the\n-q\ncommand line option.Default:\n0\n.\n-\nwchar_t *run_command\u00b6\nValue of the\n-c\ncommand line option.Used by\nPy_RunMain()\n.Default:\nNULL\n.\n-\nwchar_t *run_filename\u00b6\nFilename passed on the command line: trailing command line argument without\n-c\nor-m\n. It is used by thePy_RunMain()\nfunction.For example, it is set to\nscript.py\nby thepython3 script.py arg\ncommand line.See also the\nPyConfig.skip_source_first_line\noption.Default:\nNULL\n.\n-\nwchar_t *run_module\u00b6\nValue of the\n-m\ncommand line option.Used by\nPy_RunMain()\n.Default:\nNULL\n.\n-\nwchar_t *run_presite\u00b6\npackage.module\npath to module that should be imported beforesite.py\nis run.Set by the\n-X presite=package.module\ncommand-line option and thePYTHON_PRESITE\nenvironment variable. 
The command-line option takes precedence.Needs a debug build of Python (the\nPy_DEBUG\nmacro must be defined).Default:\nNULL\n.\n-\nint show_ref_count\u00b6\nShow total reference count at exit (excluding immortal objects)?\nSet to\n1\nby-X showrefcount\ncommand line option.Needs a debug build of Python (the\nPy_REF_DEBUG\nmacro must be defined).Default:\n0\n.\n-\nint site_import\u00b6\nImport the\nsite\nmodule at startup?If equal to zero, disable the import of the module site and the site-dependent manipulations of\nsys.path\nthat it entails.Also disable these manipulations if the\nsite\nmodule is explicitly imported later (callsite.main()\nif you want them to be triggered).Set to\n0\nby the-S\ncommand line option.sys.flags.no_site\nis set to the inverted value ofsite_import\n.Default:\n1\n.\n-\nint skip_source_first_line\u00b6\nIf non-zero, skip the first line of the\nPyConfig.run_filename\nsource.It allows the usage of non-Unix forms of\n#!cmd\n. This is intended for a DOS specific hack only.Set to\n1\nby the-x\ncommand line option.Default:\n0\n.\n-\nwchar_t *stdio_encoding\u00b6\n-\nwchar_t *stdio_errors\u00b6\nEncoding and encoding errors of\nsys.stdin\n,sys.stdout\nandsys.stderr\n(butsys.stderr\nalways uses\"backslashreplace\"\nerror handler).Use the\nPYTHONIOENCODING\nenvironment variable if it is non-empty.Default encoding:\n\"UTF-8\"\nifPyPreConfig.utf8_mode\nis non-zero.Otherwise, use the locale encoding.\nDefault error handler:\nOn Windows: use\n\"surrogateescape\"\n.\"surrogateescape\"\nifPyPreConfig.utf8_mode\nis non-zero, or if the LC_CTYPE locale is \u201cC\u201d or \u201cPOSIX\u201d.\"strict\"\notherwise.\nSee also\nPyConfig.legacy_windows_stdio\n.\n-\nint tracemalloc\u00b6\nEnable tracemalloc?\nIf non-zero, call\ntracemalloc.start()\nat startup.Set by\n-X tracemalloc=N\ncommand line option and by thePYTHONTRACEMALLOC\nenvironment variable.Default:\n-1\nin Python mode,0\nin isolated mode.\n-\nint perf_profiling\u00b6\nEnable the Linux\nperf\nprofiler 
support?
If equal to 1, enable support for the Linux perf profiler.
If equal to 2, enable support for the Linux perf profiler with DWARF JIT support.
Set to 1 by the -X perf command-line option and the PYTHONPERFSUPPORT environment variable.
Set to 2 by the -X perf_jit command-line option and the PYTHON_PERF_JIT_SUPPORT environment variable.
Default: -1.
See also Python support for the Linux perf profiler for more information.
Added in version 3.12.
- wchar_t *stdlib_dir¶
Directory of the Python standard library.
Default: NULL.
Added in version 3.11.
- int use_environment¶
If equal to zero, ignore the environment variables.
Set to 0 by the -E command line option.
Default: 1 in Python config and 0 in isolated config.
- int use_system_logger¶
If non-zero, stdout and stderr will be redirected to the system log.
Only available on macOS 10.12 and later, and on iOS.
Default: 0 (don't use the system log) on macOS; 1 on iOS (use the system log).
Added in version 3.14.
- int user_site_directory¶
If non-zero, add the user site directory to sys.path.
Set to 0 by the -s and -I command line options.
Set to 0 by the PYTHONNOUSERSITE environment variable.
Default: 1 in Python mode, 0 in isolated mode.
- int verbose¶
Verbose mode. If greater than 0, print a message each time a module is imported, showing the place (filename or built-in module) from which it is loaded.
If greater than or equal to 2, print a message for each file that is checked for when searching for a module.
Also provides information on module cleanup at exit.
Incremented by the -v command line option.
Set by the PYTHONVERBOSE environment variable value.
Default: 0.
- PyWideStringList warnoptions¶
Options of the warnings module to build warnings filters, lowest to highest priority: sys.warnoptions.
The warnings module adds sys.warnoptions in the reverse order: the last PyConfig.warnoptions item becomes the first item of warnings.filters, which is checked first (highest priority).
The -W command line option adds its value to warnoptions; it can be used multiple times.
The PYTHONWARNINGS environment variable can also be used to add warning options. Multiple options can be specified, separated by commas (,).
Default: empty list.
- int write_bytecode¶
If equal to 0, Python won't try to write .pyc files on the import of source modules.
Set to 0 by the -B command line option and the PYTHONDONTWRITEBYTECODE environment variable.
sys.dont_write_bytecode is initialized to the inverted value of write_bytecode.
Default: 1.
- PyWideStringList xoptions¶
Values of the -X command line options: sys._xoptions.
Default: empty list.
- int _pystats¶
If non-zero, write performance statistics at Python exit.
Needs a special build with the Py_STATS macro: see --enable-pystats.
Default: 0.
If parse_argv is non-zero, argv arguments are parsed the same way the regular Python parses command line arguments, and Python arguments are stripped from argv.
The xoptions options are parsed to set other options: see the -X command line option.
Changed in version 3.9: The show_alloc_count field has been removed.
Initialization with PyConfig¶
Initializing the interpreter from a populated configuration struct is handled by calling Py_InitializeFromConfig().
The caller is responsible for handling exceptions (error or exit) using PyStatus_Exception() and
Py_ExitStatusException()\n.\nIf PyImport_FrozenModules()\n, PyImport_AppendInittab()\nor\nPyImport_ExtendInittab()\nare used, they must be set or called after\nPython preinitialization and before the Python initialization. If Python is\ninitialized multiple times, PyImport_AppendInittab()\nor\nPyImport_ExtendInittab()\nmust be called before each Python\ninitialization.\nThe current configuration (PyConfig\ntype) is stored in\nPyInterpreterState.config\n.\nExample setting the program name:\nvoid init_python(void)\n{\nPyStatus status;\nPyConfig config;\nPyConfig_InitPythonConfig(&config);\n/* Set the program name. Implicitly preinitialize Python. */\nstatus = PyConfig_SetString(&config, &config.program_name,\nL\"/path/to/my_program\");\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nstatus = Py_InitializeFromConfig(&config);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nPyConfig_Clear(&config);\nreturn;\nexception:\nPyConfig_Clear(&config);\nPy_ExitStatusException(status);\n}\nMore complete example modifying the default configuration, read the configuration, and then override some parameters. Note that since 3.11, many parameters are not calculated until initialization, and so values cannot be read from the configuration structure. Any values set before initialize is called will be left unchanged by initialization:\nPyStatus init_python(const char *program_name)\n{\nPyStatus status;\nPyConfig config;\nPyConfig_InitPythonConfig(&config);\n/* Set the program name before reading the configuration\n(decode byte string from the locale encoding).\nImplicitly preinitialize Python. 
*/\nstatus = PyConfig_SetBytesString(&config, &config.program_name,\nprogram_name);\nif (PyStatus_Exception(status)) {\ngoto done;\n}\n/* Read all configuration at once */\nstatus = PyConfig_Read(&config);\nif (PyStatus_Exception(status)) {\ngoto done;\n}\n/* Specify sys.path explicitly */\n/* If you want to modify the default set of paths, finish\ninitialization first and then use PySys_GetObject(\"path\") */\nconfig.module_search_paths_set = 1;\nstatus = PyWideStringList_Append(&config.module_search_paths,\nL\"/path/to/stdlib\");\nif (PyStatus_Exception(status)) {\ngoto done;\n}\nstatus = PyWideStringList_Append(&config.module_search_paths,\nL\"/path/to/more/modules\");\nif (PyStatus_Exception(status)) {\ngoto done;\n}\n/* Override executable computed by PyConfig_Read() */\nstatus = PyConfig_SetString(&config, &config.executable,\nL\"/path/to/my_executable\");\nif (PyStatus_Exception(status)) {\ngoto done;\n}\nstatus = Py_InitializeFromConfig(&config);\ndone:\nPyConfig_Clear(&config);\nreturn status;\n}\nIsolated Configuration\u00b6\nPyPreConfig_InitIsolatedConfig()\nand\nPyConfig_InitIsolatedConfig()\nfunctions create a configuration to\nisolate Python from the system. For example, to embed Python into an\napplication.\nThis configuration ignores global configuration variables, environment\nvariables, command line arguments (PyConfig.argv\nis not parsed)\nand user site directory. The C standard streams (ex: stdout\n) and the\nLC_CTYPE locale are left unchanged. Signal handlers are not installed.\nConfiguration files are still used with this configuration to determine\npaths that are unspecified. 
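A command-line analogue may make this concrete. The sketch below (not part of the original page) uses the -I option, which similarly ignores environment variables and the user site directory; PYTHONDONTWRITEBYTECODE here is just an arbitrary environment variable whose value the isolated child ignores:

```python
import os
import subprocess
import sys

# Run a child interpreter in isolated mode (-I implies -E and -s).
# sys.flags reports whether isolation and environment-ignoring are active.
env = dict(os.environ, PYTHONDONTWRITEBYTECODE="1")
result = subprocess.run(
    [sys.executable, "-I", "-c",
     "import sys; print(sys.flags.isolated, sys.flags.ignore_environment)"],
    capture_output=True, text=True, env=env, check=True,
)
print(result.stdout.strip())  # 1 1
```

This only mirrors the spirit of PyPreConfig_InitIsolatedConfig(); an embedding application sets the configuration through the C API rather than command line options.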
Ensure PyConfig.home\nis specified\nto avoid computing the default path configuration.\nPython Configuration\u00b6\nPyPreConfig_InitPythonConfig()\nand PyConfig_InitPythonConfig()\nfunctions create a configuration to build a customized Python which behaves like\nthe regular Python.\nEnvironment variables and command line arguments are used to configure Python, whereas global configuration variables are ignored.\nThese functions enable C locale coercion (PEP 538)\nand Python UTF-8 Mode\n(PEP 540) depending on the LC_CTYPE locale, PYTHONUTF8\nand\nPYTHONCOERCECLOCALE\nenvironment variables.\nPython Path Configuration\u00b6\nPyConfig\ncontains multiple fields for the path configuration:\nPath configuration inputs:\ncurrent working directory: to get absolute paths\nPATH\nenvironment variable to get the program's full path (from PyConfig.program_name\n)\n__PYVENV_LAUNCHER__\nenvironment variable\n(Windows only) Application paths in the registry under \u201cSoftware\\Python\\PythonCore\\X.Y\\PythonPath\u201d of HKEY_CURRENT_USER and HKEY_LOCAL_MACHINE (where X.Y is the Python version).\nPath configuration output fields:\nIf at least one \u201coutput field\u201d is not set, Python calculates the path\nconfiguration to fill unset fields. If\nmodule_search_paths_set\nis equal to 0\n,\nmodule_search_paths\nis overridden and\nmodule_search_paths_set\nis set to 1\n.\nIt is possible to completely ignore the function calculating the default\npath configuration by setting explicitly all path configuration output\nfields listed above. A string is considered as set even if it is empty.\nmodule_search_paths\nis considered as set if\nmodule_search_paths_set\nis set to 1\n.
In this case,\nmodule_search_paths\nwill be used without modification.\nSet pathconfig_warnings\nto 0\nto suppress warnings when\ncalculating the path configuration (Unix only, Windows does not log any warning).\nIf base_prefix\nor base_exec_prefix\nfields are not set, they inherit their value from prefix\nand exec_prefix\nrespectively.\nPy_RunMain()\nand Py_Main()\nmodify sys.path\n:\nIf\nrun_filename\nis set and is a directory which contains a __main__.py\nscript, prepend run_filename\nto sys.path\n.\nIf\nisolated\nis zero:\nIf\nrun_module\nis set, prepend the current directory to sys.path\n. Do nothing if the current directory cannot be read.\nIf\nrun_filename\nis set, prepend the directory of the filename to sys.path\n.\nOtherwise, prepend an empty string to\nsys.path\n.\nIf site_import\nis non-zero, sys.path\ncan be\nmodified by the site\nmodule. If\nuser_site_directory\nis non-zero and the user\u2019s\nsite-package directory exists, the site\nmodule appends the user\u2019s\nsite-package directory to sys.path\n.\nThe following configuration files are used by the path configuration:\npyvenv.cfg\n._pth\nfile (ex: python._pth\n)\npybuilddir.txt\n(Unix only)\nIf a ._pth\nfile is present:\nSet\nisolated\nto 1\n.\nSet\nuse_environment\nto 0\n.\nSet\nsite_import\nto 0\n.\nSet\nsafe_path\nto 1\n.\nIf home\nis not set and a pyvenv.cfg\nfile is present in\nthe same directory as executable\n, or its parent,\nprefix\nand exec_prefix\nare set to that\nlocation. When this happens, base_prefix\nand\nbase_exec_prefix\nstill keep their value, pointing to the\nbase installation. See Virtual Environments for more\ninformation.\nThe __PYVENV_LAUNCHER__\nenvironment variable is used to set\nPyConfig.base_executable\n.\nChanged in version 3.14: prefix\nand exec_prefix\nare now\nset to the pyvenv.cfg\ndirectory.
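The sys.path rules above can be sketched as a small Python model. The helper name below is invented for illustration and is not part of the C API; it only mirrors the decision order described in this section, in simplified form:

```python
import os

def initial_sys_path_entry(run_filename=None, run_module=None, isolated=False):
    """Illustrative, simplified model of what Py_RunMain() prepends to sys.path."""
    # A directory containing a __main__.py script is prepended as-is.
    if (run_filename is not None and os.path.isdir(run_filename)
            and os.path.exists(os.path.join(run_filename, "__main__.py"))):
        return run_filename
    if not isolated:
        if run_module is not None:
            return os.curdir          # -m: the current directory
        if run_filename is not None:
            return os.path.dirname(os.path.abspath(run_filename))
        return ""                     # interactive / stdin: empty string
    return None                       # isolated mode: prepend nothing

print(initial_sys_path_entry(run_module="json"))  # .
print(initial_sys_path_entry(isolated=True))      # None
```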
This was previously done by site\n,\ntherefore affected by -S\n.\nPy_GetArgcArgv()\u00b6\n-\nvoid Py_GetArgcArgv(int *argc, wchar_t ***argv)\u00b6\nGet the original command line arguments, before Python modified them.\nSee also\nPyConfig.orig_argv\nmember.\nDelaying main module execution\u00b6\nIn some embedding use cases, it may be desirable to separate interpreter initialization from the execution of the main module.\nThis separation can be achieved by setting PyConfig.run_command\nto the empty\nstring during initialization (to prevent the interpreter from dropping into the\ninteractive prompt), and then subsequently executing the desired main module\ncode using __main__.__dict__\nas the global namespace.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 12138} +{"url": "https://docs.python.org/3/c-api/datetime.html", "title": "DateTime Objects", "content": "DateTime Objects\u00b6\nVarious date and time objects are supplied by the datetime\nmodule.\nBefore using any of these functions, the header file datetime.h\nmust be\nincluded in your source (note that this is not included by Python.h\n),\nand the macro PyDateTime_IMPORT\nmust be invoked, usually as part of\nthe module initialisation function. The macro puts a pointer to a C structure\ninto a static variable, PyDateTimeAPI\n, that is used by the following\nmacros.\n-\nPyDateTime_IMPORT()\u00b6\nImport the datetime C API.\nOn success, populate the\nPyDateTimeAPI\npointer. On failure, setPyDateTimeAPI\ntoNULL\nand set an exception. 
The caller must check if an error occurred viaPyErr_Occurred()\n:PyDateTime_IMPORT; if (PyErr_Occurred()) { /* cleanup */ }\nWarning\nThis is not compatible with subinterpreters.\n-\ntype PyDateTime_CAPI\u00b6\nStructure containing the fields for the datetime C API.\nThe fields of this structure are private and subject to change.\nDo not use this directly; prefer\nPyDateTime_*\nAPIs instead.\n-\nPyDateTime_CAPI *PyDateTimeAPI\u00b6\nDynamically allocated object containing the datetime C API.\nThis variable is only available once\nPyDateTime_IMPORT\nsucceeds.\n-\ntype PyDateTime_Delta\u00b6\nThis subtype of\nPyObject\nrepresents the difference between two datetime values.\n-\nPyTypeObject PyDateTime_DateType\u00b6\nThis instance of\nPyTypeObject\nrepresents the Python date type; it is the same object asdatetime.date\nin the Python layer.\n-\nPyTypeObject PyDateTime_DateTimeType\u00b6\nThis instance of\nPyTypeObject\nrepresents the Python datetime type; it is the same object asdatetime.datetime\nin the Python layer.\n-\nPyTypeObject PyDateTime_TimeType\u00b6\nThis instance of\nPyTypeObject\nrepresents the Python time type; it is the same object asdatetime.time\nin the Python layer.\n-\nPyTypeObject PyDateTime_DeltaType\u00b6\nThis instance of\nPyTypeObject\nrepresents the Python type for the difference between two datetime values; it is the same object asdatetime.timedelta\nin the Python layer.\n-\nPyTypeObject PyDateTime_TZInfoType\u00b6\nThis instance of\nPyTypeObject\nrepresents the Python time zone info type; it is the same object asdatetime.tzinfo\nin the Python layer.\nMacro for access to the UTC singleton:\n-\nPyObject *PyDateTime_TimeZone_UTC\u00b6\nReturns the time zone singleton representing UTC, the same object as\ndatetime.timezone.utc\n.Added in version 3.7.\nType-check macros:\n-\nint PyDate_Check(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_DateType\nor a subtype ofPyDateTime_DateType\n. ob must not beNULL\n. 
This function always succeeds.\n-\nint PyDate_CheckExact(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_DateType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyDateTime_Check(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_DateTimeType\nor a subtype ofPyDateTime_DateTimeType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyDateTime_CheckExact(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_DateTimeType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyTime_Check(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_TimeType\nor a subtype ofPyDateTime_TimeType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyTime_CheckExact(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_TimeType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyDelta_Check(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_DeltaType\nor a subtype ofPyDateTime_DeltaType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyDelta_CheckExact(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_DeltaType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyTZInfo_Check(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_TZInfoType\nor a subtype ofPyDateTime_TZInfoType\n. ob must not beNULL\n. This function always succeeds.\n-\nint PyTZInfo_CheckExact(PyObject *ob)\u00b6\nReturn true if ob is of type\nPyDateTime_TZInfoType\n. ob must not beNULL\n. 
This function always succeeds.\nMacros to create objects:\n-\nPyObject *PyDate_FromDate(int year, int month, int day)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.date\nobject with the specified year, month and day.\n-\nPyObject *PyDateTime_FromDateAndTime(int year, int month, int day, int hour, int minute, int second, int usecond)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.datetime\nobject with the specified year, month, day, hour, minute, second and microsecond.\n-\nPyObject *PyDateTime_FromDateAndTimeAndFold(int year, int month, int day, int hour, int minute, int second, int usecond, int fold)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.datetime\nobject with the specified year, month, day, hour, minute, second, microsecond and fold.Added in version 3.6.\n-\nPyObject *PyTime_FromTime(int hour, int minute, int second, int usecond)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.time\nobject with the specified hour, minute, second and microsecond.\n-\nPyObject *PyTime_FromTimeAndFold(int hour, int minute, int second, int usecond, int fold)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.time\nobject with the specified hour, minute, second, microsecond and fold.Added in version 3.6.\n-\nPyObject *PyDelta_FromDSU(int days, int seconds, int useconds)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.timedelta\nobject representing the given number of days, seconds and microseconds. 
Normalization is performed so that the resulting number of microseconds and seconds lie in the ranges documented fordatetime.timedelta\nobjects.\n-\nPyObject *PyTimeZone_FromOffset(PyObject *offset)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.timezone\nobject with an unnamed fixed offset represented by the offset argument.Added in version 3.7.\n-\nPyObject *PyTimeZone_FromOffsetAndName(PyObject *offset, PyObject *name)\u00b6\n- Return value: New reference.\nReturn a\ndatetime.timezone\nobject with a fixed offset represented by the offset argument and with tzname name.Added in version 3.7.\nMacros to extract fields from date objects. The argument must be an instance of\nPyDateTime_Date\n, including subclasses (such as\nPyDateTime_DateTime\n). The argument must not be NULL\n, and the type is\nnot checked:\n-\nint PyDateTime_GET_YEAR(PyDateTime_Date *o)\u00b6\nReturn the year, as a positive int.\n-\nint PyDateTime_GET_MONTH(PyDateTime_Date *o)\u00b6\nReturn the month, as an int from 1 through 12.\n-\nint PyDateTime_GET_DAY(PyDateTime_Date *o)\u00b6\nReturn the day, as an int from 1 through 31.\nMacros to extract fields from datetime objects. The argument must be an\ninstance of PyDateTime_DateTime\n, including subclasses. 
The argument\nmust not be NULL\n, and the type is not checked:\n-\nint PyDateTime_DATE_GET_HOUR(PyDateTime_DateTime *o)\u00b6\nReturn the hour, as an int from 0 through 23.\n-\nint PyDateTime_DATE_GET_MINUTE(PyDateTime_DateTime *o)\u00b6\nReturn the minute, as an int from 0 through 59.\n-\nint PyDateTime_DATE_GET_SECOND(PyDateTime_DateTime *o)\u00b6\nReturn the second, as an int from 0 through 59.\n-\nint PyDateTime_DATE_GET_MICROSECOND(PyDateTime_DateTime *o)\u00b6\nReturn the microsecond, as an int from 0 through 999999.\n-\nint PyDateTime_DATE_GET_FOLD(PyDateTime_DateTime *o)\u00b6\nReturn the fold, as an int from 0 through 1.\nAdded in version 3.6.\n-\nPyObject *PyDateTime_DATE_GET_TZINFO(PyDateTime_DateTime *o)\u00b6\nReturn the tzinfo (which may be\nNone\n).Added in version 3.10.\nMacros to extract fields from time objects. The argument must be an instance of\nPyDateTime_Time\n, including subclasses. The argument must not be NULL\n,\nand the type is not checked:\n-\nint PyDateTime_TIME_GET_HOUR(PyDateTime_Time *o)\u00b6\nReturn the hour, as an int from 0 through 23.\n-\nint PyDateTime_TIME_GET_MINUTE(PyDateTime_Time *o)\u00b6\nReturn the minute, as an int from 0 through 59.\n-\nint PyDateTime_TIME_GET_SECOND(PyDateTime_Time *o)\u00b6\nReturn the second, as an int from 0 through 59.\n-\nint PyDateTime_TIME_GET_MICROSECOND(PyDateTime_Time *o)\u00b6\nReturn the microsecond, as an int from 0 through 999999.\n-\nint PyDateTime_TIME_GET_FOLD(PyDateTime_Time *o)\u00b6\nReturn the fold, as an int from 0 through 1.\nAdded in version 3.6.\n-\nPyObject *PyDateTime_TIME_GET_TZINFO(PyDateTime_Time *o)\u00b6\nReturn the tzinfo (which may be\nNone\n).Added in version 3.10.\nMacros to extract fields from time delta objects. The argument must be an\ninstance of PyDateTime_Delta\n, including subclasses. 
The argument must\nnot be NULL\n, and the type is not checked:\n-\nint PyDateTime_DELTA_GET_DAYS(PyDateTime_Delta *o)\u00b6\nReturn the number of days, as an int from -999999999 to 999999999.\nAdded in version 3.3.\n-\nint PyDateTime_DELTA_GET_SECONDS(PyDateTime_Delta *o)\u00b6\nReturn the number of seconds, as an int from 0 through 86399.\nAdded in version 3.3.\n-\nint PyDateTime_DELTA_GET_MICROSECONDS(PyDateTime_Delta *o)\u00b6\nReturn the number of microseconds, as an int from 0 through 999999.\nAdded in version 3.3.\nMacros for the convenience of modules implementing the DB API:\n-\nPyObject *PyDateTime_FromTimestamp(PyObject *args)\u00b6\n- Return value: New reference.\nCreate and return a new\ndatetime.datetime\nobject given an argument tuple suitable for passing todatetime.datetime.fromtimestamp()\n.\n-\nPyObject *PyDate_FromTimestamp(PyObject *args)\u00b6\n- Return value: New reference.\nCreate and return a new\ndatetime.date\nobject given an argument tuple suitable for passing todatetime.date.fromtimestamp()\n.\nInternal data\u00b6\nThe following symbols are exposed by the C API but should be considered internal-only.\n-\nPyDateTime_CAPSULE_NAME\u00b6\nName of the datetime capsule to pass to\nPyCapsule_Import()\n.Internal usage only. Use\nPyDateTime_IMPORT\ninstead.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2358} +{"url": "https://docs.python.org/3/library/imghdr.html", "title": " \u2014 Determine the type of an image", "content": "imghdr\n\u2014 Determine the type of an image\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nPossible replacements are third-party libraries from PyPI: filetype, puremagic, or python-magic. 
These are not supported or maintained by the Python core team.\nThe last version of Python that provided the imghdr\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 125} +{"url": "https://docs.python.org/3/library/distutils.html", "title": " \u2014 Building and installing Python modules", "content": "distutils\n\u2014 Building and installing Python modules\u00b6\nDeprecated since version 3.10, removed in version 3.12.\nThis module is no longer part of the Python standard library. It was removed in Python 3.12 after being deprecated in Python 3.10. The removal was decided in PEP 632, which has migration advice.\nThe last version of Python that provided the distutils\nmodule was\nPython 3.11.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 95} +{"url": "https://docs.python.org/3/library/crypt.html", "title": " \u2014 Function to check Unix passwords", "content": "crypt\n\u2014 Function to check Unix passwords\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nApplications can use the hashlib\nmodule from the standard library.\nOther possible replacements are third-party libraries from PyPI:\nlegacycrypt, bcrypt, or argon2-cffi.\nThese are not supported or maintained by the Python core team.\nThe last version of Python that provided the crypt\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 143} +{"url": "https://docs.python.org/3/library/chunk.html", "title": " \u2014 Read IFF chunked data", "content": "chunk\n\u2014 Read IFF chunked data\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. 
The removal was decided in PEP 594.\nThe last version of Python that provided the chunk\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 82} +{"url": "https://docs.python.org/3/c-api/lifecycle.html", "title": "Object Life Cycle", "content": "Object Life Cycle\u00b6\nThis section explains how a type\u2019s slots relate to each other throughout the life of an object. It is not intended to be a complete canonical reference for the slots; instead, refer to the slot-specific documentation in Type Object Structures for details about a particular slot.\nLife Events\u00b6\nThe figure below illustrates the order of events that can occur throughout an object\u2019s life. An arrow from A to B indicates that event B can occur after event A has occurred, with the arrow\u2019s label indicating the condition that must be true for B to occur after A.\nExplanation:\nWhen a new object is constructed by calling its type:\ntp_new\nis called to create a new object.tp_alloc\nis directly called bytp_new\nto allocate the memory for the new object.tp_init\ninitializes the newly created object.tp_init\ncan be called again to re-initialize an object, if desired. Thetp_init\ncall can also be skipped entirely, for example by Python code calling__new__()\n.\nAfter\ntp_init\ncompletes, the object is ready to use.Some time after the last reference to an object is removed:\nIf an object is not marked as finalized, it might be finalized by marking it as finalized and calling its\ntp_finalize\nfunction. Python does not finalize an object when the last reference to it is deleted; usePyObject_CallFinalizerFromDealloc()\nto ensure thattp_finalize\nis always called.If the object is marked as finalized,\ntp_clear\nmight be called by the garbage collector to clear references held by the object. It is not called when the object\u2019s reference count reaches zero.tp_dealloc\nis called to destroy the object. 
To avoid code duplication,tp_dealloc\ntypically calls intotp_clear\nto free up the object\u2019s references.When\ntp_dealloc\nfinishes object destruction, it directly callstp_free\n(usually set toPyObject_Free()\norPyObject_GC_Del()\nautomatically as appropriate for the type) to deallocate the memory.\nThe\ntp_finalize\nfunction is permitted to add a reference to the object if desired. If it does, the object is resurrected, preventing its pending destruction. (Onlytp_finalize\nis allowed to resurrect an object;tp_clear\nandtp_dealloc\ncannot without calling intotp_finalize\n.) Resurrecting an object may or may not cause the object\u2019s finalized mark to be removed. Currently, Python does not remove the finalized mark from a resurrected object if it supports garbage collection (i.e., thePy_TPFLAGS_HAVE_GC\nflag is set) but does remove the mark if the object does not support garbage collection; either or both of these behaviors may change in the future.tp_dealloc\ncan optionally calltp_finalize\nviaPyObject_CallFinalizerFromDealloc()\nif it wishes to reuse that code to help with object destruction. This is recommended because it guarantees thattp_finalize\nis always called before destruction. See thetp_dealloc\ndocumentation for example code.If the object is a member of a cyclic isolate and either\ntp_clear\nfails to break the reference cycle or the cyclic isolate is not detected (perhapsgc.disable()\nwas called, or thePy_TPFLAGS_HAVE_GC\nflag was erroneously omitted in one of the involved types), the objects remain indefinitely uncollectable (they \u201cleak\u201d). 
Seegc.garbage\n.\nIf the object is marked as supporting garbage collection (the\nPy_TPFLAGS_HAVE_GC\nflag is set in\ntp_flags\n), the following events are also possible:\nThe garbage collector occasionally calls\ntp_traverse\nto identify cyclic isolates.When the garbage collector discovers a cyclic isolate, it finalizes one of the objects in the group by marking it as finalized and calling its\ntp_finalize\nfunction, if it has one. This repeats until the cyclic isolate doesn\u2019t exist or all of the objects have been finalized.tp_finalize\nis permitted to resurrect the object by adding a reference from outside the cyclic isolate. The new reference causes the group of objects to no longer form a cyclic isolate (the reference cycle may still exist, but if it does the objects are no longer isolated).When the garbage collector discovers a cyclic isolate and all of the objects in the group have already been marked as finalized, the garbage collector clears one or more of the uncleared objects in the group (possibly concurrently) by calling each\u2019s\ntp_clear\nfunction. This repeats as long as the cyclic isolate still exists and not all of the objects have been cleared.\nCyclic Isolate Destruction\u00b6\nListed below are the stages of life of a hypothetical cyclic isolate\nthat continues to exist after each member object is finalized or cleared. It\nis a memory leak if a cyclic isolate progresses through all of these stages; it should\nvanish once all objects are cleared, if not sooner. A cyclic isolate can\nvanish either because the reference cycle is broken or because the objects are\nno longer isolated due to finalizer resurrection (see\ntp_finalize\n).\nReachable (not yet a cyclic isolate): All objects are in their normal, reachable state. 
A reference cycle could exist, but an external reference means the objects are not yet isolated.\nUnreachable but consistent: The final reference from outside the cyclic group of objects has been removed, causing the objects to become isolated (thus a cyclic isolate is born). None of the group\u2019s objects have been finalized or cleared yet. The cyclic isolate remains at this stage until some future run of the garbage collector (not necessarily the next run because the next run might not scan every object).\nMix of finalized and not finalized: Objects in a cyclic isolate are finalized one at a time, which means that there is a period of time when the cyclic isolate is composed of a mix of finalized and non-finalized objects. Finalization order is unspecified, so it can appear random. A finalized object must behave in a sane manner when non-finalized objects interact with it, and a non-finalized object must be able to tolerate the finalization of an arbitrary subset of its referents.\nAll finalized: All objects in a cyclic isolate are finalized before any of them are cleared.\nMix of finalized and cleared: The objects can be cleared serially or concurrently (but with the GIL held); either way, some will finish before others. A finalized object must be able to tolerate the clearing of a subset of its referents. PEP 442 calls this stage \u201ccyclic trash\u201d.\nLeaked: If a cyclic isolate still exists after all objects in the group have been finalized and cleared, then the objects remain indefinitely uncollectable (see\ngc.garbage\n). It is a bug if a cyclic isolate reaches this stage\u2014it means thetp_clear\nmethods of the participating objects have failed to break the reference cycle as required.\nIf tp_clear\ndid not exist, then Python would have no\nway to safely break a reference cycle. 
Simply destroying an object in a cyclic\nisolate would result in a dangling pointer, triggering undefined behavior when\nan object referencing the destroyed object is itself destroyed. The clearing\nstep makes object destruction a two-phase process: first\ntp_clear\nis called to partially destroy the objects\nenough to detangle them from each other, then\ntp_dealloc\nis called to complete the destruction.\nUnlike clearing, finalization is not a phase of destruction. A finalized\nobject must still behave properly by continuing to fulfill its design\ncontracts. An object\u2019s finalizer is allowed to execute arbitrary Python code,\nand is even allowed to prevent the impending destruction by adding a reference.\nThe finalizer is only related to destruction by call order\u2014if it runs, it runs\nbefore destruction, which starts with tp_clear\n(if\ncalled) and concludes with tp_dealloc\n.\nThe finalization step is not necessary to safely reclaim the objects in a cyclic isolate, but its existence makes it easier to design types that behave in a sane manner when objects are cleared. Clearing an object might necessarily leave it in a broken, partially destroyed state\u2014it might be unsafe to call any of the cleared object\u2019s methods or access any of its attributes. 
With finalization, only finalized objects can possibly interact with cleared objects; non-finalized objects are guaranteed to interact with only non-cleared (but potentially finalized) objects.\nTo summarize the possible interactions:\nA non-finalized object might have references to or from non-finalized and finalized objects, but not to or from cleared objects.\nA finalized object might have references to or from non-finalized, finalized, and cleared objects.\nA cleared object might have references to or from finalized and cleared objects, but not to or from non-finalized objects.\nWithout any reference cycles, an object can be simply destroyed once its last\nreference is deleted; the finalization and clearing steps are not necessary to\nsafely reclaim unused objects. However, it can be useful to automatically call\ntp_finalize\nand tp_clear\nbefore destruction anyway because type design is simplified when all objects\nalways experience the same series of events regardless of whether they\nparticipated in a cyclic isolate. Python currently only calls\ntp_finalize\nand tp_clear\nas\nneeded to destroy a cyclic isolate; this may change in a future version.\nFunctions\u00b6\nTo allocate and free memory, see Allocating Objects on the Heap.\n-\nvoid PyObject_CallFinalizer(PyObject *op)\u00b6\nFinalizes the object as described in\ntp_finalize\n. Call this function (orPyObject_CallFinalizerFromDealloc()\n) instead of callingtp_finalize\ndirectly because this function may deduplicate multiple calls totp_finalize\n. Currently, calls are only deduplicated if the type supports garbage collection (i.e., thePy_TPFLAGS_HAVE_GC\nflag is set); this may change in the future.Added in version 3.4.\n-\nint PyObject_CallFinalizerFromDealloc(PyObject *op)\u00b6\nSame as\nPyObject_CallFinalizer()\nbut meant to be called at the beginning of the object\u2019s destructor (tp_dealloc\n). There must not be any references to the object. 
If the object\u2019s finalizer resurrects the object, this function returns -1; no further destruction should happen. Otherwise, this function returns 0 and destruction can continue normally.Added in version 3.4.\nSee also\ntp_dealloc\nfor example code.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2516} +{"url": "https://docs.python.org/3/library/cgitb.html", "title": " \u2014 Traceback manager for CGI scripts", "content": "cgitb\n\u2014 Traceback manager for CGI scripts\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nA fork of the module on PyPI can now be used instead: legacy-cgi. This is a copy of the cgi module, no longer maintained or supported by the core Python team.\nThe last version of Python that provided the cgitb\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 125} +{"url": "https://docs.python.org/3/library/cgi.html", "title": " \u2014 Common Gateway Interface support", "content": "cgi\n\u2014 Common Gateway Interface support\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nA fork of the module on PyPI can be used instead: legacy-cgi. 
This is a copy of the cgi module, no longer maintained or supported by the core Python team.\nThe last version of Python that provided the cgi\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 122} +{"url": "https://docs.python.org/3/c-api/allocation.html", "title": "Allocating Objects on the Heap", "content": "Allocating Objects on the Heap\u00b6\n-\nPyObject *_PyObject_New(PyTypeObject *type)\u00b6\n- Return value: New reference.\n-\nPyVarObject *_PyObject_NewVar(PyTypeObject *type, Py_ssize_t size)\u00b6\n- Return value: New reference.\n-\nPyObject *PyObject_Init(PyObject *op, PyTypeObject *type)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nInitialize a newly allocated object op with its type and initial reference. Returns the initialized object. Other fields of the object are not initialized. Despite its name, this function is unrelated to the object\u2019s\n__init__()\nmethod (tp_init\nslot). Specifically, this function does not call the object\u2019s__init__()\nmethod.In general, consider this function to be a low-level routine. Use\ntp_alloc\nwhere possible. For implementingtp_alloc\nfor your type, preferPyType_GenericAlloc()\norPyObject_New()\n.Note\nThis function only initializes the object\u2019s memory corresponding to the initial\nPyObject\nstructure. It does not zero the rest.\n-\nPyVarObject *PyObject_InitVar(PyVarObject *op, PyTypeObject *type, Py_ssize_t size)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nThis does everything\nPyObject_Init()\ndoes, and also initializes the length information for a variable-size object.Note\nThis function only initializes some of the object\u2019s memory. 
It does not zero the rest.\n-\nPyObject_New(TYPE, typeobj)\u00b6\nAllocates a new Python object using the C structure type TYPE and the Python type object typeobj (\nPyTypeObject*\n) by callingPyObject_Malloc()\nto allocate memory and initializing it likePyObject_Init()\n. The caller will own the only reference to the object (i.e. its reference count will be one).Avoid calling this directly to allocate memory for an object; call the type\u2019s\ntp_alloc\nslot instead.When populating a type\u2019s\ntp_alloc\nslot,PyType_GenericAlloc()\nis preferred over a custom function that simply calls this macro.This macro does not call\ntp_alloc\n,tp_new\n(__new__()\n), ortp_init\n(__init__()\n).This cannot be used for objects with\nPy_TPFLAGS_HAVE_GC\nset intp_flags\n; usePyObject_GC_New\ninstead.Memory allocated by this macro must be freed with\nPyObject_Free()\n(usually called via the object\u2019stp_free\nslot).Note\nThe returned memory is not guaranteed to have been completely zeroed before it was initialized.\nNote\nThis macro does not construct a fully initialized object of the given type; it merely allocates memory and prepares it for further initialization by\ntp_init\n. To construct a fully initialized object, call typeobj instead. For example:PyObject *foo = PyObject_CallNoArgs((PyObject *)&PyFoo_Type);\n-\nPyObject_NewVar(TYPE, typeobj, size)\u00b6\nLike\nPyObject_New\nexcept:It allocates enough memory for the TYPE structure plus size (\nPy_ssize_t\n) fields of the size given by thetp_itemsize\nfield of typeobj.The memory is initialized like\nPyObject_InitVar()\n.\nThis is useful for implementing objects like tuples, which are able to determine their size at construction time. 
Embedding the array of fields into the same allocation decreases the number of allocations, improving the memory management efficiency.\nAvoid calling this directly to allocate memory for an object; call the type\u2019s\ntp_alloc\nslot instead.When populating a type\u2019s\ntp_alloc\nslot,PyType_GenericAlloc()\nis preferred over a custom function that simply calls this macro.This cannot be used for objects with\nPy_TPFLAGS_HAVE_GC\nset intp_flags\n; usePyObject_GC_NewVar\ninstead.Memory allocated by this function must be freed with\nPyObject_Free()\n(usually called via the object\u2019stp_free\nslot).Note\nThe returned memory is not guaranteed to have been completely zeroed before it was initialized.\nNote\nThis macro does not construct a fully initialized object of the given type; it merely allocates memory and prepares it for further initialization by\ntp_init\n. To construct a fully initialized object, call typeobj instead. For example:PyObject *list_instance = PyObject_CallNoArgs((PyObject *)&PyList_Type);\n-\nPyObject _Py_NoneStruct\u00b6\nObject which is visible in Python as\nNone\n. This should only be accessed using thePy_None\nmacro, which evaluates to a pointer to this object.\nSee also\n- Module Objects\nTo allocate and create extension modules.\nDeprecated aliases\u00b6\nThese are soft deprecated aliases to existing functions and macros. They exist solely for backwards compatibility.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1092} +{"url": "https://docs.python.org/3/c-api/buffer.html", "title": "Buffer Protocol", "content": "Buffer Protocol\u00b6\nCertain objects available in Python wrap access to an underlying memory\narray or buffer. 
Such objects include the built-in bytes\nand\nbytearray\n, and some extension types like array.array\n.\nThird-party libraries may define their own types for special purposes, such\nas image processing or numeric analysis.\nWhile each of these types have their own semantics, they share the common characteristic of being backed by a possibly large memory buffer. It is then desirable, in some situations, to access that buffer directly and without intermediate copying.\nPython provides such a facility at the C and Python level in the form of the buffer protocol. This protocol has two sides:\non the producer side, a type can export a \u201cbuffer interface\u201d which allows objects of that type to expose information about their underlying buffer. This interface is described in the section Buffer Object Structures; for Python see Emulating buffer types.\non the consumer side, several means are available to obtain a pointer to the raw underlying data of an object (for example a method parameter). For Python see\nmemoryview\n.\nSimple objects such as bytes\nand bytearray\nexpose their\nunderlying buffer in byte-oriented form. Other forms are possible; for example,\nthe elements exposed by an array.array\ncan be multi-byte values.\nAn example consumer of the buffer interface is the write()\nmethod of file objects: any object that can export a series of bytes through\nthe buffer interface can be written to a file. While write()\nonly\nneeds read-only access to the internal contents of the object passed to it,\nother methods such as readinto()\nneed write access\nto the contents of their argument. 
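The producer/consumer split described above can be observed from pure Python as well; a minimal sketch (not the C API itself), using `bytearray` as a writable-buffer exporter, `bytes` as a read-only one, and `readinto()` as a write-access consumer:

```python
import io

data = io.BytesIO(b"hello world")
sink = bytearray(5)          # producer: exports a writable buffer
n = data.readinto(sink)      # consumer: writes into the exported buffer
assert n == 5 and bytes(sink) == b"hello"

view = memoryview(b"hello")  # bytes exports a read-only buffer
assert view.readonly
try:
    view[0] = 0              # writing through a read-only buffer fails
except TypeError:
    pass                     # exporter rejected write access
```

The same `TypeError` would be reported at the C level as a failed `PyBUF_WRITABLE` request.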
The buffer interface allows objects to\nselectively allow or reject exporting of read-write and read-only buffers.\nThere are two ways for a consumer of the buffer interface to acquire a buffer over a target object:\ncall\nPyObject_GetBuffer()\nwith the right parameters;call\nPyArg_ParseTuple()\n(or one of its siblings) with one of they*\n,w*\nors*\nformat codes.\nIn both cases, PyBuffer_Release()\nmust be called when the buffer\nisn\u2019t needed anymore. Failure to do so could lead to various issues such as\nresource leaks.\nAdded in version 3.12: The buffer protocol is now accessible in Python, see\nEmulating buffer types and memoryview\n.\nBuffer structure\u00b6\nBuffer structures (or simply \u201cbuffers\u201d) are useful as a way to expose the binary data from another object to the Python programmer. They can also be used as a zero-copy slicing mechanism. Using their ability to reference a block of memory, it is possible to expose any data to the Python programmer quite easily. The memory could be a large, constant array in a C extension, it could be a raw block of memory for manipulation before passing to an operating system library, or it could be used to pass around structured data in its native, in-memory format.\nContrary to most data types exposed by the Python interpreter, buffers\nare not PyObject\npointers but rather simple C structures. This\nallows them to be created and copied very simply. When a generic wrapper\naround a buffer is needed, a memoryview object\ncan be created.\nFor short instructions how to write an exporting object, see\nBuffer Object Structures. For obtaining\na buffer, see PyObject_GetBuffer()\n.\n-\ntype Py_buffer\u00b6\n- Part of the Stable ABI (including all members) since version 3.11.\n-\nvoid *buf\u00b6\nA pointer to the start of the logical structure described by the buffer fields. This can be any location within the underlying physical memory block of the exporter. 
For example, with negative\nstrides\nthe value may point to the end of the memory block.For contiguous arrays, the value points to the beginning of the memory block.\n-\nPyObject *obj\u00b6\nA new reference to the exporting object. The reference is owned by the consumer and automatically released (i.e. reference count decremented) and set to\nNULL\nbyPyBuffer_Release()\n. The field is the equivalent of the return value of any standard C-API function.As a special case, for temporary buffers that are wrapped by\nPyMemoryView_FromBuffer()\norPyBuffer_FillInfo()\nthis field isNULL\n. In general, exporting objects MUST NOT use this scheme.\n-\nPy_ssize_t len\u00b6\nproduct(shape) * itemsize\n. For contiguous arrays, this is the length of the underlying memory block. For non-contiguous arrays, it is the length that the logical structure would have if it were copied to a contiguous representation.Accessing\n((char *)buf)[0] up to ((char *)buf)[len-1]\nis only valid if the buffer has been obtained by a request that guarantees contiguity. In most cases such a request will bePyBUF_SIMPLE\norPyBUF_WRITABLE\n.\n-\nint readonly\u00b6\nAn indicator of whether the buffer is read-only. This field is controlled by the\nPyBUF_WRITABLE\nflag.\n-\nPy_ssize_t itemsize\u00b6\nItem size in bytes of a single element. Same as the value of\nstruct.calcsize()\ncalled on non-NULL\nformat\nvalues.Important exception: If a consumer requests a buffer without the\nPyBUF_FORMAT\nflag,format\nwill be set toNULL\n, butitemsize\nstill has the value for the original format.If\nshape\nis present, the equalityproduct(shape) * itemsize == len\nstill holds and the consumer can useitemsize\nto navigate the buffer.If\nshape\nisNULL\nas a result of aPyBUF_SIMPLE\nor aPyBUF_WRITABLE\nrequest, the consumer must disregarditemsize\nand assumeitemsize == 1\n.\n-\nchar *format\u00b6\nA NULL terminated string in\nstruct\nmodule style syntax describing the contents of a single item. 
If this isNULL\n,\"B\"\n(unsigned bytes) is assumed.This field is controlled by the\nPyBUF_FORMAT\nflag.\n-\nint ndim\u00b6\nThe number of dimensions the memory represents as an n-dimensional array. If it is\n0\n,buf\npoints to a single item representing a scalar. In this case,shape\n,strides\nandsuboffsets\nMUST beNULL\n. The maximum number of dimensions is given byPyBUF_MAX_NDIM\n.\n-\nPy_ssize_t *shape\u00b6\nAn array of\nPy_ssize_t\nof lengthndim\nindicating the shape of the memory as an n-dimensional array. Note thatshape[0] * ... * shape[ndim-1] * itemsize\nMUST be equal tolen\n.Shape values are restricted to\nshape[n] >= 0\n. The caseshape[n] == 0\nrequires special attention. See complex arrays for further information.The shape array is read-only for the consumer.\n-\nPy_ssize_t *strides\u00b6\nAn array of\nPy_ssize_t\nof lengthndim\ngiving the number of bytes to skip to get to a new element in each dimension.Stride values can be any integer. For regular arrays, strides are usually positive, but a consumer MUST be able to handle the case\nstrides[n] <= 0\n. See complex arrays for further information.The strides array is read-only for the consumer.\n-\nPy_ssize_t *suboffsets\u00b6\nAn array of\nPy_ssize_t\nof lengthndim\n. Ifsuboffsets[n] >= 0\n, the values stored along the nth dimension are pointers and the suboffset value dictates how many bytes to add to each pointer after de-referencing. A suboffset value that is negative indicates that no de-referencing should occur (striding in a contiguous memory block).If all suboffsets are negative (i.e. no de-referencing is needed), then this field must be\nNULL\n(the default value).This type of array representation is used by the Python Imaging Library (PIL). See complex arrays for further information how to access elements of such an array.\nThe suboffsets array is read-only for the consumer.\n-\nvoid *internal\u00b6\nThis is for use internally by the exporting object. 
For example, this might be re-cast as an integer by the exporter and used to store flags about whether or not the shape, strides, and suboffsets arrays must be freed when the buffer is released. The consumer MUST NOT alter this value.\nConstants:\n-\nPyBUF_MAX_NDIM\u00b6\n- Part of the Stable ABI since version 3.11.\nThe maximum number of dimensions the memory represents. Exporters MUST respect this limit, consumers of multi-dimensional buffers SHOULD be able to handle up to\nPyBUF_MAX_NDIM\ndimensions. Currently set to 64.\nBuffer request types\u00b6\nBuffers are usually obtained by sending a buffer request to an exporting\nobject via PyObject_GetBuffer()\n. Since the complexity of the logical\nstructure of the memory can vary drastically, the consumer uses the flags\nargument to specify the exact buffer type it can handle.\nAll Py_buffer\nfields are unambiguously defined by the request\ntype.\nrequest-independent fields\u00b6\nThe following fields are not influenced by flags and must always be filled in\nwith the correct values: obj\n, buf\n,\nlen\n, itemsize\n, ndim\n.\nreadonly, format\u00b6\n- PyBUF_WRITABLE\u00b6\n- Part of the Stable ABI since version 3.11.\nControls the\nreadonly\nfield. If set, the exporter MUST provide a writable buffer or else report failure. Otherwise, the exporter MAY provide either a read-only or writable buffer, but the choice MUST be consistent for all consumers. For example, PyBUF_SIMPLE | PyBUF_WRITABLE can be used to request a simple writable buffer.\n- PyBUF_WRITEABLE\u00b6\nThis is a soft deprecated alias to\nPyBUF_WRITABLE\n.\n- PyBUF_FORMAT\u00b6\n- Part of the Stable ABI since version 3.11.\nControls the\nformat\nfield. If set, this field MUST be filled in correctly. 
Otherwise, this field MUST beNULL\n.\nPyBUF_WRITABLE\ncan be |\u2019d to any of the flags in the next section.\nSince PyBUF_SIMPLE\nis defined as 0, PyBUF_WRITABLE\ncan be used as a stand-alone flag to request a simple writable buffer.\nPyBUF_FORMAT\nmust be |\u2019d to any of the flags except PyBUF_SIMPLE\n, because\nthe latter already implies format B\n(unsigned bytes). PyBUF_FORMAT\ncannot be\nused on its own.\nshape, strides, suboffsets\u00b6\nThe flags that control the logical structure of the memory are listed in decreasing order of complexity. Note that each flag contains all bits of the flags below it.\nRequest |\nshape |\nstrides |\nsuboffsets |\n|---|---|---|---|\nPyBUF_INDIRECT |\nyes |\nyes |\nif needed |\nPyBUF_STRIDES |\nyes |\nyes |\nNULL |\nPyBUF_ND |\nyes |\nNULL |\nNULL |\nPyBUF_SIMPLE |\nNULL |\nNULL |\nNULL |\ncontiguity requests\u00b6\nC or Fortran contiguity can be explicitly requested, with and without stride information. Without stride information, the buffer must be C-contiguous.\nRequest |\nshape |\nstrides |\nsuboffsets |\ncontig |\n|---|---|---|---|---|\nPyBUF_C_CONTIGUOUS |\nyes |\nyes |\nNULL |\nC |\nPyBUF_F_CONTIGUOUS |\nyes |\nyes |\nNULL |\nF |\nPyBUF_ANY_CONTIGUOUS |\nyes |\nyes |\nNULL |\nC or F |\nPyBUF_ND |\nyes |\nNULL |\nNULL |\nC |\ncompound requests\u00b6\nAll possible requests are fully defined by some combination of the flags in the previous section. For convenience, the buffer protocol provides frequently used combinations as single flags.\nIn the following table U stands for undefined contiguity. 
The consumer would\nhave to call PyBuffer_IsContiguous()\nto determine contiguity.\nRequest |\nshape |\nstrides |\nsuboffsets |\ncontig |\nreadonly |\nformat |\n|---|---|---|---|---|---|---|\nPyBUF_FULL |\nyes |\nyes |\nif needed |\nU |\n0 |\nyes |\nPyBUF_FULL_RO |\nyes |\nyes |\nif needed |\nU |\n1 or 0 |\nyes |\nPyBUF_RECORDS |\nyes |\nyes |\nNULL |\nU |\n0 |\nyes |\nPyBUF_RECORDS_RO |\nyes |\nyes |\nNULL |\nU |\n1 or 0 |\nyes |\nPyBUF_STRIDED |\nyes |\nyes |\nNULL |\nU |\n0 |\nNULL |\nPyBUF_STRIDED_RO |\nyes |\nyes |\nNULL |\nU |\n1 or 0 |\nNULL |\nPyBUF_CONTIG |\nyes |\nNULL |\nNULL |\nC |\n0 |\nNULL |\nPyBUF_CONTIG_RO |\nyes |\nNULL |\nNULL |\nC |\n1 or 0 |\nNULL |\nComplex arrays\u00b6\nNumPy-style: shape and strides\u00b6\nThe logical structure of NumPy-style arrays is defined by itemsize\n,\nndim\n, shape\nand strides\n.\nIf ndim == 0\n, the memory location pointed to by buf\nis\ninterpreted as a scalar of size itemsize\n. In that case,\nboth shape\nand strides\nare NULL\n.\nIf strides\nis NULL\n, the array is interpreted as\na standard n-dimensional C-array. Otherwise, the consumer must access an\nn-dimensional array as follows:\nptr = (char *)buf + indices[0] * strides[0] + ... + indices[n-1] * strides[n-1];\nitem = *((typeof(item) *)ptr);\nAs noted above, buf\ncan point to any location within\nthe actual memory block. 
An exporter can check the validity of a buffer with\nthis function:\ndef verify_structure(memlen, itemsize, ndim, shape, strides, offset):\n    """Verify that the parameters represent a valid array within\n    the bounds of the allocated memory:\n    char *mem: start of the physical memory block\n    memlen: length of the physical memory block\n    offset: (char *)buf - mem\n    """\n    if offset % itemsize:\n        return False\n    if offset < 0 or offset+itemsize > memlen:\n        return False\n    if any(v % itemsize for v in strides):\n        return False\n    if ndim <= 0:\n        return ndim == 0 and not shape and not strides\n    if 0 in shape:\n        return True\n    imin = sum(strides[j]*(shape[j]-1) for j in range(ndim)\n               if strides[j] <= 0)\n    imax = sum(strides[j]*(shape[j]-1) for j in range(ndim)\n               if strides[j] > 0)\n    return 0 <= offset+imin and offset+imax+itemsize <= memlen\nPIL-style: shape, strides and suboffsets\u00b6\nIn addition to the regular items, PIL-style arrays can contain pointers\nthat must be followed in order to get to the next element in a dimension.\nFor example, the regular three-dimensional C-array char v[2][2][3]\ncan\nalso be viewed as an array of 2 pointers to 2 two-dimensional arrays:\nchar (*v[2])[2][3]\n. 
In suboffsets representation, those two pointers\ncan be embedded at the start of buf\n, pointing\nto two char x[2][3]\narrays that can be located anywhere in memory.\nHere is a function that returns a pointer to the element in an N-D array\npointed to by an N-dimensional index when there are both non-NULL\nstrides\nand suboffsets:\nvoid *get_item_pointer(int ndim, void *buf, Py_ssize_t *strides,\nPy_ssize_t *suboffsets, Py_ssize_t *indices) {\nchar *pointer = (char*)buf;\nint i;\nfor (i = 0; i < ndim; i++) {\npointer += strides[i] * indices[i];\nif (suboffsets[i] >=0 ) {\npointer = *((char**)pointer) + suboffsets[i];\n}\n}\nreturn (void*)pointer;\n}", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3319} +{"url": "https://docs.python.org/3/library/audioop.html", "title": " \u2014 Manipulate raw audio data", "content": "audioop\n\u2014 Manipulate raw audio data\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. 
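The `itemsize`/`ndim`/`shape`/`strides` fields described for `Py_buffer` above are mirrored by Python's `memoryview`, so the NumPy-style layout can be inspected without touching C. A small sketch (the two-step through a byte-format view is required because `memoryview.cast()` insists that one side of a cast be a byte format):

```python
import struct

raw = struct.pack('12d', *range(12))          # 12 C doubles = 96 bytes
m = memoryview(raw).cast('d', shape=[3, 4])   # view them as a 3x4 array
assert m.itemsize == 8                        # sizeof(double)
assert m.ndim == 2
assert m.shape == (3, 4)
assert m.strides == (32, 8)                   # row stride = 4 items * 8 bytes
assert m.suboffsets == ()                     # no PIL-style indirection
# element [i][j] sits at offset i*strides[0] + j*strides[1],
# exactly the ptr computation shown for NumPy-style arrays
assert m[2, 3] == 11.0
```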
The removal was decided in PEP 594.\nThe last version of Python that provided the audioop\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 84} +{"url": "https://docs.python.org/3/c-api/objimpl.html", "title": "Object Implementation Support", "content": "Object Implementation Support\u00b6\nThis chapter describes the functions, types, and macros used when defining new object types.\n- Allocating Objects on the Heap\n- Object Life Cycle\n- Common Object Structures\n- Type Object Structures\n- Supporting Cyclic Garbage Collection", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 67} +{"url": "https://docs.python.org/3/c-api/gcsupport.html", "title": "Supporting Cyclic Garbage Collection", "content": "Supporting Cyclic Garbage Collection\u00b6\nPython\u2019s support for detecting and collecting garbage which involves circular references requires support from object types which are \u201ccontainers\u201d for other objects which may also be containers. Types which do not store references to other objects, or which only store references to atomic types (such as numbers or strings), do not need to provide any explicit support for garbage collection.\nTo create a container type, the tp_flags\nfield of the type object must\ninclude the Py_TPFLAGS_HAVE_GC\nand provide an implementation of the\ntp_traverse\nhandler. If instances of the type are mutable, a\ntp_clear\nimplementation must also be provided.\nPy_TPFLAGS_HAVE_GC\nObjects with a type with this flag set must conform with the rules documented here. 
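The distinction drawn above — containers need GC support, atomic types do not — is visible from Python via `gc.is_tracked()`:

```python
import gc

# Atomic objects store no references to other objects, so the cyclic
# collector never needs to track them; containers are tracked.
assert not gc.is_tracked(42)          # int: atomic
assert not gc.is_tracked("spam")      # str: atomic
assert gc.is_tracked([])              # list: a container type
assert gc.is_tracked(lambda: None)    # functions reference other objects
```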
For convenience these objects will be referred to as container objects.\nConstructors for container types must conform to two rules:\nThe memory for the object must be allocated using\nPyObject_GC_New\norPyObject_GC_NewVar\n.Once all the fields which may contain references to other containers are initialized, it must call\nPyObject_GC_Track()\n.\nSimilarly, the deallocator for the object must conform to a similar pair of rules:\nBefore fields which refer to other containers are invalidated,\nPyObject_GC_UnTrack()\nmust be called.The object\u2019s memory must be deallocated using\nPyObject_GC_Del()\n.Warning\nIf a type adds the Py_TPFLAGS_HAVE_GC, then it must implement at least a\ntp_traverse\nhandler or explicitly use one from its subclass or subclasses.When calling\nPyType_Ready()\nor some of the APIs that indirectly call it likePyType_FromSpecWithBases()\norPyType_FromSpec()\nthe interpreter will automatically populate thetp_flags\n,tp_traverse\nandtp_clear\nfields if the type inherits from a class that implements the garbage collector protocol and the child class does not include thePy_TPFLAGS_HAVE_GC\nflag.\n-\nPyObject_GC_New(TYPE, typeobj)\u00b6\nAnalogous to\nPyObject_New\nbut for container objects with thePy_TPFLAGS_HAVE_GC\nflag set.Do not call this directly to allocate memory for an object; call the type\u2019s\ntp_alloc\nslot instead.When populating a type\u2019s\ntp_alloc\nslot,PyType_GenericAlloc()\nis preferred over a custom function that simply calls this macro.Memory allocated by this macro must be freed with\nPyObject_GC_Del()\n(usually called via the object\u2019stp_free\nslot).\n-\nPyObject_GC_NewVar(TYPE, typeobj, size)\u00b6\nAnalogous to\nPyObject_NewVar\nbut for container objects with thePy_TPFLAGS_HAVE_GC\nflag set.Do not call this directly to allocate memory for an object; call the type\u2019s\ntp_alloc\nslot instead.When populating a type\u2019s\ntp_alloc\nslot,PyType_GenericAlloc()\nis preferred over a custom function that simply calls 
this macro.Memory allocated by this macro must be freed with\nPyObject_GC_Del()\n(usually called via the object\u2019stp_free\nslot).\n-\nPyObject *PyUnstable_Object_GC_NewWithExtraData(PyTypeObject *type, size_t extra_size)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nAnalogous to\nPyObject_GC_New\nbut allocates extra_size bytes at the end of the object (at offsettp_basicsize\n). The allocated memory is initialized to zeros, except for thePython object header\n.The extra data will be deallocated with the object, but otherwise it is not managed by Python.\nMemory allocated by this function must be freed with\nPyObject_GC_Del()\n(usually called via the object\u2019stp_free\nslot).Warning\nThe function is marked as unstable because the final mechanism for reserving extra data after an instance is not yet decided. For allocating a variable number of fields, prefer using\nPyVarObject\nandtp_itemsize\ninstead.Added in version 3.12.\n-\nPyObject_GC_Resize(TYPE, op, newsize)\u00b6\nResize an object allocated by\nPyObject_NewVar\n. Returns the resized object of typeTYPE*\n(refers to any C type) orNULL\non failure.op must be of type PyVarObject* and must not be tracked by the collector yet. newsize must be of type\nPy_ssize_t\n.\n-\nvoid PyObject_GC_Track(PyObject *op)\u00b6\n- Part of the Stable ABI.\nAdds the object op to the set of container objects tracked by the collector. The collector can run at unexpected times so objects must be valid while being tracked. 
This should be called once all the fields followed by the\ntp_traverse\nhandler become valid, usually near the end of the constructor.\n-\nint PyObject_IS_GC(PyObject *obj)\u00b6\nReturns non-zero if the object implements the garbage collector protocol, otherwise returns 0.\nThe object cannot be tracked by the garbage collector if this function returns 0.\n-\nint PyObject_GC_IsTracked(PyObject *op)\u00b6\n- Part of the Stable ABI since version 3.9.\nReturns 1 if the object type of op implements the GC protocol and op is being currently tracked by the garbage collector and 0 otherwise.\nThis is analogous to the Python function\ngc.is_tracked()\n.Added in version 3.9.\n-\nint PyObject_GC_IsFinalized(PyObject *op)\u00b6\n- Part of the Stable ABI since version 3.9.\nReturns 1 if the object type of op implements the GC protocol and op has been already finalized by the garbage collector and 0 otherwise.\nThis is analogous to the Python function\ngc.is_finalized()\n.Added in version 3.9.\n-\nvoid PyObject_GC_Del(void *op)\u00b6\n- Part of the Stable ABI.\nReleases memory allocated to an object using\nPyObject_GC_New\norPyObject_GC_NewVar\n.Do not call this directly to free an object\u2019s memory; call the type\u2019s\ntp_free\nslot instead.Do not use this for memory allocated by\nPyObject_New\n,PyObject_NewVar\n, or related allocation functions; usePyObject_Free()\ninstead.See also\nPyObject_Free()\nis the non-GC equivalent of this function.\n-\nvoid PyObject_GC_UnTrack(void *op)\u00b6\n- Part of the Stable ABI.\nRemove the object op from the set of container objects tracked by the collector. Note that\nPyObject_GC_Track()\ncan be called again on this object to add it back to the set of tracked objects. 
The deallocator (tp_dealloc\nhandler) should call this for the object before any of the fields used by thetp_traverse\nhandler become invalid.\nChanged in version 3.8: The _PyObject_GC_TRACK()\nand _PyObject_GC_UNTRACK()\nmacros\nhave been removed from the public C API.\nThe tp_traverse\nhandler accepts a function parameter of this type:\n-\ntypedef int (*visitproc)(PyObject *object, void *arg)\u00b6\n- Part of the Stable ABI.\nType of the visitor function passed to the\ntp_traverse\nhandler. The function should be called with an object to traverse as object and the third parameter to thetp_traverse\nhandler as arg. The Python core uses several visitor functions to implement cyclic garbage detection; it\u2019s not expected that users will need to write their own visitor functions.\nThe tp_traverse\nhandler must have the following type:\n-\ntypedef int (*traverseproc)(PyObject *self, visitproc visit, void *arg)\u00b6\n- Part of the Stable ABI.\nTraversal function for a container object. Implementations must call the visit function for each object directly contained by self, with the parameters to visit being the contained object and the arg value passed to the handler. The visit function must not be called with a\nNULL\nobject argument. If visit returns a non-zero value that value should be returned immediately.The traversal function must not have any side effects. Implementations may not modify the reference counts of any Python objects nor create or destroy any Python objects.\nTo simplify writing tp_traverse\nhandlers, a Py_VISIT()\nmacro is\nprovided. In order to use this macro, the tp_traverse\nimplementation\nmust name its arguments exactly visit and arg:\n-\nPy_VISIT(o)\u00b6\nIf the PyObject* o is not\nNULL\n, call the visit callback, with arguments o and arg. If visit returns a non-zero value, then return it. 
Using this macro,tp_traverse\nhandlers look like:static int my_traverse(Noddy *self, visitproc visit, void *arg) { Py_VISIT(self->foo); Py_VISIT(self->bar); return 0; }\nThe tp_clear\nhandler must be of the inquiry\ntype, or NULL\nif the object is immutable.\n-\ntypedef int (*inquiry)(PyObject *self)\u00b6\n- Part of the Stable ABI.\nDrop references that may have created reference cycles. Immutable objects do not have to define this method since they can never directly create reference cycles. Note that the object must still be valid after calling this method (don\u2019t just call\nPy_DECREF()\non a reference). The collector will call this method if it detects that this object is involved in a reference cycle.\nControlling the Garbage Collector State\u00b6\nThe C-API provides the following functions for controlling garbage collection runs.\n-\nPy_ssize_t PyGC_Collect(void)\u00b6\n- Part of the Stable ABI.\nPerform a full garbage collection, if the garbage collector is enabled. (Note that\ngc.collect()\nruns it unconditionally.)Returns the number of collected + unreachable objects which cannot be collected. If the garbage collector is disabled or already collecting, returns\n0\nimmediately. Errors during garbage collection are passed tosys.unraisablehook\n. This function does not raise exceptions.\n-\nint PyGC_Enable(void)\u00b6\n- Part of the Stable ABI since version 3.10.\nEnable the garbage collector: similar to\ngc.enable()\n. Returns the previous state, 0 for disabled and 1 for enabled.Added in version 3.10.\n-\nint PyGC_Disable(void)\u00b6\n- Part of the Stable ABI since version 3.10.\nDisable the garbage collector: similar to\ngc.disable()\n. Returns the previous state, 0 for disabled and 1 for enabled.Added in version 3.10.\n-\nint PyGC_IsEnabled(void)\u00b6\n- Part of the Stable ABI since version 3.10.\nQuery the state of the garbage collector: similar to\ngc.isenabled()\n. 
Returns the current state, 0 for disabled and 1 for enabled.Added in version 3.10.\nQuerying Garbage Collector State\u00b6\nThe C-API provides the following interface for querying information about the garbage collector.\n-\nvoid PyUnstable_GC_VisitObjects(gcvisitobjects_t callback, void *arg)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nRun supplied callback on all live GC-capable objects. arg is passed through to all invocations of callback.\nWarning\nIf new objects are (de)allocated by the callback it is undefined if they will be visited.\nGarbage collection is disabled during operation. Explicitly running a collection in the callback may lead to undefined behaviour e.g. visiting the same objects multiple times or not at all.\nAdded in version 3.12.\n-\ntypedef int (*gcvisitobjects_t)(PyObject *object, void *arg)\u00b6\nType of the visitor function to be passed to\nPyUnstable_GC_VisitObjects()\n. arg is the same as the arg passed toPyUnstable_GC_VisitObjects\n. Return1\nto continue iteration, return0\nto stop iteration. Other return values are reserved for now so behavior on returning anything else is undefined.Added in version 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2648} +{"url": "https://docs.python.org/3/library/asyncore.html", "title": " \u2014 Asynchronous socket handler", "content": "asyncore\n\u2014 Asynchronous socket handler\u00b6\nDeprecated since version 3.6, removed in version 3.12.\nThis module is no longer part of the Python standard library. It was removed in Python 3.12 after being deprecated in Python 3.6. 
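The `PyGC_Collect`/`PyGC_Enable`/`PyGC_Disable`/`PyGC_IsEnabled` functions above each note a Python-level counterpart in the `gc` module; a quick sketch of the same state machine from Python (note one difference the docs call out: `gc.collect()` runs unconditionally, and the Python functions return `None` rather than the previous state):

```python
import gc

was_enabled = gc.isenabled()      # like PyGC_IsEnabled()
gc.disable()                      # like PyGC_Disable()
assert not gc.isenabled()
gc.enable()                       # like PyGC_Enable()
assert gc.isenabled()
collected = gc.collect()          # like PyGC_Collect(), but unconditional
assert collected >= 0             # number of unreachable objects found
if not was_enabled:
    gc.disable()                  # restore the original collector state
```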
The removal was decided in PEP 594.\nApplications should use the asyncio\nmodule instead.\nThe last version of Python that provided the asyncore\nmodule was\nPython 3.11.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 97} +{"url": "https://docs.python.org/3/library/asynchat.html", "title": " \u2014 Asynchronous socket command/response handler", "content": "asynchat\n\u2014 Asynchronous socket command/response handler\u00b6\nDeprecated since version 3.6, removed in version 3.12.\nThis module is no longer part of the Python standard library. It was removed in Python 3.12 after being deprecated in Python 3.6. The removal was decided in PEP 594.\nApplications should use the asyncio\nmodule instead.\nThe last version of Python that provided the asynchat\nmodule was\nPython 3.11.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 102} +{"url": "https://docs.python.org/3/library/aifc.html", "title": " \u2014 Read and write AIFF and AIFC files", "content": "aifc\n\u2014 Read and write AIFF and AIFC files\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the aifc\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 85} +{"url": "https://docs.python.org/3/c-api/module.html", "title": "Module Objects", "content": "Module Objects\u00b6\n-\nPyTypeObject PyModule_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python module type. This is exposed to Python programs astypes.ModuleType\n.\n-\nint PyModule_Check(PyObject *p)\u00b6\nReturn true if p is a module object, or a subtype of a module object. 
This function always succeeds.\n-\nint PyModule_CheckExact(PyObject *p)\u00b6\nReturn true if p is a module object, but not a subtype of\nPyModule_Type\n. This function always succeeds.\n-\nPyObject *PyModule_NewObject(PyObject *name)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nReturn a new module object with\nmodule.__name__\nset to name. The module\u2019s__name__\n,__doc__\n,__package__\nand__loader__\nattributes are filled in (all but__name__\nare set toNone\n). The caller is responsible for setting a__file__\nattribute.Return\nNULL\nwith an exception set on error.Added in version 3.3.\nChanged in version 3.4:\n__package__\nand__loader__\nare now set toNone\n.\n-\nPyObject *PyModule_New(const char *name)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nSimilar to\nPyModule_NewObject()\n, but the name is a UTF-8 encoded string instead of a Unicode object.\n-\nPyObject *PyModule_GetDict(PyObject *module)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn the dictionary object that implements module\u2019s namespace; this object is the same as the\n__dict__\nattribute of the module object. If module is not a module object (or a subtype of a module object),SystemError\nis raised andNULL\nis returned.It is recommended extensions use other\nPyModule_*\nandPyObject_*\nfunctions rather than directly manipulate a module\u2019s__dict__\n.The returned reference is borrowed from the module; it is valid until the module is destroyed.\n-\nPyObject *PyModule_GetNameObject(PyObject *module)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nReturn module\u2019s\n__name__\nvalue. 
If the module does not provide one, or if it is not a string,SystemError\nis raised andNULL\nis returned.Added in version 3.3.\n-\nconst char *PyModule_GetName(PyObject *module)\u00b6\n- Part of the Stable ABI.\nSimilar to\nPyModule_GetNameObject()\nbut return the name encoded to'utf-8'\n.The returned buffer is only valid until the module is renamed or destroyed. Note that Python code may rename a module by setting its\n__name__\nattribute.\n-\nvoid *PyModule_GetState(PyObject *module)\u00b6\n- Part of the Stable ABI.\nReturn the \u201cstate\u201d of the module, that is, a pointer to the block of memory allocated at module creation time, or\nNULL\n. SeePyModuleDef.m_size\n.\n-\nPyModuleDef *PyModule_GetDef(PyObject *module)\u00b6\n- Part of the Stable ABI.\nReturn a pointer to the\nPyModuleDef\nstruct from which the module was created, orNULL\nif the module wasn\u2019t created from a definition.On error, return\nNULL\nwith an exception set. UsePyErr_Occurred()\nto tell this case apart from a missingPyModuleDef\n.\n-\nPyObject *PyModule_GetFilenameObject(PyObject *module)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the name of the file from which module was loaded using module\u2019s\n__file__\nattribute. 
If this is not defined, or if it is not a string, raiseSystemError\nand returnNULL\n; otherwise return a reference to a Unicode object.Added in version 3.2.\n-\nconst char *PyModule_GetFilename(PyObject *module)\u00b6\n- Part of the Stable ABI.\nSimilar to\nPyModule_GetFilenameObject()\nbut return the filename encoded to \u2018utf-8\u2019.The returned buffer is only valid until the module\u2019s\n__file__\nattribute is reassigned or the module is destroyed.Deprecated since version 3.2:\nPyModule_GetFilename()\nraisesUnicodeEncodeError\non unencodable filenames, usePyModule_GetFilenameObject()\ninstead.\nModule definitions\u00b6\nThe functions in the previous section work on any module object, including modules imported from Python code.\nModules defined using the C API typically use a module definition,\nPyModuleDef\n\u2013 a statically allocated, constant \u201cdescription\u201d of\nhow a module should be created.\nThe definition is usually used to define an extension\u2019s \u201cmain\u201d module object (see Defining extension modules for details). It is also used to create extension modules dynamically.\nUnlike PyModule_New()\n, the definition allows management of\nmodule state \u2013 a piece of memory that is allocated and cleared together\nwith the module object.\nUnlike the module\u2019s Python attributes, Python code cannot replace or delete\ndata stored in module state.\n-\ntype PyModuleDef\u00b6\n- Part of the Stable ABI (including all members).\nThe module definition struct, which holds all information needed to create a module object. This structure must be statically allocated (or be otherwise guaranteed to be valid while any modules created from it exist). 
Usually, there is only one variable of this type for each extension module.\n-\nPyModuleDef_Base m_base\u00b6\nAlways initialize this member to\nPyModuleDef_HEAD_INIT\n.\n-\nconst char *m_name\u00b6\nName for the new module.\n-\nconst char *m_doc\u00b6\nDocstring for the module; usually a docstring variable created with\nPyDoc_STRVAR\nis used.\n-\nPy_ssize_t m_size\u00b6\nModule state may be kept in a per-module memory area that can be retrieved with\nPyModule_GetState()\n, rather than in static globals. This makes modules safe for use in multiple sub-interpreters.This memory area is allocated based on m_size on module creation, and freed when the module object is deallocated, after the\nm_free\nfunction has been called, if present.Setting it to a non-negative value means that the module can be re-initialized and specifies the additional amount of memory it requires for its state.\nSetting\nm_size\nto-1\nmeans that the module does not support sub-interpreters, because it has global state. Negativem_size\nis only allowed when using legacy single-phase initialization or when creating modules dynamically.See PEP 3121 for more details.\n-\nPyMethodDef *m_methods\u00b6\nA pointer to a table of module-level functions, described by\nPyMethodDef\nvalues. Can beNULL\nif no functions are present.\n-\nPyModuleDef_Slot *m_slots\u00b6\nAn array of slot definitions for multi-phase initialization, terminated by a\n{0, NULL}\nentry. When using legacy single-phase initialization, m_slots must beNULL\n.\n-\ntraverseproc m_traverse\u00b6\nA traversal function to call during GC traversal of the module object, or\nNULL\nif not needed.This function is not called if the module state was requested but is not allocated yet. This is the case immediately after the module is created and before the module is executed (\nPy_mod_exec\nfunction). 
More precisely, this function is not called ifm_size\nis greater than 0 and the module state (as returned byPyModule_GetState()\n) isNULL\n.Changed in version 3.9: No longer called before the module state is allocated.\n-\ninquiry m_clear\u00b6\nA clear function to call during GC clearing of the module object, or\nNULL\nif not needed.This function is not called if the module state was requested but is not allocated yet. This is the case immediately after the module is created and before the module is executed (\nPy_mod_exec\nfunction). More precisely, this function is not called ifm_size\nis greater than 0 and the module state (as returned byPyModule_GetState()\n) isNULL\n.Like\nPyTypeObject.tp_clear\n, this function is not always called before a module is deallocated. For example, when reference counting is enough to determine that an object is no longer used, the cyclic garbage collector is not involved andm_free\nis called directly.Changed in version 3.9: No longer called before the module state is allocated.\n-\nfreefunc m_free\u00b6\nA function to call during deallocation of the module object, or\nNULL\nif not needed.This function is not called if the module state was requested but is not allocated yet. This is the case immediately after the module is created and before the module is executed (\nPy_mod_exec\nfunction). 
More precisely, this function is not called ifm_size\nis greater than 0 and the module state (as returned byPyModule_GetState()\n) isNULL\n.Changed in version 3.9: No longer called before the module state is allocated.\n-\nPyModuleDef_Base m_base\u00b6\nModule slots\u00b6\n-\ntype PyModuleDef_Slot\u00b6\n- Part of the Stable ABI (including all members) since version 3.5.\n-\nint slot\u00b6\nA slot ID, chosen from the available values explained below.\n-\nvoid *value\u00b6\nValue of the slot, whose meaning depends on the slot ID.\nAdded in version 3.5.\n-\nint slot\u00b6\nThe available slot types are:\n-\nPy_mod_create\u00b6\n- Part of the Stable ABI since version 3.5.\nSpecifies a function that is called to create the module object itself. The value pointer of this slot must point to a function of the signature:\n-\nPyObject *create_module(PyObject *spec, PyModuleDef *def)\u00b6\nThe function receives a\nModuleSpec\ninstance, as defined in PEP 451, and the module definition. It should return a new module object, or set an error and returnNULL\n.This function should be kept minimal. In particular, it should not call arbitrary Python code, as trying to import the same module again may result in an infinite loop.\nMultiple\nPy_mod_create\nslots may not be specified in one module definition.If\nPy_mod_create\nis not specified, the import machinery will create a normal module object usingPyModule_New()\n. The name is taken from spec, not the definition, to allow extension modules to dynamically adjust to their place in the module hierarchy and be imported under different names through symlinks, all while sharing a single module definition.There is no requirement for the returned object to be an instance of\nPyModule_Type\n. Any type can be used, as long as it supports setting and getting import-related attributes. 
However, onlyPyModule_Type\ninstances may be returned if thePyModuleDef\nhas non-NULL\nm_traverse\n,m_clear\n,m_free\n; non-zerom_size\n; or slots other thanPy_mod_create\n.Added in version 3.5.\n-\nPyObject *create_module(PyObject *spec, PyModuleDef *def)\u00b6\n-\nPy_mod_exec\u00b6\n- Part of the Stable ABI since version 3.5.\nSpecifies a function that is called to execute the module. This is equivalent to executing the code of a Python module: typically, this function adds classes and constants to the module. The signature of the function is:\nIf multiple\nPy_mod_exec\nslots are specified, they are processed in the order they appear in the m_slots array.Added in version 3.5.\n-\nPy_mod_multiple_interpreters\u00b6\n- Part of the Stable ABI since version 3.12.\nSpecifies one of the following values:\n-\nPy_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED\u00b6\nThe module does not support being imported in subinterpreters.\n-\nPy_MOD_MULTIPLE_INTERPRETERS_SUPPORTED\u00b6\nThe module supports being imported in subinterpreters, but only when they share the main interpreter\u2019s GIL. (See Isolating Extension Modules.)\n-\nPy_MOD_PER_INTERPRETER_GIL_SUPPORTED\u00b6\nThe module supports being imported in subinterpreters, even when they have their own GIL. 
(See Isolating Extension Modules.)\nThis slot determines whether or not importing this module in a subinterpreter will fail.\nMultiple\nPy_mod_multiple_interpreters\nslots may not be specified in one module definition.If\nPy_mod_multiple_interpreters\nis not specified, the import machinery defaults toPy_MOD_MULTIPLE_INTERPRETERS_SUPPORTED\n.Added in version 3.12.\n-\nPy_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED\u00b6\n-\nPy_mod_gil\u00b6\n- Part of the Stable ABI since version 3.13.\nSpecifies one of the following values:\n-\nPy_MOD_GIL_USED\u00b6\nThe module depends on the presence of the global interpreter lock (GIL), and may access global state without synchronization.\n-\nPy_MOD_GIL_NOT_USED\u00b6\nThe module is safe to run without an active GIL.\nThis slot is ignored by Python builds not configured with\n--disable-gil\n. Otherwise, it determines whether or not importing this module will cause the GIL to be automatically enabled. See Free-threaded CPython for more detail.Multiple\nPy_mod_gil\nslots may not be specified in one module definition.If\nPy_mod_gil\nis not specified, the import machinery defaults toPy_MOD_GIL_USED\n.Added in version 3.13.\n-\nPy_MOD_GIL_USED\u00b6\nCreating extension modules dynamically\u00b6\nThe following functions may be used to create a module outside of an extension\u2019s initialization function. They are also used in single-phase initialization.\n-\nPyObject *PyModule_Create(PyModuleDef *def)\u00b6\n- Return value: New reference.\nCreate a new module object, given the definition in def. This is a macro that calls\nPyModule_Create2()\nwith module_api_version set toPYTHON_API_VERSION\n, or toPYTHON_ABI_VERSION\nif using the limited API.\n-\nPyObject *PyModule_Create2(PyModuleDef *def, int module_api_version)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a new module object, given the definition in def, assuming the API version module_api_version. 
If that version does not match the version of the running interpreter, a\nRuntimeWarning\nis emitted.Return\nNULL\nwith an exception set on error.This function does not support slots. The\nm_slots\nmember of def must beNULL\n.Note\nMost uses of this function should be using\nPyModule_Create()\ninstead; only use this if you are sure you need it.\n-\nPyObject *PyModule_FromDefAndSpec(PyModuleDef *def, PyObject *spec)\u00b6\n- Return value: New reference.\nThis macro calls\nPyModule_FromDefAndSpec2()\nwith module_api_version set toPYTHON_API_VERSION\n, or toPYTHON_ABI_VERSION\nif using the limited API.Added in version 3.5.\n-\nPyObject *PyModule_FromDefAndSpec2(PyModuleDef *def, PyObject *spec, int module_api_version)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nCreate a new module object, given the definition in def and the ModuleSpec spec, assuming the API version module_api_version. If that version does not match the version of the running interpreter, a\nRuntimeWarning\nis emitted.Return\nNULL\nwith an exception set on error.Note that this does not process execution slots (\nPy_mod_exec\n). BothPyModule_FromDefAndSpec\nandPyModule_ExecDef\nmust be called to fully initialize a module.Note\nMost uses of this function should be using\nPyModule_FromDefAndSpec()\ninstead; only use this if you are sure you need it.Added in version 3.5.\n-\nint PyModule_ExecDef(PyObject *module, PyModuleDef *def)\u00b6\n- Part of the Stable ABI since version 3.7.\nProcess any execution slots (\nPy_mod_exec\n) given in def.Added in version 3.5.\n-\nPYTHON_API_VERSION\u00b6\nThe C API version. Defined for backwards compatibility.\nCurrently, this constant is not updated in new Python versions, and is not useful for versioning. This may change in the future.\n-\nPYTHON_ABI_VERSION\u00b6\nDefined as\n3\nfor backwards compatibility.Currently, this constant is not updated in new Python versions, and is not useful for versioning. 
This may change in the future.\nSupport functions\u00b6\nThe following functions are provided to help initialize a module\nstate.\nThey are intended for a module\u2019s execution slots (Py_mod_exec\n),\nthe initialization function for legacy single-phase initialization,\nor code that creates modules dynamically.\n-\nint PyModule_AddObjectRef(PyObject *module, const char *name, PyObject *value)\u00b6\n- Part of the Stable ABI since version 3.10.\nAdd an object to module as name. This is a convenience function which can be used from the module\u2019s initialization function.\nOn success, return\n0\n. On error, raise an exception and return-1\n.Example usage:\nstatic int add_spam(PyObject *module, int value) { PyObject *obj = PyLong_FromLong(value); if (obj == NULL) { return -1; } int res = PyModule_AddObjectRef(module, \"spam\", obj); Py_DECREF(obj); return res; }\nTo be convenient, the function accepts\nNULL\nvalue with an exception set. In this case, return-1\nand just leave the raised exception unchanged.The example can also be written without checking explicitly if obj is\nNULL\n:static int add_spam(PyObject *module, int value) { PyObject *obj = PyLong_FromLong(value); int res = PyModule_AddObjectRef(module, \"spam\", obj); Py_XDECREF(obj); return res; }\nNote that\nPy_XDECREF()\nshould be used instead ofPy_DECREF()\nin this case, since obj can beNULL\n.The number of different name strings passed to this function should be kept small, usually by only using statically allocated strings as name. For names that aren\u2019t known at compile time, prefer calling\nPyUnicode_FromString()\nandPyObject_SetAttr()\ndirectly. For more details, seePyUnicode_InternFromString()\n, which may be used internally to create a key object.Added in version 3.10.\n-\nint PyModule_Add(PyObject *module, const char *name, PyObject *value)\u00b6\n- Part of the Stable ABI since version 3.13.\nSimilar to\nPyModule_AddObjectRef()\n, but \u201csteals\u201d a reference to value. 
It can be called with the result of a function that returns a new reference, without bothering to check its result or even saving it to a variable.Example usage:\nif (PyModule_Add(module, \"spam\", PyBytes_FromString(value)) < 0) { goto error; }\nAdded in version 3.13.\n-\nint PyModule_AddObject(PyObject *module, const char *name, PyObject *value)\u00b6\n- Part of the Stable ABI.\nSimilar to\nPyModule_AddObjectRef()\n, but steals a reference to value on success (if it returns0\n).The new\nPyModule_Add()\norPyModule_AddObjectRef()\nfunctions are recommended, since it is easy to introduce reference leaks by misusing thePyModule_AddObject()\nfunction.Note\nUnlike other functions that steal references,\nPyModule_AddObject()\nonly releases the reference to value on success.This means that its return value must be checked, and calling code must\nPy_XDECREF()\nvalue manually on error.Example usage:\nPyObject *obj = PyBytes_FromString(value); if (PyModule_AddObject(module, \"spam\", obj) < 0) { // If 'obj' is not NULL and PyModule_AddObject() failed, // 'obj' strong reference must be deleted with Py_XDECREF(). // If 'obj' is NULL, Py_XDECREF() does nothing. Py_XDECREF(obj); goto error; } // PyModule_AddObject() stole a reference to obj: // Py_XDECREF(obj) is not needed here.\nDeprecated since version 3.13:\nPyModule_AddObject()\nis soft deprecated.\n-\nint PyModule_AddIntConstant(PyObject *module, const char *name, long value)\u00b6\n- Part of the Stable ABI.\nAdd an integer constant to module as name. This convenience function can be used from the module\u2019s initialization function. Return\n-1\nwith an exception set on error,0\non success.This is a convenience function that calls\nPyLong_FromLong()\nandPyModule_AddObjectRef()\n; see their documentation for details.\n-\nint PyModule_AddStringConstant(PyObject *module, const char *name, const char *value)\u00b6\n- Part of the Stable ABI.\nAdd a string constant to module as name.
This convenience function can be used from the module\u2019s initialization function. The string value must be\nNULL\n-terminated. Return-1\nwith an exception set on error,0\non success.This is a convenience function that calls\nPyUnicode_InternFromString()\nandPyModule_AddObjectRef()\n; see their documentation for details.\n-\nPyModule_AddIntMacro(module, macro)\u00b6\nAdd an int constant to module. The name and the value are taken from macro. For example\nPyModule_AddIntMacro(module, AF_INET)\nadds the int constant AF_INET with the value of AF_INET to module. Return-1\nwith an exception set on error,0\non success.\n-\nPyModule_AddStringMacro(module, macro)\u00b6\nAdd a string constant to module.\n-\nint PyModule_AddType(PyObject *module, PyTypeObject *type)\u00b6\n- Part of the Stable ABI since version 3.10.\nAdd a type object to module. The type object is finalized by calling internally\nPyType_Ready()\n. The name of the type object is taken from the last component oftp_name\nafter dot. Return-1\nwith an exception set on error,0\non success.Added in version 3.9.\n-\nint PyModule_AddFunctions(PyObject *module, PyMethodDef *functions)\u00b6\n- Part of the Stable ABI since version 3.7.\nAdd the functions from the\nNULL\nterminated functions array to module. Refer to thePyMethodDef\ndocumentation for details on individual entries (due to the lack of a shared module namespace, module level \u201cfunctions\u201d implemented in C typically receive the module as their first parameter, making them similar to instance methods on Python classes).This function is called automatically when creating a module from\nPyModuleDef\n(such as when using Multi-phase initialization,PyModule_Create\n, orPyModule_FromDefAndSpec\n). 
Some module authors may prefer defining functions in multiplePyMethodDef\narrays; in that case they should call this function directly.The functions array must be statically allocated (or otherwise guaranteed to outlive the module object).\nAdded in version 3.5.\n-\nint PyModule_SetDocString(PyObject *module, const char *docstring)\u00b6\n- Part of the Stable ABI since version 3.7.\nSet the docstring for module to docstring. This function is called automatically when creating a module from\nPyModuleDef\n(such as when using Multi-phase initialization,PyModule_Create\n, orPyModule_FromDefAndSpec\n).Return\n0\non success. Return-1\nwith an exception set on error.Added in version 3.5.\n-\nint PyUnstable_Module_SetGIL(PyObject *module, void *gil)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nIndicate that module does or does not support running without the global interpreter lock (GIL), using one of the values from\nPy_mod_gil\n. It must be called during the module\u2019s initialization function when using Legacy single-phase initialization. If this function is not called during module initialization, the import machinery assumes the module does not support running without the GIL. This function is only available in Python builds configured with--disable-gil\n. Return-1\nwith an exception set on error,0\non success.Added in version 3.13.\nModule lookup (single-phase initialization)\u00b6\nThe legacy single-phase initialization scheme creates singleton modules that can be looked up in the context of the current interpreter. This allows the module object to be retrieved later with only a reference to the module definition.\nThese functions will not work on modules created using multi-phase initialization, since multiple such modules can be created from a single definition.\n-\nPyObject *PyState_FindModule(PyModuleDef *def)\u00b6\n- Return value: Borrowed reference.
Part of the Stable ABI.\nReturns the module object that was created from def for the current interpreter. This method requires that the module object has been attached to the interpreter state with\nPyState_AddModule()\nbeforehand. In case the corresponding module object is not found or has not been attached to the interpreter state yet, it returnsNULL\n.\n-\nint PyState_AddModule(PyObject *module, PyModuleDef *def)\u00b6\n- Part of the Stable ABI since version 3.3.\nAttaches the module object passed to the function to the interpreter state. This allows the module object to be accessible via\nPyState_FindModule()\n.Only effective on modules created using single-phase initialization.\nPython calls\nPyState_AddModule\nautomatically after importing a module that uses single-phase initialization, so it is unnecessary (but harmless) to call it from module initialization code. An explicit call is needed only if the module\u2019s own init code subsequently callsPyState_FindModule\n. The function is mainly intended for implementing alternative import mechanisms (either by calling it directly, or by referring to its implementation for details of the required state updates).If a module was attached previously using the same def, it is replaced by the new module.\nThe caller must have an attached thread state.\nReturn\n-1\nwith an exception set on error,0\non success.Added in version 3.3.\n-\nint PyState_RemoveModule(PyModuleDef *def)\u00b6\n- Part of the Stable ABI since version 3.3.\nRemoves the module object created from def from the interpreter state. Return\n-1\nwith an exception set on error,0\non success.The caller must have an attached thread state.\nAdded in version 3.3.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 5867} +{"url": "https://docs.python.org/3/c-api/reflection.html", "title": "Reflection", "content": "Reflection\u00b6\n-\nPyObject *PyEval_GetBuiltins(void)\u00b6\n- Return value: Borrowed reference. 
Part of the Stable ABI.\nDeprecated since version 3.13: Use\nPyEval_GetFrameBuiltins()\ninstead.Return a dictionary of the builtins in the current execution frame, or the interpreter of the thread state if no frame is currently executing.\n-\nPyObject *PyEval_GetLocals(void)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nDeprecated since version 3.13: Use either\nPyEval_GetFrameLocals()\nto obtain the same behaviour as callinglocals()\nin Python code, or else callPyFrame_GetLocals()\non the result ofPyEval_GetFrame()\nto access thef_locals\nattribute of the currently executing frame.Return a mapping providing access to the local variables in the current execution frame, or\nNULL\nif no frame is currently executing.Refer to\nlocals()\nfor details of the mapping returned at different scopes.As this function returns a borrowed reference, the dictionary returned for optimized scopes is cached on the frame object and will remain alive as long as the frame object does. Unlike\nPyEval_GetFrameLocals()\nandlocals()\n, subsequent calls to this function in the same frame will update the contents of the cached dictionary to reflect changes in the state of the local variables rather than returning a new snapshot.Changed in version 3.13: As part of PEP 667,\nPyFrame_GetLocals()\n,locals()\n, andFrameType.f_locals\nno longer make use of the shared cache dictionary. Refer to the What\u2019s New entry for additional details.\n-\nPyObject *PyEval_GetGlobals(void)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nDeprecated since version 3.13: Use\nPyEval_GetFrameGlobals()\ninstead.Return a dictionary of the global variables in the current execution frame, or\nNULL\nif no frame is currently executing.\n-\nPyFrameObject *PyEval_GetFrame(void)\u00b6\n- Return value: Borrowed reference. 
Part of the Stable ABI.\nReturn the attached thread state\u2019s frame, which is\nNULL\nif no frame is currently executing.See also\nPyThreadState_GetFrame()\n.\n-\nPyObject *PyEval_GetFrameBuiltins(void)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.13.\nReturn a dictionary of the builtins in the current execution frame, or the interpreter of the thread state if no frame is currently executing.\nAdded in version 3.13.\n-\nPyObject *PyEval_GetFrameLocals(void)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.13.\nReturn a dictionary of the local variables in the current execution frame, or\nNULL\nif no frame is currently executing. Equivalent to callinglocals()\nin Python code.To access\nf_locals\non the current frame without making an independent snapshot in optimized scopes, callPyFrame_GetLocals()\non the result ofPyEval_GetFrame()\n.Added in version 3.13.\n-\nPyObject *PyEval_GetFrameGlobals(void)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.13.\nReturn a dictionary of the global variables in the current execution frame, or\nNULL\nif no frame is currently executing. Equivalent to callingglobals()\nin Python code.Added in version 3.13.\n-\nconst char *PyEval_GetFuncName(PyObject *func)\u00b6\n- Part of the Stable ABI.\nReturn the name of func if it is a function, class or instance object, else the name of func\u2019s type.\n-\nconst char *PyEval_GetFuncDesc(PyObject *func)\u00b6\n- Part of the Stable ABI.\nReturn a description string, depending on the type of func. Return values include \u201c()\u201d for functions and methods, \u201c constructor\u201d, \u201c instance\u201d, and \u201c object\u201d.
Concatenated with the result of\nPyEval_GetFuncName()\n, the result will be a description of func.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 893} +{"url": "https://docs.python.org/3/c-api/frame.html", "title": "Frame Objects", "content": "Frame Objects\u00b6\n-\ntype PyFrameObject\u00b6\n- Part of the Limited API (as an opaque struct).\nThe C structure of the objects used to describe frame objects.\nThere are no public members in this structure.\nChanged in version 3.11: The members of this structure were removed from the public C API. Refer to the What\u2019s New entry for details.\nThe PyEval_GetFrame()\nand PyThreadState_GetFrame()\nfunctions\ncan be used to get a frame object.\nSee also Reflection.\n-\nPyTypeObject PyFrame_Type\u00b6\nThe type of frame objects. It is the same object as\ntypes.FrameType\nin the Python layer.Changed in version 3.11: Previously, this type was only available after including\n<frameobject.h>\n.\n-\nPyFrameObject *PyFrame_New(PyThreadState *tstate, PyCodeObject *code, PyObject *globals, PyObject *locals)\u00b6\nCreate a new frame object. This function returns a strong reference to the new frame object on success, and returns\nNULL\nwith an exception set on failure.\n-\nint PyFrame_Check(PyObject *obj)\u00b6\nReturn non-zero if obj is a frame object.\nChanged in version 3.11: Previously, this function was only available after including\n<frameobject.h>\n.\n-\nPyFrameObject *PyFrame_GetBack(PyFrameObject *frame)\u00b6\n- Return value: New reference.\nGet the frame\u2019s next outer frame.\nReturn a strong reference, or\nNULL\nif frame has no outer frame.Added in version 3.9.\n-\nPyObject *PyFrame_GetBuiltins(PyFrameObject *frame)\u00b6\n- Return value: New reference.\nGet the frame\u2019s\nf_builtins\nattribute.Return a strong reference. The result cannot be\nNULL\n.Added in version 3.11.\n-\nPyCodeObject *PyFrame_GetCode(PyFrameObject *frame)\u00b6\n- Return value: New reference.
Part of the Stable ABI since version 3.10.\nGet the frame code.\nReturn a strong reference.\nThe result (frame code) cannot be\nNULL\n.Added in version 3.9.\n-\nPyObject *PyFrame_GetGenerator(PyFrameObject *frame)\u00b6\n- Return value: New reference.\nGet the generator, coroutine, or async generator that owns this frame, or\nNULL\nif this frame is not owned by a generator. Does not raise an exception, even if the return value isNULL\n.Return a strong reference, or\nNULL\n.Added in version 3.11.\n-\nPyObject *PyFrame_GetGlobals(PyFrameObject *frame)\u00b6\n- Return value: New reference.\nGet the frame\u2019s\nf_globals\nattribute.Return a strong reference. The result cannot be\nNULL\n.Added in version 3.11.\n-\nint PyFrame_GetLasti(PyFrameObject *frame)\u00b6\nGet the frame\u2019s\nf_lasti\nattribute.Returns -1 if\nframe.f_lasti\nisNone\n.Added in version 3.11.\n-\nPyObject *PyFrame_GetVar(PyFrameObject *frame, PyObject *name)\u00b6\n- Return value: New reference.\nGet the variable name of frame.\nReturn a strong reference to the variable value on success.\nRaise\nNameError\nand returnNULL\nif the variable does not exist.Raise an exception and return\nNULL\non error.\nname type must be a\nstr\n.Added in version 3.12.\n-\nPyObject *PyFrame_GetVarString(PyFrameObject *frame, const char *name)\u00b6\n- Return value: New reference.\nSimilar to\nPyFrame_GetVar()\n, but the variable name is a C string encoded in UTF-8.Added in version 3.12.\n-\nPyObject *PyFrame_GetLocals(PyFrameObject *frame)\u00b6\n- Return value: New reference.\nGet the frame\u2019s\nf_locals\nattribute. If the frame refers to an optimized scope, this returns a write-through proxy object that allows modifying the locals. 
In all other cases (classes, modules,exec()\n,eval()\n) it returns the mapping representing the frame locals directly (as described forlocals()\n).Return a strong reference.\nAdded in version 3.11.\nChanged in version 3.13: As part of PEP 667, return an instance of\nPyFrameLocalsProxy_Type\n.\n-\nint PyFrame_GetLineNumber(PyFrameObject *frame)\u00b6\n- Part of the Stable ABI since version 3.10.\nReturn the line number that frame is currently executing.\nFrame Locals Proxies\u00b6\nAdded in version 3.13.\nThe f_locals\nattribute on a frame object\nis an instance of a \u201cframe-locals proxy\u201d. The proxy object exposes a\nwrite-through view of the underlying locals dictionary for the frame. This\nensures that the variables exposed by f_locals\nare always up to date with\nthe live local variables in the frame itself.\nSee PEP 667 for more information.\n-\nPyTypeObject PyFrameLocalsProxy_Type\u00b6\nThe type of frame\nlocals()\nproxy objects.\nLegacy Local Variable APIs\u00b6\nThese APIs are soft deprecated. As of Python 3.13, they do nothing. They exist solely for backwards compatibility.\n-\nvoid PyFrame_LocalsToFast(PyFrameObject *f, int clear)\u00b6\nThis function is soft deprecated and does nothing.\nPrior to Python 3.13, this function would copy the\nf_locals\nattribute of f to the internal \u201cfast\u201d array of local variables, allowing changes in frame objects to be visible to the interpreter. 
If clear was true, this function would process variables that were unset in the locals dictionary.Changed in version 3.13: This function now does nothing.\n-\nvoid PyFrame_FastToLocals(PyFrameObject *f)\u00b6\nThis function is soft deprecated and does nothing.\nPrior to Python 3.13, this function would copy the internal \u201cfast\u201d array of local variables (which is used by the interpreter) to the\nf_locals\nattribute of f, allowing changes in local variables to be visible to frame objects.Changed in version 3.13: This function now does nothing.\n-\nint PyFrame_FastToLocalsWithError(PyFrameObject *f)\u00b6\nThis function is soft deprecated and does nothing.\nPrior to Python 3.13, this function was similar to\nPyFrame_FastToLocals()\n, but would return0\non success, and-1\nwith an exception set on failure.Changed in version 3.13: This function now does nothing.\nSee also\nInternal Frames\u00b6\nUnless using PEP 523, you will not need this.\n-\nstruct _PyInterpreterFrame\u00b6\nThe interpreter\u2019s internal frame representation.\nAdded in version 3.11.\n-\nPyObject *PyUnstable_InterpreterFrame_GetCode(struct _PyInterpreterFrame *frame);\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nReturn a strong reference to the code object for the frame.\nAdded in version 3.12.\n-\nint PyUnstable_InterpreterFrame_GetLasti(struct _PyInterpreterFrame *frame);\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nReturn the byte offset into the last executed instruction.\nAdded in version 3.12.\n-\nint PyUnstable_InterpreterFrame_GetLine(struct _PyInterpreterFrame *frame);\u00b6\n- This is Unstable API. 
It may change without warning in minor releases.\nReturn the currently executing line number, or -1 if there is no line number.\nAdded in version 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1578} +{"url": "https://docs.python.org/3/c-api/sys.html", "title": "Operating System Utilities", "content": "Operating System Utilities\u00b6\n-\nPyObject *PyOS_FSPath(PyObject *path)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.6.\nReturn the file system representation for path. If the object is a\nstr\norbytes\nobject, then a new strong reference is returned. If the object implements theos.PathLike\ninterface, then__fspath__()\nis returned as long as it is astr\norbytes\nobject. OtherwiseTypeError\nis raised andNULL\nis returned.Added in version 3.6.\n-\nint Py_FdIsInteractive(FILE *fp, const char *filename)\u00b6\nReturn true (nonzero) if the standard I/O file fp with name filename is deemed interactive. This is the case for files for which\nisatty(fileno(fp))\nis true. If thePyConfig.interactive\nis non-zero, this function also returns true if the filename pointer isNULL\nor if the name is equal to one of the strings''\nor'???'\n.This function must not be called before Python is initialized.\n-\nvoid PyOS_BeforeFork()\u00b6\n- Part of the Stable ABI on platforms with fork() since version 3.7.\nFunction to prepare some internal state before a process fork. This should be called before calling\nfork()\nor any similar function that clones the current process. Only available on systems wherefork()\nis defined.Warning\nThe C\nfork()\ncall should only be made from the \u201cmain\u201d thread (of the \u201cmain\u201d interpreter). The same is true forPyOS_BeforeFork()\n.Added in version 3.7.\n-\nvoid PyOS_AfterFork_Parent()\u00b6\n- Part of the Stable ABI on platforms with fork() since version 3.7.\nFunction to update some internal state after a process fork. 
This should be called from the parent process after calling\nfork()\nor any similar function that clones the current process, regardless of whether process cloning was successful. Only available on systems wherefork()\nis defined.Warning\nThe C\nfork()\ncall should only be made from the \u201cmain\u201d thread (of the \u201cmain\u201d interpreter). The same is true forPyOS_AfterFork_Parent()\n.Added in version 3.7.\n-\nvoid PyOS_AfterFork_Child()\u00b6\n- Part of the Stable ABI on platforms with fork() since version 3.7.\nFunction to update internal interpreter state after a process fork. This must be called from the child process after calling\nfork()\n, or any similar function that clones the current process, if there is any chance the process will call back into the Python interpreter. Only available on systems wherefork()\nis defined.Warning\nThe C\nfork()\ncall should only be made from the \u201cmain\u201d thread (of the \u201cmain\u201d interpreter). The same is true forPyOS_AfterFork_Child()\n.Added in version 3.7.\nSee also\nos.register_at_fork()\nallows registering custom Python functions to be called byPyOS_BeforeFork()\n,PyOS_AfterFork_Parent()\nandPyOS_AfterFork_Child()\n.\n-\nvoid PyOS_AfterFork()\u00b6\n- Part of the Stable ABI on platforms with fork().\nFunction to update some internal state after a process fork; this should be called in the new process if the Python interpreter will continue to be used. If a new executable is loaded into the new process, this function does not need to be called.\nDeprecated since version 3.7: This function is superseded by\nPyOS_AfterFork_Child()\n.\n-\nint PyOS_CheckStack()\u00b6\n- Part of the Stable ABI on platforms with USE_STACKCHECK since version 3.7.\nReturn true when the interpreter runs out of stack space. 
This is a reliable check, but is only available when\nUSE_STACKCHECK\nis defined (currently on certain versions of Windows using the Microsoft Visual C++ compiler).USE_STACKCHECK\nwill be defined automatically; you should never change the definition in your own code.\n-\ntypedef void (*PyOS_sighandler_t)(int)\u00b6\n- Part of the Stable ABI.\n-\nPyOS_sighandler_t PyOS_getsig(int i)\u00b6\n- Part of the Stable ABI.\nReturn the current signal handler for signal i. This is a thin wrapper around either\nsigaction()\norsignal()\n. Do not call those functions directly!\n-\nPyOS_sighandler_t PyOS_setsig(int i, PyOS_sighandler_t h)\u00b6\n- Part of the Stable ABI.\nSet the signal handler for signal i to be h; return the old signal handler. This is a thin wrapper around either\nsigaction()\norsignal()\n. Do not call those functions directly!\n-\nint PyOS_InterruptOccurred(void)\u00b6\n- Part of the Stable ABI.\nCheck if a\nSIGINT\nsignal has been received.Returns\n1\nif aSIGINT\nhas occurred and clears the signal flag, or0\notherwise.In most cases, you should prefer\nPyErr_CheckSignals()\nover this function.PyErr_CheckSignals()\ninvokes the appropriate signal handlers for all pending signals, allowing Python code to handle the signal properly. This function only detectsSIGINT\nand does not invoke any Python signal handlers.This function is async-signal-safe and this function cannot fail. The caller must hold an attached thread state.\n-\nwchar_t *Py_DecodeLocale(const char *arg, size_t *size)\u00b6\n- Part of the Stable ABI since version 3.7.\nWarning\nThis function should not be called directly: use the\nPyConfig\nAPI with thePyConfig_SetBytesString()\nfunction which ensures that Python is preinitialized.This function must not be called before Python is preinitialized and so that the LC_CTYPE locale is properly configured: see the\nPy_PreInitialize()\nfunction.Decode a byte string from the filesystem encoding and error handler. 
If the error handler is surrogateescape error handler, undecodable bytes are decoded as characters in range U+DC80..U+DCFF; and if a byte sequence can be decoded as a surrogate character, the bytes are escaped using the surrogateescape error handler instead of decoding them.\nReturn a pointer to a newly allocated wide character string, use\nPyMem_RawFree()\nto free the memory. If size is notNULL\n, write the number of wide characters excluding the null character into*size\nReturn\nNULL\non decoding error or memory allocation error. If size is notNULL\n,*size\nis set to(size_t)-1\non memory error or set to(size_t)-2\non decoding error.The filesystem encoding and error handler are selected by\nPyConfig_Read()\n: seefilesystem_encoding\nandfilesystem_errors\nmembers ofPyConfig\n.Decoding errors should never happen, unless there is a bug in the C library.\nUse the\nPy_EncodeLocale()\nfunction to encode the character string back to a byte string.See also\nThe\nPyUnicode_DecodeFSDefaultAndSize()\nandPyUnicode_DecodeLocaleAndSize()\nfunctions.Added in version 3.5.\nChanged in version 3.7: The function now uses the UTF-8 encoding in the Python UTF-8 Mode.\nChanged in version 3.8: The function now uses the UTF-8 encoding on Windows if\nPyPreConfig.legacy_windows_fs_encoding\nis zero;\n-\nchar *Py_EncodeLocale(const wchar_t *text, size_t *error_pos)\u00b6\n- Part of the Stable ABI since version 3.7.\nEncode a wide character string to the filesystem encoding and error handler. If the error handler is surrogateescape error handler, surrogate characters in the range U+DC80..U+DCFF are converted to bytes 0x80..0xFF.\nReturn a pointer to a newly allocated byte string, use\nPyMem_Free()\nto free the memory. 
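The decode/encode round trip that `Py_DecodeLocale()` and `Py_EncodeLocale()` perform, including the surrogateescape treatment of undecodable bytes, is mirrored at the Python level by `os.fsdecode()` and `os.fsencode()`. A small sketch (the surrogateescape branch assumes the POSIX default filesystem error handler, hence the `os.name` guard):

```python
import os

# str -> bytes -> str through the filesystem encoding and error handler
raw = os.fsencode("data.txt")
assert isinstance(raw, bytes)
assert os.fsdecode(raw) == "data.txt"

if os.name == "posix":
    # With surrogateescape (the POSIX default), an undecodable byte
    # such as 0x80 is mapped to the surrogate U+DC80 on decoding and
    # restored to the original byte on encoding.
    weird = b"file-\x80.bin"
    name = os.fsdecode(weird)
    assert "\udc80" in name
    assert os.fsencode(name) == weird
```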
ReturnNULL\non encoding error or memory allocation error.If error_pos is not\nNULL\n,*error_pos\nis set to(size_t)-1\non success, or set to the index of the invalid character on encoding error.The filesystem encoding and error handler are selected by\nPyConfig_Read()\n: seefilesystem_encoding\nandfilesystem_errors\nmembers ofPyConfig\n.Use the\nPy_DecodeLocale()\nfunction to decode the bytes string back to a wide character string.Warning\nThis function must not be called before Python is preinitialized and so that the LC_CTYPE locale is properly configured: see the\nPy_PreInitialize()\nfunction.See also\nThe\nPyUnicode_EncodeFSDefault()\nandPyUnicode_EncodeLocale()\nfunctions.Added in version 3.5.\nChanged in version 3.7: The function now uses the UTF-8 encoding in the Python UTF-8 Mode.\nChanged in version 3.8: The function now uses the UTF-8 encoding on Windows if\nPyPreConfig.legacy_windows_fs_encoding\nis zero.\n-\nFILE *Py_fopen(PyObject *path, const char *mode)\u00b6\nSimilar to\nfopen()\n, but path is a Python object and an exception is set on error.path must be a\nstr\nobject, abytes\nobject, or a path-like object.On success, return the new file pointer. On error, set an exception and return\nNULL\n.The file must be closed by\nPy_fclose()\nrather than calling directlyfclose()\n.The file descriptor is created non-inheritable (PEP 446).\nThe caller must have an attached thread state.\nAdded in version 3.14.\n-\nint Py_fclose(FILE *file)\u00b6\nClose a file that was opened by\nPy_fopen()\n.On success, return\n0\n. On error, returnEOF\nanderrno\nis set to indicate the error. In either case, any further access (including another call toPy_fclose()\n) to the stream results in undefined behavior.Added in version 3.14.\nSystem Functions\u00b6\nThese are utility functions that make functionality from the sys\nmodule\naccessible to C code. 
They all work with the current interpreter thread\u2019s\nsys\nmodule\u2019s dict, which is contained in the internal thread state structure.\n-\nPyObject *PySys_GetObject(const char *name)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn the object name from the\nsys\nmodule orNULL\nif it does not exist, without setting an exception.\n-\nint PySys_SetObject(const char *name, PyObject *v)\u00b6\n- Part of the Stable ABI.\nSet name in the\nsys\nmodule to v unless v isNULL\n, in which case name is deleted from the sys module. Returns0\non success,-1\non error.\n-\nvoid PySys_ResetWarnOptions()\u00b6\n- Part of the Stable ABI.\nReset\nsys.warnoptions\nto an empty list. This function may be called prior toPy_Initialize()\n.Deprecated since version 3.13, will be removed in version 3.15: Clear\nsys.warnoptions\nandwarnings.filters\ninstead.\n-\nvoid PySys_WriteStdout(const char *format, ...)\u00b6\n- Part of the Stable ABI.\nWrite the output string described by format to\nsys.stdout\n. No exceptions are raised, even if truncation occurs (see below).format should limit the total size of the formatted output string to 1000 bytes or less \u2013 after 1000 bytes, the output string is truncated. In particular, this means that no unrestricted \u201c%s\u201d formats should occur; these should be limited using \u201c%.<N>s\u201d where <N> is a decimal number calculated so that <N> plus the maximum size of other formatted text does not exceed 1000 bytes.

Also watch out for \u201c%f\u201d, which can print hundreds of digits for very large numbers.\nIf a problem occurs, or\nsys.stdout\nis unset, the formatted message is written to the real (C level) stdout.\n-\nvoid PySys_WriteStderr(const char *format, ...)\u00b6\n- Part of the Stable ABI.\nAs\nPySys_WriteStdout()\n, but write tosys.stderr\nor stderr instead.\n-\nvoid PySys_FormatStdout(const char *format, ...)\u00b6\n- Part of the Stable ABI.\nFunction similar to PySys_WriteStdout() but format the message using\nPyUnicode_FromFormatV()\nand don\u2019t truncate the message to an arbitrary length.Added in version 3.2.\n-\nvoid PySys_FormatStderr(const char *format, ...)\u00b6\n- Part of the Stable ABI.\nAs\nPySys_FormatStdout()\n, but write tosys.stderr\nor stderr instead.Added in version 3.2.\n-\nPyObject *PySys_GetXOptions()\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI since version 3.7.\nReturn the current dictionary of\n-X\noptions, similarly tosys._xoptions\n. On error,NULL\nis returned and an exception is set.Added in version 3.2.\n-\nint PySys_Audit(const char *event, const char *format, ...)\u00b6\n- Part of the Stable ABI since version 3.13.\nRaise an auditing event with any active hooks. Return zero for success and non-zero with an exception set on failure.\nThe event string argument must not be NULL.\nIf any hooks have been added, format and other arguments will be used to construct a tuple to pass. Apart from\nN\n, the same format characters as used inPy_BuildValue()\nare available. If the built value is not a tuple, it will be added into a single-element tuple.The\nN\nformat option must not be used. 
It consumes a reference, but since there is no way to know whether arguments to this function will be consumed, using it may cause reference leaks.Note that\n#\nformat characters should always be treated asPy_ssize_t\n, regardless of whetherPY_SSIZE_T_CLEAN\nwas defined.sys.audit()\nperforms the same function from Python code.See also\nPySys_AuditTuple()\n.Added in version 3.8.\nChanged in version 3.8.2: Require\nPy_ssize_t\nfor#\nformat characters. Previously, an unavoidable deprecation warning was raised.\n-\nint PySys_AuditTuple(const char *event, PyObject *args)\u00b6\n- Part of the Stable ABI since version 3.13.\nSimilar to\nPySys_Audit()\n, but pass arguments as a Python object. args must be atuple\n. To pass no arguments, args can be NULL.Added in version 3.13.\n-\nint PySys_AddAuditHook(Py_AuditHookFunction hook, void *userData)\u00b6\nAppend the callable hook to the list of active auditing hooks. Return zero on success and non-zero on failure. If the runtime has been initialized, also set an error on failure. Hooks added through this API are called for all interpreters created by the runtime.\nThe userData pointer is passed into the hook function. Since hook functions may be called from different runtimes, this pointer should not refer directly to Python state.\nThis function is safe to call before\nPy_Initialize()\n. When called after runtime initialization, existing audit hooks are notified and may silently abort the operation by raising an error subclassed fromException\n(other errors will not be silenced).The hook function is always called with an attached thread state by the Python interpreter that raised the event.\nSee PEP 578 for a detailed description of auditing. Functions in the runtime and standard library that raise events are listed in the audit events table. Details are in each function\u2019s documentation.\nIf the interpreter is initialized, this function raises an auditing event\nsys.addaudithook\nwith no arguments. 
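The auditing machinery exposed by `PySys_Audit()` and `PySys_AddAuditHook()` has direct Python-level siblings in `sys.audit()` and `sys.addaudithook()` (Python 3.8+). A minimal sketch; the `"demo.save"` event name is made up for illustration, and note that hooks cannot be removed once added:

```python
import sys

seen = []

def hook(event, args):
    # Hooks receive every runtime event, so filter for our own.
    # A hook should be fast and should not raise for events it
    # does not care about.
    if event.startswith("demo."):
        seen.append((event, args))

sys.addaudithook(hook)                   # like PySys_AddAuditHook()
sys.audit("demo.save", "report.txt", 2)  # like PySys_Audit()
assert seen == [("demo.save", ("report.txt", 2))]
```

The positional arguments after the event name arrive at the hook packed into a single tuple, matching the tuple-building behaviour described for the C function.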
If any existing hooks raise an exception derived fromException\n, the new hook will not be added and the exception is cleared. As a result, callers cannot assume that their hook has been added unless they control all existing hooks.-\ntypedef int (*Py_AuditHookFunction)(const char *event, PyObject *args, void *userData)\u00b6\nThe type of the hook function. event is the C string event argument passed to\nPySys_Audit()\norPySys_AuditTuple()\n. args is guaranteed to be aPyTupleObject\n. userData is the argument passed to PySys_AddAuditHook().\nAdded in version 3.8.\nProcess Control\u00b6\n-\nvoid Py_FatalError(const char *message)\u00b6\n- Part of the Stable ABI.\nPrint a fatal error message and kill the process. No cleanup is performed. This function should only be invoked when a condition is detected that would make it dangerous to continue using the Python interpreter; e.g., when the object administration appears to be corrupted. On Unix, the standard C library function\nabort()\nis called which will attempt to produce acore\nfile.The\nPy_FatalError()\nfunction is replaced with a macro which logs automatically the name of the current function, unless thePy_LIMITED_API\nmacro is defined.Changed in version 3.9: Log the function name automatically.\n-\nvoid Py_Exit(int status)\u00b6\n- Part of the Stable ABI.\nExit the current process. This calls\nPy_FinalizeEx()\nand then calls the standard C library functionexit(status)\n. IfPy_FinalizeEx()\nindicates an error, the exit status is set to 120.Changed in version 3.6: Errors from finalization are no longer ignored.\n-\nint Py_AtExit(void (*func)())\u00b6\n- Part of the Stable ABI.\nRegister a cleanup function to be called by\nPy_FinalizeEx()\n. The cleanup function will be called with no arguments and should return no value. At most 32 cleanup functions can be registered.
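The LIFO ordering that `Py_AtExit()` uses also governs Python's `atexit` module, which is the pure-Python way to register interpreter-exit callbacks. A sketch that runs a child interpreter so the exit-time output is observable:

```python
import subprocess
import sys
import textwrap

# Register two callbacks in a child interpreter; atexit runs them
# in reverse registration order, just like Py_AtExit()'s rule that
# "the cleanup function registered last is called first".
prog = textwrap.dedent("""
    import atexit
    atexit.register(print, "first registered")
    atexit.register(print, "last registered")
""")
out = subprocess.run(
    [sys.executable, "-c", prog],
    capture_output=True, text=True, check=True,
).stdout
assert out.splitlines() == ["last registered", "first registered"]
```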
When the registration is successful,Py_AtExit()\nreturns0\n; on failure, it returns-1\n. The cleanup function registered last is called first. Each cleanup function will be called at most once. Since Python\u2019s internal finalization will have completed before the cleanup function, no Python APIs should be called by func.See also\nPyUnstable_AtExit()\nfor passing a void *data\nargument.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3944} +{"url": "https://docs.python.org/3/c-api/import.html", "title": "Importing Modules", "content": "Importing Modules\u00b6\n-\nPyObject *PyImport_ImportModule(const char *name)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is a wrapper around\nPyImport_Import()\nwhich takes a const char* as an argument instead of a PyObject*.\n-\nPyObject *PyImport_ImportModuleNoBlock(const char *name)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis function is a deprecated alias of\nPyImport_ImportModule()\n.Changed in version 3.3: This function used to fail immediately when the import lock was held by another thread. In Python 3.3 though, the locking scheme switched to per-module locks for most purposes, so this function\u2019s special behaviour isn\u2019t needed anymore.\nDeprecated since version 3.13, will be removed in version 3.15: Use\nPyImport_ImportModule()\ninstead.\n-\nPyObject *PyImport_ImportModuleEx(const char *name, PyObject *globals, PyObject *locals, PyObject *fromlist)\u00b6\n- Return value: New reference.\nImport a module. This is best described by referring to the built-in Python function\n__import__()\n.The return value is a new reference to the imported module or top-level package, or\nNULL\nwith an exception set on failure.
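Since `PyImport_ImportModuleEx()` is documented in terms of the builtin `__import__()`, its return-value rule can be checked directly from Python:

```python
import importlib

# __import__("pkg.mod") returns the *top-level* package...
top = __import__("os.path")
assert top.__name__ == "os"

# ...unless a non-empty fromlist is given, in which case the
# submodule itself is returned.
sub = __import__("os.path", fromlist=["join"])
assert sub.__name__ in ("posixpath", "ntpath", "os.path")

# importlib.import_module() always returns the named module.
assert importlib.import_module("os.path") is sub
```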
Like for__import__()\n, the return value when a submodule of a package was requested is normally the top-level package, unless a non-empty fromlist was given.Failing imports remove incomplete module objects, like with\nPyImport_ImportModule()\n.\n-\nPyObject *PyImport_ImportModuleLevelObject(PyObject *name, PyObject *globals, PyObject *locals, PyObject *fromlist, int level)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nImport a module. This is best described by referring to the built-in Python function\n__import__()\n, as the standard__import__()\nfunction calls this function directly.The return value is a new reference to the imported module or top-level package, or\nNULL\nwith an exception set on failure. Like for__import__()\n, the return value when a submodule of a package was requested is normally the top-level package, unless a non-empty fromlist was given.Added in version 3.3.\n-\nPyObject *PyImport_ImportModuleLevel(const char *name, PyObject *globals, PyObject *locals, PyObject *fromlist, int level)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nSimilar to\nPyImport_ImportModuleLevelObject()\n, but the name is a UTF-8 encoded string instead of a Unicode object.Changed in version 3.3: Negative values for level are no longer accepted.\n-\nPyObject *PyImport_Import(PyObject *name)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is a higher-level interface that calls the current \u201cimport hook function\u201d (with an explicit level of 0, meaning absolute import). It invokes the\n__import__()\nfunction from the__builtins__\nof the current globals. This means that the import is done using whatever import hooks are installed in the current environment.This function always uses absolute imports.\n-\nPyObject *PyImport_ReloadModule(PyObject *m)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReload a module. 
Return a new reference to the reloaded module, or\nNULL\nwith an exception set on failure (the module still exists in this case).\n-\nPyObject *PyImport_AddModuleRef(const char *name)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.13.\nReturn the module object corresponding to a module name.\nThe name argument may be of the form\npackage.module\n. First check the modules dictionary if there\u2019s one there, and if not, create a new one and insert it in the modules dictionary.Return a strong reference to the module on success. Return\nNULL\nwith an exception set on failure.The module name name is decoded from UTF-8.\nThis function does not load or import the module; if the module wasn\u2019t already loaded, you will get an empty module object. Use\nPyImport_ImportModule()\nor one of its variants to import a module. Package structures implied by a dotted name for name are not created if not already present.Added in version 3.13.\n-\nPyObject *PyImport_AddModuleObject(PyObject *name)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI since version 3.7.\nSimilar to\nPyImport_AddModuleRef()\n, but return a borrowed reference and name is a Pythonstr\nobject.Added in version 3.3.\n-\nPyObject *PyImport_AddModule(const char *name)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nSimilar to\nPyImport_AddModuleRef()\n, but return a borrowed reference.\n-\nPyObject *PyImport_ExecCodeModule(const char *name, PyObject *co)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGiven a module name (possibly of the form\npackage.module\n) and a code object read from a Python bytecode file or obtained from the built-in functioncompile()\n, load the module. Return a new reference to the module object, orNULL\nwith an exception set if an error occurred. name is removed fromsys.modules\nin error cases, even if name was already insys.modules\non entry toPyImport_ExecCodeModule()\n. 
Leaving incompletely initialized modules insys.modules\nis dangerous, as imports of such modules have no way to know that the module object is an unknown (and probably damaged with respect to the module author\u2019s intents) state.The module\u2019s\n__spec__\nand__loader__\nwill be set, if not set already, with the appropriate values. The spec\u2019s loader will be set to the module\u2019s__loader__\n(if set) and to an instance ofSourceFileLoader\notherwise.The module\u2019s\n__file__\nattribute will be set to the code object\u2019sco_filename\n. If applicable,__cached__\nwill also be set.This function will reload the module if it was already imported. See\nPyImport_ReloadModule()\nfor the intended way to reload a module.If name points to a dotted name of the form\npackage.module\n, any package structures not already created will still not be created.See also\nPyImport_ExecCodeModuleEx()\nandPyImport_ExecCodeModuleWithPathnames()\n.Changed in version 3.12: The setting of\n__cached__\nand__loader__\nis deprecated. SeeModuleSpec\nfor alternatives.\n-\nPyObject *PyImport_ExecCodeModuleEx(const char *name, PyObject *co, const char *pathname)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nLike\nPyImport_ExecCodeModule()\n, but the__file__\nattribute of the module object is set to pathname if it is non-NULL\n.See also\nPyImport_ExecCodeModuleWithPathnames()\n.\n-\nPyObject *PyImport_ExecCodeModuleObject(PyObject *name, PyObject *co, PyObject *pathname, PyObject *cpathname)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nLike\nPyImport_ExecCodeModuleEx()\n, but the__cached__\nattribute of the module object is set to cpathname if it is non-NULL\n. Of the three functions, this is the preferred one to use.Added in version 3.3.\nChanged in version 3.12: Setting\n__cached__\nis deprecated. 
SeeModuleSpec\nfor alternatives.\n-\nPyObject *PyImport_ExecCodeModuleWithPathnames(const char *name, PyObject *co, const char *pathname, const char *cpathname)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nLike\nPyImport_ExecCodeModuleObject()\n, but name, pathname and cpathname are UTF-8 encoded strings. Attempts are also made to figure out what the value for pathname should be from cpathname if the former is set toNULL\n.Added in version 3.2.\nChanged in version 3.3: Uses\nimp.source_from_cache()\nin calculating the source path if only the bytecode path is provided.Changed in version 3.12: No longer uses the removed\nimp\nmodule.\n-\nlong PyImport_GetMagicNumber()\u00b6\n- Part of the Stable ABI.\nReturn the magic number for Python bytecode files (a.k.a.\n.pyc\nfile). The magic number should be present in the first four bytes of the bytecode file, in little-endian byte order. Returns-1\non error.Changed in version 3.3: Return value of\n-1\nupon failure.\n-\nconst char *PyImport_GetMagicTag()\u00b6\n- Part of the Stable ABI.\nReturn the magic tag string for PEP 3147 format Python bytecode file names. Keep in mind that the value at\nsys.implementation.cache_tag\nis authoritative and should be used instead of this function.Added in version 3.2.\n-\nPyObject *PyImport_GetModuleDict()\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn the dictionary used for the module administration (a.k.a.\nsys.modules\n). Note that this is a per-interpreter variable.\n-\nPyObject *PyImport_GetModule(PyObject *name)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.8.\nReturn the already imported module with the given name. If the module has not been imported yet then returns\nNULL\nbut does not set an error. ReturnsNULL\nand sets an error if the lookup failed.Added in version 3.7.\n-\nPyObject *PyImport_GetImporter(PyObject *path)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturn a finder object for a\nsys.path\n/pkg.__path__\nitem path, possibly by fetching it from thesys.path_importer_cache\ndict. If it wasn\u2019t yet cached, traversesys.path_hooks\nuntil a hook is found that can handle the path item. ReturnNone\nif no hook could; this tells our caller that the path based finder could not find a finder for this path item. Cache the result insys.path_importer_cache\n. Return a new reference to the finder object.\n-\nint PyImport_ImportFrozenModuleObject(PyObject *name)\u00b6\n- Part of the Stable ABI since version 3.7.\nLoad a frozen module named name. Return\n1\nfor success,0\nif the module is not found, and-1\nwith an exception set if the initialization failed. To access the imported module on a successful load, usePyImport_ImportModule()\n. (Note the misnomer \u2014 this function would reload the module if it was already imported.)Added in version 3.3.\nChanged in version 3.4: The\n__file__\nattribute is no longer set on the module.\n-\nint PyImport_ImportFrozenModule(const char *name)\u00b6\n- Part of the Stable ABI.\nSimilar to\nPyImport_ImportFrozenModuleObject()\n, but the name is a UTF-8 encoded string instead of a Unicode object.\n-\nstruct _frozen\u00b6\nThis is the structure type definition for frozen module descriptors, as generated by the freeze utility (see\nTools/freeze/\nin the Python source distribution). Its definition, found inInclude/import.h\n, is:struct _frozen { const char *name; const unsigned char *code; int size; bool is_package; };\nChanged in version 3.11: The new\nis_package\nfield indicates whether the module is a package or not. This replaces setting thesize\nfield to a negative value.\n-\nconst struct _frozen *PyImport_FrozenModules\u00b6\nThis pointer is initialized to point to an array of\n_frozen\nrecords, terminated by one whose members are allNULL\nor zero. When a frozen module is imported, it is searched in this table. 
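The values returned by `PyImport_GetMagicNumber()` and `PyImport_GetMagicTag()` above are also visible from Python, as `importlib.util.MAGIC_NUMBER` and the authoritative `sys.implementation.cache_tag`. A quick check:

```python
import sys
import importlib.util

# The magic number occupies the first four bytes of every .pyc file,
# in little-endian order, and by convention ends with b"\r\n" so that
# text-mode transfer corruption is detected.
magic = importlib.util.MAGIC_NUMBER
assert len(magic) == 4
assert magic[2:] == b"\r\n"

# The PEP 3147 cache tag used in __pycache__ file names (may be None
# on implementations that do not cache bytecode).
tag = sys.implementation.cache_tag
assert tag is None or isinstance(tag, str)  # e.g. 'cpython-313'
```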
Third-party code could play tricks with this to provide a dynamically created collection of frozen modules.\n-\nint PyImport_AppendInittab(const char *name, PyObject *(*initfunc)(void))\u00b6\n- Part of the Stable ABI.\nAdd a single module to the existing table of built-in modules. This is a convenience wrapper around\nPyImport_ExtendInittab()\n, returning-1\nif the table could not be extended. The new module can be imported by the name name, and uses the function initfunc as the initialization function called on the first attempted import. This should be called beforePy_Initialize()\n.\n-\nstruct _inittab\u00b6\nStructure describing a single entry in the list of built-in modules. Programs which embed Python may use an array of these structures in conjunction with\nPyImport_ExtendInittab()\nto provide additional built-in modules. The structure consists of two members:-\nconst char *name\u00b6\nThe module name, as an ASCII encoded string.\n-\nPyObject *(*initfunc)(void)\u00b6\nInitialization function for a module built into the interpreter.\n-\nint PyImport_ExtendInittab(struct _inittab *newtab)\u00b6\nAdd a collection of modules to the table of built-in modules. The newtab array must end with a sentinel entry which contains\nNULL\nfor thename\nfield; failure to provide the sentinel value can result in a memory fault. Returns0\non success or-1\nif insufficient memory could be allocated to extend the internal table. In the event of failure, no modules are added to the internal table. This must be called beforePy_Initialize()\n.If Python is initialized multiple times,\nPyImport_AppendInittab()\norPyImport_ExtendInittab()\nmust be called before each Python initialization.\n-\nstruct _inittab *PyImport_Inittab\u00b6\nThe table of built-in modules used by Python initialization.
Do not use this directly; use\nPyImport_AppendInittab()\nandPyImport_ExtendInittab()\ninstead.\n-\nPyObject *PyImport_ImportModuleAttr(PyObject *mod_name, PyObject *attr_name)\u00b6\n- Return value: New reference.\nImport the module mod_name and get its attribute attr_name.\nNames must be Python\nstr\nobjects.Helper function combining\nPyImport_Import()\nandPyObject_GetAttr()\n. For example, it can raiseImportError\nif the module is not found, andAttributeError\nif the attribute doesn\u2019t exist.Added in version 3.14.\n-\nPyObject *PyImport_ImportModuleAttrString(const char *mod_name, const char *attr_name)\u00b6\n- Return value: New reference.\nSimilar to\nPyImport_ImportModuleAttr()\n, but names are UTF-8 encoded strings instead of Pythonstr\nobjects.Added in version 3.14.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3214} +{"url": "https://docs.python.org/3/library/xdrlib.html", "title": " \u2014 Encode and decode XDR data", "content": "xdrlib\n\u2014 Encode and decode XDR data\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the xdrlib\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 84} +{"url": "https://docs.python.org/3/c-api/iter.html", "title": "Iterator Protocol", "content": "Iterator Protocol\u00b6\nThere are two functions specifically for working with iterators.\n-\nint PyIter_Check(PyObject *o)\u00b6\n- Part of the Stable ABI since version 3.8.\nReturn non-zero if the object o can be safely passed to\nPyIter_NextItem()\nand0\notherwise. 
This function always succeeds.\n-\nint PyAIter_Check(PyObject *o)\u00b6\n- Part of the Stable ABI since version 3.10.\nReturn non-zero if the object o provides the\nAsyncIterator\nprotocol, and0\notherwise. This function always succeeds.Added in version 3.10.\n-\nint PyIter_NextItem(PyObject *iter, PyObject **item)\u00b6\n- Part of the Stable ABI since version 3.14.\nReturn\n1\nand set item to a strong reference of the next value of the iterator iter on success. Return0\nand set item toNULL\nif there are no remaining values. Return-1\n, set item toNULL\nand set an exception on error.Added in version 3.14.\n-\nPyObject *PyIter_Next(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is an older version of\nPyIter_NextItem()\n, which is retained for backwards compatibility. PreferPyIter_NextItem()\n.Return the next value from the iterator o. The object must be an iterator according to\nPyIter_Check()\n(it is up to the caller to check this). If there are no remaining values, returnsNULL\nwith no exception set. If an error occurs while retrieving the item, returnsNULL\nand passes along the exception.\n-\ntype PySendResult\u00b6\nThe enum value used to represent different results of\nPyIter_Send()\n.Added in version 3.10.\n-\nPySendResult PyIter_Send(PyObject *iter, PyObject *arg, PyObject **presult)\u00b6\n- Part of the Stable ABI since version 3.10.\nSends the arg value into the iterator iter. Returns:\nPYGEN_RETURN\nif iterator returns. Return value is returned via presult.PYGEN_NEXT\nif iterator yields. Yielded value is returned via presult.PYGEN_ERROR\nif iterator has raised an exception. presult is set toNULL\n.\nAdded in version 3.10.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 469} +{"url": "https://docs.python.org/3/c-api/weakref.html", "title": "Weak Reference Objects", "content": "Weak Reference Objects\u00b6\nPython supports weak references as first-class objects.
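The C-level iterator protocol above maps directly onto Python's `iter()`/`next()`, and `generator.send()` mirrors `PyIter_Send()`'s three outcomes. A sketch:

```python
# iter()/next() are the Python-level counterparts of PyIter_Check()
# and PyIter_Next()/PyIter_NextItem().
it = iter([10, 20])
assert next(it) == 10
assert next(it) == 20
try:
    next(it)
    exhausted = False
except StopIteration:  # C callers instead see NULL with no exception
    exhausted = True
assert exhausted

def gen():
    received = yield "ready"  # a yield corresponds to PYGEN_NEXT
    return received * 2       # a return corresponds to PYGEN_RETURN

g = gen()
assert g.send(None) == "ready"  # advance to the first yield
try:
    g.send(21)
    result = None
except StopIteration as exc:
    result = exc.value          # the PYGEN_RETURN payload
assert result == 42
```

A raising generator would correspond to the third outcome, `PYGEN_ERROR`.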
There are two specific object types which directly implement weak references. The first is a simple reference object, and the second acts as a proxy for the original object as much as it can.\n-\nint PyWeakref_Check(PyObject *ob)\u00b6\nReturn non-zero if ob is either a reference or proxy object. This function always succeeds.\n-\nint PyWeakref_CheckRef(PyObject *ob)\u00b6\nReturn non-zero if ob is a reference object or a subclass of the reference type. This function always succeeds.\n-\nint PyWeakref_CheckRefExact(PyObject *ob)\u00b6\nReturn non-zero if ob is a reference object, but not a subclass of the reference type. This function always succeeds.\n-\nint PyWeakref_CheckProxy(PyObject *ob)\u00b6\nReturn non-zero if ob is a proxy object. This function always succeeds.\n-\nPyObject *PyWeakref_NewRef(PyObject *ob, PyObject *callback)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a weak reference object for the object ob. This will always return a new reference, but is not guaranteed to create a new object; an existing reference object may be returned. The second parameter, callback, can be a callable object that receives notification when ob is garbage collected; it should accept a single parameter, which will be the weak reference object itself. callback may also be\nNone\norNULL\n. If ob is not a weakly referenceable object, or if callback is not callable,None\n, orNULL\n, this will returnNULL\nand raiseTypeError\n.See also\nPyType_SUPPORTS_WEAKREFS()\nfor checking if ob is weakly referenceable.\n-\nPyObject *PyWeakref_NewProxy(PyObject *ob, PyObject *callback)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a weak reference proxy object for the object ob. This will always return a new reference, but is not guaranteed to create a new object; an existing proxy object may be returned. 
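At the Python level these two flavors are exposed by the weakref module; a short sketch of a reference object, a proxy, and a collection callback (mirroring the callback parameter described for the C constructors):

```python
import gc
import weakref

class Thing:
    def describe(self):
        return "a thing"

obj = Thing()
collected = []

# A reference object must be called to get the referent (or None if dead).
ref = weakref.ref(obj, lambda r: collected.append("gone"))
assert ref() is obj

# A proxy forwards attribute access to the referent transparently.
proxy = weakref.proxy(obj)
assert proxy.describe() == "a thing"

del obj
gc.collect()                  # for implementations without refcounting
assert ref() is None          # the reference is now dead
assert collected == ["gone"]  # the callback received the weak reference itself
```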
The second parameter, callback, can be a callable object that receives notification when ob is garbage collected; it should accept a single parameter, which will be the weak reference object itself. callback may also be\nNone\norNULL\n. If ob is not a weakly referenceable object, or if callback is not callable,None\n, orNULL\n, this will returnNULL\nand raiseTypeError\n.See also\nPyType_SUPPORTS_WEAKREFS()\nfor checking if ob is weakly referenceable.\n-\nint PyWeakref_GetRef(PyObject *ref, PyObject **pobj)\u00b6\n- Part of the Stable ABI since version 3.13.\nGet a strong reference to the referenced object from a weak reference, ref, into *pobj.\nOn success, set *pobj to a new strong reference to the referenced object and return 1.\nIf the reference is dead, set *pobj to\nNULL\nand return 0.On error, raise an exception and return -1.\nAdded in version 3.13.\n-\nPyObject *PyWeakref_GetObject(PyObject *ref)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn a borrowed reference to the referenced object from a weak reference, ref. If the referent is no longer live, returns\nPy_None\n.Note\nThis function returns a borrowed reference to the referenced object. This means that you should always call\nPy_INCREF()\non the object except when it cannot be destroyed before the last usage of the borrowed reference.Deprecated since version 3.13, will be removed in version 3.15: Use\nPyWeakref_GetRef()\ninstead.\n-\nPyObject *PyWeakref_GET_OBJECT(PyObject *ref)\u00b6\n- Return value: Borrowed reference.\nSimilar to\nPyWeakref_GetObject()\n, but does no error checking.Deprecated since version 3.13, will be removed in version 3.15: Use\nPyWeakref_GetRef()\ninstead.\n-\nint PyWeakref_IsDead(PyObject *ref)\u00b6\nTest if the weak reference ref is dead. 
Returns 1 if the reference is dead, 0 if it is alive, and -1 with an error set if ref is not a weak reference object.\nAdded in version 3.14.\n-\nvoid PyObject_ClearWeakRefs(PyObject *object)\u00b6\n- Part of the Stable ABI.\nThis function is called by the\ntp_dealloc\nhandler to clear weak references.This iterates through the weak references for object and calls callbacks for those references which have one. It returns when all callbacks have been attempted.\n-\nvoid PyUnstable_Object_ClearWeakRefsNoCallbacks(PyObject *object)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nClears the weakrefs for object without calling the callbacks.\nThis function is called by the\ntp_dealloc\nhandler for types with finalizers (i.e.,__del__()\n). The handler for those objects first callsPyObject_ClearWeakRefs()\nto clear weakrefs and call their callbacks, then the finalizer, and finally this function to clear any weakrefs that may have been created by the finalizer.In most circumstances, it\u2019s more appropriate to use\nPyObject_ClearWeakRefs()\nto clear weakrefs instead of this function.Added in version 3.13.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1181} +{"url": "https://docs.python.org/3/c-api/hash.html", "title": "PyHash API", "content": "PyHash API\u00b6\nSee also the PyTypeObject.tp_hash\nmember and Hashing of numeric types.\n-\ntype Py_hash_t\u00b6\nHash value type: signed integer.\nAdded in version 3.2.\n-\ntype Py_uhash_t\u00b6\nHash value type: unsigned integer.\nAdded in version 3.2.\n-\nPy_HASH_ALGORITHM\u00b6\nA numerical value indicating the algorithm for hashing of\nstr\n,bytes\n, andmemoryview\n.The algorithm name is exposed by\nsys.hash_info.algorithm\n.Added in version 3.4.\n-\nPy_HASH_FNV\u00b6\n-\nPy_HASH_SIPHASH24\u00b6\n-\nPy_HASH_SIPHASH13\u00b6\nNumerical values to compare to\nPy_HASH_ALGORITHM\nto determine which algorithm is used for hashing. 
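These build-time hashing choices are visible from Python through sys.hash_info; a quick way to inspect which algorithm and cutoff an interpreter was compiled with:

```python
import sys

info = sys.hash_info

# The selected algorithm is one of the Py_HASH_* values named above.
assert info.algorithm in {"fnv", "siphash24", "siphash13"}

# Py_HASH_CUTOFF: 0 disables the small-buffer DJBX33A optimization,
# and it may be at most 7.
assert 0 <= info.cutoff <= 7

# PyHASH_MODULUS is a Mersenne prime (2**61 - 1 on 64-bit builds,
# 2**31 - 1 on 32-bit builds).
assert info.modulus in (2 ** 31 - 1, 2 ** 61 - 1)

# PyHASH_INF is the hash value of positive infinity.
assert hash(float("inf")) == info.inf
```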
The hash algorithm can be configured via the configure --with-hash-algorithm\noption.Added in version 3.4: Add\nPy_HASH_FNV\nandPy_HASH_SIPHASH24\n.Added in version 3.11: Add\nPy_HASH_SIPHASH13\n.\n-\nPy_HASH_CUTOFF\u00b6\nBuffers of length in range\n[1, Py_HASH_CUTOFF)\nare hashed using DJBX33A instead of the algorithm described byPy_HASH_ALGORITHM\n.A\nPy_HASH_CUTOFF\nof 0 disables the optimization.Py_HASH_CUTOFF\nmust be non-negative and less than or equal to 7.\n32-bit platforms should use a cutoff smaller than 64-bit platforms because it is easier to create colliding strings. A cutoff of 7 on 64-bit platforms and 5 on 32-bit platforms should provide a decent safety margin.\nThis corresponds to the\nsys.hash_info.cutoff\nconstant.Added in version 3.4.\n-\nPyHASH_MODULUS\u00b6\nThe Mersenne prime\nP = 2**n - 1\n, used for the numeric hash scheme.This corresponds to the\nsys.hash_info.modulus\nconstant.Added in version 3.13.\n-\nPyHASH_BITS\u00b6\nThe exponent\nn\nofP\ninPyHASH_MODULUS\n.Added in version 3.13.\n-\nPyHASH_MULTIPLIER\u00b6\nPrime multiplier used in string and various other hashes.\nAdded in version 3.13.\n-\nPyHASH_INF\u00b6\nThe hash value returned for a positive infinity.\nThis corresponds to the\nsys.hash_info.inf\nconstant.Added in version 3.13.\n-\nPyHASH_IMAG\u00b6\nThe multiplier used for the imaginary part of a complex number.\nThis corresponds to the\nsys.hash_info.imag\nconstant.Added in version 3.13.\n-\ntype PyHash_FuncDef\u00b6\nHash function definition used by\nPyHash_GetFuncDef()\n.-\nPy_hash_t (*const hash)(const void*, Py_ssize_t)\u00b6\nHash function.\n-\nconst char *name\u00b6\nHash function name (UTF-8 encoded string).\nThis corresponds to the\nsys.hash_info.algorithm\nconstant.\n-\nconst int hash_bits\u00b6\nInternal size of the hash value in bits.\nThis corresponds to the\nsys.hash_info.hash_bits\nconstant.\n-\nconst int seed_bits\u00b6\nSize of seed input in bits.\nThis corresponds to the\nsys.hash_info.seed_bits\nconstant.\nAdded in 
version 3.4.\n-\nPy_hash_t (*const hash)(const void*, Py_ssize_t)\u00b6\n-\nPyHash_FuncDef *PyHash_GetFuncDef(void)\u00b6\nGet the hash function definition.\nSee also\nPEP 456 \u201cSecure and interchangeable hash algorithm\u201d.\nAdded in version 3.4.\n-\nPy_hash_t Py_HashPointer(const void *ptr)\u00b6\nHash a pointer value: process the pointer value as an integer (cast it to\nuintptr_t\ninternally). The pointer is not dereferenced.The function cannot fail: it cannot return\n-1\n.Added in version 3.13.\n-\nPy_hash_t Py_HashBuffer(const void *ptr, Py_ssize_t len)\u00b6\nCompute and return the hash value of a buffer of len bytes starting at address ptr. The hash is guaranteed to match that of\nbytes\n,memoryview\n, and other built-in objects that implement the buffer protocol.Use this function to implement hashing for immutable objects whose\ntp_richcompare\nfunction compares to another object\u2019s buffer.len must be greater than or equal to\n0\n.This function always succeeds.\nAdded in version 3.14.\n-\nPy_hash_t PyObject_GenericHash(PyObject *obj)\u00b6\nGeneric hashing function that is meant to be put into a type object\u2019s\ntp_hash\nslot. Its result only depends on the object\u2019s identity.CPython implementation detail: In CPython, it is equivalent to\nPy_HashPointer()\n.Added in version 3.13.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 921} +{"url": "https://docs.python.org/3/c-api/stable.html", "title": "C API Stability", "content": "C API Stability\u00b6\nUnless documented otherwise, Python\u2019s C API is covered by the Backwards Compatibility Policy, PEP 387. Most changes to it are source-compatible (typically by only adding new API). 
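The guarantee documented for Py_HashBuffer() can be observed from Python: bytes and a memoryview over the same data must hash equally, while identity-based hashing (PyObject_GenericHash / the default object.__hash__) depends only on the object itself:

```python
data = b"hash me"

# bytes and a memoryview exposing the same buffer hash identically,
# which is the property Py_HashBuffer() is documented to preserve.
assert hash(data) == hash(memoryview(data))

# Identity-based hashing only depends on the object, not its contents.
class Plain:
    pass

a = Plain()
assert hash(a) == hash(a)      # stable for the object's lifetime
assert isinstance(hash(a), int)
```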
Changing existing API or removing API is only done after a deprecation period or to fix serious issues.\nCPython\u2019s Application Binary Interface (ABI) is forward- and backwards-compatible across a minor release (if these are compiled the same way; see Platform Considerations below). So, code compiled for Python 3.10.0 will work on 3.10.8 and vice versa, but will need to be compiled separately for 3.9.x and 3.11.x.\nThere are two tiers of C API with different stability expectations:\nUnstable API, may change in minor versions without a deprecation period. It is marked by the\nPyUnstable\nprefix in names.Limited API, is compatible across several minor releases. When\nPy_LIMITED_API\nis defined, only this subset is exposed fromPython.h\n.\nThese are discussed in more detail below.\nNames prefixed by an underscore, such as _Py_InternalState\n,\nare private API that can change without notice even in patch releases.\nIf you need to use this API, consider reaching out to\nCPython developers\nto discuss adding public API for your use case.\nUnstable C API\u00b6\nAny API named with the PyUnstable\nprefix exposes CPython implementation\ndetails, and may change in every minor release (e.g. from 3.9 to 3.10) without\nany deprecation warnings.\nHowever, it will not change in a bugfix release (e.g. from 3.10.0 to 3.10.1).\nIt is generally intended for specialized, low-level tools like debuggers.\nProjects that use this API are expected to follow CPython development and spend extra effort adjusting to changes.\nStable Application Binary Interface\u00b6\nFor simplicity, this document talks about extensions, but the Limited API and Stable ABI work the same way for all uses of the API \u2013 for example, embedding Python.\nLimited C API\u00b6\nPython 3.2 introduced the Limited API, a subset of Python\u2019s C API. Extensions that only use the Limited API can be compiled once and be loaded on multiple versions of Python. 
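The Py_LIMITED_API opt-in macro (described below) takes a PY_VERSION_HEX-style value such as 0x030A0000; a Python sketch of how such a minimum-version value is composed (the helper name is just illustrative):

```python
import sys

# PY_VERSION_HEX layout: major version in the top byte, minor version in
# the next byte; the remaining bytes are zero for an X.Y.0 floor.
def limited_api_floor(major, minor):
    return (major << 24) | (minor << 16)

assert limited_api_floor(3, 10) == 0x030A0000  # "Limited API of 3.10+"
assert limited_api_floor(3, 2) == 0x03020000   # equivalent to defining it to 3

# The running interpreter's own version uses the same layout:
assert sys.hexversion >> 24 == sys.version_info.major
```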
Contents of the Limited API are listed below.\n-\nPy_LIMITED_API\u00b6\nDefine this macro before including\nPython.h\nto opt in to only use the Limited API, and to select the Limited API version.Define\nPy_LIMITED_API\nto the value ofPY_VERSION_HEX\ncorresponding to the lowest Python version your extension supports. The extension will be ABI-compatible with all Python 3 releases from the specified one onward, and can use Limited API introduced up to that version.Rather than using the\nPY_VERSION_HEX\nmacro directly, hardcode a minimum minor version (e.g.0x030A0000\nfor Python 3.10) for stability when compiling with future Python versions.You can also define\nPy_LIMITED_API\nto3\n. This works the same as0x03020000\n(Python 3.2, the version that introduced Limited API).\nStable ABI\u00b6\nTo enable this, Python provides a Stable ABI: a set of symbols that will remain ABI-compatible across Python 3.x versions.\nNote\nThe Stable ABI prevents ABI issues, like linker errors due to missing symbols or data corruption due to changes in structure layouts or function signatures. However, other changes in Python can change the behavior of extensions. See Python\u2019s Backwards Compatibility Policy (PEP 387) for details.\nThe Stable ABI contains symbols exposed in the Limited API, but also other ones \u2013 for example, functions necessary to support older versions of the Limited API.\nOn Windows, extensions that use the Stable ABI should be linked against\npython3.dll\nrather than a version-specific library such as\npython39.dll\n.\nOn some platforms, Python will look for and load shared library files named\nwith the abi3\ntag (e.g. 
mymodule.abi3.so\n).\nIt does not check if such extensions conform to a Stable ABI.\nThe user (or their packaging tools) need to ensure that, for example,\nextensions built with the 3.10+ Limited API are not installed for lower\nversions of Python.\nAll functions in the Stable ABI are present as functions in Python\u2019s shared library, not solely as macros. This makes them usable from languages that don\u2019t use the C preprocessor.\nLimited API Scope and Performance\u00b6\nThe goal for the Limited API is to allow everything that is possible with the full C API, but possibly with a performance penalty.\nFor example, while PyList_GetItem()\nis available, its \u201cunsafe\u201d macro\nvariant PyList_GET_ITEM()\nis not.\nThe macro can be faster because it can rely on version-specific implementation\ndetails of the list object.\nWithout Py_LIMITED_API\ndefined, some C API functions are inlined or\nreplaced by macros.\nDefining Py_LIMITED_API\ndisables this inlining, allowing stability as\nPython\u2019s data structures are improved, but possibly reducing performance.\nBy leaving out the Py_LIMITED_API\ndefinition, it is possible to compile\na Limited API extension with a version-specific ABI. This can improve\nperformance for that Python version, but will limit compatibility.\nCompiling with Py_LIMITED_API\nwill then yield an extension that can be\ndistributed where a version-specific one is not available \u2013 for example,\nfor prereleases of an upcoming Python version.\nLimited API Caveats\u00b6\nNote that compiling with Py_LIMITED_API\nis not a complete guarantee that\ncode conforms to the Limited API or the Stable ABI. Py_LIMITED_API\nonly covers definitions, but an API also\nincludes other issues, such as expected semantics.\nOne issue that Py_LIMITED_API\ndoes not guard against is calling a function\nwith arguments that are invalid in a lower Python version.\nFor example, consider a function that starts accepting NULL\nfor an\nargument. 
In Python 3.9, NULL\nnow selects a default behavior, but in\nPython 3.8, the argument will be used directly, causing a NULL\ndereference\nand crash. A similar argument works for fields of structs.\nAnother issue is that some struct fields are currently not hidden when\nPy_LIMITED_API\nis defined, even though they\u2019re part of the Limited API.\nFor these reasons, we recommend testing an extension with all minor Python versions it supports, and preferably to build with the lowest such version.\nWe also recommend reviewing documentation of all used API to check\nif it is explicitly part of the Limited API. Even with Py_LIMITED_API\ndefined, a few private declarations are exposed for technical reasons (or\neven unintentionally, as bugs).\nAlso note that the Limited API is not necessarily stable: compiling with\nPy_LIMITED_API\nwith Python 3.8 means that the extension will\nrun with Python 3.12, but it will not necessarily compile with Python 3.12.\nIn particular, parts of the Limited API may be deprecated and removed,\nprovided that the Stable ABI stays stable.\nPlatform Considerations\u00b6\nABI stability depends not only on Python, but also on the compiler used, lower-level libraries and compiler options. For the purposes of the Stable ABI, these details define a \u201cplatform\u201d. 
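A Python-level way to see which extension filename tags a given interpreter will load; on platforms that support the Stable ABI, an abi3-style suffix appears in importlib's list alongside the version-specific one:

```python
import importlib.machinery

suffixes = importlib.machinery.EXTENSION_SUFFIXES

# Every entry is a filename suffix such as ".so", ".pyd", or a tagged
# variant (e.g. a ".cpython-313-...so" or ".abi3.so" style name).
assert suffixes
assert all(isinstance(s, str) and s.startswith(".") for s in suffixes)
```

As noted above, matching the abi3 filename tag is purely a naming convention; the loader does not verify Stable ABI conformance.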
They usually depend on the OS type and processor architecture.\nIt is the responsibility of each particular distributor of Python\nto ensure that all Python versions on a particular platform are built\nin a way that does not break the Stable ABI.\nThis is the case with Windows and macOS releases from python.org\nand many\nthird-party distributors.\nContents of Limited API\u00b6\nCurrently, the Limited API includes the following items:\nPyErr_Display()\nPyModuleDef_Base\nPyModuleDef_Type\nPyUnicode_AsDecodedObject()\nPyUnicode_AsDecodedUnicode()\nPyUnicode_AsEncodedObject()\nPyUnicode_AsEncodedUnicode()\nPyVarObject.ob_base\nPyWeakReference\nPy_FileSystemDefaultEncodeErrors\nPy_FileSystemDefaultEncoding\nPy_HasFileSystemDefaultEncoding\nPy_UTF8Mode\nPy_intptr_t\nPy_uintptr_t\nssizessizeargfunc\nssizessizeobjargproc\nsymtable", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1912}
+{"url": "https://docs.python.org/3/c-api/long.html", "title": "Integer Objects", "content": "Integer Objects\u00b6\nAll integers are implemented as \u201clong\u201d integer objects of arbitrary size.\nOn error, most PyLong_As*\nAPIs return (return type)-1\nwhich cannot be\ndistinguished from a number. Use PyErr_Occurred()\nto disambiguate.\n-\ntype PyLongObject\u00b6\n- Part of the Limited API (as an opaque struct).\nThis subtype of\nPyObject\nrepresents a Python integer object.\n-\nPyTypeObject PyLong_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python integer type. This is the same object asint\nin the Python layer.\n-\nint PyLong_Check(PyObject *p)\u00b6\nReturn true if its argument is a\nPyLongObject\nor a subtype ofPyLongObject\n. This function always succeeds.\n-\nint PyLong_CheckExact(PyObject *p)\u00b6\nReturn true if its argument is a\nPyLongObject\n, but not a subtype ofPyLongObject\n. This function always succeeds.\n-\nPyObject *PyLong_FromLong(long v)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturn a new\nPyLongObject\nobject from v, orNULL\non failure.CPython implementation detail: CPython keeps an array of integer objects for all integers between\n-5\nand256\n. When you create an int in that range you actually just get back a reference to the existing object.\n-\nPyObject *PyLong_FromUnsignedLong(unsigned long v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nPyLongObject\nobject from a C unsigned long, orNULL\non failure.\n-\nPyObject *PyLong_FromSsize_t(Py_ssize_t v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nPyLongObject\nobject from a CPy_ssize_t\n, orNULL\non failure.\n-\nPyObject *PyLong_FromSize_t(size_t v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nPyLongObject\nobject from a Csize_t\n, orNULL\non failure.\n-\nPyObject *PyLong_FromLongLong(long long v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nPyLongObject\nobject from a C long long, orNULL\non failure.\n-\nPyObject *PyLong_FromInt32(int32_t value)\u00b6\n-\nPyObject *PyLong_FromInt64(int64_t value)\u00b6\n- Part of the Stable ABI since version 3.14.\nReturn a new\nPyLongObject\nobject from a signed C int32_t or int64_t, orNULL\nwith an exception set on failure.Added in version 3.14.\n-\nPyObject *PyLong_FromUnsignedLongLong(unsigned long long v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nPyLongObject\nobject from a C unsigned long long, orNULL\non failure.\n-\nPyObject *PyLong_FromUInt32(uint32_t value)\u00b6\n-\nPyObject *PyLong_FromUInt64(uint64_t value)\u00b6\n- Part of the Stable ABI since version 3.14.\nReturn a new\nPyLongObject\nobject from an unsigned C uint32_t or uint64_t, orNULL\nwith an exception set on failure.Added in version 3.14.\n-\nPyObject *PyLong_FromDouble(double v)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturn a new\nPyLongObject\nobject from the integer part of v, orNULL\non failure.\n-\nPyObject *PyLong_FromString(const char *str, char **pend, int base)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nPyLongObject\nbased on the string value in str, which is interpreted according to the radix in base, orNULL\non failure. If pend is non-NULL\n, *pend will point to the end of str on success or to the first character that could not be processed on error. If base is0\n, str is interpreted using the Integer literals definition; in this case, leading zeros in a non-zero decimal number raises aValueError\n. If base is not0\n, it must be between2\nand36\n, inclusive. Leading and trailing whitespace and single underscores after a base specifier and between digits are ignored. If there are no digits or str is not NULL-terminated following the digits and trailing whitespace,ValueError\nwill be raised.See also\nPyLong_AsNativeBytes()\nandPyLong_FromNativeBytes()\nfunctions can be used to convert aPyLongObject\nto/from an array of bytes in base256\n.\n-\nPyObject *PyLong_FromUnicodeObject(PyObject *u, int base)\u00b6\n- Return value: New reference.\nConvert a sequence of Unicode digits in the string u to a Python integer value.\nAdded in version 3.3.\n-\nPyObject *PyLong_FromVoidPtr(void *p)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Python integer from the pointer p. The pointer value can be retrieved from the resulting value using\nPyLong_AsVoidPtr()\n.\n-\nPyObject *PyLong_FromNativeBytes(const void *buffer, size_t n_bytes, int flags)\u00b6\n- Part of the Stable ABI since version 3.14.\nCreate a Python integer from the value contained in the first n_bytes of buffer, interpreted as a two\u2019s-complement signed number.\nflags are as for\nPyLong_AsNativeBytes()\n. Passing-1\nwill select the native endian that CPython was compiled with and assume that the most-significant bit is a sign bit. 
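int.from_bytes() is the Python-level counterpart of PyLong_FromNativeBytes() and PyLong_FromUnsignedNativeBytes(); a sketch of the signed-versus-unsigned distinction the flags control:

```python
# Two's-complement interpretation of the same raw buffer, mirroring the
# signed default versus the unsigned (Py_ASNATIVEBYTES_UNSIGNED_BUFFER)
# reading of the same bytes.
buf = (255).to_bytes(1, "little")  # the single byte 0xFF
assert buf == b"\xff"

assert int.from_bytes(buf, "little", signed=True) == -1    # sign bit honored
assert int.from_bytes(buf, "little", signed=False) == 255  # unsigned reading
```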
PassingPy_ASNATIVEBYTES_UNSIGNED_BUFFER\nwill produce the same result as callingPyLong_FromUnsignedNativeBytes()\n. Other flags are ignored.Added in version 3.13.\n-\nPyObject *PyLong_FromUnsignedNativeBytes(const void *buffer, size_t n_bytes, int flags)\u00b6\n- Part of the Stable ABI since version 3.14.\nCreate a Python integer from the value contained in the first n_bytes of buffer, interpreted as an unsigned number.\nflags are as for\nPyLong_AsNativeBytes()\n. Passing-1\nwill select the native endian that CPython was compiled with and assume that the most-significant bit is not a sign bit. Flags other than endian are ignored.Added in version 3.13.\n-\nPyLong_FromPid(pid)\u00b6\nMacro for creating a Python integer from a process identifier.\nThis can be defined as an alias to\nPyLong_FromLong()\norPyLong_FromLongLong()\n, depending on the size of the system\u2019s PID type.Added in version 3.2.\n-\nlong PyLong_AsLong(PyObject *obj)\u00b6\n- Part of the Stable ABI.\nReturn a C long representation of obj. If obj is not an instance of\nPyLongObject\n, first call its__index__()\nmethod (if present) to convert it to aPyLongObject\n.Raise\nOverflowError\nif the value of obj is out of range for a long.Returns\n-1\non error. UsePyErr_Occurred()\nto disambiguate.Changed in version 3.8: Use\n__index__()\nif available.Changed in version 3.10: This function will no longer use\n__int__()\n.-\nlong PyLong_AS_LONG(PyObject *obj)\u00b6\nA soft deprecated alias. Exactly equivalent to the preferred\nPyLong_AsLong\n. 
In particular, it can fail withOverflowError\nor another exception.Deprecated since version 3.14: The function is soft deprecated.\n-\nlong PyLong_AS_LONG(PyObject *obj)\u00b6\n-\nint PyLong_AsInt(PyObject *obj)\u00b6\n- Part of the Stable ABI since version 3.13.\nSimilar to\nPyLong_AsLong()\n, but store the result in a C int instead of a C long.Added in version 3.13.\n-\nlong PyLong_AsLongAndOverflow(PyObject *obj, int *overflow)\u00b6\n- Part of the Stable ABI.\nReturn a C long representation of obj. If obj is not an instance of\nPyLongObject\n, first call its__index__()\nmethod (if present) to convert it to aPyLongObject\n.If the value of obj is greater than\nLONG_MAX\nor less thanLONG_MIN\n, set *overflow to1\nor-1\n, respectively, and return-1\n; otherwise, set *overflow to0\n. If any other exception occurs set *overflow to0\nand return-1\nas usual.Returns\n-1\non error. UsePyErr_Occurred()\nto disambiguate.Changed in version 3.8: Use\n__index__()\nif available.Changed in version 3.10: This function will no longer use\n__int__()\n.\n-\nlong long PyLong_AsLongLong(PyObject *obj)\u00b6\n- Part of the Stable ABI.\nReturn a C long long representation of obj. If obj is not an instance of\nPyLongObject\n, first call its__index__()\nmethod (if present) to convert it to aPyLongObject\n.Raise\nOverflowError\nif the value of obj is out of range for a long long.Returns\n-1\non error. UsePyErr_Occurred()\nto disambiguate.Changed in version 3.8: Use\n__index__()\nif available.Changed in version 3.10: This function will no longer use\n__int__()\n.\n-\nlong long PyLong_AsLongLongAndOverflow(PyObject *obj, int *overflow)\u00b6\n- Part of the Stable ABI.\nReturn a C long long representation of obj. 
If obj is not an instance of\nPyLongObject\n, first call its__index__()\nmethod (if present) to convert it to aPyLongObject\n.If the value of obj is greater than\nLLONG_MAX\nor less thanLLONG_MIN\n, set *overflow to1\nor-1\n, respectively, and return-1\n; otherwise, set *overflow to0\n. If any other exception occurs, set *overflow to0\nand return-1\nas usual.Returns\n-1\non error. UsePyErr_Occurred()\nto disambiguate.Added in version 3.2.\nChanged in version 3.8: Use\n__index__()\nif available.Changed in version 3.10: This function will no longer use\n__int__()\n.\n-\nPy_ssize_t PyLong_AsSsize_t(PyObject *pylong)\u00b6\n- Part of the Stable ABI.\nReturn a C\nPy_ssize_t\nrepresentation of pylong. pylong must be an instance ofPyLongObject\n.Raise\nOverflowError\nif the value of pylong is out of range for aPy_ssize_t\n.Returns\n-1\non error. UsePyErr_Occurred()\nto disambiguate.\n-\nunsigned long PyLong_AsUnsignedLong(PyObject *pylong)\u00b6\n- Part of the Stable ABI.\nReturn a C unsigned long representation of pylong. pylong must be an instance of\nPyLongObject\n.Raise\nOverflowError\nif the value of pylong is out of range for an unsigned long.Returns\n(unsigned long)-1\non error. UsePyErr_Occurred()\nto disambiguate.\n-\nsize_t PyLong_AsSize_t(PyObject *pylong)\u00b6\n- Part of the Stable ABI.\nReturn a C\nsize_t\nrepresentation of pylong. pylong must be an instance ofPyLongObject\n.Raise\nOverflowError\nif the value of pylong is out of range for asize_t\n.Returns\n(size_t)-1\non error. UsePyErr_Occurred()\nto disambiguate.\n-\nunsigned long long PyLong_AsUnsignedLongLong(PyObject *pylong)\u00b6\n- Part of the Stable ABI.\nReturn a C unsigned long long representation of pylong. pylong must be an instance of\nPyLongObject\n.Raise\nOverflowError\nif the value of pylong is out of range for an unsigned long long.Returns\n(unsigned long long)-1\non error. 
UsePyErr_Occurred()\nto disambiguate.Changed in version 3.1: A negative pylong now raises\nOverflowError\n, notTypeError\n.\n-\nunsigned long PyLong_AsUnsignedLongMask(PyObject *obj)\u00b6\n- Part of the Stable ABI.\nReturn a C unsigned long representation of obj. If obj is not an instance of\nPyLongObject\n, first call its__index__()\nmethod (if present) to convert it to aPyLongObject\n.If the value of obj is out of range for an unsigned long, return the reduction of that value modulo\nULONG_MAX + 1\n.Returns\n(unsigned long)-1\non error. UsePyErr_Occurred()\nto disambiguate.Changed in version 3.8: Use\n__index__()\nif available.Changed in version 3.10: This function will no longer use\n__int__()\n.\n-\nunsigned long long PyLong_AsUnsignedLongLongMask(PyObject *obj)\u00b6\n- Part of the Stable ABI.\nReturn a C unsigned long long representation of obj. If obj is not an instance of\nPyLongObject\n, first call its__index__()\nmethod (if present) to convert it to aPyLongObject\n.If the value of obj is out of range for an unsigned long long, return the reduction of that value modulo\nULLONG_MAX + 1\n.Returns\n(unsigned long long)-1\non error. UsePyErr_Occurred()\nto disambiguate.Changed in version 3.8: Use\n__index__()\nif available.Changed in version 3.10: This function will no longer use\n__int__()\n.\n-\nint PyLong_AsInt32(PyObject *obj, int32_t *value)\u00b6\n-\nint PyLong_AsInt64(PyObject *obj, int64_t *value)\u00b6\n- Part of the Stable ABI since version 3.14.\nSet *value to a signed C int32_t or int64_t representation of obj.\nIf obj is not an instance of\nPyLongObject\n, first call its__index__()\nmethod (if present) to convert it to aPyLongObject\n.If the obj value is out of range, raise an\nOverflowError\n.Set *value and return\n0\non success. 
Set an exception and return-1\non error.value must not be\nNULL\n.Added in version 3.14.\n-\nint PyLong_AsUInt32(PyObject *obj, uint32_t *value)\u00b6\n-\nint PyLong_AsUInt64(PyObject *obj, uint64_t *value)\u00b6\n- Part of the Stable ABI since version 3.14.\nSet *value to an unsigned C uint32_t or uint64_t representation of obj.\nIf obj is not an instance of\nPyLongObject\n, first call its__index__()\nmethod (if present) to convert it to aPyLongObject\n.If obj is negative, raise a\nValueError\n.If the obj value is out of range, raise an\nOverflowError\n.\nSet *value and return\n0\non success. Set an exception and return-1\non error.value must not be\nNULL\n.Added in version 3.14.\n-\ndouble PyLong_AsDouble(PyObject *pylong)\u00b6\n- Part of the Stable ABI.\nReturn a C double representation of pylong. pylong must be an instance of\nPyLongObject\n.Raise\nOverflowError\nif the value of pylong is out of range for a double.Returns\n-1.0\non error. UsePyErr_Occurred()\nto disambiguate.\n-\nvoid *PyLong_AsVoidPtr(PyObject *pylong)\u00b6\n- Part of the Stable ABI.\nConvert a Python integer pylong to a C void pointer. If pylong cannot be converted, an\nOverflowError\nwill be raised. This is only assured to produce a usable void pointer for values created withPyLong_FromVoidPtr()\n.Returns\nNULL\non error. UsePyErr_Occurred()\nto disambiguate.\n-\nPy_ssize_t PyLong_AsNativeBytes(PyObject *pylong, void *buffer, Py_ssize_t n_bytes, int flags)\u00b6\n- Part of the Stable ABI since version 3.14.\nCopy the Python integer value pylong to a native buffer of size n_bytes. The flags can be set to\n-1\nto behave similarly to a C cast, or to values documented below to control the behavior.Returns\n-1\nwith an exception raised on error. This may happen if pylong cannot be interpreted as an integer, or if pylong was negative and thePy_ASNATIVEBYTES_REJECT_NEGATIVE\nflag was set.Otherwise, returns the number of bytes required to store the value. 
If this is equal to or less than n_bytes, the entire value was copied. All n_bytes of the buffer are written: remaining bytes filled by copies of the sign bit.\nIf the returned value is greater than n_bytes, the value was truncated: as many of the lowest bits of the value as could fit are written, and the higher bits are ignored. This matches the typical behavior of a C-style downcast.\nNote\nOverflow is not considered an error. If the returned value is larger than n_bytes, most significant bits were discarded.\n0\nwill never be returned.Values are always copied as two\u2019s-complement.\nUsage example:\nint32_t value; Py_ssize_t bytes = PyLong_AsNativeBytes(pylong, &value, sizeof(value), -1); if (bytes < 0) { // Failed. A Python exception was set with the reason. return NULL; } else if (bytes <= (Py_ssize_t)sizeof(value)) { // Success! } else { // Overflow occurred, but 'value' contains the truncated // lowest bits of pylong. }\nPassing zero to n_bytes will return the size of a buffer that would be large enough to hold the value. This may be larger than technically necessary, but not unreasonably so. If n_bytes=0, buffer may be\nNULL\n.Note\nPassing n_bytes=0 to this function is not an accurate way to determine the bit length of the value.\nTo get at the entire Python value of an unknown size, the function can be called twice: first to determine the buffer size, then to fill it:\n// Ask how much space we need. Py_ssize_t expected = PyLong_AsNativeBytes(pylong, NULL, 0, -1); if (expected < 0) { // Failed. A Python exception was set with the reason. return NULL; } assert(expected != 0); // Impossible per the API definition. uint8_t *bignum = malloc(expected); if (!bignum) { PyErr_SetString(PyExc_MemoryError, \"bignum malloc failed.\"); return NULL; } // Safely get the entire value. Py_ssize_t bytes = PyLong_AsNativeBytes(pylong, bignum, expected, -1); if (bytes < 0) { // Exception has been set. 
free(bignum); return NULL; } else if (bytes > expected) { // This should not be possible. PyErr_SetString(PyExc_RuntimeError, \"Unexpected bignum truncation after a size check.\"); free(bignum); return NULL; } // The expected success given the above pre-check. // ... use bignum ... free(bignum);\nflags is either\n-1\n(Py_ASNATIVEBYTES_DEFAULTS\n) to select defaults that behave most like a C cast, or a combination of the other flags in the table below. Note that-1\ncannot be combined with other flags.Currently,\n-1\ncorresponds toPy_ASNATIVEBYTES_NATIVE_ENDIAN | Py_ASNATIVEBYTES_UNSIGNED_BUFFER\n.Flag\nValue\n-\nPy_ASNATIVEBYTES_DEFAULTS\u00b6\n- Part of the Stable ABI since version 3.14.\n-1\n-\nPy_ASNATIVEBYTES_BIG_ENDIAN\u00b6\n- Part of the Stable ABI since version 3.14.\n0\n-\nPy_ASNATIVEBYTES_LITTLE_ENDIAN\u00b6\n- Part of the Stable ABI since version 3.14.\n1\n-\nPy_ASNATIVEBYTES_NATIVE_ENDIAN\u00b6\n- Part of the Stable ABI since version 3.14.\n3\n-\nPy_ASNATIVEBYTES_UNSIGNED_BUFFER\u00b6\n- Part of the Stable ABI since version 3.14.\n4\n-\nPy_ASNATIVEBYTES_REJECT_NEGATIVE\u00b6\n- Part of the Stable ABI since version 3.14.\n8\n-\nPy_ASNATIVEBYTES_ALLOW_INDEX\u00b6\n- Part of the Stable ABI since version 3.14.\n16\nSpecifying\nPy_ASNATIVEBYTES_NATIVE_ENDIAN\nwill override any other endian flags. Passing2\nis reserved.By default, sufficient buffer will be requested to include a sign bit. For example, when converting 128 with n_bytes=1, the function will return 2 (or more) in order to store a zero sign bit.\nIf\nPy_ASNATIVEBYTES_UNSIGNED_BUFFER\nis specified, a zero sign bit will be omitted from size calculations. This allows, for example, 128 to fit in a single-byte buffer. If the destination buffer is later treated as signed, a positive input value may become negative. 
Note that the flag does not affect handling of negative values: for those, space for a sign bit is always requested. Specifying\nPy_ASNATIVEBYTES_REJECT_NEGATIVE\ncauses an exception to be set if pylong is negative. Without this flag, negative values will be copied provided there is enough space for at least one sign bit, regardless of whether\nPy_ASNATIVEBYTES_UNSIGNED_BUFFER\nwas specified. If\nPy_ASNATIVEBYTES_ALLOW_INDEX\nis specified and a non-integer value is passed, its __index__() method will be called first. This may result in Python code executing and other threads being allowed to run, which could cause changes to other objects or values in use. When flags is\n-1\n, this option is not set, and non-integer values will raise TypeError.\nNote\nWith the default flags (\n-1\n, or UNSIGNED_BUFFER without REJECT_NEGATIVE), multiple Python integers can map to a single value without overflow. For example, both 255 and -1 fit a single-byte buffer and set all its bits. This matches typical C cast behavior. Added in version 3.13.\n-\nPy_ASNATIVEBYTES_DEFAULTS\u00b6\n-\nPyLong_AsPid(pid)\u00b6\nMacro for converting a Python integer into a process identifier.\nThis can be defined as an alias to\nPyLong_AsLong()\n,\nPyLong_AsLongLong()\n, or\nPyLong_AsInt()\n, depending on the size of the system\u2019s PID type. Added in version 3.2.\n-\nint PyLong_GetSign(PyObject *obj, int *sign)\u00b6\nGet the sign of the integer object obj.\nOn success, set *sign to the integer sign (0, -1 or +1 for a zero, negative or positive integer, respectively) and return 0.\nOn failure, return -1 with an exception set. This function always succeeds if obj is a\nPyLongObject\nor its subtype. Added in version 3.14.\n-\nint PyLong_IsPositive(PyObject *obj)\u00b6\nCheck if the integer object obj is positive (\nobj > 0\n). If obj is an instance of\nPyLongObject\nor its subtype, return 1 when it\u2019s positive and 0 otherwise. 
Else set an exception and return -1. Added in version 3.14.\n-\nint PyLong_IsNegative(PyObject *obj)\u00b6\nCheck if the integer object obj is negative (\nobj < 0\n). If obj is an instance of\nPyLongObject\nor its subtype, return 1 when it\u2019s negative and 0 otherwise. Else set an exception and return -1. Added in version 3.14.\n-\nint PyLong_IsZero(PyObject *obj)\u00b6\nCheck if the integer object obj is zero.\nIf obj is an instance of\nPyLongObject\nor its subtype, return 1 when it\u2019s zero and 0 otherwise. Else set an exception and return -1. Added in version 3.14.\n-\nPyObject *PyLong_GetInfo(void)\u00b6\n- Part of the Stable ABI.\nOn success, return a read-only named tuple that holds information about Python\u2019s internal representation of integers. See\nsys.int_info\nfor a description of the individual fields. On failure, return\nNULL\nwith an exception set. Added in version 3.1.\n-\nint PyUnstable_Long_IsCompact(const PyLongObject *op)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nReturn 1 if op is compact, 0 otherwise.\nThis function makes it possible for performance-critical code to implement a \u201cfast path\u201d for small integers. For compact values use\nPyUnstable_Long_CompactValue()\n; for others fall back to a PyLong_As* function or PyLong_AsNativeBytes(). The speedup is expected to be negligible for most users.\nExactly what values are considered compact is an implementation detail and is subject to change.\nAdded in version 3.12.\n-\nPy_ssize_t PyUnstable_Long_CompactValue(const PyLongObject *op)\u00b6\n- This is Unstable API. 
It may change without warning in minor releases.\nIf op is compact, as determined by\nPyUnstable_Long_IsCompact()\n, return its value.Otherwise, the return value is undefined.\nAdded in version 3.12.\nExport API\u00b6\nAdded in version 3.14.\n-\nstruct PyLongLayout\u00b6\nLayout of an array of \u201cdigits\u201d (\u201climbs\u201d in the GMP terminology), used to represent absolute value for arbitrary precision integers.\nUse\nPyLong_GetNativeLayout()\nto get the native layout of Pythonint\nobjects, used internally for integers with \u201cbig enough\u201d absolute value.See also\nsys.int_info\nwhich exposes similar information in Python.-\nuint8_t bits_per_digit\u00b6\nBits per digit. For example, a 15 bit digit means that bits 0-14 contain meaningful information.\n-\nuint8_t digit_size\u00b6\nDigit size in bytes. For example, a 15 bit digit will require at least 2 bytes.\n-\nint8_t digits_order\u00b6\nDigits order:\n1\nfor most significant digit first-1\nfor least significant digit first\n-\nint8_t digit_endianness\u00b6\nDigit endianness:\n1\nfor most significant byte first (big endian)-1\nfor least significant byte first (little endian)\n-\nuint8_t bits_per_digit\u00b6\n-\nconst PyLongLayout *PyLong_GetNativeLayout(void)\u00b6\nGet the native layout of Python\nint\nobjects.See the\nPyLongLayout\nstructure.The function must not be called before Python initialization nor after Python finalization. The returned layout is valid until Python is finalized. The layout is the same for all Python sub-interpreters in a process, and so it can be cached.\n-\nstruct PyLongExport\u00b6\nExport of a Python\nint\nobject.There are two cases:\n-\nPy_ssize_t ndigits\u00b6\nNumber of digits in\ndigits\narray. Only valid ifdigits\nis notNULL\n.\n-\nconst void *digits\u00b6\nRead-only array of unsigned digits. 
Can be\nNULL\n.\n-\nPy_ssize_t ndigits\u00b6\n-\nint PyLong_Export(PyObject *obj, PyLongExport *export_long)\u00b6\nExport a Python\nint\nobject. export_long must point to a\nPyLongExport\nstructure allocated by the caller. It must not be\nNULL\n. On success, fill in *export_long and return\n0\n. On error, set an exception and return -1.\nPyLong_FreeExport()\nmust be called when the export is no longer needed. CPython implementation detail: This function always succeeds if obj is a Python\nint\nobject or a subclass.\n-\nvoid PyLong_FreeExport(PyLongExport *export_long)\u00b6\nRelease the export export_long created by\nPyLong_Export()\n. CPython implementation detail: Calling\nPyLong_FreeExport()\nis optional if export_long->digits is\nNULL\n.\nPyLongWriter API\u00b6\nThe PyLongWriter\nAPI can be used to import an integer.\nAdded in version 3.14.\n-\nstruct PyLongWriter\u00b6\nA Python\nint\nwriter instance. The instance must be destroyed by\nPyLongWriter_Finish()\nor\nPyLongWriter_Discard()\n.\n-\nPyLongWriter *PyLongWriter_Create(int negative, Py_ssize_t ndigits, void **digits)\u00b6\nCreate a\nPyLongWriter\n. On success, allocate *digits and return a writer. On error, set an exception and return\nNULL\n. negative is\n1\nif the number is negative, or\n0\notherwise. ndigits is the number of digits in the digits array. It must be greater than 0.\ndigits must not be NULL.\nAfter a successful call to this function, the caller should fill in the array of digits digits and then call\nPyLongWriter_Finish()\nto get a Python\nint\n. The layout of digits is described by\nPyLong_GetNativeLayout()\n. Digits must be in the range [\n0\n;\n(1 << bits_per_digit) - 1\n] (where the\nbits_per_digit\nis the number of bits per digit). 
Any unused most significant digits must be set to0\n.Alternately, call\nPyLongWriter_Discard()\nto destroy the writer instance without creating anint\nobject.\n-\nPyObject *PyLongWriter_Finish(PyLongWriter *writer)\u00b6\n- Return value: New reference.\nFinish a\nPyLongWriter\ncreated byPyLongWriter_Create()\n.On success, return a Python\nint\nobject. On error, set an exception and returnNULL\n.The function takes care of normalizing the digits and converts the object to a compact integer if needed.\nThe writer instance and the digits array are invalid after the call.\n-\nvoid PyLongWriter_Discard(PyLongWriter *writer)\u00b6\nDiscard a\nPyLongWriter\ncreated byPyLongWriter_Create()\n.If writer is\nNULL\n, no operation is performed.The writer instance and the digits array are invalid after the call.\nDeprecated API\u00b6\nThese macros are soft deprecated. They describe parameters\nof the internal representation of PyLongObject\ninstances.\nUse PyLong_GetNativeLayout()\ninstead, along with PyLong_Export()\nto read integer data or PyLongWriter\nto write it.\nThese currently use the same layout, but are designed to continue working correctly\neven if CPython\u2019s internal integer representation changes.\n-\nPyLong_SHIFT\u00b6\nThis is equivalent to\nbits_per_digit\nin the output ofPyLong_GetNativeLayout()\n.\n-\nPyLong_BASE\u00b6\nThis is currently equivalent to 1 << PyLong_SHIFT.\n-\nPyLong_MASK\u00b6\nThis is currently equivalent to (1 << PyLong_SHIFT) - 1", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 6201} +{"url": "https://docs.python.org/3/c-api/type.html", "title": "Type Objects", "content": "Type Objects\u00b6\n-\ntype PyTypeObject\u00b6\n- Part of the Limited API (as an opaque struct).\nThe C structure of the objects used to describe built-in types.\n-\nPyTypeObject PyType_Type\u00b6\n- Part of the Stable ABI.\nThis is the type object for type objects; it is the same object as\ntype\nin the Python layer.\n-\nint 
PyType_Check(PyObject *o)\u00b6\nReturn non-zero if the object o is a type object, including instances of types derived from the standard type object. Return 0 in all other cases. This function always succeeds.\n-\nint PyType_CheckExact(PyObject *o)\u00b6\nReturn non-zero if the object o is a type object, but not a subtype of the standard type object. Return 0 in all other cases. This function always succeeds.\n-\nunsigned int PyType_ClearCache()\u00b6\n- Part of the Stable ABI.\nClear the internal lookup cache. Return the current version tag.\n-\nunsigned long PyType_GetFlags(PyTypeObject *type)\u00b6\n- Part of the Stable ABI.\nReturn the\ntp_flags\nmember of type. This function is primarily meant for use withPy_LIMITED_API\n; the individual flag bits are guaranteed to be stable across Python releases, but access totp_flags\nitself is not part of the limited API.Added in version 3.2.\nChanged in version 3.4: The return type is now\nunsigned long\nrather thanlong\n.\n-\nPyObject *PyType_GetDict(PyTypeObject *type)\u00b6\nReturn the type object\u2019s internal namespace, which is otherwise only exposed via a read-only proxy (\ncls.__dict__\n). This is a replacement for accessingtp_dict\ndirectly. The returned dictionary must be treated as read-only.This function is meant for specific embedding and language-binding cases, where direct access to the dict is necessary and indirect access (e.g. via the proxy or\nPyObject_GetAttr()\n) isn\u2019t adequate.Extension modules should continue to use\ntp_dict\n, directly or indirectly, when setting up their own types.Added in version 3.12.\n-\nvoid PyType_Modified(PyTypeObject *type)\u00b6\n- Part of the Stable ABI.\nInvalidate the internal lookup cache for the type and all of its subtypes. This function must be called after any manual modification of the attributes or base classes of the type.\n-\nint PyType_AddWatcher(PyType_WatchCallback callback)\u00b6\nRegister callback as a type watcher. 
Return a non-negative integer ID which must be passed to future calls to\nPyType_Watch()\n. In case of error (e.g. no more watcher IDs available), return-1\nand set an exception.In free-threaded builds,\nPyType_AddWatcher()\nis not thread-safe, so it must be called at start up (before spawning the first thread).Added in version 3.12.\n-\nint PyType_ClearWatcher(int watcher_id)\u00b6\nClear watcher identified by watcher_id (previously returned from\nPyType_AddWatcher()\n). Return0\non success,-1\non error (e.g. if watcher_id was never registered.)An extension should never call\nPyType_ClearWatcher\nwith a watcher_id that was not returned to it by a previous call toPyType_AddWatcher()\n.Added in version 3.12.\n-\nint PyType_Watch(int watcher_id, PyObject *type)\u00b6\nMark type as watched. The callback granted watcher_id by\nPyType_AddWatcher()\nwill be called wheneverPyType_Modified()\nreports a change to type. (The callback may be called only once for a series of consecutive modifications to type, if_PyType_Lookup()\nis not called on type between the modifications; this is an implementation detail and subject to change.)An extension should never call\nPyType_Watch\nwith a watcher_id that was not returned to it by a previous call toPyType_AddWatcher()\n.Added in version 3.12.\n-\nint PyType_Unwatch(int watcher_id, PyObject *type)\u00b6\nMark type as not watched. This undoes a previous call to\nPyType_Watch()\n. type must not beNULL\n.An extension should never call this function with a watcher_id that was not returned to it by a previous call to\nPyType_AddWatcher()\n.On success, this function returns\n0\n. 
On failure, this function returns-1\nwith an exception set.Added in version 3.12.\n-\ntypedef int (*PyType_WatchCallback)(PyObject *type)\u00b6\nType of a type-watcher callback function.\nThe callback must not modify type or cause\nPyType_Modified()\nto be called on type or any type in its MRO; violating this rule could cause infinite recursion.Added in version 3.12.\n-\nint PyType_HasFeature(PyTypeObject *o, int feature)\u00b6\nReturn non-zero if the type object o sets the feature feature. Type features are denoted by single bit flags.\n-\nint PyType_FastSubclass(PyTypeObject *type, int flag)\u00b6\nReturn non-zero if the type object type sets the subclass flag flag. Subclass flags are denoted by\nPy_TPFLAGS_*_SUBCLASS\n. This function is used by many_Check\nfunctions for common types.See also\nPyObject_TypeCheck()\n, which is used as a slower alternative in_Check\nfunctions for types that don\u2019t come with subclass flags.\n-\nint PyType_IS_GC(PyTypeObject *o)\u00b6\nReturn true if the type object includes support for the cycle detector; this tests the type flag\nPy_TPFLAGS_HAVE_GC\n.\n-\nint PyType_IsSubtype(PyTypeObject *a, PyTypeObject *b)\u00b6\n- Part of the Stable ABI.\nReturn true if a is a subtype of b.\nThis function only checks for actual subtypes, which means that\n__subclasscheck__()\nis not called on b. CallPyObject_IsSubclass()\nto do the same check thatissubclass()\nwould do.\n-\nPyObject *PyType_GenericAlloc(PyTypeObject *type, Py_ssize_t nitems)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGeneric handler for the\ntp_alloc\nslot of a type object. 
Uses Python\u2019s default memory allocation mechanism to allocate memory for a new instance, zeros the memory, then initializes the memory as if by calling\nPyObject_Init()\nor\nPyObject_InitVar()\n. Do not call this directly to allocate memory for an object; call the type\u2019s\ntp_alloc\nslot instead. For types that support garbage collection (i.e., the\nPy_TPFLAGS_HAVE_GC\nflag is set), this function behaves like\nPyObject_GC_New\nor\nPyObject_GC_NewVar\n(except the memory is guaranteed to be zeroed before initialization), and should be paired with\nPyObject_GC_Del()\nin\ntp_free\n. Otherwise, it behaves like\nPyObject_New\nor\nPyObject_NewVar\n(except the memory is guaranteed to be zeroed before initialization) and should be paired with\nPyObject_Free()\nin\ntp_free\n.\n-\nPyObject *PyType_GenericNew(PyTypeObject *type, PyObject *args, PyObject *kwds)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGeneric handler for the\ntp_new\nslot of a type object. Creates a new instance using the type\u2019s\ntp_alloc\nslot and returns the resulting object.\n-\nint PyType_Ready(PyTypeObject *type)\u00b6\n- Part of the Stable ABI.\nFinalize a type object. This should be called on all type objects to finish their initialization. This function is responsible for adding inherited slots from a type\u2019s base class. Return\n0\non success, or return -1 and set an exception on error.\nNote\nIf some of the base classes implement the GC protocol and the provided type does not include\nPy_TPFLAGS_HAVE_GC\nin its flags, then the GC protocol will be automatically implemented from its parents. Conversely, if the type being created does include\nPy_TPFLAGS_HAVE_GC\nin its flags, then it must implement the GC protocol itself by at least implementing the\ntp_traverse\nhandler.\n-\nPyObject *PyType_GetName(PyTypeObject *type)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.11.\nReturn the type\u2019s name. 
Equivalent to getting the type\u2019s\n__name__\nattribute.Added in version 3.11.\n-\nPyObject *PyType_GetQualName(PyTypeObject *type)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.11.\nReturn the type\u2019s qualified name. Equivalent to getting the type\u2019s\n__qualname__\nattribute.Added in version 3.11.\n-\nPyObject *PyType_GetFullyQualifiedName(PyTypeObject *type)\u00b6\n- Part of the Stable ABI since version 3.13.\nReturn the type\u2019s fully qualified name. Equivalent to\nf\"{type.__module__}.{type.__qualname__}\"\n, ortype.__qualname__\niftype.__module__\nis not a string or is equal to\"builtins\"\n.Added in version 3.13.\n-\nPyObject *PyType_GetModuleName(PyTypeObject *type)\u00b6\n- Part of the Stable ABI since version 3.13.\nReturn the type\u2019s module name. Equivalent to getting the\ntype.__module__\nattribute.Added in version 3.13.\n-\nvoid *PyType_GetSlot(PyTypeObject *type, int slot)\u00b6\n- Part of the Stable ABI since version 3.4.\nReturn the function pointer stored in the given slot. If the result is\nNULL\n, this indicates that either the slot isNULL\n, or that the function was called with invalid parameters. Callers will typically cast the result pointer into the appropriate function type.See\nPyType_Slot.slot\nfor possible values of the slot argument.Added in version 3.4.\nChanged in version 3.10:\nPyType_GetSlot()\ncan now accept all types. Previously, it was limited to heap types.\n-\nPyObject *PyType_GetModule(PyTypeObject *type)\u00b6\n- Part of the Stable ABI since version 3.10.\nReturn the module object associated with the given type when the type was created using\nPyType_FromModuleAndSpec()\n.If no module is associated with the given type, sets\nTypeError\nand returnsNULL\n.This function is usually used to get the module in which a method is defined. 
Note that in such a method,\nPyType_GetModule(Py_TYPE(self))\nmay not return the intended result.Py_TYPE(self)\nmay be a subclass of the intended class, and subclasses are not necessarily defined in the same module as their superclass. SeePyCMethod\nto get the class that defines the method. SeePyType_GetModuleByDef()\nfor cases whenPyCMethod\ncannot be used.Added in version 3.9.\n-\nvoid *PyType_GetModuleState(PyTypeObject *type)\u00b6\n- Part of the Stable ABI since version 3.10.\nReturn the state of the module object associated with the given type. This is a shortcut for calling\nPyModule_GetState()\non the result ofPyType_GetModule()\n.If no module is associated with the given type, sets\nTypeError\nand returnsNULL\n.If the type has an associated module but its state is\nNULL\n, returnsNULL\nwithout setting an exception.Added in version 3.9.\n-\nPyObject *PyType_GetModuleByDef(PyTypeObject *type, struct PyModuleDef *def)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI since version 3.13.\nFind the first superclass whose module was created from the given\nPyModuleDef\ndef, and return that module.If no module is found, raises a\nTypeError\nand returnsNULL\n.This function is intended to be used together with\nPyModule_GetState()\nto get module state from slot methods (such astp_init\nornb_add\n) and other places where a method\u2019s defining class cannot be passed using thePyCMethod\ncalling convention.The returned reference is borrowed from type, and will be valid as long as you hold a reference to type. 
Do not release it with\nPy_DECREF()\nor similar.Added in version 3.11.\n-\nint PyType_GetBaseByToken(PyTypeObject *type, void *token, PyTypeObject **result)\u00b6\n- Part of the Stable ABI since version 3.14.\nFind the first superclass in type\u2019s method resolution order whose\nPy_tp_token\ntoken is equal to the given one.If found, set *result to a new strong reference to it and return\n1\n.If not found, set *result to\nNULL\nand return0\n.On error, set *result to\nNULL\nand return-1\nwith an exception set.\nThe result argument may be\nNULL\n, in which case *result is not set. Use this if you need only the return value.The token argument may not be\nNULL\n.Added in version 3.14.\n-\nint PyUnstable_Type_AssignVersionTag(PyTypeObject *type)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nAttempt to assign a version tag to the given type.\nReturns 1 if the type already had a valid version tag or a new one was assigned, or 0 if a new tag could not be assigned.\nAdded in version 3.12.\n-\nint PyType_SUPPORTS_WEAKREFS(PyTypeObject *type)\u00b6\nReturn true if instances of type support creating weak references, false otherwise. This function always succeeds. type must not be\nNULL\n.See also\nCreating Heap-Allocated Types\u00b6\nThe following functions and structs are used to create heap types.\n-\nPyObject *PyType_FromMetaclass(PyTypeObject *metaclass, PyObject *module, PyType_Spec *spec, PyObject *bases)\u00b6\n- Part of the Stable ABI since version 3.12.\nCreate and return a heap type from the spec (see\nPy_TPFLAGS_HEAPTYPE\n).The metaclass metaclass is used to construct the resulting type object. When metaclass is\nNULL\n, the metaclass is derived from bases (or Py_tp_base[s] slots if bases isNULL\n, see below).Metaclasses that override\ntp_new\nare not supported, except iftp_new\nisNULL\n.The bases argument can be used to specify base classes; it can either be only one class or a tuple of classes. 
If bases is\nNULL\n, the\nPy_tp_bases\nslot is used instead. If that also is\nNULL\n, the\nPy_tp_base\nslot is used instead. If that also is\nNULL\n, the new type derives from\nobject\n. The module argument can be used to record the module in which the new class is defined. It must be a module object or\nNULL\n. If not\nNULL\n, the module is associated with the new type and can later be retrieved with\nPyType_GetModule()\n. The associated module is not inherited by subclasses; it must be specified for each class individually.\nThis function calls\nPyType_Ready()\non the new type.\nNote that this function does not fully match the behavior of calling\ntype()\nor using the\nclass\nstatement. With user-provided base types or metaclasses, prefer calling\ntype\n(or the metaclass) over\nPyType_From*\nfunctions. Specifically:\n__new__()\nis not called on the new class (and it must be set to\ntype.__new__\n).\n__init__()\nis not called on the new class.\n__init_subclass__()\nis not called on any bases.\n__set_name__()\nis not called on new descriptors.\nAdded in version 3.12.\n-\nPyObject *PyType_FromModuleAndSpec(PyObject *module, PyType_Spec *spec, PyObject *bases)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.10.\nEquivalent to\nPyType_FromMetaclass(NULL, module, spec, bases)\n. Added in version 3.9.\nChanged in version 3.10: The function now accepts a single class as the bases argument and\nNULL\nas the\ntp_doc\nslot.\nChanged in version 3.12: The function now finds and uses a metaclass corresponding to the provided base classes. Previously, only\ntype\ninstances were returned. The\ntp_new\nof the metaclass is ignored, which may result in incomplete initialization. Creating classes whose metaclass overrides\ntp_new\nis deprecated.\nChanged in version 3.14: Creating classes whose metaclass overrides\ntp_new\nis no longer allowed.\n-\nPyObject *PyType_FromSpecWithBases(PyType_Spec *spec, PyObject *bases)\u00b6\n- Return value: New reference. 
Part of the Stable ABI since version 3.3.\nEquivalent to\nPyType_FromMetaclass(NULL, NULL, spec, bases)\n. Added in version 3.3.\nChanged in version 3.12: The function now finds and uses a metaclass corresponding to the provided base classes. Previously, only\ntype\ninstances were returned. The\ntp_new\nof the metaclass is ignored, which may result in incomplete initialization. Creating classes whose metaclass overrides\ntp_new\nis deprecated.\nChanged in version 3.14: Creating classes whose metaclass overrides\ntp_new\nis no longer allowed.\n-\nPyObject *PyType_FromSpec(PyType_Spec *spec)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEquivalent to\nPyType_FromMetaclass(NULL, NULL, spec, NULL)\n.\nChanged in version 3.12: The function now finds and uses a metaclass corresponding to the base classes provided in Py_tp_base[s] slots. Previously, only\ntype\ninstances were returned. The\ntp_new\nof the metaclass is ignored, which may result in incomplete initialization. Creating classes whose metaclass overrides\ntp_new\nis deprecated.\nChanged in version 3.14: Creating classes whose metaclass overrides\ntp_new\nis no longer allowed.\n-\nint PyType_Freeze(PyTypeObject *type)\u00b6\n- Part of the Stable ABI since version 3.14.\nMake a type immutable: set the\nPy_TPFLAGS_IMMUTABLETYPE\nflag.\nAll base classes of type must be immutable.\nOn success, return\n0\n. On error, set an exception and return -1.\nThe type must not be used before it\u2019s made immutable. For example, type instances must not be created before the type is made immutable.\nAdded in version 3.14.\n-\ntype PyType_Spec\u00b6\n- Part of the Stable ABI (including all members).\nStructure defining a type\u2019s behavior.\n-\nconst char *name\u00b6\nName of the type, used to set\nPyTypeObject.tp_name\n.\n-\nint basicsize\u00b6\nIf positive, specifies the size of the instance in bytes. 
It is used to set\nPyTypeObject.tp_basicsize\n.If zero, specifies that\ntp_basicsize\nshould be inherited.If negative, the absolute value specifies how much space instances of the class need in addition to the superclass. Use\nPyObject_GetTypeData()\nto get a pointer to subclass-specific memory reserved this way. For negativebasicsize\n, Python will insert padding when needed to meettp_basicsize\n\u2019s alignment requirements.Changed in version 3.12: Previously, this field could not be negative.\n-\nint itemsize\u00b6\nSize of one element of a variable-size type, in bytes. Used to set\nPyTypeObject.tp_itemsize\n. Seetp_itemsize\ndocumentation for caveats.If zero,\ntp_itemsize\nis inherited. Extending arbitrary variable-sized classes is dangerous, since some types use a fixed offset for variable-sized memory, which can then overlap fixed-sized memory used by a subclass. To help prevent mistakes, inheritingitemsize\nis only possible in the following situations:The base is not variable-sized (its\ntp_itemsize\n).The requested\nPyType_Spec.basicsize\nis positive, suggesting that the memory layout of the base class is known.The requested\nPyType_Spec.basicsize\nis zero, suggesting that the subclass does not access the instance\u2019s memory directly.With the\nPy_TPFLAGS_ITEMS_AT_END\nflag.\n-\nunsigned int flags\u00b6\nType flags, used to set\nPyTypeObject.tp_flags\n.If the\nPy_TPFLAGS_HEAPTYPE\nflag is not set,PyType_FromSpecWithBases()\nsets it automatically.\n-\nPyType_Slot *slots\u00b6\nArray of\nPyType_Slot\nstructures. 
Terminated by the special slot value{0, NULL}\n.Each slot ID should be specified at most once.\n-\nconst char *name\u00b6\n-\ntype PyType_Slot\u00b6\n- Part of the Stable ABI (including all members).\nStructure defining optional functionality of a type, containing a slot ID and a value pointer.\n-\nint slot\u00b6\nA slot ID.\nSlot IDs are named like the field names of the structures\nPyTypeObject\n,PyNumberMethods\n,PySequenceMethods\n,PyMappingMethods\nandPyAsyncMethods\nwith an addedPy_\nprefix. For example, use:Py_nb_add\nto setPyNumberMethods.nb_add\nAn additional slot is supported that does not correspond to a\nPyTypeObject\nstruct field:The following \u201coffset\u201d fields cannot be set using\nPyType_Slot\n:tp_weaklistoffset\n(usePy_TPFLAGS_MANAGED_WEAKREF\ninstead if possible)tp_dictoffset\n(usePy_TPFLAGS_MANAGED_DICT\ninstead if possible)tp_vectorcall_offset\n(use\"__vectorcalloffset__\"\nin PyMemberDef)\nIf it is not possible to switch to a\nMANAGED\nflag (for example, for vectorcall or to support Python older than 3.12), specify the offset inPy_tp_members\n. See PyMemberDef documentation for details.The following internal fields cannot be set at all when creating a heap type:\ntp_dict\n,tp_mro\n,tp_cache\n,tp_subclasses\n, andtp_weaklist\n.\nSetting\nPy_tp_bases\norPy_tp_base\nmay be problematic on some platforms. To avoid issues, use the bases argument ofPyType_FromSpecWithBases()\ninstead.Changed in version 3.9: Slots in\nPyBufferProcs\nmay be set in the unlimited API.Changed in version 3.11:\nbf_getbuffer\nandbf_releasebuffer\nare now available under the limited API.Changed in version 3.14: The field\ntp_vectorcall\ncan now be set usingPy_tp_vectorcall\n. See the field\u2019s documentation for details.\n-\nvoid *pfunc\u00b6\nThe desired value of the slot. 
In most cases, this is a pointer to a function.\npfunc values may not be\nNULL\n, except for the following slots:Py_tp_token\n(for clarity, preferPy_TP_USE_SPEC\nrather thanNULL\n)\n-\nint slot\u00b6\n-\nPy_tp_token\u00b6\n- Part of the Stable ABI since version 3.14.\nA\nslot\nthat records a static memory layout ID for a class.If the\nPyType_Spec\nof the class is statically allocated, the token can be set to the spec using the special valuePy_TP_USE_SPEC\n:static PyType_Slot foo_slots[] = { {Py_tp_token, Py_TP_USE_SPEC},\nIt can also be set to an arbitrary pointer, but you must ensure that:\nThe pointer outlives the class, so it\u2019s not reused for something else while the class exists.\nIt \u201cbelongs\u201d to the extension module where the class lives, so it will not clash with other extensions.\nUse\nPyType_GetBaseByToken()\nto check if a class\u2019s superclass has a given token \u2013 that is, check whether the memory layout is compatible.To get the token for a given class (without considering superclasses), use\nPyType_GetSlot()\nwithPy_tp_token\n.Added in version 3.14.\n-\nPy_TP_USE_SPEC\u00b6\n- Part of the Stable ABI since version 3.14.\nUsed as a value with\nPy_tp_token\nto set the token to the class\u2019sPyType_Spec\n. Expands toNULL\n.Added in version 3.14.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 5103} +{"url": "https://docs.python.org/3/c-api/memory.html", "title": "Memory Management", "content": "Memory Management\u00b6\nOverview\u00b6\nMemory management in Python involves a private heap containing all Python objects and data structures. The management of this private heap is ensured internally by the Python memory manager. 
The Python memory manager has different components which deal with various dynamic storage management aspects, like sharing, segmentation, preallocation or caching.\nAt the lowest level, a raw memory allocator ensures that there is enough room in the private heap for storing all Python-related data by interacting with the memory manager of the operating system. On top of the raw memory allocator, several object-specific allocators operate on the same heap and implement distinct memory management policies adapted to the peculiarities of every object type. For example, integer objects are managed differently within the heap than strings, tuples or dictionaries because integers imply different storage requirements and speed/space tradeoffs. The Python memory manager thus delegates some of the work to the object-specific allocators, but ensures that the latter operate within the bounds of the private heap.\nIt is important to understand that the management of the Python heap is performed by the interpreter itself and that the user has no control over it, even if they regularly manipulate object pointers to memory blocks inside that heap. The allocation of heap space for Python objects and other internal buffers is performed on demand by the Python memory manager through the Python/C API functions listed in this document.\nTo avoid memory corruption, extension writers should never try to operate on\nPython objects with the functions exported by the C library: malloc()\n,\ncalloc()\n, realloc()\nand free()\n. This will result in mixed\ncalls between the C allocator and the Python memory manager with fatal\nconsequences, because they implement different algorithms and operate on\ndifferent heaps. 
However, one may safely allocate and release memory blocks\nwith the C library allocator for individual purposes, as shown in the following\nexample:\nPyObject *res;\nchar *buf = (char *) malloc(BUFSIZ); /* for I/O */\nif (buf == NULL)\nreturn PyErr_NoMemory();\n...Do some I/O operation involving buf...\nres = PyBytes_FromString(buf);\nfree(buf); /* malloc'ed */\nreturn res;\nIn this example, the memory request for the I/O buffer is handled by the C library allocator. The Python memory manager is involved only in the allocation of the bytes object returned as a result.\nIn most situations, however, it is recommended to allocate memory from the Python heap specifically because the latter is under control of the Python memory manager. For example, this is required when the interpreter is extended with new object types written in C. Another reason for using the Python heap is the desire to inform the Python memory manager about the memory needs of the extension module. Even when the requested memory is used exclusively for internal, highly specific purposes, delegating all memory requests to the Python memory manager causes the interpreter to have a more accurate image of its memory footprint as a whole. Consequently, under certain circumstances, the Python memory manager may or may not trigger appropriate actions, like garbage collection, memory compaction or other preventive procedures. 
Note that by using the C library allocator as shown in the previous example, the allocated memory for the I/O buffer escapes completely the Python memory manager.\nSee also\nThe PYTHONMALLOC\nenvironment variable can be used to configure\nthe memory allocators used by Python.\nThe PYTHONMALLOCSTATS\nenvironment variable can be used to print\nstatistics of the pymalloc memory allocator every time a\nnew pymalloc object arena is created, and on shutdown.\nAllocator Domains\u00b6\nAll allocating functions belong to one of three different \u201cdomains\u201d (see also\nPyMemAllocatorDomain\n). These domains represent different allocation\nstrategies and are optimized for different purposes. The specific details on\nhow every domain allocates memory or what internal functions each domain calls\nis considered an implementation detail, but for debugging purposes a simplified\ntable can be found at Default Memory Allocators.\nThe APIs used to allocate and free a block of memory must be from the same domain.\nFor example, PyMem_Free()\nmust be used to free memory allocated using PyMem_Malloc()\n.\nThe three allocation domains are:\nRaw domain: intended for allocating memory for general-purpose memory buffers where the allocation must go to the system allocator or where the allocator can operate without an attached thread state. The memory is requested directly from the system. See Raw Memory Interface.\n\u201cMem\u201d domain: intended for allocating memory for Python buffers and general-purpose memory buffers where the allocation must be performed with an attached thread state. The memory is taken from the Python private heap. See Memory Interface.\nObject domain: intended for allocating memory for Python objects. The memory is taken from the Python private heap. See Object allocators.\nNote\nThe free-threaded build requires that only Python objects are allocated using the \u201cobject\u201d domain and that all Python objects are allocated using that domain. 
This differs from the prior Python versions, where this was only a best practice and not a hard requirement.\nFor example, buffers (non-Python objects) should be allocated using PyMem_Malloc()\n,\nPyMem_RawMalloc()\n, or malloc()\n, but not PyObject_Malloc()\n.\nRaw Memory Interface\u00b6\nThe following function sets are wrappers to the system allocator. These functions are thread-safe, so a thread state does not need to be attached.\nThe default raw memory allocator uses\nthe following functions: malloc()\n, calloc()\n, realloc()\nand free()\n; call malloc(1)\n(or calloc(1, 1)\n) when requesting\nzero bytes.\nAdded in version 3.4.\n-\nvoid *PyMem_RawMalloc(size_t n)\u00b6\n- Part of the Stable ABI since version 3.13.\nAllocates n bytes and returns a pointer of type void* to the allocated memory, or\nNULL\nif the request fails.Requesting zero bytes returns a distinct non-\nNULL\npointer if possible, as ifPyMem_RawMalloc(1)\nhad been called instead. The memory will not have been initialized in any way.\n-\nvoid *PyMem_RawCalloc(size_t nelem, size_t elsize)\u00b6\n- Part of the Stable ABI since version 3.13.\nAllocates nelem elements each whose size in bytes is elsize and returns a pointer of type void* to the allocated memory, or\nNULL\nif the request fails. The memory is initialized to zeros.Requesting zero elements or elements of size zero bytes returns a distinct non-\nNULL\npointer if possible, as ifPyMem_RawCalloc(1, 1)\nhad been called instead.Added in version 3.5.\n-\nvoid *PyMem_RawRealloc(void *p, size_t n)\u00b6\n- Part of the Stable ABI since version 3.13.\nResizes the memory block pointed to by p to n bytes. 
The contents will be unchanged to the minimum of the old and the new sizes.\nIf p is\nNULL\n, the call is equivalent toPyMem_RawMalloc(n)\n; else if n is equal to zero, the memory block is resized but is not freed, and the returned pointer is non-NULL\n.Unless p is\nNULL\n, it must have been returned by a previous call toPyMem_RawMalloc()\n,PyMem_RawRealloc()\norPyMem_RawCalloc()\n.If the request fails,\nPyMem_RawRealloc()\nreturnsNULL\nand p remains a valid pointer to the previous memory area.\n-\nvoid PyMem_RawFree(void *p)\u00b6\n- Part of the Stable ABI since version 3.13.\nFrees the memory block pointed to by p, which must have been returned by a previous call to\nPyMem_RawMalloc()\n,PyMem_RawRealloc()\norPyMem_RawCalloc()\n. Otherwise, or ifPyMem_RawFree(p)\nhas been called before, undefined behavior occurs.If p is\nNULL\n, no operation is performed.\nMemory Interface\u00b6\nThe following function sets, modeled after the ANSI C standard, but specifying behavior when requesting zero bytes, are available for allocating and releasing memory from the Python heap.\nThe default memory allocator uses the pymalloc memory allocator.\nWarning\nThere must be an attached thread state when using these functions.\nChanged in version 3.6: The default allocator is now pymalloc instead of system malloc()\n.\n-\nvoid *PyMem_Malloc(size_t n)\u00b6\n- Part of the Stable ABI.\nAllocates n bytes and returns a pointer of type void* to the allocated memory, or\nNULL\nif the request fails.Requesting zero bytes returns a distinct non-\nNULL\npointer if possible, as ifPyMem_Malloc(1)\nhad been called instead. The memory will not have been initialized in any way.\n-\nvoid *PyMem_Calloc(size_t nelem, size_t elsize)\u00b6\n- Part of the Stable ABI since version 3.7.\nAllocates nelem elements each whose size in bytes is elsize and returns a pointer of type void* to the allocated memory, or\nNULL\nif the request fails. 
The memory is initialized to zeros.Requesting zero elements or elements of size zero bytes returns a distinct non-\nNULL\npointer if possible, as ifPyMem_Calloc(1, 1)\nhad been called instead.Added in version 3.5.\n-\nvoid *PyMem_Realloc(void *p, size_t n)\u00b6\n- Part of the Stable ABI.\nResizes the memory block pointed to by p to n bytes. The contents will be unchanged to the minimum of the old and the new sizes.\nIf p is\nNULL\n, the call is equivalent toPyMem_Malloc(n)\n; else if n is equal to zero, the memory block is resized but is not freed, and the returned pointer is non-NULL\n.Unless p is\nNULL\n, it must have been returned by a previous call toPyMem_Malloc()\n,PyMem_Realloc()\norPyMem_Calloc()\n.If the request fails,\nPyMem_Realloc()\nreturnsNULL\nand p remains a valid pointer to the previous memory area.\n-\nvoid PyMem_Free(void *p)\u00b6\n- Part of the Stable ABI.\nFrees the memory block pointed to by p, which must have been returned by a previous call to\nPyMem_Malloc()\n,PyMem_Realloc()\norPyMem_Calloc()\n. Otherwise, or ifPyMem_Free(p)\nhas been called before, undefined behavior occurs.If p is\nNULL\n, no operation is performed.\nThe following type-oriented macros are provided for convenience. Note that TYPE refers to any C type.\n-\nPyMem_New(TYPE, n)\u00b6\nSame as\nPyMem_Malloc()\n, but allocates(n * sizeof(TYPE))\nbytes of memory. Returns a pointer cast toTYPE*\n. The memory will not have been initialized in any way.\n-\nPyMem_Resize(p, TYPE, n)\u00b6\nSame as\nPyMem_Realloc()\n, but the memory block is resized to(n * sizeof(TYPE))\nbytes. Returns a pointer cast toTYPE*\n. On return, p will be a pointer to the new memory area, orNULL\nin the event of failure.This is a C preprocessor macro; p is always reassigned. Save the original value of p to avoid losing memory when handling errors.\n-\nvoid PyMem_Del(void *p)\u00b6\nSame as\nPyMem_Free()\n.\nDeprecated aliases\u00b6\nThese are soft deprecated aliases to existing functions and macros. 
They exist solely for backwards compatibility.\n| Deprecated alias | Corresponding function or macro |\n|---|---|\n| PyMem_MALLOC(n) | PyMem_Malloc(n) |\n| PyMem_NEW(TYPE, n) | PyMem_New(TYPE, n) |\n| PyMem_REALLOC(p, n) | PyMem_Realloc(p, n) |\n| PyMem_RESIZE(p, TYPE, n) | PyMem_Resize(p, TYPE, n) |\n| PyMem_FREE(p) | PyMem_Free(p) |\n| PyMem_DEL(p) | PyMem_Free(p) |\nChanged in version 3.4: The macros are now aliases of the corresponding functions and macros. Previously their behavior was the same, but their use did not necessarily preserve binary compatibility across Python versions.\nDeprecated since version 2.0.\nObject allocators\u00b6\nThe following function sets, modeled after the ANSI C standard, but specifying behavior when requesting zero bytes, are available for allocating and releasing memory from the Python heap.\nNote\nThere is no guarantee that the memory returned by these allocators can be successfully cast to a Python object when intercepting the allocating functions in this domain by the methods described in the Customize Memory Allocators section.\nThe default object allocator uses the pymalloc memory allocator.\nWarning\nThere must be an attached thread state when using these functions.\n-\nvoid *PyObject_Malloc(size_t n)\u00b6\n- Part of the Stable ABI.\nAllocates n bytes and returns a pointer of type void* to the allocated memory, or NULL if the request fails.\nRequesting zero bytes returns a distinct non-NULL pointer if possible, as if PyObject_Malloc(1) had been called instead. The memory will not have been initialized in any way.\n-\nvoid *PyObject_Calloc(size_t nelem, size_t elsize)\u00b6\n- Part of the Stable ABI since version 3.7.\nAllocates nelem elements each whose size in bytes is elsize and returns a pointer of type void* to the allocated memory, or NULL if the request fails. The memory is initialized to zeros.\nRequesting zero elements or elements of size zero bytes returns a distinct non-NULL pointer if possible, as if PyObject_Calloc(1, 1) had been called instead.\nAdded in version 3.5.\n-\nvoid *PyObject_Realloc(void *p, size_t n)\u00b6\n- Part of the Stable ABI.\nResizes the memory block pointed to by p to n bytes. 
The contents will be unchanged to the minimum of the old and the new sizes.\nIf p is NULL, the call is equivalent to PyObject_Malloc(n); else if n is equal to zero, the memory block is resized but is not freed, and the returned pointer is non-NULL.\nUnless p is NULL, it must have been returned by a previous call to PyObject_Malloc(), PyObject_Realloc() or PyObject_Calloc().\nIf the request fails, PyObject_Realloc() returns NULL and p remains a valid pointer to the previous memory area.\n-\nvoid PyObject_Free(void *p)\u00b6\n- Part of the Stable ABI.\nFrees the memory block pointed to by p, which must have been returned by a previous call to PyObject_Malloc(), PyObject_Realloc() or PyObject_Calloc(). Otherwise, or if PyObject_Free(p) has been called before, undefined behavior occurs.\nIf p is NULL, no operation is performed.\nDo not call this directly to free an object\u2019s memory; call the type\u2019s tp_free slot instead.\nDo not use this for memory allocated by PyObject_GC_New or PyObject_GC_NewVar; use PyObject_GC_Del() instead.\nSee also\nPyObject_GC_Del() is the equivalent of this function for memory allocated by types that support garbage collection.\nDefault Memory Allocators\u00b6\nDefault memory allocators:\n| Configuration | Name | PyMem_RawMalloc | PyMem_Malloc | PyObject_Malloc |\n|---|---|---|---|---|\n| Release build | \"pymalloc\" | malloc | pymalloc | pymalloc |\n| Debug build | \"pymalloc_debug\" | malloc + debug | pymalloc + debug | pymalloc + debug |\n| Release build, without pymalloc | \"malloc\" | malloc | malloc | malloc |\n| Debug build, without pymalloc | \"malloc_debug\" | malloc + debug | malloc + debug | malloc + debug |\nLegend:\nName: value for the PYTHONMALLOC environment variable.\nmalloc: system allocators from the standard C library, C functions: malloc(), calloc(), realloc() and free().\npymalloc: pymalloc memory allocator.\nmimalloc: mimalloc memory allocator. 
The pymalloc allocator will be used if mimalloc support isn\u2019t available.\n\u201c+ debug\u201d: with debug hooks on the Python memory allocators.\n\u201cDebug build\u201d: Python build in debug mode.\nCustomize Memory Allocators\u00b6\nAdded in version 3.4.\n-\ntype PyMemAllocatorEx\u00b6\nStructure used to describe a memory block allocator. The structure has the following fields:\nField\nMeaning\nvoid *ctx\nuser context passed as first argument\nvoid* malloc(void *ctx, size_t size)\nallocate a memory block\nvoid* calloc(void *ctx, size_t nelem, size_t elsize)\nallocate a memory block initialized with zeros\nvoid* realloc(void *ctx, void *ptr, size_t new_size)\nallocate or resize a memory block\nvoid free(void *ctx, void *ptr)\nfree a memory block\nChanged in version 3.5: The PyMemAllocator structure was renamed to PyMemAllocatorEx and a new calloc field was added.\n-\ntype PyMemAllocatorDomain\u00b6\nEnum used to identify an allocator domain. Domains:\n-\nPYMEM_DOMAIN_RAW\u00b6\nFunctions: PyMem_RawMalloc(), PyMem_RawCalloc(), PyMem_RawRealloc() and PyMem_RawFree().\n-\nPYMEM_DOMAIN_MEM\u00b6\nFunctions: PyMem_Malloc(), PyMem_Calloc(), PyMem_Realloc() and PyMem_Free().\n-\nPYMEM_DOMAIN_OBJ\u00b6\nFunctions: PyObject_Malloc(), PyObject_Calloc(), PyObject_Realloc() and PyObject_Free().\n-\nvoid PyMem_GetAllocator(PyMemAllocatorDomain domain, PyMemAllocatorEx *allocator)\u00b6\nGet the memory block allocator of the specified domain.\n-\nvoid PyMem_SetAllocator(PyMemAllocatorDomain domain, PyMemAllocatorEx *allocator)\u00b6\nSet the memory block allocator of the specified domain.\nThe new allocator must return a distinct non-NULL pointer when requesting zero bytes.\nFor the PYMEM_DOMAIN_RAW domain, the allocator must be thread-safe: a thread state is not attached when the allocator is called.\nFor the remaining domains, the allocator must also be thread-safe: the allocator may be called in different interpreters that do not share a GIL.\nIf the new allocator is not a hook (does not call the previous allocator), the PyMem_SetupDebugHooks() function must be called to reinstall the debug hooks on top of the new allocator.\nSee also PyPreConfig.allocator and Preinitialize Python with PyPreConfig.\nWarning\nPyMem_SetAllocator() does have the following contract:\nIt can be called after Py_PreInitialize() and before Py_InitializeFromConfig() to install a custom memory allocator. There are no restrictions over the installed allocator other than the ones imposed by the domain (for instance, the Raw Domain allows the allocator to be called without an attached thread state). See the section on allocator domains for more information.\nIf called after Python has finished initializing (after Py_InitializeFromConfig() has been called) the allocator must wrap the existing allocator. Substituting the current allocator for some other arbitrary one is not supported.\nChanged in version 3.12: All allocators must be thread-safe.\n-\nvoid PyMem_SetupDebugHooks(void)\u00b6\nSet up debug hooks in the Python memory allocators to detect memory errors.\nDebug hooks on the Python memory allocators\u00b6\nWhen Python is built in debug mode, the PyMem_SetupDebugHooks() function is called at Python preinitialization to set up debug hooks on the Python memory allocators to detect memory errors.\nThe PYTHONMALLOC environment variable can be used to install debug hooks on a Python compiled in release mode (ex: PYTHONMALLOC=debug).\nThe PyMem_SetupDebugHooks() function can be used to set debug hooks after calling PyMem_SetAllocator().\nThese debug hooks fill dynamically allocated memory blocks with special, recognizable bit patterns. Newly allocated memory is filled with the byte 0xCD (PYMEM_CLEANBYTE), freed memory is filled with the byte 0xDD (PYMEM_DEADBYTE). Memory blocks are surrounded by \u201cforbidden bytes\u201d filled with the byte 0xFD (PYMEM_FORBIDDENBYTE). Strings of these bytes are unlikely to be valid addresses, floats, or ASCII strings.\nRuntime checks:\nDetect API violations. 
For example, detect if\nPyObject_Free()\nis called on a memory block allocated byPyMem_Malloc()\n.Detect write before the start of the buffer (buffer underflow).\nDetect write after the end of the buffer (buffer overflow).\nCheck that there is an attached thread state when allocator functions of\nPYMEM_DOMAIN_OBJ\n(ex:PyObject_Malloc()\n) andPYMEM_DOMAIN_MEM\n(ex:PyMem_Malloc()\n) domains are called.\nOn error, the debug hooks use the tracemalloc\nmodule to get the\ntraceback where a memory block was allocated. The traceback is only displayed\nif tracemalloc\nis tracing Python memory allocations and the memory block\nwas traced.\nLet S = sizeof(size_t)\n. 2*S\nbytes are added at each end of each block\nof N bytes requested. The memory layout is like so, where p represents the\naddress returned by a malloc-like or realloc-like function (p[i:j]\nmeans\nthe slice of bytes from *(p+i)\ninclusive up to *(p+j)\nexclusive; note\nthat the treatment of negative indices differs from a Python slice):\np[-2*S:-S]\nNumber of bytes originally asked for. This is a size_t, big-endian (easier to read in a memory dump).\np[-S]\nAPI identifier (ASCII character):\n'r'\nforPYMEM_DOMAIN_RAW\n.'m'\nforPYMEM_DOMAIN_MEM\n.'o'\nforPYMEM_DOMAIN_OBJ\n.\np[-S+1:0]\nCopies of PYMEM_FORBIDDENBYTE. Used to catch under- writes and reads.\np[0:N]\nThe requested memory, filled with copies of PYMEM_CLEANBYTE, used to catch reference to uninitialized memory. When a realloc-like function is called requesting a larger memory block, the new excess bytes are also filled with PYMEM_CLEANBYTE. When a free-like function is called, these are overwritten with PYMEM_DEADBYTE, to catch reference to freed memory. When a realloc- like function is called requesting a smaller memory block, the excess old bytes are also filled with PYMEM_DEADBYTE.\np[N:N+S]\nCopies of PYMEM_FORBIDDENBYTE. 
Used to catch over- writes and reads.\np[N+S:N+2*S]\nOnly used if the\nPYMEM_DEBUG_SERIALNO\nmacro is defined (not defined by default).A serial number, incremented by 1 on each call to a malloc-like or realloc-like function. Big-endian\nsize_t\n. If \u201cbad memory\u201d is detected later, the serial number gives an excellent way to set a breakpoint on the next run, to capture the instant at which this block was passed out. The static function bumpserialno() in obmalloc.c is the only place the serial number is incremented, and exists so you can set such a breakpoint easily.\nA realloc-like or free-like function first checks that the PYMEM_FORBIDDENBYTE bytes at each end are intact. If they\u2019ve been altered, diagnostic output is written to stderr, and the program is aborted via Py_FatalError(). The other main failure mode is provoking a memory error when a program reads up one of the special bit patterns and tries to use it as an address. If you get in a debugger then and look at the object, you\u2019re likely to see that it\u2019s entirely filled with PYMEM_DEADBYTE (meaning freed memory is getting used) or PYMEM_CLEANBYTE (meaning uninitialized memory is getting used).\nChanged in version 3.6: The PyMem_SetupDebugHooks()\nfunction now also works on Python\ncompiled in release mode. On error, the debug hooks now use\ntracemalloc\nto get the traceback where a memory block was allocated.\nThe debug hooks now also check if there is an attached thread state when\nfunctions of PYMEM_DOMAIN_OBJ\nand PYMEM_DOMAIN_MEM\ndomains are\ncalled.\nChanged in version 3.8: Byte patterns 0xCB\n(PYMEM_CLEANBYTE\n), 0xDB\n(PYMEM_DEADBYTE\n)\nand 0xFB\n(PYMEM_FORBIDDENBYTE\n) have been replaced with 0xCD\n,\n0xDD\nand 0xFD\nto use the same values than Windows CRT debug\nmalloc()\nand free()\n.\nThe pymalloc allocator\u00b6\nPython has a pymalloc allocator optimized for small objects (smaller or equal\nto 512 bytes) with a short lifetime. 
It uses memory mappings called \u201carenas\u201d\nwith a fixed size of either 256 KiB on 32-bit platforms or 1 MiB on 64-bit\nplatforms. It falls back to PyMem_RawMalloc()\nand\nPyMem_RawRealloc()\nfor allocations larger than 512 bytes.\npymalloc is the default allocator of the\nPYMEM_DOMAIN_MEM\n(ex: PyMem_Malloc()\n) and\nPYMEM_DOMAIN_OBJ\n(ex: PyObject_Malloc()\n) domains.\nThe arena allocator uses the following functions:\nVirtualAlloc()\nandVirtualFree()\non Windows,mmap()\nandmunmap()\nif available,malloc()\nandfree()\notherwise.\nThis allocator is disabled if Python is configured with the\n--without-pymalloc\noption. It can also be disabled at runtime using\nthe PYTHONMALLOC\nenvironment variable (ex: PYTHONMALLOC=malloc\n).\nTypically, it makes sense to disable the pymalloc allocator when building\nPython with AddressSanitizer (--with-address-sanitizer\n) which helps\nuncover low level bugs within the C code.\nCustomize pymalloc Arena Allocator\u00b6\nAdded in version 3.4.\n-\ntype PyObjectArenaAllocator\u00b6\nStructure used to describe an arena allocator. The structure has three fields:\nField\nMeaning\nvoid *ctx\nuser context passed as first argument\nvoid* alloc(void *ctx, size_t size)\nallocate an arena of size bytes\nvoid free(void *ctx, void *ptr, size_t size)\nfree an arena\n-\nvoid PyObject_GetArenaAllocator(PyObjectArenaAllocator *allocator)\u00b6\nGet the arena allocator.\n-\nvoid PyObject_SetArenaAllocator(PyObjectArenaAllocator *allocator)\u00b6\nSet the arena allocator.\nThe mimalloc allocator\u00b6\nAdded in version 3.13.\nPython supports the mimalloc allocator when the underlying platform support is available. mimalloc \u201cis a general purpose allocator with excellent performance characteristics. 
Initially developed by Daan Leijen for the runtime systems of the Koka and Lean languages.\u201d\ntracemalloc C API\u00b6\nAdded in version 3.7.\n-\nint PyTraceMalloc_Track(unsigned int domain, uintptr_t ptr, size_t size)\u00b6\nTrack an allocated memory block in the\ntracemalloc\nmodule.Return\n0\non success, return-1\non error (failed to allocate memory to store the trace). Return-2\nif tracemalloc is disabled.If memory block is already tracked, update the existing trace.\n-\nint PyTraceMalloc_Untrack(unsigned int domain, uintptr_t ptr)\u00b6\nUntrack an allocated memory block in the\ntracemalloc\nmodule. Do nothing if the block was not tracked.Return\n-2\nif tracemalloc is disabled, otherwise return0\n.\nExamples\u00b6\nHere is the example from section Overview, rewritten so that the I/O buffer is allocated from the Python heap by using the first function set:\nPyObject *res;\nchar *buf = (char *) PyMem_Malloc(BUFSIZ); /* for I/O */\nif (buf == NULL)\nreturn PyErr_NoMemory();\n/* ...Do some I/O operation involving buf... */\nres = PyBytes_FromString(buf);\nPyMem_Free(buf); /* allocated with PyMem_Malloc */\nreturn res;\nThe same code using the type-oriented function set:\nPyObject *res;\nchar *buf = PyMem_New(char, BUFSIZ); /* for I/O */\nif (buf == NULL)\nreturn PyErr_NoMemory();\n/* ...Do some I/O operation involving buf... */\nres = PyBytes_FromString(buf);\nPyMem_Free(buf); /* allocated with PyMem_New */\nreturn res;\nNote that in the two examples above, the buffer is always manipulated via functions belonging to the same set. Indeed, it is required to use the same memory API family for a given memory block, so that the risk of mixing different allocators is reduced to a minimum. 
The following code sequence contains two errors, one of which is labeled as fatal because it mixes two different allocators operating on different heaps.\nchar *buf1 = PyMem_New(char, BUFSIZ);\nchar *buf2 = (char *) malloc(BUFSIZ);\nchar *buf3 = (char *) PyMem_Malloc(BUFSIZ);\n...\nPyMem_Del(buf3); /* Wrong -- should be PyMem_Free() */\nfree(buf2); /* Right -- allocated via malloc() */\nfree(buf1); /* Fatal -- should be PyMem_Free() */\nIn addition to the functions aimed at handling raw memory blocks from the Python\nheap, objects in Python are allocated and released with PyObject_New\n,\nPyObject_NewVar\nand PyObject_Free()\n.\nThese will be explained in the next chapter on defining and implementing new object types in C.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 6401} +{"url": "https://docs.python.org/3/whatsnew/3.14.html", "title": "What\u2019s new in Python 3.14", "content": "What\u2019s new in Python 3.14\u00b6\n- Editors:\nAdam Turner and Hugo van Kemenade\nThis article explains the new features in Python 3.14, compared to 3.13. Python 3.14 was released on 7 October 2025. For full details, see the changelog.\nSee also\nPEP 745 \u2013 Python 3.14 release schedule\nSummary \u2013 Release highlights\u00b6\nPython 3.14 is the latest stable release of the Python programming language, with a mix of changes to the language, the implementation, and the standard library. 
The biggest changes include template string literals, deferred evaluation of annotations, and support for subinterpreters in the standard library.\nThe library changes include significantly improved capabilities for\nintrospection in asyncio,\nsupport for Zstandard via a new\ncompression.zstd\nmodule, syntax highlighting in the REPL,\nas well as the usual deprecations and removals,\nand improvements in user-friendliness and correctness.\nThis article doesn\u2019t attempt to provide a complete specification of all new features, but instead gives a convenient overview. For full details refer to the documentation, such as the Library Reference and Language Reference. To understand the complete implementation and design rationale for a change, refer to the PEP for a particular new feature; but note that PEPs usually are not kept up-to-date once a feature has been fully implemented. See Porting to Python 3.14 for guidance on upgrading from earlier versions of Python.\nInterpreter improvements:\nSignificant improvements in the standard library:\nSyntax highlighting in the default interactive shell, and color output in several standard library CLIs\nC API improvements:\nPlatform support:\nPEP 776: Emscripten is now an officially supported platform, at tier 3.\nRelease changes:\nNew features\u00b6\nPEP 649 & PEP 749: Deferred evaluation of annotations\u00b6\nThe annotations on functions, classes, and modules are no\nlonger evaluated eagerly. Instead, annotations are stored in special-purpose\nannotate functions and evaluated only when\nnecessary (except if from __future__ import annotations\nis used).\nThis change is designed to improve performance and usability of annotations in Python in most circumstances. The runtime cost for defining annotations is minimized, but it remains possible to introspect annotations at runtime. 
It is no longer necessary to enclose annotations in strings if they contain forward references.\nThe new annotationlib\nmodule provides tools for inspecting deferred\nannotations. Annotations may be evaluated in the VALUE\nformat (which evaluates annotations to runtime values, similar to the behavior in\nearlier Python versions), the FORWARDREF\nformat\n(which replaces undefined names with special markers), and the\nSTRING\nformat (which returns annotations as strings).\nThis example shows how these formats behave:\n>>> from annotationlib import get_annotations, Format\n>>> def func(arg: Undefined):\n... pass\n>>> get_annotations(func, format=Format.VALUE)\nTraceback (most recent call last):\n...\nNameError: name 'Undefined' is not defined\n>>> get_annotations(func, format=Format.FORWARDREF)\n{'arg': ForwardRef('Undefined', owner=)}\n>>> get_annotations(func, format=Format.STRING)\n{'arg': 'Undefined'}\nThe porting section contains guidance on changes that may be needed due to these changes, though in the majority of cases, code will continue working as-is.\n(Contributed by Jelle Zijlstra in PEP 749 and gh-119180; PEP 649 was written by Larry Hastings.)\nPEP 734: Multiple interpreters in the standard library\u00b6\nThe CPython runtime supports running multiple copies of Python in the same process simultaneously and has done so for over 20 years. Each of these separate copies is called an \u2018interpreter\u2019. 
However, the feature had been available only through the C-API.\nThat limitation is removed in Python 3.14,\nwith the new concurrent.interpreters\nmodule.\nThere are at least two notable reasons why using multiple interpreters has significant benefits:\nthey support a new (to Python), human-friendly concurrency model\ntrue multi-core parallelism\nFor some use cases, concurrency in software improves efficiency and\ncan simplify design, at a high level.\nAt the same time, implementing and maintaining all but the simplest concurrency\nis often a struggle for the human brain.\nThat especially applies to plain threads (for example, threading\n),\nwhere all memory is shared between all threads.\nWith multiple isolated interpreters, you can take advantage of a class of concurrency models, like Communicating Sequential Processes (CSP) or the actor model, that have found success in other programming languages, like Smalltalk, Erlang, Haskell, and Go. Think of multiple interpreters as threads but with opt-in sharing.\nRegarding multi-core parallelism: as of Python 3.12, interpreters are now sufficiently isolated from one another to be used in parallel (see PEP 684). This unlocks a variety of CPU-intensive use cases for Python that were limited by the GIL.\nUsing multiple interpreters is similar in many ways to\nmultiprocessing\n, in that they both provide isolated logical\n\u201cprocesses\u201d that can run in parallel, with no sharing by default.\nHowever, when using multiple interpreters, an application will use\nfewer system resources and will operate more efficiently (since it\nstays within the same process). Think of multiple interpreters as\nhaving the isolation of processes with the efficiency of threads.\nWhile the feature has been around for decades, multiple interpreters have not been used widely, due to low awareness and the lack of a standard library module. 
Consequently, they currently have several notable limitations, which are expected to improve significantly now that the feature is going mainstream.\nCurrent limitations:\nstarting each interpreter has not been optimized yet\neach interpreter uses more memory than necessary (work continues on extensive internal sharing between interpreters)\nthere aren\u2019t many options yet for truly sharing objects or other data between interpreters (other than\nmemoryview\n)\nmany third-party extension modules on PyPI are not yet compatible with multiple interpreters (all standard library extension modules are compatible)\nthe approach to writing applications that use multiple isolated interpreters is mostly unfamiliar to Python users, for now\nThe impact of these limitations will depend on future CPython improvements, how interpreters are used, and what the community solves through PyPI packages. Depending on the use case, the limitations may not have much impact, so try it out!\nFurthermore, future CPython releases will reduce or eliminate overhead and provide utilities that are less appropriate on PyPI. In the meantime, most of the limitations can also be addressed through extension modules, meaning PyPI packages can fill any gap for 3.14, and even back to 3.12 where interpreters were finally properly isolated and stopped sharing the GIL. Likewise, libraries on PyPI are expected to emerge for high-level abstractions on top of interpreters.\nRegarding extension modules, work is in progress to update some PyPI projects, as well as tools like Cython, pybind11, nanobind, and PyO3. The steps for isolating an extension module are found at Isolating Extension Modules. 
Isolating a module has a lot of overlap with what is required to support free-threading, so the ongoing work in the community in that area will help accelerate support for multiple interpreters.\nAlso added in 3.14: concurrent.futures.InterpreterPoolExecutor.\n(Contributed by Eric Snow in gh-134939.)\nSee also\nPEP 750: Template string literals\u00b6\nTemplate strings are a new mechanism for custom string processing.\nThey share the familiar syntax of f-strings but, unlike f-strings,\nreturn an object representing the static and interpolated parts of\nthe string, instead of a simple str\n.\nTo write a t-string, use a 't'\nprefix instead of an 'f'\n:\n>>> variety = 'Stilton'\n>>> template = t'Try some {variety} cheese!'\n>>> type(template)\n<class 'string.templatelib.Template'>\nTemplate\nobjects provide access to the static\nand interpolated (in curly braces) parts of a string before they are combined.\nIterate over Template\ninstances to access their parts in order:\n>>> list(template)\n['Try some ', Interpolation('Stilton', 'variety', None, ''), ' cheese!']\nIt\u2019s easy to write (or call) code to process Template\ninstances.\nFor example, here\u2019s a function that renders static parts lowercase and\nInterpolation\ninstances uppercase:\nfrom string.templatelib import Interpolation\ndef lower_upper(template):\n    \"\"\"Render static parts lowercase and interpolations uppercase.\"\"\"\n    parts = []\n    for part in template:\n        if isinstance(part, Interpolation):\n            parts.append(str(part.value).upper())\n        else:\n            parts.append(part.lower())\n    return ''.join(parts)\nname = 'Wenslydale'\ntemplate = t'Mister {name}'\nassert lower_upper(template) == 'mister WENSLYDALE'\nBecause Template\ninstances distinguish between static strings and\ninterpolations at runtime, they can be useful for sanitising user input.\nWriting a html()\nfunction that escapes user input in HTML is an exercise\nleft to the reader!\nTemplate processing code can provide improved flexibility.\nFor instance, a more advanced html()\nfunction could 
accept\na dict\nof HTML attributes directly in the template:\nattributes = {'src': 'limburger.jpg', 'alt': 'lovely cheese'}\ntemplate = t'<img {attributes}>'\nassert html(template) == '<img src=\"limburger.jpg\" alt=\"lovely cheese\" />'\nOf course, template processing code does not need to return a string-like result.\nAn even more advanced html()\ncould return a custom type representing\na DOM-like structure.\nWith t-strings in place, developers can write systems that sanitise SQL, make safe shell operations, improve logging, tackle modern ideas in web development (HTML, CSS, and so on), and implement lightweight custom business DSLs.\n(Contributed by Jim Baker, Guido van Rossum, Paul Everitt, Koudai Aono, Lysandros Nikolaou, Dave Peck, Adam Turner, Jelle Zijlstra, B\u00e9n\u00e9dikt Tran, and Pablo Galindo Salgado in gh-132661.)\nSee also\nPEP 768: Safe external debugger interface\u00b6\nPython 3.14 introduces a zero-overhead debugging interface that allows debuggers and profilers to safely attach to running Python processes without stopping or restarting them. This is a significant enhancement to Python\u2019s debugging capabilities, meaning that unsafe alternatives are no longer required.\nThe new interface provides safe execution points for attaching debugger code without modifying the interpreter\u2019s normal execution path or adding any overhead at runtime. Due to this, tools can now inspect and interact with Python applications in real-time, which is a crucial capability for high-availability systems and production environments.\nFor convenience, this interface is implemented in the sys.remote_exec()\nfunction. For example:\nimport os\nimport sys\nfrom tempfile import NamedTemporaryFile\nwith NamedTemporaryFile(mode='w', suffix='.py', delete=False) as f:\n    script_path = f.name\n    f.write(f'import my_debugger; my_debugger.connect({os.getpid()})')\n# Execute in process with PID 1234\nprint('Behold! 
An offering:')\nsys.remote_exec(1234, script_path)\nThis function allows sending Python code to be executed in a target process at the next safe execution point. However, tool authors can also implement the protocol directly as described in the PEP, which details the underlying mechanisms used to safely attach to running processes.\nThe debugging interface has been carefully designed with security in mind and includes several mechanisms to control access:\nA\nPYTHON_DISABLE_REMOTE_DEBUG\nenvironment variable.\nA\n-X disable-remote-debug\ncommand-line option.\nA\n--without-remote-debug\nconfigure flag to completely disable the feature at build time.\n(Contributed by Pablo Galindo Salgado, Matt Wozniski, and Ivona Stojanovic in gh-131591.)\nSee also\nA new type of interpreter\u00b6\nA new type of interpreter has been added to CPython.\nIt uses tail calls between small C functions that implement individual\nPython opcodes, rather than one large C case\nstatement.\nFor certain newer compilers, this interpreter provides\nsignificantly better performance. Preliminary benchmarks suggest a geometric\nmean of 3-5% faster on the standard pyperformance\nbenchmark suite,\ndepending on platform and architecture.\nThe baseline is Python 3.14 built with Clang 19, without this new interpreter.\nThis interpreter currently only works with Clang 19 and newer on x86-64 and AArch64 architectures. However, a future release of GCC is expected to support this as well.\nThis feature is opt-in for now. Enabling profile-guided optimization is highly\nrecommended when using the new interpreter as it is the only configuration\nthat has been tested and validated for improved performance.\nFor further information, see --with-tail-call-interp\n.\nNote\nThis is not to be confused with tail call optimization of Python functions, which is currently not implemented in CPython.\nThis new interpreter type is an internal implementation detail of the CPython interpreter. 
It doesn\u2019t change the visible behavior of Python programs at all. It can improve their performance, but doesn\u2019t change anything else.\n(Contributed by Ken Jin in gh-128563, with ideas on how to implement this in CPython by Mark Shannon, Garrett Gu, Haoran Xu, and Josh Haberman.)\nFree-threaded mode improvements\u00b6\nCPython\u2019s free-threaded mode (PEP 703), initially added in 3.13, has been significantly improved in Python 3.14. The implementation described in PEP 703 has been finished, including C API changes, and temporary workarounds in the interpreter were replaced with more permanent solutions. The specializing adaptive interpreter (PEP 659) is now enabled in free-threaded mode, which along with many other optimizations greatly improves its performance. The performance penalty on single-threaded code in free-threaded mode is now roughly 5-10%, depending on the platform and C compiler used.\nFrom Python 3.14, when compiling extension modules for the free-threaded build of\nCPython on Windows, the preprocessor variable Py_GIL_DISABLED\nnow needs to\nbe specified by the build backend, as it will no longer be determined\nautomatically by the C compiler. For a running interpreter, the setting that\nwas used at compile time can be found using sysconfig.get_config_var()\n.\nThe new -X context_aware_warnings\nflag controls if\nconcurrent safe warnings control\nis enabled. The flag defaults to true for the free-threaded build\nand false for the GIL-enabled build.\nA new thread_inherit_context\nflag has been added,\nwhich if enabled means that threads created with threading.Thread\nstart with a copy of the Context()\nof the caller of\nstart()\n. Most significantly, this makes the warning\nfiltering context established by catch_warnings\nbe\n\u201cinherited\u201d by threads (or asyncio tasks) started within that context. 
It also\naffects other modules that use context variables, such as the decimal\ncontext manager.\nThis flag defaults to true for the free-threaded build and false for\nthe GIL-enabled build.\n(Contributed by Sam Gross, Matt Page, Neil Schemenauer, Thomas Wouters, Donghee Na, Kirill Podoprigora, Ken Jin, Itamar Oren, Brett Simmers, Dino Viehland, Nathan Goldbaum, Ralf Gommers, Lysandros Nikolaou, Kumar Aditya, Edgar Margffoy, and many others. Some of these contributors are employed by Meta, which has continued to provide significant engineering resources to support this project.)\nImproved error messages\u00b6\nThe interpreter now provides helpful suggestions when it detects typos in Python keywords. When a word that closely resembles a Python keyword is encountered, the interpreter will suggest the correct keyword in the error message. This feature helps programmers quickly identify and fix common typing mistakes. For example:\n>>> whille True:\n...     pass\nTraceback (most recent call last):\n  File \"<python-input-0>\", line 1\n    whille True:\n    ^^^^^^\nSyntaxError: invalid syntax. Did you mean 'while'?\nWhile the feature focuses on the most common cases, some variations of misspellings may still result in regular syntax errors. (Contributed by Pablo Galindo in gh-132449.)\nelif\nstatements that follow an\nelse\nblock now have a specific error message. (Contributed by Steele Farnsworth in gh-129902.)\n>>> if who == \"me\":\n...     print(\"It's me!\")\n... else:\n...     print(\"It's not me!\")\n... elif who is None:\n...     print(\"Who is it?\")\n  File \"<python-input-0>\", line 5\n    elif who is None:\n    ^^^^\nSyntaxError: 'elif' block follows an 'else' block\nIf a statement is passed to a conditional expression after\nelse\n, or one of\npass\n,\nbreak\n, or\ncontinue\nis passed before\nif\n, then the error message highlights where the\nexpression\nis required. 
(Contributed by Sergey Miryanov in gh-129515.)\n>>> x = 1 if True else pass\nTraceback (most recent call last):\n  File \"<python-input-0>\", line 1\n    x = 1 if True else pass\n                       ^^^^\nSyntaxError: expected expression after 'else', but statement is given\n>>> x = continue if True else break\nTraceback (most recent call last):\n  File \"<python-input-0>\", line 1\n    x = continue if True else break\n        ^^^^^^^^\nSyntaxError: expected expression before 'if', but statement is given\nWhen incorrectly closed strings are detected, the error message suggests that the string may be intended to be part of the string. (Contributed by Pablo Galindo in gh-88535.)\n>>> \"The interesting object \"The important object\" is very important\"\nTraceback (most recent call last):\nSyntaxError: invalid syntax. Is this intended to be part of the string?\nWhen strings have incompatible prefixes, the error now shows which prefixes are incompatible. (Contributed by Nikita Sobolev in gh-133197.)\n>>> ub'abc'\n  File \"<python-input-0>\", line 1\n    ub'abc'\n    ^^\nSyntaxError: 'u' and 'b' prefixes are incompatible\nImproved error messages when using\nas\nwith incompatible targets in:\nImports:\nimport ... as ...\nFrom imports:\nfrom ... import ... as ...\nExcept handlers:\nexcept ... as ...\nPattern-match cases:\ncase ... as ...\n(Contributed by Nikita Sobolev in gh-123539, gh-123562, and gh-123440.)\nImproved error message when trying to add an instance of an unhashable type to a\ndict\nor\nset\n. 
(Contributed by CF Bolz-Tereick and Victor Stinner in gh-132828.)\n>>> s = set()\n>>> s.add({'pages': 12, 'grade': 'A'})\nTraceback (most recent call last):\n  File \"<python-input-0>\", line 1, in <module>\n    s.add({'pages': 12, 'grade': 'A'})\n    ~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nTypeError: cannot use 'dict' as a set element (unhashable type: 'dict')\n>>> d = {}\n>>> l = [1, 2, 3]\n>>> d[l] = 12\nTraceback (most recent call last):\n  File \"<python-input-0>\", line 1, in <module>\n    d[l] = 12\n    ~^^^\nTypeError: cannot use 'list' as a dict key (unhashable type: 'list')\nImproved error message when an object supporting the synchronous context manager protocol is entered using\nasync with\ninstead of\nwith\n, and vice versa for the asynchronous context manager protocol. (Contributed by B\u00e9n\u00e9dikt Tran in gh-128398.)\nPEP 784: Zstandard support in the standard library\u00b6\nThe new compression\npackage contains modules compression.lzma\n,\ncompression.bz2\n, compression.gzip\nand compression.zlib\nwhich re-export the lzma\n, bz2\n, gzip\nand zlib\nmodules respectively. The new import names under compression\nare the\npreferred names for importing these compression modules from Python 3.14. However,\nthe existing module names have not been deprecated. Any deprecation or removal\nof the existing compression modules will occur no sooner than five years after\nthe release of 3.14.\nThe new compression.zstd\nmodule provides compression and decompression\nAPIs for the Zstandard format via bindings to Meta\u2019s zstd library. Zstandard is a widely adopted, highly\nefficient, and fast compression format. 
In addition to the APIs introduced in\ncompression.zstd\n, support for reading and writing Zstandard compressed\narchives has been added to the tarfile\n, zipfile\n, and\nshutil\nmodules.\nHere\u2019s an example of using the new module to compress some data:\nfrom compression import zstd\nimport math\ndata = str(math.pi).encode() * 20\ncompressed = zstd.compress(data)\nratio = len(compressed) / len(data)\nprint(f\"Achieved compression ratio of {ratio}\")\nAs can be seen, the API is similar to the APIs of the lzma\nand\nbz2\nmodules.\n(Contributed by Emma Harper Smith, Adam Turner, Gregory P. Smith, Tomas Roun, Victor Stinner, and Rogdham in gh-132983.)\nSee also\nAsyncio introspection capabilities\u00b6\nAdded a new command-line interface to inspect running Python processes\nusing asynchronous tasks, available via python -m asyncio ps PID\nor python -m asyncio pstree PID\n.\nThe ps\nsubcommand inspects the given process ID (PID) and displays\ninformation about currently running asyncio tasks.\nIt outputs a task table: a flat listing of all tasks, their names,\ntheir coroutine stacks, and which tasks are awaiting them.\nThe pstree\nsubcommand fetches the same information, but instead renders a\nvisual async call tree, showing coroutine relationships in a hierarchical format.\nThis command is particularly useful for debugging long-running or stuck\nasynchronous programs.\nIt can help developers quickly identify where a program is blocked,\nwhat tasks are pending, and how coroutines are chained together.\nFor example given this code:\nimport asyncio\nasync def play_track(track):\n    await asyncio.sleep(5)\n    print(f'\ud83c\udfb5 Finished: {track}')\nasync def play_album(name, tracks):\n    async with asyncio.TaskGroup() as tg:\n        for track in tracks:\n            tg.create_task(play_track(track), name=track)\nasync def main():\n    async with asyncio.TaskGroup() as tg:\n        tg.create_task(\n            play_album('Sundowning', ['TNDNBTG', 'Levitate']),\n            name='Sundowning')\n        tg.create_task(\n            play_album('TMBTE', 
['DYWTYLM', 'Aqua Regia']),\n            name='TMBTE')\nif __name__ == '__main__':\n    asyncio.run(main())\nExecuting the new tool on the running process will yield a table like this:\npython -m asyncio ps 12345\ntid task id task name coroutine stack awaiter chain awaiter name awaiter id\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n1935500 0x7fc930c18050 Task-1 TaskGroup._aexit -> TaskGroup.__aexit__ -> main 0x0\n1935500 0x7fc930c18230 Sundowning TaskGroup._aexit -> TaskGroup.__aexit__ -> album TaskGroup._aexit -> TaskGroup.__aexit__ -> main Task-1 0x7fc930c18050\n1935500 0x7fc93173fa50 TMBTE TaskGroup._aexit -> TaskGroup.__aexit__ -> album TaskGroup._aexit -> TaskGroup.__aexit__ -> main Task-1 0x7fc930c18050\n1935500 0x7fc93173fdf0 TNDNBTG sleep -> play TaskGroup._aexit -> TaskGroup.__aexit__ -> album Sundowning 0x7fc930c18230\n1935500 0x7fc930d32510 Levitate sleep -> play TaskGroup._aexit -> TaskGroup.__aexit__ -> album Sundowning 0x7fc930c18230\n1935500 0x7fc930d32890 DYWTYLM sleep -> play TaskGroup._aexit -> TaskGroup.__aexit__ -> album TMBTE 0x7fc93173fa50\n1935500 0x7fc93161ec30 Aqua Regia sleep -> play TaskGroup._aexit -> TaskGroup.__aexit__ -> album TMBTE 0x7fc93173fa50\nor a tree like this:\npython -m asyncio pstree 12345\n\u2514\u2500\u2500 (T) Task-1\n\u2514\u2500\u2500 main example.py:13\n\u2514\u2500\u2500 TaskGroup.__aexit__ Lib/asyncio/taskgroups.py:72\n\u2514\u2500\u2500 TaskGroup._aexit Lib/asyncio/taskgroups.py:121\n\u251c\u2500\u2500 (T) Sundowning\n\u2502 \u2514\u2500\u2500 album example.py:8\n\u2502 \u2514\u2500\u2500 TaskGroup.__aexit__ Lib/asyncio/taskgroups.py:72\n\u2502 \u2514\u2500\u2500 TaskGroup._aexit Lib/asyncio/taskgroups.py:121\n\u2502 \u251c\u2500\u2500 (T) TNDNBTG\n\u2502 \u2502 \u2514\u2500\u2500 play example.py:4\n\u2502 \u2502 \u2514\u2500\u2500 sleep Lib/asyncio/tasks.py:702\n\u2502 \u2514\u2500\u2500 (T) 
Levitate\n\u2502 \u2514\u2500\u2500 play example.py:4\n\u2502 \u2514\u2500\u2500 sleep Lib/asyncio/tasks.py:702\n\u2514\u2500\u2500 (T) TMBTE\n\u2514\u2500\u2500 album example.py:8\n\u2514\u2500\u2500 TaskGroup.__aexit__ Lib/asyncio/taskgroups.py:72\n\u2514\u2500\u2500 TaskGroup._aexit Lib/asyncio/taskgroups.py:121\n\u251c\u2500\u2500 (T) DYWTYLM\n\u2502 \u2514\u2500\u2500 play example.py:4\n\u2502 \u2514\u2500\u2500 sleep Lib/asyncio/tasks.py:702\n\u2514\u2500\u2500 (T) Aqua Regia\n\u2514\u2500\u2500 play example.py:4\n\u2514\u2500\u2500 sleep Lib/asyncio/tasks.py:702\nIf a cycle is detected in the async await graph (which could indicate a programming issue), the tool raises an error and lists the cycle paths that prevent tree construction:\npython -m asyncio pstree 12345\nERROR: await-graph contains cycles - cannot print a tree!\ncycle: Task-2 \u2192 Task-3 \u2192 Task-2\n(Contributed by Pablo Galindo, \u0141ukasz Langa, Yury Selivanov, and Marta Gomez Macias in gh-91048.)\nConcurrent safe warnings control\u00b6\nThe warnings.catch_warnings\ncontext manager will now optionally\nuse a context variable for warning filters. This is enabled by setting\nthe context_aware_warnings\nflag, either with the -X\ncommand-line option or an environment variable. This gives predictable\nwarnings control when using catch_warnings\ncombined with\nmultiple threads or asynchronous tasks. The flag defaults to true for the\nfree-threaded build and false for the GIL-enabled build.\n(Contributed by Neil Schemenauer and Kumar Aditya in gh-130010.)\nOther language changes\u00b6\nAll Windows code pages are now supported as \u2018cpXXX\u2019 codecs on Windows. (Contributed by Serhiy Storchaka in gh-123803.)\nImplement mixed-mode arithmetic rules combining real and complex numbers as specified by the C standard since C99. (Contributed by Sergey B Kirpichev in gh-69639.)\nMore syntax errors are now detected regardless of optimisation and the\n-O\ncommand-line option. 
This includes writes to\n__debug__\n, incorrect use of\nawait\n, and asynchronous comprehensions outside asynchronous functions. For example,\npython -O -c 'assert (__debug__ := 1)'\nor\npython -O -c 'assert await 1'\nnow produce\nSyntaxError\ns. (Contributed by Irit Katriel and Jelle Zijlstra in gh-122245 & gh-121637.)\nWhen subclassing a pure C type, the C slots for the new type are no longer replaced with a wrapped version on class creation if they are not explicitly overridden in the subclass. (Contributed by Tomasz Pytel in gh-132284.)\nBuilt-ins\u00b6\nThe\nbytes.fromhex()\nand\nbytearray.fromhex()\nmethods now accept ASCII\nbytes\nand bytes-like objects. (Contributed by Daniel Pope in gh-129349.)\nAdd class methods\nfloat.from_number()\nand\ncomplex.from_number()\nto convert a number to\nfloat\nor\ncomplex\ntype respectively. They raise a\nTypeError\nif the argument is not a real number. (Contributed by Serhiy Storchaka in gh-84978.)\nSupport underscore and comma as thousands separators in the fractional part for floating-point presentation types of the new-style string formatting (with\nformat()\nor f-strings). (Contributed by Sergey B Kirpichev in gh-87790.)\nThe\nint()\nfunction no longer delegates to\n__trunc__()\n. Classes that want to support conversion to\nint()\nmust implement either\n__int__()\nor\n__index__()\n. (Contributed by Mark Dickinson in gh-119743.)\nThe\nmap()\nfunction now has an optional keyword-only strict flag like\nzip()\nto check that all the iterables are of equal length. (Contributed by Wannes Boeykens in gh-119793.)\nThe\nmemoryview\ntype now supports subscription, making it a generic type. (Contributed by Brian Schubert in gh-126012.)\nUsing\nNotImplemented\nin a boolean context will now raise a\nTypeError\n. This has raised a\nDeprecationWarning\nsince Python 3.9. (Contributed by Jelle Zijlstra in gh-118767.)\nThree-argument\npow()\nnow tries calling\n__rpow__()\nif necessary. Previously it was only called in two-argument\npow()\nand the binary power operator. 
(Contributed by Serhiy Storchaka in gh-130104.)\nsuper\nobjects are now\ncopyable\nand\npickleable\n. (Contributed by Serhiy Storchaka in gh-125767.)\nCommand line and environment\u00b6\nThe import time flag can now track modules that are already loaded (\u2018cached\u2019), via the new\n-X importtime=2\n. When such a module is imported, the\nself\nand\ncumulative\ntimes are replaced by the string\ncached\n.\nValues above\n2\nfor\n-X importtime\nare now reserved for future use.\n(Contributed by Noah Kim and Adam Turner in gh-118655.)\nThe command-line option\n-c\nnow automatically dedents its code argument before execution. The auto-dedentation behavior mirrors\ntextwrap.dedent()\n. (Contributed by Jon Crall and Steven Sun in gh-103998.)\n-J\nis no longer a reserved flag for Jython, and now has no special meaning. (Contributed by Adam Turner in gh-133336.)\nPEP 758: Allow except\nand except*\nexpressions without brackets\u00b6\nThe except\nand except*\nexpressions\nnow allow brackets to be omitted when there are multiple exception types\nand the as\nclause is not used.\nFor example:\ntry:\n    connect_to_server()\nexcept TimeoutError, ConnectionRefusedError:\n    print('The network has ceased to be!')\n(Contributed by Pablo Galindo and Brett Cannon in PEP 758 and gh-131831.)\nPEP 765: Control flow in finally\nblocks\u00b6\nThe compiler now emits a SyntaxWarning\nwhen a return\n,\nbreak\n, or continue\nstatement has the effect of\nleaving a finally\nblock.\nThis change is specified in PEP 765.\nIn situations where this change is inconvenient (such as those where the\nwarnings are redundant due to code linting), the warning filter can be used to turn off all syntax warnings by adding\nignore::SyntaxWarning\nas a filter. 
This can be specified in combination\nwith a filter that converts other warnings to errors (for example, passing\n-Werror -Wignore::SyntaxWarning\nas CLI options, or setting\nPYTHONWARNINGS=error,ignore::SyntaxWarning\n).\nNote that applying such a filter at runtime using the warnings\nmodule\nwill only suppress the warning in code that is compiled after the filter is\nadjusted. Code that is compiled prior to the filter adjustment (for example,\nwhen a module is imported) will still emit the syntax warning.\n(Contributed by Irit Katriel in gh-130080.)\nIncremental garbage collection\u00b6\nThe cycle garbage collector is now incremental. This means that maximum pause times are reduced by an order of magnitude or more for larger heaps.\nThere are now only two generations: young and old.\nWhen gc.collect()\nis not called directly, the\nGC is invoked a little less frequently. When invoked, it\ncollects the young generation and an increment of the\nold generation, instead of collecting one or more generations.\nThe behavior of gc.collect()\nchanges slightly:\ngc.collect(1)\n: Performs an increment of garbage collection, rather than collecting generation 1.\nOther calls to\ngc.collect()\nare unchanged.\n(Contributed by Mark Shannon in gh-108362.)\nDefault interactive shell\u00b6\nThe default interactive shell now highlights Python syntax. The feature is enabled by default, save if\nPYTHON_BASIC_REPL\nor any other environment variable that disables colour is set. See Controlling color for details.\nThe default color theme for syntax highlighting strives for good contrast and exclusively uses the 4-bit VGA standard ANSI color codes for maximum compatibility. The theme can be customized using an experimental API\n_colorize.set_theme()\n. This can be called interactively or in the\nPYTHONSTARTUP\nscript. 
Note that this function has no stability guarantees, and may change or be removed.\n(Contributed by \u0141ukasz Langa in gh-131507.)\nThe default interactive shell now supports import auto-completion. This means that typing\nimport co\nand pressing Tab will suggest modules starting with\nco\n. Similarly, typing\nfrom concurrent import i\nwill suggest submodules of\nconcurrent\nstarting with\ni\n. Note that autocompletion of module attributes is not currently supported. (Contributed by Tomas Roun in gh-69605.)\nNew modules\u00b6\nannotationlib\n: For introspecting annotations. See PEP 749 for more details. (Contributed by Jelle Zijlstra in gh-119180.)\ncompression\n(including\ncompression.zstd\n): A package for compression-related modules, including a new module to support the Zstandard compression format. See PEP 784 for more details. (Contributed by Emma Harper Smith, Adam Turner, Gregory P. Smith, Tomas Roun, Victor Stinner, and Rogdham in gh-132983.)\nconcurrent.interpreters\n: Support for multiple interpreters in the standard library. See PEP 734 for more details. (Contributed by Eric Snow in gh-134939.)\nstring.templatelib\n: Support for template string literals (t-strings). See PEP 750 for more details. (Contributed by Jim Baker, Guido van Rossum, Paul Everitt, Koudai Aono, Lysandros Nikolaou, Dave Peck, Adam Turner, Jelle Zijlstra, B\u00e9n\u00e9dikt Tran, and Pablo Galindo Salgado in gh-132661.)\nImproved modules\u00b6\nargparse\u00b6\nThe default value of the program name for\nargparse.ArgumentParser\nnow reflects the way the Python interpreter was instructed to find the\n__main__\nmodule code. (Contributed by Serhiy Storchaka and Alyssa Coghlan in gh-66436.)\nIntroduced the optional suggest_on_error parameter to\nargparse.ArgumentParser\n, enabling suggestions for argument choices and subparser names if mistyped by the user. 
(Contributed by Savannah Ostrowski in gh-124456.)\nEnable color for help text, which can be disabled with the optional color parameter to\nargparse.ArgumentParser\n. This can also be controlled by environment variables. (Contributed by Hugo van Kemenade in gh-130645.)\nast\u00b6\nAdd\ncompare()\n, a function for comparing two ASTs. (Contributed by Batuhan Taskaya and Jeremy Hylton in gh-60191.)\nAdd support for\ncopy.replace()\nfor AST nodes. (Contributed by B\u00e9n\u00e9dikt Tran in gh-121141.)\nDocstrings are now removed from an optimized AST in optimization level 2. (Contributed by Irit Katriel in gh-123958.)\nThe\nrepr()\noutput for AST nodes now includes more information. (Contributed by Tomas Roun in gh-116022.)\nWhen called with an AST as input, the\nparse()\nfunction now always verifies that the root node type is appropriate. (Contributed by Irit Katriel in gh-130139.)\nAdd new options to the command-line interface:\n--feature-version\n,\n--optimize\n, and\n--show-empty\n. (Contributed by Semyon Moroz in gh-133367.)\nasyncio\u00b6\nThe function and methods named\ncreate_task()\nnow take an arbitrary list of keyword arguments. All keyword arguments are passed to the\nTask\nconstructor or the custom task factory. (See\nset_task_factory()\nfor details.) The\nname\nand\ncontext\nkeyword arguments are no longer special; the name should now be set using the\nname\nkeyword argument of the factory, and\ncontext\nmay be\nNone\n.\nThis affects the following function and methods:\nasyncio.create_task()\n,\nasyncio.loop.create_task()\n,\nasyncio.TaskGroup.create_task()\n.\n(Contributed by Thomas Grainger in gh-128307.)\nThere are two new utility functions for introspecting and printing a program\u2019s call graph:\ncapture_call_graph()\nand\nprint_call_graph()\n. See Asyncio introspection capabilities for more details. 
(Contributed by Yury Selivanov, Pablo Galindo Salgado, and \u0141ukasz Langa in gh-91048.)\ncalendar\u00b6\nBy default, today\u2019s date is highlighted in color in\ncalendar\n\u2019s command-line text output. This can be controlled by environment variables. (Contributed by Hugo van Kemenade in gh-128317.)\nconcurrent.futures\u00b6\nAdd a new executor class,\nInterpreterPoolExecutor\n, which exposes multiple Python interpreters in the same process (\u2018subinterpreters\u2019) to Python code. This uses a pool of independent Python interpreters to execute calls asynchronously.\nThis is separate from the new\ninterpreters\nmodule introduced by PEP 734. (Contributed by Eric Snow in gh-124548.)\nOn Unix platforms other than macOS, \u2018forkserver\u2019 is now the default start method for\nProcessPoolExecutor\n(replacing \u2018fork\u2019). This change does not affect Windows or macOS, where \u2018spawn\u2019 remains the default start method.\nIf the threading-incompatible fork method is required, you must explicitly request it by supplying a multiprocessing context mp_context to\nProcessPoolExecutor\n.\nSee forkserver restrictions for information and differences with the fork method and how this change may affect existing code with mutable global shared variables and/or shared objects that cannot be automatically\npickled\n.\n(Contributed by Gregory P. Smith in gh-84559.)\nAdd two new methods to\nProcessPoolExecutor\n,\nterminate_workers()\nand\nkill_workers()\n, as ways to terminate or kill all living worker processes in the given pool. (Contributed by Charles Machalow in gh-130849.)\nAdd the optional buffersize parameter to\nExecutor.map\nto limit the number of submitted tasks whose results have not yet been yielded. If the buffer is full, iteration over the iterables pauses until a result is yielded from the buffer. 
(Contributed by Enzo Bonnal and Josh Rosenberg in gh-74028.)\nconfigparser\u00b6\nconfigparser\nwill no longer write config files it cannot read, to improve security. Attempting to\nwrite()\nkeys containing delimiters or beginning with the section header pattern will raise an\nInvalidWriteError\n. (Contributed by Jacob Lincoln in gh-129270.)\ncontextvars\u00b6\nSupport the context manager protocol for\nToken\nobjects. (Contributed by Andrew Svetlov in gh-129889.)\nctypes\u00b6\nThe layout of bit fields in\nStructure\nand\nUnion\nobjects is now a closer match to platform defaults (GCC/Clang or MSVC). In particular, fields no longer overlap. (Contributed by Matthias G\u00f6rgens in gh-97702.)\nThe\nStructure._layout_\nclass attribute can now be set to help match a non-default ABI. (Contributed by Petr Viktorin in gh-97702.)\nThe class of\nStructure\n/\nUnion\nfield descriptors is now available as\nCField\n, and has new attributes to aid debugging and introspection. (Contributed by Petr Viktorin in gh-128715.)\nOn Windows, the\nCOMError\nexception is now public. (Contributed by Jun Komoda in gh-126686.)\nOn Windows, the\nCopyComPointer()\nfunction is now public. (Contributed by Jun Komoda in gh-127275.)\nAdd\nmemoryview_at()\n, a function to create a\nmemoryview\nobject that refers to the supplied pointer and length. This works like\nctypes.string_at()\nexcept it avoids a buffer copy, and is typically useful when implementing pure Python callback functions that are passed dynamically-sized buffers. (Contributed by Rian Hunter in gh-112018.)\nComplex types,\nc_float_complex\n,\nc_double_complex\n, and\nc_longdouble_complex\n, are now available if both the compiler and the\nlibffi\nlibrary support complex C types. (Contributed by Sergey B Kirpichev in gh-61103.)\nAdd\nctypes.util.dllist()\nfor listing the shared libraries loaded by the current process. 
(Contributed by Brian Ward in gh-119349.)
Move the ctypes.POINTER() types cache from a global internal cache (_pointer_type_cache) to the _CData.__pointer_type__ attribute of the corresponding ctypes types. This stops the cache from growing without limit in some situations. (Contributed by Sergey Miryanov in gh-100926.)
The py_object type now supports subscription, making it a generic type. (Contributed by Brian Schubert in gh-132168.)
ctypes now supports free-threading builds. (Contributed by Kumar Aditya and Peter Bierma in gh-127945.)
curses¶
Add the assume_default_colors() function, a refinement of the use_default_colors() function which allows changing the color pair 0. (Contributed by Serhiy Storchaka in gh-133139.)
datetime¶
Add the strptime() method to the datetime.date and datetime.time classes. (Contributed by Wannes Boeykens in gh-41431.)
decimal¶
Add Decimal.from_number() as an alternative constructor for Decimal. (Contributed by Serhiy Storchaka in gh-121798.)
Expose IEEEContext() to support creation of contexts corresponding to the IEEE 754 (2008) decimal interchange formats. (Contributed by Sergey B Kirpichev in gh-53032.)
difflib¶
dis¶
Add support for rendering full source location information of instructions, rather than only the line number. This feature is added to the following interfaces via the show_positions keyword argument:
This feature is also exposed via dis --show-positions. (Contributed by Bénédikt Tran in gh-123165.)
Add the dis --specialized command-line option to show specialized bytecode. (Contributed by Bénédikt Tran in gh-127413.)
errno¶
faulthandler¶
Add support for printing the C stack trace on systems that support it via the new dump_c_stack() function or via the c_stack argument in faulthandler.enable(). (Contributed by Peter Bierma in gh-127604.)
fnmatch¶
Add filterfalse(), a function to reject names matching a given pattern.
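A minimal sketch of what the new fnmatch.filterfalse() provides, written with the long-standing fnmatch.fnmatch() so it runs on earlier versions too:

```python
import fnmatch

names = ["app.py", "notes.txt", "test_app.py", "README.md"]

# Equivalent of fnmatch.filterfalse(names, "*.py") on 3.14+:
# keep only the names that do NOT match the pattern.
rejected = [n for n in names if not fnmatch.fnmatch(n, "*.py")]
print(rejected)
```

filterfalse() is the complement of the existing fnmatch.filter(), mirroring the itertools.filterfalse() naming.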
(Contributed by Bénédikt Tran in gh-74598.)
fractions¶
A Fraction object may now be constructed from any object with an as_integer_ratio() method. (Contributed by Serhiy Storchaka in gh-82017.)
Add Fraction.from_number() as an alternative constructor for Fraction. (Contributed by Serhiy Storchaka in gh-121797.)
functools¶
Add the Placeholder sentinel. This may be used with the partial() or partialmethod() functions to reserve a place for positional arguments in the returned partial object. (Contributed by Dominykas Grigonis in gh-119127.)
Allow the initial parameter of reduce() to be passed as a keyword argument. (Contributed by Sayandip Dutta in gh-125916.)
getopt¶
getpass¶
graphlib¶
Allow TopologicalSorter.prepare() to be called more than once as long as sorting has not started. (Contributed by Daniel Pope in gh-130914.)
heapq¶
The heapq module has improved support for working with max-heaps, via the following new functions:
hmac¶
http¶
Directory lists and error pages generated by the http.server module allow the browser to apply its default dark mode. (Contributed by Yorik Hansen in gh-123430.)
The http.server module now supports serving over HTTPS using the http.server.HTTPSServer class. This functionality is exposed by the command-line interface (python -m http.server) through the following options:
--tls-cert: Path to the TLS certificate file.
--tls-key: Optional path to the private key file.
--tls-password-file: Optional path to the password file for the private key.
(Contributed by Semyon Moroz in gh-85162.)
imaplib¶
Add IMAP4.idle(), implementing the IMAP4 IDLE command as defined in RFC 2177. (Contributed by Forest in gh-55454.)
inspect¶
signature() takes a new argument annotation_format to control the annotationlib.Format used for representing annotations.
(Contributed by Jelle Zijlstra in gh-101552.)
Signature.format() takes a new argument unquote_annotations. If true, string annotations are displayed without surrounding quotes. (Contributed by Jelle Zijlstra in gh-101552.)
Add the ispackage() function to determine whether an object is a package or not. (Contributed by Zhikang Yan in gh-125634.)
io¶
Reading text from a non-blocking stream with read may now raise a BlockingIOError if the operation cannot immediately return bytes. (Contributed by Giovanni Siragusa in gh-109523.)
Add the Reader and Writer protocols as simpler alternatives to the pseudo-protocols typing.IO, typing.TextIO, and typing.BinaryIO. (Contributed by Sebastian Rittau in gh-127648.)
json¶
Add exception notes for JSON serialization errors that allow identifying the source of the error. (Contributed by Serhiy Storchaka in gh-122163.)
Allow using the json module as a script using the -m switch: python -m json. This is now preferred to python -m json.tool, which is soft deprecated. See the JSON command-line interface documentation. (Contributed by Trey Hunner in gh-122873.)
By default, the output of the JSON command-line interface is highlighted in color. This can be controlled by environment variables. (Contributed by Tomas Roun in gh-131952.)
linecache¶
logging.handlers¶
QueueListener objects now support the context manager protocol. (Contributed by Charles Machalow in gh-132106.)
QueueListener.start now raises a RuntimeError if the listener is already started. (Contributed by Charles Machalow in gh-132106.)
math¶
Add more detailed error messages for domain errors in the module. (Contributed by Charlie Zhao and Sergey B Kirpichev in gh-101410.)
mimetypes¶
Add a public command-line interface for the module, invoked via python -m mimetypes.
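The mimetypes registry that the command-line interface (and the MIME-type additions below) builds on can be queried directly; a small example using types that have long been registered:

```python
import mimetypes

# guess_type() maps a filename to a (type, encoding) pair
# using the module's type registry.
mime, encoding = mimetypes.guess_type("index.html")
print(mime)  # text/html

# Compressed files report the compression as the encoding.
print(mimetypes.guess_type("archive.tar.gz"))  # ('application/x-tar', 'gzip')
```

The new python -m mimetypes command exposes the same lookups from the shell.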
(Contributed by Oleg Iarygin and Hugo van Kemenade in gh-93096.)
Add several new MIME types based on RFCs and common usage:
Microsoft and RFC 8081 MIME types for fonts:
Embedded OpenType: application/vnd.ms-fontobject
OpenType Layout (OTF): font/otf
TrueType: font/ttf
WOFF 1.0: font/woff
WOFF 2.0: font/woff2
RFC 9559 MIME types for Matroska audiovisual data container structures:
audio with no video: audio/matroska (.mka)
video: video/matroska (.mkv)
stereoscopic video: video/matroska-3d (.mk3d)
Images with RFCs:
RFC 1494: CCITT Group 3 (.g3)
RFC 3362: Real-time Facsimile, T.38 (.t38)
RFC 3745: JPEG 2000 (.jp2), extension (.jpx) and compound (.jpm)
RFC 3950: Tag Image File Format Fax eXtended, TIFF-FX (.tfx)
RFC 4047: Flexible Image Transport System (.fits)
RFC 7903: Enhanced Metafile (.emf) and Windows Metafile (.wmf)
Other MIME type additions and changes:
RFC 2361: Change type for .avi to video/vnd.avi and for .wav to audio/vnd.wave
RFC 4337: Add MPEG-4 audio/mp4 (.m4a)
RFC 5334: Add Ogg media (.oga, .ogg and .ogx)
RFC 6713: Add gzip application/gzip (.gz)
RFC 9639: Add FLAC audio/flac (.flac)
RFC 9512: Add the application/yaml MIME type for YAML files (.yaml and .yml)
Add 7z application/x-7z-compressed (.7z)
Add Android Package application/vnd.android.package-archive (.apk) when not strict
Add deb application/x-debian-package (.deb)
Add glTF binary model/gltf-binary (.glb)
Add glTF JSON/ASCII model/gltf+json (.gltf)
Add M4V video/x-m4v (.m4v)
Add PHP application/x-httpd-php (.php)
Add RAR application/vnd.rar (.rar)
Add RPM application/x-rpm (.rpm)
Add STL model/stl (.stl)
Add Windows Media Video video/x-ms-wmv (.wmv)
De facto: Add WebM audio/webm (.weba)
ECMA-376: Add .docx, .pptx and .xlsx types
OASIS: Add OpenDocument .odg, .odp, .ods and .odt types
W3C: Add EPUB application/epub+zip (.epub)
(Contributed by Sahil Prajapati and Hugo van Kemenade in gh-84852, by
Sasha “Nelie” Chernykh and Hugo van Kemenade in gh-132056, and by Hugo van Kemenade in gh-89416, gh-85957, and gh-129965.)
multiprocessing¶
On Unix platforms other than macOS, 'forkserver' is now the default start method (replacing 'fork'). This change does not affect Windows or macOS, where 'spawn' remains the default start method.
If the threading-incompatible fork method is required, you must explicitly request it via a context from get_context() (preferred) or change the default via set_start_method(). See forkserver restrictions for information on the differences from the fork method and how this change may affect existing code with mutable global shared variables and/or shared objects that cannot be automatically pickled. (Contributed by Gregory P. Smith in gh-84559.)
multiprocessing's 'forkserver' start method now authenticates its control socket, to avoid relying solely on filesystem permissions to restrict which other processes could cause the forkserver to spawn workers and run code. (Contributed by Gregory P. Smith in gh-97514.)
The multiprocessing proxy objects for list and dict types gain previously overlooked missing methods:
clear() and copy() for proxies of list;
fromkeys(), reversed(d), d | {}, {} | d, and d |= {'b': 2} for proxies of dict.
(Contributed by Roy Hyunjin Han in gh-103134.)
Add support for shared set objects via SyncManager.set(). The set() method in Manager() is now available. (Contributed by Mingyu Park in gh-129949.)
Add the interrupt() method to multiprocessing.Process objects, which terminates the child process by sending SIGINT. This enables finally clauses to print a stack trace for the terminated process. (Contributed by Artem Pulkin in gh-131913.)
operator¶
Add is_none() and is_not_none() as a pair of functions, such that operator.is_none(obj) is equivalent to obj is None and operator.is_not_none(obj) is equivalent to obj is not None.
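The equivalences stated above pin down the semantics exactly; a sketch of stand-in definitions (hypothetical local functions, useful as drop-ins on versions without the operator additions):

```python
# Stand-ins matching the documented equivalences for
# operator.is_none() / operator.is_not_none() on 3.14+.
def is_none(obj):
    return obj is None

def is_not_none(obj):
    return obj is not None

assert is_none(None) and not is_none(0)
assert is_not_none("") and not is_not_none(None)
```

Having them as named functions makes them usable where a callable is required, e.g. as a key or predicate argument.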
(Contributed by Raymond Hettinger and Nico Mexis in gh-115808.)
os¶
Add the reload_environ() function to update os.environ and os.environb with changes to the environment made by os.putenv(), by os.unsetenv(), or made outside Python in the same process. (Contributed by Victor Stinner in gh-120057.)
Add the SCHED_DEADLINE and SCHED_NORMAL constants to the os module. (Contributed by James Roy in gh-127688.)
Add the readinto() function to read into a buffer object from a file descriptor. (Contributed by Cody Maloney in gh-129205.)
os.path¶
The strict parameter to realpath() accepts a new value, ALLOW_MISSING. If used, errors other than FileNotFoundError will be re-raised; the resulting path can be missing but it will be free of symlinks. (Contributed by Petr Viktorin for CVE 2025-4517.)
pathlib¶
Add methods to pathlib.Path to recursively copy or move files and directories:
copy() copies a file or directory tree to a destination.
copy_into() copies into a destination directory.
move() moves a file or directory tree to a destination.
move_into() moves into a destination directory.
(Contributed by Barney Gale in gh-73991.)
Add the info attribute, which stores an object implementing the new pathlib.types.PathInfo protocol. The object supports querying the file type and internally caching stat() results. Path objects generated by iterdir() are initialized with file type information gleaned from scanning the parent directory. (Contributed by Barney Gale in gh-125413.)
pdb¶
The pdb module now supports remote attaching to a running Python process using a new -p PID command-line option: python -m pdb -p 1234. This will connect to the Python process with the given PID and allow you to debug it interactively.
Note that, due to how the Python interpreter works, attaching to a remote process that is blocked in a system call or waiting for I/O will only work once the next bytecode instruction is executed or when the process receives a signal.
This feature uses PEP 768 and the new sys.remote_exec() function to attach to the remote process and send the PDB commands to it. (Contributed by Matt Wozniski and Pablo Galindo in gh-131591.)
Hardcoded breakpoints (breakpoint() and set_trace()) now reuse the most recent Pdb instance that calls set_trace(), instead of creating a new one each time. As a result, all instance-specific data such as display and commands are preserved across hardcoded breakpoints. (Contributed by Tian Gao in gh-121450.)
Add a new argument mode to pdb.Pdb. Disable the restart command when pdb is in inline mode. (Contributed by Tian Gao in gh-123757.)
A confirmation prompt will be shown when the user tries to quit pdb in inline mode. y, Y, or EOF will confirm the quit and call sys.exit(), instead of raising bdb.BdbQuit. (Contributed by Tian Gao in gh-124704.)
Inline breakpoints like breakpoint() or pdb.set_trace() will always stop the program at the calling frame, ignoring the skip pattern (if any). (Contributed by Tian Gao in gh-130493.)
Pressing Tab at the beginning of the line in pdb multi-line input will now fill in a 4-space indentation, instead of inserting a \t character. (Contributed by Tian Gao in gh-130471.)
Auto-indent is introduced in pdb multi-line input. It will either keep the indentation of the last line or insert a 4-space indentation when it detects a new code block. (Contributed by Tian Gao in gh-133350.)
$_asynctask is added to access the current asyncio task if applicable. (Contributed by Tian Gao in gh-124367.)
pdb.set_trace_async() is added to support debugging asyncio coroutines. await statements are supported with this function. (Contributed by Tian Gao in gh-132576.)
Source code displayed in pdb will be syntax-highlighted.
This feature can be controlled using the same methods as the default interactive shell, in addition to the newly added colorize argument of pdb.Pdb. (Contributed by Tian Gao and Łukasz Langa in gh-133355.)
pickle¶
Set the default protocol version of the pickle module to 5. For more details, see pickle protocols.
Add exception notes for pickle serialization errors that allow identifying the source of the error. (Contributed by Serhiy Storchaka in gh-122213.)
platform¶
Add invalidate_caches(), a function to invalidate cached results in the platform module. (Contributed by Bénédikt Tran in gh-122549.)
pydoc¶
Annotations in help output are now usually displayed in a format closer to that in the original source. (Contributed by Jelle Zijlstra in gh-101552.)
re¶
Support \z as a synonym for \Z in regular expressions. It is interpreted unambiguously in many other regular expression engines, unlike \Z, which has subtly different behavior. (Contributed by Serhiy Storchaka in gh-133306.)
\B in regular expressions now matches the empty input string, meaning that it is now always the opposite of \b. (Contributed by Serhiy Storchaka in gh-124130.)
socket¶
Improve and fix support for Bluetooth sockets:
Fix support for Bluetooth sockets on NetBSD and DragonFly BSD. (Contributed by Serhiy Storchaka in gh-132429.)
Fix support for BTPROTO_HCI on FreeBSD. (Contributed by Victor Stinner in gh-111178.)
Add support for BTPROTO_SCO on FreeBSD. (Contributed by Serhiy Storchaka in gh-85302.)
Add support for cid and bdaddr_type in the address for BTPROTO_L2CAP on FreeBSD. (Contributed by Serhiy Storchaka in gh-132429.)
Add support for channel in the address for BTPROTO_HCI on Linux. (Contributed by Serhiy Storchaka in gh-70145.)
Accept an integer as the address for BTPROTO_HCI on Linux. (Contributed by Serhiy Storchaka in gh-132099.)
Return cid in getsockname() for BTPROTO_L2CAP.
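Returning to the re changes above: in Python, \Z already matches only at the very end of the string (the behavior other engines spell \z), which is what motivates adding \z as an unambiguous synonym. A quick illustration using the long-standing \Z spelling:

```python
import re

# \Z anchors at the absolute end of the string.
assert re.search(r"world\Z", "hello world")

# Unlike Perl's \Z, Python's \Z does not tolerate a trailing newline.
assert re.search(r"world\Z", "hello world\n") is None
```

On 3.14+ the pattern r"world\z" behaves identically, and reads unambiguously to anyone coming from PCRE or similar engines.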
(Contributed by Serhiy Storchaka in gh-132429.)
Add many new constants. (Contributed by Serhiy Storchaka in gh-132734.)
ssl¶
struct¶
symtable¶
sys¶
The previously undocumented special function sys.getobjects(), which only exists in specialized builds of Python, may now return objects from interpreters other than the one it is called in. (Contributed by Eric Snow in gh-125286.)
Add sys._is_immortal() for determining whether an object is immortal. (Contributed by Peter Bierma in gh-128509.)
On FreeBSD, sys.platform no longer contains the major version number. It is always 'freebsd', instead of 'freebsd13' or 'freebsd14'. (Contributed by Michael Osipov in gh-129393.)
Raise DeprecationWarning for sys._clear_type_cache(). This function was deprecated in Python 3.13, but it did not raise a runtime warning.
Add sys.remote_exec() to implement the new external debugger interface. See PEP 768 for details. (Contributed by Pablo Galindo Salgado, Matt Wozniski, and Ivona Stojanovic in gh-131591.)
Add the sys._jit namespace, containing utilities for introspecting just-in-time compilation. (Contributed by Brandt Bucher in gh-133231.)
sys.monitoring¶
Add two new monitoring events, BRANCH_LEFT and BRANCH_RIGHT. These replace and deprecate the BRANCH event. (Contributed by Mark Shannon in gh-122548.)
sysconfig¶
Add the ABIFLAGS key to get_config_vars() on Windows. (Contributed by Xuehai Pan in gh-131799.)
tarfile¶
data_filter() now normalizes symbolic link targets in order to avoid path traversal attacks. (Contributed by Petr Viktorin in gh-127987 and CVE 2025-4138.)
extractall() now skips fixing up directory attributes when a directory was removed or replaced by another kind of file.
(Contributed by Petr Viktorin in gh-127987 and CVE 2024-12718.)
extract() and extractall() now (re-)apply the extraction filter when substituting a link (hard or symbolic) with a copy of another archive member, and when fixing up directory attributes. The former raises a new exception, LinkFallbackError. (Contributed by Petr Viktorin for CVE 2025-4330 and CVE 2024-12718.)
extract() and extractall() no longer extract rejected members when errorlevel() is zero. (Contributed by Matt Prodani and Petr Viktorin in gh-112887 and CVE 2025-4435.)
threading¶
threading.Thread.start() now sets the operating system thread name to threading.Thread.name. (Contributed by Victor Stinner in gh-59705.)
tkinter¶
turtle¶
Add context managers for turtle.fill(), turtle.poly(), and turtle.no_animation(). (Contributed by Marie Roald and Yngve Mardal Moe in gh-126350.)
types¶
types.UnionType is now an alias for typing.Union. See below for more details. (Contributed by Jelle Zijlstra in gh-105499.)
typing¶
The types.UnionType and typing.Union types are now aliases for each other, meaning that both old-style unions (created with Union[int, str]) and new-style unions (int | str) now create instances of the same runtime type. This unifies the behavior between the two syntaxes, but leads to some differences in behavior that may affect users who introspect types at runtime:
Both syntaxes for creating a union now produce the same string representation in repr(). For example, repr(Union[int, str]) is now "int | str" instead of "typing.Union[int, str]".
Unions created using the old syntax are no longer cached. Previously, running Union[int, str] multiple times would return the same object (Union[int, str] is Union[int, str] would be True), but now it will return two different objects. Use == to compare unions for equality, not is. New-style unions have never been cached this way.
This change could increase memory usage for some programs that use a large number of unions created by subscripting typing.Union. However, several factors offset this cost: unions used in annotations are no longer evaluated by default in Python 3.14 because of PEP 649; an instance of types.UnionType is itself much smaller than the object returned by Union[] was on prior Python versions; and removing the cache also saves some space. It is therefore unlikely that this change will cause a significant increase in memory usage for most users.
Previously, old-style unions were implemented using the private class typing._UnionGenericAlias. This class is no longer needed for the implementation, but it has been retained for backward compatibility, with removal scheduled for Python 3.17. Users should use documented introspection helpers like get_origin() and typing.get_args() instead of relying on private implementation details.
It is now possible to use typing.Union itself in isinstance() checks. For example, isinstance(int | str, typing.Union) will return True; previously this raised TypeError.
The __args__ attribute of typing.Union objects is no longer writable.
It is no longer possible to set any attributes on Union objects. This only ever worked for dunder attributes on previous versions, was never documented to work, and was subtly broken in many cases.
(Contributed by Jelle Zijlstra in gh-105499.)
TypeAliasType now supports star unpacking.
unicodedata¶
The Unicode database has been updated to Unicode 16.0.0.
unittest¶
unittest output is now colored by default. This can be controlled by environment variables. (Contributed by Hugo van Kemenade in gh-127221.)
unittest discovery supports a namespace package as the start directory again. This support was removed in Python 3.11.
(Contributed by Jacob Walls in gh-80958.)
A number of new methods were added to the TestCase class that provide more specialized tests:
assertHasAttr() and assertNotHasAttr() check whether the object has a particular attribute.
assertIsSubclass() and assertNotIsSubclass() check whether the object is a subclass of a particular class, or of one of a tuple of classes.
assertStartsWith(), assertNotStartsWith(), assertEndsWith() and assertNotEndsWith() check whether the Unicode or byte string starts or ends with particular strings.
(Contributed by Serhiy Storchaka in gh-71339.)
urllib¶
Upgrade the HTTP digest authentication algorithm for urllib.request by supporting SHA-256 digest authentication as specified in RFC 7616. (Contributed by Calvin Bui in gh-128193.)
Improve ergonomics and standards compliance when parsing and emitting file: URLs.
In url2pathname():
Accept a complete URL when the new require_scheme argument is set to true.
Discard the URL authority if it matches the local hostname.
Discard the URL authority if it resolves to a local IP address when the new resolve_host argument is set to true.
Discard URL query and fragment components.
Raise URLError if a URL authority isn't local, except on Windows, where a UNC path is returned as before.
In pathname2url():
Return a complete URL when the new add_scheme argument is set to true.
Include an empty URL authority when a path begins with a slash. For example, the path /etc/hosts is converted to the URL ///etc/hosts.
On Windows, drive letters are no longer converted to uppercase, and : characters not following a drive letter no longer cause an OSError exception to be raised. (Contributed by Barney Gale in gh-125866.)
uuid¶
Add support for UUID versions 6, 7, and 8 via uuid6(), uuid7(), and uuid8() respectively, as specified in RFC 9562.
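The new TestCase helpers listed above correspond to checks that previously had to be spelled with assertTrue(); a sketch of those pre-3.14 equivalents (the comments name the 3.14 helper each line maps to):

```python
import unittest

class ExampleTests(unittest.TestCase):
    def test_equivalents(self):
        # Pre-3.14 spellings of what the new helpers check:
        self.assertTrue(hasattr([], "append"))      # assertHasAttr([], "append")
        self.assertTrue(issubclass(bool, int))      # assertIsSubclass(bool, int)
        self.assertTrue("python".startswith("py"))  # assertStartsWith("python", "py")
        self.assertTrue("python".endswith("on"))    # assertEndsWith("python", "on")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The dedicated methods mainly improve the failure messages over a bare assertTrue().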
(Contributed by Bénédikt Tran in gh-89083.)
NIL and MAX are now available to represent the Nil and Max UUID formats as defined by RFC 9562. (Contributed by Nick Pope in gh-128427.)
Allow generating multiple UUIDs simultaneously on the command line via python -m uuid --count. (Contributed by Simon Legner in gh-131236.)
webbrowser¶
Names in the BROWSER environment variable can now refer to already registered browsers for the webbrowser module, instead of always generating a new browser command. This makes it possible to set BROWSER to the value of one of the supported browsers on macOS.
zipfile¶
Add ZipInfo._for_archive, a method to resolve suitable defaults for a ZipInfo object as used by ZipFile.writestr. (Contributed by Bénédikt Tran in gh-123424.)
ZipFile.writestr() now respects the SOURCE_DATE_EPOCH environment variable in order to better support reproducible builds. (Contributed by Jiahao Li in gh-91279.)
Optimizations¶
The import time for several standard library modules has been improved, including annotationlib, ast, asyncio, base64, cmd, csv, gettext, importlib.util, locale, mimetypes, optparse, pickle, pprint, pstats, shlex, socket, string, subprocess, threading, tomllib, types, and zipfile. (Contributed by Adam Turner, Bénédikt Tran, Chris Markiewicz, Eli Schwartz, Hugo van Kemenade, Jelle Zijlstra, and others in gh-118761.)
The interpreter now avoids some reference count modifications internally when it is safe to do so. This can lead to different values being returned from sys.getrefcount() and Py_REFCNT() compared to previous versions of Python. See below for details.
asyncio¶
Standard benchmark results have improved by 10-20% following the implementation of a new per-thread doubly linked list for native tasks, which also reduces memory usage.
This enables external introspection tools such as python -m asyncio pstree to introspect the call graph of asyncio tasks running in all threads. (Contributed by Kumar Aditya in gh-107803.)
The module now has first-class support for free-threading builds. This enables parallel execution of multiple event loops across different threads, scaling linearly with the number of threads. (Contributed by Kumar Aditya in gh-128002.)
base64¶
b16decode() is now up to six times faster. (Contributed by Bénédikt Tran, Chris Markiewicz, and Adam Turner in gh-118761.)
bdb¶
The basic debugger now has a sys.monitoring-based backend, which can be selected by passing 'monitoring' to the Bdb class's new backend parameter. (Contributed by Tian Gao in gh-124533.)
difflib¶
The IS_LINE_JUNK() function is now up to twice as fast. (Contributed by Adam Turner and Semyon Moroz in gh-130167.)
gc¶
The new incremental garbage collector means that maximum pause times are reduced by an order of magnitude or more for larger heaps.
Because of this optimization, the meaning of the results of get_threshold() and set_threshold() has changed, along with get_count() and get_stats().
For backwards compatibility, get_threshold() continues to return a three-item tuple. The first value is the threshold for young collections, as before; the second value determines the rate at which the old collection is scanned (the default is 10, and higher values mean that the old collection is scanned more slowly). The third value is now meaningless and is always zero. set_threshold() now ignores any items after the second.
get_count() and get_stats() continue to return the same format of results.
The only difference is that instead of the results referring to the young, aging and old generations, the results refer to the young generation and the aging and collecting spaces of the old generation.
In summary, code that attempted to manipulate the behavior of the cycle GC may not work exactly as intended, but it is very unlikely to be harmful. All other code will work just fine. (Contributed by Mark Shannon in gh-108362.)
io¶
pathlib¶
Path.read_bytes now uses unbuffered mode to open files, which is between 9% and 17% faster when reading a file in full. (Contributed by Cody Maloney in gh-120754.)
pdb¶
pdb now supports two backends, based on either sys.settrace() or sys.monitoring. Using the pdb CLI or breakpoint() will always use the sys.monitoring backend. Explicitly instantiating pdb.Pdb and its derived classes will use the sys.settrace() backend by default, which is configurable. (Contributed by Tian Gao in gh-124533.)
textwrap¶
Optimize the dedent() function, improving performance by an average of 2.4x, with larger improvements for bigger inputs, and fix a bug with incomplete normalization of blank lines containing whitespace characters other than space and tab.
uuid¶
zlib¶
On Windows, zlib-ng is now used as the implementation of the zlib module in the default binaries. There are no known incompatibilities between zlib-ng and the previously used zlib implementation. This should result in better performance at all compression levels. It is worth noting that zlib.Z_BEST_SPEED (1) may result in significantly less compression than the previous implementation, whilst also significantly reducing the time taken to compress.
(Contributed by Steve Dower in gh-91349.)
Removed¶
argparse¶
Remove the type, choices, and metavar parameters of BooleanOptionalAction. These have been deprecated since Python 3.12.
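The textwrap.dedent() optimization above changes speed, not behavior; for reference, its contract is simply to strip the longest common leading whitespace:

```python
import textwrap

indented = "    def greet():\n        print('hi')\n"

# dedent() removes the whitespace prefix common to all
# (non-blank) lines; nested indentation is preserved.
clean = textwrap.dedent(indented)
print(clean)
```

The 3.14 fix also normalizes blank lines containing unusual whitespace (e.g. form feeds), which previously escaped normalization.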
(Contributed by Nikita Sobolev in gh-118805.)
Calling add_argument_group() on an argument group now raises a ValueError. Similarly, calling add_argument_group() or add_mutually_exclusive_group() on a mutually exclusive group now raises a ValueError. This 'nesting' was never supported, often failed to work correctly, and was unintentionally exposed through inheritance. This functionality has been deprecated since Python 3.11. (Contributed by Savannah Ostrowski in gh-127186.)
ast¶
Remove the following classes, which have been deprecated aliases of Constant since Python 3.8 and have emitted deprecation warnings since Python 3.12:
Bytes
Ellipsis
NameConstant
Num
Str
As a consequence of these removals, user-defined visit_Num, visit_Str, visit_Bytes, visit_NameConstant and visit_Ellipsis methods on custom NodeVisitor subclasses will no longer be called when the NodeVisitor subclass is visiting an AST. Define a visit_Constant method instead.
(Contributed by Alex Waygood in gh-119562.)
Remove the following deprecated properties on ast.Constant, which were present for compatibility with the now-removed AST classes:
Constant.n
Constant.s
Use Constant.value instead.
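A minimal sketch of the visit_Constant pattern that replaces the removed per-type visitor hooks; all literal kinds now arrive through a single method:

```python
import ast

tree = ast.parse("x = 'hello'")

class ConstantCollector(ast.NodeVisitor):
    # visit_Constant handles what visit_Str, visit_Num, visit_Bytes,
    # visit_NameConstant and visit_Ellipsis used to handle separately.
    def __init__(self):
        self.values = []

    def visit_Constant(self, node):
        self.values.append(node.value)

collector = ConstantCollector()
collector.visit(tree)
print(collector.values)
```

Inside visit_Constant, node.value (replacing the removed Constant.n / Constant.s properties) holds the literal itself, so its Python type distinguishes strings from numbers, bytes, and so on.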
(Contributed by Alex Waygood in gh-119562.)
asyncio¶
Remove the following classes, methods, and functions, which have been deprecated since Python 3.12:
AbstractChildWatcher
FastChildWatcher
MultiLoopChildWatcher
PidfdChildWatcher
SafeChildWatcher
ThreadedChildWatcher
AbstractEventLoopPolicy.get_child_watcher()
AbstractEventLoopPolicy.set_child_watcher()
get_child_watcher()
set_child_watcher()
(Contributed by Kumar Aditya in gh-120804.)
asyncio.get_event_loop() now raises a RuntimeError if there is no current event loop, and no longer implicitly creates an event loop. (Contributed by Kumar Aditya in gh-126353.)
There are a few patterns that use asyncio.get_event_loop(); most of them can be replaced with asyncio.run().
If you're running an async function, simply use asyncio.run().
Before:

async def main():
    ...

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main())
finally:
    loop.close()

After:

async def main():
    ...

asyncio.run(main())

If you need to start something, for example, a server listening on a socket, and then run forever, use asyncio.run() and an asyncio.Event.
Before:

def start_server(loop):
    ...

loop = asyncio.get_event_loop()
try:
    start_server(loop)
    loop.run_forever()
finally:
    loop.close()

After:

def start_server(loop):
    ...

async def main():
    start_server(asyncio.get_running_loop())
    await asyncio.Event().wait()

asyncio.run(main())

If you need to run something in an event loop, then run some blocking code around it, use asyncio.Runner.
Before:

async def operation_one():
    ...

def blocking_code():
    ...

async def operation_two():
    ...

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(operation_one())
    blocking_code()
    loop.run_until_complete(operation_two())
finally:
    loop.close()

After:

async def operation_one():
    ...

def blocking_code():
    ...

async def operation_two():
    ...
with asyncio.Runner() as runner:
    runner.run(operation_one())
    blocking_code()
    runner.run(operation_two())

email¶
Remove email.utils.localtime()'s isdst parameter, which was deprecated and has been ignored since Python 3.12. (Contributed by Hugo van Kemenade in gh-118798.)
importlib.abc¶
Remove deprecated importlib.abc classes:
ResourceReader (use importlib.resources.abc.TraversableResources)
Traversable (use importlib.resources.abc.Traversable)
TraversableResources (use importlib.resources.abc.TraversableResources)
(Contributed by Jason R. Coombs and Hugo van Kemenade in gh-93963.)
itertools¶
Remove support for copy, deepcopy, and pickle operations from itertools iterators. These have emitted a DeprecationWarning since Python 3.12. (Contributed by Raymond Hettinger in gh-101588.)
pathlib¶
Remove support for passing additional keyword arguments to Path. In previous versions, any such arguments were ignored. (Contributed by Barney Gale in gh-74033.)
Remove support for passing additional positional arguments to PurePath.relative_to() and is_relative_to(). In previous versions, any such arguments were joined onto other. (Contributed by Barney Gale in gh-78707.)
pkgutil¶
Remove the get_loader() and find_loader() functions, which have been deprecated since Python 3.12. (Contributed by Bénédikt Tran in gh-97850.)
pty¶
Remove the master_open() and slave_open() functions, which have been deprecated since Python 3.12. Use pty.openpty() instead. (Contributed by Nikita Sobolev in gh-118824.)
sqlite3¶
Remove version and version_info from the sqlite3 module; use sqlite_version and sqlite_version_info for the actual version number of the runtime SQLite library. (Contributed by Hugo van Kemenade in gh-118924.)
Using a sequence of parameters with named placeholders now raises a ProgrammingError, having been deprecated since Python 3.12. (Contributed by Erlend E.
Aasland in gh-118928 and gh-101693.)\nurllib\u00b6\nRemove the Quoter class from urllib.parse, which has been deprecated since Python 3.11. (Contributed by Nikita Sobolev in gh-118827.)\nRemove the URLopener and FancyURLopener classes from urllib.request, which have been deprecated since Python 3.3. myopener.open() can be replaced with urlopen(), and myopener.retrieve() can be replaced with urlretrieve(). Customisations to the opener classes can be replaced by passing customized handlers to build_opener(). (Contributed by Barney Gale in gh-84850.)\nDeprecated\u00b6\nNew deprecations\u00b6\nPassing a complex number as the real or imag argument in the complex() constructor is now deprecated; complex numbers should only be passed as a single positional argument. (Contributed by Serhiy Storchaka in gh-109218.)\nargparse: Passing the undocumented keyword argument prefix_chars to the add_argument_group() method is now deprecated. (Contributed by Savannah Ostrowski in gh-125563.)\nDeprecated the argparse.FileType type converter. Anything relating to resource management should be handled downstream, after the arguments have been parsed. (Contributed by Serhiy Storchaka in gh-58032.)\nasyncio: asyncio.iscoroutinefunction() is now deprecated and will be removed in Python 3.16; use inspect.iscoroutinefunction() instead. (Contributed by Jiahao Li and Kumar Aditya in gh-122875.)\nThe asyncio policy system is deprecated and will be removed in Python 3.16. In particular, the following classes and functions are deprecated. Users should use asyncio.run() or asyncio.Runner with the loop_factory argument to use the desired event loop implementation.\nFor example, to use asyncio.SelectorEventLoop on Windows:\nimport asyncio\nasync def main(): ...\nasyncio.run(main(), loop_factory=asyncio.SelectorEventLoop)\n(Contributed by Kumar Aditya in gh-127949.)\ncodecs: The codecs.open() function is now deprecated, and will be removed in a future version of Python. Use open() instead.
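The codecs.open() deprecation is a mechanical migration to the builtin open(), which takes the same encoding parameter. A minimal sketch (the file path and contents are invented for illustration):

```python
import codecs
import os
import tempfile

# Invented file path, for illustration only.
path = os.path.join(tempfile.mkdtemp(), "greeting.txt")

# Legacy spelling (deprecated in Python 3.14, where it warns):
with codecs.open(path, "w", encoding="utf-8") as f:
    f.write("caf\u00e9")

# Modern spelling: the builtin open() with an explicit encoding.
with open(path, encoding="utf-8") as f:
    text = f.read()

print(text)  # café
```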
(Contributed by Inada Naoki in gh-133036.)\nctypes: On non-Windows platforms, setting Structure._pack_ to use an MSVC-compatible default memory layout is now deprecated in favor of setting Structure._layout_ to 'ms', and will be removed in Python 3.19. (Contributed by Petr Viktorin in gh-131747.)\nCalling ctypes.POINTER() on a string is now deprecated. Use incomplete types for self-referential structures. Also, the internal ctypes._pointer_type_cache is deprecated. See ctypes.POINTER() for updated implementation details. (Contributed by Sergey Myrianov in gh-100926.)\nfunctools: Calling the Python implementation of functools.reduce() with function or sequence as keyword arguments is now deprecated; the parameters will be made positional-only in Python 3.16. (Contributed by Kirill Podoprigora in gh-121676.)\nlogging: Support for custom logging handlers with the strm argument is now deprecated and scheduled for removal in Python 3.16. Define handlers with the stream argument instead. (Contributed by Mariusz Felisiak in gh-115032.)\nmimetypes: Valid extensions are either empty or must start with \u2018.\u2019 for mimetypes.MimeTypes.add_type(). Undotted extensions are deprecated and will raise a ValueError in Python 3.16. (Contributed by Hugo van Kemenade in gh-75223.)\nnturl2path: This module is now deprecated. Call urllib.request.url2pathname() and pathname2url() instead. (Contributed by Barney Gale in gh-125866.)\nos: The os.popen() and os.spawn* functions are now soft deprecated. They should no longer be used to write new code. The subprocess module is recommended instead. (Contributed by Victor Stinner in gh-120743.)\npathlib: pathlib.PurePath.as_uri() is now deprecated and scheduled for removal in Python 3.19. Use pathlib.Path.as_uri() instead. (Contributed by Barney Gale in gh-123599.)\npdb: The undocumented pdb.Pdb.curframe_locals attribute is now a deprecated read-only property, which will be removed in a future version of Python.
The low-overhead dynamic frame locals access added in Python 3.13 by PEP 667 means the frame locals cache reference previously stored in this attribute is no longer needed. Derived debuggers should access pdb.Pdb.curframe.f_locals directly in Python 3.13 and later versions. (Contributed by Tian Gao in gh-124369 and gh-125951.)\nsymtable: Deprecate symtable.Class.get_methods() due to lack of interest; it is scheduled for removal in Python 3.16. (Contributed by B\u00e9n\u00e9dikt Tran in gh-119698.)\ntkinter: The tkinter.Variable methods trace_variable(), trace_vdelete() and trace_vinfo() are now deprecated. Use trace_add(), trace_remove() and trace_info() instead. (Contributed by Serhiy Storchaka in gh-120220.)\nurllib.parse: Accepting objects with false values (like 0 and []), except empty strings, bytes-like objects and None, in parse_qsl() and parse_qs() is now deprecated. (Contributed by Serhiy Storchaka in gh-116897.)\nPending removal in Python 3.15\u00b6\nThe import system:\nSetting __cached__ on a module while failing to set __spec__.cached is deprecated. In Python 3.15, __cached__ will cease to be set or taken into consideration by the import system or standard library. (gh-97879)\nSetting __package__ on a module while failing to set __spec__.parent is deprecated. In Python 3.15, __package__ will cease to be set or taken into consideration by the import system or standard library. (gh-97879)\nctypes: The undocumented ctypes.SetPointerType() function has been deprecated since Python 3.13.\nhttp.server: The obsolete and rarely used CGIHTTPRequestHandler has been deprecated since Python 3.13. No direct replacement exists. Anything is better than CGI to interface a web server with a request handler.\nThe --cgi flag to the python -m http.server command-line interface has been deprecated since Python 3.13.\nimportlib: The load_module() method: use exec_module() instead.\nlocale: The getdefaultlocale() function has been deprecated since Python 3.11.
Its removal was originally planned for Python 3.13 (gh-90817), but has been postponed to Python 3.15. Use getlocale(), setlocale(), and getencoding() instead. (Contributed by Hugo van Kemenade in gh-111187.)\npathlib: PurePath.is_reserved() has been deprecated since Python 3.13. Use os.path.isreserved() to detect reserved paths on Windows.\nplatform: java_ver() has been deprecated since Python 3.13. This function is only useful for Jython support, has a confusing API, and is largely untested.\nsysconfig: The check_home argument of sysconfig.is_python_build() has been deprecated since Python 3.12.\nthreading: RLock() will take no arguments in Python 3.15. Passing any arguments has been deprecated since Python 3.14, as the Python version does not permit any arguments, but the C version allows any number of positional or keyword arguments, ignoring every argument.\ntypes.CodeType: Accessing co_lnotab was deprecated in PEP 626 since 3.10 and was planned to be removed in 3.12, but it only got a proper DeprecationWarning in 3.12. May be removed in 3.15. (Contributed by Nikita Sobolev in gh-101866.)\ntyping: The undocumented keyword argument syntax for creating NamedTuple classes (for example, Point = NamedTuple(\"Point\", x=int, y=int)) has been deprecated since Python 3.13. Use the class-based syntax or the functional syntax instead.\nWhen using the functional syntax of TypedDicts, failing to pass a value to the fields parameter (TD = TypedDict(\"TD\")) or passing None (TD = TypedDict(\"TD\", None)) has been deprecated since Python 3.13. Use class TD(TypedDict): pass or TD = TypedDict(\"TD\", {}) to create a TypedDict with zero fields.\nThe typing.no_type_check_decorator() decorator function has been deprecated since Python 3.13.
After eight years in the typing module, it has yet to be supported by any major type checker.\nwave: The getmark(), setmark(), and getmarkers() methods of the Wave_read and Wave_write classes have been deprecated since Python 3.13.\nzipimport: load_module() has been deprecated since Python 3.10. Use exec_module() instead. (Contributed by Jiahao Li in gh-125746.)\nPending removal in Python 3.16\u00b6\nThe import system:\nSetting __loader__ on a module while failing to set __spec__.loader is deprecated. In Python 3.16, __loader__ will cease to be set or taken into consideration by the import system or the standard library.\narray: The 'u' format code (wchar_t) has been deprecated in documentation since Python 3.3 and at runtime since Python 3.13. Use the 'w' format code (Py_UCS4) for Unicode characters instead.\nasyncio: asyncio.iscoroutinefunction() is deprecated and will be removed in Python 3.16; use inspect.iscoroutinefunction() instead. (Contributed by Jiahao Li and Kumar Aditya in gh-122875.)\nThe asyncio policy system is deprecated and will be removed in Python 3.16. In particular, the following classes and functions are deprecated. Users should use asyncio.run() or asyncio.Runner with loop_factory to use the desired event loop implementation.\nFor example, to use asyncio.SelectorEventLoop on Windows:\nimport asyncio\nasync def main(): ...\nasyncio.run(main(), loop_factory=asyncio.SelectorEventLoop)\n(Contributed by Kumar Aditya in gh-127949.)\nbuiltins: Bitwise inversion on boolean types, ~True or ~False, has been deprecated since Python 3.12, as it produces surprising and unintuitive results (-2 and -1). Use not x instead for the logical negation of a Boolean.
In the rare case that you need the bitwise inversion of the underlying integer, convert to int explicitly (~int(x)).\nfunctools: Calling the Python implementation of functools.reduce() with function or sequence as keyword arguments has been deprecated since Python 3.14.\nlogging: Support for custom logging handlers with the strm argument is deprecated and scheduled for removal in Python 3.16. Define handlers with the stream argument instead. (Contributed by Mariusz Felisiak in gh-115032.)\nmimetypes: Valid extensions start with a \u2018.\u2019 or are empty for mimetypes.MimeTypes.add_type(). Undotted extensions are deprecated and will raise a ValueError in Python 3.16. (Contributed by Hugo van Kemenade in gh-75223.)\nshutil: The ExecError exception has been deprecated since Python 3.14. It has not been used by any function in shutil since Python 3.4, and is now an alias of RuntimeError.\nsymtable: The Class.get_methods method has been deprecated since Python 3.14.\nsys: The _enablelegacywindowsfsencoding() function has been deprecated since Python 3.13. Use the PYTHONLEGACYWINDOWSFSENCODING environment variable instead.\nsysconfig: The sysconfig.expand_makefile_vars() function has been deprecated since Python 3.14. Use the vars argument of sysconfig.get_paths() instead.\ntarfile: The undocumented and unused TarFile.tarfile attribute has been deprecated since Python 3.13.\nPending removal in Python 3.17\u00b6\ncollections.abc: collections.abc.ByteString is scheduled for removal in Python 3.17.\nUse isinstance(obj, collections.abc.Buffer) to test if obj implements the buffer protocol at runtime. For use in type annotations, either use Buffer or a union that explicitly specifies the types your code supports (e.g., bytes | bytearray | memoryview).\nByteString was originally intended to be an abstract class that would serve as a supertype of both bytes and bytearray.
However, since the ABC never had any methods, knowing that an object was an instance of ByteString never actually told you anything useful about the object. Other common buffer types such as memoryview were also never understood as subtypes of ByteString (either at runtime or by static type checkers).\nSee PEP 688 for more details. (Contributed by Shantanu Jain in gh-91896.)\ntyping: Before Python 3.14, old-style unions were implemented using the private class typing._UnionGenericAlias. This class is no longer needed for the implementation, but it has been retained for backward compatibility, with removal scheduled for Python 3.17. Users should use documented introspection helpers like typing.get_origin() and typing.get_args() instead of relying on private implementation details.\ntyping.ByteString, deprecated since Python 3.9, is scheduled for removal in Python 3.17.\nUse isinstance(obj, collections.abc.Buffer) to test if obj implements the buffer protocol at runtime. For use in type annotations, either use Buffer or a union that explicitly specifies the types your code supports (e.g., bytes | bytearray | memoryview).\nByteString was originally intended to be an abstract class that would serve as a supertype of both bytes and bytearray. However, since the ABC never had any methods, knowing that an object was an instance of ByteString never actually told you anything useful about the object. Other common buffer types such as memoryview were also never understood as subtypes of ByteString (either at runtime or by static type checkers).\nSee PEP 688 for more details.
(Contributed by Shantanu Jain in gh-91896.)\nPending removal in Python 3.18\u00b6\nPending removal in Python 3.19\u00b6\nPending removal in future versions\u00b6\nThe following APIs will be removed in the future, although there is currently no date scheduled for their removal.\nargparse: Nesting argument groups and nesting mutually exclusive groups are deprecated.\nPassing the undocumented keyword argument prefix_chars to add_argument_group() is now deprecated.\nThe argparse.FileType type converter is deprecated.\nGenerators: the throw(type, exc, tb) and athrow(type, exc, tb) signature is deprecated; use the single-argument signature, throw(exc) and athrow(exc), instead.\nCurrently Python accepts numeric literals immediately followed by keywords, for example 0in x, 1or x, 0if 1else 2. It allows confusing and ambiguous expressions like [0x1for x in y] (which can be interpreted as [0x1 for x in y] or [0x1f or x in y]). A syntax warning is raised if the numeric literal is immediately followed by one of the keywords and, else, for, if, in, is and or. In a future release it will be changed to a syntax error. (gh-87999)\nSupport for __index__() and __int__() methods returning a non-int type: these methods will be required to return an instance of a strict subclass of int.\nSupport for the __float__() method returning a strict subclass of float: these methods will be required to return an instance of float.\nSupport for the __complex__() method returning a strict subclass of complex: these methods will be required to return an instance of complex.\nPassing a complex number as the real or imag argument in the complex() constructor is now deprecated; it should only be passed as a single positional argument. (Contributed by Serhiy Storchaka in gh-109218.)\ncalendar: The calendar.January and calendar.February constants are deprecated and replaced by calendar.JANUARY and calendar.FEBRUARY. (Contributed by Prince Roshan in gh-103636.)\ncodecs: use open() instead of codecs.open().
(gh-133038)\ncodeobject.co_lnotab: use the codeobject.co_lines() method instead.\ndatetime:\nutcnow(): use datetime.datetime.now(tz=datetime.UTC).\nutcfromtimestamp(): use datetime.datetime.fromtimestamp(timestamp, tz=datetime.UTC).\ngettext: Plural value must be an integer.\nimportlib: The cache_from_source() debug_override parameter is deprecated: use the optimization parameter instead.\nimportlib.metadata:\nEntryPoints tuple interface.\nImplicit None on return values.\nlogging: the warn() method has been deprecated since Python 3.3, use warning() instead.\nmailbox: Use of StringIO input and text mode is deprecated, use BytesIO and binary mode instead.\nos: Calling os.register_at_fork() in a multi-threaded process.\npydoc.ErrorDuringImport: A tuple value for the exc_info parameter is deprecated, use an exception instance.\nre: More strict rules are now applied for numerical group references and group names in regular expressions. Only a sequence of ASCII digits is now accepted as a numerical reference. The group name in bytes patterns and replacement strings can now only contain ASCII letters, digits and underscores.
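The stricter re group-name and group-reference rules only affect exotic patterns; ordinary ASCII group names and numeric backreferences, as in this sketch, are unaffected:

```python
import re

# Named group with an ASCII-identifier name, plus a named backreference:
# these stay valid under the stricter rules.
m = re.match(r"(?P<word>\w+)\s+(?P=word)", "hello hello")
print(m.group("word"))  # hello

# Numeric group references in replacement strings must be plain ASCII
# digit sequences, as here:
print(re.sub(r"(\w+) (\w+)", r"\2 \1", "first second"))  # second first
```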
(Contributed by Serhiy Storchaka in gh-91760.)\nThe sre_compile, sre_constants and sre_parse modules.\nshutil: rmtree()\u2019s onerror parameter is deprecated in Python 3.12; use the onexc parameter instead.\nssl options and protocols:\nssl.SSLContext without protocol argument is deprecated.\nssl.SSLContext: set_npn_protocols() and selected_npn_protocol() are deprecated: use ALPN instead.\nssl.OP_NO_SSL* options\nssl.OP_NO_TLS* options\nssl.PROTOCOL_SSLv3\nssl.PROTOCOL_TLS\nssl.PROTOCOL_TLSv1\nssl.PROTOCOL_TLSv1_1\nssl.PROTOCOL_TLSv1_2\nssl.TLSVersion.SSLv3\nssl.TLSVersion.TLSv1\nssl.TLSVersion.TLSv1_1\nthreading methods:\nthreading.Condition.notifyAll(): use notify_all().\nthreading.Event.isSet(): use is_set().\nthreading.Thread.isDaemon(), threading.Thread.setDaemon(): use the threading.Thread.daemon attribute.\nthreading.Thread.getName(), threading.Thread.setName(): use the threading.Thread.name attribute.\nthreading.currentThread(): use threading.current_thread().\nthreading.activeCount(): use threading.active_count().\ntyping: The internal class typing._UnionGenericAlias is no longer used to implement typing.Union. To preserve compatibility with users using this private class, a compatibility shim will be provided until at least Python 3.17. (Contributed by Jelle Zijlstra in gh-105499.)\nunittest.IsolatedAsyncioTestCase: it is deprecated to return a value that is not None from a test case.\nurllib.parse deprecated functions: use urlparse() instead of:\nsplitattr()\nsplithost()\nsplitnport()\nsplitpasswd()\nsplitport()\nsplitquery()\nsplittag()\nsplittype()\nsplituser()\nsplitvalue()\nto_bytes()\nwsgiref: SimpleHandler.stdout.write() should not do partial writes.\nxml.etree.ElementTree: Testing the truth value of an Element is deprecated. In a future release it will always return True.
Prefer explicit len(elem) or elem is not None tests instead.\nsys: sys._clear_type_cache() is deprecated: use sys._clear_internal_caches() instead.\nCPython bytecode changes\u00b6\nReplaced the BINARY_SUBSCR opcode with the BINARY_OP opcode using the NB_SUBSCR oparg. (Contributed by Irit Katriel in gh-100239.)\nAdd the BUILD_INTERPOLATION and BUILD_TEMPLATE opcodes to construct new Interpolation and Template instances, respectively. (Contributed by Lysandros Nikolaou and others in gh-132661; see also PEP 750: Template strings).\nRemove the BUILD_CONST_KEY_MAP opcode. Use BUILD_MAP instead. (Contributed by Mark Shannon in gh-122160.)\nReplace the LOAD_ASSERTION_ERROR opcode with LOAD_COMMON_CONSTANT and add support for loading NotImplementedError.\nAdd the LOAD_FAST_BORROW and LOAD_FAST_BORROW_LOAD_FAST_BORROW opcodes to reduce reference counting overhead when the interpreter can prove that the reference in the frame outlives the reference loaded onto the stack. (Contributed by Matt Page in gh-130704.)\nAdd the LOAD_SMALL_INT opcode, which pushes a small integer equal to the oparg to the stack. The RETURN_CONST opcode is removed as it is no longer used. (Contributed by Mark Shannon in gh-125837.)\nAdd the new LOAD_SPECIAL instruction. Generate code for with and async with statements using the new instruction. Removed the BEFORE_WITH and BEFORE_ASYNC_WITH instructions. (Contributed by Mark Shannon in gh-120507.)\nAdd the POP_ITER opcode to support \u2018virtual\u2019 iterators. (Contributed by Mark Shannon in gh-132554.)\nPseudo-instructions\u00b6\nAdd the ANNOTATIONS_PLACEHOLDER pseudo-instruction to support partially executed module-level annotations with deferred evaluation of annotations. (Contributed by Jelle Zijlstra in gh-130907.)\nAdd the BINARY_OP_EXTEND pseudo-instruction, which executes a pair of functions (guard and specialization functions) accessed from the inline cache.
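Bytecode changes like the BINARY_SUBSCR to BINARY_OP replacement can be observed with the dis module; this sketch simply prints whichever opcodes the running interpreter emits, since the exact names depend on the Python version:

```python
import dis

# On Python 3.14, a[i] compiles to BINARY_OP with the NB_SUBSCR oparg;
# older versions emit the dedicated BINARY_SUBSCR opcode instead.
def subscript(container, index):
    return container[index]

opnames = [ins.opname for ins in dis.get_instructions(subscript)]
print(opnames)
```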
(Contributed by Irit Katriel in gh-100239.)\nAdd three specializations for CALL_KW: CALL_KW_PY for calls to Python functions, CALL_KW_BOUND_METHOD for calls to bound methods, and CALL_KW_NON_PY for all other calls. (Contributed by Mark Shannon in gh-118093.)\nAdd the JUMP_IF_TRUE and JUMP_IF_FALSE pseudo-instructions, conditional jumps which do not impact the stack. Each is replaced by the sequence COPY 1, TO_BOOL, POP_JUMP_IF_TRUE/FALSE. (Contributed by Irit Katriel in gh-124285.)\nAdd the LOAD_CONST_MORTAL pseudo-instruction. (Contributed by Mark Shannon in gh-128685.)\nAdd the LOAD_CONST_IMMORTAL pseudo-instruction, which does the same as LOAD_CONST, but is more efficient for immortal objects. (Contributed by Mark Shannon in gh-125837.)\nAdd the NOT_TAKEN pseudo-instruction, used by sys.monitoring to record branch events (such as BRANCH_LEFT). (Contributed by Mark Shannon in gh-122548.)\nC API changes\u00b6\nPython configuration C API\u00b6\nAdd a PyInitConfig C API to configure the Python initialization without relying on C structures and with the ability to make ABI-compatible changes in the future.\nComplete the PEP 587 PyConfig C API by adding PyInitConfig_AddModule(), which can be used to add a built-in extension module; a feature previously referred to as the \u201cinittab\u201d.\nAdd PyConfig_Get() and PyConfig_Set() functions to get and set the current runtime configuration.\nPEP 587 \u2018Python Initialization Configuration\u2019 unified all the ways to configure Python\u2019s initialization. This PEP also unifies the configuration of Python\u2019s preinitialization and initialization in a single API.
Moreover, this PEP only provides a single choice to embed Python, instead of having two \u2018Python\u2019 and \u2018Isolated\u2019 choices (PEP 587), to further simplify the API.\nThe lower-level PEP 587 PyConfig API remains available for use cases with an intentionally higher level of coupling to CPython implementation details (such as emulating the full functionality of CPython\u2019s CLI, including its configuration mechanisms).\n(Contributed by Victor Stinner in gh-107954.)\nNew features in the C API\u00b6\nAdd Py_PACK_VERSION() and Py_PACK_FULL_VERSION(), two new macros for bit-packing Python version numbers. This is useful for comparisons with Py_Version or PY_VERSION_HEX. (Contributed by Petr Viktorin in gh-128629.)\nAdd the PyBytes_Join(sep, iterable) function, similar to sep.join(iterable) in Python. (Contributed by Victor Stinner in gh-121645.)\nAdd functions to manipulate the configuration of the current runtime Python interpreter (PEP 741: Python configuration C API). (Contributed by Victor Stinner in gh-107954.)\nAdd functions to configure Python initialization (PEP 741: Python configuration C API). (Contributed by Victor Stinner in gh-107954.)\nAdd the Py_fopen() function to open a file. This works like the standard C fopen() function, except that it accepts a Python object for the path parameter and sets an exception on error. The corresponding new Py_fclose() function should be used to close a file. (Contributed by Victor Stinner in gh-127350.)\nAdd Py_HashBuffer() to compute and return the hash value of a buffer. (Contributed by Antoine Pitrou and Victor Stinner in gh-122854.)\nAdd the PyImport_ImportModuleAttr() and PyImport_ImportModuleAttrString() helper functions to import a module and get an attribute of the module. (Contributed by Victor Stinner in gh-128911.)\nAdd PyIter_NextItem() to replace PyIter_Next(), which has an ambiguous return value.
(Contributed by Irit Katriel and Erlend Aasland in gh-105201.)\nAdd the PyLong_GetSign() function to get the sign of int objects. (Contributed by Sergey B Kirpichev in gh-116560.)\nAdd PyLong_IsPositive(), PyLong_IsNegative() and PyLong_IsZero() for checking if a PyLongObject is positive, negative, or zero, respectively. (Contributed by James Roy and Sergey B Kirpichev in gh-126061.)\nAdd new functions to convert C\n\nnumbers to/from Python int objects. (Contributed by Victor Stinner in gh-120389.)\nAdd a new import and export API for Python int objects (PEP 757). (Contributed by Sergey B Kirpichev and Victor Stinner in gh-102471.)\nAdd PyMonitoring_FireBranchLeftEvent() and PyMonitoring_FireBranchRightEvent() for generating BRANCH_LEFT and BRANCH_RIGHT events, respectively. (Contributed by Mark Shannon in gh-122548.)\nAdd the PyType_Freeze() function to make a type immutable. (Contributed by Victor Stinner in gh-121654.)\nAdd PyType_GetBaseByToken() and the Py_tp_token slot for easier superclass identification, which attempts to resolve the type checking issue mentioned in PEP 630. (Contributed in gh-124153.)\nAdd a new PyUnicode_Equal() function to test if two strings are equal. The function is also added to the Limited C API. (Contributed by Victor Stinner in gh-124502.)\nAdd a new PyUnicodeWriter API to create a Python str object. (Contributed by Victor Stinner in gh-119182.)\nThe k and K formats in PyArg_ParseTuple() and similar functions now use __index__() if available, like all other integer formats. (Contributed by Serhiy Storchaka in gh-112068.)\nAdd support for a new p format unit in Py_BuildValue() that produces a Python bool object from a C integer. (Contributed by Pablo Galindo in bpo-45325.)\nAdd PyUnstable_IsImmortal() for determining if an object is immortal, for debugging purposes.
(Contributed by Peter Bierma in gh-128509.)\nAdd PyUnstable_Object_EnableDeferredRefcount() for enabling deferred reference counting, as outlined in PEP 703.\nAdd PyUnstable_Object_IsUniquelyReferenced() as a replacement for Py_REFCNT(op) == 1 on free-threaded builds. (Contributed by Peter Bierma in gh-133140.)\nAdd PyUnstable_Object_IsUniqueReferencedTemporary() to determine if an object is a unique temporary object on the interpreter\u2019s operand stack. This can be used in some cases as a replacement for checking if Py_REFCNT() is 1 for Python objects passed as arguments to C API functions. (Contributed by Sam Gross in gh-133164.)\nLimited C API changes\u00b6\nIn the limited C API version 3.14 and newer, Py_TYPE() and Py_REFCNT() are now implemented as an opaque function call to hide implementation details. (Contributed by Victor Stinner in gh-120600 and gh-124127.)\nRemove the PySequence_Fast_GET_SIZE, PySequence_Fast_GET_ITEM, and PySequence_Fast_ITEMS macros from the limited C API, since they have always been broken in the limited C API. (Contributed by Victor Stinner in gh-91417.)\nRemoved C APIs\u00b6\nCreating immutable types with mutable bases was deprecated in Python 3.12, and now raises a TypeError. (Contributed by Nikita Sobolev in gh-119775.)\nRemove the PyDictObject.ma_version_tag member, which was deprecated in Python 3.12. Use the PyDict_AddWatcher() API instead. (Contributed by Sam Gross in gh-124296.)\nRemove the private _Py_InitializeMain() function. It was a provisional API added to Python 3.8 by PEP 587. (Contributed by Victor Stinner in gh-129033.)\nRemove the undocumented APIs Py_C_RECURSION_LIMIT and PyThreadState.c_recursion_remaining. These were added in 3.13 and have been removed without deprecation. Use Py_EnterRecursiveCall() to guard against runaway recursion in C code. (Removed by Petr Viktorin in gh-133079; see also gh-130396.)\nDeprecated C APIs\u00b6\nThe Py_HUGE_VAL macro is now soft deprecated. Use Py_INFINITY instead.
(Contributed by Sergey B Kirpichev in gh-120026.)\nThe Py_IS_NAN, Py_IS_INFINITY, and Py_IS_FINITE macros are now soft deprecated. Use isnan, isinf and isfinite instead, available from math.h since C99. (Contributed by Sergey B Kirpichev in gh-119613.)\nNon-tuple sequences are now deprecated as arguments for the (items) format unit in PyArg_ParseTuple() and other argument parsing functions if items contains format units which store a borrowed buffer or a borrowed reference. (Contributed by Serhiy Storchaka in gh-50333.)\nThe _PyMonitoring_FireBranchEvent function is now deprecated and should be replaced with calls to PyMonitoring_FireBranchLeftEvent() and PyMonitoring_FireBranchRightEvent().\nThe previously undocumented function PySequence_In() is now soft deprecated. Use PySequence_Contains() instead. (Contributed by Yuki Kobayashi in gh-127896.)\nPending removal in Python 3.15\u00b6\nPyImport_ImportModuleNoBlock(): Use PyImport_ImportModule() instead.\nPyWeakref_GetObject() and PyWeakref_GET_OBJECT(): Use PyWeakref_GetRef() instead. The pythoncapi-compat project can be used to get PyWeakref_GetRef() on Python 3.12 and older.\nThe Py_UNICODE type and the Py_UNICODE_WIDE macro: Use wchar_t instead.\nPyUnicode_AsDecodedObject(): Use PyCodec_Decode() instead.\nPyUnicode_AsDecodedUnicode(): Use PyCodec_Decode() instead; note that some codecs (for example, \u201cbase64\u201d) may return a type other than str, such as bytes.\nPyUnicode_AsEncodedObject(): Use PyCodec_Encode() instead.\nPyUnicode_AsEncodedUnicode(): Use PyCodec_Encode() instead; note that some codecs (for example, \u201cbase64\u201d) may return a type other than bytes, such as str.\nPython initialization functions, deprecated in Python 3.13:\nPy_GetPath(): Use PyConfig_Get(\"module_search_paths\") (sys.path) instead.\nPy_GetPrefix(): Use PyConfig_Get(\"base_prefix\") (sys.base_prefix) instead.
Use PyConfig_Get(\"prefix\") (sys.prefix) if virtual environments need to be handled.\nPy_GetExecPrefix(): Use PyConfig_Get(\"base_exec_prefix\") (sys.base_exec_prefix) instead. Use PyConfig_Get(\"exec_prefix\") (sys.exec_prefix) if virtual environments need to be handled.\nPy_GetProgramFullPath(): Use PyConfig_Get(\"executable\") (sys.executable) instead.\nPy_GetProgramName(): Use PyConfig_Get(\"executable\") (sys.executable) instead.\nPy_GetPythonHome(): Use PyConfig_Get(\"home\") or the PYTHONHOME environment variable instead.\nThe pythoncapi-compat project can be used to get PyConfig_Get() on Python 3.13 and older.\nFunctions to configure Python\u2019s initialization, deprecated in Python 3.11:\nPySys_SetArgvEx(): Set PyConfig.argv instead.\nPySys_SetArgv(): Set PyConfig.argv instead.\nPy_SetProgramName(): Set PyConfig.program_name instead.\nPy_SetPythonHome(): Set PyConfig.home instead.\nPySys_ResetWarnOptions(): Clear sys.warnoptions and warnings.filters instead.\nThe Py_InitializeFromConfig() API should be used with PyConfig instead.\nGlobal configuration variables:\nPy_DebugFlag: Use PyConfig.parser_debug or PyConfig_Get(\"parser_debug\") instead.\nPy_VerboseFlag: Use PyConfig.verbose or PyConfig_Get(\"verbose\") instead.\nPy_QuietFlag: Use PyConfig.quiet or PyConfig_Get(\"quiet\") instead.\nPy_InteractiveFlag: Use PyConfig.interactive or PyConfig_Get(\"interactive\") instead.\nPy_InspectFlag: Use PyConfig.inspect or PyConfig_Get(\"inspect\") instead.\nPy_OptimizeFlag: Use PyConfig.optimization_level or PyConfig_Get(\"optimization_level\") instead.\nPy_NoSiteFlag: Use PyConfig.site_import or PyConfig_Get(\"site_import\") instead.\nPy_BytesWarningFlag: Use PyConfig.bytes_warning or PyConfig_Get(\"bytes_warning\") instead.\nPy_FrozenFlag: Use PyConfig.pathconfig_warnings or PyConfig_Get(\"pathconfig_warnings\") instead.\nPy_IgnoreEnvironmentFlag:
Use PyConfig.use_environment or PyConfig_Get(\"use_environment\") instead.\nPy_DontWriteBytecodeFlag: Use PyConfig.write_bytecode or PyConfig_Get(\"write_bytecode\") instead.\nPy_NoUserSiteDirectory: Use PyConfig.user_site_directory or PyConfig_Get(\"user_site_directory\") instead.\nPy_UnbufferedStdioFlag: Use PyConfig.buffered_stdio or PyConfig_Get(\"buffered_stdio\") instead.\nPy_HashRandomizationFlag: Use PyConfig.use_hash_seed and PyConfig.hash_seed, or PyConfig_Get(\"hash_seed\") instead.\nPy_IsolatedFlag: Use PyConfig.isolated or PyConfig_Get(\"isolated\") instead.\nPy_LegacyWindowsFSEncodingFlag: Use PyPreConfig.legacy_windows_fs_encoding or PyConfig_Get(\"legacy_windows_fs_encoding\") instead.\nPy_LegacyWindowsStdioFlag: Use PyConfig.legacy_windows_stdio or PyConfig_Get(\"legacy_windows_stdio\") instead.\nPy_FileSystemDefaultEncoding, Py_HasFileSystemDefaultEncoding: Use PyConfig.filesystem_encoding or PyConfig_Get(\"filesystem_encoding\") instead.\nPy_FileSystemDefaultEncodeErrors: Use PyConfig.filesystem_errors or PyConfig_Get(\"filesystem_errors\") instead.\nPy_UTF8Mode: Use PyPreConfig.utf8_mode or PyConfig_Get(\"utf8_mode\") instead. (See Py_PreInitialize().)\nThe Py_InitializeFromConfig() API should be used with PyConfig to set these options.
OrPyConfig_Get()\ncan be used to get these options at runtime.\nPending removal in Python 3.16\u00b6\nThe bundled copy of\nlibmpdec\n.\nPending removal in Python 3.18\u00b6\nThe following private functions are deprecated and planned for removal in Python 3.18:\n_PyBytes_Join()\n: usePyBytes_Join()\n._PyDict_GetItemStringWithError()\n: usePyDict_GetItemStringRef()\n._PyDict_Pop()\n: usePyDict_Pop()\n._PyLong_Sign()\n: usePyLong_GetSign()\n._PyLong_FromDigits()\nand_PyLong_New()\n: usePyLongWriter_Create()\n._PyThreadState_UncheckedGet()\n: usePyThreadState_GetUnchecked()\n._PyUnicode_AsString()\n: usePyUnicode_AsUTF8()\n._PyUnicodeWriter_Init()\n: replace_PyUnicodeWriter_Init(&writer)\nwithwriter = PyUnicodeWriter_Create(0)\n._PyUnicodeWriter_Finish()\n: replace_PyUnicodeWriter_Finish(&writer)\nwithPyUnicodeWriter_Finish(writer)\n._PyUnicodeWriter_Dealloc()\n: replace_PyUnicodeWriter_Dealloc(&writer)\nwithPyUnicodeWriter_Discard(writer)\n._PyUnicodeWriter_WriteChar()\n: replace_PyUnicodeWriter_WriteChar(&writer, ch)\nwithPyUnicodeWriter_WriteChar(writer, ch)\n._PyUnicodeWriter_WriteStr()\n: replace_PyUnicodeWriter_WriteStr(&writer, str)\nwithPyUnicodeWriter_WriteStr(writer, str)\n._PyUnicodeWriter_WriteSubstring()\n: replace_PyUnicodeWriter_WriteSubstring(&writer, str, start, end)\nwithPyUnicodeWriter_WriteSubstring(writer, str, start, end)\n._PyUnicodeWriter_WriteASCIIString()\n: replace_PyUnicodeWriter_WriteASCIIString(&writer, str)\nwithPyUnicodeWriter_WriteASCII(writer, str)\n._PyUnicodeWriter_WriteLatin1String()\n: replace_PyUnicodeWriter_WriteLatin1String(&writer, str)\nwithPyUnicodeWriter_WriteUTF8(writer, str)\n._PyUnicodeWriter_Prepare()\n: (no replacement)._PyUnicodeWriter_PrepareKind()\n: (no replacement)._Py_HashPointer()\n: usePy_HashPointer()\n._Py_fopen_obj()\n: usePy_fopen()\n.\nThe pythoncapi-compat project can be used to get these new public functions on Python 3.13 and older. 
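At the Python level, the values behind several of the deprecated C getters are already exposed on sys, so the PyConfig_Get() mapping above can be sanity-checked from a script. A minimal sketch (the printed paths are machine-specific):

```python
import sys

# Python-level counterparts of the PyConfig_Get() keys listed above:
#   PyConfig_Get("prefix")       -> sys.prefix
#   PyConfig_Get("base_prefix")  -> sys.base_prefix
#   PyConfig_Get("executable")   -> sys.executable
print(sys.prefix)
print(sys.base_prefix)
print(sys.executable)

# Inside a virtual environment, prefix and base_prefix diverge,
# which is why the venv-aware keys are the recommended replacements.
in_venv = sys.prefix != sys.base_prefix
```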
(Contributed by Victor Stinner in gh-128863.)\nPending removal in future versions\u00b6\nThe following APIs are deprecated and will be removed, although there is currently no date scheduled for their removal.\nPy_TPFLAGS_HAVE_FINALIZE\n: Unneeded since Python 3.8. PyErr_Fetch()\n: Use PyErr_GetRaisedException()\ninstead. PyErr_NormalizeException()\n: Use PyErr_GetRaisedException()\ninstead. PyErr_Restore()\n: Use PyErr_SetRaisedException()\ninstead. PyModule_GetFilename()\n: Use PyModule_GetFilenameObject()\ninstead. PyOS_AfterFork()\n: Use PyOS_AfterFork_Child()\ninstead. PySlice_GetIndicesEx()\n: Use PySlice_Unpack()\nand PySlice_AdjustIndices()\ninstead. PyUnicode_READY()\n: Unneeded since Python 3.12. PyErr_Display()\n: Use PyErr_DisplayException()\ninstead. _PyErr_ChainExceptions()\n: Use _PyErr_ChainExceptions1()\ninstead. PyBytesObject.ob_shash\nmember: call PyObject_Hash()\ninstead. Thread Local Storage (TLS) API:\nPyThread_create_key()\n: Use PyThread_tss_alloc()\ninstead. PyThread_delete_key()\n: Use PyThread_tss_free()\ninstead. PyThread_set_key_value()\n: Use PyThread_tss_set()\ninstead. PyThread_get_key_value()\n: Use PyThread_tss_get()\ninstead. PyThread_delete_key_value()\n: Use PyThread_tss_delete()\ninstead. PyThread_ReInitTLS()\n: Unneeded since Python 3.7.\nBuild changes\u00b6\nPEP 776: Emscripten is now an officially supported platform at tier 3. As a part of this effort, more than 25 bugs in Emscripten libc were fixed. Emscripten now includes support for\nctypes\n, termios\n, and fcntl\n, as well as experimental support for the new default interactive shell. (Contributed by R. Hood Chatham in gh-127146, gh-127683, and gh-136931.) Official Android binary releases are now provided on python.org.\nGNU Autoconf 2.72 is now required to generate\nconfigure\n. (Contributed by Erlend Aasland in gh-115765.) wasm32-unknown-emscripten\nis now a PEP 11 tier 3 platform. (Contributed by R. 
Hood Chatham in gh-127146, gh-127683, and gh-136931.) #pragma\n-based linking with python3*.lib\ncan now be switched off with Py_NO_LINK_LIB. (Contributed by Jean-Christophe Fillion-Robin in gh-82909.) CPython now enables a set of recommended compiler options by default for improved security. Use the\n--disable-safety\nconfigure\noption to disable them, or the --enable-slower-safety\noption for a larger set of compiler options, albeit with a performance cost. The\nWITH_FREELISTS\nmacro and --without-freelists\nconfigure\noption have been removed. The new\nconfigure\noption --with-tail-call-interp\nmay be used to enable the experimental tail call interpreter. See A new type of interpreter for further details. To disable the new remote debugging support, use the\n--without-remote-debug\nconfigure\noption. This may be useful for security reasons. iOS and macOS apps can now be configured to redirect\nstdout\nand stderr\ncontent to the system log. (Contributed by Russell Keith-Magee in gh-127592.) The iOS testbed is now able to stream test output while the test is running. The testbed can also be used to run the test suite of projects other than CPython itself. (Contributed by Russell Keith-Magee in gh-127592.)\nbuild-details.json\n\u00b6\nInstallations of Python now contain a new file, build-details.json\n.\nThis is a static JSON document containing build details for CPython,\nto allow for introspection without needing to run code.\nThis is helpful for use-cases such as Python launchers, cross-compilation,\nand so on.\nbuild-details.json\nmust be installed in the platform-independent\nstandard library directory. 
This corresponds to the \u2018stdlib\u2019 sysconfig\ninstallation path,\nwhich can be found by running sysconfig.get_path('stdlib')\n.\nSee also\nPEP 739 \u2013 build-details.json\n1.0 \u2013 a static description file\nfor Python build details\nDiscontinuation of PGP signatures\u00b6\nPGP (Pretty Good Privacy) signatures will not be provided for releases of Python 3.14 or future versions. To verify CPython artifacts, users must use Sigstore verification materials. Releases have been signed using Sigstore since Python 3.11.\nThis change in release process was specified in PEP 761.\nFree-threaded Python is officially supported\u00b6\nThe free-threaded build of Python is now supported and no longer experimental. This is the start of phase II where free-threaded Python is officially supported but still optional.\nThe free-threading team are confident that the project is on the right path, and appreciate the continued dedication from everyone working to make free-threading ready for broader adoption across the Python community.\nWith these recommendations and the acceptance of this PEP, the Python developer community should broadly advertise that free-threading is a supported Python build option now and into the future, and that it will not be removed without a proper deprecation schedule.\nAny decision to transition to phase III, with free-threading as the default or sole build of Python is still undecided, and dependent on many factors both within CPython itself and the community. This decision is for the future.\nBinary releases for the experimental just-in-time compiler\u00b6\nThe official macOS and Windows release binaries now include an experimental\njust-in-time (JIT) compiler. Although it is not recommended for production\nuse, it can be tested by setting PYTHON_JIT=1\nas an\nenvironment variable. 
Downstream source builds and redistributors can use the\n--enable-experimental-jit=yes-off\nconfiguration option for similar\nbehavior.\nThe JIT is at an early stage and still in active development. As such, the\ntypical performance impact of enabling it can range from 10% slower to 20%\nfaster, depending on workload. To aid in testing and evaluation, a set of\nintrospection functions has been provided in the sys._jit\nnamespace.\nsys._jit.is_available()\ncan be used to determine if the current executable\nsupports JIT compilation, while sys._jit.is_enabled()\ncan be used to tell\nif JIT compilation has been enabled for the current process.\nCurrently, the most significant missing functionality is that native debuggers\nand profilers like gdb\nand perf\nare unable to unwind through JIT frames\n(Python debuggers and profilers, like pdb\nor profile\n, continue to\nwork without modification). Free-threaded builds do not support JIT compilation.\nPlease report any bugs or major performance regressions that you encounter!\nSee also\nPorting to Python 3.14\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in the Python API\u00b6\nOn Unix platforms other than macOS, forkserver is now the default start method for\nmultiprocessing\nand ProcessPoolExecutor\n, instead of fork. If you encounter\nNameError\ns or pickling errors coming out of multiprocessing\nor concurrent.futures\n, see the forkserver restrictions. This change does not affect Windows or macOS, where \u2018spawn\u2019 remains the default start method.\nfunctools.partial\nis now a method descriptor. Wrap it in staticmethod()\nif you want to preserve the old behavior. 
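The staticmethod() workaround for the functools.partial change above can be sketched like this (the Calculator class and scale function are illustrative, not from the source):

```python
from functools import partial

def scale(factor, value):
    return factor * value

class Calculator:
    # On 3.14, a bare partial stored in a class body acts as a method
    # descriptor, so instances would be bound as the first argument.
    # Wrapping it in staticmethod() preserves the pre-3.14 behavior.
    double = staticmethod(partial(scale, 2))

calc = Calculator()
result = calc.double(21)  # calls scale(2, 21) on every version
```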
(Contributed by Serhiy Storchaka and Dominykas Grigonis in gh-121027.) The garbage collector is now incremental, which means that the behavior of\ngc.collect()\nchanges slightly: gc.collect(1)\n: Performs an increment of garbage collection, rather than collecting generation 1. Other calls to\ngc.collect()\nare unchanged.\nThe\nlocale.nl_langinfo()\nfunction now temporarily sets the LC_CTYPE\nlocale in some cases. This temporary change affects other threads. (Contributed by Serhiy Storchaka in gh-69998.) types.UnionType\nis now an alias for typing.Union\n, causing changes in some behaviors. See above for more details. (Contributed by Jelle Zijlstra in gh-105499.) The runtime behavior of annotations has changed in various ways; see above for details. While most code that interacts with annotations should continue to work, some undocumented details may behave differently.\nAs part of making the\nmimetypes\nCLI public, it now exits with 1\non failure instead of 0\nand 2\non incorrect command-line parameters instead of 1\n. 
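The gc.collect() behavior described above can be observed directly; this sketch assumes only the documented call signature:

```python
import gc

# A bare gc.collect() still performs a full collection.
full = gc.collect()

# With the incremental collector in 3.14, gc.collect(1) performs one
# increment of collection rather than collecting "generation 1";
# the signature and the return type are unchanged.
increment = gc.collect(1)

# Both calls return the number of unreachable objects found.
```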
Error messages are now printed to stderr.The\n\\B\npattern in regular expression now matches the empty string when given as the entire pattern, which may cause behavioural changes.On FreeBSD,\nsys.platform\nno longer contains the major version number.\nChanges in annotations (PEP 649 and PEP 749)\u00b6\nThis section contains guidance on changes that may be needed to annotations or Python code that interacts with or introspects annotations, due to the changes related to deferred evaluation of annotations.\nIn the majority of cases, working code from older versions of Python will not require any changes.\nImplications for annotated code\u00b6\nIf you define annotations in your code (for example, for use with a static type checker), then this change probably does not affect you: you can keep writing annotations the same way you did with previous versions of Python.\nYou will likely be able to remove quoted strings in annotations, which are frequently\nused for forward references. Similarly, if you use from __future__ import annotations\nto avoid having to write strings in annotations, you may well be able to\nremove that import once you support only Python 3.14 and newer.\nHowever, if you rely on third-party libraries that read annotations,\nthose libraries may need changes to support unquoted annotations before they\nwork as expected.\nImplications for readers of __annotations__\n\u00b6\nIf your code reads the __annotations__\nattribute on objects,\nyou may want to make changes in order to support code that relies on\ndeferred evaluation of annotations.\nFor example, you may want to use annotationlib.get_annotations()\nwith\nthe FORWARDREF\nformat,\nas the dataclasses\nmodule now does.\nThe external typing_extensions package provides partial backports\nof some of the functionality of the annotationlib\nmodule,\nsuch as the Format\nenum and\nthe get_annotations()\nfunction.\nThese can be used to write cross-version code that takes advantage of\nthe new behavior in 
Python 3.14.\nfrom __future__ import annotations\n\u00b6\nIn Python 3.7, PEP 563 introduced the from __future__ import annotations\nfuture statement, which turns all annotations into strings.\nHowever, this statement is now deprecated and is expected to be removed in a future version of Python. This removal will not happen until after Python 3.13, the last version of Python without support for deferred evaluation of annotations, reaches its end of life in 2029.\nIn Python 3.14, the behavior of code using from __future__ import annotations\nis unchanged.\nChanges in the C API\u00b6\nPy_Finalize()\nnow deletes all interned strings. This is backwards incompatible for any C extension that holds onto an interned string after a call to Py_Finalize()\nand is then reused after a call to Py_Initialize()\n. Any issues arising from this behavior will normally result in crashes during the execution of the subsequent call to Py_Initialize()\n, from accessing uninitialized memory. To fix, use an address sanitizer to identify any use-after-free coming from an interned string and deallocate it during module shutdown. (Contributed by Eddie Elizondo in gh-113601.) The Unicode Exception Objects C API now raises a\nTypeError\nif its exception argument is not a UnicodeError\nobject. (Contributed by B\u00e9n\u00e9dikt Tran in gh-127691.)\nThe interpreter internally avoids some reference count modifications when loading objects onto the operand stack by borrowing references when possible. This can lead to smaller reference count values compared to previous Python versions. 
C API extensions that checked whether\nPy_REFCNT()\nwas 1\nto determine if a function argument is not referenced by any other code should instead use PyUnstable_Object_IsUniqueReferencedTemporary()\nas a safer replacement. Private functions promoted to public C APIs:\n_PyBytes_Join()\n: PyBytes_Join()\n _PyLong_IsNegative()\n: PyLong_IsNegative()\n _PyLong_IsPositive()\n: PyLong_IsPositive()\n _PyLong_IsZero()\n: PyLong_IsZero()\n _PyLong_Sign()\n: PyLong_GetSign()\n _PyUnicodeWriter_Dealloc()\n: PyUnicodeWriter_Discard()\n _PyUnicodeWriter_Finish()\n: PyUnicodeWriter_Finish()\n _PyUnicodeWriter_Init()\n: use PyUnicodeWriter_Create()\n _PyUnicodeWriter_Prepare()\n: (no replacement) _PyUnicodeWriter_PrepareKind()\n: (no replacement) _PyUnicodeWriter_WriteChar()\n: PyUnicodeWriter_WriteChar()\n _PyUnicodeWriter_WriteStr()\n: PyUnicodeWriter_WriteStr()\n _PyUnicodeWriter_WriteSubstring()\n: PyUnicodeWriter_WriteSubstring()\n _PyUnicode_EQ()\n: PyUnicode_Equal()\n _PyUnicode_Equal()\n: PyUnicode_Equal()\n _Py_GetConfig()\n: PyConfig_Get()\nand PyConfig_GetInt()\n _Py_HashBytes()\n: Py_HashBuffer()\n _Py_fopen_obj()\n: Py_fopen()\n _PyMutex_IsLocked()\n: PyMutex_IsLocked()\nThe pythoncapi-compat project can be used to get most of these new functions on Python 3.13 and older.\nNotable changes in 3.14.1\u00b6\nAdd\nPyUnstable_ThreadState_SetStackProtection()\nand PyUnstable_ThreadState_ResetStackProtection()\nfunctions to set the stack protection base address and stack protection size of a Python thread state. (Contributed by Victor Stinner in gh-139653.)", "code_snippets": ["\n\n", " ", "\n ", "\n\n", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 28747}
{"url": "https://docs.python.org/3/whatsnew/3.13.html", "title": "What\u2019s New In Python 3.13", "content": "What\u2019s New In Python 3.13\u00b6\n- Editors:\nAdam Turner and Thomas Wouters\nThis article explains the new features in Python 3.13, compared to 3.12. Python 3.13 was released on October 7, 2024. 
For full details, see the changelog.\nSee also\nPEP 719 \u2013 Python 3.13 Release Schedule\nSummary \u2013 Release Highlights\u00b6\nPython 3.13 is a stable release of the Python programming language, with a mix of changes to the language, the implementation and the standard library. The biggest changes include a new interactive interpreter, experimental support for running in a free-threaded mode (PEP 703), and a Just-In-Time compiler (PEP 744).\nError messages continue to improve, with tracebacks now highlighted in color\nby default. The locals()\nbuiltin now has defined semantics for changing the returned mapping,\nand type parameters now support default values.\nThe library changes contain removal of deprecated APIs and modules, as well as the usual improvements in user-friendliness and correctness. Several legacy standard library modules have now been removed following their deprecation in Python 3.11 (PEP 594).\nThis article doesn\u2019t attempt to provide a complete specification of all new features, but instead gives a convenient overview. For full details refer to the documentation, such as the Library Reference and Language Reference. To understand the complete implementation and design rationale for a change, refer to the PEP for a particular new feature; but note that PEPs usually are not kept up-to-date once a feature has been fully implemented. See Porting to Python 3.13 for guidance on upgrading from earlier versions of Python.\nInterpreter improvements:\nA greatly improved interactive interpreter and improved error messages.\nPEP 667: The\nlocals()\nbuiltin now has defined semantics when mutating the returned mapping. Python debuggers and similar tools may now more reliably update local variables in optimized scopes even during concurrent code execution.PEP 703: CPython 3.13 has experimental support for running with the global interpreter lock disabled. See Free-threaded CPython for more details.\nPEP 744: A basic JIT compiler was added. 
It is currently disabled by default (though we may turn it on later). Performance improvements are modest \u2013 we expect to improve this over the next few releases.\nColor support in the new interactive interpreter, as well as in tracebacks and doctest output. This can be disabled through the\nPYTHON_COLORS\nand NO_COLOR\nenvironment variables.\nPython data model improvements:\n__static_attributes__\nstores the names of attributes accessed through self.X\nin any function in a class body. __firstlineno__\nrecords the first line number of a class definition.\nSignificant improvements in the standard library:\nAdd a new\nPythonFinalizationError\nexception, raised when an operation is blocked during finalization. The\nargparse\nmodule now supports deprecating command-line options, positional arguments, and subcommands. The new functions\nbase64.z85encode()\nand base64.z85decode()\nsupport encoding and decoding Z85 data. The\ncopy\nmodule now has a copy.replace()\nfunction, with support for many builtin types and any class defining the __replace__()\nmethod. The new\ndbm.sqlite3\nmodule is now the default dbm\nbackend. The\nos\nmodule has a suite of new functions for working with Linux\u2019s timer notification file descriptors. The\nrandom\nmodule now has a command-line interface.\nSecurity improvements:\nssl.create_default_context()\nsets ssl.VERIFY_X509_PARTIAL_CHAIN\nand ssl.VERIFY_X509_STRICT\nas default flags.\nC API improvements:\nThe\nPy_mod_gil\nslot is now used to indicate that an extension module supports running with the GIL disabled. The PyTime C API has been added, providing access to system clocks.\nPyMutex\nis a new lightweight mutex that occupies a single byte. There is a new suite of functions for generating PEP 669 monitoring events in the C API.\nNew typing features:\nPEP 696: Type parameters (\ntyping.TypeVar\n, typing.ParamSpec\n, and typing.TypeVarTuple\n) now support defaults. PEP 702: The new\nwarnings.deprecated()\ndecorator adds support for marking deprecations 
in the type system and at runtime. PEP 705:\ntyping.ReadOnly\ncan be used to mark an item of a typing.TypedDict\nas read-only for type checkers. PEP 742:\ntyping.TypeIs\nprovides more intuitive type narrowing behavior, as an alternative to typing.TypeGuard\n.\nPlatform support:\nPEP 730: Apple\u2019s iOS is now an officially supported platform, at tier 3.\nPEP 738: Android is now an officially supported platform, at tier 3.\nwasm32-wasi\nis now supported as a tier 2 platform. wasm32-emscripten\nis no longer an officially supported platform.\nImportant removals:\nPEP 594: The remaining 19 \u201cdead batteries\u201d (legacy stdlib modules) have been removed from the standard library:\naifc\n, audioop\n, cgi\n, cgitb\n, chunk\n, crypt\n, imghdr\n, mailcap\n, msilib\n, nis\n, nntplib\n, ossaudiodev\n, pipes\n, sndhdr\n, spwd\n, sunau\n, telnetlib\n, uu\nand xdrlib\n. Remove the 2to3 tool and\nlib2to3\nmodule (deprecated in Python 3.11). Remove the\ntkinter.tix\nmodule (deprecated in Python 3.6). Remove the\nlocale.resetlocale()\nfunction. Remove the\ntyping.io\nand typing.re\nnamespaces. Remove chained\nclassmethod\ndescriptors.\nRelease schedule changes:\nPEP 602 (\u201cAnnual Release Cycle for Python\u201d) has been updated to extend the full support (\u2018bugfix\u2019) period for new releases to two years. This updated policy means that:\nPython 3.9\u20133.12 have one and a half years of full support, followed by three and a half years of security fixes.\nPython 3.13 and later have two years of full support, followed by three years of security fixes.\nNew Features\u00b6\nA better interactive interpreter\u00b6\nPython now uses a new interactive shell by default, based on code from the PyPy project. 
When the user starts the REPL from an interactive terminal, the following new features are now supported:\nMultiline editing with history preservation.\nDirect support for REPL-specific commands like help, exit, and quit, without the need to call them as functions.\nPrompts and tracebacks with color enabled by default.\nInteractive help browsing using F1 with a separate command history.\nHistory browsing using F2 that skips output as well as the >>> and \u2026 prompts.\n\u201cPaste mode\u201d with F3 that makes pasting larger blocks of code easier (press F3 again to return to the regular prompt).\nTo disable the new interactive shell,\nset the PYTHON_BASIC_REPL\nenvironment variable.\nFor more on interactive mode, see Interactive Mode.\n(Contributed by Pablo Galindo Salgado, \u0141ukasz Langa, and Lysandros Nikolaou in gh-111201 based on code from the PyPy project. Windows support contributed by Dino Viehland and Anthony Shaw.)\nImproved error messages\u00b6\nThe interpreter now uses color by default when displaying tracebacks in the terminal. This feature can be controlled via the new\nPYTHON_COLORS\nenvironment variable as well as the canonicalNO_COLOR\nandFORCE_COLOR\nenvironment variables. (Contributed by Pablo Galindo Salgado in gh-112730.)A common mistake is to write a script with the same name as a standard library module. 
When this results in errors, we now display a more helpful error message:\n$ python random.py Traceback (most recent call last): File \"/home/me/random.py\", line 1, in import random File \"/home/me/random.py\", line 3, in print(random.randint(5)) ^^^^^^^^^^^^^^ AttributeError: module 'random' has no attribute 'randint' (consider renaming '/home/me/random.py' since it has the same name as the standard library module named 'random' and prevents importing that standard library module)\nSimilarly, if a script has the same name as a third-party module that it attempts to import and this results in errors, we also display a more helpful error message:\n$ python numpy.py Traceback (most recent call last): File \"/home/me/numpy.py\", line 1, in import numpy as np File \"/home/me/numpy.py\", line 3, in np.array([1, 2, 3]) ^^^^^^^^ AttributeError: module 'numpy' has no attribute 'array' (consider renaming '/home/me/numpy.py' if it has the same name as a library you intended to import)\n(Contributed by Shantanu Jain in gh-95754.)\nThe error message now tries to suggest the correct keyword argument when an incorrect keyword argument is passed to a function.\n>>> \"Better error messages!\".split(max_split=1) Traceback (most recent call last): File \"\", line 1, in \"Better error messages!\".split(max_split=1) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^ TypeError: split() got an unexpected keyword argument 'max_split'. 
Did you mean 'maxsplit'?\n(Contributed by Pablo Galindo Salgado and Shantanu Jain in gh-107944.)\nFree-threaded CPython\u00b6\nCPython now has experimental support for running in a free-threaded mode,\nwith the global interpreter lock (GIL) disabled.\nThis is an experimental feature and therefore is not enabled by default.\nThe free-threaded mode requires a different executable,\nusually called python3.13t\nor python3.13t.exe\n.\nPre-built binaries marked as free-threaded can be installed as part of\nthe official Windows\nand macOS installers,\nor CPython can be built from source with the --disable-gil\noption.\nFree-threaded execution allows for full utilization of the available\nprocessing power by running threads in parallel on available CPU cores.\nWhile not all software will benefit from this automatically, programs\ndesigned with threading in mind will run faster on multi-core hardware.\nThe free-threaded mode is experimental and work is ongoing to improve it:\nexpect some bugs and a substantial single-threaded performance hit.\nFree-threaded builds of CPython support optionally running with the GIL\nenabled at runtime using the environment variable PYTHON_GIL\nor\nthe command-line option -X gil=1\n.\nTo check if the current interpreter supports free-threading, python -VV\nand sys.version\ncontain \u201cexperimental free-threading build\u201d.\nThe new sys._is_gil_enabled()\nfunction can be used to check whether\nthe GIL is actually disabled in the running process.\nC-API extension modules need to be built specifically for the free-threaded\nbuild. Extensions that support running with the GIL disabled should\nuse the Py_mod_gil\nslot. Extensions using single-phase init should\nuse PyUnstable_Module_SetGIL()\nto indicate whether they support\nrunning with the GIL disabled. 
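The build and runtime checks mentioned above can be combined into a small probe; getattr() guards keep it runnable on interpreters older than 3.13, where sys._is_gil_enabled() does not exist:

```python
import sys
import sysconfig

# Truthy on a free-threaded build (the build config sets Py_GIL_DISABLED);
# None or 0 on regular builds, so bool() normalizes the result.
free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

# The marker "free-threading build" also appears in sys.version on
# free-threaded interpreters.
marker_present = "free-threading build" in sys.version

# Whether the GIL is actually active in this process (3.13+ only);
# on older interpreters the GIL is always enabled, hence the fallback.
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()
```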
Importing C extensions that don\u2019t use these\nmechanisms will cause the GIL to be enabled, unless the GIL was explicitly\ndisabled with the PYTHON_GIL\nenvironment variable or the\n-X gil=0\noption.\npip 24.1 or newer is required to install packages with C extensions in the\nfree-threaded build.\nThis work was made possible thanks to many individuals and organizations, including the large community of contributors to Python and third-party projects to test and enable free-threading support. Notable contributors include: Sam Gross, Ken Jin, Donghee Na, Itamar Oren, Matt Page, Brett Simmers, Dino Viehland, Carl Meyer, Nathan Goldbaum, Ralf Gommers, Lysandros Nikolaou, and many others. Many of these contributors are employed by Meta, which has provided significant engineering resources to support this project.\nSee also\nPEP 703 \u201cMaking the Global Interpreter Lock Optional in CPython\u201d contains rationale and information surrounding this work.\nPorting Extension Modules to Support Free-Threading: A community-maintained porting guide for extension authors.\nAn experimental just-in-time (JIT) compiler\u00b6\nWhen CPython is configured and built using\nthe --enable-experimental-jit\noption,\na just-in-time (JIT) compiler is added which may speed up some Python programs.\nOn Windows, use PCbuild/build.bat --experimental-jit\nto enable the JIT\nor --experimental-jit-interpreter\nto enable the Tier 2 interpreter.\nBuild requirements and further supporting information are contained at\nTools/jit/README.md\n.\nThe --enable-experimental-jit\noption takes these (optional) values,\ndefaulting to yes\nif --enable-experimental-jit\nis present\nwithout the optional value.\nno\n: Disable the entire Tier 2 and JIT pipeline.yes\n: Enable the JIT. To disable the JIT at runtime, pass the environment variablePYTHON_JIT=0\n.yes-off\n: Build the JIT but disable it by default. 
To enable the JIT at runtime, pass the environment variablePYTHON_JIT=1\n.interpreter\n: Enable the Tier 2 interpreter but disable the JIT. The interpreter can be disabled by running withPYTHON_JIT=0\n.\nThe internal architecture is roughly as follows:\nWe start with specialized Tier 1 bytecode. See What\u2019s new in 3.11 for details.\nWhen the Tier 1 bytecode gets hot enough, it gets translated to a new purely internal intermediate representation (IR), called the Tier 2 IR, and sometimes referred to as micro-ops (\u201cuops\u201d).\nThe Tier 2 IR uses the same stack-based virtual machine as Tier 1, but the instruction format is better suited to translation to machine code.\nWe have several optimization passes for Tier 2 IR, which are applied before it is interpreted or translated to machine code.\nThere is a Tier 2 interpreter, but it is mostly intended for debugging the earlier stages of the optimization pipeline. The Tier 2 interpreter can be enabled by configuring Python with\n--enable-experimental-jit=interpreter\n.When the JIT is enabled, the optimized Tier 2 IR is translated to machine code, which is then executed.\nThe machine code translation process uses a technique called copy-and-patch. It has no runtime dependencies, but there is a new build-time dependency on LLVM.\nSee also\n(JIT by Brandt Bucher, inspired by a paper by Haoran Xu and Fredrik Kjolstad. Tier 2 IR by Mark Shannon and Guido van Rossum. 
Tier 2 optimizer by Ken Jin.)\nDefined mutation semantics for locals()\n\u00b6\nHistorically, the expected result of mutating the return value of\nlocals()\nhas been left to individual Python implementations to define.\nStarting from Python 3.13, PEP 667 standardises\nthe historical behavior of CPython for most code execution scopes,\nbut changes optimized scopes\n(functions, generators, coroutines, comprehensions, and generator expressions)\nto explicitly return independent snapshots of the currently assigned local\nvariables, including locally referenced nonlocal variables captured in closures.\nThis change to the semantics of locals()\nin optimized scopes also\naffects the default behavior of code execution functions that implicitly\ntarget locals()\nif no explicit namespace is provided\n(such as exec()\nand eval()\n).\nIn previous versions, whether or not changes could be accessed by calling\nlocals()\nafter calling the code execution function was\nimplementation-dependent. In CPython specifically, such code would typically\nappear to work as desired, but could sometimes fail in optimized scopes based\non other code (including debuggers and code execution tracing tools)\npotentially resetting the shared snapshot in that scope.\nNow, the code will always run against an independent snapshot of\nthe local variables in optimized scopes, and hence the changes will never\nbe visible in subsequent calls to locals()\n.\nTo access the changes made in these cases, an explicit namespace reference\nmust now be passed to the relevant function.\nAlternatively, it may make sense to update affected code to use a higher level\ncode execution API that returns the resulting code execution namespace\n(e.g. 
runpy.run_path()\nwhen executing Python files from disk).\nTo ensure debuggers and similar tools can reliably update local variables in\nscopes affected by this change, FrameType.f_locals\nnow\nreturns a write-through proxy to the frame\u2019s local and locally referenced\nnonlocal variables in these scopes, rather than returning an inconsistently\nupdated shared dict\ninstance with undefined runtime semantics.\nSee PEP 667 for more details, including related C API changes and deprecations. Porting notes are also provided below for the affected Python APIs and C APIs.\n(PEP and implementation contributed by Mark Shannon and Tian Gao in gh-74929. Documentation updates provided by Guido van Rossum and Alyssa Coghlan.)\nSupport for mobile platforms\u00b6\nPEP 730: iOS is now a PEP 11 supported platform, with the\narm64-apple-ios\nand arm64-apple-ios-simulator\ntargets at tier 3\n(iPhone and iPad devices released after 2013 and the Xcode iOS simulator\nrunning on Apple silicon hardware, respectively).\nx86_64-apple-ios-simulator\n(the Xcode iOS simulator running on older x86_64\nhardware)\nis not a tier 3 supported platform, but will have best-effort support.\n(PEP written and implementation contributed by Russell Keith-Magee in\ngh-114099.)\nPEP 738: Android is now a PEP 11 supported platform, with the\naarch64-linux-android\nand x86_64-linux-android\ntargets at tier 3.\nThe 32-bit targets arm-linux-androideabi\nand i686-linux-android\nare not tier 3 supported platforms, but will have best-effort support.\n(PEP written and implementation contributed by Malcolm Smith in\ngh-116622.)\nOther Language Changes\u00b6\nThe compiler now strips common leading whitespace from every line in a docstring. This reduces the size of the bytecode cache (such as\n.pyc\nfiles), with reductions in file size of around 5%, for example in\nsqlalchemy.orm.session\nfrom SQLAlchemy 2.0. This change affects tools that use docstrings, such as\ndoctest\n.\n>>> def spam():\n...     \"\"\"\n...     This is a docstring with\n...      leading whitespace.\n...\n...     It even has multiple paragraphs!\n...     \"\"\"\n...\n>>> spam.__doc__\n'\\nThis is a docstring with\\n leading whitespace.\\n\\nIt even has multiple paragraphs!\\n'\n(Contributed by Inada Naoki in gh-81283.)\nAnnotation scopes within class scopes can now contain lambdas and comprehensions. Comprehensions that are located within class scopes are not inlined into their parent scope.\nclass C[T]:\n    type Alias = lambda: T\nFuture statements are no longer triggered by relative imports of the\n__future__\nmodule, meaning that statements of the form\nfrom .__future__ import ...\nare now simply standard relative imports, with no special features activated. (Contributed by Jeremiah Gabriel Pascual in gh-118216.)\nglobal\ndeclarations are now permitted in\nexcept\nblocks when that global is used in the\nelse\nblock. Previously this raised an erroneous\nSyntaxError\n. (Contributed by Irit Katriel in gh-111123.)\nAdd\nPYTHON_FROZEN_MODULES\n, a new environment variable that determines whether frozen modules are ignored by the import machinery, equivalent to the\n-X frozen_modules\ncommand-line option. (Contributed by Yilei Yang in gh-111374.)\nAdd support for the perf profiler working without frame pointers through the new environment variable\nPYTHON_PERF_JIT_SUPPORT\nand command-line option\n-X perf_jit\n. (Contributed by Pablo Galindo in gh-118518.)\nThe location of a\n.python_history\nfile can be changed via the new\nPYTHON_HISTORY\nenvironment variable. (Contributed by Levi Sabah, Zackery Spytz and Hugo van Kemenade in gh-73965.)\nClasses have a new\n__static_attributes__\nattribute. This is populated by the compiler with a tuple of the class\u2019s attribute names which are assigned through\nself.\nfrom any function in its body. (Contributed by Irit Katriel in gh-115775.)\nThe compiler now creates a\n__firstlineno__\nattribute on classes with the line number of the first line of the class definition.
(Contributed by Serhiy Storchaka in gh-118465.)\nThe\nexec()\nand\neval()\nbuiltins now accept the globals and locals arguments as keywords. (Contributed by Raphael Gaschignard in gh-105879.)\nThe\ncompile()\nbuiltin now accepts a new flag,\nast.PyCF_OPTIMIZED_AST\n, which is similar to\nast.PyCF_ONLY_AST\nexcept that the returned AST is optimized according to the value of the optimize argument. (Contributed by Irit Katriel in gh-108113.)\nAdd a\n__name__\nattribute on\nproperty\nobjects. (Contributed by Eugene Toder in gh-101860.)\nAdd\nPythonFinalizationError\n, a new exception derived from\nRuntimeError\nand used to signal when operations are blocked during finalization. The following callables now raise\nPythonFinalizationError\ninstead of\nRuntimeError\n:\n(Contributed by Victor Stinner in gh-114570.)\nAllow the count argument of\nstr.replace()\nto be a keyword. (Contributed by Hugo van Kemenade in gh-106487.)\nMany functions now emit a warning if a boolean value is passed as a file descriptor argument. This can help catch some errors earlier. (Contributed by Serhiy Storchaka in gh-82626.)\nAdded\nname\nand\nmode\nattributes for compressed and archived file-like objects in the\nbz2\n,\nlzma\n,\ntarfile\n, and\nzipfile\nmodules. (Contributed by Serhiy Storchaka in gh-115961.)\nNew Modules\u00b6\ndbm.sqlite3\n: An SQLite backend for\ndbm\n. (Contributed by Raymond Hettinger and Erlend E. Aasland in gh-100414.)\nImproved Modules\u00b6\nargparse\u00b6\nAdd the deprecated parameter to the\nadd_argument()\nand\nadd_parser()\nmethods, to enable deprecating command-line options, positional arguments, and subcommands. (Contributed by Serhiy Storchaka in gh-83648.)\narray\u00b6\nAdd the\n'w'\ntype code (\nPy_UCS4\n) for Unicode characters. It should be used instead of the deprecated\n'u'\ntype code. (Contributed by Inada Naoki in gh-80480.)\nRegister\narray.array\nas a\nMutableSequence\nby implementing the\nclear()\nmethod.
(Contributed by Mike Zimin in gh-114894.)\nast\u00b6\nThe constructors of node types in the\nast\nmodule are now stricter in the arguments they accept, with more intuitive behavior when arguments are omitted.If an optional field on an AST node is not included as an argument when constructing an instance, the field will now be set to\nNone\n. Similarly, if a list field is omitted, that field will now be set to an empty list, and if anexpr_context\nfield is omitted, it defaults toLoad()\n. (Previously, in all cases, the attribute would be missing on the newly constructed AST node instance.)In all other cases, where a required argument is omitted, the node constructor will emit a\nDeprecationWarning\n. This will raise an exception in Python 3.15. Similarly, passing a keyword argument to the constructor that does not map to a field on the AST node is now deprecated, and will raise an exception in Python 3.15.These changes do not apply to user-defined subclasses of\nast.AST\nunless the class opts in to the new behavior by defining theAST._field_types\nmapping.(Contributed by Jelle Zijlstra in gh-105858, gh-117486, and gh-118851.)\nast.parse()\nnow accepts an optional argument optimize which is passed on tocompile()\n. This makes it possible to obtain an optimized AST. (Contributed by Irit Katriel in gh-108113.)\nasyncio\u00b6\nasyncio.as_completed()\nnow returns an object that is both an asynchronous iterator and a plain iterator of awaitables. The awaitables yielded by asynchronous iteration include original task or future objects that were passed in, making it easier to associate results with the tasks being completed. (Contributed by Justin Arthur in gh-77714.)asyncio.loop.create_unix_server()\nwill now automatically remove the Unix socket when the server is closed. (Contributed by Pierre Ossman in gh-111246.)DatagramTransport.sendto()\nwill now send zero-length datagrams if called with an empty bytes object. 
The transport flow control also now accounts for the datagram header when calculating the buffer size. (Contributed by Jamie Phan in gh-115199.)Add\nQueue.shutdown\nandQueueShutDown\nto manage queue termination. (Contributed by Laurie Opperman and Yves Duprat in gh-104228.)Add the\nServer.close_clients()\nandServer.abort_clients()\nmethods, which more forcefully close an asyncio server. (Contributed by Pierre Ossman in gh-113538.)Accept a tuple of separators in\nStreamReader.readuntil()\n, stopping when any one of them is encountered. (Contributed by Bruce Merry in gh-81322.)Improve the behavior of\nTaskGroup\nwhen an external cancellation collides with an internal cancellation. For example, when two task groups are nested and both experience an exception in a child task simultaneously, it was possible that the outer task group would hang, because its internal cancellation was swallowed by the inner task group.In the case where a task group is cancelled externally and also must raise an\nExceptionGroup\n, it will now call the parent task\u2019scancel()\nmethod. This ensures that aCancelledError\nwill be raised at the nextawait\n, so the cancellation is not lost.An added benefit of these changes is that task groups now preserve the cancellation count (\ncancelling()\n).In order to handle some corner cases,\nuncancel()\nmay now reset the undocumented_must_cancel\nflag when the cancellation count reaches zero.(Inspired by an issue reported by Arthur Tacca in gh-116720.)\nWhen\nTaskGroup.create_task()\nis called on an inactiveTaskGroup\n, the given coroutine will be closed (which prevents aRuntimeWarning\nabout the given coroutine being never awaited). (Contributed by Arthur Tacca and Jason Zhang in gh-115957.)The function and methods named\ncreate_task\nhave received a new**kwargs\nargument that is passed through to the task constructor. This change was accidentally added in 3.13.3, and broke the API contract for custom task factories. 
Several third-party task factories implemented workarounds for this. In 3.13.4 and later releases the old factory contract is honored once again (until 3.14). To keep the workarounds working, the extra**kwargs\nargument still allows passing additional keyword arguments toTask\nand to custom task factories.This affects the following function and methods:\nasyncio.create_task()\n,asyncio.loop.create_task()\n,asyncio.TaskGroup.create_task()\n. (Contributed by Thomas Grainger in gh-128307.)\nbase64\u00b6\nAdd\nz85encode()\nandz85decode()\nfunctions for encodingbytes\nas Z85 data and decoding Z85-encoded data tobytes\n. (Contributed by Matan Perelman in gh-75299.)\ncompileall\u00b6\nThe default number of worker threads and processes is now selected using\nos.process_cpu_count()\ninstead ofos.cpu_count()\n. (Contributed by Victor Stinner in gh-109649.)\nconcurrent.futures\u00b6\nThe default number of worker threads and processes is now selected using\nos.process_cpu_count()\ninstead ofos.cpu_count()\n. (Contributed by Victor Stinner in gh-109649.)\nconfigparser\u00b6\nConfigParser\nnow has support for unnamed sections, which allows for top-level key-value pairs. This can be enabled with the new allow_unnamed_section parameter. (Contributed by Pedro Sousa Lacerda in gh-66449.)\ncopy\u00b6\nThe new\nreplace()\nfunction and thereplace protocol\nmake creating modified copies of objects much simpler. This is especially useful when working with immutable objects. The following types support thereplace()\nfunction and implement the replace protocol:Any user-defined class can also support\ncopy.replace()\nby defining the__replace__()\nmethod. (Contributed by Serhiy Storchaka in gh-108751.)\nctypes\u00b6\nAs a consequence of necessary internal refactoring, initialization of internal metaclasses now happens in\n__init__\nrather than in__new__\n. This affects projects that subclass these internal metaclasses to provide custom initialization. 
Generally:Custom logic that was done in\n__new__\nafter callingsuper().__new__\nshould be moved to__init__\n.To create a class, call the metaclass, not only the metaclass\u2019s\n__new__\nmethod.\nSee gh-124520 for discussion and links to changes in some affected projects.\nctypes.Structure\nobjects have a new_align_\nattribute which allows the alignment of the structure being packed to/from memory to be specified explicitly. (Contributed by Matt Sanderson in gh-112433)\ndbm\u00b6\nAdd\ndbm.sqlite3\n, a new module which implements an SQLite backend, and make it the defaultdbm\nbackend. (Contributed by Raymond Hettinger and Erlend E. Aasland in gh-100414.)Allow removing all items from the database through the new\nclear()\nmethods of the GDBM and NDBM database objects. (Contributed by Donghee Na in gh-107122.)\ndis\u00b6\nChange the output of\ndis\nmodule functions to show logical labels for jump targets and exception handlers, rather than offsets. The offsets can be added with the new-O\ncommand-line option or the show_offsets argument. (Contributed by Irit Katriel in gh-112137.)get_instructions()\nno longer represents cache entries as separate instructions. Instead, it returns them as part of theInstruction\n, in the new cache_info field. The show_caches argument toget_instructions()\nis deprecated and no longer has any effect. (Contributed by Irit Katriel in gh-112962.)\ndoctest\u00b6\ndoctest\noutput is now colored by default. This can be controlled via the newPYTHON_COLORS\nenvironment variable as well as the canonicalNO_COLOR\nandFORCE_COLOR\nenvironment variables. See also Controlling color. (Contributed by Hugo van Kemenade in gh-117225.)The\nDocTestRunner.run()\nmethod now counts the number of skipped tests. Add theDocTestRunner.skips\nandTestResults.skipped\nattributes. (Contributed by Victor Stinner in gh-108794.)\nemail\u00b6\nHeaders with embedded newlines are now quoted on output. 
The\ngenerator\nwill now refuse to serialize (write) headers that are improperly folded or delimited, such that they would be parsed as multiple headers or joined with adjacent data. If you need to turn this safety feature off, setverify_generated_headers\n. (Contributed by Bas Bloemsaat and Petr Viktorin in gh-121650.)getaddresses()\nandparseaddr()\nnow return('', '')\npairs in more situations where invalid email addresses are encountered instead of potentially inaccurate values. The two functions have a new optional strict parameter (defaultTrue\n). To get the old behavior (accepting malformed input), usestrict=False\n.getattr(email.utils, 'supports_strict_parsing', False)\ncan be used to check if the strict parameter is available. (Contributed by Thomas Dwyer and Victor Stinner for gh-102988 to improve the CVE 2023-27043 fix.)\nenum\u00b6\nfractions\u00b6\nFraction\nobjects now support the standard format specification mini-language rules for fill, alignment, sign handling, minimum width, and grouping. (Contributed by Mark Dickinson in gh-111320.)\nglob\u00b6\nAdd\ntranslate()\n, a function to convert a path specification with shell-style wildcards to a regular expression. (Contributed by Barney Gale in gh-72904.)\nimportlib\u00b6\nThe following functions in\nimportlib.resources\nnow allow accessing a directory (or tree) of resources, using multiple positional arguments (the encoding and errors arguments in the text-reading functions are now keyword-only):These functions are no longer deprecated and are not scheduled for removal. (Contributed by Petr Viktorin in gh-116608.)\ncontents()\nremains deprecated in favor of the fully-featuredTraversable\nAPI. However, there is now no plan to remove it. (Contributed by Petr Viktorin in gh-116608.)\nio\u00b6\nThe\nIOBase\nfinalizer now logs any errors raised by theclose()\nmethod withsys.unraisablehook\n. 
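The stricter email address parsing can be illustrated as follows; this sketch feature-detects via supports_strict_parsing as the text suggests, and the malformed address is an invented example:

```python
import email.utils

# Well-formed input parses the same as before.
good = email.utils.getaddresses(
    ["Alice <alice@example.com>, Bob <bob@example.com>"]
)

if getattr(email.utils, "supports_strict_parsing", False):
    # Under the default strict mode, a misparsed address comes back as
    # an ('', '') pair instead of a potentially inaccurate value.
    bad = email.utils.getaddresses(["alice@example.com <bob@example.com>"])
else:
    # Older interpreter without the strict parameter.
    bad = [("", "")]
```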
Previously, errors were ignored silently by default, and only logged in Python Development Mode or when using a Python debug build. (Contributed by Victor Stinner in gh-62948.)\nipaddress\u00b6\nAdd the\nIPv4Address.ipv6_mapped\nproperty, which returns the IPv4-mapped IPv6 address. (Contributed by Charles Machalow in gh-109466.)Fix\nis_global\nandis_private\nbehavior inIPv4Address\n,IPv6Address\n,IPv4Network\n, andIPv6Network\n. (Contributed by Jakub Stasiak in gh-113171.)\nitertools\u00b6\nbatched()\nhas a new strict parameter, which raises aValueError\nif the final batch is shorter than the specified batch size. (Contributed by Raymond Hettinger in gh-113202.)\nmarshal\u00b6\nAdd the allow_code parameter in module functions. Passing\nallow_code=False\nprevents serialization and de-serialization of code objects which are incompatible between Python versions. (Contributed by Serhiy Storchaka in gh-113626.)\nmath\u00b6\nThe new function\nfma()\nperforms fused multiply-add operations. This computesx * y + z\nwith only a single round, and so avoids any intermediate loss of precision. It wraps thefma()\nfunction provided by C99, and follows the specification of the IEEE 754 \u201cfusedMultiplyAdd\u201d operation for special cases. (Contributed by Mark Dickinson and Victor Stinner in gh-73468.)\nmimetypes\u00b6\nAdd the\nguess_file_type()\nfunction to guess a MIME type from a filesystem path. Using paths withguess_type()\nis now soft deprecated. (Contributed by Serhiy Storchaka in gh-66543.)\nmmap\u00b6\nmmap\nis now protected from crashing on Windows when the mapped memory is inaccessible due to file system errors or access violations. (Contributed by Jannis Weigend in gh-118209.)mmap\nhas a newseekable()\nmethod that can be used when a seekable file-like object is required. Theseek()\nmethod now returns the new absolute position. 
(Contributed by Donghee Na and Sylvie Liberman in gh-111835.)The new UNIX-only trackfd parameter for\nmmap\ncontrols file descriptor duplication; if false, the file descriptor specified by fileno will not be duplicated. (Contributed by Zackery Spytz and Petr Viktorin in gh-78502.)\nmultiprocessing\u00b6\nThe default number of worker threads and processes is now selected using\nos.process_cpu_count()\ninstead ofos.cpu_count()\n. (Contributed by Victor Stinner in gh-109649.)\nos\u00b6\nAdd\nprocess_cpu_count()\nfunction to get the number of logical CPU cores usable by the calling thread of the current process. (Contributed by Victor Stinner in gh-109649.)cpu_count()\nandprocess_cpu_count()\ncan be overridden through the new environment variablePYTHON_CPU_COUNT\nor the new command-line option-X cpu_count\n. This option is useful for users who need to limit CPU resources of a container system without having to modify application code or the container itself. (Contributed by Donghee Na in gh-109595.)Add a low level interface to Linux\u2019s timer file descriptors via\ntimerfd_create()\n,timerfd_settime()\n,timerfd_settime_ns()\n,timerfd_gettime()\n,timerfd_gettime_ns()\n,TFD_NONBLOCK\n,TFD_CLOEXEC\n,TFD_TIMER_ABSTIME\n, andTFD_TIMER_CANCEL_ON_SET\n(Contributed by Masaru Tsuchiyama in gh-108277.)lchmod()\nand the follow_symlinks argument ofchmod()\nare both now available on Windows. Note that the default value of follow_symlinks inlchmod()\nisFalse\non Windows. (Contributed by Serhiy Storchaka in gh-59616.)fchmod()\nand support for file descriptors inchmod()\nare both now available on Windows. (Contributed by Serhiy Storchaka in gh-113191.)On Windows,\nmkdir()\nandmakedirs()\nnow support passing a mode value of0o700\nto apply access control to the new directory. This implicitly affectstempfile.mkdtemp()\nand is a mitigation for CVE 2024-4030. Other values for mode continue to be ignored. 
(Contributed by Steve Dower in gh-118486.)posix_spawn()\nnow acceptsNone\nfor the env argument, which makes the newly spawned process use the current process environment. (Contributed by Jakub Kulik in gh-113119.)posix_spawn()\ncan now use thePOSIX_SPAWN_CLOSEFROM\nattribute in the file_actions parameter on platforms that supportposix_spawn_file_actions_addclosefrom_np()\n. (Contributed by Jakub Kulik in gh-113117.)\nos.path\u00b6\nAdd\nisreserved()\nto check if a path is reserved on the current system. This function is only available on Windows. (Contributed by Barney Gale in gh-88569.)On Windows,\nisabs()\nno longer considers paths starting with exactly one slash (\\\nor/\n) to be absolute. (Contributed by Barney Gale and Jon Foster in gh-44626.)realpath()\nnow resolves MS-DOS style file names even if the file is not accessible. (Contributed by Moonsik Park in gh-82367.)\npathlib\u00b6\nAdd\nUnsupportedOperation\n, which is raised instead ofNotImplementedError\nwhen a path operation isn\u2019t supported. (Contributed by Barney Gale in gh-89812.)Add a new constructor for creating\nPath\nobjects from \u2018file\u2019 URIs (file:///\n),Path.from_uri()\n. (Contributed by Barney Gale in gh-107465.)Add\nPurePath.full_match()\nfor matching paths with shell-style wildcards, including the recursive wildcard \u201c**\n\u201d. (Contributed by Barney Gale in gh-73435.)Add the\nPurePath.parser\nclass attribute to store the implementation ofos.path\nused for low-level path parsing and joining. This will be eitherposixpath\norntpath\n.Add recurse_symlinks keyword-only argument to\nPath.glob()\nandrglob()\n. (Contributed by Barney Gale in gh-77609.)Path.glob()\nandrglob()\nnow return files and directories when given a pattern that ends with \u201c**\n\u201d. Previously, only directories were returned. (Contributed by Barney Gale in gh-70303.)Add the follow_symlinks keyword-only argument to\nPath.is_file\n,Path.is_dir\n,Path.owner()\n, andPath.group()\n. 
(Contributed by Barney Gale in gh-105793 and Kamil Turek in gh-107962.)\npdb\u00b6\nbreakpoint()\nandset_trace()\nnow enter the debugger immediately rather than on the next line of code to be executed. This change prevents the debugger from breaking outside of the context whenbreakpoint()\nis positioned at the end of the context. (Contributed by Tian Gao in gh-118579.)sys.path[0]\nis no longer replaced by the directory of the script being debugged whensys.flags.safe_path\nis set. (Contributed by Tian Gao and Christian Walther in gh-111762.)zipapp\nis now supported as a debugging target. (Contributed by Tian Gao in gh-118501.)Add ability to move between chained exceptions during post-mortem debugging in\npm()\nusing the newexceptions [exc_number]\ncommand for Pdb. (Contributed by Matthias Bussonnier in gh-106676.)Expressions and statements whose prefix is a pdb command are now correctly identified and executed. (Contributed by Tian Gao in gh-108464.)\nqueue\u00b6\nAdd\nQueue.shutdown\nandShutDown\nto manage queue termination. (Contributed by Laurie Opperman and Yves Duprat in gh-104750.)\nrandom\u00b6\nAdd a command-line interface. (Contributed by Hugo van Kemenade in gh-118131.)\nre\u00b6\nRename\nre.error\ntoPatternError\nfor improved clarity.re.error\nis kept for backward compatibility.\nshutil\u00b6\nsite\u00b6\n.pth\nfiles are now decoded using UTF-8 first, and then with the locale encoding if UTF-8 decoding fails. (Contributed by Inada Naoki in gh-117802.)\nsqlite3\u00b6\nA\nResourceWarning\nis now emitted if aConnection\nobject is notclosed\nexplicitly. (Contributed by Erlend E. Aasland in gh-105539.)Add the filter keyword-only parameter to\nConnection.iterdump()\nfor filtering database objects to dump. 
(Contributed by Mariusz Felisiak in gh-91602.)\nssl\u00b6\nThe\ncreate_default_context()\nAPI now includes\nVERIFY_X509_PARTIAL_CHAIN\nand\nVERIFY_X509_STRICT\nin its default flags.\nNote\nVERIFY_X509_STRICT\nmay reject pre-RFC 5280 or malformed certificates that the underlying OpenSSL implementation might otherwise accept. Whilst disabling this is not recommended, you can do so using:\nimport ssl\nctx = ssl.create_default_context()\nctx.verify_flags &= ~ssl.VERIFY_X509_STRICT\n(Contributed by William Woodruff in gh-112389.)\nstatistics\u00b6\nAdd\nkde()\nfor kernel density estimation. This makes it possible to estimate a continuous probability density function from a fixed number of discrete samples. (Contributed by Raymond Hettinger in gh-115863.)\nAdd\nkde_random()\nfor sampling from an estimated probability density function created by\nkde()\n. (Contributed by Raymond Hettinger in gh-115863.)\nsubprocess\u00b6\nThe\nsubprocess\nmodule now uses the\nposix_spawn()\nfunction in more situations.\nNotably, when close_fds is\nTrue\n(the default),\nposix_spawn()\nwill be used when the C library provides\nposix_spawn_file_actions_addclosefrom_np()\n, which includes recent versions of Linux, FreeBSD, and Solaris. On Linux, this should perform similarly to the existing Linux\nvfork()\nbased code.\nA private control knob\nsubprocess._USE_POSIX_SPAWN\ncan be set to\nFalse\nif you need to force\nsubprocess\nto never use\nposix_spawn()\n. Please report your reason and platform details in the issue tracker if you set this so that we can improve our API selection logic for everyone. (Contributed by Jakub Kulik in gh-113117.)\nsys\u00b6\nAdd the\n_is_interned()\nfunction to test if a string was interned. This function is not guaranteed to exist in all implementations of Python. (Contributed by Serhiy Storchaka in gh-78573.)\ntempfile\u00b6\nOn Windows, the default mode\n0o700\nused by\ntempfile.mkdtemp()\nnow limits access to the new directory due to changes to\nos.mkdir()\n.
This is a mitigation for CVE 2024-4030. (Contributed by Steve Dower in gh-118486.)\ntime\u00b6\nOn Windows,\nmonotonic()\nnow uses theQueryPerformanceCounter()\nclock for a resolution of 1 microsecond, instead of theGetTickCount64()\nclock which has a resolution of 15.6 milliseconds. (Contributed by Victor Stinner in gh-88494.)On Windows,\ntime()\nnow uses theGetSystemTimePreciseAsFileTime()\nclock for a resolution of 1 microsecond, instead of theGetSystemTimeAsFileTime()\nclock which has a resolution of 15.6 milliseconds. (Contributed by Victor Stinner in gh-63207.)\ntkinter\u00b6\nAdd\ntkinter\nwidget methods:tk_busy_hold()\n,tk_busy_configure()\n,tk_busy_cget()\n,tk_busy_forget()\n,tk_busy_current()\n, andtk_busy_status()\n. (Contributed by Miguel, klappnase and Serhiy Storchaka in gh-72684.)The\ntkinter\nwidget methodwm_attributes()\nnow accepts the attribute name without the minus prefix to get window attributes, for examplew.wm_attributes('alpha')\nand allows specifying attributes and values to set as keyword arguments, for examplew.wm_attributes(alpha=0.5)\n. (Contributed by Serhiy Storchaka in gh-43457.)wm_attributes()\ncan now return attributes as adict\n, by using the new optional keyword-only parameter return_python_dict. (Contributed by Serhiy Storchaka in gh-43457.)Text.count()\ncan now return a simpleint\nwhen the new optional keyword-only parameter return_ints is used. Otherwise, the single count is returned as a 1-tuple orNone\n. (Contributed by Serhiy Storchaka in gh-97928.)Support the \u201cvsapi\u201d element type in the\nelement_create()\nmethod oftkinter.ttk.Style\n. (Contributed by Serhiy Storchaka in gh-68166.)Add the\nafter_info()\nmethod for Tkinter widgets. (Contributed by Cheryl Sabella in gh-77020.)Add a new\ncopy_replace()\nmethod toPhotoImage\nto copy a region from one image to another, possibly with pixel zooming, subsampling, or both. 
(Contributed by Serhiy Storchaka in gh-118225.)Add from_coords parameter to the\nPhotoImage\nmethodscopy()\n,zoom()\nandsubsample()\n. Add zoom and subsample parameters to thePhotoImage\nmethodcopy()\n. (Contributed by Serhiy Storchaka in gh-118225.)Add the\nPhotoImage\nmethodsread()\nto read an image from a file anddata()\nto get the image data. Add background and grayscale parameters to thewrite()\nmethod. (Contributed by Serhiy Storchaka in gh-118271.)\ntraceback\u00b6\nAdd the\nexc_type_str\nattribute toTracebackException\n, which holds a string display of the exc_type. Deprecate theexc_type\nattribute, which holds the type object itself. Add parameter save_exc_type (defaultTrue\n) to indicate whetherexc_type\nshould be saved. (Contributed by Irit Katriel in gh-112332.)Add a new show_group keyword-only parameter to\nTracebackException.format_exception_only()\nto (recursively) format the nested exceptions of aBaseExceptionGroup\ninstance. (Contributed by Irit Katriel in gh-105292.)\ntypes\u00b6\nSimpleNamespace\ncan now take a single positional argument to initialise the namespace\u2019s arguments. This argument must either be a mapping or an iterable of key-value pairs. (Contributed by Serhiy Storchaka in gh-108191.)\ntyping\u00b6\nPEP 705: Add\nReadOnly\n, a special typing construct to mark aTypedDict\nitem as read-only for type checkers.PEP 742: Add\nTypeIs\n, a typing construct that can be used to instruct a type checker how to narrow a type.Add\nNoDefault\n, a sentinel object used to represent the defaults of some parameters in thetyping\nmodule. (Contributed by Jelle Zijlstra in gh-116126.)Add\nget_protocol_members()\nto return the set of members defining atyping.Protocol\n. (Contributed by Jelle Zijlstra in gh-104873.)Add\nis_protocol()\nto check whether a class is aProtocol\n. (Contributed by Jelle Zijlstra in gh-104873.)ClassVar\ncan now be nested inFinal\n, and vice versa. 
(Contributed by Mehdi Drissi in gh-89547.)\nunicodedata\u00b6\nUpdate the Unicode database to version 15.1.0. (Contributed by James Gerity in gh-109559.)\nvenv\u00b6\nAdd support for creating source control management (SCM) ignore files in a virtual environment\u2019s directory. By default, Git is supported. This is implemented as opt-in via the API, which can be extended to support other SCMs (\nEnvBuilder\nandcreate()\n), and opt-out via the CLI, using--without-scm-ignore-files\n. (Contributed by Brett Cannon in gh-108125.)\nwarnings\u00b6\nPEP 702: The new\nwarnings.deprecated()\ndecorator provides a way to communicate deprecations to a static type checker and to warn on usage of deprecated classes and functions. ADeprecationWarning\nmay also be emitted when a decorated function or class is used at runtime. (Contributed by Jelle Zijlstra in gh-104003.)\nxml\u00b6\nAllow controlling Expat >=2.6.0 reparse deferral (CVE 2023-52425) by adding five new methods:\nxml.sax.expatreader.ExpatParser.flush()\n(Contributed by Sebastian Pipping in gh-115623.)\nAdd the\nclose()\nmethod for the iterator returned byiterparse()\nfor explicit cleanup. (Contributed by Serhiy Storchaka in gh-69893.)\nzipimport\u00b6\nOptimizations\u00b6\nSeveral standard library modules have had their import times significantly improved. For example, the import time of the\ntyping\nmodule has been reduced by around a third by removing dependencies onre\nandcontextlib\n. Other modules to enjoy import-time speedups includeemail.utils\n,enum\n,functools\n,importlib.metadata\n, andthreading\n. (Contributed by Alex Waygood, Shantanu Jain, Adam Turner, Daniel Hollas, and others in gh-109653.)textwrap.indent()\nis now around 30% faster than before for large input. (Contributed by Inada Naoki in gh-107369.)The\nsubprocess\nmodule now uses theposix_spawn()\nfunction in more situations, including when close_fds isTrue\n(the default) on many modern platforms. 
This should provide a notable performance increase when launching processes on FreeBSD and Solaris. See the subprocess section above for details. (Contributed by Jakub Kulik in gh-113117.)\nRemoved Modules And APIs\u00b6\nPEP 594: Remove \u201cdead batteries\u201d from the standard library\u00b6\nPEP 594 proposed removing 19 modules from the standard library, colloquially referred to as \u2018dead batteries\u2019 due to their historic, obsolete, or insecure status. All of the following modules were deprecated in Python 3.11, and are now removed:\naifc\nstandard-aifc: Use the redistribution of the\naifc\nlibrary from PyPI.\naudioop\naudioop-lts: Use the\naudioop-lts\nlibrary from PyPI.\nchunk\nstandard-chunk: Use the redistribution of the\nchunk\nlibrary from PyPI.\ncgi\nand\ncgitb\ncgi.FieldStorage\ncan typically be replaced with\nurllib.parse.parse_qsl()\nfor\nGET\nand\nHEAD\nrequests, and the\nemail.message\nmodule or the multipart library for\nPOST\nand\nPUT\nrequests.\ncgi.parse()\ncan be replaced by calling\nurllib.parse.parse_qs()\ndirectly on the desired query string, unless the input is\nmultipart/form-data\n, which should be replaced as described below for\ncgi.parse_multipart()\n.\ncgi.parse_header()\ncan be replaced with the functionality in the\nemail\npackage, which implements the same MIME RFCs. For example, with\nemail.message.EmailMessage\n:\nfrom email.message import EmailMessage\nmsg = EmailMessage()\nmsg['content-type'] = 'application/json; charset=\"utf8\"'\nmain, params = msg.get_content_type(), msg['content-type'].params\ncgi.parse_multipart()\ncan be replaced with the functionality in the\nemail\npackage, which implements the same MIME RFCs, or with the multipart library. For example, the\nemail.message.EmailMessage\nand\nemail.message.Message\nclasses.\nstandard-cgi and standard-cgitb: Use the redistributions of the\ncgi\nand\ncgitb\nlibraries from PyPI.\ncrypt\nand the private\n_crypt\nextension. The\nhashlib\nmodule may be an appropriate replacement when simply hashing a value is required.
Otherwise, various third-party libraries on PyPI are available:bcrypt: Modern password hashing for your software and your servers.\nargon2-cffi: The secure Argon2 password hashing algorithm.\nlegacycrypt:\nctypes\nwrapper to the POSIX crypt library call and associated functionality.crypt_r: Fork of the\ncrypt\nmodule, wrapper to the crypt_r(3) library call and associated functionality.standard-crypt and deprecated-crypt-alternative: Use the redistribution of\ncrypt\nand reimplementation of_crypt\nlibraries from PyPI.\nimghdr\n: The filetype, puremagic, or python-magic libraries should be used as replacements. For example, thepuremagic.what()\nfunction can be used to replace theimghdr.what()\nfunction for all file formats that were supported byimghdr\n.standard-imghdr: Use the redistribution of\nimghdr\nlibrary from PyPI.\nmailcap\n: Use themimetypes\nmodule instead.standard-mailcap: Use the redistribution of\nmailcap\nlibrary from PyPI.\nmsilib\nnis\nnntplib\n: Use the pynntp library from PyPI instead.standard-nntplib: Use the redistribution of\nnntplib\nlibrary from PyPI.\nossaudiodev\n: For audio playback, use the pygame library from PyPI instead.pipes\n: Use thesubprocess\nmodule instead. 
Useshlex.quote()\nto replace the undocumentedpipes.quote\nfunction.standard-pipes: Use the redistribution of\npipes\nlibrary from PyPI.\nsndhdr\n: The filetype, puremagic, or python-magic libraries should be used as replacements.standard-sndhdr: Use the redistribution of\nsndhdr\nlibrary from PyPI.\nspwd\n: Use the python-pam library from PyPI instead.sunau\nstandard-sunau: Use the redistribution of\nsunau\nlibrary from PyPI.\ntelnetlib\n, Use the telnetlib3 or Exscript libraries from PyPI instead.standard-telnetlib: Use the redistribution of\ntelnetlib\nlibrary from PyPI.\nuu\n: Use thebase64\nmodule instead, as a modern alternative.standard-uu: Use the redistribution of\nuu\nlibrary from PyPI.\nxdrlib\nstandard-xdrlib: Use the redistribution of\nxdrlib\nlibrary from PyPI.\n(Contributed by Victor Stinner and Zachary Ware in gh-104773 and gh-104780.)\n2to3\u00b6\nRemove the 2to3 program and the\nlib2to3\nmodule, previously deprecated in Python 3.11. (Contributed by Victor Stinner in gh-104780.)\nbuiltins\u00b6\nRemove support for chained\nclassmethod\ndescriptors (introduced in gh-63272). These can no longer be used to wrap other descriptors, such asproperty\n. The core design of this feature was flawed and led to several problems. To \u201cpass-through\u201d aclassmethod\n, consider using the__wrapped__\nattribute that was added in Python 3.10. (Contributed by Raymond Hettinger in gh-89519.)Raise a\nRuntimeError\nwhen callingframe.clear()\non a suspended frame (as has always been the case for an executing frame). (Contributed by Irit Katriel in gh-79932.)\nconfigparser\u00b6\nRemove the undocumented\nLegacyInterpolation\nclass, deprecated in the docstring since Python 3.2, and at runtime since Python 3.11. (Contributed by Hugo van Kemenade in gh-104886.)\nimportlib.metadata\u00b6\nRemove deprecated subscript (\n__getitem__()\n) access for EntryPoint objects. (Contributed by Jason R. 
Coombs in gh-113175.)

locale¶
Remove the locale.resetlocale() function, deprecated in Python 3.11. Use locale.setlocale(locale.LC_ALL, "") instead. (Contributed by Victor Stinner in gh-104783.)

opcode¶
- Move opcode.ENABLE_SPECIALIZATION to _opcode.ENABLE_SPECIALIZATION. This field was added in 3.12, was never documented, and is not intended for external use. (Contributed by Irit Katriel in gh-105481.)
- Remove opcode.is_pseudo(), opcode.MIN_PSEUDO_OPCODE, and opcode.MAX_PSEUDO_OPCODE, which were added in Python 3.12 but were neither documented nor exposed through dis, and were not intended to be used externally. (Contributed by Irit Katriel in gh-105481.)

optparse¶
This module is no longer considered soft deprecated. While argparse remains preferred for new projects that aren't using a third-party command-line argument processing library, there are aspects of the way argparse works that mean the lower-level optparse module may provide a better foundation for writing argument processing libraries, and for implementing command-line applications which adhere more strictly than argparse does to various Unix command-line processing conventions that originate in the behaviour of the C getopt() function. (Contributed by Alyssa Coghlan and Serhiy Storchaka in gh-126180.)

pathlib¶

re¶
Remove the undocumented, deprecated, and broken re.template() function and re.TEMPLATE/re.T flag. (Contributed by Serhiy Storchaka and Nikita Sobolev in gh-105687.)

tkinter.tix¶
Remove the tkinter.tix module, deprecated in Python 3.6. The third-party Tix library which the module wrapped is unmaintained. (Contributed by Zachary Ware in gh-75552.)

turtle¶
Remove the RawTurtle.settiltangle() method, deprecated in the documentation since Python 3.1 and at runtime since Python 3.11. (Contributed by Hugo van Kemenade in gh-104876.)

typing¶
- Remove the typing.io and typing.re namespaces, deprecated since Python 3.8. 
The items in those namespaces can be imported directly from the typing module. (Contributed by Sebastian Rittau in gh-92871.)
- Remove the keyword-argument method of creating TypedDict types, deprecated in Python 3.11. (Contributed by Tomas Roun in gh-104786.)

unittest¶
- Remove the following unittest functions, deprecated in Python 3.11:
  - unittest.findTestCases()
  - unittest.makeSuite()
  - unittest.getTestCaseNames()
  Use TestLoader methods instead. (Contributed by Hugo van Kemenade in gh-104835.)
- Remove the untested and undocumented TestProgram.usageExit() method, deprecated in Python 3.11. (Contributed by Hugo van Kemenade in gh-104992.)

urllib¶
Remove the cafile, capath, and cadefault parameters of the urllib.request.urlopen() function, deprecated in Python 3.6. Use the context parameter instead with an SSLContext instance. The ssl.SSLContext.load_cert_chain() function can be used to load specific certificates, or let ssl.create_default_context() select the operating system's trusted certificate authority (CA) certificates. (Contributed by Victor Stinner in gh-105382.)

webbrowser¶
- Remove the untested and undocumented MacOSX class, deprecated in Python 3.11. Use the MacOSXOSAScript class (introduced in Python 3.2) instead. (Contributed by Hugo van Kemenade in gh-104804.)
- Remove the deprecated MacOSXOSAScript._name attribute. Use the MacOSXOSAScript.name attribute instead. (Contributed by Nikita Sobolev in gh-105546.)

New Deprecations¶
ctypes:
- Deprecate the undocumented SetPointerType() function, to be removed in Python 3.15. (Contributed by Victor Stinner in gh-105733.)
- Soft-deprecate the ARRAY() function in favour of type * length multiplication. (Contributed by Victor Stinner in gh-105733.)

dis:

gettext:
- Deprecate non-integer numbers as arguments to functions and methods that consider plural forms in the gettext module, even if no translation was found. 
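The plural-form machinery this deprecation targets can be seen with ngettext(), a quick sketch using the no-catalog fallback behaviour:

```python
import gettext

# With no translation catalog installed, ngettext() falls back to choosing
# between the two msgids based on the integer count n.
assert gettext.ngettext("apple", "apples", 1) == "apple"
assert gettext.ngettext("apple", "apples", 3) == "apples"
# Passing a float such as 1.0 for n is what the deprecation above targets:
# plural selection is defined only for integers.
```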
(Contributed by Serhiy Storchaka in gh-88434.)

glob:
- Deprecate the undocumented glob0() and glob1() functions. Use glob() and pass a path-like object specifying the root directory to the root_dir parameter instead. (Contributed by Barney Gale in gh-117337.)

http.server:
- Deprecate CGIHTTPRequestHandler, to be removed in Python 3.15. Process-based CGI HTTP servers have been out of favor for a very long time. This code was outdated, unmaintained, and rarely used. It has a high potential for both security and functionality bugs. (Contributed by Gregory P. Smith in gh-109096.)
- Deprecate the --cgi flag to the python -m http.server command-line interface, to be removed in Python 3.15. (Contributed by Gregory P. Smith in gh-109096.)

mimetypes:
- Soft-deprecate file path arguments to guess_type(); use guess_file_type() instead. (Contributed by Serhiy Storchaka in gh-66543.)

re:
- Deprecate passing the optional maxsplit, count, or flags arguments as positional arguments to the module-level split(), sub(), and subn() functions. These parameters will become keyword-only in a future version of Python. (Contributed by Serhiy Storchaka in gh-56166.)

pathlib:
- Deprecate PurePath.is_reserved(), to be removed in Python 3.15. Use os.path.isreserved() to detect reserved paths on Windows. (Contributed by Barney Gale in gh-88569.)

platform:
- Deprecate java_ver(), to be removed in Python 3.15. This function is only useful for Jython support, has a confusing API, and is largely untested. (Contributed by Nikita Sobolev in gh-116349.)

pydoc:
- Deprecate the undocumented ispackage() function. (Contributed by Zackery Spytz in gh-64020.)

sqlite3:
- Deprecate passing more than one positional argument to the connect() function and the Connection constructor. The remaining parameters will become keyword-only in Python 3.15. (Contributed by Erlend E. 
Aasland in gh-107948.)
- Deprecate passing the name, the number of arguments, and the callable as keyword arguments for Connection.create_function() and Connection.create_aggregate(). These parameters will become positional-only in Python 3.15. (Contributed by Erlend E. Aasland in gh-108278.)
- Deprecate passing the callback callable by keyword for the set_authorizer(), set_progress_handler(), and set_trace_callback() Connection methods. The callback callables will become positional-only in Python 3.15. (Contributed by Erlend E. Aasland in gh-108278.)

sys:
- Deprecate the _enablelegacywindowsfsencoding() function, to be removed in Python 3.16. Use the PYTHONLEGACYWINDOWSFSENCODING environment variable instead. (Contributed by Inada Naoki in gh-73427.)

tarfile:
- Deprecate the undocumented and unused TarFile.tarfile attribute, to be removed in Python 3.16. (Contributed in gh-115256.)

traceback:
- Deprecate the TracebackException.exc_type attribute. Use TracebackException.exc_type_str instead. (Contributed by Irit Katriel in gh-112332.)

typing:
- Deprecate the undocumented keyword argument syntax for creating NamedTuple classes (e.g. Point = NamedTuple("Point", x=int, y=int)), to be removed in Python 3.15. Use the class-based syntax or the functional syntax instead. (Contributed by Alex Waygood in gh-105566.)
- Deprecate omitting the fields parameter when creating a NamedTuple or typing.TypedDict class, and deprecate passing None to the fields parameter of both types. Python 3.15 will require a valid sequence for the fields parameter. To create a NamedTuple class with zero fields, use class NT(NamedTuple): pass or NT = NamedTuple("NT", ()). To create a TypedDict class with zero fields, use class TD(TypedDict): pass or TD = TypedDict("TD", {}). (Contributed by Alex Waygood in gh-105566 and gh-105570.)
- Deprecate the typing.no_type_check_decorator() decorator function, to be removed in Python 3.15. 
After eight years in the typing module, it has yet to be supported by any major type checker. (Contributed by Alex Waygood in gh-106309.)
- Deprecate typing.AnyStr. In Python 3.16, it will be removed from typing.__all__, and a DeprecationWarning will be emitted at runtime when it is imported or accessed. It will be removed entirely in Python 3.18. Use the new type parameter syntax instead. (Contributed by Michael The in gh-107116.)

wave:
- Deprecate the getmark(), setmark(), and getmarkers() methods of the Wave_read and Wave_write classes, to be removed in Python 3.15. (Contributed by Victor Stinner in gh-105096.)

Pending removal in Python 3.14¶
- argparse: The type, choices, and metavar parameters of argparse.BooleanOptionalAction are deprecated and will be removed in 3.14. (Contributed by Nikita Sobolev in gh-92248.)
- ast: The following features have been deprecated in documentation since Python 3.8, now cause a DeprecationWarning to be emitted at runtime when they are accessed or used, and will be removed in Python 3.14:
  - ast.Num
  - ast.Str
  - ast.Bytes
  - ast.NameConstant
  - ast.Ellipsis
  Use ast.Constant instead. (Contributed by Serhiy Storchaka in gh-90953.)
- asyncio:
  - The child watcher classes asyncio.MultiLoopChildWatcher, asyncio.FastChildWatcher, asyncio.AbstractChildWatcher, and asyncio.SafeChildWatcher are deprecated and will be removed in Python 3.14. (Contributed by Kumar Aditya in gh-94597.)
  - asyncio.set_child_watcher(), asyncio.get_child_watcher(), asyncio.AbstractEventLoopPolicy.set_child_watcher(), and asyncio.AbstractEventLoopPolicy.get_child_watcher() are deprecated and will be removed in Python 3.14. (Contributed by Kumar Aditya in gh-94597.)
  - The get_event_loop() method of the default event loop policy now emits a DeprecationWarning if there is no current event loop set and it decides to create one. (Contributed by Serhiy Storchaka and Guido van Rossum in gh-100160.)
- email: Deprecated the isdst parameter in email.utils.localtime(). 
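For the ast entries above, a quick sketch of the replacement: since the specialized literal nodes are going away, every literal parses to the single ast.Constant node type, with the actual value held in its .value attribute:

```python
import ast

# All literals parse to ast.Constant; inspect .value to distinguish them.
node = ast.parse("42").body[0].value
assert isinstance(node, ast.Constant)
assert node.value == 42

# Strings land in the same node type (previously ast.Str).
snode = ast.parse("'hi'").body[0].value
assert isinstance(snode, ast.Constant)
assert snode.value == "hi"
```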
(Contributed by Alan Williams in gh-72346.)
- importlib.abc deprecated classes:
  - importlib.abc.ResourceReader
  - importlib.abc.Traversable
  - importlib.abc.TraversableResources
  Use importlib.resources.abc classes instead. (Contributed by Jason R. Coombs and Hugo van Kemenade in gh-93963.)
- itertools had undocumented, inefficient, historically buggy, and inconsistent support for copy, deepcopy, and pickle operations. This will be removed in 3.14 for a significant reduction in code volume and maintenance burden. (Contributed by Raymond Hettinger in gh-101588.)
- multiprocessing: The default start method will change to a safer one on Linux, BSDs, and other non-macOS POSIX platforms where 'fork' is currently the default (gh-84559). Adding a runtime warning about this was deemed too disruptive, as the majority of code is not expected to care. Use the get_context() or set_start_method() APIs to explicitly specify when your code requires 'fork'. See Contexts and start methods.
- pathlib: is_relative_to() and relative_to(): passing additional arguments is deprecated.
- pkgutil: pkgutil.find_loader() and pkgutil.get_loader() now raise DeprecationWarning; use importlib.util.find_spec() instead. (Contributed by Nikita Sobolev in gh-97850.)
- pty:
  - master_open(): use pty.openpty().
  - slave_open(): use pty.openpty().
- sqlite3:
  - version and version_info.
  - execute() and executemany() if named placeholders are used and parameters is a sequence instead of a dict.
- urllib: urllib.parse.Quoter is deprecated: it was not intended to be a public API. (Contributed by Gregory P. Smith in gh-88168.)

Pending removal in Python 3.15¶
The import system:
- Setting __cached__ on a module while failing to set __spec__.cached is deprecated. In Python 3.15, __cached__ will cease to be set or taken into consideration by the import system or standard library. (gh-97879)
- Setting __package__ on a module while failing to set __spec__.parent is deprecated. 
In Python 3.15, __package__ will cease to be set or taken into consideration by the import system or standard library. (gh-97879)
- ctypes: The undocumented ctypes.SetPointerType() function has been deprecated since Python 3.13.
- http.server:
  - The obsolete and rarely used CGIHTTPRequestHandler has been deprecated since Python 3.13. No direct replacement exists. Anything is better than CGI for interfacing a web server with a request handler.
  - The --cgi flag to the python -m http.server command-line interface has been deprecated since Python 3.13.
- importlib: the load_module() method: use exec_module() instead.
- locale: The getdefaultlocale() function has been deprecated since Python 3.11. Its removal was originally planned for Python 3.13 (gh-90817), but has been postponed to Python 3.15. Use getlocale(), setlocale(), and getencoding() instead. (Contributed by Hugo van Kemenade in gh-111187.)
- pathlib: PurePath.is_reserved() has been deprecated since Python 3.13. Use os.path.isreserved() to detect reserved paths on Windows.
- platform: java_ver() has been deprecated since Python 3.13. This function is only useful for Jython support, has a confusing API, and is largely untested.
- sysconfig: The check_home argument of sysconfig.is_python_build() has been deprecated since Python 3.12.
- threading: RLock() will take no arguments in Python 3.15. Passing any arguments has been deprecated since Python 3.14: the Python version does not permit any arguments, while the C version allows any number of positional or keyword arguments but ignores every argument.
- types.CodeType: Accessing co_lnotab was deprecated in PEP 626 since 3.10 and was planned to be removed in 3.12, but it only got a proper DeprecationWarning in 3.12. May be removed in 3.15. (Contributed by Nikita Sobolev in gh-101866.)
- typing:
  - The undocumented keyword argument syntax for creating NamedTuple classes (for example, Point = NamedTuple("Point", x=int, y=int)) has been deprecated since Python 3.13. 
Use the class-based syntax or the functional syntax instead.
  - When using the functional syntax of TypedDicts, failing to pass a value to the fields parameter (TD = TypedDict("TD")) or passing None (TD = TypedDict("TD", None)) has been deprecated since Python 3.13. Use class TD(TypedDict): pass or TD = TypedDict("TD", {}) to create a TypedDict with zero fields.
  - The typing.no_type_check_decorator() decorator function has been deprecated since Python 3.13. After eight years in the typing module, it has yet to be supported by any major type checker.
- wave: The getmark(), setmark(), and getmarkers() methods of the Wave_read and Wave_write classes have been deprecated since Python 3.13.
- zipimport: load_module() has been deprecated since Python 3.10. Use exec_module() instead. (Contributed by Jiahao Li in gh-125746.)

Pending removal in Python 3.16¶
The import system:
- Setting __loader__ on a module while failing to set __spec__.loader is deprecated. In Python 3.16, __loader__ will cease to be set or taken into consideration by the import system or the standard library.
- array: The 'u' format code (wchar_t) has been deprecated in documentation since Python 3.3 and at runtime since Python 3.13. Use the 'w' format code (Py_UCS4) for Unicode characters instead.
- asyncio:
  - asyncio.iscoroutinefunction() is deprecated and will be removed in Python 3.16; use inspect.iscoroutinefunction() instead. (Contributed by Jiahao Li and Kumar Aditya in gh-122875.)
  - The asyncio policy system is deprecated and will be removed in Python 3.16. Users should use asyncio.run() or asyncio.Runner with loop_factory to use the desired event loop implementation. For example, to use asyncio.SelectorEventLoop on Windows:

    import asyncio

    async def main():
        ... 
    asyncio.run(main(), loop_factory=asyncio.SelectorEventLoop)

  (Contributed by Kumar Aditya in gh-127949.)
- builtins: Bitwise inversion on boolean types, ~True or ~False, has been deprecated since Python 3.12, as it produces surprising and unintuitive results (-2 and -1). Use not x instead for the logical negation of a Boolean. In the rare case that you need the bitwise inversion of the underlying integer, convert to int explicitly (~int(x)).
- functools: Calling the Python implementation of functools.reduce() with function or sequence as keyword arguments has been deprecated since Python 3.14.
- logging: Support for custom logging handlers with the strm argument is deprecated and scheduled for removal in Python 3.16. Define handlers with the stream argument instead. (Contributed by Mariusz Felisiak in gh-115032.)
- mimetypes: Valid extensions start with a '.' or are empty for mimetypes.MimeTypes.add_type(). Undotted extensions are deprecated and will raise a ValueError in Python 3.16. (Contributed by Hugo van Kemenade in gh-75223.)
- shutil: The ExecError exception has been deprecated since Python 3.14. It has not been used by any function in shutil since Python 3.4, and is now an alias of RuntimeError.
- symtable: The Class.get_methods method has been deprecated since Python 3.14.
- sys: The _enablelegacywindowsfsencoding() function has been deprecated since Python 3.13. Use the PYTHONLEGACYWINDOWSFSENCODING environment variable instead.
- sysconfig: The sysconfig.expand_makefile_vars() function has been deprecated since Python 3.14. Use the vars argument of sysconfig.get_paths() instead.
- tarfile: The undocumented and unused TarFile.tarfile attribute has been deprecated since Python 3.13.

Pending removal in Python 3.17¶
- collections.abc: collections.abc.ByteString is scheduled for removal in Python 3.17. Use isinstance(obj, collections.abc.Buffer) to test if obj implements the buffer protocol at runtime. 
For use in type annotations, either use Buffer or a union that explicitly specifies the types your code supports (e.g., bytes | bytearray | memoryview). ByteString was originally intended to be an abstract class that would serve as a supertype of both bytes and bytearray. However, since the ABC never had any methods, knowing that an object was an instance of ByteString never actually told you anything useful about the object. Other common buffer types such as memoryview were also never understood as subtypes of ByteString (either at runtime or by static type checkers). See PEP 688 for more details. (Contributed by Shantanu Jain in gh-91896.)
- typing:
  - Before Python 3.14, old-style unions were implemented using the private class typing._UnionGenericAlias. This class is no longer needed for the implementation, but it has been retained for backward compatibility, with removal scheduled for Python 3.17. Users should use documented introspection helpers like typing.get_origin() and typing.get_args() instead of relying on private implementation details.
  - typing.ByteString, deprecated since Python 3.9, is scheduled for removal in Python 3.17. Use isinstance(obj, collections.abc.Buffer) to test if obj implements the buffer protocol at runtime. For use in type annotations, either use Buffer or a union that explicitly specifies the types your code supports (e.g., bytes | bytearray | memoryview). ByteString was originally intended to be an abstract class that would serve as a supertype of both bytes and bytearray. However, since the ABC never had any methods, knowing that an object was an instance of ByteString never actually told you anything useful about the object. Other common buffer types such as memoryview were also never understood as subtypes of ByteString (either at runtime or by static type checkers). See PEP 688 for more details. 
(Contributed by Shantanu Jain in gh-91896.)

Pending removal in Python 3.18¶

Pending removal in Python 3.19¶

Pending removal in future versions¶
The following APIs will be removed in the future, although there is currently no date scheduled for their removal.
- argparse:
  - Nesting argument groups and nesting mutually exclusive groups are deprecated.
  - Passing the undocumented keyword argument prefix_chars to add_argument_group() is now deprecated.
  - The argparse.FileType type converter is deprecated.
- builtins:
  - Generators: the throw(type, exc, tb) and athrow(type, exc, tb) signature is deprecated; use the single-argument signature throw(exc) and athrow(exc) instead.
  - Currently Python accepts numeric literals immediately followed by keywords, for example 0in x, 1or x, 0if 1else 2. It allows confusing and ambiguous expressions like [0x1for x in y] (which can be interpreted as [0x1 for x in y] or [0x1f or x in y]). A syntax warning is raised if the numeric literal is immediately followed by one of the keywords and, else, for, if, in, is, and or. In a future release it will be changed to a syntax error. (gh-87999)
  - Support for __index__() and __int__() methods returning a non-int type: these methods will be required to return an instance of a strict subclass of int.
  - Support for __float__() methods returning a strict subclass of float: these methods will be required to return an instance of float.
  - Support for __complex__() methods returning a strict subclass of complex: these methods will be required to return an instance of complex.
  - Passing a complex number as the real or imag argument in the complex() constructor is now deprecated; it should only be passed as a single positional argument. (Contributed by Serhiy Storchaka in gh-109218.)
- calendar: calendar.January and calendar.February constants are deprecated and replaced by calendar.JANUARY and calendar.FEBRUARY. (Contributed by Prince Roshan in gh-103636.)
- codecs: use open() instead of codecs.open(). 
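The generator deprecation above concerns only the call signature; the preferred single-argument form of throw() works like this (the worker generator is a hypothetical example):

```python
def worker():
    # Hypothetical generator that recovers from an injected exception.
    try:
        yield "running"
    except RuntimeError:
        yield "recovered"

g = worker()
assert next(g) == "running"
# Preferred single-argument form; throw(type, exc, tb) is deprecated.
assert g.throw(RuntimeError("stop")) == "recovered"
```

throw() raises the exception at the point where the generator is paused; here it is caught inside the generator, which then yields its next value back to the caller.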
(gh-133038)
- codeobject.co_lnotab: use the codeobject.co_lines() method instead.
- datetime:
  - utcnow(): use datetime.datetime.now(tz=datetime.UTC).
  - utcfromtimestamp(): use datetime.datetime.fromtimestamp(timestamp, tz=datetime.UTC).
- gettext: Plural value must be an integer.
- importlib: The cache_from_source() debug_override parameter is deprecated: use the optimization parameter instead.
- importlib.metadata:
  - EntryPoints tuple interface.
  - Implicit None on return values.
- logging: the warn() method has been deprecated since Python 3.3; use warning() instead.
- mailbox: Use of StringIO input and text mode is deprecated; use BytesIO and binary mode instead.
- os: Calling os.register_at_fork() in a multi-threaded process.
- pydoc.ErrorDuringImport: A tuple value for the exc_info parameter is deprecated; use an exception instance.
- re: More strict rules are now applied for numerical group references and group names in regular expressions. Only a sequence of ASCII digits is now accepted as a numerical reference. The group name in bytes patterns and replacement strings can now only contain ASCII letters, digits, and underscores. 
(Contributed by Serhiy Storchaka in gh-91760.)
- sre_compile, sre_constants, and sre_parse modules.
- shutil: rmtree()'s onerror parameter is deprecated in Python 3.12; use the onexc parameter instead.
- ssl options and protocols:
  - ssl.SSLContext without protocol argument is deprecated.
  - ssl.SSLContext: set_npn_protocols() and selected_npn_protocol() are deprecated: use ALPN instead.
  - ssl.OP_NO_SSL* options
  - ssl.OP_NO_TLS* options
  - ssl.PROTOCOL_SSLv3
  - ssl.PROTOCOL_TLS
  - ssl.PROTOCOL_TLSv1
  - ssl.PROTOCOL_TLSv1_1
  - ssl.PROTOCOL_TLSv1_2
  - ssl.TLSVersion.SSLv3
  - ssl.TLSVersion.TLSv1
  - ssl.TLSVersion.TLSv1_1
- threading methods:
  - threading.Condition.notifyAll(): use notify_all().
  - threading.Event.isSet(): use is_set().
  - threading.Thread.isDaemon(), threading.Thread.setDaemon(): use the threading.Thread.daemon attribute.
  - threading.Thread.getName(), threading.Thread.setName(): use the threading.Thread.name attribute.
  - threading.currentThread(): use threading.current_thread().
  - threading.activeCount(): use threading.active_count().
- typing: The internal class typing._UnionGenericAlias is no longer used to implement typing.Union. To preserve compatibility with users of this private class, a compatibility shim will be provided until at least Python 3.17. (Contributed by Jelle Zijlstra in gh-105499.)
- unittest.IsolatedAsyncioTestCase: it is deprecated to return a value that is not None from a test case.
- urllib.parse deprecated functions: use urlparse() instead.
  - splitattr()
  - splithost()
  - splitnport()
  - splitpasswd()
  - splitport()
  - splitquery()
  - splittag()
  - splittype()
  - splituser()
  - splitvalue()
  - to_bytes()
- wsgiref: SimpleHandler.stdout.write() should not do partial writes.
- xml.etree.ElementTree: Testing the truth value of an Element is deprecated. In a future release it will always return True. 
Prefer explicit len(elem) or elem is not None tests instead.
- sys: sys._clear_type_cache() is deprecated: use sys._clear_internal_caches() instead.

CPython Bytecode Changes¶
The oparg of YIELD_VALUE is now 1 if the yield is part of a yield-from or await, and 0 otherwise. The oparg of RESUME was changed to add a bit indicating if the except-depth is 1, which is needed to optimize closing of generators. (Contributed by Irit Katriel in gh-111354.)

C API Changes¶

New Features¶
- Add the PyMonitoring C API for generating PEP 669 monitoring events: PyMonitoring_FireBranchEvent. (Contributed by Irit Katriel in gh-111997.)
- Add PyMutex, a lightweight mutex that occupies a single byte, and the new PyMutex_Lock() and PyMutex_Unlock() functions. PyMutex_Lock() will release the GIL (if currently held) if the operation needs to block. (Contributed by Sam Gross in gh-108724.)
- Add the PyTime C API to provide access to system clocks: PyTime_MIN and PyTime_MAX. (Contributed by Victor Stinner and Petr Viktorin in gh-110850.)
- Add the PyDict_ContainsString() function with the same behavior as PyDict_Contains(), but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*. (Contributed by Victor Stinner in gh-108314.)
- Add the PyDict_GetItemRef() and PyDict_GetItemStringRef() functions, which behave similarly to PyDict_GetItemWithError(), but return a strong reference instead of a borrowed reference. Moreover, these functions return -1 on error, removing the need to check PyErr_Occurred(). (Contributed by Victor Stinner in gh-106004.)
- Add the PyDict_SetDefaultRef() function, which behaves similarly to PyDict_SetDefault(), but returns a strong reference instead of a borrowed reference. This function returns -1 on error, 0 on insertion, and 1 if the key was already present in the dictionary. 
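At the Python level, the semantics of PyDict_SetDefaultRef() correspond to dict.setdefault(); a sketch of the two non-error outcomes (the 0/1 return codes belong to the C function, noted in comments):

```python
d = {}

# Key missing: the default is inserted and returned
# (the C function reports this case by returning 0).
assert d.setdefault("answer", 42) == 42

# Key present: the existing value is returned unchanged
# (the C function reports this case by returning 1).
assert d.setdefault("answer", 99) == 42
assert d == {"answer": 42}
```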
(Contributed by Sam Gross in gh-112066.)
- Add the PyDict_Pop() and PyDict_PopString() functions to remove a key from a dictionary and optionally return the removed value. This is similar to dict.pop(), though there is no default value, and KeyError is not raised for missing keys. (Contributed by Stefan Behnel and Victor Stinner in gh-111262.)
- Add the PyMapping_GetOptionalItem() and PyMapping_GetOptionalItemString() functions as alternatives to PyObject_GetItem() and PyMapping_GetItemString() respectively. The new functions do not raise KeyError if the requested key is missing from the mapping. These variants are more convenient and faster if a missing key should not be treated as a failure. (Contributed by Serhiy Storchaka in gh-106307.)
- Add the PyObject_GetOptionalAttr() and PyObject_GetOptionalAttrString() functions as alternatives to PyObject_GetAttr() and PyObject_GetAttrString() respectively. The new functions do not raise AttributeError if the requested attribute is not found on the object. These variants are more convenient and faster if the missing attribute should not be treated as a failure. (Contributed by Serhiy Storchaka in gh-106521.)
- Add the PyErr_FormatUnraisable() function as an extension to PyErr_WriteUnraisable() that allows customizing the warning message. (Contributed by Serhiy Storchaka in gh-108082.)
- Add new functions that return a strong reference instead of a borrowed reference for frame locals, globals, and builtins, as part of PEP 667. (Contributed by Mark Shannon and Tian Gao in gh-74929.)
- Add the Py_GetConstant() and Py_GetConstantBorrowed() functions to get strong or borrowed references to constants. For example, Py_GetConstant(Py_CONSTANT_ZERO) returns a strong reference to the constant zero. (Contributed by Victor Stinner in gh-115754.)
- Add the PyImport_AddModuleRef() function as a replacement for PyImport_AddModule() that returns a strong reference instead of a borrowed reference. 
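The PyDict_Pop() behaviour described above maps onto dict.pop() at the Python level; a sketch of the two cases (at the Python level a default argument is what avoids KeyError, while the C function reports a missing key through its return value instead):

```python
d = {"k": 1}

# Key present: removed and returned.
assert d.pop("k") == 1

# Key missing: supplying a default avoids KeyError, mirroring how
# PyDict_Pop() reports "missing" without raising.
assert d.pop("k", None) is None
assert d == {}
```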
(Contributed by Victor Stinner in gh-105922.)
- Add the Py_IsFinalizing() function to check whether the main Python interpreter is shutting down. (Contributed by Victor Stinner in gh-108014.)
- Add the PyList_GetItemRef() function as a replacement for PyList_GetItem() that returns a strong reference instead of a borrowed reference. (Contributed by Sam Gross in gh-114329.)
- Add the PyList_Extend() and PyList_Clear() functions, mirroring the Python list.extend() and list.clear() methods. (Contributed by Victor Stinner in gh-111138.)
- Add the PyLong_AsInt() function. It behaves similarly to PyLong_AsLong(), but stores the result in a C int instead of a C long. (Contributed by Victor Stinner in gh-108014.)
- Add the PyLong_AsNativeBytes(), PyLong_FromNativeBytes(), and PyLong_FromUnsignedNativeBytes() functions to simplify converting between native integer types and Python int objects. (Contributed by Steve Dower in gh-111140.)
- Add the PyModule_Add() function, which is similar to PyModule_AddObjectRef() and PyModule_AddObject(), but always steals a reference to the value. (Contributed by Serhiy Storchaka in gh-86493.)
- Add the PyObject_GenericHash() function that implements the default hashing function of a Python object. (Contributed by Serhiy Storchaka in gh-113024.)
- Add the Py_HashPointer() function to hash a raw pointer. (Contributed by Victor Stinner in gh-111545.)
- Add the PyObject_VisitManagedDict() and PyObject_ClearManagedDict() functions, which must be called by the traverse and clear functions of a type using the Py_TPFLAGS_MANAGED_DICT flag. The pythoncapi-compat project can be used to use these functions with Python 3.11 and 3.12. (Contributed by Victor Stinner in gh-107073.)
- Add the PyRefTracer_SetTracer() and PyRefTracer_GetTracer() functions, which enable tracking object creation and destruction in the same way that the tracemalloc module does. 
(Contributed by Pablo Galindo in gh-93502.)
- Add the PySys_AuditTuple() function as an alternative to PySys_Audit() that takes event arguments as a Python tuple object. (Contributed by Victor Stinner in gh-85283.)
- Add the PyThreadState_GetUnchecked() function as an alternative to PyThreadState_Get() that doesn't kill the process with a fatal error if it is NULL. The caller is responsible for checking if the result is NULL. (Contributed by Victor Stinner in gh-108867.)
- Add the PyType_GetFullyQualifiedName() function to get the type's fully qualified name. The module name is prepended if type.__module__ is a string and is not equal to either 'builtins' or '__main__'. (Contributed by Victor Stinner in gh-111696.)
- Add the PyType_GetModuleName() function to get the type's module name. This is equivalent to getting the type.__module__ attribute. (Contributed by Eric Snow and Victor Stinner in gh-111696.)
- Add the PyUnicode_EqualToUTF8AndSize() and PyUnicode_EqualToUTF8() functions to compare a Unicode object with a const char* UTF-8 encoded string, returning 1 if they are equal or 0 otherwise. These functions do not raise exceptions. (Contributed by Serhiy Storchaka in gh-110289.)
- Add the PyWeakref_GetRef() function as an alternative to PyWeakref_GetObject() that returns a strong reference or NULL if the referent is no longer live. (Contributed by Victor Stinner in gh-105927.)
- Add fixed variants of functions which silently ignore errors:
  - PyObject_HasAttrStringWithError() replaces PyObject_HasAttrString().
  - PyMapping_HasKeyStringWithError() replaces PyMapping_HasKeyString().
  The new functions return -1 for errors and the standard 1 for true and 0 for false. (Contributed by Serhiy Storchaka in gh-108511.)

Changed C APIs¶
- The keywords parameter of PyArg_ParseTupleAndKeywords() and PyArg_VaParseTupleAndKeywords() now has type char *const* in C and const char *const* in C++, instead of char**. 
In C++, this makes these functions compatible with arguments of type const char *const*, const char**, or char *const* without an explicit type cast. In C, the functions only support arguments of type char *const*. This can be overridden with the PY_CXX_CONST macro. (Contributed by Serhiy Storchaka in gh-65210.)
- PyArg_ParseTupleAndKeywords() now supports non-ASCII keyword parameter names. (Contributed by Serhiy Storchaka in gh-110815.)
- The PyCode_GetFirstFree() function is now an unstable API and is now named PyUnstable_Code_GetFirstFree(). (Contributed by Bogdan Romanyuk in gh-115781.)
- The PyDict_GetItem(), PyDict_GetItemString(), PyMapping_HasKey(), PyMapping_HasKeyString(), PyObject_HasAttr(), PyObject_HasAttrString(), and PySys_GetObject() functions, each of which clears all errors which occurred when calling them, now report these errors using sys.unraisablehook(). You may replace them with other functions as recommended in the documentation. (Contributed by Serhiy Storchaka in gh-106672.)
- Add support for the %T, %#T, %N, and %#N formats to PyUnicode_FromFormat():
  - %T: Get the fully qualified name of an object's type
  - %#T: As above, but use a colon as the separator
  - %N: Get the fully qualified name of a type
  - %#N: As above, but use a colon as the separator
  See PEP 737 for more information. (Contributed by Victor Stinner in gh-111696.)
- You no longer have to define the PY_SSIZE_T_CLEAN macro before including Python.h when using # formats in format codes. APIs accepting the format codes always use Py_ssize_t for # formats. (Contributed by Inada Naoki in gh-104922.)
- If Python is built in debug mode or with assertions, PyTuple_SET_ITEM() and PyList_SET_ITEM() now check the index argument with an assertion. (Contributed by Victor Stinner in gh-106168.)

Limited C API Changes¶
- The following functions are now included in the Limited C API.
- Python built with --with-trace-refs (tracing references) now supports the Limited API. 
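The "fully qualified name" rule used by PyType_GetFullyQualifiedName() and the %T/%N formats above can be sketched at the Python level; the helper name here is hypothetical, only the prepending rule comes from the description in the notes:

```python
def fully_qualified_name(tp):
    # Hypothetical helper mirroring the stated rule: prepend the module
    # name unless it is missing, 'builtins', or '__main__'.
    mod = getattr(tp, "__module__", None)
    if isinstance(mod, str) and mod not in ("builtins", "__main__"):
        return f"{mod}.{tp.__qualname__}"
    return tp.__qualname__

import collections
assert fully_qualified_name(int) == "int"
assert fully_qualified_name(collections.OrderedDict) == "collections.OrderedDict"
```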
(Contributed by Victor Stinner in gh-108634.)\nRemoved C APIs\u00b6\nRemove several functions, macros, variables, etc. with names prefixed by _Py or _PY (which are considered private). If your project is affected by one of these removals and you believe that the removed API should remain available, please open a new issue to request a public C API and add cc: @vstinner to the issue to notify Victor Stinner. (Contributed by Victor Stinner in gh-106320.)\nRemove old buffer protocols deprecated in Python 3.0. Use the Buffer Protocol instead.\nPyObject_CheckReadBuffer(): Use PyObject_CheckBuffer() to test whether the object supports the buffer protocol. Note that PyObject_CheckBuffer() doesn\u2019t guarantee that PyObject_GetBuffer() will succeed. To test if the object is actually readable, see the next example of PyObject_GetBuffer().\nPyObject_AsCharBuffer(), PyObject_AsReadBuffer(): Use PyObject_GetBuffer() and PyBuffer_Release() instead:\nPy_buffer view;\nif (PyObject_GetBuffer(obj, &view, PyBUF_SIMPLE) < 0) {\n    return NULL;\n}\n// Use `view.buf` and `view.len` to read from the buffer.\n// You may need to cast buf as `(const char*)view.buf`.\nPyBuffer_Release(&view);\nPyObject_AsWriteBuffer(): Use PyObject_GetBuffer() and PyBuffer_Release() instead:\nPy_buffer view;\nif (PyObject_GetBuffer(obj, &view, PyBUF_WRITABLE) < 0) {\n    return NULL;\n}\n// Use `view.buf` and `view.len` to write to the buffer.\nPyBuffer_Release(&view);\n(Contributed by Inada Naoki in gh-85275.)\nRemove various functions deprecated in Python 3.9:\nPyEval_CallObject(), PyEval_CallObjectWithKeywords(): Use PyObject_CallNoArgs() or PyObject_Call() instead.\nWarning: In PyObject_Call(), positional arguments must be a tuple and must not be NULL, and keyword arguments must be a dict or NULL, whereas the removed functions checked argument types and accepted NULL positional and keyword arguments.
To replace PyEval_CallObjectWithKeywords(func, NULL, kwargs) with PyObject_Call(), pass an empty tuple as positional arguments using PyTuple_New(0).\nPyEval_CallFunction(): Use PyObject_CallFunction() instead.\nPyEval_CallMethod(): Use PyObject_CallMethod() instead.\nPyCFunction_Call(): Use PyObject_Call() instead.\n(Contributed by Victor Stinner in gh-105107.)\nRemove the following old functions to configure the Python initialization, deprecated in Python 3.11:\nPySys_AddWarnOptionUnicode(): Use PyConfig.warnoptions instead.\nPySys_AddWarnOption(): Use PyConfig.warnoptions instead.\nPySys_AddXOption(): Use PyConfig.xoptions instead.\nPySys_HasWarnOptions(): Use PyConfig.warnoptions instead.\nPySys_SetPath(): Set PyConfig.module_search_paths instead.\nPy_SetPath(): Set PyConfig.module_search_paths instead.\nPy_SetStandardStreamEncoding(): Set PyConfig.stdio_encoding instead, and, if needed, also set PyConfig.legacy_windows_stdio (on Windows).\n_Py_SetProgramFullPath(): Set PyConfig.executable instead.\nUse the new PyConfig API of the Python Initialization Configuration instead (PEP 587), added to Python 3.8. (Contributed by Victor Stinner in gh-105145.)\nRemove the PyEval_AcquireLock() and PyEval_ReleaseLock() functions, deprecated in Python 3.2. They didn\u2019t update the current thread state. They can be replaced with the low-level PyEval_AcquireThread() and PyEval_RestoreThread() functions.\n(Contributed by Victor Stinner in gh-105182.)\nRemove the PyEval_ThreadsInitialized() function, deprecated in Python 3.9. Since Python 3.7, Py_Initialize() always creates the GIL: calling PyEval_InitThreads() does nothing and PyEval_ThreadsInitialized() always returns non-zero. (Contributed by Victor Stinner in gh-105182.)\nRemove the _PyInterpreterState_Get() alias to PyInterpreterState_Get() which was kept for backward compatibility with Python 3.8. The pythoncapi-compat project can be used to get PyInterpreterState_Get() on Python 3.8 and older.
(Contributed by Victor Stinner in gh-106320.)\nRemove the private _PyObject_FastCall() function: use PyObject_Vectorcall() which is available since Python 3.8 (PEP 590). (Contributed by Victor Stinner in gh-106023.)\nRemove the cpython/pytime.h header file, which only contained private functions. (Contributed by Victor Stinner in gh-106316.)\nRemove the undocumented PY_TIMEOUT_MAX constant from the limited C API. (Contributed by Victor Stinner in gh-110014.)\nRemove the old trashcan macros Py_TRASHCAN_SAFE_BEGIN and Py_TRASHCAN_SAFE_END. Replace both with the new macros Py_TRASHCAN_BEGIN and Py_TRASHCAN_END. (Contributed by Irit Katriel in gh-105111.)\nDeprecated C APIs\u00b6\nDeprecate old Python initialization functions:\nPySys_ResetWarnOptions(): Clear sys.warnoptions and warnings.filters instead.\nPy_GetExecPrefix(): Get sys.exec_prefix instead.\nPy_GetPath(): Get sys.path instead.\nPy_GetPrefix(): Get sys.prefix instead.\nPy_GetProgramFullPath(): Get sys.executable instead.\nPy_GetProgramName(): Get sys.executable instead.\nPy_GetPythonHome(): Get PyConfig.home or the PYTHONHOME environment variable instead.\n(Contributed by Victor Stinner in gh-105145.)\nSoft deprecate the PyEval_GetBuiltins(), PyEval_GetGlobals(), and PyEval_GetLocals() functions, which return a borrowed reference. (Soft deprecated as part of PEP 667.)\nDeprecate the PyImport_ImportModuleNoBlock() function, which is just an alias to PyImport_ImportModule() since Python 3.3. (Contributed by Victor Stinner in gh-105396.)\nSoft deprecate the PyModule_AddObject() function. It should be replaced with PyModule_Add() or PyModule_AddObjectRef(). (Contributed by Serhiy Storchaka in gh-86493.)\nDeprecate the old Py_UNICODE and PY_UNICODE_TYPE types and the Py_UNICODE_WIDE define. Use the wchar_t type directly instead. Since Python 3.3, Py_UNICODE and PY_UNICODE_TYPE are just aliases to wchar_t.
(Contributed by Victor Stinner in gh-105156.)\nDeprecate the PyWeakref_GetObject() and PyWeakref_GET_OBJECT() functions, which return a borrowed reference. Replace them with the new PyWeakref_GetRef() function, which returns a strong reference. The pythoncapi-compat project can be used to get PyWeakref_GetRef() on Python 3.12 and older. (Contributed by Victor Stinner in gh-105927.)\nPending removal in Python 3.14\u00b6\nThe ma_version_tag field in PyDictObject for extension modules (PEP 699; gh-101193).\nCreating immutable types with mutable bases (gh-95388).\nPending removal in Python 3.15\u00b6\nPyImport_ImportModuleNoBlock(): Use PyImport_ImportModule() instead.\nPyWeakref_GetObject() and PyWeakref_GET_OBJECT(): Use PyWeakref_GetRef() instead. The pythoncapi-compat project can be used to get PyWeakref_GetRef() on Python 3.12 and older.\nPy_UNICODE type and the Py_UNICODE_WIDE macro: Use wchar_t instead.\nPyUnicode_AsDecodedObject(): Use PyCodec_Decode() instead.\nPyUnicode_AsDecodedUnicode(): Use PyCodec_Decode() instead; note that some codecs (for example, \u201cbase64\u201d) may return a type other than str, such as bytes.\nPyUnicode_AsEncodedObject(): Use PyCodec_Encode() instead.\nPyUnicode_AsEncodedUnicode(): Use PyCodec_Encode() instead; note that some codecs (for example, \u201cbase64\u201d) may return a type other than bytes, such as str.\nPython initialization functions, deprecated in Python 3.13:\nPy_GetPath(): Use PyConfig_Get(\"module_search_paths\") (sys.path) instead.\nPy_GetPrefix(): Use PyConfig_Get(\"base_prefix\") (sys.base_prefix) instead. Use PyConfig_Get(\"prefix\") (sys.prefix) if virtual environments need to be handled.\nPy_GetExecPrefix(): Use PyConfig_Get(\"base_exec_prefix\") (sys.base_exec_prefix) instead.
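At the Python level, the values these deprecated C getters expose are already available on sys, which is what the replacements above point to; a minimal sketch:

```python
import sys

# Python-level equivalents of the deprecated C getters listed above:
# Py_GetPath() -> sys.path, Py_GetPrefix() -> sys.prefix,
# Py_GetExecPrefix() -> sys.exec_prefix,
# Py_GetProgramFullPath() / Py_GetProgramName() -> sys.executable.
search_paths = sys.path
prefix = sys.prefix
executable = sys.executable

# sys.base_prefix differs from sys.prefix inside a virtual environment,
# which is why the notes above distinguish "base_prefix" from "prefix".
in_virtualenv = sys.prefix != sys.base_prefix

assert isinstance(search_paths, list)
assert isinstance(prefix, str) and isinstance(executable, str)
assert isinstance(in_virtualenv, bool)
```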
Use PyConfig_Get(\"exec_prefix\") (sys.exec_prefix) if virtual environments need to be handled.\nPy_GetProgramFullPath(): Use PyConfig_Get(\"executable\") (sys.executable) instead.\nPy_GetProgramName(): Use PyConfig_Get(\"executable\") (sys.executable) instead.\nPy_GetPythonHome(): Use PyConfig_Get(\"home\") or the PYTHONHOME environment variable instead.\nThe pythoncapi-compat project can be used to get PyConfig_Get() on Python 3.13 and older.\nFunctions to configure Python\u2019s initialization, deprecated in Python 3.11:\nPySys_SetArgvEx(): Set PyConfig.argv instead.\nPySys_SetArgv(): Set PyConfig.argv instead.\nPy_SetProgramName(): Set PyConfig.program_name instead.\nPy_SetPythonHome(): Set PyConfig.home instead.\nPySys_ResetWarnOptions(): Clear sys.warnoptions and warnings.filters instead.\nThe Py_InitializeFromConfig() API should be used with PyConfig instead.\nGlobal configuration variables:\nPy_DebugFlag: Use PyConfig.parser_debug or PyConfig_Get(\"parser_debug\") instead.\nPy_VerboseFlag: Use PyConfig.verbose or PyConfig_Get(\"verbose\") instead.\nPy_QuietFlag: Use PyConfig.quiet or PyConfig_Get(\"quiet\") instead.\nPy_InteractiveFlag: Use PyConfig.interactive or PyConfig_Get(\"interactive\") instead.\nPy_InspectFlag: Use PyConfig.inspect or PyConfig_Get(\"inspect\") instead.\nPy_OptimizeFlag: Use PyConfig.optimization_level or PyConfig_Get(\"optimization_level\") instead.\nPy_NoSiteFlag: Use PyConfig.site_import or PyConfig_Get(\"site_import\") instead.\nPy_BytesWarningFlag: Use PyConfig.bytes_warning or PyConfig_Get(\"bytes_warning\") instead.\nPy_FrozenFlag: Use PyConfig.pathconfig_warnings or PyConfig_Get(\"pathconfig_warnings\") instead.\nPy_IgnoreEnvironmentFlag: Use PyConfig.use_environment or PyConfig_Get(\"use_environment\") instead.\nPy_DontWriteBytecodeFlag: Use PyConfig.write_bytecode or PyConfig_Get(\"write_bytecode\") instead.\nPy_NoUserSiteDirectory:
Use PyConfig.user_site_directory or PyConfig_Get(\"user_site_directory\") instead.\nPy_UnbufferedStdioFlag: Use PyConfig.buffered_stdio or PyConfig_Get(\"buffered_stdio\") instead.\nPy_HashRandomizationFlag: Use PyConfig.use_hash_seed and PyConfig.hash_seed, or PyConfig_Get(\"hash_seed\"), instead.\nPy_IsolatedFlag: Use PyConfig.isolated or PyConfig_Get(\"isolated\") instead.\nPy_LegacyWindowsFSEncodingFlag: Use PyPreConfig.legacy_windows_fs_encoding or PyConfig_Get(\"legacy_windows_fs_encoding\") instead.\nPy_LegacyWindowsStdioFlag: Use PyConfig.legacy_windows_stdio or PyConfig_Get(\"legacy_windows_stdio\") instead.\nPy_FileSystemDefaultEncoding, Py_HasFileSystemDefaultEncoding: Use PyConfig.filesystem_encoding or PyConfig_Get(\"filesystem_encoding\") instead.\nPy_FileSystemDefaultEncodeErrors: Use PyConfig.filesystem_errors or PyConfig_Get(\"filesystem_errors\") instead.\nPy_UTF8Mode: Use PyPreConfig.utf8_mode or PyConfig_Get(\"utf8_mode\") instead (see Py_PreInitialize()).\nThe Py_InitializeFromConfig() API should be used with PyConfig to set these options.
Alternatively, PyConfig_Get() can be used to get these options at runtime.\nPending removal in Python 3.16\u00b6\nThe bundled copy of libmpdec.\nPending removal in Python 3.18\u00b6\nThe following private functions are deprecated and planned for removal in Python 3.18:\n_PyBytes_Join(): use PyBytes_Join().\n_PyDict_GetItemStringWithError(): use PyDict_GetItemStringRef().\n_PyDict_Pop(): use PyDict_Pop().\n_PyLong_Sign(): use PyLong_GetSign().\n_PyLong_FromDigits() and _PyLong_New(): use PyLongWriter_Create().\n_PyThreadState_UncheckedGet(): use PyThreadState_GetUnchecked().\n_PyUnicode_AsString(): use PyUnicode_AsUTF8().\n_PyUnicodeWriter_Init(): replace _PyUnicodeWriter_Init(&writer) with writer = PyUnicodeWriter_Create(0).\n_PyUnicodeWriter_Finish(): replace _PyUnicodeWriter_Finish(&writer) with PyUnicodeWriter_Finish(writer).\n_PyUnicodeWriter_Dealloc(): replace _PyUnicodeWriter_Dealloc(&writer) with PyUnicodeWriter_Discard(writer).\n_PyUnicodeWriter_WriteChar(): replace _PyUnicodeWriter_WriteChar(&writer, ch) with PyUnicodeWriter_WriteChar(writer, ch).\n_PyUnicodeWriter_WriteStr(): replace _PyUnicodeWriter_WriteStr(&writer, str) with PyUnicodeWriter_WriteStr(writer, str).\n_PyUnicodeWriter_WriteSubstring(): replace _PyUnicodeWriter_WriteSubstring(&writer, str, start, end) with PyUnicodeWriter_WriteSubstring(writer, str, start, end).\n_PyUnicodeWriter_WriteASCIIString(): replace _PyUnicodeWriter_WriteASCIIString(&writer, str) with PyUnicodeWriter_WriteASCII(writer, str).\n_PyUnicodeWriter_WriteLatin1String(): replace _PyUnicodeWriter_WriteLatin1String(&writer, str) with PyUnicodeWriter_WriteUTF8(writer, str).\n_PyUnicodeWriter_Prepare(): (no replacement).\n_PyUnicodeWriter_PrepareKind(): (no replacement).\n_Py_HashPointer(): use Py_HashPointer().\n_Py_fopen_obj(): use Py_fopen().\nThe pythoncapi-compat project can be used to get these new public functions on Python 3.13 and older.
(Contributed by Victor Stinner in gh-128863.)\nPending removal in future versions\u00b6\nThe following APIs are deprecated and will be removed, although there is currently no date scheduled for their removal.\nPy_TPFLAGS_HAVE_FINALIZE: Unneeded since Python 3.8.\nPyErr_Fetch(): Use PyErr_GetRaisedException() instead.\nPyErr_NormalizeException(): Use PyErr_GetRaisedException() instead.\nPyErr_Restore(): Use PyErr_SetRaisedException() instead.\nPyModule_GetFilename(): Use PyModule_GetFilenameObject() instead.\nPyOS_AfterFork(): Use PyOS_AfterFork_Child() instead.\nPySlice_GetIndicesEx(): Use PySlice_Unpack() and PySlice_AdjustIndices() instead.\nPyUnicode_READY(): Unneeded since Python 3.12.\nPyErr_Display(): Use PyErr_DisplayException() instead.\n_PyErr_ChainExceptions(): Use _PyErr_ChainExceptions1() instead.\nPyBytesObject.ob_shash member: call PyObject_Hash() instead.\nThread Local Storage (TLS) API:\nPyThread_create_key(): Use PyThread_tss_alloc() instead.\nPyThread_delete_key(): Use PyThread_tss_free() instead.\nPyThread_set_key_value(): Use PyThread_tss_set() instead.\nPyThread_get_key_value(): Use PyThread_tss_get() instead.\nPyThread_delete_key_value(): Use PyThread_tss_delete() instead.\nPyThread_ReInitTLS(): Unneeded since Python 3.7.\nBuild Changes\u00b6\narm64-apple-ios and arm64-apple-ios-simulator are both now PEP 11 tier 3 platforms. (PEP 730 written and implementation contributed by Russell Keith-Magee in gh-114099.)\naarch64-linux-android and x86_64-linux-android are both now PEP 11 tier 3 platforms. (PEP 738 written and implementation contributed by Malcolm Smith in gh-116622.)\nwasm32-wasi is now a PEP 11 tier 2 platform. (Contributed by Brett Cannon in gh-115192.)\nwasm32-emscripten is no longer a PEP 11 supported platform.
(Contributed by Brett Cannon in gh-115192.)\nBuilding CPython now requires a compiler with support for the C11 atomic library, GCC built-in atomic functions, or MSVC interlocked intrinsics.\nAutoconf 2.71 and aclocal 1.16.5 are now required to regenerate the configure script. (Contributed by Christian Heimes in gh-89886 and by Victor Stinner in gh-112090.)\nSQLite 3.15.2 or newer is required to build the sqlite3 extension module. (Contributed by Erlend Aasland in gh-105875.)\nCPython now bundles the mimalloc library by default. It is licensed under the MIT license; see mimalloc license. The bundled mimalloc has custom changes; see gh-113141 for details. (Contributed by Dino Viehland in gh-109914.)\nThe configure option --with-system-libmpdec now defaults to yes. The bundled copy of libmpdec will be removed in Python 3.16.\nPython built with configure --with-trace-refs (tracing references) is now ABI compatible with the Python release build and debug build. (Contributed by Victor Stinner in gh-108634.)\nOn POSIX systems, the pkg-config (.pc) filenames now include the ABI flags. For example, the free-threaded build generates python-3.13t.pc and the debug build generates python-3.13d.pc.\nThe errno, fcntl, grp, md5, pwd, resource, termios, winsound, _ctypes_test, _multiprocessing.posixshmem, _scproxy, _stat, _statistics, _testconsole, _testimportmultiple and _uuid C extensions are now built with the limited C API. (Contributed by Victor Stinner in gh-85283.)\nPorting to Python 3.13\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in the Python API\u00b6\nPEP 667 introduces several changes to the semantics of locals() and f_locals:\nCalling locals() in an optimized scope now produces an independent snapshot on each call, and hence no longer implicitly updates previously returned references.
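The independent-snapshot behaviour just described, and the explicit-namespace pattern PEP 667 recommends for exec/eval, can be checked with a short, version-independent sketch:

```python
def demo():
    a = 1
    snapshot = dict(locals())  # explicit, independent snapshot
    a = 2
    # The snapshot does not track the later rebinding of `a`.
    return snapshot["a"], a

# Passing exec() an explicit namespace makes its results observable
# regardless of how locals() behaves in optimized scopes.
namespace = {}
exec("x = 40 + 2", namespace)

assert demo() == (1, 2)
assert namespace["x"] == 42
```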
Obtaining the legacy CPython behavior now requires explicit calls to update the initially returned dictionary with the results of subsequent calls to locals(). Code execution functions that implicitly target locals() (such as exec and eval) must be passed an explicit namespace to access their results in an optimized scope. (Changed as part of PEP 667.)\nCalling locals() from a comprehension at module or class scope (including via exec or eval) once more behaves as if the comprehension were running as an independent nested function (i.e. the local variables from the containing scope are not included). In Python 3.12, this had changed to include the local variables from the containing scope when implementing PEP 709. (Changed as part of PEP 667.)\nAccessing FrameType.f_locals in an optimized scope now returns a write-through proxy rather than a snapshot that gets updated at ill-specified times. If a snapshot is desired, it must be created explicitly with dict or the proxy\u2019s .copy() method. (Changed as part of PEP 667.)\nfunctools.partial now emits a FutureWarning when used as a method. The behavior will change in future Python versions. Wrap it in staticmethod() if you want to preserve the old behavior. (Contributed by Serhiy Storchaka in gh-121027.)\nAn OSError is now raised by getpass.getuser() for any failure to retrieve a username, instead of ImportError on non-Unix platforms or KeyError on Unix platforms where the password database is empty.\nThe value of the mode attribute of gzip.GzipFile is now a string ('rb' or 'wb') instead of an integer (1 or 2). The value of the mode attribute of the readable file-like object returned by zipfile.ZipFile.open() is now 'rb' instead of 'r'. (Contributed by Serhiy Storchaka in gh-115961.)\nmailbox.Maildir now ignores files with a leading dot (.).
(Contributed by Zackery Spytz in gh-65559.)\npathlib.Path.glob() and rglob() now return both files and directories if a pattern that ends with \u201c**\u201d is given, rather than directories only. Add a trailing slash to keep the previous behavior and only match directories.\nThe threading module now expects the _thread module to have an _is_main_interpreter() function. This function takes no arguments and returns True if the current interpreter is the main interpreter.\nAny library or application that provides a custom _thread module must provide _is_main_interpreter(), just like the module\u2019s other \u201cprivate\u201d attributes. (gh-112826.)\nChanges in the C API\u00b6\nPython.h no longer includes the <ieeefp.h> standard header. It was included for the finite() function, which is now provided by the <math.h> header. It should now be included explicitly if needed. The HAVE_IEEEFP_H macro was also removed. (Contributed by Victor Stinner in gh-108765.)\nPython.h no longer includes these standard header files: <time.h>, <sys/select.h> and <sys/time.h>. If needed, they should now be included explicitly. For example, <time.h> provides the clock() and gmtime() functions, <sys/select.h> provides the select() function, and <sys/time.h> provides the futimes(), gettimeofday() and setitimer() functions. (Contributed by Victor Stinner in gh-108765.)\nOn Windows, Python.h no longer includes the <stddef.h> standard header file. If needed, it should now be included explicitly. For example, it provides the offsetof() macro, and the size_t and ptrdiff_t types. Including <stddef.h> explicitly was already needed on all other platforms; the HAVE_STDDEF_H macro is only defined on Windows. (Contributed by Victor Stinner in gh-108765.)\nIf the Py_LIMITED_API macro is defined, the Py_BUILD_CORE, Py_BUILD_CORE_BUILTIN and Py_BUILD_CORE_MODULE macros are now undefined by <Python.h>. (Contributed by Victor Stinner in gh-85283.)\nThe old trashcan macros Py_TRASHCAN_SAFE_BEGIN and Py_TRASHCAN_SAFE_END were removed.
They should be replaced by the new macros Py_TRASHCAN_BEGIN and Py_TRASHCAN_END.\nA tp_dealloc function that has the old macros, such as:\nstatic void\nmytype_dealloc(mytype *p)\n{\n    PyObject_GC_UnTrack(p);\n    Py_TRASHCAN_SAFE_BEGIN(p);\n    ...\n    Py_TRASHCAN_SAFE_END\n}\nshould migrate to the new macros as follows:\nstatic void\nmytype_dealloc(mytype *p)\n{\n    PyObject_GC_UnTrack(p);\n    Py_TRASHCAN_BEGIN(p, mytype_dealloc)\n    ...\n    Py_TRASHCAN_END\n}\nNote that Py_TRASHCAN_BEGIN has a second argument which should be the deallocation function it is in. The new macros were added in Python 3.8 and the old macros were deprecated in Python 3.11. (Contributed by Irit Katriel in gh-105111.)\nPEP 667 introduces several changes to frame-related functions:\nThe effects of mutating the dictionary returned from PyEval_GetLocals() in an optimized scope have changed. New dict entries added this way will now only be visible to subsequent PyEval_GetLocals() calls in that frame, as PyFrame_GetLocals(), locals(), and FrameType.f_locals no longer access the same underlying cached dictionary. Changes made to entries for actual variable names and names added via the write-through proxy interfaces will be overwritten on subsequent calls to PyEval_GetLocals() in that frame. The recommended code update depends on how the function was being used, so refer to the deprecation notice on the function for details.\nCalling PyFrame_GetLocals() in an optimized scope now returns a write-through proxy rather than a snapshot that gets updated at ill-specified times. If a snapshot is desired, it must be created explicitly (e.g. with PyDict_Copy()), or by calling the new PyEval_GetFrameLocals() API.\nPyFrame_FastToLocals() and PyFrame_FastToLocalsWithError() no longer have any effect. Calling these functions has been redundant since Python 3.11, when PyFrame_GetLocals() was first introduced.\nPyFrame_LocalsToFast() no longer has any effect.
Calling this function is redundant now that PyFrame_GetLocals() returns a write-through proxy for optimized scopes.\nPython 3.13 removed many private functions. Some of them can be replaced using these alternatives:\n_PyDict_Pop(): PyDict_Pop() or PyDict_PopString();\n_PyDict_GetItemWithError(): PyDict_GetItemRef();\n_PyErr_WriteUnraisableMsg(): PyErr_FormatUnraisable();\n_PyEval_SetTrace(): PyEval_SetTrace() or PyEval_SetTraceAllThreads();\n_PyList_Extend(): PyList_Extend();\n_PyLong_AsInt(): PyLong_AsInt();\n_PyMem_RawStrdup(): strdup();\n_PyMem_Strdup(): strdup();\n_PyObject_ClearManagedDict(): PyObject_ClearManagedDict();\n_PyObject_VisitManagedDict(): PyObject_VisitManagedDict();\n_PyThreadState_UncheckedGet(): PyThreadState_GetUnchecked();\n_PyTime_AsSecondsDouble(): PyTime_AsSecondsDouble();\n_PyTime_GetMonotonicClock(): PyTime_Monotonic() or PyTime_MonotonicRaw();\n_PyTime_GetPerfCounter(): PyTime_PerfCounter() or PyTime_PerfCounterRaw();\n_PyTime_GetSystemClock(): PyTime_Time() or PyTime_TimeRaw();\n_PyTime_MAX: PyTime_MAX;\n_PyTime_MIN: PyTime_MIN;\n_PyTime_t: PyTime_t;\n_Py_HashPointer(): Py_HashPointer();\n_Py_IsFinalizing(): Py_IsFinalizing().\nThe pythoncapi-compat project can be used to get most of these new functions on Python 3.12 and older.\nRegression Test Changes\u00b6\nPython built with configure --with-pydebug now supports a -X presite=package.module command-line option. If used, it specifies a module that should be imported early in the lifecycle of the interpreter, before site.py is executed.
(Contributed by \u0141ukasz Langa in gh-110769.)", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 26330}
+{"url": "https://docs.python.org/3/deprecations/index.html", "title": "Deprecations", "content": "Deprecations\u00b6\nPending removal in Python 3.15\u00b6\nThe import system:\nSetting __cached__ on a module while failing to set __spec__.cached is deprecated. In Python 3.15, __cached__ will cease to be set or taken into consideration by the import system or standard library. (gh-97879)\nSetting __package__ on a module while failing to set __spec__.parent is deprecated. In Python 3.15, __package__ will cease to be set or taken into consideration by the import system or standard library. (gh-97879)\nctypes:\nThe undocumented ctypes.SetPointerType() function has been deprecated since Python 3.13.\nhttp.server:\nThe obsolete and rarely used CGIHTTPRequestHandler has been deprecated since Python 3.13. No direct replacement exists. Anything is better than CGI to interface a web server with a request handler.\nThe --cgi flag to the python -m http.server command-line interface has been deprecated since Python 3.13.\nimportlib:\nload_module() method: use exec_module() instead.\nlocale:\nThe getdefaultlocale() function has been deprecated since Python 3.11. Its removal was originally planned for Python 3.13 (gh-90817), but has been postponed to Python 3.15. Use getlocale(), setlocale(), and getencoding() instead. (Contributed by Hugo van Kemenade in gh-111187.)\npathlib:\nPurePath.is_reserved() has been deprecated since Python 3.13. Use os.path.isreserved() to detect reserved paths on Windows.\nplatform:\njava_ver() has been deprecated since Python 3.13.
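The locale replacement suggested above can be sketched as follows. Note that locale.getencoding() only exists on Python 3.11+, so this sketch falls back to getpreferredencoding(False) on older versions:

```python
import locale

# Replacement pattern for the deprecated locale.getdefaultlocale():
# query the LC_CTYPE locale and the encoding separately.
lang, enc = locale.getlocale(locale.LC_CTYPE)

# locale.getencoding() exists on Python 3.11+; fall back to
# getpreferredencoding(False) on older versions.
if hasattr(locale, "getencoding"):
    encoding = locale.getencoding()
else:
    encoding = locale.getpreferredencoding(False)

assert lang is None or isinstance(lang, str)
assert enc is None or isinstance(enc, str)
assert isinstance(encoding, str) and encoding
```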
This function is only useful for Jython support, has a confusing API, and is largely untested.\nsysconfig:\nThe check_home argument of sysconfig.is_python_build() has been deprecated since Python 3.12.\nthreading:\nRLock() will take no arguments in Python 3.15. Passing any arguments has been deprecated since Python 3.14, as the Python version does not permit any arguments, but the C version allows any number of positional or keyword arguments, ignoring every argument.\ntypes:\ntypes.CodeType: Accessing co_lnotab was deprecated in PEP 626 since 3.10 and was planned to be removed in 3.12, but it only got a proper DeprecationWarning in 3.12. May be removed in 3.15. (Contributed by Nikita Sobolev in gh-101866.)\ntyping:\nThe undocumented keyword argument syntax for creating NamedTuple classes (for example, Point = NamedTuple(\"Point\", x=int, y=int)) has been deprecated since Python 3.13. Use the class-based syntax or the functional syntax instead.\nWhen using the functional syntax of TypedDicts, failing to pass a value to the fields parameter (TD = TypedDict(\"TD\")) or passing None (TD = TypedDict(\"TD\", None)) has been deprecated since Python 3.13. Use class TD(TypedDict): pass or TD = TypedDict(\"TD\", {}) to create a TypedDict with zero fields.\nThe typing.no_type_check_decorator() decorator function has been deprecated since Python 3.13. After eight years in the typing module, it has yet to be supported by any major type checker.\nwave:\nThe getmark(), setmark(), and getmarkers() methods of the Wave_read and Wave_write classes have been deprecated since Python 3.13.\nzipimport:\nload_module() has been deprecated since Python 3.10. Use exec_module() instead. (Contributed by Jiahao Li in gh-125746.)\nPending removal in Python 3.16\u00b6\nThe import system:\nSetting __loader__ on a module while failing to set __spec__.loader is deprecated.
In Python 3.16, __loader__ will cease to be set or taken into consideration by the import system or the standard library.\narray:\nThe 'u' format code (wchar_t) has been deprecated in documentation since Python 3.3 and at runtime since Python 3.13. Use the 'w' format code (Py_UCS4) for Unicode characters instead.\nasyncio:\nasyncio.iscoroutinefunction() is deprecated and will be removed in Python 3.16; use inspect.iscoroutinefunction() instead. (Contributed by Jiahao Li and Kumar Aditya in gh-122875.)\nThe asyncio policy system is deprecated and will be removed in Python 3.16. In particular, the following classes and functions are deprecated:\nUsers should use asyncio.run() or asyncio.Runner with loop_factory to use the desired event loop implementation.\nFor example, to use asyncio.SelectorEventLoop on Windows:\nimport asyncio\n\nasync def main():\n    ...\n\nasyncio.run(main(), loop_factory=asyncio.SelectorEventLoop)\n(Contributed by Kumar Aditya in gh-127949.)\nbuiltins:\nBitwise inversion on boolean types, ~True or ~False, has been deprecated since Python 3.12, as it produces surprising and unintuitive results (-2 and -1). Use not x instead for the logical negation of a Boolean. In the rare case that you need the bitwise inversion of the underlying integer, convert to int explicitly (~int(x)).\nfunctools:\nCalling the Python implementation of functools.reduce() with function or sequence as keyword arguments has been deprecated since Python 3.14.\nlogging:\nSupport for custom logging handlers with the strm argument is deprecated and scheduled for removal in Python 3.16. Define handlers with the stream argument instead. (Contributed by Mariusz Felisiak in gh-115032.)\nmimetypes:\nValid extensions start with a '.' or are empty for mimetypes.MimeTypes.add_type(). Undotted extensions are deprecated and will raise a ValueError in Python 3.16. (Contributed by Hugo van Kemenade in gh-75223.)\nshutil:\nThe ExecError exception has been deprecated since Python 3.14.
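The Boolean-inversion guidance above, illustrated; the deprecated ~True spelling itself is avoided here so the sketch stays warning-free on 3.12+:

```python
# bool is a subclass of int, which is why the deprecated spelling
# produced surprising results: ~True == -2 and ~False == -1.
assert ~int(True) == -2
assert ~int(False) == -1

# Logical negation is what most code actually wants:
assert (not True) is False
assert (not False) is True

# If the bitwise inversion of the underlying integer really is needed,
# the explicit int() conversion keeps the intent obvious:
flag = True
assert ~int(flag) == -2
```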
It has not been used by any function in shutil since Python 3.4, and is now an alias of RuntimeError.\nsymtable:\nThe Class.get_methods method has been deprecated since Python 3.14.\nsys:\nThe _enablelegacywindowsfsencoding() function has been deprecated since Python 3.13. Use the PYTHONLEGACYWINDOWSFSENCODING environment variable instead.\nsysconfig:\nThe sysconfig.expand_makefile_vars() function has been deprecated since Python 3.14. Use the vars argument of sysconfig.get_paths() instead.\ntarfile:\nThe undocumented and unused TarFile.tarfile attribute has been deprecated since Python 3.13.\nPending removal in Python 3.17\u00b6\ncollections.abc:\ncollections.abc.ByteString is scheduled for removal in Python 3.17.\nUse isinstance(obj, collections.abc.Buffer) to test if obj implements the buffer protocol at runtime. For use in type annotations, either use Buffer or a union that explicitly specifies the types your code supports (e.g., bytes | bytearray | memoryview).\nByteString was originally intended to be an abstract class that would serve as a supertype of both bytes and bytearray. However, since the ABC never had any methods, knowing that an object was an instance of ByteString never actually told you anything useful about the object. Other common buffer types such as memoryview were also never understood as subtypes of ByteString (either at runtime or by static type checkers).\nSee PEP 688 for more details. (Contributed by Shantanu Jain in gh-91896.)\ntyping:\nBefore Python 3.14, old-style unions were implemented using the private class typing._UnionGenericAlias. This class is no longer needed for the implementation, but it has been retained for backward compatibility, with removal scheduled for Python 3.17.
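Instead of relying on the private typing._UnionGenericAlias mentioned above, the public introspection helpers can recover the same information:

```python
from typing import Union, get_args, get_origin

OldStyle = Union[int, str]

# Public introspection instead of poking at typing internals:
assert get_origin(OldStyle) is Union
assert get_args(OldStyle) == (int, str)

# The same helpers also understand PEP 604 unions (int | str)
# on Python 3.10 and newer.
```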
Users should use documented introspection helpers like typing.get_origin() and typing.get_args() instead of relying on private implementation details.\ntyping.ByteString, deprecated since Python 3.9, is scheduled for removal in Python 3.17.\nUse isinstance(obj, collections.abc.Buffer) to test if obj implements the buffer protocol at runtime. For use in type annotations, either use Buffer or a union that explicitly specifies the types your code supports (e.g., bytes | bytearray | memoryview).\nByteString was originally intended to be an abstract class that would serve as a supertype of both bytes and bytearray. However, since the ABC never had any methods, knowing that an object was an instance of ByteString never actually told you anything useful about the object. Other common buffer types such as memoryview were also never understood as subtypes of ByteString (either at runtime or by static type checkers).\nSee PEP 688 for more details. (Contributed by Shantanu Jain in gh-91896.)\nPending removal in Python 3.18\u00b6\nPending removal in Python 3.19\u00b6\nPending removal in future versions\u00b6\nThe following APIs will be removed in the future, although there is currently no date scheduled for their removal.\nargparse:\nNesting argument groups and nesting mutually exclusive groups are deprecated.\nPassing the undocumented keyword argument prefix_chars to add_argument_group() is now deprecated.\nThe argparse.FileType type converter is deprecated.\nbuiltins:\nGenerators: the throw(type, exc, tb) and athrow(type, exc, tb) signature is deprecated: use the single-argument signature throw(exc) and athrow(exc) instead.\nCurrently Python accepts numeric literals immediately followed by keywords, for example 0in x, 1or x, 0if 1else 2. It allows confusing and ambiguous expressions like [0x1for x in y] (which can be interpreted as [0x1 for x in y] or [0x1f or x in y]).
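The two readings of the ambiguous [0x1for x in y] spelling above, written unambiguously with explicit whitespace:

```python
# Reading 1: hex literal 0x1 as the element of a comprehension.
values = [0x1 for x in range(3)]
assert values == [1, 1, 1]

# Reading 2: hex literal 0x1f (31, truthy) followed by `or`.
x_or_fallback = 0x1f or None
assert x_or_fallback == 31

# Writing the space explicitly is what the future syntax error
# will require anyway.
```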
A syntax warning is raised if the numeric literal is immediately followed by one of the keywords and\n, else\n, for\n, if\n, in\n, is\nand or\n. In a future release it will be changed to a syntax error. (gh-87999) Support for the\n__index__()\nand __int__()\nmethods returning a non-int type: these methods will be required to return an instance of a strict subclass of int\n. Support for the\n__float__()\nmethod returning a strict subclass of float\n: these methods will be required to return an instance of float\n. Support for the\n__complex__()\nmethod returning a strict subclass of complex\n: these methods will be required to return an instance of complex\n. Passing a complex number as the real or imag argument in the\ncomplex()\nconstructor is now deprecated; it should only be passed as a single positional argument. (Contributed by Serhiy Storchaka in gh-109218.)\ncalendar\n: the calendar.January\nand calendar.February\nconstants are deprecated and replaced by calendar.JANUARY\nand calendar.FEBRUARY\n. (Contributed by Prince Roshan in gh-103636.) codecs\n: use open()\ninstead of codecs.open()\n. (gh-133038) codeobject.co_lnotab\n: use the codeobject.co_lines()\nmethod instead.\n-\nutcnow()\n: use datetime.datetime.now(tz=datetime.UTC)\n. utcfromtimestamp()\n: use datetime.datetime.fromtimestamp(timestamp, tz=datetime.UTC)\n.\ngettext\n: Plural value must be an integer.\n-\nThe cache_from_source()\ndebug_override parameter is deprecated: use the optimization parameter instead.\n-\nEntryPoints\ntuple interface. Implicit\nNone\non return values.\nlogging\n: the warn()\nmethod has been deprecated since Python 3.3, use warning()\ninstead. mailbox\n: Use of StringIO input and text mode is deprecated, use BytesIO and binary mode instead. os\n: Calling os.register_at_fork()\nin a multi-threaded process. pydoc.ErrorDuringImport\n: A tuple value for the exc_info parameter is deprecated, use an exception instance. re\n: More strict rules are now applied for numerical group references and group names in regular expressions. 
Only a sequence of ASCII digits is now accepted as a numerical reference. The group name in bytes patterns and replacement strings can now only contain ASCII letters, digits and underscores. (Contributed by Serhiy Storchaka in gh-91760.) The\nsre_compile\n, sre_constants\nand sre_parse\nmodules. shutil\n: rmtree()\n\u2019s onerror parameter is deprecated in Python 3.12; use the onexc parameter instead. ssl\noptions and protocols: ssl.SSLContext\nwithout a protocol argument is deprecated. ssl.SSLContext\n: set_npn_protocols()\nand selected_npn_protocol()\nare deprecated: use ALPN instead. ssl.OP_NO_SSL*\noptions ssl.OP_NO_TLS*\noptions ssl.PROTOCOL_SSLv3\nssl.PROTOCOL_TLS\nssl.PROTOCOL_TLSv1\nssl.PROTOCOL_TLSv1_1\nssl.PROTOCOL_TLSv1_2\nssl.TLSVersion.SSLv3\nssl.TLSVersion.TLSv1\nssl.TLSVersion.TLSv1_1\nthreading\nmethods: threading.Condition.notifyAll()\n: use notify_all()\n. threading.Event.isSet()\n: use is_set()\n. threading.Thread.isDaemon()\n, threading.Thread.setDaemon()\n: use the threading.Thread.daemon\nattribute. threading.Thread.getName()\n, threading.Thread.setName()\n: use the threading.Thread.name\nattribute. threading.currentThread()\n: use threading.current_thread()\n. threading.activeCount()\n: use threading.active_count()\n.\nThe internal class\ntyping._UnionGenericAlias\nis no longer used to implement typing.Union\n. To preserve compatibility with users using this private class, a compatibility shim will be provided until at least Python 3.17. (Contributed by Jelle Zijlstra in gh-105499.) unittest.IsolatedAsyncioTestCase\n: it is deprecated to return a value that is not None\nfrom a test case. urllib.parse\ndeprecated functions: use urlparse()\ninstead. splitattr()\nsplithost()\nsplitnport()\nsplitpasswd()\nsplitport()\nsplitquery()\nsplittag()\nsplittype()\nsplituser()\nsplitvalue()\nto_bytes()\nwsgiref\n: SimpleHandler.stdout.write()\nshould not do partial writes. xml.etree.ElementTree\n: Testing the truth value of an Element\nis deprecated. In a future release it will always return True\n. 
Prefer explicit len(elem)\nor elem is not None\ntests instead. sys._clear_type_cache()\nis deprecated: use sys._clear_internal_caches()\ninstead.\nC API deprecations\u00b6\nPending removal in Python 3.15\u00b6\nThe\nPyImport_ImportModuleNoBlock()\nfunction: Use PyImport_ImportModule()\ninstead. PyWeakref_GetObject()\nand PyWeakref_GET_OBJECT()\n: Use PyWeakref_GetRef()\ninstead. The pythoncapi-compat project can be used to get PyWeakref_GetRef()\non Python 3.12 and older. The Py_UNICODE\ntype and the Py_UNICODE_WIDE\nmacro: Use wchar_t\ninstead. PyUnicode_AsDecodedObject()\n: Use PyCodec_Decode()\ninstead. PyUnicode_AsDecodedUnicode()\n: Use PyCodec_Decode()\ninstead; note that some codecs (for example, \u201cbase64\u201d) may return a type other than str\n, such as bytes\n. PyUnicode_AsEncodedObject()\n: Use PyCodec_Encode()\ninstead. PyUnicode_AsEncodedUnicode()\n: Use PyCodec_Encode()\ninstead; note that some codecs (for example, \u201cbase64\u201d) may return a type other than bytes\n, such as str\n. Python initialization functions, deprecated in Python 3.13:\nPy_GetPath()\n: Use PyConfig_Get(\"module_search_paths\")\n(sys.path\n) instead. Py_GetPrefix()\n: Use PyConfig_Get(\"base_prefix\")\n(sys.base_prefix\n) instead. Use PyConfig_Get(\"prefix\")\n(sys.prefix\n) if virtual environments need to be handled. Py_GetExecPrefix()\n: Use PyConfig_Get(\"base_exec_prefix\")\n(sys.base_exec_prefix\n) instead. 
UsePyConfig_Get(\"exec_prefix\")\n(sys.exec_prefix\n) if virtual environments need to be handled.Py_GetProgramFullPath()\n: UsePyConfig_Get(\"executable\")\n(sys.executable\n) instead.Py_GetProgramName()\n: UsePyConfig_Get(\"executable\")\n(sys.executable\n) instead.Py_GetPythonHome()\n: UsePyConfig_Get(\"home\")\nor thePYTHONHOME\nenvironment variable instead.\nThe pythoncapi-compat project can be used to get\nPyConfig_Get()\non Python 3.13 and older.Functions to configure Python\u2019s initialization, deprecated in Python 3.11:\nPySys_SetArgvEx()\n: SetPyConfig.argv\ninstead.PySys_SetArgv()\n: SetPyConfig.argv\ninstead.Py_SetProgramName()\n: SetPyConfig.program_name\ninstead.Py_SetPythonHome()\n: SetPyConfig.home\ninstead.PySys_ResetWarnOptions()\n: Clearsys.warnoptions\nandwarnings.filters\ninstead.\nThe\nPy_InitializeFromConfig()\nAPI should be used withPyConfig\ninstead.Global configuration variables:\nPy_DebugFlag\n: UsePyConfig.parser_debug\norPyConfig_Get(\"parser_debug\")\ninstead.Py_VerboseFlag\n: UsePyConfig.verbose\norPyConfig_Get(\"verbose\")\ninstead.Py_QuietFlag\n: UsePyConfig.quiet\norPyConfig_Get(\"quiet\")\ninstead.Py_InteractiveFlag\n: UsePyConfig.interactive\norPyConfig_Get(\"interactive\")\ninstead.Py_InspectFlag\n: UsePyConfig.inspect\norPyConfig_Get(\"inspect\")\ninstead.Py_OptimizeFlag\n: UsePyConfig.optimization_level\norPyConfig_Get(\"optimization_level\")\ninstead.Py_NoSiteFlag\n: UsePyConfig.site_import\norPyConfig_Get(\"site_import\")\ninstead.Py_BytesWarningFlag\n: UsePyConfig.bytes_warning\norPyConfig_Get(\"bytes_warning\")\ninstead.Py_FrozenFlag\n: UsePyConfig.pathconfig_warnings\norPyConfig_Get(\"pathconfig_warnings\")\ninstead.Py_IgnoreEnvironmentFlag\n: UsePyConfig.use_environment\norPyConfig_Get(\"use_environment\")\ninstead.Py_DontWriteBytecodeFlag\n: UsePyConfig.write_bytecode\norPyConfig_Get(\"write_bytecode\")\ninstead.Py_NoUserSiteDirectory\n: 
Use PyConfig.user_site_directory\nor PyConfig_Get(\"user_site_directory\")\ninstead. Py_UnbufferedStdioFlag\n: Use PyConfig.buffered_stdio\nor PyConfig_Get(\"buffered_stdio\")\ninstead. Py_HashRandomizationFlag\n: Use PyConfig.use_hash_seed\nand PyConfig.hash_seed\nor PyConfig_Get(\"hash_seed\")\ninstead. Py_IsolatedFlag\n: Use PyConfig.isolated\nor PyConfig_Get(\"isolated\")\ninstead. Py_LegacyWindowsFSEncodingFlag\n: Use PyPreConfig.legacy_windows_fs_encoding\nor PyConfig_Get(\"legacy_windows_fs_encoding\")\ninstead. Py_LegacyWindowsStdioFlag\n: Use PyConfig.legacy_windows_stdio\nor PyConfig_Get(\"legacy_windows_stdio\")\ninstead. Py_FileSystemDefaultEncoding\n, Py_HasFileSystemDefaultEncoding\n: Use PyConfig.filesystem_encoding\nor PyConfig_Get(\"filesystem_encoding\")\ninstead. Py_FileSystemDefaultEncodeErrors\n: Use PyConfig.filesystem_errors\nor PyConfig_Get(\"filesystem_errors\")\ninstead. Py_UTF8Mode\n: Use PyPreConfig.utf8_mode\nor PyConfig_Get(\"utf8_mode\")\ninstead (see Py_PreInitialize()\n).\nThe\nPy_InitializeFromConfig()\nAPI should be used with PyConfig\nto set these options. 
Or PyConfig_Get()\ncan be used to get these options at runtime.\nPending removal in Python 3.18\u00b6\nThe following private functions are deprecated and planned for removal in Python 3.18:\n_PyBytes_Join()\n: use PyBytes_Join()\n. _PyDict_GetItemStringWithError()\n: use PyDict_GetItemStringRef()\n. _PyDict_Pop()\n: use PyDict_Pop()\n. _PyLong_Sign()\n: use PyLong_GetSign()\n. _PyLong_FromDigits()\nand _PyLong_New()\n: use PyLongWriter_Create()\n. _PyThreadState_UncheckedGet()\n: use PyThreadState_GetUnchecked()\n. _PyUnicode_AsString()\n: use PyUnicode_AsUTF8()\n. _PyUnicodeWriter_Init()\n: replace _PyUnicodeWriter_Init(&writer)\nwith writer = PyUnicodeWriter_Create(0)\n. _PyUnicodeWriter_Finish()\n: replace _PyUnicodeWriter_Finish(&writer)\nwith PyUnicodeWriter_Finish(writer)\n. _PyUnicodeWriter_Dealloc()\n: replace _PyUnicodeWriter_Dealloc(&writer)\nwith PyUnicodeWriter_Discard(writer)\n. _PyUnicodeWriter_WriteChar()\n: replace _PyUnicodeWriter_WriteChar(&writer, ch)\nwith PyUnicodeWriter_WriteChar(writer, ch)\n. _PyUnicodeWriter_WriteStr()\n: replace _PyUnicodeWriter_WriteStr(&writer, str)\nwith PyUnicodeWriter_WriteStr(writer, str)\n. _PyUnicodeWriter_WriteSubstring()\n: replace _PyUnicodeWriter_WriteSubstring(&writer, str, start, end)\nwith PyUnicodeWriter_WriteSubstring(writer, str, start, end)\n. _PyUnicodeWriter_WriteASCIIString()\n: replace _PyUnicodeWriter_WriteASCIIString(&writer, str)\nwith PyUnicodeWriter_WriteASCII(writer, str)\n. _PyUnicodeWriter_WriteLatin1String()\n: replace _PyUnicodeWriter_WriteLatin1String(&writer, str)\nwith PyUnicodeWriter_WriteUTF8(writer, str)\n. _PyUnicodeWriter_Prepare()\n: (no replacement). _PyUnicodeWriter_PrepareKind()\n: (no replacement). _Py_HashPointer()\n: use Py_HashPointer()\n. _Py_fopen_obj()\n: use Py_fopen()\n.\nThe pythoncapi-compat project can be used to get these new public functions on Python 3.13 and older. 
(Contributed by Victor Stinner in gh-128863.)\nPending removal in future versions\u00b6\nThe following APIs are deprecated and will be removed, although there is currently no date scheduled for their removal.\nPy_TPFLAGS_HAVE_FINALIZE\n: Unneeded since Python 3.8. PyErr_Fetch()\n: Use PyErr_GetRaisedException()\ninstead. PyErr_NormalizeException()\n: Use PyErr_GetRaisedException()\ninstead. PyErr_Restore()\n: Use PyErr_SetRaisedException()\ninstead. PyModule_GetFilename()\n: Use PyModule_GetFilenameObject()\ninstead. PyOS_AfterFork()\n: Use PyOS_AfterFork_Child()\ninstead. PySlice_GetIndicesEx()\n: Use PySlice_Unpack()\nand PySlice_AdjustIndices()\ninstead. PyUnicode_READY()\n: Unneeded since Python 3.12. PyErr_Display()\n: Use PyErr_DisplayException()\ninstead. _PyErr_ChainExceptions()\n: Use _PyErr_ChainExceptions1()\ninstead. PyBytesObject.ob_shash\nmember: call PyObject_Hash()\ninstead. Thread Local Storage (TLS) API:\nPyThread_create_key()\n: Use PyThread_tss_alloc()\ninstead. PyThread_delete_key()\n: Use PyThread_tss_free()\ninstead. PyThread_set_key_value()\n: Use PyThread_tss_set()\ninstead. PyThread_get_key_value()\n: Use PyThread_tss_get()\ninstead. PyThread_delete_key_value()\n: Use PyThread_tss_delete()\ninstead. PyThread_ReInitTLS()\n: Unneeded since Python 3.7.", "code_snippets": ["\n\n", " ", "\n ", "\n\n", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 4898}
Each Python distribution will have its own benefits and drawbacks, however, consistency with other tools you are using is generally a worthwhile benefit. Before committing to the process described here, we recommend investigating your existing tools to see if they can provide Python directly.\nTo obtain Python from the CPython team, use the Python Install Manager. This is a standalone tool that makes Python available as global commands on your Windows machine, integrates with the system, and supports updates over time. You can download the Python Install Manager from python.org/downloads or through the Microsoft Store app.\nOnce you have installed the Python Install Manager, the global python\ncommand can be used from any terminal to launch your current latest version of\nPython. This version may change over time as you add or remove different\nversions, and the py list\ncommand will show which is current.\nIn general, we recommend that you create a virtual environment\nfor each project and run \\Scripts\\Activate\nin your terminal to use it.\nThis provides isolation between projects, consistency over time, and ensures\nthat additional commands added by packages are also available in your session.\nCreate a virtual environment using python -m venv \n.\nIf the python\nor py\ncommands do not seem to be working, please see the\nTroubleshooting section below. There are\nsometimes additional manual steps required to configure your PC.\nApart from using the Python install manager, Python can also be obtained as NuGet packages. See The nuget.org packages below for more information on these packages.\nThe embeddable distros are minimal packages of Python suitable for embedding into larger applications. They can be installed using the Python install manager. See The embeddable package below for more information on these packages.\n4.1. Python install manager\u00b6\n4.1.1. 
Installation\u00b6\nThe Python install manager can be installed from the Microsoft Store app or downloaded and installed from python.org/downloads. The two versions are identical.\nTo install through the Store, simply click \u201cInstall\u201d. After it has completed,\nopen a terminal and type python\nto get started.\nTo install the file downloaded from python.org, either double-click and select\n\u201cInstall\u201d, or run Add-AppxPackage \nin Windows Powershell.\nAfter installation, the python\n, py\n, and pymanager\ncommands should be\navailable. If you have existing installations of Python, or you have modified\nyour PATH\nvariable, you may need to remove them or undo the\nmodifications. See Troubleshooting for more help with fixing\nnon-working commands.\nWhen you first install a runtime, you will likely be prompted to add a directory\nto your PATH\n. This is optional, if you prefer to use the py\ncommand, but is offered for those who prefer the full range of aliases (such\nas python3.14.exe\n) to be available. The directory will be\n%LocalAppData%\\Python\\bin\nby default, but may be customized by an\nadministrator. Click Start and search for \u201cEdit environment variables for your\naccount\u201d for the system settings page to add the path.\nEach Python runtime you install will have its own directory for scripts. These\nalso need to be added to PATH\nif you want to use them.\nThe Python install manager will be automatically updated to new releases. This does not affect any installs of Python runtimes. Uninstalling the Python install manager does not uninstall any Python runtimes.\nIf you are not able to install an MSIX in your context, for example, you are using automated deployment software that does not support it, or are targeting Windows Server 2019, please see Advanced installation below for more information.\n4.1.2. 
Basic use\u00b6\nThe recommended command for launching Python is python\n, which will either\nlaunch the version requested by the script being launched, an active virtual\nenvironment, or the default installed version, which will be the latest stable\nrelease unless configured otherwise. If no version is specifically requested and\nno runtimes are installed at all, the current latest release will be installed\nautomatically.\nFor all scenarios involving multiple runtime versions, the recommended command\nis py\n. This may be used anywhere in place of python\nor the older\npy.exe\nlauncher. By default, py\nmatches the behaviour of python\n, but\nalso allows command line options to select a specific version as well as\nsubcommands to manage installations. These are detailed below.\nBecause the py\ncommand may already be taken by the previous version, there\nis also an unambiguous pymanager\ncommand. Scripted installs that are\nintending to use Python install manager should consider using pymanager\n, due\nto the lower chance of encountering a conflict with existing installs. The only\ndifference between the two commands is when running without any arguments:\npy\nwill launch your default interpreter, while pymanager\nwill display\nhelp (pymanager exec ...\nprovides equivalent behaviour to py ...\n).\nEach of these commands also has a windowed version that avoids creating a\nconsole window. These are pyw\n, pythonw\nand pymanagerw\n. A python3\ncommand is also included that mimics the python\ncommand. It is intended to\ncatch accidental uses of the typical POSIX command on Windows, but is not meant\nto be widely used or recommended.\nTo launch your default runtime, run python\nor py\nwith the arguments you\nwant to be passed to the runtime (such as script files or the module to launch):\n$> py\n...\n$> python my-script.py\n...\n$> py -m this\n...\nThe default runtime can be overridden with the PYTHON_MANAGER_DEFAULT\nenvironment variable, or a configuration file. 
See Configuration for\ninformation about configuration settings.\nTo launch a specific runtime, the py\ncommand accepts a -V:\noption.\nThis option must be specified before any others. The tag is part or all of the\nidentifier for the runtime; for those from the CPython team, it looks like the\nversion, potentially with the platform. For compatibility, the V:\nmay be\nomitted in cases where the tag refers to an official release and starts with\n3\n.\n$> py -V:3.14 ...\n$> py -V:3-arm64 ...\nRuntimes from other distributors may require the company to be included as\nwell. This should be separated from the tag by a slash, and may be a prefix.\nSpecifying the company is optional when it is PythonCore\n, and specifying the\ntag is optional (but not the slash) when you want the latest release from a\nspecific company.\n$> py -V:Distributor\\1.0 ...\n$> py -V:distrib/ ...\nIf no version is specified, but a script file is passed, the script will be inspected for a shebang line. This is a special format for the first line in a file that allows overriding the command. See Shebang lines for more information. When there is no shebang line, or it cannot be resolved, the script will be launched with the default runtime.\nIf you are running in an active virtual environment, have not requested a\nparticular version, and there is no shebang line, the default runtime will be\nthat virtual environment. In this scenario, the python\ncommand was likely\nalready overridden and none of these checks occurred. However, this behaviour\nensures that the py\ncommand can be used interchangeably.\nWhen no runtimes are installed, any launch command will try to install the\nrequested version and launch it. However, after any version is installed, only\nthe py exec ...\nand pymanager exec ...\ncommands will install if the\nrequested version is absent. Other forms of commands will display an error and\ndirect you to use py install\nfirst.\n4.1.3. 
Command help\u00b6\nThe py help\ncommand will display the full list of supported commands, along\nwith their options. Any command may be passed the -?\noption to display its\nhelp, or its name passed to py help\n.\n$> py help\n$> py help install\n$> py install /?\nAll commands support some common options, which will be shown by py help\n.\nThese options must be specified after any subcommand. Specifying -v\nor\n--verbose\nwill increase the amount of output shown, and -vv\nwill\nincrease it further for debugging purposes. Passing -q\nor --quiet\nwill\nreduce output, and -qq\nwill reduce it further.\nThe --config=\noption allows specifying a configuration file to\noverride multiple settings at once. See Configuration below for more\ninformation about these files.\n4.1.4. Listing runtimes\u00b6\n$> py list [-f=|--format=] [-1|--one] [--online|-s=|--source=] [...]\nThe list of installed runtimes can be seen using py list\n. A filter may be\nadded in the form of one or more tags (with or without company specifier), and\neach may include a <\n, <=\n, >=\nor >\nprefix to restrict to a range.\nA range of formats are supported, and can be passed as the --format=\nor\n-f \noption. Formats include table\n(a user friendly table view),\ncsv\n(comma-separated table), json\n(a single JSON blob), jsonl\n(one\nJSON blob per result), exe\n(just the executable path), prefix\n(just the\nprefix path).\nThe --one\nor -1\noption only displays a single result. If the default\nruntime is included, it will be the one. Otherwise, the \u201cbest\u201d result is shown\n(\u201cbest\u201d is deliberately vaguely defined, but will usually be the most recent\nversion). The result shown by py list --one \nwill match the runtime\nthat would be launched by py -V:\n.\nThe --only-managed\noption excludes results that were not installed by the\nPython install manager. 
This is useful when determining which runtimes may be\nupdated or uninstalled through the py\ncommand.\nThe --online\noption is short for passing --source=\nwith the default\nsource. Passing either of these options will search the online index for\nruntimes that can be installed. The result shown by py list --online --one\n\nwill match the runtime that would be installed by py install \n.\n$> py list --online 3.14\nFor compatibility with the old launcher, the --list\n, --list-paths\n,\n-0\nand -0p\ncommands (e.g. py -0p\n) are retained. They do not allow\nadditional options, and will produce legacy formatted output.\n4.1.5. Installing runtimes\u00b6\n$> py install [-s=|--source=] [-f|--force] [-u|--update] [--dry-run] [...]\nNew runtime versions may be added using py install\n. One or more tags may be\nspecified, and the special tag default\nmay be used to select the default.\nRanges are not supported for installation.\nThe --source=\noption allows overriding the online index that is used to\nobtain runtimes. This may be used with an offline index, as shown in\nOffline installs.\nPassing --force\nwill ignore any cached files and remove any existing install\nto replace it with the specified one.\nPassing --update\nwill replace existing installs if the new version is newer.\nOtherwise, they will be left. If no tags are provided with --update\n, all\ninstalls managed by the Python install manager will be updated if newer versions\nare available. 
Updates will remove any modifications made to the install,\nincluding globally installed packages, but virtual environments will continue to\nwork.\nPassing --dry-run\nwill generate output and logs, but will not modify any\ninstalls.\nIn addition to the above options, the --target\noption will extract the\nruntime to the specified directory instead of doing a normal install.\nThis is useful for embedding runtimes into larger applications.\nUnlike a normal install, py\nwill not be aware of the extracted runtime,\nand no Start menu or other shortcuts will be created.\nTo launch the runtime, directly execute the main executable (typically\npython.exe\n) in the target directory.\n$> py install ... [-t=|--target=] \nThe py exec\ncommand will install the requested runtime if it is not already\npresent. This is controlled by the automatic_install\nconfiguration\n(PYTHON_MANAGER_AUTOMATIC_INSTALL\n), and is enabled by default.\nIf no runtimes are available at all, all launch commands will do an automatic\ninstall if the configuration setting allows. This is to ensure a good experience\nfor new users, but should not generally be relied on rather than using the\npy exec\ncommand or explicit install commands.\n4.1.6. Offline installs\u00b6\nTo perform offline installs of Python, you will need to first create an offline index on a machine that has network access.\n$> py install --download= ... ...\nThe --download=\noption will download the packages for the listed tags\nand create a directory containing them and an index.json\nfile suitable for\nlater installation. This entire directory can be moved to the offline machine\nand used to install one or more of the bundled runtimes:\n$> py install --source=\"\\index.json\" ...\nThe Python install manager can be installed by downloading its installer and moving it to another machine before installing.\nAlternatively, the ZIP files in an offline index directory can simply be transferred to another machine and extracted. 
This will not register the install in any way, and so it must be launched by directly referencing the executables in the extracted directory, but it is sometimes a preferable approach in cases where installing the Python install manager is not possible or convenient.\nIn this way, Python runtimes can be installed and managed on a machine without access to the internet.\n4.1.7. Uninstalling runtimes\u00b6\n$> py uninstall [-y|--yes] ...\nRuntimes may be removed using the py uninstall\ncommand. One or more tags\nmust be specified. Ranges are not supported here.\nThe --yes\noption bypasses the confirmation prompt before uninstalling.\nInstead of passing tags individually, the --purge\noption may be specified.\nThis will remove all runtimes managed by the Python install manager, including\ncleaning up the Start menu, registry, and any download caches. Runtimes that\nwere not installed by the Python install manager will not be impacted, and\nneither will manually created configuration files.\n$> py uninstall [-y|--yes] --purge\nThe Python install manager can be uninstalled through the Windows \u201cInstalled\napps\u201d settings page. This does not remove any runtimes, and they will still be\nusable, though the global python\nand py\ncommands will be removed.\nReinstalling the Python install manager will allow you to manage these runtimes\nagain. To completely clean up all Python runtimes, run with --purge\nbefore\nuninstalling the Python install manager.\n4.1.8. Configuration\u00b6\nPython install manager is configured with a hierarchy of configuration files, environment variables, command-line options, and registry settings. In general, configuration files have the ability to configure everything, including the location of other configuration files, while registry settings are administrator-only and will override configuration files. 
Command-line options override all other settings, but not every option is available.\nThis section will describe the defaults, but be aware that modified or overridden installs may resolve settings differently.\nA global configuration file may be configured by an administrator, and would be\nread first. The user configuration file is stored at\n%AppData%\\Python\\pymanager.json\n(note that this location is under Roaming\n, not Local\n) and is read next,\noverwriting any settings from earlier files. An additional configuration file\nmay be specified as the PYTHON_MANAGER_CONFIG\nenvironment variable or the\n--config\ncommand line option (but not both).\nThese locations may be modified by administrative customization options listed\nlater.\nThe following settings are those that are considered likely to be modified in normal use. Later sections list those that are intended for administrative customization.\nStandard configuration options\nConfig Key |\nEnvironment Variable |\nDescription |\n|---|---|---|\n|\n|\nThe preferred default version to launch or install. By default, this is interpreted as the most recent non-prerelease version from the CPython team. |\n|\n|\nThe preferred default platform to launch or install.\nThis is treated as a suffix to the specified tag, such that |\n|\n|\nThe location where log files are written.\nBy default, |\n|\n|\nTrue to allow automatic installs when using |\n|\n|\nTrue to allow listing and launching runtimes that were not installed by the Python install manager, or false to exclude them. By default, true. |\n|\n|\nTrue to allow shebangs in |\n|\n|\nSet the default level of output (0-50). By default, 20. Lower values produce more output. The environment variables are boolean, and may produce additional output during startup that is later suppressed by other configuration. |\n|\n|\nTrue to confirm certain actions before taking them (such as uninstall), or false to skip the confirmation. By default, true. 
|\n|\n|\nOverride the index feed to obtain new installs from. |\n|\n|\nSpecify the default format used by the |\n|\n(none) |\nSpecify the root directory that runtimes will be installed into. If you change this setting, previously installed runtimes will not be usable unless you move them to the new location. |\n|\n(none) |\nSpecify the directory where global commands (such as |\n|\n(none) |\nSpecify the directory where downloaded files are stored. This directory is a temporary cache, and can be cleaned up from time to time. |\nDotted names should be nested inside JSON objects, for example, list.format\nwould be specified as {\"list\": {\"format\": \"table\"}}\n.\n4.1.9. Shebang lines\u00b6\nIf the first line of a script file starts with #!\n, it is known as a\n\u201cshebang\u201d line. Linux and other Unix like operating systems have native\nsupport for such lines and they are commonly used on such systems to indicate\nhow a script should be executed. The python\nand py\ncommands allow the\nsame facilities to be used with Python scripts on Windows.\nTo allow shebang lines in Python scripts to be portable between Unix and Windows, a number of \u2018virtual\u2019 commands are supported to specify which interpreter to use. The supported virtual commands are:\n/usr/bin/env \n/usr/bin/env -S \n/usr/bin/\n/usr/local/bin/\n\nFor example, if the first line of your script starts with\n#! /usr/bin/python\nThe default Python or an active virtual environment will be located and used.\nAs many Python scripts written to work on Unix will already have this line,\nyou should find these scripts can be used by the launcher without modification.\nIf you are writing a new script on Windows which you hope will be useful on\nUnix, you should use one of the shebang lines starting with /usr\n.\nAny of the above virtual commands can have \nreplaced by an alias from\nan installed runtime. 
That is, any command generated in the global aliases\ndirectory (which you may have added to your PATH\nenvironment variable)\ncan be used in a shebang, even if it is not on your PATH\n. This allows\nthe use of shebangs like /usr/bin/python3.12\nto select a particular runtime.\nIf no runtimes are installed, or if automatic installation is enabled, the requested runtime will be installed if necessary. See Configuration for information about configuration settings.\nThe /usr/bin/env\nform of shebang line will also search the PATH\nenvironment variable for unrecognized commands. This corresponds to the\nbehaviour of the Unix env\nprogram, which performs the same search, but\nprefers launching known Python commands. A warning may be displayed when\nsearching for arbitrary executables, and this search may be disabled by the\nshebang_can_run_anything\nconfiguration option.\nShebang lines that do not match any of patterns are treated as Windows\nexecutable paths that are absolute or relative to the directory containing the\nscript file. This is a convenience for Windows-only scripts, such as those\ngenerated by an installer, since the behavior is not compatible with Unix-style\nshells. These paths may be quoted, and may include multiple arguments, after\nwhich the path to the script and any additional arguments will be appended.\nThis functionality may be disabled by the shebang_can_run_anything\nconfiguration option.\nNote\nThe behaviour of shebangs in the Python install manager is subtly different\nfrom the previous py.exe\nlauncher, and the old configuration options no\nlonger apply. If you are specifically reliant on the old behaviour or\nconfiguration, we recommend installing the legacy launcher. The legacy\nlauncher\u2019s py\ncommand will override PyManager\u2019s one by default, and you\nwill need to use pymanager\ncommands for installing and uninstalling.\n4.1.10. 
Advanced installation\u00b6\nFor situations where an MSIX cannot be installed, such as some older\nadministrative distribution platforms, there is an MSI available from the\npython.org downloads page. This MSI has no user interface, and can only perform\nper-machine installs to its default location in Program Files. It will attempt\nto modify the system PATH\nenvironment variable to include this install\nlocation, but be sure to validate this on your configuration.\nNote\nWindows Server 2019 is the only version of Windows that CPython supports that does not support MSIX. For Windows Server 2019, you should use the MSI.\nBe aware that the MSI package does not bundle any runtimes, and so is not suitable for installs into offline environments without also creating an offline install index. See Offline installs and Administrative configuration for information on handling these scenarios.\nRuntimes installed by the MSI are shared with those installed by the MSIX, and\nare all per-user only. The Python install manager does not support installing\nruntimes per-machine. To emulate a per-machine install, you can use py install\n--target=\nas administrator and add your own system-wide\nmodifications to PATH\n, the registry, or the Start menu.\nWhen the MSIX is installed, but commands are not available in the PATH\nenvironment variable, they can be found under\n%LocalAppData%\\Microsoft\\WindowsApps\\PythonSoftwareFoundation.PythonManager_3847v3x7pw1km\nor\n%LocalAppData%\\Microsoft\\WindowsApps\\PythonSoftwareFoundation.PythonManager_qbz5n2kfra8p0\n,\ndepending on whether it was installed from python.org or through the Windows\nStore. 
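When scripting around these install locations, the two alias directories quoted above can be probed programmatically. The sketch below only assembles the paths listed in this section; the function names are illustrative and not part of any Python install manager API.

```python
import os
from pathlib import Path

# Package family names from the two distribution channels, as listed above:
# python.org MSIX first, then the Windows Store package.
PACKAGE_FAMILIES = [
    "PythonSoftwareFoundation.PythonManager_3847v3x7pw1km",
    "PythonSoftwareFoundation.PythonManager_qbz5n2kfra8p0",
]

def candidate_alias_dirs(local_app_data=None):
    """Return the possible WindowsApps alias directories for the
    Python install manager, whether or not they exist on disk."""
    base = Path(local_app_data or os.environ.get("LocalAppData", ""))
    return [base / "Microsoft" / "WindowsApps" / fam for fam in PACKAGE_FAMILIES]

def find_alias_dir(local_app_data=None):
    """Return the first candidate directory that actually exists, or None."""
    for path in candidate_alias_dirs(local_app_data):
        if path.is_dir():
            return path
    return None
```

A deployment script can call `find_alias_dir()` and, if it returns a path, append it to the user's `PATH` instead of asking the user to locate the commands manually.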
Attempting to run the executable directly from Program Files is not\nrecommended.\nTo programmatically install the Python install manager, it is easiest to use WinGet, which is included with all supported versions of Windows:\n$> winget install 9NQ7512CXL7T -e --accept-package-agreements --disable-interactivity\n# Optionally run the configuration checker and accept all changes\n$> py install --configure -y\nTo download the Python install manager and install on another machine, the\nfollowing WinGet command will download the required files from the Store to your\nDownloads directory (add -d \nto customize the output location).\nThis also generates a YAML file that appears to be unnecessary, as the\ndownloaded MSIX can be installed by launching or using the commands below.\n$> winget download 9NQ7512CXL7T -e --skip-license --accept-package-agreements --accept-source-agreements\nTo programmatically install or uninstall an MSIX using only PowerShell, the Add-AppxPackage and Remove-AppxPackage PowerShell cmdlets are recommended:\n$> Add-AppxPackage C:\\Downloads\\python-manager-25.0.msix\n...\n$> Get-AppxPackage PythonSoftwareFoundation.PythonManager | Remove-AppxPackage\nThe latest release can be downloaded and installed by Windows by passing the AppInstaller file to the Add-AppxPackage command. This installs using the MSIX on python.org, and is only recommended for cases where installing via the Store (interactively or using WinGet) is not possible.\n$> Add-AppxPackage -AppInstallerFile https://www.python.org/ftp/python/pymanager/pymanager.appinstaller\nOther tools and APIs may also be used to provision an MSIX package for all users on a machine, but Python does not consider this a supported scenario. 
We suggest looking into the PowerShell Add-AppxProvisionedPackage cmdlet, the native Windows PackageManager class, or the documentation and support for your deployment tool.\nRegardless of the install method, users will still need to install their own copies of Python itself, as there is no way to trigger those installs without being a logged in user. When using the MSIX, the latest version of Python will be available for all users to install without network access.\nNote that the MSIX downloadable from the Store and from the Python website are subtly different and cannot be installed at the same time. Wherever possible, we suggest using the above WinGet commands to download the package from the Store to reduce the risk of setting up conflicting installs. There are no licensing restrictions on the Python install manager that would prevent using the Store package in this way.\n4.1.11. Administrative configuration\u00b6\nThere are a number of options that may be useful for administrators to override configuration of the Python install manager. These can be used to provide local caching, disable certain shortcut types, override bundled content. All of the above configuration options may be set, as well as those below.\nConfiguration options may be overridden in the registry by setting values under\nHKEY_LOCAL_MACHINE\\Software\\Policies\\Python\\PyManager\n, where the\nvalue name matches the configuration key and the value type is REG_SZ\n. Note\nthat this key can itself be customized, but only by modifying the core config\nfile distributed with the Python install manager. We recommend, however, that\nregistry values are used only to set base_config\nto a JSON file containing\nthe full set of overrides. 
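As a sketch of this recommended layout, a base_config overrides file is ordinary JSON. The snippet below uses only settings mentioned in this section (confirm, and the nested form of list.format); the file name and the chosen values are illustrative, not an authoritative schema.

```python
import json

# Illustrative base_config overrides file. Only keys discussed in this
# section are shown; consult the full configuration reference for others.
overrides = {
    # Overriding confirm to a literal value means the default
    # %PYTHON_MANAGER_CONFIRM% environment variable is no longer consulted.
    "confirm": "yes",
    # Dotted names nest: the list.format option becomes {"list": {"format": ...}}.
    "list": {"format": "table"},
}

# Hypothetical file name; point the base_config registry value at this path.
with open("pymanager-overrides.json", "w", encoding="utf-8") as f:
    json.dump(overrides, f, indent=2)
```

Setting the registry value to point at a single JSON file like this, rather than overriding each key in the registry, leaves users free to layer their own configuration on top.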
Registry key overrides will replace any other configured setting, while base_config allows users to further modify settings they may need.\nNote that most settings with environment variables support those variables because their default setting specifies the variable. If you override them, the environment variable will no longer work, unless you override it with another one. For example, the default value of confirm is literally %PYTHON_MANAGER_CONFIRM%, which will resolve the variable at load time. If you override the value to yes, then the environment variable will no longer be used. If you override the value to %CONFIRM%, then that environment variable will be used instead.\nConfiguration settings that are paths are interpreted as relative to the directory containing the configuration file that specified them.\nAdministrative configuration options\nConfig Key | Description\n|---|---|\n| | The highest priority configuration file to read. Note that only the built-in configuration file and the registry can modify this setting. |\n| | The second configuration file to read. |\n| | The third configuration file to read. |\n| | Registry location to check for overrides. Note that only the built-in configuration file can modify this setting. |\n| | Read-only directory containing locally cached files. |\n| | Path or URL to an index to consult when the main index cannot be accessed. |\n| | Comma-separated list of shortcut kinds to allow (e.g. |\n| | Comma-separated list of shortcut kinds to exclude (e.g. |\n| | Registry location to read and write PEP 514 entries into. By default, |\n| | Start menu folder to write shortcuts into. By default, |\n| | Path to the active virtual environment. By default, this is |\n| | True to suppress visible warnings when a shebang launches an application other than a Python runtime. |\n4.1.12. 
Installing free-threaded binaries\u00b6\nAdded in version 3.13.\nPre-built distributions of the free-threaded build are available\nby installing tags with the t\nsuffix.\n$> py install 3.14t\n$> py install 3.14t-arm64\n$> py install 3.14t-32\nThis will install and register as normal. If you have no other runtimes\ninstalled, then python\nwill launch this one. Otherwise, you will need to use\npy -V:3.14t ...\nor, if you have added the global aliases directory to your\nPATH\nenvironment variable, the python3.14t.exe\ncommands.\n4.1.13. Troubleshooting\u00b6\nIf your Python install manager does not seem to be working correctly, please\nwork through these tests and fixes to see if it helps. If not, please report an\nissue at our bug tracker,\nincluding any relevant log files (written to your %TEMP%\ndirectory by\ndefault).\nTroubleshooting\nSymptom |\nThings to try |\n|---|---|\n|\nDid you install the Python install manager? |\nClick Start, open \u201cManage app execution aliases\u201d, and check that the aliases for \u201cPython (default)\u201d are enabled. If they already are, try disabling and re-enabling to refresh the command. The \u201cPython (default windowed)\u201d and \u201cPython install manager\u201d commands may also need refreshing. |\n|\nCheck that the |\n|\nEnsure your |\n|\n|\nDid you install the Python install manager? |\nClick Start, open \u201cManage app execution aliases\u201d, and check that the aliases for \u201cPython (default)\u201d are enabled. If they already are, try disabling and re-enabling to refresh the command. The \u201cPython (default windowed)\u201d and \u201cPython install manager\u201d commands may also need refreshing. |\n|\nEnsure your |\n|\n|\nThis usually means you have the legacy launcher installed and it has priority over the Python install manager. To remove, click Start, open \u201cInstalled apps\u201d, search for \u201cPython launcher\u201d and uninstall it. 
|\n|\nClick Start, open \u201cInstalled apps\u201d, look for any existing Python runtimes,\nand either remove them or Modify and disable the |\nClick Start, open \u201cManage app execution aliases\u201d, and check that your\n|\n|\n|\nCheck your |\nInstalls that are managed by the Python install manager will be chosen\nahead of unmanaged installs.\nUse |\n|\nPrerelease and experimental installs that are not managed by the Python\ninstall manager may be chosen ahead of stable releases.\nConfigure your default tag or uninstall the prerelease runtime\nand reinstall it using |\n|\n|\nClick Start, open \u201cManage app execution aliases\u201d, and check that your\n|\n|\nHave you activated a virtual environment?\nRun the |\nThe package may be available but missing the generated executable.\nWe recommend using the |\n|\nTyping |\nThis is a known limitation of the operating system. Either specify |\nDrag-dropping files onto a script doesn\u2019t work |\nThis is a known limitation of the operating system. It is supported with the legacy launcher, or with the Python install manager when installed from the MSI. |\nI have installed the Python install manager multiple times. |\nIt is possible to install from the Store or WinGet, from the MSIX on the Python website, and from the MSI, all at once. They are all compatible and will share configuration and runtimes. |\nSee the earlier Advanced installation section for ways to uninstall the install manager other than the typical Installed Apps (Add and Remove Programs) settings page. |\n|\nMy old |\nThe new Python install manager no longer supports this configuration file or its settings, and so it will be ignored. See Configuration for information about configuration settings. |\n4.2. The embeddable package\u00b6\nAdded in version 3.5.\nThe embedded distribution is a ZIP file containing a minimal Python environment. 
It is intended for acting as part of another application, rather than being directly accessed by end-users.\nTo install an embedded distribution, we recommend using py install\nwith the\n--target\noption:\n$> py install 3.14-embed --target=\nWhen extracted, the embedded distribution is (almost) fully isolated from the\nuser\u2019s system, including environment variables, system registry settings, and\ninstalled packages. The standard library is included as pre-compiled and\noptimized .pyc\nfiles in a ZIP, and python3.dll\n, python313.dll\n,\npython.exe\nand pythonw.exe\nare all provided. Tcl/tk (including all\ndependents, such as Idle), pip and the Python documentation are not included.\nA default ._pth\nfile is included, which further restricts the default search\npaths (as described below in Finding modules). This file is\nintended for embedders to modify as necessary.\nThird-party packages should be installed by the application installer alongside the embedded distribution. Using pip to manage dependencies as for a regular Python installation is not supported with this distribution, though with some care it may be possible to include and use pip for automatic updates. In general, third-party packages should be treated as part of the application (\u201cvendoring\u201d) so that the developer can ensure compatibility with newer versions before providing updates to users.\nThe two recommended use cases for this distribution are described below.\n4.2.1. Python application\u00b6\nAn application written in Python does not necessarily require users to be aware of that fact. The embedded distribution may be used in this case to include a private version of Python in an install package. Depending on how transparent it should be (or conversely, how professional it should appear), there are two options.\nUsing a specialized executable as a launcher requires some coding, but provides\nthe most transparent experience for users. 
With a customized launcher, there are\nno obvious indications that the program is running on Python: icons can be\ncustomized, company and version information can be specified, and file\nassociations behave properly. In most cases, a custom launcher should simply be\nable to call Py_Main\nwith a hard-coded command line.\nThe simpler approach is to provide a batch file or generated shortcut that\ndirectly calls the python.exe\nor pythonw.exe\nwith the required\ncommand-line arguments. In this case, the application will appear to be Python\nand not its actual name, and users may have trouble distinguishing it from other\nrunning Python processes or file associations.\nWith the latter approach, packages should be installed as directories alongside the Python executable to ensure they are available on the path. With the specialized launcher, packages can be located in other locations as there is an opportunity to specify the search path before launching the application.\n4.2.2. Embedding Python\u00b6\nApplications written in native code often require some form of scripting\nlanguage, and the embedded Python distribution can be used for this purpose. In\ngeneral, the majority of the application is in native code, and some part will\neither invoke python.exe\nor directly use python3.dll\n. For either case,\nextracting the embedded distribution to a subdirectory of the application\ninstallation is sufficient to provide a loadable Python interpreter.\nAs with the application use, packages can be installed to any location as there is an opportunity to specify search paths before initializing the interpreter. Otherwise, there is no fundamental differences between using the embedded distribution and a regular installation.\n4.3. The nuget.org packages\u00b6\nAdded in version 3.5.2.\nThe nuget.org package is a reduced size Python environment intended for use on continuous integration and build systems that do not have a system-wide install of Python. 
While nuget is \u201cthe package manager for .NET\u201d, it also works perfectly fine for packages containing build-time tools.\nVisit nuget.org for the most up-to-date information on using nuget. What follows is a summary that is sufficient for Python developers.\nThe nuget.exe\ncommand line tool may be downloaded directly from\nhttps://aka.ms/nugetclidl\n, for example, using curl or PowerShell. With the\ntool, the latest version of Python for 64-bit or 32-bit machines is installed\nusing:\nnuget.exe install python -ExcludeVersion -OutputDirectory .\nnuget.exe install pythonx86 -ExcludeVersion -OutputDirectory .\nTo select a particular version, add a -Version 3.x.y\n. The output directory\nmay be changed from .\n, and the package will be installed into a\nsubdirectory. By default, the subdirectory is named the same as the package,\nand without the -ExcludeVersion\noption this name will include the specific\nversion installed. Inside the subdirectory is a tools\ndirectory that\ncontains the Python installation:\n# Without -ExcludeVersion\n> .\\python.3.5.2\\tools\\python.exe -V\nPython 3.5.2\n# With -ExcludeVersion\n> .\\python\\tools\\python.exe -V\nPython 3.5.2\nIn general, nuget packages are not upgradeable, and newer versions should be installed side-by-side and referenced using the full path. Alternatively, delete the package directory manually and install it again. Many CI systems will do this automatically if they do not preserve files between builds.\nAlongside the tools\ndirectory is a build\\native\ndirectory. This\ncontains a MSBuild properties file python.props\nthat can be used in a\nC++ project to reference the Python install. Including the settings will\nautomatically use the headers and import libraries in your build.\nThe package information pages on nuget.org are www.nuget.org/packages/python for the 64-bit version, www.nuget.org/packages/pythonx86 for the 32-bit version, and www.nuget.org/packages/pythonarm64 for the ARM64 version\n4.3.1. 
Free-threaded packages\u00b6\nAdded in version 3.13.\nPackages containing free-threaded binaries are named\npython-freethreaded\nfor the 64-bit version, pythonx86-freethreaded for the 32-bit\nversion, and pythonarm64-freethreaded for the ARM64\nversion. These packages contain both the python3.13t.exe\nand\npython.exe\nentry points, both of which run free threaded.\n4.4. Alternative bundles\u00b6\nBesides the standard CPython distribution, there are modified packages including additional functionality. The following is a list of popular versions and their key features:\n- ActivePython\nInstaller with multi-platform compatibility, documentation, PyWin32\n- Anaconda\nPopular scientific modules (such as numpy, scipy and pandas) and the\nconda\npackage manager.- Enthought Deployment Manager\n\u201cThe Next Generation Python Environment and Package Manager\u201d.\nPreviously Enthought provided Canopy, but it reached end of life in 2016.\n- WinPython\nWindows-specific distribution with prebuilt scientific packages and tools for building packages.\nNote that these packages may not include the latest versions of Python or other libraries, and are not maintained or supported by the core Python team.\n4.5. Supported Windows versions\u00b6\nAs specified in PEP 11, a Python release only supports a Windows platform while Microsoft considers the platform under extended support. This means that Python 3.14 supports Windows 10 and newer. If you require Windows 7 support, please install Python 3.8. If you require Windows 8.1 support, please install Python 3.12.\n4.6. Removing the MAX_PATH limitation\u00b6\nWindows historically has limited path lengths to 260 characters. This meant that paths longer than this would not resolve and errors would result.\nIn the latest versions of Windows, this limitation can be expanded to over\n32,000 characters. 
Your administrator will need to activate the \u201cEnable Win32\nlong paths\u201d group policy, or set LongPathsEnabled\nto 1\nin the registry\nkey HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Control\\FileSystem\n.\nThis allows the open()\nfunction, the os\nmodule and most other\npath functionality to accept and return paths longer than 260 characters.\nAfter changing the above option and rebooting, no further configuration is required.\n4.7. UTF-8 mode\u00b6\nAdded in version 3.7.\nWindows still uses legacy encodings for the system encoding (the ANSI Code\nPage). Python uses it for the default encoding of text files (e.g.\nlocale.getencoding()\n).\nThis may cause issues because UTF-8 is widely used on the internet and most Unix systems, including WSL (Windows Subsystem for Linux).\nYou can use the Python UTF-8 Mode to change the default text\nencoding to UTF-8. You can enable the Python UTF-8 Mode via\nthe -X utf8\ncommand line option, or the PYTHONUTF8=1\nenvironment\nvariable. See PYTHONUTF8\nfor enabling UTF-8 mode, and\nPython install manager for how to modify environment variables.\nWhen the Python UTF-8 Mode is enabled, you can still use the system encoding (the ANSI Code Page) via the \u201cmbcs\u201d codec.\nNote that adding PYTHONUTF8=1\nto the default environment variables\nwill affect all Python 3.7+ applications on your system.\nIf you have any Python 3.7+ applications which rely on the legacy\nsystem encoding, it is recommended to set the environment variable\ntemporarily or use the -X utf8\ncommand line option.\nNote\nEven when UTF-8 mode is disabled, Python uses UTF-8 by default on Windows for:\nConsole I/O including standard I/O (see PEP 528 for details).\nThe filesystem encoding (see PEP 529 for details).\n4.8. 
Finding modules\u00b6\nThese notes supplement the description at The initialization of the sys.path module search path with detailed Windows notes.\nWhen no ._pth file is found, this is how sys.path is populated on Windows:\n- An empty entry is added at the start, which corresponds to the current directory.\n- If the environment variable PYTHONPATH exists, as described in Environment variables, its entries are added next. Note that on Windows, paths in this variable must be separated by semicolons, to distinguish them from the colon used in drive identifiers (C:\\ etc.).\n- Additional \u201capplication paths\u201d can be added in the registry as subkeys of \\SOFTWARE\\Python\\PythonCore{version}\\PythonPath under both the HKEY_CURRENT_USER and HKEY_LOCAL_MACHINE hives. Subkeys which have semicolon-delimited path strings as their default value will cause each path to be added to sys.path. (Note that all known installers only use HKLM, so HKCU is typically empty.)\n- If the environment variable PYTHONHOME is set, it is assumed as \u201cPython Home\u201d. Otherwise, the path of the main Python executable is used to locate a \u201clandmark file\u201d (either Lib\\os.py or pythonXY.zip) to deduce the \u201cPython Home\u201d. If a Python home is found, the relevant sub-directories added to sys.path (Lib, plat-win, etc.) are based on that folder. 
Otherwise, the core Python path is constructed from the PythonPath stored in the registry.\n- If the Python Home cannot be located, no PYTHONPATH is specified in the environment, and no registry entries can be found, a default path with relative entries is used (e.g. .\\Lib;.\\plat-win, etc.).\nIf a pyvenv.cfg file is found alongside the main executable or in the directory one level above the executable, the following variations apply:\n- If home is an absolute path and PYTHONHOME is not set, this path is used instead of the path to the main executable when deducing the home location.\nThe end result of all this is:\n- When running python.exe, or any other .exe in the main Python directory (either an installed version, or directly from the PCbuild directory), the core path is deduced, and the core paths in the registry are ignored. Other \u201capplication paths\u201d in the registry are always read.\n- When Python is hosted in another .exe (different directory, embedded via COM, etc.), the \u201cPython Home\u201d will not be deduced, so the core path from the registry is used. Other \u201capplication paths\u201d in the registry are always read.\n- If Python can\u2019t find its home and there are no registry values (frozen .exe, some very strange installation setup), you get a path with some default, but relative, paths.\nFor those who want to bundle Python into their application or distribution, the following advice will prevent conflicts with other installations:\n- Include a ._pth file alongside your executable containing the directories to include. 
This will ignore paths listed in the registry and environment variables, and also ignore site unless import site is listed.\n- If you are loading python3.dll or python37.dll in your own executable, explicitly set PyConfig.module_search_paths before Py_InitializeFromConfig().\n- Clear and/or overwrite PYTHONPATH and set PYTHONHOME before launching python.exe from your application.\n- If you cannot use the previous suggestions (for example, you are a distribution that allows people to run python.exe directly), ensure that the landmark file (Lib\\os.py) exists in your install directory. (Note that it will not be detected inside a ZIP file, but a correctly named ZIP file will be detected instead.)\nThese will ensure that the files in a system-wide installation will not take precedence over the copy of the standard library bundled with your application. Otherwise, your users may experience problems using your application. Note that the first suggestion is the best, as the others may still be susceptible to non-standard paths in the registry and user site-packages.\nChanged in version 3.6: Added ._pth file support and removed the applocal option from pyvenv.cfg.\nChanged in version 3.6: Added pythonXX.zip as a potential landmark when directly adjacent to the executable.\nDeprecated since version 3.6: Modules specified in the registry under Modules (not PythonPath) may be imported by importlib.machinery.WindowsRegistryFinder. This finder is enabled on Windows in 3.6.0 and earlier, but may need to be explicitly added to sys.meta_path in the future.\n4.9. Additional modules\u00b6\nEven though Python aims to be portable among all platforms, there are features that are unique to Windows. A number of modules, both in the standard library and external, as well as code snippets, exist to make use of these features.\nThe Windows-specific standard modules are documented in MS Windows Specific Services.\n4.9.1. 
PyWin32\u00b6\nThe PyWin32 module by Mark Hammond is a collection of modules for advanced Windows-specific support. This includes utilities for:\nComponent Object Model (COM)\nWin32 API calls\nRegistry\nEvent log\nMicrosoft Foundation Classes (MFC) user interfaces\nPythonWin is a sample MFC application shipped with PyWin32. It is an embeddable IDE with a built-in debugger.\nSee also\n- Win32 How Do I\u2026?\nby Tim Golden\n- Python and COM\nby David and Paul Boddie\n4.9.2. cx_Freeze\u00b6\ncx_Freeze\nwraps Python scripts into executable Windows programs\n(*.exe\nfiles). When you have done this, you can distribute your\napplication without requiring your users to install Python.\n4.10. Compiling Python on Windows\u00b6\nIf you want to compile CPython yourself, first thing you should do is get the source. You can download either the latest release\u2019s source or just grab a fresh checkout.\nThe source tree contains a build solution and project files for Microsoft\nVisual Studio, which is the compiler used to build the official Python\nreleases. These files are in the PCbuild\ndirectory.\nCheck PCbuild/readme.txt\nfor general information on the build process.\nFor extension modules, consult Building C and C++ Extensions on Windows.\n4.11. The full installer (deprecated)\u00b6\nDeprecated since version 3.14: This installer is deprecated since 3.14 and will not be produced for Python 3.16 or later. See Python install manager for the modern installer.\n4.11.1. Installation steps\u00b6\nFour Python 3.14 installers are available for download - two each for the 32-bit and 64-bit versions of the interpreter. The web installer is a small initial download, and it will automatically download the required components as necessary. The offline installer includes the components necessary for a default installation and only requires an internet connection for optional features. 
See Installing without downloading for other ways to avoid downloading during installation.\nAfter starting the installer, one of two options may be selected:\nIf you select \u201cInstall Now\u201d:\nYou will not need to be an administrator (unless a system update for the C Runtime Library is required or you install the Python install manager for all users)\nPython will be installed into your user directory\nThe Python install manager will be installed according to the option at the bottom of the first page\nThe standard library, test suite, launcher and pip will be installed\nIf selected, the install directory will be added to your\nPATH\nShortcuts will only be visible for the current user\nSelecting \u201cCustomize installation\u201d will allow you to select the features to install, the installation location and other options or post-install actions. To install debugging symbols or binaries, you will need to use this option.\nTo perform an all-users installation, you should select \u201cCustomize installation\u201d. In this case:\nYou may be required to provide administrative credentials or approval\nPython will be installed into the Program Files directory\nThe Python install manager will be installed into the Windows directory\nOptional features may be selected during installation\nThe standard library can be pre-compiled to bytecode\nIf selected, the install directory will be added to the system\nPATH\nShortcuts are available for all users\n4.11.2. Removing the MAX_PATH limitation\u00b6\nWindows historically has limited path lengths to 260 characters. This meant that paths longer than this would not resolve and errors would result.\nIn the latest versions of Windows, this limitation can be expanded to\napproximately 32,000 characters. 
Your administrator will need to activate the\n\u201cEnable Win32 long paths\u201d group policy, or set LongPathsEnabled\nto 1\nin the registry key\nHKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Control\\FileSystem\n.\nThis allows the open()\nfunction, the os\nmodule and most other\npath functionality to accept and return paths longer than 260 characters.\nAfter changing the above option, no further configuration is required.\nChanged in version 3.6: Support for long paths was enabled in Python.\n4.11.3. Installing without UI\u00b6\nAll of the options available in the installer UI can also be specified from the command line, allowing scripted installers to replicate an installation on many machines without user interaction. These options may also be set without suppressing the UI in order to change some of the defaults.\nThe following options (found by executing the installer with /?\n) can be\npassed into the installer:\nName |\nDescription |\n|---|---|\n/passive |\nto display progress without requiring user interaction |\n/quiet |\nto install/uninstall without displaying any UI |\n/simple |\nto prevent user customization |\n/uninstall |\nto remove Python (without confirmation) |\n/layout [directory] |\nto pre-download all components |\n/log [filename] |\nto specify log files location |\nAll other options are passed as name=value\n, where the value is usually\n0\nto disable a feature, 1\nto enable a feature, or a path. The full list\nof available options is shown below.\nName |\nDescription |\nDefault |\n|---|---|---|\nInstallAllUsers |\nPerform a system-wide installation. 
|\n0 |\nTargetDir |\nThe installation directory |\nSelected based on InstallAllUsers |\nDefaultAllUsersTargetDir |\nThe default installation directory for all-user installs |\n|\nDefaultJustForMeTargetDir |\nThe default install directory for just-for-me installs |\n|\nDefaultCustomTargetDir |\nThe default custom install directory displayed in the UI |\n(empty) |\nAssociateFiles |\nCreate file associations if the launcher is also installed. |\n1 |\nCompileAll |\nCompile all |\n0 |\nPrependPath |\nPrepend install and Scripts\ndirectories to |\n0 |\nAppendPath |\nAppend install and Scripts\ndirectories to |\n0 |\nShortcuts |\nCreate shortcuts for the interpreter, documentation and IDLE if installed. |\n1 |\nInclude_doc |\nInstall Python manual |\n1 |\nInclude_debug |\nInstall debug binaries |\n0 |\nInclude_dev |\nInstall developer headers and libraries. Omitting this may lead to an unusable installation. |\n1 |\nInclude_exe |\nInstall |\n1 |\nInclude_launcher |\nInstall Python install manager. |\n1 |\nInstallLauncherAllUsers |\nInstalls the launcher for all\nusers. Also requires\n|\n1 |\nInclude_lib |\nInstall standard library and extension modules. Omitting this may lead to an unusable installation. |\n1 |\nInclude_pip |\nInstall bundled pip and setuptools |\n1 |\nInclude_symbols |\nInstall debugging symbols ( |\n0 |\nInclude_tcltk |\nInstall Tcl/Tk support and IDLE |\n1 |\nInclude_test |\nInstall standard library test suite |\n1 |\nInclude_tools |\nInstall utility scripts |\n1 |\nLauncherOnly |\nOnly installs the launcher. This will override most other options. |\n0 |\nSimpleInstall |\nDisable most install UI |\n0 |\nSimpleInstallDescription |\nA custom message to display when the simplified install UI is used. 
|\n(empty) |\nFor example, to silently install a default, system-wide Python installation, you could use the following command (from an elevated command prompt):\npython-3.9.0.exe /quiet InstallAllUsers=1 PrependPath=1 Include_test=0\nTo allow users to easily install a personal copy of Python without the test suite, you could provide a shortcut with the following command. This will display a simplified initial page and disallow customization:\npython-3.9.0.exe InstallAllUsers=0 Include_launcher=0 Include_test=0\nSimpleInstall=1 SimpleInstallDescription=\"Just for me, no test suite.\"\n(Note that omitting the launcher also omits file associations, and is only recommended for per-user installs when there is also a system-wide installation that included the launcher.)\nThe options listed above can also be provided in a file named unattend.xml\nalongside the executable. This file specifies a list of options and values.\nWhen a value is provided as an attribute, it will be converted to a number if\npossible. Values provided as element text are always left as strings. This\nexample file sets the same options as the previous example:\n<Options>\n <Option Name=\"InstallAllUsers\" Value=\"no\" />\n <Option Name=\"Include_launcher\" Value=\"0\" />\n <Option Name=\"Include_test\" Value=\"no\" />\n <Option Name=\"SimpleInstall\" Value=\"yes\" />\n <Option Name=\"SimpleInstallDescription\">Just for me, no test suite</Option>\n</Options>\n4.11.4. Installing without downloading\u00b6\nAs some features of Python are not included in the initial installer download, selecting those features may require an internet connection. To avoid this need, all possible components may be downloaded on-demand to create a complete layout that will no longer require an internet connection regardless of the selected features. Note that this download may be bigger than required, but where a large number of installations are going to be performed it is very useful to have a locally cached copy.\nExecute the following command from Command Prompt to download all possible\nrequired files. 
Remember to substitute python-3.9.0.exe\nfor the actual\nname of your installer, and to create layouts in their own directories to\navoid collisions between files with the same name.\npython-3.9.0.exe /layout [optional target directory]\nYou may also specify the /quiet\noption to hide the progress display.\n4.11.5. Modifying an install\u00b6\nOnce Python has been installed, you can add or remove features through the Programs and Features tool that is part of Windows. Select the Python entry and choose \u201cUninstall/Change\u201d to open the installer in maintenance mode.\n\u201cModify\u201d allows you to add or remove features by modifying the checkboxes - unchanged checkboxes will not install or remove anything. Some options cannot be changed in this mode, such as the install directory; to modify these, you will need to remove and then reinstall Python completely.\n\u201cRepair\u201d will verify all the files that should be installed using the current settings and replace any that have been removed or modified.\n\u201cUninstall\u201d will remove Python entirely, with the exception of the Python install manager, which has its own entry in Programs and Features.\n4.11.6. Installing free-threaded binaries\u00b6\nAdded in version 3.13.\nTo install pre-built binaries with free-threading enabled (see PEP 703), you should select \u201cCustomize installation\u201d. The second page of options includes the \u201cDownload free-threaded binaries\u201d checkbox.\nSelecting this option will download and install additional binaries to the same\nlocation as the main Python install. The main executable is called\npython3.13t.exe\n, and other binaries either receive a t\nsuffix or a full\nABI suffix. Python source files and bundled third-party dependencies are shared\nwith the main install.\nThe free-threaded version is registered as a regular Python install with the\ntag 3.13t\n(with a -32\nor -arm64\nsuffix as normal for those\nplatforms). 
This allows tools to discover it, and for the Python install manager to\nsupport py.exe -3.13t\n. Note that the launcher will interpret py.exe -3\n(or a python3\nshebang) as \u201cthe latest 3.x install\u201d, which will prefer the\nfree-threaded binaries over the regular ones, while py.exe -3.13\nwill not.\nIf you use the short style of option, you may prefer not to install the\nfree-threaded binaries at this time.\nTo specify the install option at the command line, use\nInclude_freethreaded=1\n. See Installing without downloading for instructions on\npre-emptively downloading the additional binaries for offline install. The\noptions to include debug symbols and binaries also apply to the free-threaded\nbuilds.\nFree-threaded binaries are also available on nuget.org.\n4.12. Python launcher for Windows (deprecated)\u00b6\nDeprecated since version 3.14: The launcher and this documentation have been superseded by the Python Install Manager described above. This is preserved temporarily for historical interest.\nAdded in version 3.3.\nThe Python launcher for Windows is a utility which aids in locating and executing different Python versions. It allows scripts (or the command-line) to indicate a preference for a specific Python version, and will locate and execute that version.\nUnlike the PATH\nvariable, the launcher will correctly select the most\nappropriate version of Python. It will prefer per-user installations over\nsystem-wide ones, and orders by language version rather than using the most\nrecently installed version.\nThe launcher was originally specified in PEP 397.\n4.12.1. Getting started\u00b6\n4.12.1.1. From the command-line\u00b6\nChanged in version 3.6.\nSystem-wide installations of Python 3.3 and later will put the launcher on your\nPATH\n. The launcher is compatible with all available versions of\nPython, so it does not matter which version is installed. 
To check that the\nlauncher is available, execute the following command in Command Prompt:\npy\nYou should find that the latest version of Python you have installed is started - it can be exited as normal, and any additional command-line arguments specified will be sent directly to Python.\nIf you have multiple versions of Python installed (e.g., 3.7 and 3.14) you will have noticed that Python 3.14 was started - to launch Python 3.7, try the command:\npy -3.7\nIf you want the latest version of Python 2 you have installed, try the command:\npy -2\nIf you see the following error, you do not have the launcher installed:\n'py' is not recognized as an internal or external command,\noperable program or batch file.\nThe command:\npy --list\ndisplays the currently installed version(s) of Python.\nThe -x.y\nargument is the short form of the -V:Company/Tag\nargument,\nwhich allows selecting a specific Python runtime, including those that may have\ncome from somewhere other than python.org. Any runtime registered by following\nPEP 514 will be discoverable. The --list\ncommand lists all available\nruntimes using the -V:\nformat.\nWhen using the -V:\nargument, specifying the Company will limit selection to\nruntimes from that provider, while specifying only the Tag will select from all\nproviders. Note that omitting the slash implies a tag:\n# Select any '3.*' tagged runtime\npy -V:3\n# Select any 'PythonCore' released runtime\npy -V:PythonCore/\n# Select PythonCore's latest Python 3 runtime\npy -V:PythonCore/3\nThe short form of the argument (-3\n) only ever selects from core Python\nreleases, and not other distributions. However, the longer form (-V:3\n) will\nselect from any.\nThe Company is matched on the full string, case-insensitive. The Tag is matched\non either the full string, or a prefix, provided the next character is a dot or a\nhyphen. This allows -V:3.1\nto match 3.1-32\n, but not 3.10\n. 
Tags are\nsorted using numerical ordering (3.10\nis newer than 3.1\n), but are\ncompared using text (-V:3.01\ndoes not match 3.1\n).\n4.12.1.2. Virtual environments\u00b6\nAdded in version 3.5.\nIf the launcher is run with no explicit Python version specification, and a\nvirtual environment (created with the standard library venv\nmodule or\nthe external virtualenv\ntool) is active, the launcher will run the virtual\nenvironment\u2019s interpreter rather than the global one. To run the global\ninterpreter, either deactivate the virtual environment, or explicitly specify\nthe global Python version.\n4.12.1.3. From a script\u00b6\nLet\u2019s create a test Python script - create a file called hello.py\nwith the\nfollowing contents:\n#! python\nimport sys\nsys.stdout.write(\"hello from Python %s\\n\" % (sys.version,))\nFrom the directory in which hello.py lives, execute the command:\npy hello.py\nYou should notice the version number of your latest Python 2.x installation is printed. Now try changing the first line to be:\n#! python3\nRe-executing the command should now print the latest Python 3.x information.\nAs with the above command-line examples, you can specify a more explicit\nversion qualifier. Assuming you have Python 3.7 installed, try changing\nthe first line to #! python3.7\nand you should find the 3.7\nversion information printed.\nNote that unlike interactive use, a bare \u201cpython\u201d will use the latest\nversion of Python 2.x that you have installed. This is for backward\ncompatibility and for compatibility with Unix, where the command python\ntypically refers to Python 2.\n4.12.1.4. From file associations\u00b6\nThe launcher should have been associated with Python files (i.e. .py\n,\n.pyw\n, .pyc\nfiles) when it was installed. 
This means that\nwhen you double-click on one of these files from Windows explorer the launcher\nwill be used, and therefore you can use the same facilities described above to\nhave the script specify the version which should be used.\nThe key benefit of this is that a single launcher can support multiple Python versions at the same time depending on the contents of the first line.\n4.12.2. Shebang lines\u00b6\nIf the first line of a script file starts with #!\n, it is known as a\n\u201cshebang\u201d line. Linux and other Unix like operating systems have native\nsupport for such lines and they are commonly used on such systems to indicate\nhow a script should be executed. This launcher allows the same facilities to\nbe used with Python scripts on Windows and the examples above demonstrate their\nuse.\nTo allow shebang lines in Python scripts to be portable between Unix and Windows, this launcher supports a number of \u2018virtual\u2019 commands to specify which interpreter to use. The supported virtual commands are:\n/usr/bin/env\n/usr/bin/python\n/usr/local/bin/python\npython\nFor example, if the first line of your script starts with\n#! /usr/bin/python\nThe default Python or an active virtual environment will be located and used.\nAs many Python scripts written to work on Unix will already have this line,\nyou should find these scripts can be used by the launcher without modification.\nIf you are writing a new script on Windows which you hope will be useful on\nUnix, you should use one of the shebang lines starting with /usr\n.\nAny of the above virtual commands can be suffixed with an explicit version\n(either just the major version, or the major and minor version).\nFurthermore the 32-bit version can be requested by adding \u201c-32\u201d after the\nminor version. I.e. /usr/bin/python3.7-32\nwill request usage of the\n32-bit Python 3.7. 
If a virtual environment is active, the version will be\nignored and the environment will be used.\nAdded in version 3.7: Beginning with python launcher 3.7 it is possible to request 64-bit version\nby the \u201c-64\u201d suffix. Furthermore it is possible to specify a major and\narchitecture without minor (i.e. /usr/bin/python3-64\n).\nChanged in version 3.11: The \u201c-64\u201d suffix is deprecated, and now implies \u201cany architecture that is\nnot provably i386/32-bit\u201d. To request a specific environment, use the new\n-V:TAG\nargument with the complete tag.\nChanged in version 3.13: Virtual commands referencing python\nnow prefer an active virtual\nenvironment rather than searching PATH\n. This handles cases where\nthe shebang specifies /usr/bin/env python3\nbut python3.exe\nis\nnot present in the active environment.\nThe /usr/bin/env\nform of shebang line has one further special property.\nBefore looking for installed Python interpreters, this form will search the\nexecutable PATH\nfor a Python executable matching the name provided\nas the first argument. This corresponds to the behaviour of the Unix env\nprogram, which performs a PATH\nsearch.\nIf an executable matching the first argument after the env\ncommand cannot\nbe found, but the argument starts with python\n, it will be handled as\ndescribed for the other virtual commands.\nThe environment variable PYLAUNCHER_NO_SEARCH_PATH\nmay be set\n(to any value) to skip this search of PATH\n.\nShebang lines that do not match any of these patterns are looked up in the\n[commands]\nsection of the launcher\u2019s .INI file.\nThis may be used to handle certain commands in a way that makes sense for your\nsystem. 
The name of the command must be a single argument (no spaces in the\nshebang executable), and the value substituted is the full path to the\nexecutable (additional arguments specified in the .INI will be quoted as part\nof the filename).\n[commands]\n/bin/xpython=C:\\Program Files\\XPython\\python.exe\nAny commands not found in the .INI file are treated as Windows executable paths that are absolute or relative to the directory containing the script file. This is a convenience for Windows-only scripts, such as those generated by an installer, since the behavior is not compatible with Unix-style shells. These paths may be quoted, and may include multiple arguments, after which the path to the script and any additional arguments will be appended.\n4.12.3. Arguments in shebang lines\u00b6\nThe shebang lines can also specify additional options to be passed to the Python interpreter. For example, if you have a shebang line:\n#! /usr/bin/python -v\nThen Python will be started with the -v\noption\n4.12.4. Customization\u00b6\n4.12.4.1. Customization via INI files\u00b6\nTwo .ini files will be searched by the launcher - py.ini\nin the current\nuser\u2019s application data directory (%LOCALAPPDATA%\nor $env:LocalAppData\n)\nand py.ini\nin the same directory as the launcher. The same .ini files are\nused for both the \u2018console\u2019 version of the launcher (i.e. py.exe) and for the\n\u2018windows\u2019 version (i.e. pyw.exe).\nCustomization specified in the \u201capplication directory\u201d will have precedence over the one next to the executable, so a user, who may not have write access to the .ini file next to the launcher, can override commands in that global .ini file.\n4.12.4.2. Customizing default Python versions\u00b6\nIn some cases, a version qualifier can be included in a command to dictate which version of Python will be used by the command. 
A version qualifier starts with a major version number and can optionally be followed by a period (\u2018.\u2019) and a minor version specifier. Furthermore it is possible to specify if a 32 or 64 bit implementation shall be requested by adding \u201c-32\u201d or \u201c-64\u201d.\nFor example, a shebang line of #!python\nhas no version qualifier, while\n#!python3\nhas a version qualifier which specifies only a major version.\nIf no version qualifiers are found in a command, the environment\nvariable PY_PYTHON\ncan be set to specify the default version\nqualifier. If it is not set, the default is \u201c3\u201d. The variable can\nspecify any value that may be passed on the command line, such as \u201c3\u201d,\n\u201c3.7\u201d, \u201c3.7-32\u201d or \u201c3.7-64\u201d. (Note that the \u201c-64\u201d option is only\navailable with the launcher included with Python 3.7 or newer.)\nIf no minor version qualifiers are found, the environment variable\nPY_PYTHON{major}\n(where {major}\nis the current major version qualifier\nas determined above) can be set to specify the full version. If no such option\nis found, the launcher will enumerate the installed Python versions and use\nthe latest minor release found for the major version, which is likely,\nalthough not guaranteed, to be the most recently installed version in that\nfamily.\nOn 64-bit Windows with both 32-bit and 64-bit implementations of the same (major.minor) Python version installed, the 64-bit version will always be preferred. This will be true for both 32-bit and 64-bit implementations of the launcher - a 32-bit launcher will prefer to execute a 64-bit Python installation of the specified version if available. This is so the behavior of the launcher can be predicted knowing only what versions are installed on the PC and without regard to the order in which they were installed (i.e., without knowing whether a 32 or 64-bit version of Python and corresponding launcher was installed last). 
As noted above, an optional \u201c-32\u201d or \u201c-64\u201d suffix can be used on a version specifier to change this behaviour.\nExamples:\nIf no relevant options are set, the commands\npython\nand python2\nwill use the latest Python 2.x version installed and the command python3\nwill use the latest Python 3.x installed.\nThe command\npython3.7\nwill not consult any options at all as the versions are fully specified.\nIf\nPY_PYTHON=3\n, the commands python\nand python3\nwill both use the latest installed Python 3 version.\nIf\nPY_PYTHON=3.7-32\n, the command python\nwill use the 32-bit implementation of 3.7 whereas the command python3\nwill use the latest installed Python (PY_PYTHON was not considered at all as a major version was specified.)\nIf\nPY_PYTHON=3\nand PY_PYTHON3=3.7\n, the commands python\nand python3\nwill both use specifically 3.7.\nIn addition to environment variables, the same settings can be configured\nin the .INI file used by the launcher. The section in the INI file is\ncalled [defaults]\nand the key name will be the same as the\nenvironment variables without the leading PY_\nprefix (and note that\nthe key names in the INI file are case insensitive.) The contents of\nan environment variable will override things specified in the INI file.\nFor example:\nSetting\nPY_PYTHON=3.7\nis equivalent to the INI file containing:\n[defaults]\npython=3.7\nSetting\nPY_PYTHON=3\nand PY_PYTHON3=3.7\nis equivalent to the INI file containing:\n[defaults]\npython=3\npython3=3.7\n4.12.5. Diagnostics\u00b6\nIf an environment variable PYLAUNCHER_DEBUG\nis set (to any value), the\nlauncher will print diagnostic information to stderr (i.e. to the console).\nWhile this information manages to be simultaneously verbose and terse, it\nshould allow you to see what versions of Python were located, why a\nparticular version was chosen and the exact command-line used to execute the\ntarget Python. It is primarily intended for testing and debugging.\n4.12.6. 
Dry run\u00b6\nIf an environment variable PYLAUNCHER_DRYRUN\nis set (to any value),\nthe launcher will output the command it would have run, but will not actually\nlaunch Python. This may be useful for tools that want to use the launcher to\ndetect and then launch Python directly. Note that the command written to\nstandard output is always encoded using UTF-8, and may not render correctly in\nthe console.\n4.12.7. Install on demand\u00b6\nIf an environment variable PYLAUNCHER_ALLOW_INSTALL\nis set (to any\nvalue), and the requested Python version is not installed but is available on\nthe Microsoft Store, the launcher will attempt to install it. This may require\nuser interaction to complete, and you may need to run the command again.\nAn additional PYLAUNCHER_ALWAYS_INSTALL\nvariable causes the launcher\nto always try to install Python, even if it is detected. This is mainly intended\nfor testing (and should be used with PYLAUNCHER_DRYRUN\n).\n4.12.8. Return codes\u00b6\nThe following exit codes may be returned by the Python launcher. Unfortunately, there is no way to distinguish these from the exit code of Python itself.\nThe names of codes are as used in the sources, and are only for reference. There is no way to access or resolve them apart from reading this page. Entries are listed in alphabetical order of names.\nName |\nValue |\nDescription |\n|---|---|---|\nRC_BAD_VENV_CFG |\n107 |\nA pyvenv.cfg was found but is corrupt. |\nRC_CREATE_PROCESS |\n101 |\nFailed to launch Python. |\nRC_INSTALLING |\n111 |\nAn install was started, but the command will need to be re-run after it completes. |\nRC_INTERNAL_ERROR |\n109 |\nUnexpected error. Please report a bug. |\nRC_NO_COMMANDLINE |\n108 |\nUnable to obtain command line from the operating system. |\nRC_NO_PYTHON |\n103 |\nUnable to locate the requested version. 
|\nRC_NO_VENV_CFG |\n106 |\nA pyvenv.cfg was required but not found. |", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 17959}
{"url": "https://docs.python.org/3/using/unix.html", "title": "Using Python on Unix platforms", "content": "2. Using Python on Unix platforms\u00b6\n2.1. Getting and installing the latest version of Python\u00b6\n2.1.1. On Linux\u00b6\nPython comes preinstalled on most Linux distributions, and is available as a package on all others. However, there are certain features you might want to use that are not available in your distro\u2019s package. You can compile the latest version of Python from source.\nIn the event that the latest version of Python doesn\u2019t come preinstalled and isn\u2019t in the repositories either, you can make packages for your own distro. Have a look at the following links:\nSee also\n- https://www.debian.org/doc/manuals/maint-guide/first.en.html\nfor Debian users\n- https://en.opensuse.org/Portal:Packaging\nfor OpenSuse users\n- https://docs.fedoraproject.org/en-US/package-maintainers/Packaging_Tutorial_GNU_Hello/\nfor Fedora users\n- https://slackbook.org/html/package-management-making-packages.html\nfor Slackware users\n2.1.1.1. Installing IDLE\u00b6\nIn some cases, IDLE might not be included in your Python installation.\nFor Debian and Ubuntu users:\nsudo apt update\nsudo apt install idle\nFor Fedora, RHEL, and CentOS users:\nsudo dnf install python3-idle\nFor SUSE and OpenSUSE users:\nsudo zypper install python3-idle\nFor Alpine Linux users:\nsudo apk add python3-idle\n2.1.2. On FreeBSD and OpenBSD\u00b6\nFreeBSD users, to add the package use:\npkg install python3\nOpenBSD users, to add the package use:\npkg_add -r python\npkg_add ftp://ftp.openbsd.org/pub/OpenBSD/4.2/packages//python-.tgz\nFor example, i386 users get the 2.5.1 version of Python using:\npkg_add ftp://ftp.openbsd.org/pub/OpenBSD/4.2/packages/i386/python-2.5.1p2.tgz\n2.2. 
Building Python\u00b6\nSee also\nIf you want to contribute to CPython, refer to the devguide, which includes build instructions and other tips on setting up your environment.\nIf you want to compile CPython yourself, the first thing you should do is get the source. You can download either the latest release\u2019s source or grab a fresh clone. You will also need to install the build requirements.\nThe build process consists of the usual commands:\n./configure\nmake\nmake install\nConfiguration options and caveats for specific Unix platforms are extensively documented in the README.rst file in the root of the Python source tree.\nWarning\nmake install\ncan overwrite or masquerade the python3\nbinary.\nmake altinstall\nis therefore recommended instead of make install\nsince it only installs exec_prefix/bin/pythonversion\n.\n2.4. Miscellaneous\u00b6\nTo easily use Python scripts on Unix, you need to make them executable, e.g. with\n$ chmod +x script\nand put an appropriate Shebang line at the top of the script. A good choice is usually\n#!/usr/bin/env python3\nwhich searches for the Python interpreter in the whole PATH\n. However,\nsome Unices may not have the env command, so you may need to hardcode\n/usr/bin/python3\nas the interpreter path.\nTo use shell commands in your Python scripts, look at the subprocess\nmodule.\n2.5. Custom OpenSSL\u00b6\nTo use your vendor\u2019s OpenSSL configuration and system trust store, locate the directory with the\nopenssl.cnf\nfile or symlink in /etc\n. On most distributions the file is either in /etc/ssl\nor /etc/pki/tls\n. The directory should also contain a cert.pem\nfile and/or a certs\ndirectory.\n$ find /etc/ -name openssl.cnf -printf \"%h\\n\" /etc/ssl\nDownload, build, and install OpenSSL. Make sure you use\ninstall_sw\nand not install\n. 
The install_sw\ntarget does not override openssl.cnf\n.\n$ curl -O https://www.openssl.org/source/openssl-VERSION.tar.gz\n$ tar xzf openssl-VERSION\n$ pushd openssl-VERSION\n$ ./config \\\n --prefix=/usr/local/custom-openssl \\\n --libdir=lib \\\n --openssldir=/etc/ssl\n$ make -j1 depend\n$ make -j8\n$ make install_sw\n$ popd\nBuild Python with custom OpenSSL (see the configure\n--with-openssl\nand --with-openssl-rpath\noptions)\n$ pushd python-3.x.x\n$ ./configure -C \\\n --with-openssl=/usr/local/custom-openssl \\\n --with-openssl-rpath=auto \\\n --prefix=/usr/local/python-3.x.x\n$ make -j8\n$ make altinstall\nNote\nPatch releases of OpenSSL have a backwards compatible ABI. You don\u2019t need to recompile Python to update OpenSSL. It\u2019s sufficient to replace the custom OpenSSL installation with a newer version.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1037}
It is affectionately known as \u201cthe walrus operator\u201d due to\nits resemblance to the eyes and tusks of a walrus.\nIn this example, the assignment expression helps avoid calling\nlen()\ntwice:\nif (n := len(a)) > 10:\nprint(f\"List is too long ({n} elements, expected <= 10)\")\nA similar benefit arises during regular expression matching where match objects are needed twice, once to test whether a match occurred and another to extract a subgroup:\ndiscount = 0.0\nif (mo := re.search(r'(\\d+)% discount', advertisement)):\ndiscount = float(mo.group(1)) / 100.0\nThe operator is also useful with while-loops that compute a value to test loop termination and then need that same value again in the body of the loop:\n# Loop over fixed length blocks\nwhile (block := f.read(256)) != '':\nprocess(block)\nAnother motivating use case arises in list comprehensions where a value computed in a filtering condition is also needed in the expression body:\n[clean_name.title() for name in names\nif (clean_name := normalize('NFC', name)) in allowed_names]\nTry to limit use of the walrus operator to clean cases that reduce complexity and improve readability.\nSee PEP 572 for a full description.\n(Contributed by Emily Morehouse in bpo-35224.)\nPositional-only parameters\u00b6\nThere is a new function parameter syntax /\nto indicate that some\nfunction parameters must be specified positionally and cannot be used as\nkeyword arguments. 
This is the same notation shown by help()\nfor C\nfunctions annotated with Larry Hastings\u2019\nArgument Clinic tool.\nIn the following example, parameters a and b are positional-only, while c or d can be positional or keyword, and e or f are required to be keywords:\ndef f(a, b, /, c, d, *, e, f):\nprint(a, b, c, d, e, f)\nThe following is a valid call:\nf(10, 20, 30, d=40, e=50, f=60)\nHowever, these are invalid calls:\nf(10, b=20, c=30, d=40, e=50, f=60) # b cannot be a keyword argument\nf(10, 20, 30, 40, 50, f=60) # e must be a keyword argument\nOne use case for this notation is that it allows pure Python functions\nto fully emulate behaviors of existing C coded functions. For example,\nthe built-in divmod()\nfunction does not accept keyword arguments:\ndef divmod(a, b, /):\n\"Emulate the built in divmod() function\"\nreturn (a // b, a % b)\nAnother use case is to preclude keyword arguments when the parameter\nname is not helpful. For example, the builtin len()\nfunction has\nthe signature len(obj, /)\n. This precludes awkward calls such as:\nlen(obj='hello') # The \"obj\" keyword argument impairs readability\nA further benefit of marking a parameter as positional-only is that it\nallows the parameter name to be changed in the future without risk of\nbreaking client code. For example, in the statistics\nmodule, the\nparameter name dist may be changed in the future. This was made\npossible with the following function specification:\ndef quantiles(dist, /, *, n=4, method='exclusive'):\n...\nSince the parameters to the left of /\nare not exposed as possible\nkeywords, the parameter names remain available for use in **kwargs\n:\n>>> def f(a, b, /, **kwargs):\n... print(a, b, kwargs)\n...\n>>> f(10, 20, a=1, b=2, c=3) # a and b are used in two ways\n10 20 {'a': 1, 'b': 2, 'c': 3}\nThis greatly simplifies the implementation of functions and methods\nthat need to accept arbitrary keyword arguments. 
For example, here\nis an excerpt from code in the collections\nmodule:\nclass Counter(dict):\ndef __init__(self, iterable=None, /, **kwds):\n# Note \"iterable\" is a possible keyword argument\nSee PEP 570 for a full description.\n(Contributed by Pablo Galindo in bpo-36540.)\nParallel filesystem cache for compiled bytecode files\u00b6\nThe new PYTHONPYCACHEPREFIX\nsetting (also available as\n-X\npycache_prefix\n) configures the implicit bytecode\ncache to use a separate parallel filesystem tree, rather than\nthe default __pycache__\nsubdirectories within each source\ndirectory.\nThe location of the cache is reported in sys.pycache_prefix\n(None\nindicates the default location in __pycache__\nsubdirectories).\n(Contributed by Carl Meyer in bpo-33499.)\nDebug build uses the same ABI as release build\u00b6\nPython now uses the same ABI whether it\u2019s built in release or debug mode. On Unix, when Python is built in debug mode, it is now possible to load C extensions built in release mode and C extensions built using the stable ABI.\nRelease builds and debug builds are now ABI compatible: defining the\nPy_DEBUG\nmacro no longer implies the Py_TRACE_REFS\nmacro, which\nintroduces the only ABI incompatibility. The Py_TRACE_REFS\nmacro, which\nadds the sys.getobjects()\nfunction and the PYTHONDUMPREFS\nenvironment variable, can be set using the new ./configure\n--with-trace-refs\nbuild option.\n(Contributed by Victor Stinner in bpo-36465.)\nOn Unix, C extensions are no longer linked to libpython except on Android and Cygwin. It is now possible for a statically linked Python to load a C extension built using a shared library Python. (Contributed by Victor Stinner in bpo-21536.)\nOn Unix, when Python is built in debug mode, import now also looks for C extensions compiled in release mode and for C extensions compiled with the stable ABI. 
(Contributed by Victor Stinner in bpo-36722.)\nTo embed Python into an application, a new --embed\noption must be passed to\npython3-config --libs --embed\nto get -lpython3.8\n(link the application\nto libpython). To support both 3.8 and older, try python3-config --libs\n--embed\nfirst and fallback to python3-config --libs\n(without --embed\n)\nif the previous command fails.\nAdd a pkg-config python-3.8-embed\nmodule to embed Python into an\napplication: pkg-config python-3.8-embed --libs\nincludes -lpython3.8\n.\nTo support both 3.8 and older, try pkg-config python-X.Y-embed --libs\nfirst\nand fallback to pkg-config python-X.Y --libs\n(without --embed\n) if the\nprevious command fails (replace X.Y\nwith the Python version).\nOn the other hand, pkg-config python3.8 --libs\nno longer contains\n-lpython3.8\n. C extensions must not be linked to libpython (except on\nAndroid and Cygwin, whose cases are handled by the script);\nthis change is backward incompatible on purpose.\n(Contributed by Victor Stinner in bpo-36721.)\nf-strings support =\nfor self-documenting expressions and debugging\u00b6\nAdded an =\nspecifier to f-strings. An f-string such as\nf'{expr=}'\nwill expand to the text of the expression, an equal sign,\nthen the representation of the evaluated expression. For example:\n>>> user = 'eric_idle'\n>>> member_since = date(1975, 7, 31)\n>>> f'{user=} {member_since=}'\n\"user='eric_idle' member_since=datetime.date(1975, 7, 31)\"\nThe usual f-string format specifiers allow more control over how the result of the expression is displayed:\n>>> delta = date.today() - member_since\n>>> f'{user=!s} {delta.days=:,d}'\n'user=eric_idle delta.days=16,075'\nThe =\nspecifier will display the whole expression so that\ncalculations can be shown:\n>>> print(f'{theta=} {cos(radians(theta))=:.3f}')\ntheta=30 cos(radians(theta))=0.866\n(Contributed by Eric V. 
Smith and Larry Hastings in bpo-36817.)\nPEP 578: Python Runtime Audit Hooks\u00b6\nThe PEP adds an Audit Hook and Verified Open Hook. Both are available from Python and native code, allowing applications and frameworks written in pure Python code to take advantage of extra notifications, while also allowing embedders or system administrators to deploy builds of Python where auditing is always enabled.\nSee PEP 578 for full details.\nPEP 587: Python Initialization Configuration\u00b6\nThe PEP 587 adds a new C API to configure the Python Initialization providing finer control on the whole configuration and better error reporting.\nNew structures:\nNew functions:\nThis PEP also adds _PyRuntimeState.preconfig\n(PyPreConfig\ntype)\nand PyInterpreterState.config\n(PyConfig\ntype) fields to these\ninternal structures. PyInterpreterState.config\nbecomes the new\nreference configuration, replacing global configuration variables and\nother private variables.\nSee Python Initialization Configuration for the documentation.\nSee PEP 587 for a full description.\n(Contributed by Victor Stinner in bpo-36763.)\nPEP 590: Vectorcall: a fast calling protocol for CPython\u00b6\nThe Vectorcall Protocol is added to the Python/C API. It is meant to formalize existing optimizations which were already done for various classes. Any static type implementing a callable can use this protocol.\nThis is currently provisional. 
The aim is to make it fully public in Python 3.9.\nSee PEP 590 for a full description.\n(Contributed by Jeroen Demeyer, Mark Shannon and Petr Viktorin in bpo-36974.)\nPickle protocol 5 with out-of-band data buffers\u00b6\nWhen pickle is used to transfer large data between Python processes in order to take advantage of multi-core or multi-machine processing, it is important to optimize the transfer by reducing memory copies, and possibly by applying custom techniques such as data-dependent compression.\nThe pickle protocol 5 introduces support for out-of-band buffers where PEP 3118-compatible data can be transmitted separately from the main pickle stream, at the discretion of the communication layer.\nSee PEP 574 for a full description.\n(Contributed by Antoine Pitrou in bpo-36785.)\nOther Language Changes\u00b6\nA continue statement was illegal in the finally clause due to a problem with the implementation. In Python 3.8 this restriction was lifted. (Contributed by Serhiy Storchaka in bpo-32489.)\nThe bool, int, and fractions.Fraction types now have an as_integer_ratio() method like that found in float and decimal.Decimal. This minor API extension makes it possible to write numerator, denominator = x.as_integer_ratio() and have it work across multiple numeric types. (Contributed by Lisa Roach in bpo-33073 and Raymond Hettinger in bpo-37819.)\nConstructors of int, float and complex will now use the __index__() special method, if available and the corresponding method __int__(), __float__() or __complex__() is not available. (Contributed by Serhiy Storchaka in bpo-20092.)\nAdded support for \\N{name} escapes in regular expressions:\n>>> notice = 'Copyright \u00a9 2019'\n>>> copyright_year_pattern = re.compile(r'\\N{copyright sign}\\s*(\\d{4})')\n>>> int(copyright_year_pattern.search(notice).group(1))\n2019\n(Contributed by Jonathan Eunice and Serhiy Storchaka in bpo-30688.)\nDict and dictviews are now iterable in reversed insertion order using reversed(). 
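As a quick illustration of the new reversed() support (a sketch; the dict contents here are invented for the example):

```python
# Dicts preserve insertion order; since Python 3.8, reversed()
# can walk a dict and its views in reverse insertion order.
d = dict(a=1, b=2, c=3)
print(list(reversed(d)))           # keys, most recently inserted first
print(list(reversed(d.values())))
print(list(reversed(d.items())))
```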
(Contributed by R\u00e9mi Lapeyre in bpo-33462.)\nThe syntax allowed for keyword names in function calls was further restricted. In particular, f((keyword)=arg) is no longer allowed. It was never intended to permit more than a bare name on the left-hand side of a keyword argument assignment term. (Contributed by Benjamin Peterson in bpo-34641.)\nGeneralized iterable unpacking in yield and return statements no longer requires enclosing parentheses. This brings the yield and return syntax into better agreement with normal assignment syntax:\n>>> def parse(family):\n...     lastname, *members = family.split()\n...     return lastname.upper(), *members\n...\n>>> parse('simpsons homer marge bart lisa maggie')\n('SIMPSONS', 'homer', 'marge', 'bart', 'lisa', 'maggie')\n(Contributed by David Cuthbert and Jordan Chapman in bpo-32117.)\nWhen a comma is missed in code such as [(10, 20) (30, 40)], the compiler displays a SyntaxWarning with a helpful suggestion. This improves on just having a TypeError indicating that the first tuple was not callable. (Contributed by Serhiy Storchaka in bpo-15248.)\nArithmetic operations between subclasses of datetime.date or datetime.datetime and datetime.timedelta objects now return an instance of the subclass, rather than the base class. This also affects the return type of operations whose implementation (directly or indirectly) uses datetime.timedelta arithmetic, such as astimezone(). (Contributed by Paul Ganssle in bpo-32417.)\nWhen the Python interpreter is interrupted by Ctrl-C (SIGINT) and the resulting KeyboardInterrupt exception is not caught, the Python process now exits via a SIGINT signal or with the correct exit code such that the calling process can detect that it died due to a Ctrl-C. Shells on POSIX and Windows use this to properly terminate scripts in interactive sessions. (Contributed by Google via Gregory P. Smith in bpo-1054041.)\nSome advanced styles of programming require updating the types.CodeType object for an existing function. 
Since code objects are immutable, a new code object needs to be created, one that is modeled on the existing code object. With 19 parameters, this was somewhat tedious. Now, the new replace() method makes it possible to create a clone with a few altered parameters.\nHere\u2019s an example that alters the statistics.mean() function to prevent the data parameter from being used as a keyword argument:\n>>> from statistics import mean\n>>> mean(data=[10, 20, 90])\n40\n>>> mean.__code__ = mean.__code__.replace(co_posonlyargcount=1)\n>>> mean(data=[10, 20, 90])\nTraceback (most recent call last):\n  ...\nTypeError: mean() got some positional-only arguments passed as keyword arguments: 'data'\n(Contributed by Victor Stinner in bpo-37032.)\nFor integers, the three-argument form of the pow() function now permits the exponent to be negative in the case where the base is relatively prime to the modulus. It then computes a modular inverse to the base when the exponent is -1, and a suitable power of that inverse for other negative exponents. For example, to compute the modular multiplicative inverse of 38 modulo 137, write:\n>>> pow(38, -1, 137)\n119\n>>> 119 * 38 % 137\n1\nModular inverses arise in the solution of linear Diophantine equations. For example, to find integer solutions for 4258x + 147y = 369, first rewrite as 4258x \u2261 369 (mod 147), then solve:\n>>> x = 369 * pow(4258, -1, 147) % 147\n>>> y = (4258 * x - 369) // -147\n>>> 4258 * x + 147 * y\n369\n(Contributed by Mark Dickinson in bpo-36027.)\nDict comprehensions have been synced-up with dict literals so that the key is computed first and the value second:\n>>> # Dict comprehension\n>>> cast = {input('role? '): input('actor? ') for i in range(2)}\nrole? King Arthur\nactor? Chapman\nrole? Black Knight\nactor? Cleese\n>>> # Dict literal\n>>> cast = {input('role? '): input('actor? ')}\nrole? Sir Robin\nactor? 
Eric Idle\nThe guaranteed execution order is helpful with assignment expressions because variables assigned in the key expression will be available in the value expression:\n>>> names = ['Martin von L\u00f6wis', '\u0141ukasz Langa', 'Walter D\u00f6rwald']\n>>> {(n := normalize('NFC', name)).casefold(): n for name in names}\n{'martin von l\u00f6wis': 'Martin von L\u00f6wis', '\u0142ukasz langa': '\u0141ukasz Langa', 'walter d\u00f6rwald': 'Walter D\u00f6rwald'}\n(Contributed by J\u00f6rn Heissler in bpo-35224.)\nThe object.__reduce__() method can now return a tuple from two to six elements long. Formerly, five was the limit. The new, optional sixth element is a callable with a (obj, state) signature. This allows direct control over the state-updating behavior of a specific object. If not None, this callable will have priority over the object\u2019s __setstate__() method. (Contributed by Pierre Glaser and Olivier Grisel in bpo-35900.)\nNew Modules\u00b6\nThe new importlib.metadata module provides (provisional) support for reading metadata from third-party packages. For example, it can extract an installed package\u2019s version number, list of entry points, and more:\n>>> # Note following example requires that the popular \"requests\"\n>>> # package has been installed.\n>>>\n>>> from importlib.metadata import version, requires, files\n>>> version('requests')\n'2.22.0'\n>>> list(requires('requests'))\n['chardet (<3.1.0,>=3.0.2)']\n>>> list(files('requests'))[:5]\n[PackagePath('requests-2.22.0.dist-info/INSTALLER'),\n PackagePath('requests-2.22.0.dist-info/LICENSE'),\n PackagePath('requests-2.22.0.dist-info/METADATA'),\n PackagePath('requests-2.22.0.dist-info/RECORD'),\n PackagePath('requests-2.22.0.dist-info/WHEEL')]\n(Contributed by Barry Warsaw and Jason R. Coombs in bpo-34632.)\nImproved Modules\u00b6\nast\u00b6\nAST nodes now have end_lineno and end_col_offset attributes, which give the precise location of the end of the node. 
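A small sketch of these end-position attributes (the parsed snippet is invented for the example):

```python
import ast

source = "x = 1\ny = x + 41"
tree = ast.parse(source)
assign = tree.body[1]  # the statement on line 2: "y = x + 41"
print(assign.lineno, assign.col_offset)          # where the node starts
print(assign.end_lineno, assign.end_col_offset)  # where the node ends
print(ast.get_source_segment(source, assign))    # recover the exact text
```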
(This only applies to nodes that have lineno and col_offset attributes.)\nNew function ast.get_source_segment() returns the source code for a specific AST node.\n(Contributed by Ivan Levkivskyi in bpo-33416.)\nThe ast.parse() function has some new flags:\ntype_comments=True causes it to return the text of PEP 484 and PEP 526 type comments associated with certain AST nodes;\nmode='func_type' can be used to parse PEP 484 \u201csignature type comments\u201d (returned for function definition AST nodes);\nfeature_version=(3, N) allows specifying an earlier Python 3 version. For example, feature_version=(3, 4) will treat async and await as non-reserved words.\n(Contributed by Guido van Rossum in bpo-35766.)\nasyncio\u00b6\nasyncio.run() has graduated from the provisional to stable API. This function can be used to execute a coroutine and return the result while automatically managing the event loop. For example:\nimport asyncio\nasync def main():\n    await asyncio.sleep(0)\n    return 42\nasyncio.run(main())\nThis is roughly equivalent to:\nimport asyncio\nasync def main():\n    await asyncio.sleep(0)\n    return 42\nloop = asyncio.new_event_loop()\nasyncio.set_event_loop(loop)\ntry:\n    loop.run_until_complete(main())\nfinally:\n    asyncio.set_event_loop(None)\n    loop.close()\nThe actual implementation is significantly more complex. Thus, asyncio.run() should be the preferred way of running asyncio programs.\n(Contributed by Yury Selivanov in bpo-32314.)\nRunning python -m asyncio launches a natively async REPL. This allows rapid experimentation with code that has a top-level await. 
There is no\nlonger a need to directly call asyncio.run()\nwhich would spawn a new event\nloop on every invocation:\n$ python -m asyncio\nasyncio REPL 3.8.0\nUse \"await\" directly instead of \"asyncio.run()\".\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import asyncio\n>>> await asyncio.sleep(10, result='hello')\nhello\n(Contributed by Yury Selivanov in bpo-37028.)\nThe exception asyncio.CancelledError\nnow inherits from\nBaseException\nrather than Exception\nand no longer inherits\nfrom concurrent.futures.CancelledError\n.\n(Contributed by Yury Selivanov in bpo-32528.)\nOn Windows, the default event loop is now ProactorEventLoop\n.\n(Contributed by Victor Stinner in bpo-34687.)\nProactorEventLoop\nnow also supports UDP.\n(Contributed by Adam Meily and Andrew Svetlov in bpo-29883.)\nProactorEventLoop\ncan now be interrupted by\nKeyboardInterrupt\n(\u201cCTRL+C\u201d).\n(Contributed by Vladimir Matveev in bpo-23057.)\nAdded asyncio.Task.get_coro()\nfor getting the wrapped coroutine\nwithin an asyncio.Task\n.\n(Contributed by Alex Gr\u00f6nholm in bpo-36999.)\nAsyncio tasks can now be named, either by passing the name\nkeyword\nargument to asyncio.create_task()\nor\nthe create_task()\nevent loop method, or by\ncalling the set_name()\nmethod on the task object. The\ntask name is visible in the repr()\noutput of asyncio.Task\nand\ncan also be retrieved using the get_name()\nmethod.\n(Contributed by Alex Gr\u00f6nholm in bpo-34270.)\nAdded support for\nHappy Eyeballs to\nasyncio.loop.create_connection()\n. To specify the behavior, two new\nparameters have been added: happy_eyeballs_delay and interleave. The Happy\nEyeballs algorithm improves responsiveness in applications that support IPv4\nand IPv6 by attempting to simultaneously connect using both.\n(Contributed by twisteroid ambassador in bpo-33530.)\nbuiltins\u00b6\nThe compile()\nbuilt-in has been improved to accept the\nast.PyCF_ALLOW_TOP_LEVEL_AWAIT\nflag. 
With this new flag passed, compile() will allow top-level await, async for and async with constructs that are usually considered invalid syntax. An asynchronous code object marked with the CO_COROUTINE flag may then be returned.\n(Contributed by Matthias Bussonnier in bpo-34616.)\ncollections\u00b6\nThe _asdict() method for collections.namedtuple() now returns a dict instead of a collections.OrderedDict. This works because regular dicts have guaranteed ordering since Python 3.7. If the extra features of OrderedDict are required, the suggested remediation is to cast the result to the desired type: OrderedDict(nt._asdict()).\n(Contributed by Raymond Hettinger in bpo-35864.)\ncProfile\u00b6\nThe cProfile.Profile class can now be used as a context manager. Profile a block of code by running:\nimport cProfile\nwith cProfile.Profile() as profiler:\n    # code to be profiled\n    ...\n(Contributed by Scott Sanderson in bpo-29235.)\ncsv\u00b6\nThe csv.DictReader now returns instances of dict instead of a collections.OrderedDict. The tool is now faster and uses less memory while still preserving the field order.\n(Contributed by Michael Selik in bpo-34003.)\ncurses\u00b6\nAdded a new variable holding structured version information for the underlying ncurses library: ncurses_version.\n(Contributed by Serhiy Storchaka in bpo-31680.)\nctypes\u00b6\nOn Windows, CDLL and subclasses now accept a winmode parameter to specify flags for the underlying LoadLibraryEx call. 
The default flags are set to only load DLL dependencies from trusted locations, including the path where the DLL is stored (if a full or partial path is used to load the initial DLL) and paths added by add_dll_directory().\n(Contributed by Steve Dower in bpo-36085.)\ndatetime\u00b6\nAdded new alternate constructors datetime.date.fromisocalendar() and datetime.datetime.fromisocalendar(), which construct date and datetime objects respectively from ISO year, week number, and weekday; these are the inverse of each class\u2019s isocalendar method.\n(Contributed by Paul Ganssle in bpo-36004.)\nfunctools\u00b6\nfunctools.lru_cache() can now be used as a straight decorator rather than as a function returning a decorator. So both of these are now supported:\n@lru_cache\ndef f(x):\n    ...\n@lru_cache(maxsize=256)\ndef f(x):\n    ...\n(Contributed by Raymond Hettinger in bpo-36772.)\nAdded a new functools.cached_property() decorator, for computed properties cached for the life of the instance.\nimport functools\nimport statistics\nclass Dataset:\n    def __init__(self, sequence_of_numbers):\n        self.data = sequence_of_numbers\n    @functools.cached_property\n    def variance(self):\n        return statistics.variance(self.data)\n(Contributed by Carl Meyer in bpo-21145.)\nAdded a new functools.singledispatchmethod() decorator that converts methods into generic functions using single dispatch:\nfrom functools import singledispatchmethod\nfrom contextlib import suppress\nclass TaskManager:\n    def __init__(self, tasks):\n        self.tasks = list(tasks)\n    @singledispatchmethod\n    def discard(self, value):\n        with suppress(ValueError):\n            self.tasks.remove(value)\n    @discard.register(list)\n    def _(self, tasks):\n        targets = set(tasks)\n        self.tasks = [x for x in self.tasks if x not in targets]\n(Contributed by Ethan Smith in bpo-32380.)\ngc\u00b6\nget_objects() can now receive an optional generation parameter indicating a generation to get objects from.\n(Contributed by Pablo Galindo in 
bpo-36016.)\ngettext\u00b6\nAdded pgettext()\nand its variants.\n(Contributed by Franz Glasner, \u00c9ric Araujo, and Cheryl Sabella in bpo-2504.)\ngzip\u00b6\nAdded the mtime parameter to gzip.compress()\nfor reproducible output.\n(Contributed by Guo Ci Teo in bpo-34898.)\nA BadGzipFile\nexception is now raised instead of OSError\nfor certain types of invalid or corrupt gzip files.\n(Contributed by Filip Gruszczy\u0144ski, Michele Orr\u00f9, and Zackery Spytz in\nbpo-6584.)\nIDLE and idlelib\u00b6\nOutput over N lines (50 by default) is squeezed down to a button. N can be changed in the PyShell section of the General page of the Settings dialog. Fewer, but possibly extra long, lines can be squeezed by right clicking on the output. Squeezed output can be expanded in place by double-clicking the button or into the clipboard or a separate window by right-clicking the button. (Contributed by Tal Einat in bpo-1529353.)\nAdd \u201cRun Customized\u201d to the Run menu to run a module with customized settings. Any command line arguments entered are added to sys.argv. They also re-appear in the box for the next customized run. One can also suppress the normal Shell main module restart. (Contributed by Cheryl Sabella, Terry Jan Reedy, and others in bpo-5680 and bpo-37627.)\nAdded optional line numbers for IDLE editor windows. Windows open without line numbers unless set otherwise in the General tab of the configuration dialog. Line numbers for an existing window are shown and hidden in the Options menu. (Contributed by Tal Einat and Saimadhav Heblikar in bpo-17535.)\nOS native encoding is now used for converting between Python strings and Tcl objects. This allows IDLE to work with emoji and other non-BMP characters. These characters can be displayed or copied and pasted to or from the clipboard. Converting strings from Tcl to Python and back now never fails. 
(Many people worked on this for eight years but the problem was finally solved by Serhiy Storchaka in bpo-13153.)\nNew in 3.8.1:\nAdd option to toggle cursor blink off. (Contributed by Zackery Spytz in bpo-4603.)\nEscape key now closes IDLE completion windows. (Contributed by Johnny Najera in bpo-38944.)\nThe changes above have been backported to 3.7 maintenance releases.\nAdd keywords to module name completion list. (Contributed by Terry J. Reedy in bpo-37765.)\ninspect\u00b6\nThe inspect.getdoc() function can now find docstrings for __slots__ if that attribute is a dict where the values are docstrings. This provides documentation options similar to what we already have for property(), classmethod(), and staticmethod():\nclass AudioClip:\n    __slots__ = {'bit_rate': 'expressed in kilohertz to one decimal place',\n                 'duration': 'in seconds, rounded up to an integer'}\n    def __init__(self, bit_rate, duration):\n        self.bit_rate = round(bit_rate / 1000.0, 1)\n        self.duration = ceil(duration)\n(Contributed by Raymond Hettinger in bpo-36326.)\nio\u00b6\nIn development mode (-X dev) and in debug build, the io.IOBase finalizer now logs the exception if the close() method fails. 
The exception is ignored silently by default in release build.\n(Contributed by Victor Stinner in bpo-18748.)\nitertools\u00b6\nThe itertools.accumulate() function added an optional initial keyword argument to specify an initial value:\n>>> from itertools import accumulate\n>>> list(accumulate([10, 5, 30, 15], initial=1000))\n[1000, 1010, 1015, 1045, 1060]\n(Contributed by Lisa Roach in bpo-34659.)\njson.tool\u00b6\nAdd option --json-lines to parse every input line as a separate JSON object.\n(Contributed by Weipeng Hong in bpo-31553.)\nlogging\u00b6\nAdded a force keyword argument to logging.basicConfig(). When set to true, any existing handlers attached to the root logger are removed and closed before carrying out the configuration specified by the other arguments.\nThis solves a long-standing problem. Once a logger or basicConfig() had been called, subsequent calls to basicConfig() were silently ignored. This made it difficult to update, experiment with, or teach the various logging configuration options using the interactive prompt or a Jupyter notebook.\n(Suggested by Raymond Hettinger, implemented by Donghee Na, and reviewed by Vinay Sajip in bpo-33897.)\nmath\u00b6\nAdded new function math.dist() for computing Euclidean distance between two points. 
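For instance (a minimal sketch using a 3-4-5 right triangle):

```python
import math

# Euclidean distance between two 2-D points, new in 3.8
print(math.dist((1.0, 1.0), (4.0, 5.0)))  # legs of 3 and 4 give distance 5.0
```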
(Contributed by Raymond Hettinger in bpo-33089.)\nExpanded the math.hypot()\nfunction to handle multiple dimensions.\nFormerly, it only supported the 2-D case.\n(Contributed by Raymond Hettinger in bpo-33089.)\nAdded new function, math.prod()\n, as analogous function to sum()\nthat returns the product of a \u2018start\u2019 value (default: 1) times an iterable of\nnumbers:\n>>> prior = 0.8\n>>> likelihoods = [0.625, 0.84, 0.30]\n>>> math.prod(likelihoods, start=prior)\n0.126\n(Contributed by Pablo Galindo in bpo-35606.)\nAdded two new combinatoric functions math.perm()\nand math.comb()\n:\n>>> math.perm(10, 3) # Permutations of 10 things taken 3 at a time\n720\n>>> math.comb(10, 3) # Combinations of 10 things taken 3 at a time\n120\n(Contributed by Yash Aggarwal, Keller Fuchs, Serhiy Storchaka, and Raymond Hettinger in bpo-37128, bpo-37178, and bpo-35431.)\nAdded a new function math.isqrt()\nfor computing accurate integer square\nroots without conversion to floating point. The new function supports\narbitrarily large integers. It is faster than floor(sqrt(n))\nbut slower\nthan math.sqrt()\n:\n>>> r = 650320427\n>>> s = r ** 2\n>>> isqrt(s - 1) # correct\n650320426\n>>> floor(sqrt(s - 1)) # incorrect\n650320427\n(Contributed by Mark Dickinson in bpo-36887.)\nThe function math.factorial()\nno longer accepts arguments that are not\nint-like. (Contributed by Pablo Galindo in bpo-33083.)\nmmap\u00b6\nThe mmap.mmap\nclass now has an madvise()\nmethod to\naccess the madvise()\nsystem call.\n(Contributed by Zackery Spytz in bpo-32941.)\nmultiprocessing\u00b6\nAdded new multiprocessing.shared_memory\nmodule.\n(Contributed by Davin Potts in bpo-35813.)\nOn macOS, the spawn start method is now used by default. 
(Contributed by Victor Stinner in bpo-33725.)\nos\u00b6\nAdded new function add_dll_directory()\non Windows for providing\nadditional search paths for native dependencies when importing extension\nmodules or loading DLLs using ctypes\n.\n(Contributed by Steve Dower in bpo-36085.)\nA new os.memfd_create()\nfunction was added to wrap the\nmemfd_create()\nsyscall.\n(Contributed by Zackery Spytz and Christian Heimes in bpo-26836.)\nOn Windows, much of the manual logic for handling reparse points (including\nsymlinks and directory junctions) has been delegated to the operating system.\nSpecifically, os.stat()\nwill now traverse anything supported by the\noperating system, while os.lstat()\nwill only open reparse points that\nidentify as \u201cname surrogates\u201d while others are opened as for os.stat()\n.\nIn all cases, os.stat_result.st_mode\nwill only have S_IFLNK\nset for\nsymbolic links and not other kinds of reparse points. To identify other kinds\nof reparse point, check the new os.stat_result.st_reparse_tag\nattribute.\nOn Windows, os.readlink()\nis now able to read directory junctions. 
Note\nthat islink()\nwill return False\nfor directory junctions,\nand so code that checks islink\nfirst will continue to treat junctions as\ndirectories, while code that handles errors from os.readlink()\nmay now\ntreat junctions as links.\n(Contributed by Steve Dower in bpo-37834.)\nos.path\u00b6\nos.path\nfunctions that return a boolean result like\nexists()\n, lexists()\n, isdir()\n,\nisfile()\n, islink()\n, and ismount()\nnow return False\ninstead of raising ValueError\nor its subclasses\nUnicodeEncodeError\nand UnicodeDecodeError\nfor paths that contain\ncharacters or bytes unrepresentable at the OS level.\n(Contributed by Serhiy Storchaka in bpo-33721.)\nexpanduser()\non Windows now prefers the USERPROFILE\nenvironment variable and does not use HOME\n, which is not normally set\nfor regular user accounts.\n(Contributed by Anthony Sottile in bpo-36264.)\nisdir()\non Windows no longer returns True\nfor a link to a\nnon-existent directory.\nrealpath()\non Windows now resolves reparse points, including\nsymlinks and directory junctions.\n(Contributed by Steve Dower in bpo-37834.)\npathlib\u00b6\npathlib.Path\nmethods that return a boolean result like\nexists()\n, is_dir()\n,\nis_file()\n, is_mount()\n,\nis_symlink()\n, is_block_device()\n,\nis_char_device()\n, is_fifo()\n,\nis_socket()\nnow return False\ninstead of raising\nValueError\nor its subclass UnicodeEncodeError\nfor paths that\ncontain characters unrepresentable at the OS level.\n(Contributed by Serhiy Storchaka in bpo-33721.)\nAdded pathlib.Path.link_to()\nwhich creates a hard link pointing\nto a path.\n(Contributed by Joannah Nanjekye in bpo-26978)\nNote that link_to\nwas deprecated in 3.10 and removed in 3.12 in\nfavor of a hardlink_to\nmethod added in 3.10 which matches the\nsemantics of the existing symlink_to\nmethod.\npickle\u00b6\npickle\nextensions subclassing the C-optimized Pickler\ncan now override the pickling logic of functions and classes by defining the\nspecial 
reducer_override()\nmethod.\n(Contributed by Pierre Glaser and Olivier Grisel in bpo-35900.)\nplistlib\u00b6\nAdded new plistlib.UID\nand enabled support for reading and writing\nNSKeyedArchiver-encoded binary plists.\n(Contributed by Jon Janzen in bpo-26707.)\npprint\u00b6\nThe pprint\nmodule added a sort_dicts parameter to several functions.\nBy default, those functions continue to sort dictionaries before rendering or\nprinting. However, if sort_dicts is set to false, the dictionaries retain\nthe order that keys were inserted. This can be useful for comparison to JSON\ninputs during debugging.\nIn addition, there is a convenience new function, pprint.pp()\nthat is\nlike pprint.pprint()\nbut with sort_dicts defaulting to False\n:\n>>> from pprint import pprint, pp\n>>> d = dict(source='input.txt', operation='filter', destination='output.txt')\n>>> pp(d, width=40) # Original order\n{'source': 'input.txt',\n'operation': 'filter',\n'destination': 'output.txt'}\n>>> pprint(d, width=40) # Keys sorted alphabetically\n{'destination': 'output.txt',\n'operation': 'filter',\n'source': 'input.txt'}\n(Contributed by R\u00e9mi Lapeyre in bpo-30670.)\npy_compile\u00b6\npy_compile.compile()\nnow supports silent mode.\n(Contributed by Joannah Nanjekye in bpo-22640.)\nshlex\u00b6\nThe new shlex.join()\nfunction acts as the inverse of shlex.split()\n.\n(Contributed by Bo Bayles in bpo-32102.)\nshutil\u00b6\nshutil.copytree()\nnow accepts a new dirs_exist_ok\nkeyword argument.\n(Contributed by Josh Bronson in bpo-20849.)\nshutil.make_archive()\nnow defaults to the modern pax (POSIX.1-2001)\nformat for new archives to improve portability and standards conformance,\ninherited from the corresponding change to the tarfile\nmodule.\n(Contributed by C.A.M. 
Gerlach in bpo-30661.)\nshutil.rmtree() on Windows now removes directory junctions without recursively removing their contents first.\n(Contributed by Steve Dower in bpo-37834.)\nsocket\u00b6\nAdded create_server() and has_dualstack_ipv6() convenience functions to automate the necessary tasks usually involved when creating a server socket, including accepting both IPv4 and IPv6 connections on the same socket. (Contributed by Giampaolo Rodol\u00e0 in bpo-17561.)\nThe socket.if_nameindex(), socket.if_nametoindex(), and socket.if_indextoname() functions have been implemented on Windows.\n(Contributed by Zackery Spytz in bpo-37007.)\nssl\u00b6\nAdded post_handshake_auth to enable and verify_client_post_handshake() to initiate TLS 1.3 post-handshake authentication.\n(Contributed by Christian Heimes in bpo-34670.)\nstatistics\u00b6\nAdded statistics.fmean() as a faster, floating-point variant of statistics.mean(). (Contributed by Raymond Hettinger and Steven D\u2019Aprano in bpo-35904.)\nAdded statistics.geometric_mean(). (Contributed by Raymond Hettinger in bpo-27181.)\nAdded statistics.multimode() that returns a list of the most common values. (Contributed by Raymond Hettinger in bpo-35892.)\nAdded statistics.quantiles() that divides data or a distribution into equiprobable intervals (e.g. 
quartiles, deciles, or percentiles).\n(Contributed by Raymond Hettinger in bpo-36546.)\nAdded statistics.NormalDist\n, a tool for creating\nand manipulating normal distributions of a random variable.\n(Contributed by Raymond Hettinger in bpo-36018.)\n>>> temperature_feb = NormalDist.from_samples([4, 12, -3, 2, 7, 14])\n>>> temperature_feb.mean\n6.0\n>>> temperature_feb.stdev\n6.356099432828281\n>>> temperature_feb.cdf(3) # Chance of being under 3 degrees\n0.3184678262814532\n>>> # Relative chance of being 7 degrees versus 10 degrees\n>>> temperature_feb.pdf(7) / temperature_feb.pdf(10)\n1.2039930378537762\n>>> el_ni\u00f1o = NormalDist(4, 2.5)\n>>> temperature_feb += el_ni\u00f1o # Add in a climate effect\n>>> temperature_feb\nNormalDist(mu=10.0, sigma=6.830080526611674)\n>>> temperature_feb * (9/5) + 32 # Convert to Fahrenheit\nNormalDist(mu=50.0, sigma=12.294144947901014)\n>>> temperature_feb.samples(3) # Generate random samples\n[7.672102882379219, 12.000027119750287, 4.647488369766392]\nsys\u00b6\nAdd new sys.unraisablehook()\nfunction which can be overridden to control\nhow \u201cunraisable exceptions\u201d are handled. It is called when an exception has\noccurred but there is no way for Python to handle it. For example, when a\ndestructor raises an exception or during garbage collection\n(gc.collect()\n).\n(Contributed by Victor Stinner in bpo-36829.)\ntarfile\u00b6\nThe tarfile\nmodule now defaults to the modern pax (POSIX.1-2001)\nformat for new archives, instead of the previous GNU-specific one.\nThis improves cross-platform portability with a consistent encoding (UTF-8)\nin a standardized and extensible format, and offers several other benefits.\n(Contributed by C.A.M. Gerlach in bpo-36268.)\nthreading\u00b6\nAdd a new threading.excepthook()\nfunction which handles uncaught\nthreading.Thread.run()\nexception. 
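Overriding the hook can be sketched as follows (the hook and list names are invented for the example):

```python
import threading

seen = []

def my_hook(args):
    # args carries exc_type, exc_value, exc_traceback and the thread object
    seen.append(args.exc_type.__name__)

threading.excepthook = my_hook

t = threading.Thread(target=lambda: 1 / 0)
t.start()
t.join()
print(seen)
```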
It can be overridden to control how uncaught threading.Thread.run() exceptions are handled.\n(Contributed by Victor Stinner in bpo-1230540.)\nAdd a new threading.get_native_id() function and a native_id attribute to the threading.Thread class. These return the native integral Thread ID of the current thread assigned by the kernel. This feature is only available on certain platforms, see get_native_id for more information.\n(Contributed by Jake Tesler in bpo-36084.)\ntokenize\u00b6\nThe tokenize module now implicitly emits a NEWLINE token when provided with input that does not have a trailing new line. This behavior now matches what the C tokenizer does internally.\n(Contributed by Ammar Askar in bpo-33899.)\ntkinter\u00b6\nAdded methods selection_from(), selection_present(), selection_range() and selection_to() in the tkinter.Spinbox class.\n(Contributed by Juliette Monsel in bpo-34829.)\nAdded method moveto() in the tkinter.Canvas class.\n(Contributed by Juliette Monsel in bpo-23831.)\nThe tkinter.PhotoImage class now has transparency_get() and transparency_set() methods. (Contributed by Zackery Spytz in bpo-25451.)\ntime\u00b6\nAdded new clock CLOCK_UPTIME_RAW for macOS 10.12.\n(Contributed by Joannah Nanjekye in bpo-35702.)\ntyping\u00b6\nThe typing module incorporates several new features:\nA dictionary type with per-key types. See PEP 589 and typing.TypedDict. TypedDict uses only string keys. By default, every key is required to be present. Specify \u201ctotal=False\u201d to allow keys to be optional:\nclass Location(TypedDict, total=False):\n    lat_long: tuple\n    grid_square: str\n    xy_coordinate: tuple\nLiteral types. See PEP 586 and typing.Literal. Literal types indicate that a parameter or return value is constrained to one or more specific literal values:\ndef get_status(port: int) -> Literal['connected', 'disconnected']:\n    ...\n\u201cFinal\u201d variables, functions, methods and classes. 
See PEP 591,\ntyping.Final\nandtyping.final()\n. The final qualifier instructs a static type checker to restrict subclassing, overriding, or reassignment:pi: Final[float] = 3.1415926536\nProtocol definitions. See PEP 544,\ntyping.Protocol\nandtyping.runtime_checkable()\n. Simple ABCs liketyping.SupportsInt\nare nowProtocol\nsubclasses.New protocol class\ntyping.SupportsIndex\n.New functions\ntyping.get_origin()\nandtyping.get_args()\n.\nunicodedata\u00b6\nThe unicodedata\nmodule has been upgraded to use the Unicode 12.1.0 release.\nNew function is_normalized()\ncan be used to verify a string\nis in a specific normal form, often much faster than by actually normalizing\nthe string. (Contributed by Max Belanger, David Euresti, and Greg Price in\nbpo-32285 and bpo-37966).\nunittest\u00b6\nAdded AsyncMock\nto support an asynchronous version of\nMock\n. Appropriate new assert functions for testing\nhave been added as well.\n(Contributed by Lisa Roach in bpo-26467).\nAdded addModuleCleanup()\nand\naddClassCleanup()\nto unittest to support\ncleanups for setUpModule()\nand\nsetUpClass()\n.\n(Contributed by Lisa Roach in bpo-24412.)\nSeveral mock assert functions now also print a list of actual calls upon failure. 
(Contributed by Petter Strandmark in bpo-35047.)\nunittest\nmodule gained support for coroutines to be used as test cases\nwith unittest.IsolatedAsyncioTestCase\n.\n(Contributed by Andrew Svetlov in bpo-32972.)\nExample:\nimport unittest\nclass TestRequest(unittest.IsolatedAsyncioTestCase):\nasync def asyncSetUp(self):\nself.connection = await AsyncConnection()\nasync def test_get(self):\nresponse = await self.connection.get(\"https://example.com\")\nself.assertEqual(response.status_code, 200)\nasync def asyncTearDown(self):\nawait self.connection.close()\nif __name__ == \"__main__\":\nunittest.main()\nvenv\u00b6\nvenv\nnow includes an Activate.ps1\nscript on all platforms for\nactivating virtual environments under PowerShell Core 6.1.\n(Contributed by Brett Cannon in bpo-32718.)\nweakref\u00b6\nThe proxy objects returned by weakref.proxy()\nnow support the matrix\nmultiplication operators @\nand @=\nin addition to the other\nnumeric operators. (Contributed by Mark Dickinson in bpo-36669.)\nxml\u00b6\nAs mitigation against DTD and external entity retrieval, the\nxml.dom.minidom\nand xml.sax\nmodules no longer process\nexternal entities by default.\n(Contributed by Christian Heimes in bpo-17239.)\nThe .find*()\nmethods in the xml.etree.ElementTree\nmodule\nsupport wildcard searches like {*}tag\nwhich ignores the namespace\nand {namespace}*\nwhich returns all tags in the given namespace.\n(Contributed by Stefan Behnel in bpo-28238.)\nThe xml.etree.ElementTree\nmodule provides a new function\ncanonicalize()\nthat implements C14N 2.0.\n(Contributed by Stefan Behnel in bpo-13611.)\nThe target object of xml.etree.ElementTree.XMLParser\ncan\nreceive namespace declaration events through the new callback methods\nstart_ns()\nand end_ns()\n. 
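A parser target implementing the new namespace callbacks might look like the following sketch (the NSTarget class name and events list are illustrative):

```python
from xml.etree.ElementTree import XMLParser

# Record namespace declaration events via the new start_ns()/end_ns()
# target callbacks (illustrative sketch; NSTarget is our own name).
class NSTarget:
    def __init__(self):
        self.events = []

    def start_ns(self, prefix, uri):
        self.events.append(("start-ns", prefix, uri))

    def end_ns(self, prefix):
        self.events.append(("end-ns", prefix))

    def close(self):
        return self.events

parser = XMLParser(target=NSTarget())
parser.feed('<root xmlns:svg="http://www.w3.org/2000/svg"/>')
events = parser.close()
```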
Additionally, the\nxml.etree.ElementTree.TreeBuilder\ntarget can be configured\nto process events about comments and processing instructions to include\nthem in the generated tree.\n(Contributed by Stefan Behnel in bpo-36676 and bpo-36673.)\nxmlrpc\u00b6\nxmlrpc.client.ServerProxy\nnow supports an optional headers keyword\nargument for a sequence of HTTP headers to be sent with each request. Among\nother things, this makes it possible to upgrade from default basic\nauthentication to faster session authentication.\n(Contributed by C\u00e9dric Krier in bpo-35153.)\nOptimizations\u00b6\nThe\nsubprocess\nmodule can now use theos.posix_spawn()\nfunction in some cases for better performance. Currently, it is only used on macOS and Linux (using glibc 2.24 or newer) if all these conditions are met:close_fds is false;\npreexec_fn, pass_fds, cwd and start_new_session parameters are not set;\nthe executable path contains a directory.\n(Contributed by Joannah Nanjekye and Victor Stinner in bpo-35537.)\nshutil.copyfile()\n,shutil.copy()\n,shutil.copy2()\n,shutil.copytree()\nandshutil.move()\nuse platform-specific \u201cfast-copy\u201d syscalls on Linux and macOS in order to copy the file more efficiently. \u201cfast-copy\u201d means that the copying operation occurs within the kernel, avoiding the use of userspace buffers in Python as in \u201coutfd.write(infd.read())\n\u201d. On Windowsshutil.copyfile()\nuses a bigger default buffer size (1 MiB instead of 16 KiB) and amemoryview()\n-based variant ofshutil.copyfileobj()\nis used. The speedup for copying a 512 MiB file within the same partition is about +26% on Linux, +50% on macOS and +40% on Windows. Also, much less CPU cycles are consumed. See Platform-dependent efficient copy operations section. (Contributed by Giampaolo Rodol\u00e0 in bpo-33671.)shutil.copytree()\nusesos.scandir()\nfunction and all copy functions depending from it use cachedos.stat()\nvalues. 
The speedup for copying a directory with 8000 files is around +9% on Linux, +20% on Windows and +30% on a Windows SMB share. Also the number ofos.stat()\nsyscalls is reduced by 38% makingshutil.copytree()\nespecially faster on network filesystems. (Contributed by Giampaolo Rodol\u00e0 in bpo-33695.)The default protocol in the\npickle\nmodule is now Protocol 4, first introduced in Python 3.4. It offers better performance and smaller size compared to Protocol 3 available since Python 3.0.Removed one\nPy_ssize_t\nmember fromPyGC_Head\n. All GC tracked objects (e.g. tuple, list, dict) size is reduced 4 or 8 bytes. (Contributed by Inada Naoki in bpo-33597.)uuid.UUID\nnow uses__slots__\nto reduce its memory footprint. (Contributed by Wouter Bolsterlee and Tal Einat in bpo-30977)Improved performance of\noperator.itemgetter()\nby 33%. Optimized argument handling and added a fast path for the common case of a single non-negative integer index into a tuple (which is the typical use case in the standard library). (Contributed by Raymond Hettinger in bpo-35664.)Sped-up field lookups in\ncollections.namedtuple()\n. They are now more than two times faster, making them the fastest form of instance variable lookup in Python. (Contributed by Raymond Hettinger, Pablo Galindo, and Joe Jevnik, Serhiy Storchaka in bpo-32492.)The\nlist\nconstructor does not overallocate the internal item buffer if the input iterable has a known length (the input implements__len__\n). This makes the created list 12% smaller on average. (Contributed by Raymond Hettinger and Pablo Galindo in bpo-33234.)Doubled the speed of class variable writes. When a non-dunder attribute was updated, there was an unnecessary call to update slots. (Contributed by Stefan Behnel, Pablo Galindo Salgado, Raymond Hettinger, Neil Schemenauer, and Serhiy Storchaka in bpo-36012.)\nReduced an overhead of converting arguments passed to many builtin functions and methods. 
This sped up calling some simple builtin functions and methods up to 20\u201350%. (Contributed by Serhiy Storchaka in bpo-23867, bpo-35582 and bpo-36127.)\nThe LOAD_GLOBAL\ninstruction now uses a new \u201cper opcode cache\u201d mechanism. It is about 40% faster now. (Contributed by Yury Selivanov and Inada Naoki in bpo-26219.)\nBuild and C API Changes\u00b6\nDefault\nsys.abiflags\nbecame an empty string: them\nflag for pymalloc became useless (builds with and without pymalloc are ABI compatible) and so has been removed. (Contributed by Victor Stinner in bpo-36707.)Example of changes:\nOnly the\npython3.8\nprogram is installed; thepython3.8m\nprogram is gone.Only the\npython3.8-config\nscript is installed; thepython3.8m-config\nscript is gone.The\nm\nflag has been removed from the suffix of dynamic library filenames: extension modules in the standard library as well as those produced and installed by third-party packages, like those downloaded from PyPI. On Linux, for example, the Python 3.7 suffix.cpython-37m-x86_64-linux-gnu.so\nbecame.cpython-38-x86_64-linux-gnu.so\nin Python 3.8.\nThe header files have been reorganized to better separate the different kinds of APIs:\nInclude/*.h\nshould be the portable public stable C API.Include/cpython/*.h\nshould be the unstable C API specific to CPython; public API, with some private API prefixed by_Py\nor_PY\n.Include/internal/*.h\nis the private internal C API very specific to CPython. This API comes with no backward compatibility warranty and should not be used outside CPython. It is only exposed for very specific needs like debuggers and profilers, which have to access CPython internals without calling functions. 
This API is now installed bymake install\n.\n(Contributed by Victor Stinner in bpo-35134 and bpo-35081, work initiated by Eric Snow in Python 3.7.)\nSome macros have been converted to static inline functions: parameter types and the return type are well defined, they don\u2019t have issues specific to macros, and variables have a local scope. Examples:\nPyObject_INIT\n,PyObject_INIT_VAR\nPrivate functions:\n_PyObject_GC_TRACK()\n,_PyObject_GC_UNTRACK()\n,_Py_Dealloc()\n(Contributed by Victor Stinner in bpo-35059.)\nThe\nPyByteArray_Init()\nandPyByteArray_Fini()\nfunctions have been removed. They did nothing since Python 2.7.4 and Python 3.2.0, were excluded from the limited API (stable ABI), and were not documented. (Contributed by Victor Stinner in bpo-35713.)The result of\nPyExceptionClass_Name()\nis now of typeconst char *\nrather thanchar *\n. (Contributed by Serhiy Storchaka in bpo-33818.)The duality of\nModules/Setup.dist\nandModules/Setup\nhas been removed. Previously, when updating the CPython source tree, one had to manually copyModules/Setup.dist\n(inside the source tree) toModules/Setup\n(inside the build tree) in order to reflect any changes upstream. This was of small benefit to packagers at the expense of a frequent annoyance to developers following CPython development, as forgetting to copy the file could produce build failures.Now the build system always reads from\nModules/Setup\ninside the source tree. People who want to customize that file are encouraged to maintain their changes in a git fork of CPython or as patch files, as they would do for any other change to the source tree.(Contributed by Antoine Pitrou in bpo-32430.)\nFunctions that convert a Python number to a C integer, such as\nPyLong_AsLong()\n, and argument parsing functions such asPyArg_ParseTuple()\nwith integer-converting format units such as'i'\n, will now use the__index__()\nspecial method instead of__int__()\n, if available. 
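The effect of preferring __index__() can be seen from pure Python with a type that defines both methods (an illustrative sketch; the Exact class is hypothetical):

```python
import operator

# A type whose __int__() is lossy but whose __index__() is exact.
# Functions that parse C integers now prefer __index__() when present.
class Exact:
    def __int__(self):
        return 0      # lossy, would be wrong to use silently

    def __index__(self):
        return 65     # exact integer value

print(chr(Exact()))             # chr() parses a C int, so this is 'A'
print(operator.index(Exact()))  # 65
```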
The deprecation warning will be emitted for objects with the__int__()\nmethod but without the__index__()\nmethod (likeDecimal\nandFraction\n).PyNumber_Check()\nwill now return1\nfor objects implementing__index__()\n.PyNumber_Long()\n,PyNumber_Float()\nandPyFloat_AsDouble()\nalso now use the__index__()\nmethod if available. (Contributed by Serhiy Storchaka in bpo-36048 and bpo-20092.)Heap-allocated type objects will now increase their reference count in\nPyObject_Init()\n(and its parallel macroPyObject_INIT\n) instead of inPyType_GenericAlloc()\n. Types that modify instance allocation or deallocation may need to be adjusted. (Contributed by Eddie Elizondo in bpo-35810.)The new function\nPyCode_NewWithPosOnlyArgs()\nallows to create code objects likePyCode_New()\n, but with an extra posonlyargcount parameter for indicating the number of positional-only arguments. (Contributed by Pablo Galindo in bpo-37221.)Py_SetPath()\nnow setssys.executable\nto the program full path (Py_GetProgramFullPath()\n) rather than to the program name (Py_GetProgramName()\n). (Contributed by Victor Stinner in bpo-38234.)\nDeprecated\u00b6\nThe distutils\nbdist_wininst\ncommand is now deprecated, usebdist_wheel\n(wheel packages) instead. (Contributed by Victor Stinner in bpo-37481.)Deprecated methods\ngetchildren()\nandgetiterator()\nin theElementTree\nmodule now emit aDeprecationWarning\ninstead ofPendingDeprecationWarning\n. They will be removed in Python 3.9. (Contributed by Serhiy Storchaka in bpo-29209.)Passing an object that is not an instance of\nconcurrent.futures.ThreadPoolExecutor\ntoloop.set_default_executor()\nis deprecated and will be prohibited in Python 3.9. (Contributed by Elvis Pranskevichus in bpo-34075.)The\n__getitem__()\nmethods ofxml.dom.pulldom.DOMEventStream\n,wsgiref.util.FileWrapper\nandfileinput.FileInput\nhave been deprecated.Implementations of these methods have been ignoring their index parameter, and returning the next item instead. 
(Contributed by Berker Peksag in bpo-9372.)\nThe\ntyping.NamedTuple\nclass has deprecated the_field_types\nattribute in favor of the__annotations__\nattribute which has the same information. (Contributed by Raymond Hettinger in bpo-36320.)ast\nclassesNum\n,Str\n,Bytes\n,NameConstant\nandEllipsis\nare considered deprecated and will be removed in future Python versions.Constant\nshould be used instead. (Contributed by Serhiy Storchaka in bpo-32892.)ast.NodeVisitor\nmethodsvisit_Num()\n,visit_Str()\n,visit_Bytes()\n,visit_NameConstant()\nandvisit_Ellipsis()\nare deprecated now and will not be called in future Python versions. Add thevisit_Constant()\nmethod to handle all constant nodes. (Contributed by Serhiy Storchaka in bpo-36917.)The\n@asyncio.coroutine\ndecorator is deprecated and will be removed in version 3.10. Instead of@asyncio.coroutine\n, useasync def\ninstead. (Contributed by Andrew Svetlov in bpo-36921.)In\nasyncio\n, the explicit passing of a loop argument has been deprecated and will be removed in version 3.10 for the following:asyncio.sleep()\n,asyncio.gather()\n,asyncio.shield()\n,asyncio.wait_for()\n,asyncio.wait()\n,asyncio.as_completed()\n,asyncio.Task\n,asyncio.Lock\n,asyncio.Event\n,asyncio.Condition\n,asyncio.Semaphore\n,asyncio.BoundedSemaphore\n,asyncio.Queue\n,asyncio.create_subprocess_exec()\n, andasyncio.create_subprocess_shell()\n.The explicit passing of coroutine objects to\nasyncio.wait()\nhas been deprecated and will be removed in version 3.11. (Contributed by Yury Selivanov in bpo-34790.)The following functions and methods are deprecated in the\ngettext\nmodule:lgettext()\n,ldgettext()\n,lngettext()\nandldngettext()\n. They return encoded bytes, and it\u2019s possible that you will get unexpected Unicode-related exceptions if there are encoding problems with the translated strings. It\u2019s much better to use alternatives which return Unicode strings in Python 3. 
These functions have been broken for a long time.Function\nbind_textdomain_codeset()\n, methodsNullTranslations.output_charset()\nandNullTranslations.set_output_charset()\n, and the codeset parameter of functionstranslation()\nandinstall()\nare also deprecated, since they are only used for thel*gettext()\nfunctions. (Contributed by Serhiy Storchaka in bpo-33710.)The\nisAlive()\nmethod ofthreading.Thread\nhas been deprecated. (Contributed by Donghee Na in bpo-35283.)Many builtin and extension functions that take integer arguments will now emit a deprecation warning for\nDecimal\ns,Fraction\ns and any other objects that can be converted to integers only with a loss (e.g. that have the__int__()\nmethod but do not have the__index__()\nmethod). In future version they will be errors. (Contributed by Serhiy Storchaka in bpo-36048.)Deprecated passing the following arguments as keyword arguments:\nfunc in\nfunctools.partialmethod()\n,weakref.finalize()\n,profile.Profile.runcall()\n,cProfile.Profile.runcall()\n,bdb.Bdb.runcall()\n,trace.Trace.runfunc()\nandcurses.wrapper()\n.function in\nunittest.TestCase.addCleanup()\n.fn in the\nsubmit()\nmethod ofconcurrent.futures.ThreadPoolExecutor\nandconcurrent.futures.ProcessPoolExecutor\n.callback in\ncontextlib.ExitStack.callback()\n,contextlib.AsyncExitStack.callback()\nandcontextlib.AsyncExitStack.push_async_callback()\n.c and typeid in the\ncreate()\nmethod ofmultiprocessing.managers.Server\nandmultiprocessing.managers.SharedMemoryServer\n.obj in\nweakref.finalize()\n.\nIn future releases of Python, they will be positional-only. (Contributed by Serhiy Storchaka in bpo-36492.)\nAPI and Feature Removals\u00b6\nThe following features and APIs have been removed from Python 3.8:\nStarting with Python 3.3, importing ABCs from\ncollections\nwas deprecated, and importing should be done fromcollections.abc\n. Being able to import from collections was marked for removal in 3.8, but has been delayed to 3.9. 
(See gh-81134.)The\nmacpath\nmodule, deprecated in Python 3.7, has been removed. (Contributed by Victor Stinner in bpo-35471.)The function\nplatform.popen()\nhas been removed, after having been deprecated since Python 3.3: useos.popen()\ninstead. (Contributed by Victor Stinner in bpo-35345.)The function\ntime.clock()\nhas been removed, after having been deprecated since Python 3.3: usetime.perf_counter()\nortime.process_time()\ninstead, depending on your requirements, to have well-defined behavior. (Contributed by Matthias Bussonnier in bpo-36895.)The\npyvenv\nscript has been removed in favor ofpython3.8 -m venv\nto help eliminate confusion as to what Python interpreter thepyvenv\nscript is tied to. (Contributed by Brett Cannon in bpo-25427.)parse_qs\n,parse_qsl\n, andescape\nare removed from thecgi\nmodule. They are deprecated in Python 3.2 or older. They should be imported from theurllib.parse\nandhtml\nmodules instead.filemode\nfunction is removed from thetarfile\nmodule. It is not documented and deprecated since Python 3.3.The\nXMLParser\nconstructor no longer accepts the html argument. It never had an effect and was deprecated in Python 3.4. All other parameters are now keyword-only. (Contributed by Serhiy Storchaka in bpo-29209.)Removed the\ndoctype()\nmethod ofXMLParser\n. (Contributed by Serhiy Storchaka in bpo-29209.)\u201cunicode_internal\u201d codec is removed. (Contributed by Inada Naoki in bpo-36297.)\nThe\nCache\nandStatement\nobjects of thesqlite3\nmodule are not exposed to the user. (Contributed by Aviv Palivoda in bpo-30262.)The\nbufsize\nkeyword argument offileinput.input()\nandfileinput.FileInput()\nwhich was ignored and deprecated since Python 3.6 has been removed. 
bpo-36952 (Contributed by Matthias Bussonnier.)The functions\nsys.set_coroutine_wrapper()\nandsys.get_coroutine_wrapper()\ndeprecated in Python 3.7 have been removed; bpo-36933 (Contributed by Matthias Bussonnier.)\nPorting to Python 3.8\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in Python behavior\u00b6\nYield expressions (both\nyield\nandyield from\nclauses) are now disallowed in comprehensions and generator expressions (aside from the iterable expression in the leftmostfor\nclause). (Contributed by Serhiy Storchaka in bpo-10544.)The compiler now produces a\nSyntaxWarning\nwhen identity checks (is\nandis not\n) are used with certain types of literals (e.g. strings, numbers). These can often work by accident in CPython, but are not guaranteed by the language spec. The warning advises users to use equality tests (==\nand!=\n) instead. (Contributed by Serhiy Storchaka in bpo-34850.)The CPython interpreter can swallow exceptions in some circumstances. In Python 3.8 this happens in fewer cases. In particular, exceptions raised when getting the attribute from the type dictionary are no longer ignored. (Contributed by Serhiy Storchaka in bpo-35459.)\nRemoved\n__str__\nimplementations from the builtin typesbool\n,int\n,float\n,complex\nand a few classes from the standard library. They now inherit__str__()\nfromobject\n. As a result, defining the__repr__()\nmethod in a subclass of these classes will affect their string representation. (Contributed by Serhiy Storchaka in bpo-36793.)On AIX,\nsys.platform\ndoesn\u2019t contain the major version anymore. It is always'aix'\n, instead of'aix3'\n..'aix7'\n. Since older Python versions include the version number, it is recommended to always usesys.platform.startswith('aix')\n. (Contributed by M. 
Felt in bpo-36588.)PyEval_AcquireLock()\nandPyEval_AcquireThread()\nnow terminate the current thread if called while the interpreter is finalizing, making them consistent withPyEval_RestoreThread()\n,Py_END_ALLOW_THREADS()\n, andPyGILState_Ensure()\n. If this behavior is not desired, guard the call by checking_Py_IsFinalizing()\norsys.is_finalizing()\n. (Contributed by Joannah Nanjekye in bpo-36475.)\nChanges in the Python API\u00b6\nThe\nos.getcwdb()\nfunction now uses the UTF-8 encoding on Windows, rather than the ANSI code page: see PEP 529 for the rationale. The function is no longer deprecated on Windows. (Contributed by Victor Stinner in bpo-37412.)subprocess.Popen\ncan now useos.posix_spawn()\nin some cases for better performance. On Windows Subsystem for Linux and QEMU User Emulation, thePopen\nconstructor usingos.posix_spawn()\nno longer raises an exception on errors like \u201cmissing program\u201d. Instead, the child process fails with a non-zeroreturncode\n. (Contributed by Joannah Nanjekye and Victor Stinner in bpo-35537.)The preexec_fn argument of\nsubprocess.Popen\nis no longer compatible with subinterpreters. The use of the parameter in a subinterpreter now raisesRuntimeError\n. (Contributed by Eric Snow in bpo-34651, modified by Christian Heimes in bpo-37951.)The\nimaplib.IMAP4.logout()\nmethod no longer silently ignores arbitrary exceptions. (Contributed by Victor Stinner in bpo-36348.)The function\nplatform.popen()\nhas been removed, after having been deprecated since Python 3.3: useos.popen()\ninstead. (Contributed by Victor Stinner in bpo-35345.)The\nstatistics.mode()\nfunction no longer raises an exception when given multimodal data. Instead, it returns the first mode encountered in the input data. (Contributed by Raymond Hettinger in bpo-35892.)The\nselection()\nmethod of thetkinter.ttk.Treeview\nclass no longer takes arguments. Using it with arguments for changing the selection was deprecated in Python 3.6. 
Use specialized methods likeselection_set()\nfor changing the selection. (Contributed by Serhiy Storchaka in bpo-31508.)The\nwritexml()\n,toxml()\nandtoprettyxml()\nmethods ofxml.dom.minidom\nand thewrite()\nmethod ofxml.etree.ElementTree\nnow preserve the attribute order specified by the user. (Contributed by Diego Rojas and Raymond Hettinger in bpo-34160.)A\ndbm.dumb\ndatabase opened with flags'r'\nis now read-only.dbm.dumb.open()\nwith flags'r'\nand'w'\nno longer creates a database if it does not exist. (Contributed by Serhiy Storchaka in bpo-32749.)The\ndoctype()\nmethod defined in a subclass ofXMLParser\nwill no longer be called and will emit aRuntimeWarning\ninstead of aDeprecationWarning\n. Define thedoctype()\nmethod on a target for handling an XML doctype declaration. (Contributed by Serhiy Storchaka in bpo-29209.)A\nRuntimeError\nis now raised when the custom metaclass doesn\u2019t provide the__classcell__\nentry in the namespace passed totype.__new__\n. ADeprecationWarning\nwas emitted in Python 3.6\u20133.7. (Contributed by Serhiy Storchaka in bpo-23722.)The\ncProfile.Profile\nclass can now be used as a context manager. (Contributed by Scott Sanderson in bpo-29235.)shutil.copyfile()\n,shutil.copy()\n,shutil.copy2()\n,shutil.copytree()\nandshutil.move()\nuse platform-specific \u201cfast-copy\u201d syscalls (see Platform-dependent efficient copy operations section).shutil.copyfile()\ndefault buffer size on Windows was changed from 16 KiB to 1 MiB.The\nPyGC_Head\nstruct has changed completely. All code that touched the struct member should be rewritten. (See bpo-33597.)The\nPyInterpreterState\nstruct has been moved into the \u201cinternal\u201d header files (specifically Include/internal/pycore_pystate.h). An opaquePyInterpreterState\nis still available as part of the public API (and stable ABI). The docs indicate that none of the struct\u2019s fields are public, so we hope no one has been using them. 
However, if you do rely on one or more of those private fields and have no alternative then please open a BPO issue. We\u2019ll work on helping you adjust (possibly including adding accessor functions to the public API). (See bpo-35886.)The\nmmap.flush()\nmethod now returnsNone\non success and raises an exception on error under all platforms. Previously, its behavior was platform-dependent: a nonzero value was returned on success; zero was returned on error under Windows. A zero value was returned on success; an exception was raised on error under Unix. (Contributed by Berker Peksag in bpo-2122.)xml.dom.minidom\nandxml.sax\nmodules no longer process external entities by default. (Contributed by Christian Heimes in bpo-17239.)Deleting a key from a read-only\ndbm\ndatabase (dbm.dumb\n,dbm.gnu\nordbm.ndbm\n) raiseserror\n(dbm.dumb.error\n,dbm.gnu.error\nordbm.ndbm.error\n) instead ofKeyError\n. (Contributed by Xiang Zhang in bpo-33106.)Simplified AST for literals. All constants will be represented as\nast.Constant\ninstances. Instantiating old classesNum\n,Str\n,Bytes\n,NameConstant\nandEllipsis\nwill return an instance ofConstant\n. (Contributed by Serhiy Storchaka in bpo-32892.)expanduser()\non Windows now prefers theUSERPROFILE\nenvironment variable and does not useHOME\n, which is not normally set for regular user accounts. (Contributed by Anthony Sottile in bpo-36264.)The exception\nasyncio.CancelledError\nnow inherits fromBaseException\nrather thanException\nand no longer inherits fromconcurrent.futures.CancelledError\n. (Contributed by Yury Selivanov in bpo-32528.)The function\nasyncio.wait_for()\nnow correctly waits for cancellation when using an instance ofasyncio.Task\n. Previously, upon reaching timeout, it was cancelled and immediately returned. (Contributed by Elvis Pranskevichus in bpo-32751.)The function\nasyncio.BaseTransport.get_extra_info()\nnow returns a safe to use socket object when \u2018socket\u2019 is passed to the name parameter. 
(Contributed by Yury Selivanov in bpo-37027.)asyncio.BufferedProtocol\nhas graduated to the stable API.\nDLL dependencies for extension modules and DLLs loaded with\nctypes\non Windows are now resolved more securely. Only the system paths, the directory containing the DLL or PYD file, and directories added withadd_dll_directory()\nare searched for load-time dependencies. Specifically,PATH\nand the current working directory are no longer used, and modifications to these will no longer have any effect on normal DLL resolution. If your application relies on these mechanisms, you should check foradd_dll_directory()\nand if it exists, use it to add your DLLs directory while loading your library. Note that Windows 7 users will need to ensure that Windows Update KB2533623 has been installed (this is also verified by the installer). (Contributed by Steve Dower in bpo-36085.)The header files and functions related to pgen have been removed after its replacement by a pure Python implementation. (Contributed by Pablo Galindo in bpo-36623.)\ntypes.CodeType\nhas a new parameter in the second position of the constructor (posonlyargcount) to support positional-only arguments defined in PEP 570. The first argument (argcount) now represents the total number of positional arguments (including positional-only arguments). The newreplace()\nmethod oftypes.CodeType\ncan be used to make the code future-proof.The parameter\ndigestmod\nforhmac.new()\nno longer uses the MD5 digest by default.\nChanges in the C API\u00b6\nThe\nPyCompilerFlags\nstructure got a new cf_feature_version field. It should be initialized toPY_MINOR_VERSION\n. The field is ignored by default, and is used if and only ifPyCF_ONLY_AST\nflag is set in cf_flags. (Contributed by Guido van Rossum in bpo-35766.)The\nPyEval_ReInitThreads()\nfunction has been removed from the C API. It should not be called explicitly: usePyOS_AfterFork_Child()\ninstead. 
(Contributed by Victor Stinner in bpo-36728.)On Unix, C extensions are no longer linked to libpython except on Android and Cygwin. When Python is embedded,\nlibpython\nmust not be loaded withRTLD_LOCAL\n, butRTLD_GLOBAL\ninstead. Previously, usingRTLD_LOCAL\n, it was already not possible to load C extensions which were not linked tolibpython\n, like C extensions of the standard library built by the*shared*\nsection ofModules/Setup\n. (Contributed by Victor Stinner in bpo-21536.)Use of\n#\nvariants of formats in parsing or building value (e.g.PyArg_ParseTuple()\n,Py_BuildValue()\n,PyObject_CallFunction()\n, etc.) withoutPY_SSIZE_T_CLEAN\ndefined raisesDeprecationWarning\nnow. It will be removed in 3.10 or 4.0. Read Parsing arguments and building values for detail. (Contributed by Inada Naoki in bpo-36381.)Instances of heap-allocated types (such as those created with\nPyType_FromSpec()\n) hold a reference to their type object. Increasing the reference count of these type objects has been moved fromPyType_GenericAlloc()\nto the more low-level functions,PyObject_Init()\nandPyObject_INIT\n. This makes types created throughPyType_FromSpec()\nbehave like other classes in managed code.Statically allocated types are not affected.\nFor the vast majority of cases, there should be no side effect. However, types that manually increase the reference count after allocating an instance (perhaps to work around the bug) may now become immortal. To avoid this, these classes need to call Py_DECREF on the type object during instance deallocation.\nTo correctly port these types into 3.8, please apply the following changes:\nRemove\nPy_INCREF\non the type object after allocating an instance - if any. 
This may happen after callingPyObject_New\n,PyObject_NewVar\n,PyObject_GC_New()\n,PyObject_GC_NewVar()\n, or any other custom allocator that usesPyObject_Init()\norPyObject_INIT\n.Example:\nstatic foo_struct *\nfoo_new(PyObject *type) {\n    foo_struct *foo = PyObject_GC_New(foo_struct, (PyTypeObject *) type);\n    if (foo == NULL)\n        return NULL;\n#if PY_VERSION_HEX < 0x03080000\n    // Workaround for Python issue 35810; no longer necessary in Python 3.8\n    Py_INCREF(type);\n#endif\n    return foo;\n}\nEnsure that all custom\ntp_dealloc\nfunctions of heap-allocated types decrease the type\u2019s reference count.Example:\nstatic void\nfoo_dealloc(foo_struct *instance) {\n    PyObject *type = Py_TYPE(instance);\n    PyObject_GC_Del(instance);\n#if PY_VERSION_HEX >= 0x03080000\n    // This was not needed before Python 3.8 (Python issue 35810)\n    Py_DECREF(type);\n#endif\n}\n(Contributed by Eddie Elizondo in bpo-35810.)\nThe\nPy_DEPRECATED()\nmacro has been implemented for MSVC. The macro now must be placed before the symbol name.Example:\nPy_DEPRECATED(3.8) PyAPI_FUNC(int) Py_OldFunction(void);\n(Contributed by Zackery Spytz in bpo-33407.)\nThe interpreter no longer pretends to support binary compatibility of extension types across feature releases. A\nPyTypeObject\nexported by a third-party extension module is supposed to have all the slots expected in the current Python version, includingtp_finalize\n(Py_TPFLAGS_HAVE_FINALIZE\nis not checked anymore before readingtp_finalize\n).(Contributed by Antoine Pitrou in bpo-32388.)\nThe functions\nPyNode_AddChild()\nandPyParser_AddToken()\nnow accept two additionalint\narguments, end_lineno and end_col_offset.The\nlibpython38.a\nfile to allow MinGW tools to link directly againstpython38.dll\nis no longer included in the regular Windows distribution. 
If you require this file, it may be generated with thegendef\nanddlltool\ntools, which are part of the MinGW binutils package:gendef - python38.dll > tmp.def dlltool --dllname python38.dll --def tmp.def --output-lib libpython38.a\nThe location of an installed\npythonXY.dll\nwill depend on the installation options and the version and language of Windows. See Using Python on Windows for more information. The resulting library should be placed in the same directory aspythonXY.lib\n, which is generally thelibs\ndirectory under your Python installation.(Contributed by Steve Dower in bpo-37351.)\nCPython bytecode changes\u00b6\nThe interpreter loop has been simplified by moving the logic of unrolling the stack of blocks into the compiler. The compiler emits now explicit instructions for adjusting the stack of values and calling the cleaning-up code for\nbreak\n,continue\nandreturn\n.Removed opcodes\nBREAK_LOOP\n,CONTINUE_LOOP\n,SETUP_LOOP\nandSETUP_EXCEPT\n. Added new opcodesROT_FOUR\n,BEGIN_FINALLY\n,CALL_FINALLY\nandPOP_FINALLY\n. Changed the behavior ofEND_FINALLY\nandWITH_CLEANUP_START\n.(Contributed by Mark Shannon, Antoine Pitrou and Serhiy Storchaka in bpo-17611.)\nAdded new opcode\nEND_ASYNC_FOR\nfor handling exceptions raised when awaiting a next item in anasync for\nloop. (Contributed by Serhiy Storchaka in bpo-33041.)The\nMAP_ADD\nnow expects the value as the first element in the stack and the key as the second element. This change was made so the key is always evaluated before the value in dictionary comprehensions, as proposed by PEP 572. 
(Contributed by J\u00f6rn Heissler in bpo-35224.)\nDemos and Tools\u00b6\nAdded a benchmark script for timing various ways to access variables:\nTools/scripts/var_access_benchmark.py\n.\n(Contributed by Raymond Hettinger in bpo-35884.)\nHere\u2019s a summary of performance improvements since Python 3.3:\nPython version 3.3 3.4 3.5 3.6 3.7 3.8\n-------------- --- --- --- --- --- ---\nVariable and attribute read access:\nread_local 4.0 7.1 7.1 5.4 5.1 3.9\nread_nonlocal 5.3 7.1 8.1 5.8 5.4 4.4\nread_global 13.3 15.5 19.0 14.3 13.6 7.6\nread_builtin 20.0 21.1 21.6 18.5 19.0 7.5\nread_classvar_from_class 20.5 25.6 26.5 20.7 19.5 18.4\nread_classvar_from_instance 18.5 22.8 23.5 18.8 17.1 16.4\nread_instancevar 26.8 32.4 33.1 28.0 26.3 25.4\nread_instancevar_slots 23.7 27.8 31.3 20.8 20.8 20.2\nread_namedtuple 68.5 73.8 57.5 45.0 46.8 18.4\nread_boundmethod 29.8 37.6 37.9 29.6 26.9 27.7\nVariable and attribute write access:\nwrite_local 4.6 8.7 9.3 5.5 5.3 4.3\nwrite_nonlocal 7.3 10.5 11.1 5.6 5.5 4.7\nwrite_global 15.9 19.7 21.2 18.0 18.0 15.8\nwrite_classvar 81.9 92.9 96.0 104.6 102.1 39.2\nwrite_instancevar 36.4 44.6 45.8 40.0 38.9 35.5\nwrite_instancevar_slots 28.7 35.6 36.1 27.3 26.6 25.7\nData structure read access:\nread_list 19.2 24.2 24.5 20.8 20.8 19.0\nread_deque 19.9 24.7 25.5 20.2 20.6 19.8\nread_dict 19.7 24.3 25.7 22.3 23.0 21.0\nread_strdict 17.9 22.6 24.3 19.5 21.2 18.9\nData structure write access:\nwrite_list 21.2 27.1 28.5 22.5 21.6 20.0\nwrite_deque 23.8 28.7 30.1 22.7 21.8 23.5\nwrite_dict 25.9 31.4 33.3 29.3 29.2 24.7\nwrite_strdict 22.9 28.4 29.9 27.5 25.2 23.1\nStack (or queue) operations:\nlist_append_pop 144.2 93.4 112.7 75.4 74.2 50.8\ndeque_append_pop 30.4 43.5 57.0 49.4 49.2 42.5\ndeque_append_popleft 30.8 43.7 57.3 49.7 49.7 42.8\nTiming loop:\nloop_overhead 0.3 0.5 0.6 0.4 0.3 0.3\nThe benchmarks were measured on an Intel\u00ae Core\u2122 i7-4960HQ processor running the macOS 64-bit builds found at python.org. 
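The table's numbers come from `Tools/scripts/var_access_benchmark.py`; the same kind of per-operation measurement can be approximated with the standard `timeit` module. This is only a rough sketch in the spirit of the `read_local` and `read_builtin` rows (the loop count is my own choice, and absolute numbers depend entirely on the machine):

```python
from timeit import timeit

trials = 1_000_000

# Rough per-operation cost in nanoseconds: time a bare name lookup.
read_local = timeit("x", setup="x = 1", number=trials) / trials * 1e9

# A builtin lookup has to miss the globals dict first.
read_builtin = timeit("abs", number=trials) / trials * 1e9

print(f"read_local   ~{read_local:.1f} ns")
print(f"read_builtin ~{read_builtin:.1f} ns")
```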
The benchmark script displays timings in nanoseconds.\nNotable changes in Python 3.8.1\u00b6\nDue to significant security concerns, the reuse_address parameter of\nasyncio.loop.create_datagram_endpoint()\nis no longer supported. This is\nbecause of the behavior of the socket option SO_REUSEADDR\nin UDP. For more\ndetails, see the documentation for loop.create_datagram_endpoint()\n.\n(Contributed by Kyle Stanley, Antoine Pitrou, and Yury Selivanov in\nbpo-37228.)\nNotable changes in Python 3.8.2\u00b6\nFixed a regression with the ignore\ncallback of shutil.copytree()\n.\nThe argument types are now str and List[str] again.\n(Contributed by Manuel Barkhau and Giampaolo Rodola in gh-83571.)\nNotable changes in Python 3.8.3\u00b6\nThe constant values of future flags in the __future__\nmodule\nare updated in order to prevent collision with compiler flags. Previously\nPyCF_ALLOW_TOP_LEVEL_AWAIT\nwas clashing with CO_FUTURE_DIVISION\n.\n(Contributed by Batuhan Taskaya in gh-83743)\nNotable changes in Python 3.8.8\u00b6\nEarlier Python versions allowed using both ;\nand &\nas\nquery parameter separators in urllib.parse.parse_qs()\nand\nurllib.parse.parse_qsl()\n. Due to security concerns, and to conform with\nnewer W3C recommendations, this has been changed to allow only a single\nseparator key, with &\nas the default. This change also affects\ncgi.parse()\nand cgi.parse_multipart()\nas they use the affected\nfunctions internally. For more details, please see their respective\ndocumentation.\n(Contributed by Adam Goldschmidt, Senthil Kumaran and Ken Jin in bpo-42967.)\nNotable changes in Python 3.8.9\u00b6\nA security fix alters the ftplib.FTP\nbehavior to not trust the\nIPv4 address sent from the remote server when setting up a passive data\nchannel. We reuse the ftp server IP address instead. For unusual code\nrequiring the old behavior, set a trust_server_pasv_ipv4_address\nattribute on your FTP instance to True\n. 
(See gh-87451)\nNotable changes in Python 3.8.10\u00b6\nmacOS 11.0 (Big Sur) and Apple Silicon Mac support\u00b6\nAs of 3.8.10, Python now supports building and running on macOS 11\n(Big Sur) and on Apple Silicon Macs (based on the ARM64\narchitecture).\nA new universal build variant, universal2\n, is now available to natively\nsupport both ARM64\nand Intel 64\nin one set of executables.\nNote that support for \u201cweaklinking\u201d, building binaries targeted for newer\nversions of macOS that will also run correctly on older versions by\ntesting at runtime for missing features, is not included in this backport\nfrom Python 3.9; to support a range of macOS versions, continue to target\nfor and build on the oldest version in the range.\n(Originally contributed by Ronald Oussoren and Lawrence D\u2019Anna in gh-85272, with fixes by FX Coudert and Eli Rykoff, and backported to 3.8 by Maxime B\u00e9langer and Ned Deily)\nNotable changes in Python 3.8.10\u00b6\nurllib.parse\u00b6\nThe presence of newline or tab characters in parts of a URL allows for some\nforms of attacks. Following the WHATWG specification that updates RFC 3986,\nASCII newline \\n\n, \\r\nand tab \\t\ncharacters are stripped from the\nURL by the parser in urllib.parse\npreventing such attacks. The removal\ncharacters are controlled by a new module level variable\nurllib.parse._UNSAFE_URL_BYTES_TO_REMOVE\n. (See bpo-43882)\nNotable changes in Python 3.8.12\u00b6\nChanges in the Python API\u00b6\nStarting with Python 3.8.12 the ipaddress\nmodule no longer accepts\nany leading zeros in IPv4 address strings. Leading zeros are ambiguous and\ninterpreted as octal notation by some libraries. 
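The URL sanitisation described for bpo-43882 above is visible from pure Python on any release containing the fix; note that `_UNSAFE_URL_BYTES_TO_REMOVE` is a private module-level variable, so relying on it outside of tests is not recommended:

```python
from urllib.parse import urlsplit, _UNSAFE_URL_BYTES_TO_REMOVE

# Exactly tab, newline and carriage return are stripped.
assert sorted(_UNSAFE_URL_BYTES_TO_REMOVE) == ["\t", "\n", "\r"]

# Newlines and tabs smuggled into a URL are removed before parsing.
parts = urlsplit("https://example.com/pa\tth/to\n/page")
assert parts.path == "/path/to/page"
```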
For example, the legacy\nfunction socket.inet_aton()\ntreats leading zeros as octal notation.\nThe glibc implementation of the modern inet_pton()\ndoes not accept\nany leading zeros.\n(Originally contributed by Christian Heimes in bpo-36384, and backported to 3.8 by Achraf Merzouki.)\nNotable security feature in 3.8.14\u00b6\nConverting between int\nand str\nin bases other than 2\n(binary), 4, 8 (octal), 16 (hexadecimal), or 32 such as base 10 (decimal)\nnow raises a ValueError\nif the number of digits in string form is\nabove a limit to avoid potential denial of service attacks due to the\nalgorithmic complexity. This is a mitigation for CVE-2020-10735.\nThis limit can be configured or disabled by environment variable, command\nline flag, or sys\nAPIs. See the integer string conversion\nlength limitation documentation. The default limit\nis 4300 digits in string form.\nNotable changes in 3.8.17\u00b6\ntarfile\u00b6\nThe extraction methods in\ntarfile\n, andshutil.unpack_archive()\n, have a new filter argument that allows limiting tar features that may be surprising or dangerous, such as creating files outside the destination directory. See Extraction filters for details. In Python 3.12, use without the filter argument will show aDeprecationWarning\n. In Python 3.14, the default will switch to'data'\n. 
(Contributed by Petr Viktorin in PEP 706.)", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 19241}
Nothing is actually declared to be a\nPyObject\n, but every pointer to a Python object can be cast to a PyObject*.The members must not be accessed directly; instead use macros such as\nPy_REFCNT\nandPy_TYPE\n.-\nPy_ssize_t ob_refcnt\u00b6\n- Part of the Stable ABI.\nThe object\u2019s reference count, as returned by\nPy_REFCNT\n. Do not use this field directly; instead use functions and macros such asPy_REFCNT\n,Py_INCREF()\nandPy_DecRef()\n.The field type may be different from\nPy_ssize_t\n, depending on build configuration and platform.\n-\nPyTypeObject *ob_type\u00b6\n- Part of the Stable ABI.\nThe object\u2019s type. Do not use this field directly; use\nPy_TYPE\nandPy_SET_TYPE()\ninstead.\n-\nPy_ssize_t ob_refcnt\u00b6\n-\ntype PyVarObject\u00b6\n- Part of the Limited API. (Only some members are part of the stable ABI.)\nAn extension of\nPyObject\nthat adds theob_size\nfield. This is intended for objects that have some notion of length.As with\nPyObject\n, the members must not be accessed directly; instead use macros such asPy_SIZE\n,Py_REFCNT\nandPy_TYPE\n.-\nPy_ssize_t ob_size\u00b6\n- Part of the Stable ABI.\nA size field, whose contents should be considered an object\u2019s internal implementation detail.\nDo not use this field directly; use\nPy_SIZE\ninstead.Object creation functions such as\nPyObject_NewVar()\nwill generally set this field to the requested size (number of items). After creation, arbitrary values can be stored inob_size\nusingPy_SET_SIZE\n.To get an object\u2019s publicly exposed length, as returned by the Python function\nlen()\n, usePyObject_Length()\ninstead.\n-\nPy_ssize_t ob_size\u00b6\n-\nPyObject_HEAD\u00b6\nThis is a macro used when declaring new types which represent objects without a varying length. 
The PyObject_HEAD macro expands to:\nPyObject ob_base;\nSee documentation of\nPyObject\nabove.\n-\nPyObject_VAR_HEAD\u00b6\nThis is a macro used when declaring new types which represent objects with a length that varies from instance to instance. The PyObject_VAR_HEAD macro expands to:\nPyVarObject ob_base;\nSee documentation of\nPyVarObject\nabove.\n-\nPyTypeObject PyBaseObject_Type\u00b6\n- Part of the Stable ABI.\nThe base class of all other objects, the same as\nobject\nin Python.\n-\nint Py_Is(PyObject *x, PyObject *y)\u00b6\n- Part of the Stable ABI since version 3.10.\nTest if the x object is the y object, the same as\nx is y\nin Python.Added in version 3.10.\n-\nint Py_IsNone(PyObject *x)\u00b6\n- Part of the Stable ABI since version 3.10.\nTest if an object is the\nNone\nsingleton, the same asx is None\nin Python.Added in version 3.10.\n-\nint Py_IsTrue(PyObject *x)\u00b6\n- Part of the Stable ABI since version 3.10.\nTest if an object is the\nTrue\nsingleton, the same asx is True\nin Python.Added in version 3.10.\n-\nint Py_IsFalse(PyObject *x)\u00b6\n- Part of the Stable ABI since version 3.10.\nTest if an object is the\nFalse\nsingleton, the same asx is False\nin Python.Added in version 3.10.\n-\nPyTypeObject *Py_TYPE(PyObject *o)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI since version 3.14.\nGet the type of the Python object o.\nThe returned reference is borrowed from o. Do not release it with\nPy_DECREF()\nor similar.\n-\nint Py_IS_TYPE(PyObject *o, PyTypeObject *type)\u00b6\nReturn non-zero if the object o type is type. Return zero otherwise. Equivalent to:\nPy_TYPE(o) == type\n.Added in version 3.9.\n-\nvoid Py_SET_TYPE(PyObject *o, PyTypeObject *type)\u00b6\nSet the type of object o to type, without any checking or reference counting.\nThis is a very low-level operation. 
Consider instead setting the Python attribute\n__class__\nusingPyObject_SetAttrString()\nor similar.Note that assigning an incompatible type can lead to undefined behavior.\nIf type is a heap type, the caller must create a new reference to it. Similarly, if the old type of o is a heap type, the caller must release a reference to that type.\nAdded in version 3.9.\n-\nPy_ssize_t Py_SIZE(PyVarObject *o)\u00b6\nGet the\nob_size\nfield of o.Changed in version 3.11:\nPy_SIZE()\nis changed to an inline static function. The parameter type is no longer const PyVarObject*.\n-\nvoid Py_SET_SIZE(PyVarObject *o, Py_ssize_t size)\u00b6\nSet the\nob_size\nfield of o to size.Added in version 3.9.\n-\nPyObject_HEAD_INIT(type)\u00b6\nThis is a macro which expands to initialization values for a new\nPyObject\ntype. This macro expands to:_PyObject_EXTRA_INIT 1, type,\n-\nPyVarObject_HEAD_INIT(type, size)\u00b6\nThis is a macro which expands to initialization values for a new\nPyVarObject\ntype, including theob_size\nfield. This macro expands to:_PyObject_EXTRA_INIT 1, type, size,\nImplementing functions and methods\u00b6\n-\ntype PyCFunction\u00b6\n- Part of the Stable ABI.\nType of the functions used to implement most Python callables in C. Functions of this type take two PyObject* parameters and return one such value. If the return value is\nNULL\n, an exception shall have been set. If notNULL\n, the return value is interpreted as the return value of the function as exposed in Python. The function must return a new reference.The function signature is:\nPyObject *PyCFunction(PyObject *self, PyObject *args);\n-\ntype PyCFunctionWithKeywords\u00b6\n- Part of the Stable ABI.\nType of the functions used to implement Python callables in C with signature METH_VARARGS | METH_KEYWORDS. 
The function signature is:\nPyObject *PyCFunctionWithKeywords(PyObject *self, PyObject *args, PyObject *kwargs);\n-\ntype PyCFunctionFast\u00b6\n- Part of the Stable ABI since version 3.13.\nType of the functions used to implement Python callables in C with signature\nMETH_FASTCALL\n. The function signature is:PyObject *PyCFunctionFast(PyObject *self, PyObject *const *args, Py_ssize_t nargs);\n-\ntype PyCFunctionFastWithKeywords\u00b6\n- Part of the Stable ABI since version 3.13.\nType of the functions used to implement Python callables in C with signature METH_FASTCALL | METH_KEYWORDS. The function signature is:\nPyObject *PyCFunctionFastWithKeywords(PyObject *self, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames);\n-\ntype PyCMethod\u00b6\nType of the functions used to implement Python callables in C with signature METH_METHOD | METH_FASTCALL | METH_KEYWORDS. The function signature is:\nPyObject *PyCMethod(PyObject *self, PyTypeObject *defining_class, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames)\nAdded in version 3.9.\n-\ntype PyMethodDef\u00b6\n- Part of the Stable ABI (including all members).\nStructure used to describe a method of an extension type. This structure has four fields:\n-\nconst char *ml_name\u00b6\nName of the method.\n-\nPyCFunction ml_meth\u00b6\nPointer to the C implementation.\n-\nint ml_flags\u00b6\nFlags bits indicating how the call should be constructed.\n-\nconst char *ml_doc\u00b6\nPoints to the contents of the docstring.\n-\nconst char *ml_name\u00b6\nThe ml_meth\nis a C function pointer.\nThe functions may be of different\ntypes, but they always return PyObject*. 
If the function is not of\nthe PyCFunction\n, the compiler will require a cast in the method table.\nEven though PyCFunction\ndefines the first parameter as\nPyObject*, it is common that the method implementation uses the\nspecific C type of the self object.\nThe ml_flags\nfield is a bitfield which can include\nthe following flags.\nThe individual flags indicate either a calling convention or a binding\nconvention.\nThere are these calling conventions:\n-\nMETH_VARARGS\u00b6\n- Part of the Stable ABI.\nThis is the typical calling convention, where the methods have the type\nPyCFunction\n. The function expects two PyObject* values. The first one is the self object for methods; for module functions, it is the module object. The second parameter (often called args) is a tuple object representing all arguments. This parameter is typically processed usingPyArg_ParseTuple()\norPyArg_UnpackTuple()\n.\n-\nMETH_KEYWORDS\u00b6\nCan only be used in certain combinations with other flags: METH_VARARGS | METH_KEYWORDS, METH_FASTCALL | METH_KEYWORDS and METH_METHOD | METH_FASTCALL | METH_KEYWORDS.\n- METH_VARARGS | METH_KEYWORDS\nMethods with these flags must be of type\nPyCFunctionWithKeywords\n. The function expects three parameters: self, args, kwargs where kwargs is a dictionary of all the keyword arguments or possiblyNULL\nif there are no keyword arguments. The parameters are typically processed usingPyArg_ParseTupleAndKeywords()\n.\n-\nMETH_FASTCALL\u00b6\n- Part of the Stable ABI since version 3.7.\nFast calling convention supporting only positional arguments. The methods have the type\nPyCFunctionFast\n. 
The first parameter is self, the second parameter is a C array of PyObject* values indicating the arguments and the third parameter is the number of arguments (the length of the array).Added in version 3.7.\nChanged in version 3.10:\nMETH_FASTCALL\nis now part of the stable ABI.\n- METH_FASTCALL | METH_KEYWORDS\nExtension of\nMETH_FASTCALL\nsupporting also keyword arguments, with methods of typePyCFunctionFastWithKeywords\n. Keyword arguments are passed the same way as in the vectorcall protocol: there is an additional fourth PyObject* parameter which is a tuple representing the names of the keyword arguments (which are guaranteed to be strings) or possiblyNULL\nif there are no keywords. The values of the keyword arguments are stored in the args array, after the positional arguments.Added in version 3.7.\n-\nMETH_METHOD\u00b6\n- Part of the Stable ABI since version 3.7.\nCan only be used in the combination with other flags: METH_METHOD | METH_FASTCALL | METH_KEYWORDS.\n- METH_METHOD | METH_FASTCALL | METH_KEYWORDS\nExtension of METH_FASTCALL | METH_KEYWORDS supporting the defining class, that is, the class that contains the method in question. The defining class might be a superclass of\nPy_TYPE(self)\n.The method needs to be of type\nPyCMethod\n, the same as forMETH_FASTCALL | METH_KEYWORDS\nwithdefining_class\nargument added afterself\n.Added in version 3.9.\n-\nMETH_NOARGS\u00b6\n- Part of the Stable ABI.\nMethods without parameters don\u2019t need to check whether arguments are given if they are listed with the\nMETH_NOARGS\nflag. They need to be of typePyCFunction\n. The first parameter is typically named self and will hold a reference to the module or object instance. In all cases the second parameter will beNULL\n.The function must have 2 parameters. 
Since the second parameter is unused,\nPy_UNUSED\ncan be used to prevent a compiler warning.\n-\nMETH_O\u00b6\n- Part of the Stable ABI.\nMethods with a single object argument can be listed with the\nMETH_O\nflag, instead of invokingPyArg_ParseTuple()\nwith a\"O\"\nargument. They have the typePyCFunction\n, with the self parameter, and a PyObject* parameter representing the single argument.\nThese two constants are not used to indicate the calling convention but the binding when used with methods of classes. These may not be used for functions defined for modules. At most one of these flags may be set for any given method.\n-\nMETH_CLASS\u00b6\n- Part of the Stable ABI.\nThe method will be passed the type object as the first parameter rather than an instance of the type. This is used to create class methods, similar to what is created when using the\nclassmethod()\nbuilt-in function.\n-\nMETH_STATIC\u00b6\n- Part of the Stable ABI.\nThe method will be passed\nNULL\nas the first parameter rather than an instance of the type. This is used to create static methods, similar to what is created when using thestaticmethod()\nbuilt-in function.\nOne other constant controls whether a method is loaded in place of another definition with the same method name.\n-\nMETH_COEXIST\u00b6\n- Part of the Stable ABI.\nThe method will be loaded in place of existing definitions. Without METH_COEXIST, the default is to skip repeated definitions. Since slot wrappers are loaded before the method table, the existence of a sq_contains slot, for example, would generate a wrapped method named\n__contains__()\nand preclude the loading of a corresponding PyCFunction with the same name. With the flag defined, the PyCFunction will be loaded in place of the wrapper object and will co-exist with the slot. This is helpful because calls to PyCFunctions are optimized more than wrapper object calls.\n-\nPyTypeObject PyCMethod_Type\u00b6\nThe type object corresponding to Python C method objects. 
This is available as\ntypes.BuiltinMethodType\nin the Python layer.\n-\nint PyCMethod_Check(PyObject *op)\u00b6\nReturn true if op is an instance of the\nPyCMethod_Type\ntype or a subtype of it. This function always succeeds.\n-\nint PyCMethod_CheckExact(PyObject *op)\u00b6\nThis is the same as\nPyCMethod_Check()\n, but does not account for subtypes.\n-\nPyObject *PyCMethod_New(PyMethodDef *ml, PyObject *self, PyObject *module, PyTypeObject *cls)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.9.\nTurn ml into a Python callable object. The caller must ensure that ml outlives the callable. Typically, ml is defined as a static variable.\nThe self parameter will be passed as the self argument to the C function in\nml->ml_meth\nwhen invoked. self can beNULL\n.The callable object\u2019s\n__module__\nattribute can be set from the given module argument. module should be a Python string, which will be used as name of the module the function is defined in. If unavailable, it can be set toNone\norNULL\n.See also\nThe cls parameter will be passed as the defining_class argument to the C function. Must be set if\nMETH_METHOD\nis set onml->ml_flags\n.Added in version 3.9.\n-\nPyTypeObject PyCFunction_Type\u00b6\n- Part of the Stable ABI.\nThe type object corresponding to Python C function objects. This is available as\ntypes.BuiltinFunctionType\nin the Python layer.\n-\nint PyCFunction_Check(PyObject *op)\u00b6\nReturn true if op is an instance of the\nPyCFunction_Type\ntype or a subtype of it. This function always succeeds.\n-\nint PyCFunction_CheckExact(PyObject *op)\u00b6\nThis is the same as\nPyCFunction_Check()\n, but does not account for subtypes.\n-\nPyObject *PyCFunction_NewEx(PyMethodDef *ml, PyObject *self, PyObject *module)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEquivalent to\nPyCMethod_New(ml, self, module, NULL)\n.\n-\nPyObject *PyCFunction_New(PyMethodDef *ml, PyObject *self)\u00b6\n- Return value: New reference. 
Part of the Stable ABI since version 3.4.\nEquivalent to\nPyCMethod_New(ml, self, NULL, NULL)\n.\n-\nint PyCFunction_GetFlags(PyObject *func)\u00b6\n- Part of the Stable ABI.\nGet the function\u2019s flags on func as they were passed to\nml_flags\n.If func is not a C function object, this fails with an exception. func must not be\nNULL\n.This function returns the function\u2019s flags on success, and\n-1\nwith an exception set on failure.\n-\nint PyCFunction_GET_FLAGS(PyObject *func)\u00b6\nThis is the same as\nPyCFunction_GetFlags()\n, but without error or type checking.\n-\nPyCFunction PyCFunction_GetFunction(PyObject *func)\u00b6\n- Part of the Stable ABI.\nGet the function pointer on func as it was passed to\nml_meth\n.If func is not a C function object, this fails with an exception. func must not be\nNULL\n.This function returns the function pointer on success, and\nNULL\nwith an exception set on failure.\n-\nint PyCFunction_GET_FUNCTION(PyObject *func)\u00b6\nThis is the same as\nPyCFunction_GetFunction()\n, but without error or type checking.\n-\nPyObject *PyCFunction_GetSelf(PyObject *func)\u00b6\n- Part of the Stable ABI.\nGet the \u201cself\u201d object on func. This is the object that would be passed to the first argument of a\nPyCFunction\n. For C function objects created through aPyMethodDef\non aPyModuleDef\n, this is the resulting module object.If func is not a C function object, this fails with an exception. func must not be\nNULL\n.This function returns a borrowed reference to the \u201cself\u201d object on success, and\nNULL\nwith an exception set on failure.\n-\nPyObject *PyCFunction_GET_SELF(PyObject *func)\u00b6\nThis is the same as\nPyCFunction_GetSelf()\n, but without error or type checking.\nAccessing attributes of extension types\u00b6\n-\ntype PyMemberDef\u00b6\n- Part of the Stable ABI (including all members).\nStructure which describes an attribute of a type which corresponds to a C struct member. 
When defining a class, put a NULL-terminated array of these structures in the\ntp_members\nslot.Its fields are, in order:\n-\nconst char *name\u00b6\nName of the member. A NULL value marks the end of a\nPyMemberDef[]\narray.The string should be static, no copy is made of it.\n-\nint type\u00b6\nThe type of the member in the C struct. See Member types for the possible values.\n-\nPy_ssize_t offset\u00b6\nThe offset in bytes that the member is located on the type\u2019s object struct.\n-\nint flags\u00b6\nZero or more of the Member flags, combined using bitwise OR.\n-\nconst char *doc\u00b6\nThe docstring, or NULL. The string should be static, no copy is made of it. Typically, it is defined using\nPyDoc_STR\n.\nBy default (when\nflags\nis0\n), members allow both read and write access. Use thePy_READONLY\nflag for read-only access. Certain types, likePy_T_STRING\n, implyPy_READONLY\n. OnlyPy_T_OBJECT_EX\n(and legacyT_OBJECT\n) members can be deleted.For heap-allocated types (created using\nPyType_FromSpec()\nor similar),PyMemberDef\nmay contain a definition for the special member\"__vectorcalloffset__\"\n, corresponding totp_vectorcall_offset\nin type objects. This member must be defined withPy_T_PYSSIZET\n, and eitherPy_READONLY\norPy_READONLY | Py_RELATIVE_OFFSET\n. For example:static PyMemberDef spam_type_members[] = { {\"__vectorcalloffset__\", Py_T_PYSSIZET, offsetof(Spam_object, vectorcall), Py_READONLY}, {NULL} /* Sentinel */ };\n(You may need to\n#include \nforoffsetof()\n.)The legacy offsets\ntp_dictoffset\nandtp_weaklistoffset\ncan be defined similarly using\"__dictoffset__\"\nand\"__weaklistoffset__\"\nmembers, but extensions are strongly encouraged to usePy_TPFLAGS_MANAGED_DICT\nandPy_TPFLAGS_MANAGED_WEAKREF\ninstead.Changed in version 3.12:\nPyMemberDef\nis always available. 
Previously, it required including\"structmember.h\"\n.Changed in version 3.14:\nPy_RELATIVE_OFFSET\nis now allowed for\"__vectorcalloffset__\"\n,\"__dictoffset__\"\nand\"__weaklistoffset__\"\n. -\nconst char *name\u00b6\n-\nPyObject *PyMember_GetOne(const char *obj_addr, struct PyMemberDef *m)\u00b6\n- Part of the Stable ABI.\nGet an attribute belonging to the object at address obj_addr. The attribute is described by\nPyMemberDef\nm. ReturnsNULL\non error.Changed in version 3.12:\nPyMember_GetOne\nis always available. Previously, it required including\"structmember.h\"\n.\n-\nint PyMember_SetOne(char *obj_addr, struct PyMemberDef *m, PyObject *o)\u00b6\n- Part of the Stable ABI.\nSet an attribute belonging to the object at address obj_addr to object o. The attribute to set is described by\nPyMemberDef\nm. Returns0\nif successful and a negative value on failure.Changed in version 3.12:\nPyMember_SetOne\nis always available. Previously, it required including\"structmember.h\"\n.\nMember flags\u00b6\nThe following flags can be used with PyMemberDef.flags\n:\n-\nPy_READONLY\u00b6\n- Part of the Stable ABI since version 3.12.\nNot writable.\n-\nPy_AUDIT_READ\u00b6\n- Part of the Stable ABI since version 3.12.\nEmit an\nobject.__getattr__\naudit event before reading.\n-\nPy_RELATIVE_OFFSET\u00b6\n- Part of the Stable ABI since version 3.12.\nIndicates that the\noffset\nof thisPyMemberDef\nentry indicates an offset from the subclass-specific data, rather than fromPyObject\n.Can only be used as part of the\nPy_tp_members\nslot\nwhen creating a class using negativebasicsize\n. It is mandatory in that case. 
When setting tp_members\nfrom the slot during class creation, Python clears the flag and sets PyMemberDef.offset\nto the offset from the PyObject\nstruct.\nChanged in version 3.10: The RESTRICTED\n, READ_RESTRICTED\nand\nWRITE_RESTRICTED\nmacros available with\n#include \"structmember.h\"\nare deprecated.\nREAD_RESTRICTED\nand RESTRICTED\nare equivalent to\nPy_AUDIT_READ\n; WRITE_RESTRICTED\ndoes nothing.\nChanged in version 3.12: The READONLY\nmacro was renamed to Py_READONLY\n.\nThe PY_AUDIT_READ\nmacro was renamed with the Py_\nprefix.\nThe new names are now always available.\nPreviously, these required #include \"structmember.h\"\n.\nThe header is still available and it provides the old names.\nMember types\u00b6\nPyMemberDef.type\ncan be one of the following macros corresponding\nto various C types.\nWhen the member is accessed in Python, it will be converted to the\nequivalent Python type.\nWhen it is set from Python, it will be converted back to the C type.\nIf that is not possible, an exception such as TypeError\nor\nValueError\nis raised.\nUnless marked (D), attributes defined this way cannot be deleted\nusing e.g. del\nor delattr()\n.\n| Macro name | C type | Python type |\n|---|---|---|\n| Py_T_BYTE | char | int |\n| Py_T_SHORT | short | int |\n| Py_T_INT | int | int |\n| Py_T_LONG | long | int |\n| Py_T_LONGLONG | long long | int |\n| Py_T_UBYTE | unsigned char | int |\n| Py_T_UINT | unsigned int | int |\n| Py_T_USHORT | unsigned short | int |\n| Py_T_ULONG | unsigned long | int |\n| Py_T_ULONGLONG | unsigned long long | int |\n| Py_T_PYSSIZET | Py_ssize_t | int |\n| Py_T_FLOAT | float | float |\n| Py_T_DOUBLE | double | float |\n| Py_T_BOOL | char (written as 0 or 1) | bool |\n| Py_T_STRING | const char* (*) | str (RO) |\n| Py_T_STRING_INPLACE | const char[] (*) | str (RO) |\n| Py_T_CHAR | char (0-127) | str (**) |\n| Py_T_OBJECT_EX | PyObject* | object (D) |\n(*): Zero-terminated, UTF8-encoded C string. With\nPy_T_STRING\nthe C representation is a pointer; with Py_T_STRING_INPLACE\nthe string is stored directly in the structure.\n(**): String of length 1. Only ASCII is accepted.\n(RO): Implies\nPy_READONLY\n.\n(D): Can be deleted, in which case the pointer is set to\nNULL\n. 
Reading a NULL\npointer raises AttributeError\n.\nAdded in version 3.12: In previous versions, the macros were only available with\n#include \"structmember.h\"\nand were named without the Py_\nprefix\n(e.g. as T_INT\n).\nThe header is still available and contains the old names, along with\nthe following deprecated types:\n-\nT_OBJECT\u00b6\nLike\nPy_T_OBJECT_EX\n, but NULL\nis converted to None\n. This results in surprising behavior in Python: deleting the attribute effectively sets it to None\n.\n-\nT_NONE\u00b6\nAlways\nNone\n. Must be used with Py_READONLY\n.\nDefining Getters and Setters\u00b6\n-\ntype PyGetSetDef\u00b6\n- Part of the Stable ABI (including all members).\nStructure to define property-like access for a type. See also description of the\nPyTypeObject.tp_getset\nslot.\n-\nconst char *name\u00b6\nattribute name\n-\ngetter get\u00b6\nC function to get the attribute.\n-\nsetter set\u00b6\nOptional C function to set or delete the attribute. If\nNULL\n, the attribute is read-only.\n-\nconst char *doc\u00b6\noptional docstring\n-\nvoid *closure\u00b6\nOptional user data pointer, providing additional data for getter and setter.\n-\ntypedef PyObject *(*getter)(PyObject*, void*)\u00b6\n- Part of the Stable ABI.\nThe\nget\nfunction takes one PyObject* parameter (the instance) and a user data pointer (the associated closure\n):\nIt should return a new reference on success or\nNULL\nwith a set exception on failure.\n-\ntypedef int (*setter)(PyObject*, PyObject*, void*)\u00b6\n- Part of the Stable ABI.\nset\nfunctions take two PyObject* parameters (the instance and the value to be set) and a user data pointer (the associated closure\n):\nIn case the attribute should be deleted the second parameter is\nNULL\n. 
Should return 0\non success or -1\nwith a set exception on failure.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 5824}
{"url": "https://docs.python.org/3/c-api/apiabiversion.html", "title": "API and ABI Versioning", "content": "API and ABI Versioning\u00b6\nBuild-time version constants\u00b6\nCPython exposes its version number in the following macros.\nNote that these correspond to the version code is built with.\nSee Py_Version\nfor the version used at run time.\nSee C API Stability for a discussion of API and ABI stability across versions.\n-\nPY_MAJOR_VERSION\u00b6\nThe\n3\nin 3.4.1a2\n.\n-\nPY_MINOR_VERSION\u00b6\nThe\n4\nin 3.4.1a2\n.\n-\nPY_MICRO_VERSION\u00b6\nThe\n1\nin 3.4.1a2\n.\n-\nPY_RELEASE_LEVEL\u00b6\nThe\na\nin 3.4.1a2\n. This can be 0xA\nfor alpha, 0xB\nfor beta, 0xC\nfor release candidate or 0xF\nfor final.\n-\nPY_RELEASE_SERIAL\u00b6\nThe\n2\nin 3.4.1a2\n. Zero for final releases.\n-\nPY_VERSION_HEX\u00b6\nThe Python version number encoded in a single integer. See\nPy_PACK_FULL_VERSION()\nfor the encoding details.\nUse this for numeric comparisons, for example,\n#if PY_VERSION_HEX >= ...\n.\nThese macros are defined in Include/patchlevel.h.\nRun-time version\u00b6\n-\nconst unsigned long Py_Version\u00b6\n- Part of the Stable ABI since version 3.11.\nThe Python runtime version number encoded in a single constant integer. See\nPy_PACK_FULL_VERSION()\nfor the encoding details. This contains the Python version used at run time.\nUse this for numeric comparisons, for example,\nif (Py_Version >= ...)\n.\nAdded in version 3.11.\nBit-packing macros\u00b6\n-\nuint32_t Py_PACK_FULL_VERSION(int major, int minor, int micro, int release_level, int release_serial)\u00b6\n- Part of the Stable ABI since version 3.14.\nReturn the given version, encoded as a single 32-bit integer with the following structure:\n| Argument | No. of bits | Bit mask | Bit shift | Example (3.4.1a2) | Example (3.10.0) |\n|---|---|---|---|---|---|\n| major | 8 | 0xFF000000 | 24 | 0x03 | 0x03 |\n| minor | 8 | 0x00FF0000 | 16 | 0x04 | 0x0A |\n| micro | 8 | 0x0000FF00 | 8 | 0x01 | 0x00 |\n| release_level | 4 | 0x000000F0 | 4 | 0xA | 0xF |\n| release_serial | 4 | 0x0000000F | 0 | 0x2 | 0x0 |\nFor example:\n| Version | Py_PACK_FULL_VERSION arguments | Encoded version |\n|---|---|---|\n| 3.4.1a2 | (3, 4, 1, 0xA, 2) | 0x030401a2 |\n| 3.10.0 | (3, 10, 0, 0xF, 0) | 0x030a00f0 |\nOut-of-range bits in the arguments are ignored. That is, the macro can be defined as:\n#ifndef Py_PACK_FULL_VERSION #define Py_PACK_FULL_VERSION(X, Y, Z, LEVEL, SERIAL) ( \\ (((X) & 0xff) << 24) | \\ (((Y) & 0xff) << 16) | \\ (((Z) & 0xff) << 8) | \\ (((LEVEL) & 0xf) << 4) | \\ (((SERIAL) & 0xf) << 0)) #endif\nPy_PACK_FULL_VERSION\nis primarily a macro, intended for use in #if\ndirectives, but it is also available as an exported function.\nAdded in version 3.14.\n-\nuint32_t Py_PACK_VERSION(int major, int minor)\u00b6\n- Part of the Stable ABI since version 3.14.\nEquivalent to\nPy_PACK_FULL_VERSION(major, minor, 0, 0, 0)\n. The result does not correspond to any Python release, but is useful in numeric comparisons.\nAdded in version 3.14.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 645}
{"url": "https://docs.python.org/3/reference/lexical_analysis.html", "title": "Lexical analysis", "content": "2. Lexical analysis\u00b6\nA Python program is read by a parser. Input to the parser is a stream of tokens, generated by the lexical analyzer (also known as the tokenizer). This chapter describes how the lexical analyzer produces these tokens.\nThe lexical analyzer determines the program text\u2019s encoding\n(UTF-8 by default), and decodes the text into\nsource characters.\nIf the text cannot be decoded, a SyntaxError\nis raised.\nNext, the lexical analyzer uses the source characters to generate a stream of tokens. The type of a generated token generally depends on the next source character to be processed. 
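The Py_PACK_FULL_VERSION bit layout shown in the versioning section above can be mirrored in pure Python. A minimal sketch of the same masking and shifting (pack_full_version is a hypothetical helper for illustration, not a stdlib function):

```python
def pack_full_version(major, minor, micro, release_level, release_serial):
    """Mirror of the C macro Py_PACK_FULL_VERSION; out-of-range bits are masked off."""
    return ((major & 0xFF) << 24) | \
           ((minor & 0xFF) << 16) | \
           ((micro & 0xFF) << 8) | \
           ((release_level & 0xF) << 4) | \
           (release_serial & 0xF)

# The two examples from the table:
assert pack_full_version(3, 4, 1, 0xA, 2) == 0x030401A2   # 3.4.1a2
assert pack_full_version(3, 10, 0, 0xF, 0) == 0x030A00F0  # 3.10.0
```

As in the C macro, arguments wider than their field are silently truncated by the masks rather than rejected.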
Similarly, other special behavior of the analyzer depends on the first source character that hasn\u2019t yet been processed. The following table gives a quick summary of these source characters, with links to sections that contain more information.\n2.1. Line structure\u00b6\nA Python program is divided into a number of logical lines.\n2.1.1. Logical lines\u00b6\nThe end of a logical line is represented by the token NEWLINE\n.\nStatements cannot cross logical line boundaries except where NEWLINE\nis allowed by the syntax (e.g., between statements in compound statements).\nA logical line is constructed from one or more physical lines by following\nthe explicit or implicit\nline joining rules.\n2.1.2. Physical lines\u00b6\nA physical line is a sequence of characters terminated by one of the following end-of-line sequences:\nthe Unix form using ASCII LF (linefeed),\nthe Windows form using the ASCII sequence CR LF (return followed by linefeed),\nthe \u2018Classic Mac OS\u2019 form using the ASCII CR (return) character.\nRegardless of platform, each of these sequences is replaced by a single ASCII LF (linefeed) character. (This is done even inside string literals.) Each line can use any of the sequences; they do not need to be consistent within a file.\nThe end of input also serves as an implicit terminator for the final physical line.\nFormally:\nnewline: | | \n2.1.4. Encoding declarations\u00b6\nIf a comment in the first or second line of the Python script matches the\nregular expression coding[=:]\\s*([-\\w.]+)\n, this comment is processed as an\nencoding declaration; the first group of this expression names the encoding of\nthe source code file. The encoding declaration must appear on a line of its\nown. 
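The declaration lookup described above is also exposed by the standard library: tokenize.detect_encoding applies the same first-two-lines rule. A small sketch (the sample byte strings are made up for illustration):

```python
import io
import tokenize

# A source file with an explicit encoding declaration on the first line.
src = b"# -*- coding: utf-8 -*-\nx = 1\n"
encoding, consumed_lines = tokenize.detect_encoding(io.BytesIO(src).readline)
assert encoding == "utf-8"

# Without any declaration, the default encoding is UTF-8.
plain = b"x = 1\n"
encoding, _ = tokenize.detect_encoding(io.BytesIO(plain).readline)
assert encoding == "utf-8"
```

detect_encoding also returns the lines it had to read to make the decision, so a caller can replay them when tokenizing the rest of the stream.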
If it is the second line, the first line must also be a comment-only line.\nThe recommended forms of an encoding expression are\n# -*- coding: -*-\nwhich is recognized also by GNU Emacs, and\n# vim:fileencoding=\nwhich is recognized by Bram Moolenaar\u2019s VIM.\nIf no encoding declaration is found, the default encoding is UTF-8. If the\nimplicit or explicit encoding of a file is UTF-8, an initial UTF-8 byte-order\nmark (b'\\xef\\xbb\\xbf'\n) is ignored rather than being a syntax error.\nIf an encoding is declared, the encoding name must be recognized by Python (see Standard Encodings). The encoding is used for all lexical analysis, including string literals, comments and identifiers.\nAll lexical analysis, including string literals, comments and identifiers, works on Unicode text decoded using the source encoding. Any Unicode code point, except the NUL control character, can appear in Python source.\nsource_character: \n2.1.5. Explicit line joining\u00b6\nTwo or more physical lines may be joined into logical lines using backslash\ncharacters (\\\n), as follows: when a physical line ends in a backslash that is\nnot part of a string literal or comment, it is joined with the following forming\na single logical line, deleting the backslash and the following end-of-line\ncharacter. For example:\nif 1900 < year < 2100 and 1 <= month <= 12 \\\nand 1 <= day <= 31 and 0 <= hour < 24 \\\nand 0 <= minute < 60 and 0 <= second < 60: # Looks like a valid date\nreturn 1\nA line ending in a backslash cannot carry a comment. A backslash does not continue a comment. A backslash does not continue a token except for string literals (i.e., tokens other than string literals cannot be split across physical lines using a backslash). A backslash is illegal elsewhere on a line outside a string literal.\n2.1.6. Implicit line joining\u00b6\nExpressions in parentheses, square brackets or curly braces can be split over more than one physical line without using backslashes. 
For example:\nmonth_names = ['Januari', 'Februari', 'Maart', # These are the\n'April', 'Mei', 'Juni', # Dutch names\n'Juli', 'Augustus', 'September', # for the months\n'Oktober', 'November', 'December'] # of the year\nImplicitly continued lines can carry comments. The indentation of the continuation lines is not important. Blank continuation lines are allowed. There is no NEWLINE token between implicit continuation lines. Implicitly continued lines can also occur within triple-quoted strings (see below); in that case they cannot carry comments.\n2.1.7. Blank lines\u00b6\nA logical line that contains only spaces, tabs, formfeeds and possibly a\ncomment, is ignored (i.e., no NEWLINE\ntoken is generated).\nDuring interactive input of statements, handling of a blank line may differ\ndepending on the implementation of the read-eval-print loop.\nIn the standard interactive interpreter, an entirely blank logical line (that\nis, one containing not even whitespace or a comment) terminates a multi-line\nstatement.\n2.1.8. Indentation\u00b6\nLeading whitespace (spaces and tabs) at the beginning of a logical line is used to compute the indentation level of the line, which in turn is used to determine the grouping of statements.\nTabs are replaced (from left to right) by one to eight spaces such that the total number of characters up to and including the replacement is a multiple of eight (this is intended to be the same rule as used by Unix). The total number of spaces preceding the first non-blank character then determines the line\u2019s indentation. 
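The tab-expansion rule just described (tabs advance to the next multiple of eight columns, then the column count gives the indentation) can be sketched as follows; indent_width is a hypothetical helper for illustration, not part of the tokenizer's API:

```python
def indent_width(line: str) -> int:
    """Compute a line's indentation using the Unix tab rule described above."""
    col = 0
    for ch in line:
        if ch == ' ':
            col += 1
        elif ch == '\t':
            col += 8 - (col % 8)   # advance to the next multiple of eight
        else:
            break
    return col

assert indent_width("    x = 1") == 4
assert indent_width("\tx = 1") == 8
assert indent_width("   \tx = 1") == 8   # the tab completes the current group of eight
```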
Indentation cannot be split over multiple physical lines using backslashes; the whitespace up to the first backslash determines the indentation.\nIndentation is rejected as inconsistent if a source file mixes tabs and spaces\nin a way that makes the meaning dependent on the worth of a tab in spaces; a\nTabError\nis raised in that case.\nCross-platform compatibility note: because of the nature of text editors on non-UNIX platforms, it is unwise to use a mixture of spaces and tabs for the indentation in a single source file. It should also be noted that different platforms may explicitly limit the maximum indentation level.\nA formfeed character may be present at the start of the line; it will be ignored for the indentation calculations above. Formfeed characters occurring elsewhere in the leading whitespace have an undefined effect (for instance, they may reset the space count to zero).\nThe indentation levels of consecutive lines are used to generate\nINDENT\nand DEDENT\ntokens, using a stack,\nas follows.\nBefore the first line of the file is read, a single zero is pushed on the stack;\nthis will never be popped off again. The numbers pushed on the stack will\nalways be strictly increasing from bottom to top. At the beginning of each\nlogical line, the line\u2019s indentation level is compared to the top of the stack.\nIf it is equal, nothing happens. If it is larger, it is pushed on the stack, and\none INDENT\ntoken is generated. 
If it is smaller, it must be one of the\nnumbers occurring on the stack; all numbers on the stack that are larger are\npopped off, and for each number popped off a DEDENT\ntoken is generated.\nAt the end of the file, a DEDENT\ntoken is generated for each number\nremaining on the stack that is larger than zero.\nHere is an example of a correctly (though confusingly) indented piece of Python code:\ndef perm(l):\n# Compute the list of all permutations of l\nif len(l) <= 1:\nreturn [l]\nr = []\nfor i in range(len(l)):\ns = l[:i] + l[i+1:]\np = perm(s)\nfor x in p:\nr.append(l[i:i+1] + x)\nreturn r\nThe following example shows various indentation errors:\ndef perm(l): # error: first line indented\nfor i in range(len(l)): # error: not indented\ns = l[:i] + l[i+1:]\np = perm(l[:i] + l[i+1:]) # error: unexpected indent\nfor x in p:\nr.append(l[i:i+1] + x)\nreturn r # error: inconsistent dedent\n(Actually, the first three errors are detected by the parser; only the last\nerror is found by the lexical analyzer \u2014 the indentation of return r\ndoes\nnot match a level popped off the stack.)\n2.1.9. Whitespace between tokens\u00b6\nExcept at the beginning of a logical line or in string literals, the whitespace characters space, tab and formfeed can be used interchangeably to separate tokens:\nwhitespace: ' ' | tab | formfeed\nWhitespace is needed between two tokens only if their concatenation\ncould otherwise be interpreted as a different token. For example, ab\nis one\ntoken, but a b\nis two tokens. However, +a\nand + a\nboth produce\ntwo tokens, +\nand a\n, as +a\nis not a valid token.\n2.1.10. End marker\u00b6\nAt the end of non-interactive input, the lexical analyzer generates an\nENDMARKER\ntoken.\n2.2. 
Other tokens\u00b6\nBesides NEWLINE\n, INDENT\nand DEDENT\n,\nthe following categories of tokens exist:\nidentifiers and keywords (NAME\n), literals (such as\nNUMBER\nand STRING\n), and other symbols\n(operators and delimiters, OP\n).\nWhitespace characters (other than logical line terminators, discussed earlier)\nare not tokens, but serve to delimit tokens.\nWhere ambiguity exists, a token comprises the longest possible string that\nforms a legal token, when read from left to right.\n2.3. Names (identifiers and keywords)\u00b6\nNAME\ntokens represent identifiers, keywords, and\nsoft keywords.\nNames are composed of the following characters:\nuppercase and lowercase letters (\nA-Z\nanda-z\n),the underscore (\n_\n),digits (\n0\nthrough9\n), which cannot appear as the first character, andnon-ASCII characters. Valid names may only contain \u201cletter-like\u201d and \u201cdigit-like\u201d characters; see Non-ASCII characters in names for details.\nNames must contain at least one character, but have no upper length limit. Case is significant.\nFormally, names are described by the following lexical definitions:\nNAME:name_start\nname_continue\n* name_start: \"a\"...\"z\" | \"A\"...\"Z\" | \"_\" | name_continue: name_start | \"0\"...\"9\" identifier: \nNote that not all names matched by this grammar are valid; see Non-ASCII characters in names for details.\n2.3.1. Keywords\u00b6\nThe following names are used as reserved words, or keywords of the language, and cannot be used as ordinary identifiers. They must be spelled exactly as written here:\nFalse await else import pass\nNone break except in raise\nTrue class finally is return\nand continue for lambda try\nas def from nonlocal while\nassert del global not with\nasync elif if or yield\n2.3.2. Soft Keywords\u00b6\nAdded in version 3.10.\nSome names are only reserved under specific contexts. 
These are known as soft keywords:\nThese syntactically act as keywords in their specific contexts, but this distinction is done at the parser level, not when tokenizing.\nAs soft keywords, their use in the grammar is possible while still preserving compatibility with existing code that uses these names as identifier names.\nChanged in version 3.12: type\nis now a soft keyword.\n2.3.3. Reserved classes of identifiers\u00b6\nCertain classes of identifiers (besides keywords) have special meanings. These classes are identified by the patterns of leading and trailing underscore characters:\n_*\nNot imported by\nfrom module import *\n.\n_\nIn a\ncase\npattern within a match\nstatement, _\nis a soft keyword that denotes a wildcard.\nSeparately, the interactive interpreter makes the result of the last evaluation available in the variable\n_\n. (It is stored in the builtins\nmodule, alongside built-in functions like print\n.)\nElsewhere,\n_\nis a regular identifier. It is often used to name \u201cspecial\u201d items, but it is not special to Python itself.\nNote\nThe name\n_\nis often used in conjunction with internationalization; refer to the documentation for the gettext\nmodule for more information on this convention.\nIt is also commonly used for unused variables.\n__*__\nSystem-defined names, informally known as \u201cdunder\u201d names. These names are defined by the interpreter and its implementation (including the standard library). Current system names are discussed in the Special method names section and elsewhere. More will likely be defined in future versions of Python. Any use of\n__*__\nnames, in any context, that does not follow explicitly documented use, is subject to breakage without warning.\n__*\nClass-private names. Names in this category, when used within the context of a class definition, are re-written to use a mangled form to help avoid name clashes between \u201cprivate\u201d attributes of base and derived classes. See section Identifiers (Names).\n2.3.4. 
Non-ASCII characters in names\u00b6\nNames that contain non-ASCII characters need additional normalization\nand validation beyond the rules and grammar explained\nabove.\nFor example, \u0159_1\n, \u86c7\n, or \u0938\u093e\u0901\u092a\nare valid names, but r\u30302\n,\n\u20ac\n, or \ud83d\udc0d\nare not.\nThis section explains the exact rules.\nAll names are converted into the normalization form NFKC while parsing.\nThis means that, for example, some typographic variants of characters are\nconverted to their \u201cbasic\u201d form. For example, \ufb01\u207f\u2090\u02e1\u1d62\u1dbb\u2090\u1d57\u1d62\u1d52\u2099\nnormalizes to\nfinalization\n, so Python treats them as the same name:\n>>> \ufb01\u207f\u2090\u02e1\u1d62\u1dbb\u2090\u1d57\u1d62\u1d52\u2099 = 3\n>>> finalization\n3\nNote\nNormalization is done at the lexical level only.\nRun-time functions that take names as strings generally do not normalize\ntheir arguments.\nFor example, the variable defined above is accessible at run time in the\nglobals()\ndictionary as globals()[\"finalization\"]\nbut not\nglobals()[\"\ufb01\u207f\u2090\u02e1\u1d62\u1dbb\u2090\u1d57\u1d62\u1d52\u2099\"]\n.\nSimilarly to how ASCII-only names must contain only letters, digits and\nthe underscore, and cannot start with a digit, a valid name must\nstart with a character in the \u201cletter-like\u201d set xid_start\n,\nand the remaining characters must be in the \u201cletter- and digit-like\u201d set\nxid_continue\n.\nThese sets are based on the XID_Start and XID_Continue sets as defined by the\nUnicode standard annex UAX-31.\nPython\u2019s xid_start\nadditionally includes the underscore (_\n).\nNote that Python does not necessarily conform to UAX-31.\nA non-normative listing of characters in the XID_Start and XID_Continue\nsets as defined by Unicode is available in the DerivedCoreProperties.txt\nfile in the Unicode Character Database.\nFor reference, the construction rules for the xid_*\nsets are given below.\nThe set id_start\nis defined 
as the union of:\nUnicode category Lu\n- uppercase letters (includes A\nto Z\n)\nUnicode category Ll\n- lowercase letters (includes a\nto z\n)\nUnicode category Lt\n- titlecase letters\nUnicode category Lm\n- modifier letters\nUnicode category Lo\n- other letters\nUnicode category Nl\n- letter numbers\n{\n\"_\"\n} - the underscore\n- an explicit set of characters in PropList.txt to support backwards compatibility\nThe set xid_start\nthen closes this set under NFKC normalization, by\nremoving all characters whose normalization is not of the form\nid_start id_continue*\n.\nThe set id_continue\nis defined as the union of:\nid_start\n(see above)\nUnicode category Nd\n- decimal numbers (includes 0\nto 9\n)\nUnicode category Pc\n- connector punctuations\nUnicode category Mn\n- nonspacing marks\nUnicode category Mc\n- spacing combining marks\n- another explicit set of characters in PropList.txt to support backwards compatibility\nAgain, xid_continue\ncloses this set under NFKC normalization.\nUnicode categories use the version of the Unicode Character Database as\nincluded in the unicodedata\nmodule.\n2.4. Literals\u00b6\nLiterals are notations for constant values of some built-in types.\nIn terms of lexical analysis, Python has string, bytes and numeric literals.\nOther \u201cliterals\u201d are lexically denoted using keywords\n(None\n, True\n, False\n) and the special\nellipsis token (...\n).\n2.5. String and Bytes literals\u00b6\nString literals are text enclosed in single quotes ('\n) or double\nquotes (\"\n). For example:\n\"spam\"\n'eggs'\nThe quote used to start the literal also terminates it, so a string literal can only contain the other quote (except with escape sequences, see below). 
For example:\n'Say \"Hello\", please.'\n\"Don't do that!\"\nExcept for this limitation, the choice of quote character ('\nor \"\n)\ndoes not affect how the literal is parsed.\nInside a string literal, the backslash (\\\n) character introduces an\nescape sequence, which has special meaning depending on the character\nafter the backslash.\nFor example, \\\"\ndenotes the double quote character, and does not end\nthe string:\n>>> print(\"Say \\\"Hello\\\" to everyone!\")\nSay \"Hello\" to everyone!\nSee escape sequences below for a full list of such sequences, and more details.\n2.5.1. Triple-quoted strings\u00b6\nStrings can also be enclosed in matching groups of three single or double quotes. These are generally referred to as triple-quoted strings:\n\"\"\"This is a triple-quoted string.\"\"\"\nIn triple-quoted literals, unescaped quotes are allowed (and are\nretained), except that three unescaped quotes in a row terminate the literal,\nif they are of the same kind ('\nor \"\n) used at the start:\n\"\"\"This string has \"quotes\" inside.\"\"\"\nUnescaped newlines are also allowed and retained:\n'''This triple-quoted string\ncontinues on the next line.'''\n2.5.2. 
String prefixes\u00b6\nString literals can have an optional prefix that influences how the content of the literal is parsed, for example:\nb\"data\"\nf'{result=}'\nThe allowed prefixes are:\nr\n: Raw stringf\n: Formatted string literal (\u201cf-string\u201d)t\n: Template string literal (\u201ct-string\u201d)u\n: No effect (allowed for backwards compatibility)\nSee the linked sections for details on each type.\nPrefixes are case-insensitive (for example, \u2018B\n\u2019 works the same as \u2018b\n\u2019).\nThe \u2018r\n\u2019 prefix can be combined with \u2018f\n\u2019, \u2018t\n\u2019 or \u2018b\n\u2019, so \u2018fr\n\u2019,\n\u2018rf\n\u2019, \u2018tr\n\u2019, \u2018rt\n\u2019, \u2018br\n\u2019, and \u2018rb\n\u2019 are also valid prefixes.\nAdded in version 3.3: The 'rb'\nprefix of raw bytes literals has been added as a synonym\nof 'br'\n.\nSupport for the unicode legacy literal (u'value'\n) was reintroduced\nto simplify the maintenance of dual Python 2.x and 3.x codebases.\nSee PEP 414 for more information.\n2.5.3. Formal grammar\u00b6\nString literals, except \u201cf-strings\u201d and \u201ct-strings\u201d, are described by the following lexical definitions.\nThese definitions use negative lookaheads (!\n)\nto indicate that an ending quote ends the literal.\nSTRING: [stringprefix\n] (stringcontent\n) stringprefix: <(\"r\" | \"u\" | \"b\" | \"br\" | \"rb\"), case-insensitive> stringcontent: | \"'''\" ( !\"'''\"longstringitem\n)* \"'''\" | '\"\"\"' ( !'\"\"\"'longstringitem\n)* '\"\"\"' | \"'\" ( !\"'\"stringitem\n)* \"'\" | '\"' ( !'\"'stringitem\n)* '\"' stringitem:stringchar\n|stringescapeseq\nstringchar: longstringitem:stringitem\n| newline stringescapeseq: \"\\\" \nNote that as in all lexical definitions, whitespace is significant. In particular, the prefix (if any) must be immediately followed by the starting quote.\n2.5.4. 
Escape sequences\u00b6\nUnless an \u2018r\n\u2019 or \u2018R\n\u2019 prefix is present, escape sequences in string and\nbytes literals are interpreted according to rules similar to those used by\nStandard C. The recognized escape sequences are:\n| Escape Sequence | Meaning |\n|---|---|\n| \\ (at end of line) | Ignored end of line |\n| \\\\ | Backslash |\n| \\' | Single quote |\n| \\\" | Double quote |\n| \\a | ASCII Bell (BEL) |\n| \\b | ASCII Backspace (BS) |\n| \\f | ASCII Formfeed (FF) |\n| \\n | ASCII Linefeed (LF) |\n| \\r | ASCII Carriage Return (CR) |\n| \\t | ASCII Horizontal Tab (TAB) |\n| \\v | ASCII Vertical Tab (VT) |\n| \\ooo | Octal character |\n| \\xhh | Hexadecimal character |\n| \\N{name} | Named Unicode character |\n| \\uxxxx, \\Uxxxxxxxx | Hexadecimal Unicode characters |\n2.5.4.1. Ignored end of line\u00b6\nA backslash can be added at the end of a line to ignore the newline:\n>>> 'This string will not include \\\n... backslashes or newline characters.'\n'This string will not include backslashes or newline characters.'\nThe same result can be achieved using triple-quoted strings, or parentheses and string literal concatenation.\n2.5.4.2. Escaped characters\u00b6\nTo include a backslash in a non-raw Python string\nliteral, it must be doubled. The \\\\\nescape sequence denotes a single\nbackslash character:\n>>> print('C:\\\\Program Files')\nC:\\Program Files\nSimilarly, the \\'\nand \\\"\nsequences denote the single and double\nquote character, respectively:\n>>> print('\\' and \\\"')\n' and \"\n2.5.4.3. Octal character\u00b6\nThe sequence \\ooo\ndenotes a character with the octal (base 8)\nvalue ooo:\n>>> '\\120'\n'P'\nUp to three octal digits (0 through 7) are accepted.\nIn a bytes literal, character means a byte with the given value. In a string literal, it means a Unicode character with the given value.\nChanged in version 3.11: Octal escapes with value larger than 0o377\n(255) produce a\nDeprecationWarning\n.\nChanged in version 3.12: Octal escapes with value larger than 0o377\n(255) produce a\nSyntaxWarning\n.\nIn a future Python version they will raise a SyntaxError\n.\n2.5.4.4. 
Hexadecimal character\u00b6\nThe sequence \\xhh\ndenotes a character with the hex (base 16)\nvalue hh:\n>>> '\\x50'\n'P'\nUnlike in Standard C, exactly two hex digits are required.\nIn a bytes literal, character means a byte with the given value. In a string literal, it means a Unicode character with the given value.\n2.5.4.5. Named Unicode character\u00b6\nThe sequence \\N{name}\ndenotes a Unicode character\nwith the given name:\n>>> '\\N{LATIN CAPITAL LETTER P}'\n'P'\n>>> '\\N{SNAKE}'\n'\ud83d\udc0d'\nThis sequence cannot appear in bytes literals.\nChanged in version 3.3: Support for name aliases has been added.\n2.5.4.6. Hexadecimal Unicode characters\u00b6\nThese sequences \\uxxxx\nand \\Uxxxxxxxx\ndenote the\nUnicode character with the given hex (base 16) value.\nExactly four digits are required for \\u\n; exactly eight digits are\nrequired for \\U\n.\nThe latter can encode any Unicode character.\n>>> '\\u1234'\n'\u1234'\n>>> '\\U0001f40d'\n'\ud83d\udc0d'\nThese sequences cannot appear in bytes literals.\n2.5.4.7. Unrecognized escape sequences\u00b6\nUnlike in Standard C, all unrecognized escape sequences are left in the string unchanged, that is, the backslash is left in the result:\n>>> print('\\q')\n\\q\n>>> list('\\q')\n['\\\\', 'q']\nNote that for bytes literals, the escape sequences only recognized in string\nliterals (\\N...\n, \\u...\n, \\U...\n) fall into the category of\nunrecognized escapes.\nChanged in version 3.6: Unrecognized escape sequences produce a DeprecationWarning\n.\nChanged in version 3.12: Unrecognized escape sequences produce a SyntaxWarning\n.\nIn a future Python version they will raise a SyntaxError\n.\n2.5.5. 
Bytes literals\u00b6\nBytes literals are always prefixed with \u2018b\n\u2019 or \u2018B\n\u2019; they produce an\ninstance of the bytes\ntype instead of the str\ntype.\nThey may only contain ASCII characters; bytes with a numeric value of 128\nor greater must be expressed with escape sequences (typically\nHexadecimal character or Octal character):\n>>> b'\\x89PNG\\r\\n\\x1a\\n'\nb'\\x89PNG\\r\\n\\x1a\\n'\n>>> list(b'\\x89PNG\\r\\n\\x1a\\n')\n[137, 80, 78, 71, 13, 10, 26, 10]\nSimilarly, a zero byte must be expressed using an escape sequence (typically\n\\0\nor \\x00\n).\n2.5.6. Raw string literals\u00b6\nBoth string and bytes literals may optionally be prefixed with a letter \u2018r\n\u2019\nor \u2018R\n\u2019; such constructs are called raw string literals\nand raw bytes literals respectively and treat backslashes as\nliteral characters.\nAs a result, in raw string literals, escape sequences\nare not treated specially:\n>>> r'\\d{4}-\\d{2}-\\d{2}'\n'\\\\d{4}-\\\\d{2}-\\\\d{2}'\nEven in a raw literal, quotes can be escaped with a backslash, but the\nbackslash remains in the result; for example, r\"\\\"\"\nis a valid string\nliteral consisting of two characters: a backslash and a double quote; r\"\\\"\nis not a valid string literal (even a raw string cannot end in an odd number of\nbackslashes). Specifically, a raw literal cannot end in a single backslash\n(since the backslash would escape the following quote character). Note also\nthat a single backslash followed by a newline is interpreted as those two\ncharacters as part of the literal, not as a line continuation.\n2.5.7. f-strings\u00b6\nAdded in version 3.6.\nChanged in version 3.8: Added the debug specifier (=\n)\nChanged in version 3.12: Many restrictions on expressions within f-strings have been removed. 
Notably, nested strings, comments, and backslashes are now permitted.\nA formatted string literal or f-string is a string literal\nthat is prefixed with \u2018f\n\u2019 or \u2018F\n\u2019.\nUnlike other string literals, f-strings do not have a constant value.\nThey may contain replacement fields delimited by curly braces {}\n.\nReplacement fields contain expressions which are evaluated at run time.\nFor example:\n>>> who = 'nobody'\n>>> nationality = 'Spanish'\n>>> f'{who.title()} expects the {nationality} Inquisition!'\n'Nobody expects the Spanish Inquisition!'\nAny doubled curly braces ({{\nor }}\n) outside replacement fields\nare replaced with the corresponding single curly brace:\n>>> print(f'{{...}}')\n{...}\nOther characters outside replacement fields are treated like in ordinary string literals. This means that escape sequences are decoded (except when a literal is also marked as a raw string), and newlines are possible in triple-quoted f-strings:\n>>> name = 'Galahad'\n>>> favorite_color = 'blue'\n>>> print(f'{name}:\\t{favorite_color}')\nGalahad: blue\n>>> print(rf\"C:\\Users\\{name}\")\nC:\\Users\\Galahad\n>>> print(f'''Three shall be the number of the counting\n... 
and the number of the counting shall be three.''')\nThree shall be the number of the counting\nand the number of the counting shall be three.\nExpressions in formatted string literals are treated like regular\nPython expressions.\nEach expression is evaluated in the context where the formatted string literal\nappears, in order from left to right.\nAn empty expression is not allowed, and both lambda\nand\nassignment expressions :=\nmust be surrounded by explicit parentheses:\n>>> f'{(half := 1/2)}, {half * 42}'\n'0.5, 21.0'\nReusing the outer f-string quoting type inside a replacement field is permitted:\n>>> a = dict(x=2)\n>>> f\"abc {a[\"x\"]} def\"\n'abc 2 def'\nBackslashes are also allowed in replacement fields and are evaluated the same way as in any other context:\n>>> a = [\"a\", \"b\", \"c\"]\n>>> print(f\"List a contains:\\n{\"\\n\".join(a)}\")\nList a contains:\na\nb\nc\nIt is possible to nest f-strings:\n>>> name = 'world'\n>>> f'Repeated:{f' hello {name}' * 3}'\n'Repeated: hello world hello world hello world'\nPortable Python programs should not use more than 5 levels of nesting.\nCPython implementation detail: CPython does not limit nesting of f-strings.\nReplacement expressions can contain newlines in both single-quoted and\ntriple-quoted f-strings and they can contain comments.\nEverything that comes after a #\ninside a replacement field\nis a comment (even closing braces and quotes).\nThis means that replacement fields with comments must be closed in a\ndifferent line:\n>>> a = 2\n>>> f\"abc{a # This comment }\" continues until the end of the line\n... 
+ 3}\"\n'abc5'\nAfter the expression, replacement fields may optionally contain:\na debug specifier \u2013 an equal sign (\n=\n), optionally surrounded by whitespace on one or both sides;a conversion specifier \u2013\n!s\n,!r\nor!a\n; and/ora format specifier prefixed with a colon (\n:\n).\nSee the Standard Library section on f-strings for details on how these fields are evaluated.\nAs that section explains, format specifiers are passed as the second argument\nto the format()\nfunction to format a replacement field value.\nFor example, they can be used to specify a field width and padding characters\nusing the Format Specification Mini-Language:\n>>> number = 14.3\n>>> f'{number:20.7f}'\n' 14.3000000'\nTop-level format specifiers may include nested replacement fields:\n>>> field_size = 20\n>>> precision = 7\n>>> f'{number:{field_size}.{precision}f}'\n' 14.3000000'\nThese nested fields may include their own conversion fields and format specifiers:\n>>> number = 3\n>>> f'{number:{field_size}}'\n' 3'\n>>> f'{number:{field_size:05}}'\n'00000000000000000003'\nHowever, these nested fields may not include more deeply nested replacement fields.\nFormatted string literals cannot be used as docstrings, even if they do not include expressions:\n>>> def foo():\n... f\"Not a docstring\"\n...\n>>> print(foo.__doc__)\nNone\nSee also\nPEP 498 \u2013 Literal String Interpolation\nPEP 701 \u2013 Syntactic formalization of f-strings\nstr.format()\n, which uses a related format string mechanism.\n2.5.8. t-strings\u00b6\nAdded in version 3.14.\nA template string literal or t-string is a string literal\nthat is prefixed with \u2018t\n\u2019 or \u2018T\n\u2019.\nThese strings follow the same syntax rules as\nformatted string literals.\nFor differences in evaluation rules, see the\nStandard Library section on t-strings\n2.5.9. 
Formal grammar for f-strings\u00b6\nF-strings are handled partly by the lexical analyzer, which produces the\ntokens FSTRING_START\n, FSTRING_MIDDLE\nand FSTRING_END\n, and partly by the parser, which handles\nexpressions in the replacement field.\nThe exact way the work is split is a CPython implementation detail.\nCorrespondingly, the f-string grammar is a mix of lexical and syntactic definitions.\nWhitespace is significant in these situations:\nThere may be no whitespace in\nFSTRING_START\n(between the prefix and quote).Whitespace in\nFSTRING_MIDDLE\nis part of the literal string contents.In\nfstring_replacement_field\n, iff_debug_specifier\nis present, all whitespace after the opening brace until thef_debug_specifier\n, as well as whitespace immediately followingf_debug_specifier\n, is retained as part of the expression.CPython implementation detail: The expression is not handled in the tokenization phase; it is retrieved from the source code using locations of the\n{\ntoken and the token after=\n.\nThe FSTRING_MIDDLE\ndefinition uses\nnegative lookaheads (!\n)\nto indicate special characters (backslash, newline, {\n, }\n) and\nsequences (f_quote\n).\nfstring:FSTRING_START\nfstring_middle\n*FSTRING_END\nFSTRING_START:fstringprefix\n(\"'\" | '\"' | \"'''\" | '\"\"\"') FSTRING_END:f_quote\nfstringprefix: <(\"f\" | \"fr\" | \"rf\"), case-insensitive> f_debug_specifier: '=' f_quote: fstring_middle: |fstring_replacement_field\n|FSTRING_MIDDLE\nFSTRING_MIDDLE: | (!\"\\\" !newline\n!'{' !'}' !f_quote\n)source_character\n|stringescapeseq\n| \"{{\" | \"}}\" | fstring_replacement_field: | '{'f_expression\n[f_debug_specifier\n] [fstring_conversion\n] [fstring_full_format_spec\n] '}' fstring_conversion: | \"!\" (\"s\" | \"r\" | \"a\") fstring_full_format_spec: | ':'fstring_format_spec\n* fstring_format_spec: |FSTRING_MIDDLE\n|fstring_replacement_field\nf_expression: | ','.(conditional_expression\n| \"*\"or_expr\n)+ [\",\"] |yield_expression\nNote\nIn the above grammar 
snippet, the f_quote\nand FSTRING_MIDDLE\nrules\nare context-sensitive \u2013 they depend on the contents of FSTRING_START\nof the nearest enclosing fstring\n.\nConstructing a more traditional formal grammar from this template is left as an exercise for the reader.\nThe grammar for t-strings is identical to the one for f-strings, with t instead of f at the beginning of rule and token names and in the prefix.\ntstring: TSTRING_START tstring_middle* TSTRING_END \n2.6. Numeric literals\u00b6\nNUMBER\ntokens represent numeric literals, of which there are\nthree types: integers, floating-point numbers, and imaginary numbers.\nNUMBER:integer\n|floatnumber\n|imagnumber\nThe numeric value of a numeric literal is the same as if it were passed as a\nstring to the int\n, float\nor complex\nclass\nconstructor, respectively.\nNote that not all valid inputs for those constructors are also valid literals.\nNumeric literals do not include a sign; a phrase like -1\nis\nactually an expression composed of the unary operator \u2018-\n\u2019 and the literal\n1\n.\n2.6.1. Integer literals\u00b6\nInteger literals denote whole numbers. For example:\n7\n3\n2147483647\nThere is no limit for the length of integer literals apart from what can be stored in available memory:\n7922816251426433759354395033679228162514264337593543950336\nUnderscores can be used to group digits for enhanced readability, and are ignored for determining the numeric value of the literal. For example, the following literals are equivalent:\n100_000_000_000\n100000000000\n1_00_00_00_00_000\nUnderscores can only occur between digits.\nFor example, _123\n, 321_\n, and 123__321\nare not valid literals.\nIntegers can be specified in binary (base 2), octal (base 8), or hexadecimal\n(base 16) using the prefixes 0b\n, 0o\nand 0x\n, respectively.\nHexadecimal digits 10 through 15 are represented by letters A\n-F\n,\ncase-insensitive. 
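The underscore-grouping and base-prefix rules described above can be checked interactively. A small sketch (the specific literals are illustrative, echoing the forms discussed in the text):

```python
# Underscores are ignored when determining the value of an integer literal.
assert 100_000_000_000 == 100000000000 == 1_00_00_00_00_000

# Base prefixes: binary, octal, and hexadecimal literals all denote plain ints.
assert 0b100110111 == 311
assert 0o177 == 127
assert 0xdeadbeef == 0xDead_Beef == 3735928559

# A single underscore may directly follow the base specifier.
assert 0x_1f == 31

print("all integer literal checks passed")
```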
For example:\n0b100110111\n0b_1110_0101\n0o177\n0o377\n0xdeadbeef\n0xDead_Beef\nAn underscore can follow the base specifier.\nFor example, 0x_1f\nis a valid literal, but 0_x1f\nand 0x__1f\nare\nnot.\nLeading zeros in a non-zero decimal number are not allowed.\nFor example, 0123\nis not a valid literal.\nThis is for disambiguation with C-style octal literals, which Python used\nbefore version 3.0.\nFormally, integer literals are described by the following lexical definitions:\ninteger:decinteger\n|bininteger\n|octinteger\n|hexinteger\n|zerointeger\ndecinteger:nonzerodigit\n([\"_\"]digit\n)* bininteger: \"0\" (\"b\" | \"B\") ([\"_\"]bindigit\n)+ octinteger: \"0\" (\"o\" | \"O\") ([\"_\"]octdigit\n)+ hexinteger: \"0\" (\"x\" | \"X\") ([\"_\"]hexdigit\n)+ zerointeger: \"0\"+ ([\"_\"] \"0\")* nonzerodigit: \"1\"...\"9\" digit: \"0\"...\"9\" bindigit: \"0\" | \"1\" octdigit: \"0\"...\"7\" hexdigit:digit\n| \"a\"...\"f\" | \"A\"...\"F\"\nChanged in version 3.6: Underscores are now allowed for grouping purposes in literals.\n2.6.2. Floating-point literals\u00b6\nFloating-point (float) literals, such as 3.14\nor 1.5\n, denote\napproximations of real numbers.\nThey consist of integer and fraction parts, each composed of decimal digits.\nThe parts are separated by a decimal point, .\n:\n2.71828\n4.0\nUnlike in integer literals, leading zeros are allowed.\nFor example, 077.010\nis legal, and denotes the same number as 77.01\n.\nAs in integer literals, single underscores may occur between digits to help readability:\n96_485.332_123\n3.14_15_93\nEither of these parts, but not both, can be empty. For example:\n10. 
# (equivalent to 10.0)\n.001 # (equivalent to 0.001)\nOptionally, the integer and fraction may be followed by an exponent:\nthe letter e\nor E\n, followed by an optional sign, +\nor -\n,\nand a number in the same format as the integer and fraction parts.\nThe e\nor E\nrepresents \u201ctimes ten raised to the power of\u201d:\n1.0e3 # (represents 1.0\u00d710\u00b3, or 1000.0)\n1.166e-5 # (represents 1.166\u00d710\u207b\u2075, or 0.00001166)\n6.02214076e+23 # (represents 6.02214076\u00d710\u00b2\u00b3, or 602214076000000000000000.)\nIn floats with only integer and exponent parts, the decimal point may be omitted:\n1e3 # (equivalent to 1.e3 and 1.0e3)\n0e0 # (equivalent to 0.)\nFormally, floating-point literals are described by the following lexical definitions:\nfloatnumber: |digitpart\n\".\" [digitpart\n] [exponent\n] | \".\"digitpart\n[exponent\n] |digitpart\nexponent\ndigitpart:digit\n([\"_\"]digit\n)* exponent: (\"e\" | \"E\") [\"+\" | \"-\"]digitpart\nChanged in version 3.6: Underscores are now allowed for grouping purposes in literals.\n2.6.3. Imaginary literals\u00b6\nPython has complex number objects, but no complex literals. 
Instead, imaginary literals denote complex numbers with a zero real part.\nFor example, in math, the complex number 3+4.2i is written\nas the real number 3 added to the imaginary number 4.2i.\nPython uses a similar syntax, except the imaginary unit is written as j\nrather than i:\n3+4.2j\nThis is an expression composed\nof the integer literal 3\n,\nthe operator \u2018+\n\u2019,\nand the imaginary literal 4.2j\n.\nSince these are three separate tokens, whitespace is allowed between them:\n3 + 4.2j\nNo whitespace is allowed within each token.\nIn particular, the j\nsuffix, may not be separated from the number\nbefore it.\nThe number before the j\nhas the same syntax as a floating-point literal.\nThus, the following are valid imaginary literals:\n4.2j\n3.14j\n10.j\n.001j\n1e100j\n3.14e-10j\n3.14_15_93j\nUnlike in a floating-point literal the decimal point can be omitted if the imaginary number only has an integer part. The number is still evaluated as a floating-point number, not an integer:\n10j\n0j\n1000000000000000000000000j # equivalent to 1e+24j\nThe j\nsuffix is case-insensitive.\nThat means you can use J\ninstead:\n3.14J # equivalent to 3.14j\nFormally, imaginary literals are described by the following lexical definition:\nimagnumber: (floatnumber\n|digitpart\n) (\"j\" | \"J\")\n2.7. 
Operators and delimiters\u00b6\nThe following grammar defines operator and delimiter tokens,\nthat is, the generic OP\ntoken type.\nA list of these tokens and their names\nis also available in the token\nmodule documentation.\nOP: | assignment_operator | bitwise_operator | comparison_operator | enclosing_delimiter | other_delimiter | arithmetic_operator | \"...\" | other_op assignment_operator: \"+=\" | \"-=\" | \"*=\" | \"**=\" | \"/=\" | \"//=\" | \"%=\" | \"&=\" | \"|=\" | \"^=\" | \"<<=\" | \">>=\" | \"@=\" | \":=\" bitwise_operator: \"&\" | \"|\" | \"^\" | \"~\" | \"<<\" | \">>\" comparison_operator: \"<=\" | \">=\" | \"<\" | \">\" | \"==\" | \"!=\" enclosing_delimiter: \"(\" | \")\" | \"[\" | \"]\" | \"{\" | \"}\" other_delimiter: \",\" | \":\" | \"!\" | \";\" | \"=\" | \"->\" arithmetic_operator: \"+\" | \"-\" | \"**\" | \"*\" | \"//\" | \"/\" | \"%\" other_op: \".\" | \"@\"\nNote\nGenerally, operators are used to combine expressions, while delimiters serve other purposes. However, there is no clear, formal distinction between the two categories.\nSome tokens can serve as either operators or delimiters, depending on usage.\nFor example, *\nis both the multiplication operator and a delimiter used\nfor sequence unpacking, and @\nis both the matrix multiplication operator and\na delimiter that introduces decorators.\nFor some tokens, the distinction is unclear.\nFor example, some people consider .\n, (\n, and )\nto be delimiters, while others\nsee them as part of the getattr()\noperator and the function call operator(s).\nSome of Python\u2019s operators, like and\n, or\n, and not in\n, use\nkeyword tokens rather than \u201csymbols\u201d (operator tokens).\nA sequence of three consecutive periods (...\n) has a special\nmeaning as an Ellipsis\nliteral.\n2.1.3. Comments\u00b6\nA comment starts with a hash character (\n#\n) that is not part of a string literal, and ends at the end of the physical line. 
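These token categories can be observed directly with the standard library's tokenize module. A minimal sketch (the source line being tokenized is illustrative):

```python
import io
import tokenize

# Tokenize one line that mixes a name, operators/delimiters, and a comment.
source = "total += n * 2  # running sum\n"
tokens = [
    (tokenize.tok_name[tok.type], tok.string)
    for tok in tokenize.generate_tokens(io.StringIO(source).readline)
]

# Augmented assignment and multiplication both surface as generic OP tokens,
# while the trailing comment becomes a single COMMENT token.
print(tokens)
```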
A comment signifies the end of the logical line unless the implicit line joining rules are invoked. Comments are ignored by the syntax.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 9476} +{"url": "https://docs.python.org/3/howto/instrumentation.html", "title": "Instrumenting CPython with DTrace and SystemTap", "content": "Instrumenting CPython with DTrace and SystemTap\u00b6\n- author:\nDavid Malcolm\n- author:\n\u0141ukasz Langa\nDTrace and SystemTap 
are monitoring tools, each providing a way to inspect what the processes on a computer system are doing. They both use domain-specific languages allowing a user to write scripts which:\nfilter which processes are to be observed\ngather data from the processes of interest\ngenerate reports on the data\nAs of Python 3.6, CPython can be built with embedded \u201cmarkers\u201d, also known as \u201cprobes\u201d, that can be observed by a DTrace or SystemTap script, making it easier to monitor what the CPython processes on a system are doing.\nCPython implementation detail: DTrace markers are implementation details of the CPython interpreter. No guarantees are made about probe compatibility between versions of CPython. DTrace scripts can stop working or work incorrectly without warning when changing CPython versions.\nEnabling the static markers\u00b6\nmacOS comes with built-in support for DTrace. On Linux, in order to build CPython with the embedded markers for SystemTap, the SystemTap development tools must be installed.\nOn a Linux machine, this can be done via:\n$ yum install systemtap-sdt-devel\nor:\n$ sudo apt-get install systemtap-sdt-dev\nCPython must then be configured with the --with-dtrace option\n:\nchecking for --with-dtrace... yes\nOn macOS, you can list available DTrace probes by running a Python process in the background and listing all probes made available by the Python provider:\n$ python3.6 -q &\n$ sudo dtrace -l -P python$! 
# or: dtrace -l -m python3.6\nID PROVIDER MODULE FUNCTION NAME\n29564 python18035 python3.6 _PyEval_EvalFrameDefault function-entry\n29565 python18035 python3.6 dtrace_function_entry function-entry\n29566 python18035 python3.6 _PyEval_EvalFrameDefault function-return\n29567 python18035 python3.6 dtrace_function_return function-return\n29568 python18035 python3.6 collect gc-done\n29569 python18035 python3.6 collect gc-start\n29570 python18035 python3.6 _PyEval_EvalFrameDefault line\n29571 python18035 python3.6 maybe_dtrace_line line\nOn Linux, you can verify if the SystemTap static markers are present in the built binary by seeing if it contains a \u201c.note.stapsdt\u201d section.\n$ readelf -S ./python | grep .note.stapsdt\n[30] .note.stapsdt NOTE 0000000000000000 00308d78\nIf you\u2019ve built Python as a shared library\n(with the --enable-shared\nconfigure option), you\nneed to look instead within the shared library. For example:\n$ readelf -S libpython3.3dm.so.1.0 | grep .note.stapsdt\n[29] .note.stapsdt NOTE 0000000000000000 00365b68\nSufficiently modern readelf can print the metadata:\n$ readelf -n ./python\nDisplaying notes found at file offset 0x00000254 with length 0x00000020:\nOwner Data size Description\nGNU 0x00000010 NT_GNU_ABI_TAG (ABI version tag)\nOS: Linux, ABI: 2.6.32\nDisplaying notes found at file offset 0x00000274 with length 0x00000024:\nOwner Data size Description\nGNU 0x00000014 NT_GNU_BUILD_ID (unique build ID bitstring)\nBuild ID: df924a2b08a7e89f6e11251d4602022977af2670\nDisplaying notes found at file offset 0x002d6c30 with length 0x00000144:\nOwner Data size Description\nstapsdt 0x00000031 NT_STAPSDT (SystemTap probe descriptors)\nProvider: python\nName: gc__start\nLocation: 0x00000000004371c3, Base: 0x0000000000630ce2, Semaphore: 0x00000000008d6bf6\nArguments: -4@%ebx\nstapsdt 0x00000030 NT_STAPSDT (SystemTap probe descriptors)\nProvider: python\nName: gc__done\nLocation: 0x00000000004374e1, Base: 0x0000000000630ce2, Semaphore: 
0x00000000008d6bf8\nArguments: -8@%rax\nstapsdt 0x00000045 NT_STAPSDT (SystemTap probe descriptors)\nProvider: python\nName: function__entry\nLocation: 0x000000000053db6c, Base: 0x0000000000630ce2, Semaphore: 0x00000000008d6be8\nArguments: 8@%rbp 8@%r12 -4@%eax\nstapsdt 0x00000046 NT_STAPSDT (SystemTap probe descriptors)\nProvider: python\nName: function__return\nLocation: 0x000000000053dba8, Base: 0x0000000000630ce2, Semaphore: 0x00000000008d6bea\nArguments: 8@%rbp 8@%r12 -4@%eax\nThe above metadata contains information for SystemTap describing how it can patch strategically placed machine code instructions to enable the tracing hooks used by a SystemTap script.\nStatic DTrace probes\u00b6\nThe following example DTrace script can be used to show the call/return hierarchy of a Python script, only tracing within the invocation of a function called \u201cstart\u201d. In other words, import-time function invocations are not going to be listed:\nself int indent;\npython$target:::function-entry\n/copyinstr(arg1) == \"start\"/\n{\nself->trace = 1;\n}\npython$target:::function-entry\n/self->trace/\n{\nprintf(\"%d\\t%*s:\", timestamp, 15, probename);\nprintf(\"%*s\", self->indent, \"\");\nprintf(\"%s:%s:%d\\n\", basename(copyinstr(arg0)), copyinstr(arg1), arg2);\nself->indent++;\n}\npython$target:::function-return\n/self->trace/\n{\nself->indent--;\nprintf(\"%d\\t%*s:\", timestamp, 15, probename);\nprintf(\"%*s\", self->indent, \"\");\nprintf(\"%s:%s:%d\\n\", basename(copyinstr(arg0)), copyinstr(arg1), arg2);\n}\npython$target:::function-return\n/copyinstr(arg1) == \"start\"/\n{\nself->trace = 0;\n}\nIt can be invoked like this:\n$ sudo dtrace -q -s call_stack.d -c \"python3.6 script.py\"\nThe output looks like this:\n156641360502280 function-entry:call_stack.py:start:23\n156641360518804 function-entry: call_stack.py:function_1:1\n156641360532797 function-entry: call_stack.py:function_3:9\n156641360546807 function-return: call_stack.py:function_3:10\n156641360563367 
function-return: call_stack.py:function_1:2\n156641360578365 function-entry: call_stack.py:function_2:5\n156641360591757 function-entry: call_stack.py:function_1:1\n156641360605556 function-entry: call_stack.py:function_3:9\n156641360617482 function-return: call_stack.py:function_3:10\n156641360629814 function-return: call_stack.py:function_1:2\n156641360642285 function-return: call_stack.py:function_2:6\n156641360656770 function-entry: call_stack.py:function_3:9\n156641360669707 function-return: call_stack.py:function_3:10\n156641360687853 function-entry: call_stack.py:function_4:13\n156641360700719 function-return: call_stack.py:function_4:14\n156641360719640 function-entry: call_stack.py:function_5:18\n156641360732567 function-return: call_stack.py:function_5:21\n156641360747370 function-return:call_stack.py:start:28\nStatic SystemTap markers\u00b6\nThe low-level way to use the SystemTap integration is to use the static markers directly. This requires you to explicitly state the binary file containing them.\nFor example, this SystemTap script can be used to show the call/return hierarchy of a Python script:\nprobe process(\"python\").mark(\"function__entry\") {\nfilename = user_string($arg1);\nfuncname = user_string($arg2);\nlineno = $arg3;\nprintf(\"%s => %s in %s:%d\\\\n\",\nthread_indent(1), funcname, filename, lineno);\n}\nprobe process(\"python\").mark(\"function__return\") {\nfilename = user_string($arg1);\nfuncname = user_string($arg2);\nlineno = $arg3;\nprintf(\"%s <= %s in %s:%d\\\\n\",\nthread_indent(-1), funcname, filename, lineno);\n}\nIt can be invoked like this:\n$ stap \\\nshow-call-hierarchy.stp \\\n-c \"./python test.py\"\nThe output looks like this:\n11408 python(8274): => __contains__ in Lib/_abcoll.py:362\n11414 python(8274): => __getitem__ in Lib/os.py:425\n11418 python(8274): => encode in Lib/os.py:490\n11424 python(8274): <= encode in Lib/os.py:493\n11428 python(8274): <= __getitem__ in Lib/os.py:426\n11433 python(8274): <= __contains__ in 
Lib/_abcoll.py:366\nwhere the columns are:\ntime in microseconds since start of script\nname of executable\nPID of process\nand the remainder indicates the call/return hierarchy as the script executes.\nFor a --enable-shared\nbuild of CPython, the markers are contained within the\nlibpython shared library, and the probe\u2019s dotted path needs to reflect this. For\nexample, this line from the above example:\nprobe process(\"python\").mark(\"function__entry\") {\nshould instead read:\nprobe process(\"python\").library(\"libpython3.6dm.so.1.0\").mark(\"function__entry\") {\n(assuming a debug build of CPython 3.6)\nAvailable static markers\u00b6\n- function__entry(str filename, str funcname, int lineno)\nThis marker indicates that execution of a Python function has begun. It is only triggered for pure-Python (bytecode) functions.\nThe filename, function name, and line number are provided back to the tracing script as positional arguments, which must be accessed using\n$arg1\n,$arg2\n,$arg3\n:$arg1\n:(const char *)\nfilename, accessible usinguser_string($arg1)\n$arg2\n:(const char *)\nfunction name, accessible usinguser_string($arg2)\n$arg3\n:int\nline number\n- function__return(str filename, str funcname, int lineno)\nThis marker is the converse of\nfunction__entry()\n, and indicates that execution of a Python function has ended (either viareturn\n, or via an exception). It is only triggered for pure-Python (bytecode) functions.The arguments are the same as for\nfunction__entry()\n- line(str filename, str funcname, int lineno)\nThis marker indicates a Python line is about to be executed. It is the equivalent of line-by-line tracing with a Python profiler. 
It is not triggered within C functions.\nThe arguments are the same as for\nfunction__entry()\n.\n- gc__start(int generation)\nFires when the Python interpreter starts a garbage collection cycle.\narg0\nis the generation to scan, likegc.collect()\n.\n- gc__done(long collected)\nFires when the Python interpreter finishes a garbage collection cycle.\narg0\nis the number of collected objects.\n- import__find__load__start(str modulename)\nFires before\nimportlib\nattempts to find and load the module.arg0\nis the module name.Added in version 3.7.\n- import__find__load__done(str modulename, int found)\nFires after\nimportlib\n\u2019s find_and_load function is called.arg0\nis the module name,arg1\nindicates if module was successfully loaded.Added in version 3.7.\n- audit(str event, void *tuple)\nFires when\nsys.audit()\norPySys_Audit()\nis called.arg0\nis the event name as C string,arg1\nis aPyObject\npointer to a tuple object.Added in version 3.8.\nSystemTap Tapsets\u00b6\nThe higher-level way to use the SystemTap integration is to use a \u201ctapset\u201d: SystemTap\u2019s equivalent of a library, which hides some of the lower-level details of the static markers.\nHere is a tapset file, based on a non-shared build of CPython:\n/*\nProvide a higher-level wrapping around the function__entry and\nfunction__return markers:\n\\*/\nprobe python.function.entry = process(\"python\").mark(\"function__entry\")\n{\nfilename = user_string($arg1);\nfuncname = user_string($arg2);\nlineno = $arg3;\nframeptr = $arg4\n}\nprobe python.function.return = process(\"python\").mark(\"function__return\")\n{\nfilename = user_string($arg1);\nfuncname = user_string($arg2);\nlineno = $arg3;\nframeptr = $arg4\n}\nIf this file is installed in SystemTap\u2019s tapset directory (e.g.\n/usr/share/systemtap/tapset\n), then these additional probepoints become\navailable:\n- python.function.entry(str filename, str funcname, int lineno, frameptr)\nThis probe point indicates that execution of a Python 
function has begun. It is only triggered for pure-Python (bytecode) functions.\n- python.function.return(str filename, str funcname, int lineno, frameptr)\nThis probe point is the converse of\npython.function.entry\n, and indicates that execution of a Python function has ended (either via return\n, or via an exception). It is only triggered for pure-Python (bytecode) functions.\nExamples\u00b6\nThis SystemTap script uses the tapset above to more cleanly implement the example given above of tracing the Python function-call hierarchy, without needing to directly name the static markers:\nprobe python.function.entry\n{\nprintf(\"%s => %s in %s:%d\\n\",\nthread_indent(1), funcname, filename, lineno);\n}\nprobe python.function.return\n{\nprintf(\"%s <= %s in %s:%d\\n\",\nthread_indent(-1), funcname, filename, lineno);\n}\nThe following script uses the tapset above to provide a top-like view of all running CPython code, showing the top 20 most frequently entered bytecode frames, each second, across the whole system:\nglobal fn_calls;\nprobe python.function.entry\n{\nfn_calls[pid(), filename, funcname, lineno] += 1;\n}\nprobe timer.ms(1000) {\nprintf(\"\\033[2J\\033[1;1H\") /* clear screen */\nprintf(\"%6s %80s %6s %30s %6s\\n\",\n\"PID\", \"FILENAME\", \"LINE\", \"FUNCTION\", \"CALLS\")\nforeach ([pid, filename, funcname, lineno] in fn_calls- limit 20) {\nprintf(\"%6d %80s %6d %30s %6d\\n\",\npid, filename, lineno, funcname,\nfn_calls[pid, filename, funcname, lineno]);\n}\ndelete fn_calls;\n}
low-level problems such as crashes or deadlocks, a low-level debugger, such as GDB, is useful to diagnose and correct the issue. By default, GDB (or any of its front-ends) doesn\u2019t support high-level information specific to the CPython interpreter.\nThe python-gdb.py\nextension adds CPython interpreter information to GDB.\nThe extension helps introspect the stack of currently executing Python functions.\nGiven a Python object represented by a PyObject* pointer,\nthe extension surfaces the type and value of the object.\nDevelopers who are working on CPython extensions or tinkering with parts\nof CPython that are written in C can use this document to learn how to use the\npython-gdb.py\nextension with GDB.\nNote\nThis document assumes that you are familiar with the basics of GDB and the CPython C API. It consolidates guidance from the devguide and the Python wiki.\nPrerequisites\u00b6\nYou need to have:\nGDB 7 or later. (For earlier versions of GDB, see\nMisc/gdbinit\nin the sources of Python 3.11 or earlier.)GDB-compatible debugging information for Python and any extension you are debugging.\nThe\npython-gdb.py\nextension.\nThe extension is built with Python, but might be distributed separately or not at all. Below, we include tips for a few common systems as examples. 
Note that even if the instructions match your system, they might be outdated.\nSetup with Python built from source\u00b6\nWhen you build CPython from source, debugging information should be available,\nand the build should add a python-gdb.py\nfile to the root directory of\nyour repository.\nTo activate support, you must add the directory containing python-gdb.py\nto GDB\u2019s \u201cauto-load-safe-path\u201d.\nIf you haven\u2019t done this, recent versions of GDB will print out a warning\nwith instructions on how to do this.\nNote\nIf you do not see instructions for your version of GDB, put this in your\nconfiguration file (~/.gdbinit\nor ~/.config/gdb/gdbinit\n):\nadd-auto-load-safe-path /path/to/cpython\nYou can also add multiple paths, separated by :\n.\nSetup for Python from a Linux distro\u00b6\nMost Linux systems provide debug information for the system Python\nin a package called python-debuginfo\n, python-dbg\nor similar.\nFor example:\nFedora:\nsudo dnf install gdb sudo dnf debuginfo-install python3\nUbuntu:\nsudo apt install gdb python3-dbg\nOn several recent Linux systems, GDB can download debugging symbols\nautomatically using debuginfod.\nHowever, this will not install the python-gdb.py\nextension;\nyou generally do need to install the debug info package separately.\nUsing the Debug build and Development mode\u00b6\nFor easier debugging, you might want to:\nUse a debug build of Python. (When building from source, use\nconfigure --with-pydebug\n. On Linux distros, install and run a package likepython-debug\norpython-dbg\n, if available.)Use the runtime development mode (\n-X dev\n).\nBoth enable extra assertions and disable some optimizations. 
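Whether either option is active can be confirmed from inside the interpreter before starting a debug session. A minimal sketch, assuming a standard CPython build:

```python
import sys

# sys.flags.dev_mode is true when Python was started with -X dev
# (or with PYTHONDEVMODE=1 in the environment).
print("development mode:", sys.flags.dev_mode)

# Debug builds (configure --with-pydebug) expose extra introspection hooks
# such as sys.gettotalrefcount; a release build lacks that attribute.
print("debug build:", hasattr(sys, "gettotalrefcount"))
```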
Sometimes this hides the bug you are trying to find, but in most cases they make the process easier.\nUsing the python-gdb\nextension\u00b6\nWhen the extension is loaded, it provides two main features: pretty printers for Python values, and additional commands.\nPretty-printers\u00b6\nThis is what a GDB backtrace looks like (truncated) when this extension is enabled:\n#0 0x000000000041a6b1 in PyObject_Malloc (nbytes=Cannot access memory at address 0x7fffff7fefe8\n) at Objects/obmalloc.c:748\n#1 0x000000000041b7c0 in _PyObject_DebugMallocApi (id=111 'o', nbytes=24) at Objects/obmalloc.c:1445\n#2 0x000000000041b717 in _PyObject_DebugMalloc (nbytes=24) at Objects/obmalloc.c:1412\n#3 0x000000000044060a in _PyUnicode_New (length=11) at Objects/unicodeobject.c:346\n#4 0x00000000004466aa in PyUnicodeUCS2_DecodeUTF8Stateful (s=0x5c2b8d \"__lltrace__\", size=11, errors=0x0, consumed=\n0x0) at Objects/unicodeobject.c:2531\n#5 0x0000000000446647 in PyUnicodeUCS2_DecodeUTF8 (s=0x5c2b8d \"__lltrace__\", size=11, errors=0x0)\nat Objects/unicodeobject.c:2495\n#6 0x0000000000440d1b in PyUnicodeUCS2_FromStringAndSize (u=0x5c2b8d \"__lltrace__\", size=11)\nat Objects/unicodeobject.c:551\n#7 0x0000000000440d94 in PyUnicodeUCS2_FromString (u=0x5c2b8d \"__lltrace__\") at Objects/unicodeobject.c:569\n#8 0x0000000000584abd in PyDict_GetItemString (v=\n{'Yuck': , '__builtins__': , '__file__': 'Lib/test/crashers/nasty_eq_vs_dict.py', '__package__': None, 'y': , 'dict': {0: 0, 1: 1, 2: 2, 3: 3}, '__cached__': None, '__name__': '__main__', 'z': , '__doc__': None}, key=\n0x5c2b8d \"__lltrace__\") at Objects/dictobject.c:2171\nNotice how the dictionary argument to PyDict_GetItemString\nis displayed\nas its repr()\n, rather than an opaque PyObject *\npointer.\nThe extension works by supplying a custom printing routine for values of type\nPyObject *\n. If you need to access lower-level details of an object, then\ncast the value to a pointer of the appropriate type. 
For example:\n(gdb) p globals\n$1 = {'__builtins__': , '__name__':\n'__main__', 'ctypes': , '__doc__': None,\n'__package__': None}\n(gdb) p *(PyDictObject*)globals\n$2 = {ob_refcnt = 3, ob_type = 0x3dbdf85820, ma_fill = 5, ma_used = 5,\nma_mask = 7, ma_table = 0x63d0f8, ma_lookup = 0x3dbdc7ea70\n, ma_smalltable = {{me_hash = 7065186196740147912,\nme_key = '__builtins__', me_value = },\n{me_hash = -368181376027291943, me_key = '__name__',\nme_value ='__main__'}, {me_hash = 0, me_key = 0x0, me_value = 0x0},\n{me_hash = 0, me_key = 0x0, me_value = 0x0},\n{me_hash = -9177857982131165996, me_key = 'ctypes',\nme_value = },\n{me_hash = -8518757509529533123, me_key = '__doc__', me_value = None},\n{me_hash = 0, me_key = 0x0, me_value = 0x0}, {\nme_hash = 6614918939584953775, me_key = '__package__', me_value = None}}}\nNote that the pretty-printers do not actually call repr()\n.\nFor basic types, they try to match its result closely.\nAn area that can be confusing is that the custom printer for some types look a\nlot like GDB\u2019s built-in printer for standard types. 
For example, the\npretty-printer for a Python int\n(PyLongObject*)\ngives a representation that is not distinguishable from one of a\nregular machine-level integer:\n(gdb) p some_machine_integer\n$3 = 42\n(gdb) p some_python_integer\n$4 = 42\nThe internal structure can be revealed with a cast to PyLongObject*:\n(gdb) p *(PyLongObject*)some_python_integer\n$5 = {ob_base = {ob_base = {ob_refcnt = 8, ob_type = 0x3dad39f5e0}, ob_size = 1},\nob_digit = {42}}\nA similar confusion can arise with the str\ntype, where the output looks a\nlot like gdb\u2019s built-in printer for char *\n:\n(gdb) p ptr_to_python_str\n$6 = '__builtins__'\nThe pretty-printer for str\ninstances defaults to using single-quotes (as\ndoes Python\u2019s repr\nfor strings) whereas the standard printer for char *\nvalues uses double-quotes and contains a hexadecimal address:\n(gdb) p ptr_to_char_star\n$7 = 0x6d72c0 \"hello world\"\nAgain, the implementation details can be revealed with a cast to PyUnicodeObject*:\n(gdb) p *(PyUnicodeObject*)$6\n$8 = {ob_base = {ob_refcnt = 33, ob_type = 0x3dad3a95a0}, length = 12,\nstr = 0x7ffff2128500, hash = 7065186196740147912, state = 1, defenc = 0x0}\npy-list\n\u00b6\nThe extension adds a\npy-list\ncommand, which lists the Python source code (if any) for the current frame in the selected thread. 
The current line is marked with a \u201c>\u201d:\n(gdb) py-list\n901 if options.profile:\n902 options.profile = False\n903 profile_me()\n904 return\n905\n>906 u = UI()\n907 if not u.quit:\n908 try:\n909 gtk.main()\n910 except KeyboardInterrupt:\n911 # properly quit on a keyboard interrupt...\nUse py-list START\nto list at a different line number within the Python source, and py-list START,END\nto list a specific range of lines within the Python source.\npy-up\nand py-down\n\u00b6\nThe py-up\nand py-down\ncommands are analogous to GDB\u2019s regular up\nand down\ncommands, but try to move at the level of CPython frames, rather than C frames.\nGDB is not always able to read the relevant frame information, depending on the optimization level with which CPython was compiled. Internally, the commands look for C frames that are executing the default frame evaluation function (that is, the core bytecode interpreter loop within CPython) and look up the value of the related\nPyFrameObject *\n. They emit the frame number (at the C level) within the thread.\nFor example:\n(gdb) py-up\n#37 Frame 0x9420b04, for file /usr/lib/python2.6/site-packages/gnome_sudoku/main.py, line 906, in start_game ()\nu = UI()\n(gdb) py-up\n#40 Frame 0x948e82c, for file /usr/lib/python2.6/site-packages/gnome_sudoku/gnome_sudoku.py, line 22, in start_game(main=)\nmain.start_game()\n(gdb) py-up\nUnable to find an older python frame\nso we\u2019re at the top of the Python stack.\nThe frame numbers correspond to those displayed by GDB\u2019s standard\nbacktrace\ncommand.
The command skips C frames which are not executing Python code.Going back down:\n(gdb) py-down #37 Frame 0x9420b04, for file /usr/lib/python2.6/site-packages/gnome_sudoku/main.py, line 906, in start_game () u = UI() (gdb) py-down #34 (unable to read python frame information) (gdb) py-down #23 (unable to read python frame information) (gdb) py-down #19 (unable to read python frame information) (gdb) py-down #14 Frame 0x99262ac, for file /usr/lib/python2.6/site-packages/gnome_sudoku/game_selector.py, line 201, in run_swallowed_dialog (self=, puzzle=None, saved_games=[{'gsd.auto_fills': 0, 'tracking': {}, 'trackers': {}, 'notes': [], 'saved_at': 1270084485, 'game': '7 8 0 0 0 0 0 5 6 0 0 9 0 8 0 1 0 0 0 4 6 0 0 0 0 7 0 6 5 0 0 0 4 7 9 2 0 0 0 9 0 1 0 0 0 3 9 7 6 0 0 0 1 8 0 6 0 0 0 0 2 8 0 0 0 5 0 4 0 6 0 0 2 1 0 0 0 0 0 4 5\\n7 8 0 0 0 0 0 5 6 0 0 9 0 8 0 1 0 0 0 4 6 0 0 0 0 7 0 6 5 1 8 3 4 7 9 2 0 0 0 9 0 1 0 0 0 3 9 7 6 0 0 0 1 8 0 6 0 0 0 0 2 8 0 0 0 5 0 4 0 6 0 0 2 1 0 0 0 0 0 4 5', 'gsd.impossible_hints': 0, 'timer.__absolute_start_time__': , 'gsd.hints': 0, 'timer.active_time': , 'timer.total_time': }], dialog=, saved_game_model=, sudoku_maker=, main_page=0) at remote 0x98fa6e4>, d=) gtk.main() (gdb) py-down #8 (unable to read python frame information) (gdb) py-down Unable to find a newer python frameand we\u2019re at the bottom of the Python stack.\nNote that in Python 3.12 and newer, the same C stack frame can be used for multiple Python stack frames. This means that\npy-up\nandpy-down\nmay move multiple Python frames at once. 
For example:(gdb) py-up #6 Frame 0x7ffff7fb62b0, for file /tmp/rec.py, line 5, in recursive_function (n=0) time.sleep(5) #6 Frame 0x7ffff7fb6240, for file /tmp/rec.py, line 7, in recursive_function (n=1) recursive_function(n-1) #6 Frame 0x7ffff7fb61d0, for file /tmp/rec.py, line 7, in recursive_function (n=2) recursive_function(n-1) #6 Frame 0x7ffff7fb6160, for file /tmp/rec.py, line 7, in recursive_function (n=3) recursive_function(n-1) #6 Frame 0x7ffff7fb60f0, for file /tmp/rec.py, line 7, in recursive_function (n=4) recursive_function(n-1) #6 Frame 0x7ffff7fb6080, for file /tmp/rec.py, line 7, in recursive_function (n=5) recursive_function(n-1) #6 Frame 0x7ffff7fb6020, for file /tmp/rec.py, line 9, in () recursive_function(5) (gdb) py-up Unable to find an older python frame\npy-bt\n\u00b6\nThe\npy-bt\ncommand attempts to display a Python-level backtrace of the current thread.For example:\n(gdb) py-bt #8 (unable to read python frame information) #11 Frame 0x9aead74, for file /usr/lib/python2.6/site-packages/gnome_sudoku/dialog_swallower.py, line 48, in run_dialog (self=, main_page=0) at remote 0x98fa6e4>, d=) gtk.main() #14 Frame 0x99262ac, for file /usr/lib/python2.6/site-packages/gnome_sudoku/game_selector.py, line 201, in run_swallowed_dialog (self=, puzzle=None, saved_games=[{'gsd.auto_fills': 0, 'tracking': {}, 'trackers': {}, 'notes': [], 'saved_at': 1270084485, 'game': '7 8 0 0 0 0 0 5 6 0 0 9 0 8 0 1 0 0 0 4 6 0 0 0 0 7 0 6 5 0 0 0 4 7 9 2 0 0 0 9 0 1 0 0 0 3 9 7 6 0 0 0 1 8 0 6 0 0 0 0 2 8 0 0 0 5 0 4 0 6 0 0 2 1 0 0 0 0 0 4 5\\n7 8 0 0 0 0 0 5 6 0 0 9 0 8 0 1 0 0 0 4 6 0 0 0 0 7 0 6 5 1 8 3 4 7 9 2 0 0 0 9 0 1 0 0 0 3 9 7 6 0 0 0 1 8 0 6 0 0 0 0 2 8 0 0 0 5 0 4 0 6 0 0 2 1 0 0 0 0 0 4 5', 'gsd.impossible_hints': 0, 'timer.__absolute_start_time__': , 'gsd.hints': 0, 'timer.active_time': , 'timer.total_time': }], dialog=, saved_game_model=, sudoku_maker=) main.start_game()The frame numbers correspond to those displayed by GDB\u2019s 
standard\nbacktrace\ncommand.\npy-print\n\u00b6\nThe py-print\ncommand looks up a Python name and tries to print it. It looks in locals within the current thread, then globals, then finally builtins:\n(gdb) py-print self\nlocal 'self' = , main_page=0) at remote 0x98fa6e4>\n(gdb) py-print __name__\nglobal '__name__' = 'gnome_sudoku.dialog_swallower'\n(gdb) py-print len\nbuiltin 'len' = \n(gdb) py-print scarlet_pimpernel\n'scarlet_pimpernel' not found\nIf the current C frame corresponds to multiple Python frames,\npy-print\nonly considers the first one.\npy-locals\n\u00b6\nThe py-locals\ncommand looks up all Python locals within the current Python frame in the selected thread, and prints their representations:\n(gdb) py-locals\nself = , main_page=0) at remote 0x98fa6e4>\nd = \nIf the current C frame corresponds to multiple Python frames, locals from all of them will be shown:\n(gdb) py-locals\nLocals for recursive_function\nn = 0\nLocals for recursive_function\nn = 1\nLocals for recursive_function\nn = 2\nLocals for recursive_function\nn = 3\nLocals for recursive_function\nn = 4\nLocals for recursive_function\nn = 5\nLocals for \nUse with GDB commands\u00b6\nThe extension commands complement GDB\u2019s built-in commands.\nFor example, you can use a frame number shown by py-bt\nwith the frame\ncommand to go to a specific frame within the selected thread, like this:\n(gdb) py-bt\n(output snipped)\n#68 Frame 0xaa4560, for file Lib/test/regrtest.py, line 1548, in ()\nmain()\n(gdb) frame 68\n#68 0x00000000004cd1e6 in PyEval_EvalFrameEx (f=Frame 0xaa4560, for file Lib/test/regrtest.py, line 1548, in (), throwflag=0) at Python/ceval.c:2665\n2665 x = call_function(&sp, oparg);\n(gdb) py-list\n1543 # Run the tests in a context manager that temporary changes the CWD to a\n1544 # temporary and writable directory. If it's not possible to create or\n1545 # change the CWD, the original CWD will be used.
The original CWD is\n1546 # available from test_support.SAVEDCWD.\n1547 with test_support.temp_cwd(TESTCWD, quiet=True):\n>1548 main()\nThe info threads\ncommand will give you a list of the threads within the\nprocess, and you can use the thread\ncommand to select a different one:\n(gdb) info threads\n105 Thread 0x7fffefa18710 (LWP 10260) sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:86\n104 Thread 0x7fffdf5fe710 (LWP 10259) sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:86\n* 1 Thread 0x7ffff7fe2700 (LWP 10145) 0x00000038e46d73e3 in select () at ../sysdeps/unix/syscall-template.S:82\nYou can use thread apply all COMMAND\nor (t a a COMMAND\nfor short) to run\na command on all threads. With py-bt\n, this lets you see what every\nthread is doing at the Python level:\n(gdb) t a a py-bt\nThread 105 (Thread 0x7fffefa18710 (LWP 10260)):\n#5 Frame 0x7fffd00019d0, for file /home/david/coding/python-svn/Lib/threading.py, line 155, in _acquire_restore (self=<_RLock(_Verbose__verbose=False, _RLock__owner=140737354016512, _RLock__block=, _RLock__count=1) at remote 0xd7ff40>, count_owner=(1, 140737213728528), count=1, owner=140737213728528)\nself.__block.acquire()\n#8 Frame 0x7fffac001640, for file /home/david/coding/python-svn/Lib/threading.py, line 269, in wait (self=<_Condition(_Condition__lock=<_RLock(_Verbose__verbose=False, _RLock__owner=140737354016512, _RLock__block=, _RLock__count=1) at remote 0xd7ff40>, acquire=, _is_owned=, _release_save=, release=, _acquire_restore=, _Verbose__verbose=False, _Condition__waiters=[]) at remote 0xd7fd10>, timeout=None, waiter=, saved_state=(1, 140737213728528))\nself._acquire_restore(saved_state)\n#12 Frame 0x7fffb8001a10, for file /home/david/coding/python-svn/Lib/test/lock_tests.py, line 348, in f ()\ncond.wait()\n#16 Frame 0x7fffb8001c40, for file /home/david/coding/python-svn/Lib/test/lock_tests.py, line 37, in task (tid=140737213728528)\nf()\nThread 104 (Thread 0x7fffdf5fe710 (LWP 10259)):\n#5 
Frame 0x7fffe4001580, for file /home/david/coding/python-svn/Lib/threading.py, line 155, in _acquire_restore (self=<_RLock(_Verbose__verbose=False, _RLock__owner=140737354016512, _RLock__block=, _RLock__count=1) at remote 0xd7ff40>, count_owner=(1, 140736940992272), count=1, owner=140736940992272)\nself.__block.acquire()\n#8 Frame 0x7fffc8002090, for file /home/david/coding/python-svn/Lib/threading.py, line 269, in wait (self=<_Condition(_Condition__lock=<_RLock(_Verbose__verbose=False, _RLock__owner=140737354016512, _RLock__block=, _RLock__count=1) at remote 0xd7ff40>, acquire=, _is_owned=, _release_save=, release=, _acquire_restore=, _Verbose__verbose=False, _Condition__waiters=[]) at remote 0xd7fd10>, timeout=None, waiter=, saved_state=(1, 140736940992272))\nself._acquire_restore(saved_state)\n#12 Frame 0x7fffac001c90, for file /home/david/coding/python-svn/Lib/test/lock_tests.py, line 348, in f ()\ncond.wait()\n#16 Frame 0x7fffac0011c0, for file /home/david/coding/python-svn/Lib/test/lock_tests.py, line 37, in task (tid=140736940992272)\nf()\nThread 1 (Thread 0x7ffff7fe2700 (LWP 10145)):\n#5 Frame 0xcb5380, for file /home/david/coding/python-svn/Lib/test/lock_tests.py, line 16, in _wait ()\ntime.sleep(0.01)\n#8 Frame 0x7fffd00024a0, for file /home/david/coding/python-svn/Lib/test/lock_tests.py, line 378, in _check_notify (self=, skipped=[], _mirrorOutput=False, testsRun=39, buffer=False, _original_stderr=, _stdout_buffer=, _stderr_buffer=, _moduleSetUpFailed=False, expectedFailures=[], errors=[], _previousTestClass=, unexpectedSuccesses=[], failures=[], shouldStop=False, failfast=False) at remote 0xc185a0>, _threads=(0,), _cleanups=[], _type_equality_funcs={: , : , : , : , >> import sys\n>>> import binascii\n>>> old_binascii = binascii\n>>> del sys.modules['binascii']\n>>> import binascii # create a new module object\n>>> old_binascii == binascii\nFalse\nAs a rule of thumb, the two modules should be completely independent. 
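The independence of two same-named modules can be reproduced in pure Python by building two module objects from the same source; the module names, the Error class, and the fail helper below are invented for illustration:

```python
import types

SRC = """
class Error(Exception):
    pass

def fail():
    raise Error("boom")
"""

def load(name):
    # Build a fresh module object and execute the source in its namespace,
    # mimicking two independent loads of the same extension module.
    mod = types.ModuleType(name)
    exec(SRC, mod.__dict__)
    return mod

mod_a = load("demo_a")
mod_b = load("demo_b")

assert mod_a is not mod_b
assert mod_a.Error is not mod_b.Error  # two distinct exception classes

try:
    mod_a.fail()
except mod_b.Error:
    raise AssertionError("unreachable: mod_b.Error does not catch mod_a.Error")
except mod_a.Error:
    pass  # only the matching module's Error class catches the exception
```

As with the binascii example, each load produces its own exception class, so an `except` clause naming one module's Error does not catch the other's.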
All objects and state specific to the module should be encapsulated within the module object, not shared with other module objects, and cleaned up when the module object is deallocated. Since this is just a rule of thumb, exceptions are possible (see Managing Global State), but they will need more thought and attention to edge cases.\nWhile some modules could do with less stringent restrictions, isolated modules make it easier to set clear expectations and guidelines that work across a variety of use cases.\nSurprising Edge Cases\u00b6\nNote that isolated modules do create some surprising edge cases. Most\nnotably, each module object will typically not share its classes and\nexceptions with other similar modules. Continuing from the\nexample above,\nnote that old_binascii.Error\nand binascii.Error\nare\nseparate objects. In the following code, the exception is not caught:\n>>> old_binascii.Error == binascii.Error\nFalse\n>>> try:\n... old_binascii.unhexlify(b'qwertyuiop')\n... except binascii.Error:\n... print('boo')\n...\nTraceback (most recent call last):\nFile \"\", line 2, in \nbinascii.Error: Non-hexadecimal digit found\nThis is expected. Notice that pure-Python modules behave the same way: it is a part of how Python works.\nThe goal is to make extension modules safe at the C level, not to make\nhacks behave intuitively. Mutating sys.modules\n\u201cmanually\u201d counts\nas a hack.\nMaking Modules Safe with Multiple Interpreters\u00b6\nManaging Global State\u00b6\nSometimes, the state associated with a Python module is not specific to that module, but to the entire process (or something else \u201cmore global\u201d than a module). For example:\nThe readline\nmodule manages the terminal.\nA module running on a circuit board wants to control the on-board LED.\nIn these cases, the Python module should provide access to the global state, rather than own it.
If possible, write the module so that multiple copies of it can access the state independently (along with other libraries, whether for Python or other languages). If that is not possible, consider explicit locking.\nIf it is necessary to use process-global state, the simplest way to avoid issues with multiple interpreters is to explicitly prevent a module from being loaded more than once per process\u2014see Opt-Out: Limiting to One Module Object per Process.\nManaging Per-Module State\u00b6\nTo use per-module state, use multi-phase extension module initialization. This signals that your module supports multiple interpreters correctly.\nSet PyModuleDef.m_size\nto a positive number to request that many\nbytes of storage local to the module. Usually, this will be set to the\nsize of some module-specific struct\n, which can store all of the\nmodule\u2019s C-level state. In particular, it is where you should put\npointers to classes (including exceptions, but excluding static types)\nand settings (e.g. csv\n\u2019s field_size_limit\n)\nwhich the C code needs to function.\nNote\nAnother option is to store state in the module\u2019s __dict__\n,\nbut you must avoid crashing when users modify __dict__\nfrom\nPython code. This usually means error- and type-checking at the C level,\nwhich is easy to get wrong and hard to test sufficiently.\nHowever, if module state is not needed in C code, storing it in\n__dict__\nonly is a good idea.\nIf the module state includes PyObject\npointers, the module object\nmust hold references to those objects and implement the module-level hooks\nm_traverse\n, m_clear\nand m_free\n. These work like\ntp_traverse\n, tp_clear\nand tp_free\nof a class. 
Adding them will\nrequire some work and make the code longer; this is the price for\nmodules which can be unloaded cleanly.\nAn example of a module with per-module state is currently available as xxlimited; example module initialization shown at the bottom of the file.\nOpt-Out: Limiting to One Module Object per Process\u00b6\nA non-negative PyModuleDef.m_size\nsignals that a module supports\nmultiple interpreters correctly. If this is not yet the case for your\nmodule, you can explicitly make your module loadable only once per\nprocess. For example:\n// A process-wide flag\nstatic int loaded = 0;\n// Mutex to provide thread safety (only needed for free-threaded Python)\nstatic PyMutex modinit_mutex = {0};\nstatic int\nexec_module(PyObject* module)\n{\n    PyMutex_Lock(&modinit_mutex);\n    if (loaded) {\n        PyMutex_Unlock(&modinit_mutex);\n        PyErr_SetString(PyExc_ImportError,\n                        \"cannot load module more than once per process\");\n        return -1;\n    }\n    loaded = 1;\n    PyMutex_Unlock(&modinit_mutex);\n    // ... rest of initialization\n}\nIf your module\u2019s PyModuleDef.m_clear\nfunction is able to prepare\nfor future re-initialization, it should clear the loaded\nflag.\nIn this case, your module won\u2019t support multiple instances existing\nconcurrently, but it will, for example, support being loaded after\nPython runtime shutdown (Py_FinalizeEx()\n) and re-initialization\n(Py_Initialize()\n).\nModule State Access from Functions\u00b6\nAccessing the state from module-level functions is straightforward.\nFunctions get the module object as their first argument; for extracting\nthe state, you can use PyModule_GetState\n:\nstatic PyObject *\nfunc(PyObject *module, PyObject *args)\n{\n    my_struct *state = (my_struct*)PyModule_GetState(module);\n    if (state == NULL) {\n        return NULL;\n    }\n    // ... rest of logic\n}\nNote\nPyModule_GetState\nmay return NULL\nwithout setting an\nexception if there is no module state, i.e. PyModuleDef.m_size\nwas
In your own module, you\u2019re in control of m_size\n, so this is\neasy to prevent.\nHeap Types\u00b6\nTraditionally, types defined in C code are static; that is,\nstatic PyTypeObject\nstructures defined directly in code and\ninitialized using PyType_Ready()\n.\nSuch types are necessarily shared across the process. Sharing them\nbetween module objects requires paying attention to any state they own\nor access. To limit the possible issues, static types are immutable at\nthe Python level: for example, you can\u2019t set str.myattribute = 123\n.\nCPython implementation detail: Sharing truly immutable objects between interpreters is fine, as long as they don\u2019t provide access to mutable objects. However, in CPython, every Python object has a mutable implementation detail: the reference count. Changes to the refcount are guarded by the GIL. Thus, code that shares any Python objects across interpreters implicitly depends on CPython\u2019s current, process-wide GIL.\nBecause they are immutable and process-global, static types cannot access\n\u201ctheir\u201d module state.\nIf any method of such a type requires access to module state,\nthe type must be converted to a heap-allocated type, or heap type\nfor short. These correspond more closely to classes created by Python\u2019s\nclass\nstatement.\nFor new modules, using heap types by default is a good rule of thumb.\nChanging Static Types to Heap Types\u00b6\nStatic types can be converted to heap types, but note that the heap type API was not designed for \u201clossless\u201d conversion from static types\u2014that is, creating a type that works exactly like a given static type. So, when rewriting the class definition in a new API, you are likely to unintentionally change a few details (e.g. pickleability or inherited slots). 
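The static-type immutability mentioned above is observable directly from Python: attribute assignment fails on a C-defined static type such as str, while a class created with the class statement (a heap type) accepts it. A minimal check:

```python
# Static (C-defined) types reject attribute assignment from Python code...
try:
    str.myattribute = 123
except TypeError:
    static_is_immutable = True
else:
    static_is_immutable = False

# ...while heap types (ordinary `class` statements) allow it.
class HeapExample:
    pass

HeapExample.myattribute = 123

assert static_is_immutable
assert HeapExample.myattribute == 123
```

This mirrors the distinction the text draws: static types are process-global and immutable at the Python level, whereas heap types behave like classes created by a `class` statement.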
Always test the details that are important to you.\nWatch out for the following two points in particular (but note that this is not a comprehensive list):\nUnlike static types, heap type objects are mutable by default. Use the\nPy_TPFLAGS_IMMUTABLETYPE\nflag to prevent mutability.\nHeap types inherit\ntp_new\nby default, so it may become possible to instantiate them from Python code. You can prevent this with the\nPy_TPFLAGS_DISALLOW_INSTANTIATION\nflag.\nDefining Heap Types\u00b6\nHeap types can be created by filling a PyType_Spec\nstructure, a\ndescription or \u201cblueprint\u201d of a class, and calling\nPyType_FromModuleAndSpec()\nto construct a new class object.\nNote\nOther functions, like PyType_FromSpec()\n, can also create\nheap types, but PyType_FromModuleAndSpec()\nassociates the module\nwith the class, allowing access to the module state from methods.\nThe class should generally be stored in both the module state (for\nsafe access from C) and the module\u2019s __dict__\n(for access from\nPython code).\nGarbage-Collection Protocol\u00b6\nInstances of heap types hold a reference to their type. This ensures that the type isn\u2019t destroyed before all its instances are, but may result in reference cycles that need to be broken by the garbage collector.\nTo avoid memory leaks, instances of heap types must implement the garbage collection protocol. That is, heap types should:\nHave the\nPy_TPFLAGS_HAVE_GC\nflag.\nDefine a traverse function using\nPy_tp_traverse\n, which visits the type (e.g. using Py_VISIT(Py_TYPE(self))\n).\nPlease refer to the documentation of\nPy_TPFLAGS_HAVE_GC\nand tp_traverse\nfor additional considerations.\nThe API for defining heap types grew organically, leaving it somewhat awkward to use in its current state.
The following sections will guide you through common issues.\ntp_traverse\nin Python 3.8 and lower\u00b6\nThe requirement to visit the type from tp_traverse\nwas added in Python 3.9.\nIf you support Python 3.8 and lower, the traverse function must not\nvisit the type, so it must be more complicated:\nstatic int my_traverse(PyObject *self, visitproc visit, void *arg)\n{\n    if (Py_Version >= 0x03090000) {\n        Py_VISIT(Py_TYPE(self));\n    }\n    return 0;\n}\nUnfortunately, Py_Version\nwas only added in Python 3.11.\nAs a replacement, use:\nPY_VERSION_HEX\n, if not using the stable ABI, or\nsys.version_info\n(via PySys_GetObject()\nand PyArg_ParseTuple()\n).\nDelegating tp_traverse\n\u00b6\nIf your traverse function delegates to the tp_traverse\nof its base class (or another type), ensure that Py_TYPE(self)\nis visited\nonly once.\nNote that only heap types are expected to visit the type in tp_traverse\n.\nFor example, if your traverse function includes:\nbase->tp_traverse(self, visit, arg)\n\u2026and base\nmay be a static type, then it should also include:\nif (base->tp_flags & Py_TPFLAGS_HEAPTYPE) {\n    // a heap type's tp_traverse already visited Py_TYPE(self)\n} else {\n    if (Py_Version >= 0x03090000) {\n        Py_VISIT(Py_TYPE(self));\n    }\n}\nIt is not necessary to handle the type\u2019s reference count in\ntp_new\nand tp_clear\n.
For example:\nstatic void my_dealloc(PyObject *self)\n{\n    PyObject_GC_UnTrack(self);\n    ...\n    PyTypeObject *type = Py_TYPE(self);\n    type->tp_free(self);\n    Py_DECREF(type);\n}\nThe default tp_dealloc\nfunction does this, so\nif your type does not override\ntp_dealloc\nyou don\u2019t need to add it.\nNot overriding tp_free\n\u00b6\nThe tp_free\nslot of a heap type must be set to\nPyObject_GC_Del()\n.\nThis is the default; do not override it.\nAvoiding PyObject_New\n\u00b6\nGC-tracked objects need to be allocated using GC-aware functions.\nIf you use PyObject_New()\nor PyObject_NewVar()\n:\nGet and call the type\u2019s\ntp_alloc\nslot, if possible. That is, replace TYPE *o = PyObject_New(TYPE, typeobj)\nwith:\nTYPE *o = typeobj->tp_alloc(typeobj, 0);\nReplace\no = PyObject_NewVar(TYPE, typeobj, size)\nwith the same, but use size instead of the 0.\nIf the above is not possible (e.g. inside a custom\ntp_alloc\n), call PyObject_GC_New()\nor PyObject_GC_NewVar()\n:\nTYPE *o = PyObject_GC_New(TYPE, typeobj);\nTYPE *o = PyObject_GC_NewVar(TYPE, typeobj, size);\nModule State Access from Classes\u00b6\nIf you have a type object defined with PyType_FromModuleAndSpec()\n,\nyou can call PyType_GetModule()\nto get the associated module, and then\nPyModule_GetState()\nto get the module\u2019s state.\nTo save some tedious error-handling boilerplate code, you can combine\nthese two steps with PyType_GetModuleState()\n, resulting in:\nmy_struct *state = (my_struct*)PyType_GetModuleState(type);\nif (state == NULL) {\n    return NULL;\n}\nModule State Access from Regular Methods\u00b6\nAccessing the module-level state from methods of a class is somewhat more complicated, but is possible thanks to API introduced in Python 3.9. To get the state, you need to first get the defining class, and then get the module state from it.\nThe largest roadblock is getting the class a method was defined in, or that method\u2019s \u201cdefining class\u201d for short.
The defining class can have a reference to the module it is part of.\nDo not confuse the defining class with Py_TYPE(self)\n. If the method\nis called on a subclass of your type, Py_TYPE(self)\nwill refer to\nthat subclass, which may be defined in a different module than yours.\nNote\nThe following Python code can illustrate the concept.\nBase.get_defining_class\nreturns Base\neven\nif type(self) == Sub\n:\nclass Base:\n    def get_type_of_self(self):\n        return type(self)\n    def get_defining_class(self):\n        return __class__\nclass Sub(Base):\n    pass\nFor a method to get its \u201cdefining class\u201d, it must use the\nMETH_METHOD | METH_FASTCALL | METH_KEYWORDS\ncalling convention\nand the corresponding PyCMethod\nsignature:\nPyObject *PyCMethod(\n    PyObject *self,               // object the method was called on\n    PyTypeObject *defining_class, // defining class\n    PyObject *const *args,        // C array of arguments\n    Py_ssize_t nargs,             // length of \"args\"\n    PyObject *kwnames)            // NULL, or dict of keyword arguments\nOnce you have the defining class, call PyType_GetModuleState()\nto get\nthe state of its associated module.\nFor example:\nstatic PyObject *\nexample_method(PyObject *self,\n        PyTypeObject *defining_class,\n        PyObject *const *args,\n        Py_ssize_t nargs,\n        PyObject *kwnames)\n{\n    my_struct *state = (my_struct*)PyType_GetModuleState(defining_class);\n    if (state == NULL) {\n        return NULL;\n    }\n    ...
    // rest of logic\n}\nPyDoc_STRVAR(example_method_doc, \"...\");\nstatic PyMethodDef my_methods[] = {\n    {\"example_method\",\n     (PyCFunction)(void(*)(void))example_method,\n     METH_METHOD|METH_FASTCALL|METH_KEYWORDS,\n     example_method_doc},\n    {NULL},\n};\nModule State Access from Slot Methods, Getters and Setters\u00b6\nNote\nThis is new in Python 3.11.\nSlot methods\u2014the fast C equivalents for special methods, such as\nnb_add\nfor __add__\nor\ntp_new\nfor initialization\u2014have a very simple API that\ndoesn\u2019t allow passing in the defining class, unlike with PyCMethod\n.\nThe same goes for getters and setters defined with\nPyGetSetDef\n.\nTo access the module state in these cases, use the\nPyType_GetModuleByDef()\nfunction, and pass in the module definition.\nOnce you have the module, call PyModule_GetState()\nto get the state:\nPyObject *module = PyType_GetModuleByDef(Py_TYPE(self), &module_def);\nmy_struct *state = (my_struct*)PyModule_GetState(module);\nif (state == NULL) {\n    return NULL;\n}\nPyType_GetModuleByDef()\nworks by searching the\nmethod resolution order (i.e. all superclasses) for the first\nsuperclass that has a corresponding module.\nNote\nIn very exotic cases (inheritance chains spanning multiple modules\ncreated from the same definition), PyType_GetModuleByDef()\nmight not\nreturn the module of the true defining class. However, it will always\nreturn a module with the same definition, ensuring a compatible\nC memory layout.\nLifetime of the Module State\u00b6\nWhen a module object is garbage-collected, its module state is freed.
For each pointer to (a part of) the module state, you must hold a reference to the module object.\nUsually this is not an issue, because types created with\nPyType_FromModuleAndSpec()\n, and their instances, hold a reference\nto the module.\nHowever, you must be careful in reference counting when you reference\nmodule state from other places, such as callbacks for external\nlibraries.\nOpen Issues\u00b6\nSeveral issues around per-module state and heap types are still open.\nDiscussions about improving the situation are best held on the discuss forum under c-api tag.\nPer-Class Scope\u00b6\nIt is currently (as of Python 3.11) not possible to attach state to individual types without relying on CPython implementation details (which may change in the future\u2014perhaps, ironically, to allow a proper solution for per-class scope).\nLossless Conversion to Heap Types\u00b6\nThe heap type API was not designed for \u201clossless\u201d conversion from static types; that is, creating a type that works exactly like a given static type.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4777} +{"url": "https://docs.python.org/3/whatsnew/3.12.html", "title": "What\u2019s New In Python 3.12", "content": "What\u2019s New In Python 3.12\u00b6\n- Editor:\nAdam Turner\nThis article explains the new features in Python 3.12, compared to 3.11. Python 3.12 was released on October 2, 2023. 
For full details, see the changelog.\nSee also\nPEP 693 \u2013 Python 3.12 Release Schedule\nSummary \u2013 Release highlights\u00b6\nPython 3.12 is a stable release of the Python programming language,\nwith a mix of changes to the language and the standard library.\nThe library changes focus on cleaning up deprecated APIs, usability, and correctness.\nOf note, the distutils\npackage has been removed from the standard library.\nFilesystem support in os\nand pathlib\nhas seen a number of improvements,\nand several modules have better performance.\nThe language changes focus on usability,\nas f-strings have had many limitations removed\nand \u2018Did you mean \u2026\u2019 suggestions continue to improve.\nThe new type parameter syntax\nand type\nstatement improve ergonomics for using generic types and type aliases with static type checkers.\nThis article doesn\u2019t attempt to provide a complete specification of all new features, but instead gives a convenient overview. For full details, you should refer to the documentation, such as the Library Reference and Language Reference. 
If you want to understand the complete implementation and design rationale for a change, refer to the PEP for a particular new feature; but note that PEPs usually are not kept up-to-date once a feature has been fully implemented.\nNew syntax features:\nNew grammar features:\nInterpreter improvements:\nPEP 669, low impact monitoring\nImproved \u2018Did you mean \u2026\u2019 suggestions for\nNameError\n,ImportError\n, andSyntaxError\nexceptions\nPython data model improvements:\nPEP 688, using the buffer protocol from Python\nSignificant improvements in the standard library:\nThe\npathlib.Path\nclass now supports subclassingThe\nos\nmodule received several improvements for Windows supportA command-line interface has been added to the\nsqlite3\nmoduleisinstance()\nchecks againstruntime-checkable protocols\nenjoy a speed up of between two and 20 timesThe\nasyncio\npackage has had a number of performance improvements, with some benchmarks showing a 75% speed up.A command-line interface has been added to the\nuuid\nmoduleDue to the changes in PEP 701, producing tokens via the\ntokenize\nmodule is up to 64% faster.\nSecurity improvements:\nReplace the builtin\nhashlib\nimplementations of SHA1, SHA3, SHA2-384, SHA2-512, and MD5 with formally verified code from the HACL* project. These builtin implementations remain as fallbacks that are only used when OpenSSL does not provide them.\nC API improvements:\nCPython implementation improvements:\nPEP 709, comprehension inlining\nCPython support for the Linux\nperf\nprofilerImplement stack overflow protection on supported platforms\nNew typing features:\nPEP 698,\ntyping.override()\ndecorator\nImportant deprecations, removals or restrictions:\nPEP 623: Remove\nwstr\nfrom Unicode objects in Python\u2019s C API, reducing the size of everystr\nobject by at least 8 bytes.PEP 632: Remove the\ndistutils\npackage. See the migration guide for advice replacing the APIs it provided. 
The third-party Setuptools package continues to provide distutils, if you still require it in Python 3.12 and beyond.
gh-95299: Do not pre-install setuptools in virtual environments created with venv. This means that distutils, setuptools, pkg_resources, and easy_install will no longer be available by default; to access these, run pip install setuptools in the activated virtual environment.
The asynchat, asyncore, and imp modules have been removed, along with several unittest.TestCase method aliases.
New Features¶
PEP 695: Type Parameter Syntax¶
Generic classes and functions under PEP 484 were declared using a verbose syntax that left the scope of type parameters unclear and required explicit declarations of variance.
PEP 695 introduces a new, more compact and explicit way to create generic classes and functions:

def max[T](args: Iterable[T]) -> T:
    ...

class list[T]:
    def __getitem__(self, index: int, /) -> T:
        ...

    def append(self, element: T) -> None:
        ...

In addition, the PEP introduces a new way to declare type aliases using the type statement, which creates an instance of TypeAliasType:

type Point = tuple[float, float]

Type aliases can also be generic:

type Point[T] = tuple[T, T]

The new syntax allows declaring TypeVarTuple and ParamSpec parameters, as well as TypeVar parameters with bounds or constraints:

type IntFunc[**P] = Callable[P, int]              # ParamSpec
type LabeledTuple[*Ts] = tuple[str, *Ts]          # TypeVarTuple
type HashableSequence[T: Hashable] = Sequence[T]  # TypeVar with bound
type IntOrStrSequence[T: (int, str)] = Sequence[T]  # TypeVar with constraints

The value of type aliases and the bound and constraints of type variables created through this syntax are evaluated only on demand (see lazy evaluation).
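Lazy evaluation is observable at runtime through the alias's __value__ attribute, which is only computed on first access. A minimal sketch (Pair and Vertex are hypothetical names; the snippet is compiled at runtime via exec() so the file still loads on pre-3.12 interpreters, where the type statement is a syntax error):

```python
import sys

SNIPPET = """
type Pair = tuple[Vertex, Vertex]   # Vertex is not defined yet -- fine

class Vertex:
    pass

# The alias value is evaluated only here, on attribute access,
# by which point Vertex exists:
assert Pair.__value__ == tuple[Vertex, Vertex]
"""

if sys.version_info >= (3, 12):
    exec(SNIPPET, globals())
```

The same on-demand rule applies to bounds and constraints of type variables declared in a type parameter list.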
This means type aliases are able to refer to other types defined later in the file.
Type parameters declared through a type parameter list are visible within the scope of the declaration and any nested scopes, but not in the outer scope. For example, they can be used in the type annotations for the methods of a generic class or in the class body. However, they cannot be used in the module scope after the class is defined. See Type parameter lists for a detailed description of the runtime semantics of type parameters.
In order to support these scoping semantics, a new kind of scope is introduced, the annotation scope. Annotation scopes behave for the most part like function scopes, but interact differently with enclosing class scopes. In Python 3.13, annotations will also be evaluated in annotation scopes.
See PEP 695 for more details.
(PEP written by Eric Traut. Implementation by Jelle Zijlstra, Eric Traut, and others in gh-103764.)
PEP 701: Syntactic formalization of f-strings¶
PEP 701 lifts some restrictions on the usage of f-strings. Expression components inside f-strings can now be any valid Python expression, including strings reusing the same quote as the containing f-string, multi-line expressions, comments, backslashes, and unicode escape sequences. Let's cover these in detail:
Quote reuse: in Python 3.11, reusing the same quotes as the enclosing f-string raises a SyntaxError, forcing the user to use other available quotes (for instance, double quotes or triple quotes if the f-string uses single quotes).
In Python 3.12, you can now do things like this:

>>> songs = ['Take me back to Eden', 'Alkaline', 'Ascensionism']
>>> f"This is the playlist: {", ".join(songs)}"
'This is the playlist: Take me back to Eden, Alkaline, Ascensionism'

Note that before this change there was no explicit limit on how deeply f-strings could be nested, but the fact that string quotes cannot be reused inside the expression component of f-strings made it impossible to nest f-strings arbitrarily. In fact, this is the most nested f-string that could be written:

>>> f"""{f'''{f'{f"{1+1}"}'}'''}"""
'2'

As f-strings can now contain any valid Python expression inside expression components, it is now possible to nest f-strings arbitrarily:

>>> f"{f"{f"{f"{f"{f"{1+1}"}"}"}"}"}"
'2'

Multi-line expressions and comments: In Python 3.11, f-string expressions must be defined in a single line, even if the expression within the f-string could normally span multiple lines (like literal lists being defined over multiple lines), making them harder to read. In Python 3.12 you can now define f-strings spanning multiple lines, and add inline comments:

>>> f"This is the playlist: {", ".join([
...     'Take me back to Eden',  # My, my, those eyes like fire
...     'Alkaline',              # Not acid nor alkaline
...     'Ascensionism'           # Take to the broken skies at last
... ])}"
'This is the playlist: Take me back to Eden, Alkaline, Ascensionism'

Backslashes and unicode characters: before Python 3.12, f-string expressions couldn't contain any \ character. This also affected unicode escape sequences (such as \N{snowman}) as these contain the \N part that previously could not be part of expression components of f-strings.
Now, you can define expressions like this:

>>> print(f"This is the playlist: {"\n".join(songs)}")
This is the playlist: Take me back to Eden
Alkaline
Ascensionism
>>> print(f"This is the playlist: {"\N{BLACK HEART SUIT}".join(songs)}")
This is the playlist: Take me back to Eden♥Alkaline♥Ascensionism

See PEP 701 for more details.
As a positive side-effect of how this feature has been implemented (by parsing f-strings with the PEG parser), error messages for f-strings are now more precise and include the exact location of the error. For example, in Python 3.11, the following f-string raises a SyntaxError:

>>> my_string = f"{x z y}" + f"{1 + 1}"
  File "<stdin>", line 1
    (x z y)
     ^^^
SyntaxError: f-string: invalid syntax. Perhaps you forgot a comma?

but the error message doesn't include the exact location of the error within the line and also has the expression artificially surrounded by parentheses. In Python 3.12, as f-strings are parsed with the PEG parser, error messages can be more precise and show the entire line:

>>> my_string = f"{x z y}" + f"{1 + 1}"
  File "<stdin>", line 1
    my_string = f"{x z y}" + f"{1 + 1}"
                   ^^^
SyntaxError: invalid syntax. Perhaps you forgot a comma?

(Contributed by Pablo Galindo, Batuhan Taskaya, Lysandros Nikolaou, Cristián Maureira-Fredes and Marta Gómez in gh-102856. PEP written by Pablo Galindo, Batuhan Taskaya, Lysandros Nikolaou and Marta Gómez).
PEP 684: A Per-Interpreter GIL¶
PEP 684 introduces a per-interpreter GIL, so that sub-interpreters may now be created with a unique GIL per interpreter. This allows Python programs to take full advantage of multiple CPU cores.
This is currently only available through the C-API, though a Python API is anticipated for 3.13.
Use the new Py_NewInterpreterFromConfig() function to create an interpreter with its own GIL:

PyInterpreterConfig config = {
    .check_multi_interp_extensions = 1,
    .gil = PyInterpreterConfig_OWN_GIL,
};
PyThreadState *tstate = NULL;
PyStatus status = Py_NewInterpreterFromConfig(&tstate, &config);
if (PyStatus_Exception(status)) {
    return -1;
}
/* The new interpreter is now active in the current thread. */

For further examples of how to use the C-API for sub-interpreters with a per-interpreter GIL, see Modules/_xxsubinterpretersmodule.c.
(Contributed by Eric Snow in gh-104210, etc.)
PEP 669: Low impact monitoring for CPython¶
PEP 669 defines a new API for profilers, debuggers, and other tools to monitor events in CPython. It covers a wide range of events, including calls, returns, lines, exceptions, jumps, and more. This means that you only pay for what you use, providing support for near-zero overhead debuggers and coverage tools. See sys.monitoring for details.
(Contributed by Mark Shannon in gh-103082.)
PEP 688: Making the buffer protocol accessible in Python¶
PEP 688 introduces a way to use the buffer protocol from Python code. Classes that implement the __buffer__() method are now usable as buffer types.
The new collections.abc.Buffer ABC provides a standard way to represent buffer objects, for example in type annotations. The new inspect.BufferFlags enum represents the flags that can be used to customize buffer creation.
(Contributed by Jelle Zijlstra in gh-102500.)
PEP 709: Comprehension inlining¶
Dictionary, list, and set comprehensions are now inlined, rather than creating a new single-use function object for each execution of the comprehension. This speeds up execution of a comprehension by up to two times.
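Inlining preserves the existing scoping rules: a comprehension's iteration variable still never leaks into the surrounding scope. A quick check, which behaves identically on 3.11 and 3.12:

```python
x = "outer"
squares = [x * x for x in range(5)]

# The inlined comprehension kept its own x; the outer x is untouched.
assert x == "outer"
assert squares == [0, 1, 4, 9, 16]
```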
See PEP 709 for further details.
Comprehension iteration variables remain isolated and don't overwrite a variable of the same name in the outer scope, nor are they visible after the comprehension. Inlining does result in a few visible behavior changes:
There is no longer a separate frame for the comprehension in tracebacks, and tracing/profiling no longer shows the comprehension as a function call.
The symtable module will no longer produce child symbol tables for each comprehension; instead, the comprehension's locals will be included in the parent function's symbol table.
Calling locals() inside a comprehension now includes variables from outside the comprehension, and no longer includes the synthetic .0 variable for the comprehension "argument".
A comprehension iterating directly over locals() (e.g. [k for k in locals()]) may see "RuntimeError: dictionary changed size during iteration" when run under tracing (e.g. code coverage measurement). This is the same behavior already seen in e.g. for k in locals():. To avoid the error, first create a list of keys to iterate over: keys = list(locals()); [k for k in keys].
(Contributed by Carl Meyer and Vladimir Matveev in PEP 709.)
Improved Error Messages¶
Modules from the standard library are now potentially suggested as part of the error messages displayed by the interpreter when a NameError is raised to the top level. (Contributed by Pablo Galindo in gh-98254.)

>>> sys.version_info
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'sys' is not defined. Did you forget to import 'sys'?

Improve the error suggestion for NameError exceptions for instances. Now if a NameError is raised in a method and the instance has an attribute that's exactly equal to the name in the exception, the suggestion will include self.<NAME> instead of the closest match in the method scope. (Contributed by Pablo Galindo in gh-99139.)

>>> class A:
...     def __init__(self):
...         self.blech = 1
...
...     def foo(self):
...         somethin = blech
...
>>> A().foo()
Traceback (most recent call last):
  File "<stdin>", line 1
    somethin = blech
               ^^^^^
NameError: name 'blech' is not defined. Did you mean: 'self.blech'?

Improve the SyntaxError error message when the user types import x from y instead of from y import x. (Contributed by Pablo Galindo in gh-98931.)

>>> import a.y.z from b.y.z
Traceback (most recent call last):
  File "<stdin>", line 1
    import a.y.z from b.y.z
    ^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: Did you mean to use 'from ... import ...' instead?

ImportError exceptions raised from failed from <module> import <name> statements now include suggestions for the value of <name> based on the available names in <module>. (Contributed by Pablo Galindo in gh-91058.)

>>> from collections import chainmap
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name 'chainmap' from 'collections'. Did you mean: 'ChainMap'?

Other Language Changes¶
The parser now raises SyntaxError when parsing source code containing null bytes. (Contributed by Pablo Galindo in gh-96670.)
A backslash-character pair that is not a valid escape sequence now generates a SyntaxWarning, instead of DeprecationWarning. For example, re.compile("\d+\.\d+") now emits a SyntaxWarning ("\d" is an invalid escape sequence; use raw strings for regular expressions: re.compile(r"\d+\.\d+")). In a future Python version, SyntaxError will eventually be raised, instead of SyntaxWarning. (Contributed by Victor Stinner in gh-98401.)
Octal escapes with value larger than 0o377 (e.g. "\477"), deprecated in Python 3.11, now produce a SyntaxWarning, instead of DeprecationWarning. In a future Python version they will eventually become a SyntaxError. (Contributed by Victor Stinner in gh-98401.)
Variables used in the target part of comprehensions that are not stored to can now be used in assignment expressions (:=).
For example, in[(b := 1) for a, b.prop in some_iter]\n, the assignment tob\nis now allowed. Note that assigning to variables stored to in the target part of comprehensions (likea\n) is still disallowed, as per PEP 572. (Contributed by Nikita Sobolev in gh-100581.)Exceptions raised in a class or type\u2019s\n__set_name__\nmethod are no longer wrapped by aRuntimeError\n. Context information is added to the exception as a PEP 678 note. (Contributed by Irit Katriel in gh-77757.)When a\ntry-except*\nconstruct handles the entireExceptionGroup\nand raises one other exception, that exception is no longer wrapped in anExceptionGroup\n. Also changed in version 3.11.4. (Contributed by Irit Katriel in gh-103590.)The Garbage Collector now runs only on the eval breaker mechanism of the Python bytecode evaluation loop instead of object allocations. The GC can also run when\nPyErr_CheckSignals()\nis called so C extensions that need to run for a long time without executing any Python code also have a chance to execute the GC periodically. (Contributed by Pablo Galindo in gh-97922.)All builtin and extension callables expecting boolean parameters now accept arguments of any type instead of just\nbool\nandint\n. (Contributed by Serhiy Storchaka in gh-60203.)memoryview\nnow supports the half-float type (the \u201ce\u201d format code). (Contributed by Donghee Na and Antoine Pitrou in gh-90751.)slice\nobjects are now hashable, allowing them to be used as dict keys and set items. (Contributed by Will Bradshaw, Furkan Onder, and Raymond Hettinger in gh-101264.)sum()\nnow uses Neumaier summation to improve accuracy and commutativity when summing floats or mixed ints and floats. (Contributed by Raymond Hettinger in gh-100425.)ast.parse()\nnow raisesSyntaxError\ninstead ofValueError\nwhen parsing source code containing null bytes. 
(Contributed by Pablo Galindo in gh-96670.)
The extraction methods in tarfile, and shutil.unpack_archive(), have a new filter argument that allows limiting tar features that may be surprising or dangerous, such as creating files outside the destination directory. See tarfile extraction filters for details. In Python 3.14, the default will switch to 'data'. (Contributed by Petr Viktorin in PEP 706.)
types.MappingProxyType instances are now hashable if the underlying mapping is hashable. (Contributed by Serhiy Storchaka in gh-87995.)
Add support for the perf profiler through the new environment variable PYTHONPERFSUPPORT and command-line option -X perf, as well as the new sys.activate_stack_trampoline(), sys.deactivate_stack_trampoline(), and sys.is_stack_trampoline_active() functions. (Design by Pablo Galindo. Contributed by Pablo Galindo and Christian Heimes with contributions from Gregory P. Smith [Google] and Mark Shannon in gh-96123.)
New Modules¶
None.
Improved Modules¶
array¶
The array.array class now supports subscripting, making it a generic type. (Contributed by Jelle Zijlstra in gh-98658.)
asyncio¶
The performance of writing to sockets in asyncio has been significantly improved. asyncio now avoids unnecessary copying when writing to sockets and uses sendmsg() if the platform supports it. (Contributed by Kumar Aditya in gh-91166.)
Add asyncio.eager_task_factory() and asyncio.create_eager_task_factory() functions to allow opting an event loop in to eager task execution, making some use-cases 2x to 5x faster. (Contributed by Jacob Bower & Itamar Oren in gh-102853, gh-104140, and gh-104138)
On Linux, asyncio uses asyncio.PidfdChildWatcher by default if os.pidfd_open() is available and functional, instead of asyncio.ThreadedChildWatcher.
(Contributed by Kumar Aditya in gh-98024.)
The event loop now uses the best available child watcher for each platform (asyncio.PidfdChildWatcher if supported and asyncio.ThreadedChildWatcher otherwise), so manually configuring a child watcher is not recommended. (Contributed by Kumar Aditya in gh-94597.)
Add loop_factory parameter to asyncio.run() to allow specifying a custom event loop factory. (Contributed by Kumar Aditya in gh-99388.)
Add C implementation of asyncio.current_task() for 4x-6x speedup. (Contributed by Itamar Oren and Pranav Thulasiram Bhat in gh-100344.)
asyncio.iscoroutine() now returns False for generators as asyncio does not support legacy generator-based coroutines. (Contributed by Kumar Aditya in gh-102748.)
asyncio.wait() and asyncio.as_completed() now accept generators yielding tasks. (Contributed by Kumar Aditya in gh-78530.)
calendar¶
Add enums calendar.Month and calendar.Day defining months of the year and days of the week. (Contributed by Prince Roshan in gh-103636.)
csv¶
Add csv.QUOTE_NOTNULL and csv.QUOTE_STRINGS flags to provide finer grained control of None and empty strings by reader and writer objects.
dis¶
Pseudo instruction opcodes (which are used by the compiler but do not appear in executable bytecode) are now exposed in the dis module. HAVE_ARGUMENT is still relevant to real opcodes, but it is not useful for pseudo instructions. Use the new dis.hasarg collection instead. (Contributed by Irit Katriel in gh-94216.)
Add the dis.hasexc collection to signify instructions that set an exception handler. (Contributed by Irit Katriel in gh-94216.)
fractions¶
Objects of type fractions.Fraction now support float-style formatting. (Contributed by Mark Dickinson in gh-100161.)
importlib.resources¶
importlib.resources.as_file() now supports resource directories. (Contributed by Jason R. Coombs in gh-97930.)
Rename first parameter of importlib.resources.files() to anchor.
(Contributed by Jason R. Coombs in gh-100598.)\ninspect\u00b6\nAdd\ninspect.markcoroutinefunction()\nto mark sync functions that return a coroutine for use withinspect.iscoroutinefunction()\n. (Contributed by Carlton Gibson in gh-99247.)Add\ninspect.getasyncgenstate()\nandinspect.getasyncgenlocals()\nfor determining the current state of asynchronous generators. (Contributed by Thomas Krennwallner in gh-79940.)The performance of\ninspect.getattr_static()\nhas been considerably improved. Most calls to the function should be at least 2x faster than they were in Python 3.11. (Contributed by Alex Waygood in gh-103193.)\nitertools\u00b6\nAdd\nitertools.batched()\nfor collecting into even-sized tuples where the last batch may be shorter than the rest. (Contributed by Raymond Hettinger in gh-98363.)\nmath\u00b6\nAdd\nmath.sumprod()\nfor computing a sum of products. (Contributed by Raymond Hettinger in gh-100485.)Extend\nmath.nextafter()\nto include a steps argument for moving up or down multiple steps at a time. (Contributed by Matthias Goergens, Mark Dickinson, and Raymond Hettinger in gh-94906.)\nos\u00b6\nAdd\nos.PIDFD_NONBLOCK\nto open a file descriptor for a process withos.pidfd_open()\nin non-blocking mode. (Contributed by Kumar Aditya in gh-93312.)os.DirEntry\nnow includes anos.DirEntry.is_junction()\nmethod to check if the entry is a junction. (Contributed by Charles Machalow in gh-99547.)Add\nos.listdrives()\n,os.listvolumes()\nandos.listmounts()\nfunctions on Windows for enumerating drives, volumes and mount points. (Contributed by Steve Dower in gh-102519.)os.stat()\nandos.lstat()\nare now more accurate on Windows. 
Thest_birthtime\nfield will now be filled with the creation time of the file, andst_ctime\nis deprecated but still contains the creation time (but in the future will return the last metadata change, for consistency with other platforms).st_dev\nmay be up to 64 bits andst_ino\nup to 128 bits depending on your file system, andst_rdev\nis always set to zero rather than incorrect values. Both functions may be significantly faster on newer releases of Windows. (Contributed by Steve Dower in gh-99726.)\nos.path\u00b6\nAdd\nos.path.isjunction()\nto check if a given path is a junction. (Contributed by Charles Machalow in gh-99547.)Add\nos.path.splitroot()\nto split a path into a triad(drive, root, tail)\n. (Contributed by Barney Gale in gh-101000.)\npathlib\u00b6\nAdd support for subclassing\npathlib.PurePath\nandpathlib.Path\n, plus their Posix- and Windows-specific variants. Subclasses may override thepathlib.PurePath.with_segments()\nmethod to pass information between path instances.Add\npathlib.Path.walk()\nfor walking the directory trees and generating all file or directory names within them, similar toos.walk()\n. (Contributed by Stanislav Zmiev in gh-90385.)Add walk_up optional parameter to\npathlib.PurePath.relative_to()\nto allow the insertion of..\nentries in the result; this behavior is more consistent withos.path.relpath()\n. (Contributed by Domenico Ragusa in gh-84538.)Add\npathlib.Path.is_junction()\nas a proxy toos.path.isjunction()\n. (Contributed by Charles Machalow in gh-99547.)Add case_sensitive optional parameter to\npathlib.Path.glob()\n,pathlib.Path.rglob()\nandpathlib.PurePath.match()\nfor matching the path\u2019s case sensitivity, allowing for more precise control over the matching process.\nplatform\u00b6\nAdd support for detecting Windows 11 and Windows Server releases past 2012. Previously, lookups on Windows Server platforms newer than Windows Server 2012 and on Windows 11 would return\nWindows-10\n. 
(Contributed by Steve Dower in gh-89545.)\npdb\u00b6\nAdd convenience variables to hold values temporarily for debug session and provide quick access to values like the current frame or the return value. (Contributed by Tian Gao in gh-103693.)\nrandom\u00b6\nAdd\nrandom.binomialvariate()\n. (Contributed by Raymond Hettinger in gh-81620.)Add a default of\nlambd=1.0\ntorandom.expovariate()\n. (Contributed by Raymond Hettinger in gh-100234.)\nshutil\u00b6\nshutil.make_archive()\nnow passes the root_dir argument to custom archivers which support it. In this case it no longer temporarily changes the current working directory of the process to root_dir to perform archiving. (Contributed by Serhiy Storchaka in gh-74696.)shutil.rmtree()\nnow accepts a new argument onexc which is an error handler like onerror but which expects an exception instance rather than a (typ, val, tb) triplet. onerror is deprecated. (Contributed by Irit Katriel in gh-102828.)shutil.which()\nnow consults the PATHEXT environment variable to find matches within PATH on Windows even when the given cmd includes a directory component. (Contributed by Charles Machalow in gh-103179.)shutil.which()\nwill callNeedCurrentDirectoryForExePathW\nwhen querying for executables on Windows to determine if the current working directory should be prepended to the search path. (Contributed by Charles Machalow in gh-103179.)shutil.which()\nwill return a path matching the cmd with a component fromPATHEXT\nprior to a direct match elsewhere in the search path on Windows. (Contributed by Charles Machalow in gh-103179.)\nsqlite3\u00b6\nAdd a command-line interface. (Contributed by Erlend E. Aasland in gh-77617.)\nAdd the\nsqlite3.Connection.autocommit\nattribute tosqlite3.Connection\nand the autocommit parameter tosqlite3.connect()\nto control PEP 249-compliant transaction handling. (Contributed by Erlend E. 
Aasland in gh-83638.)
Add entrypoint keyword-only parameter to sqlite3.Connection.load_extension(), for overriding the SQLite extension entry point. (Contributed by Erlend E. Aasland in gh-103015.)
Add sqlite3.Connection.getconfig() and sqlite3.Connection.setconfig() to sqlite3.Connection to make configuration changes to a database connection. (Contributed by Erlend E. Aasland in gh-103489.)
statistics¶
Extend statistics.correlation() with a ranked method for computing the Spearman correlation of ranked data. (Contributed by Raymond Hettinger in gh-95861.)
sys¶
Add the sys.monitoring namespace to expose the new PEP 669 monitoring API. (Contributed by Mark Shannon in gh-103082.)
Add sys.activate_stack_trampoline() and sys.deactivate_stack_trampoline() for activating and deactivating stack profiler trampolines, and sys.is_stack_trampoline_active() for querying if stack profiler trampolines are active. (Contributed by Pablo Galindo and Christian Heimes with contributions from Gregory P. Smith [Google] and Mark Shannon in gh-96123.)
Add sys.last_exc which holds the last unhandled exception that was raised (for post-mortem debugging use cases). Deprecate the three fields that hold the same information in its legacy form: sys.last_type, sys.last_value and sys.last_traceback. (Contributed by Irit Katriel in gh-102778.)
sys._current_exceptions() now returns a mapping from thread-id to an exception instance, rather than to a (typ, exc, tb) tuple. (Contributed by Irit Katriel in gh-103176.)
Change the behavior of sys.setrecursionlimit() and sys.getrecursionlimit(): the recursion limit now applies only to Python code.
Builtin functions do not use the recursion limit, but are protected by a different mechanism that prevents recursion from causing a virtual machine crash.
tempfile¶
The tempfile.NamedTemporaryFile function has a new optional parameter delete_on_close. (Contributed by Evgeny Zorin in gh-58451.)
tempfile.mkdtemp() now always returns an absolute path, even if the argument provided to the dir parameter is a relative path.
threading¶
Add threading.settrace_all_threads() and threading.setprofile_all_threads() that allow setting tracing and profiling functions in all running threads in addition to the calling one. (Contributed by Pablo Galindo in gh-93503.)
tkinter¶
tkinter.Canvas.coords() now flattens its arguments. It now accepts not only coordinates as separate arguments (x1, y1, x2, y2, ...) and a sequence of coordinates ([x1, y1, x2, y2, ...]), but also coordinates grouped in pairs ((x1, y1), (x2, y2), ... and [(x1, y1), (x2, y2), ...]), like the create_*() methods. (Contributed by Serhiy Storchaka in gh-94473.)
tokenize¶
The tokenize module includes the changes introduced in PEP 701. (Contributed by Marta Gómez Macías and Pablo Galindo in gh-102856.) See Porting to Python 3.12 for more information on the changes to the tokenize module.
types¶
Add types.get_original_bases() to allow for further introspection of user-defined generic types when subclassed. (Contributed by James Hilton-Balfe and Alex Waygood in gh-101827.)
typing¶
isinstance() checks against runtime-checkable protocols now use inspect.getattr_static() rather than hasattr() to look up whether attributes exist. This means that descriptors and __getattr__() methods are no longer unexpectedly evaluated during isinstance() checks against runtime-checkable protocols.
However, it may also mean that some objects which used to be considered instances of a runtime-checkable protocol may no longer be considered instances of that protocol on Python 3.12+, and vice versa. Most users are unlikely to be affected by this change. (Contributed by Alex Waygood in gh-102433.)
The members of a runtime-checkable protocol are now considered "frozen" at runtime as soon as the class has been created. Monkey-patching attributes onto a runtime-checkable protocol will still work, but will have no impact on isinstance() checks comparing objects to the protocol. For example:

>>> from typing import Protocol, runtime_checkable
>>> @runtime_checkable
... class HasX(Protocol):
...     x = 1
...
>>> class Foo: ...
...
>>> f = Foo()
>>> isinstance(f, HasX)
False
>>> f.x = 1
>>> isinstance(f, HasX)
True
>>> HasX.y = 2
>>> isinstance(f, HasX)  # unchanged, even though HasX now also has a "y" attribute
True

This change was made in order to speed up isinstance() checks against runtime-checkable protocols.
The performance profile of isinstance() checks against runtime-checkable protocols has changed significantly. Most isinstance() checks against protocols with only a few members should be at least 2x faster than in 3.11, and some may be 20x faster or more. However, isinstance() checks against protocols with many members may be slower than in Python 3.11. (Contributed by Alex Waygood in gh-74690 and gh-103193.)
All typing.TypedDict and typing.NamedTuple classes now have the __orig_bases__ attribute. (Contributed by Adrian Garcia Badaracco in gh-103699.)
Add frozen_default parameter to typing.dataclass_transform(). (Contributed by Erik De Bonte in gh-99957.)
unicodedata¶
The Unicode database has been updated to version 15.0.0.
(Contributed by Benjamin Peterson in gh-96734).\nunittest\u00b6\nAdd a --durations\ncommand line option, showing the N slowest test cases:\npython3 -m unittest --durations=3 lib.tests.test_threading\n.....\nSlowest test durations\n----------------------------------------------------------------------\n1.210s test_timeout (Lib.test.test_threading.BarrierTests)\n1.003s test_default_timeout (Lib.test.test_threading.BarrierTests)\n0.518s test_timeout (Lib.test.test_threading.EventTests)\n(0.000 durations hidden. Use -v to show these durations.)\n----------------------------------------------------------------------\nRan 158 tests in 9.869s\nOK (skipped=3)\n(Contributed by Giampaolo Rodola in gh-48330)\nuuid\u00b6\nAdd a command-line interface. (Contributed by Adam Chhina in gh-88597.)\nOptimizations\u00b6\nRemove\nwstr\nandwstr_length\nmembers from Unicode objects. It reduces object size by 8 or 16 bytes on 64bit platform. (PEP 623) (Contributed by Inada Naoki in gh-92536.)Add experimental support for using the BOLT binary optimizer in the build process, which improves performance by 1-5%. (Contributed by Kevin Modzelewski in gh-90536 and tuned by Donghee Na in gh-101525)\nSpeed up the regular expression substitution (functions\nre.sub()\nandre.subn()\nand correspondingre.Pattern\nmethods) for replacement strings containing group references by 2\u20133 times. (Contributed by Serhiy Storchaka in gh-91524.)Speed up\nasyncio.Task\ncreation by deferring expensive string formatting. (Contributed by Itamar Oren in gh-103793.)The\ntokenize.tokenize()\nandtokenize.generate_tokens()\nfunctions are up to 64% faster as a side effect of the changes required to cover PEP 701 in thetokenize\nmodule. (Contributed by Marta G\u00f3mez Mac\u00edas and Pablo Galindo in gh-102856.)Speed up\nsuper()\nmethod calls and attribute loads via the newLOAD_SUPER_ATTR\ninstruction. 
(Contributed by Carl Meyer and Vladimir Matveev in gh-103497.)
CPython bytecode changes¶
Remove the LOAD_METHOD instruction. It has been merged into LOAD_ATTR. LOAD_ATTR will now behave like the old LOAD_METHOD instruction if the low bit of its oparg is set. (Contributed by Ken Jin in gh-93429.)
Remove the JUMP_IF_FALSE_OR_POP and JUMP_IF_TRUE_OR_POP instructions. (Contributed by Irit Katriel in gh-102859.)
Remove the PRECALL instruction. (Contributed by Mark Shannon in gh-92925.)
Add the BINARY_SLICE and STORE_SLICE instructions. (Contributed by Mark Shannon in gh-94163.)
Add the CALL_INTRINSIC_1 instruction. (Contributed by Mark Shannon in gh-99005.)
Add the CALL_INTRINSIC_2 instruction. (Contributed by Irit Katriel in gh-101799.)
Add the CLEANUP_THROW instruction. (Contributed by Brandt Bucher in gh-90997.)
Add the END_SEND instruction. (Contributed by Mark Shannon in gh-103082.)
Add the LOAD_FAST_AND_CLEAR instruction as part of the implementation of PEP 709. (Contributed by Carl Meyer in gh-101441.)
Add the LOAD_FAST_CHECK instruction. (Contributed by Dennis Sweeney in gh-93143.)
Add the LOAD_FROM_DICT_OR_DEREF, LOAD_FROM_DICT_OR_GLOBALS, and LOAD_LOCALS opcodes as part of the implementation of PEP 695. Remove the LOAD_CLASSDEREF opcode, which can be replaced with LOAD_LOCALS plus LOAD_FROM_DICT_OR_DEREF. (Contributed by Jelle Zijlstra in gh-103764.)
Add the LOAD_SUPER_ATTR instruction. (Contributed by Carl Meyer and Vladimir Matveev in gh-103497.)
Add the RETURN_CONST instruction. (Contributed by Wenyang Wang in gh-101632.)
Demos and Tools¶
Remove the Tools/demo/ directory which contained old demo scripts. A copy can be found in the old-demos project. (Contributed by Victor Stinner in gh-97681.)
Remove outdated example scripts from the Tools/scripts/ directory. A copy can be found in the old-demos project.
(Contributed by Victor Stinner in gh-97669.)\nDeprecated\u00b6\nargparse\n: The type, choices, and metavar parameters ofargparse.BooleanOptionalAction\nare deprecated and will be removed in 3.14. (Contributed by Nikita Sobolev in gh-92248.)ast\n: The followingast\nfeatures have been deprecated in documentation since Python 3.8, now cause aDeprecationWarning\nto be emitted at runtime when they are accessed or used, and will be removed in Python 3.14:ast.Num\nast.Str\nast.Bytes\nast.NameConstant\nast.Ellipsis\nUse\nast.Constant\ninstead. (Contributed by Serhiy Storchaka in gh-90953.)-\nThe child watcher classes\nasyncio.MultiLoopChildWatcher\n,asyncio.FastChildWatcher\n,asyncio.AbstractChildWatcher\nandasyncio.SafeChildWatcher\nare deprecated and will be removed in Python 3.14. (Contributed by Kumar Aditya in gh-94597.)asyncio.set_child_watcher()\n,asyncio.get_child_watcher()\n,asyncio.AbstractEventLoopPolicy.set_child_watcher()\nandasyncio.AbstractEventLoopPolicy.get_child_watcher()\nare deprecated and will be removed in Python 3.14. (Contributed by Kumar Aditya in gh-94597.)The\nget_event_loop()\nmethod of the default event loop policy now emits aDeprecationWarning\nif there is no current event loop set and it decides to create one. (Contributed by Serhiy Storchaka and Guido van Rossum in gh-100160.)\ncalendar\n:calendar.January\nandcalendar.February\nconstants are deprecated and replaced bycalendar.JANUARY\nandcalendar.FEBRUARY\n. (Contributed by Prince Roshan in gh-103636.)collections.abc\n: Deprecatedcollections.abc.ByteString\n.Use\nisinstance(obj, collections.abc.Buffer)\nto test ifobj\nimplements the buffer protocol at runtime. For use in type annotations, either useBuffer\nor a union that explicitly specifies the types your code supports (e.g.,bytes | bytearray | memoryview\n).ByteString\nwas originally intended to be an abstract class that would serve as a supertype of bothbytes\nandbytearray\n. 
However, since the ABC never had any methods, knowing that an object was an instance ofByteString\nnever actually told you anything useful about the object. Other common buffer types such asmemoryview\nwere also never understood as subtypes ofByteString\n(either at runtime or by static type checkers).See PEP 688 for more details. (Contributed by Shantanu Jain in gh-91896.)\ndatetime\n:datetime.datetime\n\u2019sutcnow()\nandutcfromtimestamp()\nare deprecated and will be removed in a future version. Instead, use timezone-aware objects to represent datetimes in UTC: respectively, callnow()\nandfromtimestamp()\nwith the tz parameter set todatetime.UTC\n. (Contributed by Paul Ganssle in gh-103857.)email\n: Deprecate the isdst parameter inemail.utils.localtime()\n. (Contributed by Alan Williams in gh-72346.)importlib.abc\n: Deprecated the following classes, scheduled for removal in Python 3.14:importlib.abc.ResourceReader\nimportlib.abc.Traversable\nimportlib.abc.TraversableResources\nUse\nimportlib.resources.abc\nclasses instead:(Contributed by Jason R. Coombs and Hugo van Kemenade in gh-93963.)\nitertools\n: Deprecate the support for copy, deepcopy, and pickle operations, which is undocumented, inefficient, historically buggy, and inconsistent. This will be removed in 3.14 for a significant reduction in code volume and maintenance burden. (Contributed by Raymond Hettinger in gh-101588.)multiprocessing\n: In Python 3.14, the defaultmultiprocessing\nstart method will change to a safer one on Linux, BSDs, and other non-macOS POSIX platforms where'fork'\nis currently the default (gh-84559). Adding a runtime warning about this was deemed too disruptive as the majority of code is not expected to care. Use theget_context()\norset_start_method()\nAPIs to explicitly specify when your code requires'fork'\n. 
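A minimal sketch of pinning the multiprocessing start method explicitly, so that the Python 3.14 default change cannot affect your code (request "fork" here instead if your code genuinely requires it):

```python
import multiprocessing

# Ask for a specific start method instead of relying on the platform
# default, which will change on non-macOS POSIX platforms in 3.14.
ctx = multiprocessing.get_context("spawn")
print(ctx.get_start_method())  # spawn
```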
See contexts and start methods.\npkgutil\n:\npkgutil.find_loader()\nand\npkgutil.get_loader()\nare deprecated and will be removed in Python 3.14; use\nimportlib.util.find_spec()\ninstead. (Contributed by Nikita Sobolev in gh-97850.)\npty\n: The module has two undocumented functions,\nmaster_open()\nand\nslave_open()\n, that have been deprecated since Python 2 but only gained a proper\nDeprecationWarning\nin 3.12. They will be removed in 3.14. (Contributed by Soumendra Ganguly and Gregory P. Smith in gh-85984.)\nos\n: The\nst_ctime\nfields returned by\nos.stat()\nand\nos.lstat()\non Windows are deprecated. In a future release, they will contain the last metadata change time, consistent with other platforms. For now, they still contain the creation time, which is also available in the new\nst_birthtime\nfield. (Contributed by Steve Dower in gh-99726.) On POSIX platforms,\nos.fork()\ncan now raise a\nDeprecationWarning\nwhen it can detect being called from a multithreaded process. There has always been a fundamental incompatibility with the POSIX platform when doing so, even if such code appeared to work. We added the warning to raise awareness, as issues encountered by code doing this are becoming more frequent. See the\nos.fork()\ndocumentation for more details, along with this discussion on fork being incompatible with threads, for why we\u2019re now surfacing this longstanding platform compatibility problem to developers.\nWhen this warning appears due to usage of\nmultiprocessing\nor\nconcurrent.futures\n, the fix is to use a different\nmultiprocessing\nstart method such as\n"spawn"\nor\n"forkserver"\n.\nshutil\n: The onerror argument of\nshutil.rmtree()\nis deprecated; use onexc instead. (Contributed by Irit Katriel in gh-102828.)\nsqlite3\n: The default adapters and converters are now deprecated. Instead, use the Adapter and converter recipes and tailor them to your needs. (Contributed by Erlend E. 
Aasland in gh-90016.)\nIn\nexecute()\n,DeprecationWarning\nis now emitted when named placeholders are used together with parameters supplied as a sequence instead of as adict\n. Starting from Python 3.14, using named placeholders with parameters supplied as a sequence will raise aProgrammingError\n. (Contributed by Erlend E. Aasland in gh-101698.)\nsys\n: Thesys.last_type\n,sys.last_value\nandsys.last_traceback\nfields are deprecated. Usesys.last_exc\ninstead. (Contributed by Irit Katriel in gh-102778.)tarfile\n: Extracting tar archives without specifying filter is deprecated until Python 3.14, when'data'\nfilter will become the default. See Extraction filters for details.-\ntyping.Hashable\nandtyping.Sized\n, aliases forcollections.abc.Hashable\nandcollections.abc.Sized\nrespectively, are deprecated. (gh-94309.)typing.ByteString\n, deprecated since Python 3.9, now causes aDeprecationWarning\nto be emitted when it is used. (Contributed by Alex Waygood in gh-91896.)\nxml.etree.ElementTree\n: The module now emitsDeprecationWarning\nwhen testing the truth value of anxml.etree.ElementTree.Element\n. Before, the Python implementation emittedFutureWarning\n, and the C implementation emitted nothing. (Contributed by Jacob Walls in gh-83122.)The 3-arg signatures (type, value, traceback) of\ncoroutine throw()\n,generator throw()\nandasync generator throw()\nare deprecated and may be removed in a future version of Python. Use the single-arg versions of these functions instead. (Contributed by Ofey Chan in gh-89874.)DeprecationWarning\nis now raised when__package__\non a module differs from__spec__.parent\n(previously it wasImportWarning\n). (Contributed by Brett Cannon in gh-65961.)Setting\n__package__\nor__cached__\non a module is deprecated, and will cease to be set or taken into consideration by the import system in Python 3.14. (Contributed by Brett Cannon in gh-65961.)The bitwise inversion operator (\n~\n) on bool is deprecated. It will throw an error in Python 3.16. 
Usenot\nfor logical negation of bools instead. In the rare case that you really need the bitwise inversion of the underlyingint\n, convert to int explicitly:~int(x)\n. (Contributed by Tim Hoffmann in gh-103487.)Accessing\nco_lnotab\non code objects was deprecated in Python 3.10 via PEP 626, but it only got a properDeprecationWarning\nin 3.12. May be removed in 3.15. (Contributed by Nikita Sobolev in gh-101866.)\nPending removal in Python 3.13\u00b6\nModules (see PEP 594):\naifc\naudioop\ncgi\ncgitb\nchunk\ncrypt\nimghdr\nmailcap\nmsilib\nnis\nnntplib\nossaudiodev\npipes\nsndhdr\nspwd\nsunau\ntelnetlib\nuu\nxdrlib\nOther modules:\nlib2to3\n, and the 2to3 program (gh-84540)\nAPIs:\nconfigparser.LegacyInterpolation\n(gh-90765)locale.resetlocale()\n(gh-90817)turtle.RawTurtle.settiltangle()\n(gh-50096)unittest.findTestCases()\n(gh-50096)unittest.getTestCaseNames()\n(gh-50096)unittest.makeSuite()\n(gh-50096)unittest.TestProgram.usageExit()\n(gh-67048)webbrowser.MacOSX\n(gh-86421)classmethod\ndescriptor chaining (gh-89519)\nPending removal in Python 3.14\u00b6\nargparse\n: The type, choices, and metavar parameters ofargparse.BooleanOptionalAction\nare deprecated and will be removed in 3.14. (Contributed by Nikita Sobolev in gh-92248.)ast\n: The following features have been deprecated in documentation since Python 3.8, now cause aDeprecationWarning\nto be emitted at runtime when they are accessed or used, and will be removed in Python 3.14:ast.Num\nast.Str\nast.Bytes\nast.NameConstant\nast.Ellipsis\nUse\nast.Constant\ninstead. (Contributed by Serhiy Storchaka in gh-90953.)-\nThe child watcher classes\nasyncio.MultiLoopChildWatcher\n,asyncio.FastChildWatcher\n,asyncio.AbstractChildWatcher\nandasyncio.SafeChildWatcher\nare deprecated and will be removed in Python 3.14. 
(Contributed by Kumar Aditya in gh-94597.)asyncio.set_child_watcher()\n,asyncio.get_child_watcher()\n,asyncio.AbstractEventLoopPolicy.set_child_watcher()\nandasyncio.AbstractEventLoopPolicy.get_child_watcher()\nare deprecated and will be removed in Python 3.14. (Contributed by Kumar Aditya in gh-94597.)The\nget_event_loop()\nmethod of the default event loop policy now emits aDeprecationWarning\nif there is no current event loop set and it decides to create one. (Contributed by Serhiy Storchaka and Guido van Rossum in gh-100160.)\nemail\n: Deprecated the isdst parameter inemail.utils.localtime()\n. (Contributed by Alan Williams in gh-72346.)importlib.abc\ndeprecated classes:importlib.abc.ResourceReader\nimportlib.abc.Traversable\nimportlib.abc.TraversableResources\nUse\nimportlib.resources.abc\nclasses instead:(Contributed by Jason R. Coombs and Hugo van Kemenade in gh-93963.)\nitertools\nhad undocumented, inefficient, historically buggy, and inconsistent support for copy, deepcopy, and pickle operations. This will be removed in 3.14 for a significant reduction in code volume and maintenance burden. (Contributed by Raymond Hettinger in gh-101588.)multiprocessing\n: The default start method will change to a safer one on Linux, BSDs, and other non-macOS POSIX platforms where'fork'\nis currently the default (gh-84559). Adding a runtime warning about this was deemed too disruptive as the majority of code is not expected to care. Use theget_context()\norset_start_method()\nAPIs to explicitly specify when your code requires'fork'\n. See Contexts and start methods.pathlib\n:is_relative_to()\nandrelative_to()\n: passing additional arguments is deprecated.pkgutil\n:pkgutil.find_loader()\nandpkgutil.get_loader()\nnow raiseDeprecationWarning\n; useimportlib.util.find_spec()\ninstead. 
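A sketch of the documented replacement for the deprecated pkgutil loader helpers, looked up here against the stdlib json module:

```python
import importlib.util

# importlib.util.find_spec() replaces the deprecated
# pkgutil.find_loader() / pkgutil.get_loader() functions.
spec = importlib.util.find_spec("json")
print(spec.name)                 # json
print(spec.loader is not None)   # True
```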
(Contributed by Nikita Sobolev in gh-97850.)\npty\n:\nmaster_open()\n: use\npty.openpty()\n.\nslave_open()\n: use\npty.openpty()\n.\nsqlite3\n:\nversion\nand\nversion_info\n.\nexecute()\nand\nexecutemany()\nif named placeholders are used and parameters is a sequence instead of a\ndict\n.\nurllib\n:\nurllib.parse.Quoter\nis deprecated: it was not intended to be a public API. (Contributed by Gregory P. Smith in gh-88168.)\nPending removal in Python 3.15\u00b6\nThe import system:\nSetting\n__cached__\non a module while failing to set\n__spec__.cached\nis deprecated. In Python 3.15,\n__cached__\nwill cease to be set or taken into consideration by the import system or standard library. (gh-97879) Setting\n__package__\non a module while failing to set\n__spec__.parent\nis deprecated. In Python 3.15,\n__package__\nwill cease to be set or taken into consideration by the import system or standard library. (gh-97879)\n-\nThe undocumented\nctypes.SetPointerType()\nfunction has been deprecated since Python 3.13.\n-\nThe obsolete and rarely used\nCGIHTTPRequestHandler\nhas been deprecated since Python 3.13. No direct replacement exists; anything is better than CGI for interfacing a web server with a request handler. The\n--cgi\nflag to the python -m http.server command-line interface has been deprecated since Python 3.13.\n-\nThe\nload_module()\nmethod: use\nexec_module()\ninstead.\n-\nThe\ngetdefaultlocale()\nfunction has been deprecated since Python 3.11. Its removal was originally planned for Python 3.13 (gh-90817), but has been postponed to Python 3.15. Use\ngetlocale()\n,\nsetlocale()\n, and\ngetencoding()\ninstead. (Contributed by Hugo van Kemenade in gh-111187.)\n-\nPurePath.is_reserved()\nhas been deprecated since Python 3.13. Use\nos.path.isreserved()\nto detect reserved paths on Windows.\n-\njava_ver()\nhas been deprecated since Python 3.13. 
This function is only useful for Jython support, has a confusing API, and is largely untested.\n-\nThe check_home argument of\nsysconfig.is_python_build()\nhas been deprecated since Python 3.12.\n-\nRLock()\nwill take no arguments in Python 3.15. Passing any arguments has been deprecated since Python 3.14, as the Python version does not permit any arguments, but the C version allows any number of positional or keyword arguments, ignoring every argument.\n-\ntypes.CodeType\n: Accessingco_lnotab\nwas deprecated in PEP 626 since 3.10 and was planned to be removed in 3.12, but it only got a properDeprecationWarning\nin 3.12. May be removed in 3.15. (Contributed by Nikita Sobolev in gh-101866.)\n-\nThe undocumented keyword argument syntax for creating\nNamedTuple\nclasses (for example,Point = NamedTuple(\"Point\", x=int, y=int)\n) has been deprecated since Python 3.13. Use the class-based syntax or the functional syntax instead.When using the functional syntax of\nTypedDict\ns, failing to pass a value to the fields parameter (TD = TypedDict(\"TD\")\n) or passingNone\n(TD = TypedDict(\"TD\", None)\n) has been deprecated since Python 3.13. Useclass TD(TypedDict): pass\norTD = TypedDict(\"TD\", {})\nto create a TypedDict with zero field.The\ntyping.no_type_check_decorator()\ndecorator function has been deprecated since Python 3.13. After eight years in thetyping\nmodule, it has yet to be supported by any major type checker.\nwave\n:The\ngetmark()\n,setmark()\n, andgetmarkers()\nmethods of theWave_read\nandWave_write\nclasses have been deprecated since Python 3.13.\n-\nload_module()\nhas been deprecated since Python 3.10. Useexec_module()\ninstead. (Contributed by Jiahao Li in gh-125746.)\nPending removal in Python 3.16\u00b6\nThe import system:\nSetting\n__loader__\non a module while failing to set__spec__.loader\nis deprecated. 
In Python 3.16,__loader__\nwill cease to be set or taken into consideration by the import system or the standard library.\n-\nThe\n'u'\nformat code (wchar_t\n) has been deprecated in documentation since Python 3.3 and at runtime since Python 3.13. Use the'w'\nformat code (Py_UCS4\n) for Unicode characters instead.\n-\nasyncio.iscoroutinefunction()\nis deprecated and will be removed in Python 3.16; useinspect.iscoroutinefunction()\ninstead. (Contributed by Jiahao Li and Kumar Aditya in gh-122875.)asyncio\npolicy system is deprecated and will be removed in Python 3.16. In particular, the following classes and functions are deprecated:Users should use\nasyncio.run()\norasyncio.Runner\nwith loop_factory to use the desired event loop implementation.For example, to use\nasyncio.SelectorEventLoop\non Windows:import asyncio async def main(): ... asyncio.run(main(), loop_factory=asyncio.SelectorEventLoop)\n(Contributed by Kumar Aditya in gh-127949.)\n-\nBitwise inversion on boolean types,\n~True\nor~False\nhas been deprecated since Python 3.12, as it produces surprising and unintuitive results (-2\nand-1\n). Usenot x\ninstead for the logical negation of a Boolean. In the rare case that you need the bitwise inversion of the underlying integer, convert toint\nexplicitly (~int(x)\n).\n-\nCalling the Python implementation of\nfunctools.reduce()\nwith function or sequence as keyword arguments has been deprecated since Python 3.14.\n-\nSupport for custom logging handlers with the strm argument is deprecated and scheduled for removal in Python 3.16. Define handlers with the stream argument instead. (Contributed by Mariusz Felisiak in gh-115032.)\n-\nValid extensions start with a \u2018.\u2019 or are empty for\nmimetypes.MimeTypes.add_type()\n. Undotted extensions are deprecated and will raise aValueError\nin Python 3.16. (Contributed by Hugo van Kemenade in gh-75223.)\n-\nThe\nExecError\nexception has been deprecated since Python 3.14. 
It has not been used by any function inshutil\nsince Python 3.4, and is now an alias ofRuntimeError\n.\n-\nThe\nClass.get_methods\nmethod has been deprecated since Python 3.14.\nsys\n:The\n_enablelegacywindowsfsencoding()\nfunction has been deprecated since Python 3.13. Use thePYTHONLEGACYWINDOWSFSENCODING\nenvironment variable instead.\n-\nThe\nsysconfig.expand_makefile_vars()\nfunction has been deprecated since Python 3.14. Use thevars\nargument ofsysconfig.get_paths()\ninstead.\n-\nThe undocumented and unused\nTarFile.tarfile\nattribute has been deprecated since Python 3.13.\nPending removal in Python 3.17\u00b6\n-\ncollections.abc.ByteString\nis scheduled for removal in Python 3.17.Use\nisinstance(obj, collections.abc.Buffer)\nto test ifobj\nimplements the buffer protocol at runtime. For use in type annotations, either useBuffer\nor a union that explicitly specifies the types your code supports (e.g.,bytes | bytearray | memoryview\n).ByteString\nwas originally intended to be an abstract class that would serve as a supertype of bothbytes\nandbytearray\n. However, since the ABC never had any methods, knowing that an object was an instance ofByteString\nnever actually told you anything useful about the object. Other common buffer types such asmemoryview\nwere also never understood as subtypes ofByteString\n(either at runtime or by static type checkers).See PEP 688 for more details. (Contributed by Shantanu Jain in gh-91896.)\n-\nBefore Python 3.14, old-style unions were implemented using the private class\ntyping._UnionGenericAlias\n. This class is no longer needed for the implementation, but it has been retained for backward compatibility, with removal scheduled for Python 3.17. 
Users should use documented introspection helpers liketyping.get_origin()\nandtyping.get_args()\ninstead of relying on private implementation details.typing.ByteString\n, deprecated since Python 3.9, is scheduled for removal in Python 3.17.Use\nisinstance(obj, collections.abc.Buffer)\nto test ifobj\nimplements the buffer protocol at runtime. For use in type annotations, either useBuffer\nor a union that explicitly specifies the types your code supports (e.g.,bytes | bytearray | memoryview\n).ByteString\nwas originally intended to be an abstract class that would serve as a supertype of bothbytes\nandbytearray\n. However, since the ABC never had any methods, knowing that an object was an instance ofByteString\nnever actually told you anything useful about the object. Other common buffer types such asmemoryview\nwere also never understood as subtypes ofByteString\n(either at runtime or by static type checkers).See PEP 688 for more details. (Contributed by Shantanu Jain in gh-91896.)\nPending removal in future versions\u00b6\nThe following APIs will be removed in the future, although there is currently no date scheduled for their removal.\n-\nNesting argument groups and nesting mutually exclusive groups are deprecated.\nPassing the undocumented keyword argument prefix_chars to\nadd_argument_group()\nis now deprecated.The\nargparse.FileType\ntype converter is deprecated.\n-\nGenerators:\nthrow(type, exc, tb)\nandathrow(type, exc, tb)\nsignature is deprecated: usethrow(exc)\nandathrow(exc)\ninstead, the single argument signature.Currently Python accepts numeric literals immediately followed by keywords, for example\n0in x\n,1or x\n,0if 1else 2\n. It allows confusing and ambiguous expressions like[0x1for x in y]\n(which can be interpreted as[0x1 for x in y]\nor[0x1f or x in y]\n). A syntax warning is raised if the numeric literal is immediately followed by one of keywordsand\n,else\n,for\n,if\n,in\n,is\nandor\n. In a future release it will be changed to a syntax error. 
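The numeric-literal ambiguity is easiest to see with the spacing written out. The same character run 0x1for can begin either of the following two expressions (the variable names here are illustrative):

```python
y = [1, 2]

# "[0x1for x in y]" could be parsed as either of these; write the
# space explicitly so the intended parse is unambiguous:
a = [0x1 for x in y]   # a list comprehension producing [1, 1]
b = [0x1f or x in y]   # a one-element list; "or" short-circuits on 0x1f
print(a, b)  # [1, 1] [31]
```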
(gh-87999)Support for\n__index__()\nand__int__()\nmethod returning non-int type: these methods will be required to return an instance of a strict subclass ofint\n.Support for\n__float__()\nmethod returning a strict subclass offloat\n: these methods will be required to return an instance offloat\n.Support for\n__complex__()\nmethod returning a strict subclass ofcomplex\n: these methods will be required to return an instance ofcomplex\n.Passing a complex number as the real or imag argument in the\ncomplex()\nconstructor is now deprecated; it should only be passed as a single positional argument. (Contributed by Serhiy Storchaka in gh-109218.)\ncalendar\n:calendar.January\nandcalendar.February\nconstants are deprecated and replaced bycalendar.JANUARY\nandcalendar.FEBRUARY\n. (Contributed by Prince Roshan in gh-103636.)codecs\n: useopen()\ninstead ofcodecs.open()\n. (gh-133038)codeobject.co_lnotab\n: use thecodeobject.co_lines()\nmethod instead.-\nutcnow()\n: usedatetime.datetime.now(tz=datetime.UTC)\n.utcfromtimestamp()\n: usedatetime.datetime.fromtimestamp(timestamp, tz=datetime.UTC)\n.\ngettext\n: Plural value must be an integer.-\ncache_from_source()\ndebug_override parameter is deprecated: use the optimization parameter instead.\n-\nEntryPoints\ntuple interface.Implicit\nNone\non return values.\nlogging\n: thewarn()\nmethod has been deprecated since Python 3.3, usewarning()\ninstead.mailbox\n: Use of StringIO input and text mode is deprecated, use BytesIO and binary mode instead.os\n: Callingos.register_at_fork()\nin multi-threaded process.pydoc.ErrorDuringImport\n: A tuple value for exc_info parameter is deprecated, use an exception instance.re\n: More strict rules are now applied for numerical group references and group names in regular expressions. Only sequence of ASCII digits is now accepted as a numerical reference. The group name in bytes patterns and replacement strings can now only contain ASCII letters and digits and underscore. 
(Contributed by Serhiy Storchaka in gh-91760.)sre_compile\n,sre_constants\nandsre_parse\nmodules.shutil\n:rmtree()\n\u2019s onerror parameter is deprecated in Python 3.12; use the onexc parameter instead.ssl\noptions and protocols:ssl.SSLContext\nwithout protocol argument is deprecated.ssl.SSLContext\n:set_npn_protocols()\nandselected_npn_protocol()\nare deprecated: use ALPN instead.ssl.OP_NO_SSL*\noptionsssl.OP_NO_TLS*\noptionsssl.PROTOCOL_SSLv3\nssl.PROTOCOL_TLS\nssl.PROTOCOL_TLSv1\nssl.PROTOCOL_TLSv1_1\nssl.PROTOCOL_TLSv1_2\nssl.TLSVersion.SSLv3\nssl.TLSVersion.TLSv1\nssl.TLSVersion.TLSv1_1\nthreading\nmethods:threading.Condition.notifyAll()\n: usenotify_all()\n.threading.Event.isSet()\n: useis_set()\n.threading.Thread.isDaemon()\n,threading.Thread.setDaemon()\n: usethreading.Thread.daemon\nattribute.threading.Thread.getName()\n,threading.Thread.setName()\n: usethreading.Thread.name\nattribute.threading.currentThread()\n: usethreading.current_thread()\n.threading.activeCount()\n: usethreading.active_count()\n.\nThe internal class\ntyping._UnionGenericAlias\nis no longer used to implementtyping.Union\n. To preserve compatibility with users using this private class, a compatibility shim will be provided until at least Python 3.17. (Contributed by Jelle Zijlstra in gh-105499.)unittest.IsolatedAsyncioTestCase\n: it is deprecated to return a value that is notNone\nfrom a test case.urllib.parse\ndeprecated functions:urlparse()\ninsteadsplitattr()\nsplithost()\nsplitnport()\nsplitpasswd()\nsplitport()\nsplitquery()\nsplittag()\nsplittype()\nsplituser()\nsplitvalue()\nto_bytes()\nwsgiref\n:SimpleHandler.stdout.write()\nshould not do partial writes.xml.etree.ElementTree\n: Testing the truth value of anElement\nis deprecated. In a future release it will always returnTrue\n. 
Prefer explicit\nlen(elem)\nor\nelem is not None\ntests instead.\nsys._clear_type_cache()\nis deprecated: use\nsys._clear_internal_caches()\ninstead.\nRemoved\u00b6\nasynchat and asyncore\u00b6\nconfigparser\u00b6\nSeveral names deprecated in the\nconfigparser\nmodule way back in 3.2 have been removed per gh-89336:\nconfigparser.ParsingError\nno longer has a\nfilename\nattribute or argument. Use the\nsource\nattribute and argument instead.\nconfigparser\nno longer has a\nSafeConfigParser\nclass. Use the shorter\nConfigParser\nname instead.\nconfigparser.ConfigParser\nno longer has a\nreadfp\nmethod. Use\nread_file()\ninstead.\ndistutils\u00b6\nensurepip\u00b6\nRemove the bundled setuptools wheel from\nensurepip\n, and stop installing setuptools in environments created by\nvenv\n.\npip (>= 22.1)\ndoes not require setuptools to be installed in the environment.\nsetuptools\n-based (and\ndistutils\n-based) packages can still be used with\npip install\n, since pip will provide\nsetuptools\nin the build environment it uses for building a package.\neasy_install\n,\npkg_resources\n,\nsetuptools\nand\ndistutils\nare no longer provided by default in environments created with\nvenv\nor bootstrapped with\nensurepip\n, since they are part of the\nsetuptools\npackage. For projects relying on these at runtime, the\nsetuptools\nproject should be declared as a dependency and installed separately (typically, using pip). (Contributed by Pradyun Gedam in gh-95299.)\nenum\u00b6\nftplib\u00b6\ngzip\u00b6\nRemove the\nfilename\nattribute of\ngzip\n\u2019s\ngzip.GzipFile\n, deprecated since Python 2.6; use the\nname\nattribute instead. In write mode, the\nfilename\nattribute appended the\n'.gz'\nfile extension if it was not present. (Contributed by Victor Stinner in gh-94196.)\nhashlib\u00b6\nRemove the pure Python implementation of\nhashlib\n\u2019s\nhashlib.pbkdf2_hmac()\n, deprecated in Python 3.10. Python 3.10 and newer require OpenSSL 1.1.1 (PEP 644): this OpenSSL version provides a C implementation of\npbkdf2_hmac()\nwhich is faster. 
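For callers, hashlib.pbkdf2_hmac() is unchanged; only the slow pure-Python fallback is gone. A minimal key-derivation example (toy password and salt, for illustration only):

```python
import hashlib

# Derive a key; this now always uses OpenSSL's C implementation.
# The default key length equals the digest size (32 bytes for SHA-256).
key = hashlib.pbkdf2_hmac("sha256", b"password", b"salt", 100_000)
print(len(key))  # 32
```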
(Contributed by Victor Stinner in gh-94199.)\nimportlib\u00b6\nMany previously deprecated cleanups in\nimportlib\nhave now been completed: References to, and support for,\nmodule_repr()\nhave been removed. (Contributed by Barry Warsaw in gh-97850.)\nimportlib.util.set_package\n,\nimportlib.util.set_loader\nand\nimportlib.util.module_for_loader\nhave all been removed. (Contributed by Brett Cannon and Nikita Sobolev in gh-65961 and gh-97850.) Support for the\nfind_loader()\nand\nfind_module()\nAPIs has been removed. (Contributed by Barry Warsaw in gh-98040.)\nimportlib.abc.Finder\n,\npkgutil.ImpImporter\n, and\npkgutil.ImpLoader\nhave been removed. (Contributed by Barry Warsaw in gh-98040.)\nimp\u00b6\nThe\nimp\nmodule has been removed. (Contributed by Barry Warsaw in gh-98040.) To migrate, consult the following correspondence table:\nimp\nimportlib\nimp.NullImporter\nInsert\nNone\ninto\nsys.path_importer_cache\nimp.cache_from_source()\nimportlib.util.cache_from_source()\nimp.find_module()\nimportlib.util.find_spec()\nimp.get_magic()\nimportlib.util.MAGIC_NUMBER\nimp.get_suffixes()\nimportlib.machinery.SOURCE_SUFFIXES\n,\nimportlib.machinery.EXTENSION_SUFFIXES\n, and\nimportlib.machinery.BYTECODE_SUFFIXES\nimp.get_tag()\nsys.implementation.cache_tag\nimp.load_module()\nimportlib.import_module()\nimp.new_module(name)\ntypes.ModuleType(name)\nimp.reload()\nimportlib.reload()\nimp.source_from_cache()\nimportlib.util.source_from_cache()\nimp.load_source()\nSee below\nReplace\nimp.load_source()\nwith:\nimport importlib.util\nimport importlib.machinery\n\ndef load_source(modname, filename):\n    loader = importlib.machinery.SourceFileLoader(modname, filename)\n    spec = importlib.util.spec_from_file_location(modname, filename, loader=loader)\n    module = importlib.util.module_from_spec(spec)\n    # The module is always executed and not cached in sys.modules.\n    # Uncomment the following line to cache the module.\n    # sys.modules[module.__name__] = module\n    loader.exec_module(module)\n    return module\nRemove\nimp\nfunctions and attributes with no replacements: Undocumented functions:\nimp.init_builtin()\nimp.load_compiled()\nimp.load_dynamic()\nimp.load_package()\nimp.lock_held()\n,\nimp.acquire_lock()\n,\nimp.release_lock()\n: the locking scheme has changed in Python 3.3 to per-module locks.\nimp.find_module()\nconstants:\nSEARCH_ERROR\n,\nPY_SOURCE\n,\nPY_COMPILED\n,\nC_EXTENSION\n,\nPY_RESOURCE\n,\nPKG_DIRECTORY\n,\nC_BUILTIN\n,\nPY_FROZEN\n,\nPY_CODERESOURCE\n,\nIMP_HOOK\n.\nio\u00b6\nlocale\u00b6\nRemove\nlocale\n\u2019s\nlocale.format()\nfunction, deprecated in Python 3.7: use\nlocale.format_string()\ninstead. (Contributed by Victor Stinner in gh-94226.)\nsmtpd\u00b6\nsqlite3\u00b6\nThe following undocumented\nsqlite3\nfeatures, deprecated in Python 3.10, are now removed:\nsqlite3.enable_shared_cache()\nsqlite3.OptimizedUnicode\nIf a shared cache must be used, open the database in URI mode using the\ncache=shared\nquery parameter. The\nsqlite3.OptimizedUnicode\ntext factory has been an alias for\nstr\nsince Python 3.3. Code that previously set the text factory to\nOptimizedUnicode\ncan either use\nstr\nexplicitly, or rely on the default value, which is also\nstr\n. (Contributed by Erlend E. Aasland in gh-92548.)\nssl\u00b6\nRemove\nssl\n\u2019s\nssl.RAND_pseudo_bytes()\nfunction, deprecated in Python 3.6: use\nos.urandom()\nor\nssl.RAND_bytes()\ninstead. (Contributed by Victor Stinner in gh-94199.) Remove the\nssl.match_hostname()\nfunction, deprecated in Python 3.7. Since Python 3.7 OpenSSL performs hostname matching, so Python no longer uses the\nssl.match_hostname()\nfunction. (Contributed by Victor Stinner in gh-94199.) Remove the\nssl.wrap_socket()\nfunction, deprecated in Python 3.7: instead, create an\nssl.SSLContext\nobject and call its\nssl.SSLContext.wrap_socket\nmethod. Any package that still uses\nssl.wrap_socket()\nis broken and insecure. The function neither sends an SNI TLS extension nor validates the server hostname. 
Code is subject to CWE 295 (Improper Certificate Validation). (Contributed by Victor Stinner in gh-94199.)\nunittest\u00b6\nRemove many long-deprecated\nunittest\nfeatures:A number of\nTestCase\nmethod aliases:Deprecated alias\nMethod Name\nDeprecated in\nfailUnless\n3.1\nfailIf\n3.1\nfailUnlessEqual\n3.1\nfailIfEqual\n3.1\nfailUnlessAlmostEqual\n3.1\nfailIfAlmostEqual\n3.1\nfailUnlessRaises\n3.1\nassert_\n3.2\nassertEquals\n3.2\nassertNotEquals\n3.2\nassertAlmostEquals\n3.2\nassertNotAlmostEquals\n3.2\nassertRegexpMatches\n3.2\nassertRaisesRegexp\n3.2\nassertNotRegexpMatches\n3.5\nYou can use https://github.com/isidentical/teyit to automatically modernise your unit tests.\nUndocumented and broken\nTestCase\nmethodassertDictContainsSubset\n(deprecated in Python 3.2).Undocumented\nTestLoader.loadTestsFromModule\nparameter use_load_tests (deprecated and ignored since Python 3.5).An alias of the\nTextTestResult\nclass:_TextTestResult\n(deprecated in Python 3.2).\n(Contributed by Serhiy Storchaka in gh-89325.)\nwebbrowser\u00b6\nRemove support for obsolete browsers from\nwebbrowser\n. The removed browsers include: Grail, Mosaic, Netscape, Galeon, Skipstone, Iceape, Firebird, and Firefox versions 35 and below (gh-102871).\nxml.etree.ElementTree\u00b6\nRemove the\nElementTree.Element.copy()\nmethod of the pure Python implementation, deprecated in Python 3.10, use thecopy.copy()\nfunction instead. The C implementation ofxml.etree.ElementTree\nhas nocopy()\nmethod, only a__copy__()\nmethod. (Contributed by Victor Stinner in gh-94383.)\nzipimport\u00b6\nOthers\u00b6\nRemove the\nsuspicious\nrule from the documentationMakefile\nandDoc/tools/rstlint.py\n, both in favor of sphinx-lint. (Contributed by Julien Palard in gh-98179.)Remove the keyfile and certfile parameters from the\nftplib\n,imaplib\n,poplib\nandsmtplib\nmodules, and the key_file, cert_file and check_hostname parameters from thehttp.client\nmodule, all deprecated since Python 3.6. 
Use the context parameter (ssl_context inimaplib\n) instead. (Contributed by Victor Stinner in gh-94172.)Remove\nJython\ncompatibility hacks from several stdlib modules and tests. (Contributed by Nikita Sobolev in gh-99482.)Remove\n_use_broken_old_ctypes_structure_semantics_\nflag fromctypes\nmodule. (Contributed by Nikita Sobolev in gh-99285.)\nPorting to Python 3.12\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nChanges in the Python API\u00b6\nMore strict rules are now applied for numerical group references and group names in regular expressions. Only sequence of ASCII digits is now accepted as a numerical reference. The group name in bytes patterns and replacement strings can now only contain ASCII letters and digits and underscore. (Contributed by Serhiy Storchaka in gh-91760.)\nRemove\nrandrange()\nfunctionality deprecated since Python 3.10. Formerly,randrange(10.0)\nlosslessly converted torandrange(10)\n. Now, it raises aTypeError\n. Also, the exception raised for non-integer values such asrandrange(10.5)\norrandrange('10')\nhas been changed fromValueError\ntoTypeError\n. This also prevents bugs whererandrange(1e25)\nwould silently select from a larger range thanrandrange(10**25)\n. (Originally suggested by Serhiy Storchaka gh-86388.)argparse.ArgumentParser\nchanged encoding and error handler for reading arguments from file (e.g.fromfile_prefix_chars\noption) from default text encoding (e.g.locale.getpreferredencoding(False)\n) to filesystem encoding and error handler. Argument files should be encoded in UTF-8 instead of ANSI Codepage on Windows.Remove the\nasyncore\n-basedsmtpd\nmodule deprecated in Python 3.4.7 and 3.5.4. A recommended replacement is theasyncio\n-based aiosmtpd PyPI module.shlex.split()\n: PassingNone\nfor s argument now raises an exception, rather than readingsys.stdin\n. The feature was deprecated in Python 3.9. 
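shlex.split() continues to work normally with a string argument; only the implicit read-from-stdin behavior for None is gone:

```python
import shlex

# A string argument works as before; passing None now raises an
# exception instead of silently reading sys.stdin.
print(shlex.split("ls -l 'My Documents'"))  # ['ls', '-l', 'My Documents']
```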
(Contributed by Victor Stinner in gh-94352.) The\nos\nmodule no longer accepts bytes-like paths, like bytearray\nand memoryview\ntypes: only the exact bytes\ntype is accepted for bytes strings. (Contributed by Victor Stinner in gh-98393.) syslog.openlog()\nand syslog.closelog()\nnow fail if used in subinterpreters. syslog.syslog()\nmay still be used in subinterpreters, but now only if syslog.openlog()\nhas already been called in the main interpreter. These new restrictions do not apply to the main interpreter, so only a very small set of users might be affected. This change helps with interpreter isolation. Furthermore, syslog\nis a wrapper around process-global resources, which are best managed from the main interpreter. (Contributed by Donghee Na in gh-99127.) The undocumented locking behavior of\ncached_property()\nis removed, because it locked across all instances of the class, leading to high lock contention. This means that a cached property getter function could now run more than once for a single instance, if two threads race. For most simple cached properties (e.g. those that are idempotent and simply calculate a value based on other attributes of the instance) this will be fine. If synchronization is needed, implement locking within the cached property getter function or around multi-threaded access points. sys._current_exceptions()\nnow returns a mapping from thread-id to an exception instance, rather than to a (typ, exc, tb)\ntuple. (Contributed by Irit Katriel in gh-103176.) When extracting tar files using\ntarfile\nor shutil.unpack_archive()\n, pass the filter argument to limit features that may be surprising or dangerous. See Extraction filters for details. The output of the\ntokenize.tokenize()\nand tokenize.generate_tokens()\nfunctions has changed due to the changes introduced in PEP 701. 
This means thatSTRING\ntokens are not emitted any more for f-strings and the tokens described in PEP 701 are now produced instead:FSTRING_START\n,FSTRING_MIDDLE\nandFSTRING_END\nare now emitted for f-string \u201cstring\u201d parts in addition to the appropriate tokens for the tokenization in the expression components. For example for the f-stringf\"start {1+1} end\"\nthe old version of the tokenizer emitted:1,0-1,18: STRING 'f\"start {1+1} end\"'\nwhile the new version emits:\n1,0-1,2: FSTRING_START 'f\"' 1,2-1,8: FSTRING_MIDDLE 'start ' 1,8-1,9: OP '{' 1,9-1,10: NUMBER '1' 1,10-1,11: OP '+' 1,11-1,12: NUMBER '1' 1,12-1,13: OP '}' 1,13-1,17: FSTRING_MIDDLE ' end' 1,17-1,18: FSTRING_END '\"'\nAdditionally, there may be some minor behavioral changes as a consequence of the changes required to support PEP 701. Some of these changes include:\nThe\ntype\nattribute of the tokens emitted when tokenizing some invalid Python characters such as!\nhas changed fromERRORTOKEN\ntoOP\n.Incomplete single-line strings now also raise\ntokenize.TokenError\nas incomplete multiline strings do.Some incomplete or invalid Python code now raises\ntokenize.TokenError\ninstead of returning arbitraryERRORTOKEN\ntokens when tokenizing it.Mixing tabs and spaces as indentation in the same file is not supported anymore and will raise a\nTabError\n.\nThe\nthreading\nmodule now expects the_thread\nmodule to have an_is_main_interpreter\nattribute. It is a function with no arguments that returnsTrue\nif the current interpreter is the main interpreter.Any library or application that provides a custom\n_thread\nmodule should provide_is_main_interpreter()\n. (See gh-112826.)\nBuild Changes\u00b6\nPython no longer uses\nsetup.py\nto build shared C extension modules. Build parameters like headers and libraries are detected inconfigure\nscript. Extensions are built byMakefile\n. Most extensions usepkg-config\nand fall back to manual detection. 
(Contributed by Christian Heimes in gh-93939.) va_start()\nwith two parameters, like va_start(args, format)\n, is now required to build Python. va_start()\nis no longer called with a single parameter. (Contributed by Kumar Aditya in gh-93207.) CPython now uses the ThinLTO option as the default link-time optimization policy if the Clang compiler accepts the flag. (Contributed by Donghee Na in gh-89536.)\nAdd a\nCOMPILEALL_OPTS\nvariable in Makefile\nto override compileall\noptions (default: -j0\n) in make install\n. Also merged the 3 compileall\ncommands into a single command to build .pyc files for all optimization levels (0, 1, 2) at once. (Contributed by Victor Stinner in gh-99289.) Add platform triplets for 64-bit LoongArch:\nloongarch64-linux-gnusf\nloongarch64-linux-gnuf32\nloongarch64-linux-gnu\n(Contributed by Zhang Na in gh-90656.)\nPYTHON_FOR_REGEN\nnow requires Python 3.10 or newer. Autoconf 2.71 and aclocal 1.16.4 are now required to regenerate\nconfigure\n. (Contributed by Christian Heimes in gh-89886.) Windows builds and macOS installers from python.org now use OpenSSL 3.0.\nC API Changes\u00b6\nNew Features\u00b6\nPEP 697: Introduce the Unstable C API tier, intended for low-level tools like debuggers and JIT compilers. This API may change in each minor release of CPython without deprecation warnings. 
Its contents are marked by the\nPyUnstable_\nprefix in names.Code object constructors:\nPyUnstable_Code_New()\n(renamed fromPyCode_New\n)PyUnstable_Code_NewWithPosOnlyArgs()\n(renamed fromPyCode_NewWithPosOnlyArgs\n)\nExtra storage for code objects (PEP 523):\nPyUnstable_Eval_RequestCodeExtraIndex()\n(renamed from_PyEval_RequestCodeExtraIndex\n)PyUnstable_Code_GetExtra()\n(renamed from_PyCode_GetExtra\n)PyUnstable_Code_SetExtra()\n(renamed from_PyCode_SetExtra\n)\nThe original names will continue to be available until the respective API changes.\n(Contributed by Petr Viktorin in gh-101101.)\nPEP 697: Add an API for extending types whose instance memory layout is opaque:\nPyType_Spec.basicsize\ncan be zero or negative to specify inheriting or extending the base class size.PyObject_GetTypeData()\nandPyType_GetTypeDataSize()\nadded to allow access to subclass-specific instance data.Py_TPFLAGS_ITEMS_AT_END\nandPyObject_GetItemData()\nadded to allow safely extending certain variable-sized types, includingPyType_Type\n.Py_RELATIVE_OFFSET\nadded to allow definingmembers\nin terms of a subclass-specific struct.\n(Contributed by Petr Viktorin in gh-103509.)\nAdd the new limited C API function\nPyType_FromMetaclass()\n, which generalizes the existingPyType_FromModuleAndSpec()\nusing an additional metaclass argument. (Contributed by Wenzel Jakob in gh-93012.)API for creating objects that can be called using the vectorcall protocol was added to the Limited API:\nThe\nPy_TPFLAGS_HAVE_VECTORCALL\nflag is now removed from a class when the class\u2019s__call__()\nmethod is reassigned. This makes vectorcall safe to use with mutable types (i.e. heap types without the immutable flag,Py_TPFLAGS_IMMUTABLETYPE\n). Mutable types that do not overridetp_call\nnow inherit thePy_TPFLAGS_HAVE_VECTORCALL\nflag. (Contributed by Petr Viktorin in gh-93274.)The\nPy_TPFLAGS_MANAGED_DICT\nandPy_TPFLAGS_MANAGED_WEAKREF\nflags have been added. 
This allows extensions classes to support object__dict__\nand weakrefs with less bookkeeping, using less memory and with faster access.API for performing calls using the vectorcall protocol was added to the Limited API:\nThis means that both the incoming and outgoing ends of the vector call protocol are now available in the Limited API. (Contributed by Wenzel Jakob in gh-98586.)\nAdd two new public functions,\nPyEval_SetProfileAllThreads()\nandPyEval_SetTraceAllThreads()\n, that allow to set tracing and profiling functions in all running threads in addition to the calling one. (Contributed by Pablo Galindo in gh-93503.)Add new function\nPyFunction_SetVectorcall()\nto the C API which sets the vectorcall field of a givenPyFunctionObject\n. (Contributed by Andrew Frost in gh-92257.)The C API now permits registering callbacks via\nPyDict_AddWatcher()\n,PyDict_Watch()\nand related APIs to be called whenever a dictionary is modified. This is intended for use by optimizing interpreters, JIT compilers, or debuggers. (Contributed by Carl Meyer in gh-91052.)Add\nPyType_AddWatcher()\nandPyType_Watch()\nAPI to register callbacks to receive notification on changes to a type. (Contributed by Carl Meyer in gh-91051.)Add\nPyCode_AddWatcher()\nandPyCode_ClearWatcher()\nAPIs to register callbacks to receive notification on creation and destruction of code objects. (Contributed by Itamar Oren in gh-91054.)Add\nPyFrame_GetVar()\nandPyFrame_GetVarString()\nfunctions to get a frame variable by its name. (Contributed by Victor Stinner in gh-91248.)Add\nPyErr_GetRaisedException()\nandPyErr_SetRaisedException()\nfor saving and restoring the current exception. These functions return and accept a single exception object, rather than the triple arguments of the now-deprecatedPyErr_Fetch()\nandPyErr_Restore()\n. This is less error prone and a bit more efficient. 
(Contributed by Mark Shannon in gh-101578.) Add\n_PyErr_ChainExceptions1\n, which takes an exception instance, to replace the legacy-API _PyErr_ChainExceptions\n, which is now deprecated. (Contributed by Mark Shannon in gh-101578.) Add\nPyException_GetArgs()\nand PyException_SetArgs()\nas convenience functions for retrieving and modifying the args\npassed to the exception\u2019s constructor. (Contributed by Mark Shannon in gh-101578.) Add\nPyErr_DisplayException()\n, which takes an exception instance, to replace the legacy-API PyErr_Display()\n. (Contributed by Irit Katriel in gh-102755.)\nPEP 683: Introduce Immortal Objects, which allow objects to bypass reference counts, and related changes to the C API:\n_Py_IMMORTAL_REFCNT\n: The reference count that defines an object as immortal.\n_Py_IsImmortal\n: Checks if an object has the immortal reference count. PyObject_HEAD_INIT\n: This will now initialize the reference count to _Py_IMMORTAL_REFCNT\nwhen used with Py_BUILD_CORE\n.\nSSTATE_INTERNED_IMMORTAL\n: An identifier for interned unicode objects that are immortal.\nSSTATE_INTERNED_IMMORTAL_STATIC\n: An identifier for interned unicode objects that are immortal and static.\nsys.getunicodeinternedsize\n: This returns the total number of unicode objects that have been interned. This is now needed for\nrefleak.py\nto correctly track reference counts and allocated blocks.\n(Contributed by Eddie Elizondo in gh-84436.)\nPEP 684: Add the new\nPy_NewInterpreterFromConfig()\nfunction and PyInterpreterConfig\n, which may be used to create sub-interpreters with their own GILs. (See PEP 684: A Per-Interpreter GIL for more info.) (Contributed by Eric Snow in gh-104110.) In the limited C API version 3.12, the\nPy_INCREF()\nand Py_DECREF()\nfunctions are now implemented as opaque function calls to hide implementation details. (Contributed by Victor Stinner in gh-105387.)\nPorting to Python 3.12\u00b6\nLegacy Unicode APIs based on the\nPy_UNICODE*\nrepresentation have been removed. 
Please migrate to APIs based on UTF-8 or wchar_t*\n. Argument parsing functions like\nPyArg_ParseTuple()\nno longer support the Py_UNICODE*\nbased formats (e.g. u\n, Z\n). Please migrate to other formats for Unicode like s\n, z\n, es\n, and U\n. tp_weaklist\nfor all static builtin types is always NULL\n. This is an internal-only field on PyTypeObject\nbut we\u2019re pointing out the change in case someone happens to be accessing the field directly anyway. To avoid breakage, consider using the existing public C-API instead, or, if necessary, the (internal-only) _PyObject_GET_WEAKREFS_LISTPTR()\nmacro. The internal-only\nPyTypeObject.tp_subclasses\nfield may now not be a valid object pointer. Its type was changed to void* to reflect this. We mention this in case someone happens to be accessing the internal-only field directly. To get a list of subclasses, call the Python method\n__subclasses__()\n(using PyObject_CallMethod()\n, for example). Add support for more formatting options (left aligning, octals, uppercase hexadecimals,\nintmax_t\n, ptrdiff_t\n, wchar_t\nC strings, variable width and precision) in PyUnicode_FromFormat()\nand PyUnicode_FromFormatV()\n. (Contributed by Serhiy Storchaka in gh-98836.) An unrecognized format character in\nPyUnicode_FromFormat()\nand PyUnicode_FromFormatV()\nnow sets a SystemError\n. In previous versions it caused all the rest of the format string to be copied as-is to the result string, and any extra arguments discarded. (Contributed by Serhiy Storchaka in gh-95781.) Fix wrong sign placement in\nPyUnicode_FromFormat()\nand PyUnicode_FromFormatV()\n. (Contributed by Philip Georgi in gh-95504.) Extension classes wanting to add a\n__dict__\nor weak reference slot should use Py_TPFLAGS_MANAGED_DICT\nand Py_TPFLAGS_MANAGED_WEAKREF\ninstead of tp_dictoffset\nand tp_weaklistoffset\n, respectively. The use of tp_dictoffset\nand tp_weaklistoffset\nis still supported, but does not fully support multiple inheritance (gh-95589), and performance may be worse. 
Classes declaringPy_TPFLAGS_MANAGED_DICT\nmust call_PyObject_VisitManagedDict()\nand_PyObject_ClearManagedDict()\nto traverse and clear their instance\u2019s dictionaries. To clear weakrefs, callPyObject_ClearWeakRefs()\n, as before.The\nPyUnicode_FSDecoder()\nfunction no longer accepts bytes-like paths, likebytearray\nandmemoryview\ntypes: only the exactbytes\ntype is accepted for bytes strings. (Contributed by Victor Stinner in gh-98393.)The\nPy_CLEAR\n,Py_SETREF\nandPy_XSETREF\nmacros now only evaluate their arguments once. If an argument has side effects, these side effects are no longer duplicated. (Contributed by Victor Stinner in gh-98724.)The interpreter\u2019s error indicator is now always normalized. This means that\nPyErr_SetObject()\n,PyErr_SetString()\nand the other functions that set the error indicator now normalize the exception before storing it. (Contributed by Mark Shannon in gh-101578.)_Py_RefTotal\nis no longer authoritative and only kept around for ABI compatibility. Note that it is an internal global and only available on debug builds. If you happen to be using it then you\u2019ll need to start using_Py_GetGlobalRefTotal()\n.The following functions now select an appropriate metaclass for the newly created type:\nCreating classes whose metaclass overrides\ntp_new\nis deprecated, and in Python 3.14+ it will be disallowed. Note that these functions ignoretp_new\nof the metaclass, possibly allowing incomplete initialization.Note that\nPyType_FromMetaclass()\n(added in Python 3.12) already disallows creating classes whose metaclass overridestp_new\n(__new__()\nin Python).Since\ntp_new\noverrides almost everythingPyType_From*\nfunctions do, the two are incompatible with each other. The existing behavior \u2013 ignoring the metaclass for several steps of type creation \u2013 is unsafe in general, since (meta)classes assume thattp_new\nwas called. There is no simple general workaround. 
One of the following may work for you: If you control the metaclass, avoid using\ntp_new\nin it: If initialization can be skipped, it can be done in\ntp_init\ninstead. If the metaclass doesn\u2019t need to be instantiated from Python, set its\ntp_new\nto NULL\nusing the Py_TPFLAGS_DISALLOW_INSTANTIATION\nflag. This makes it acceptable for PyType_From*\nfunctions.\nAvoid\nPyType_From*\nfunctions: if you don\u2019t need C-specific features (slots or setting the instance size), create types by calling the metaclass. If you know that\ntp_new\ncan be skipped safely, filter the deprecation warning out using warnings.catch_warnings()\nfrom Python.\nPyOS_InputHook\nand PyOS_ReadlineFunctionPointer\nare no longer called in subinterpreters. This is because clients generally rely on process-wide global state (since these callbacks have no way of recovering extension module state). This also avoids situations where extensions may find themselves running in a subinterpreter that they don\u2019t support (or haven\u2019t yet been loaded in). See gh-104668 for more info.\nPyLongObject\nhas had its internals changed for better performance. Although the internals of PyLongObject\nare private, they are used by some extension modules. The internal fields should no longer be accessed directly; instead, the API functions beginning with PyLong_...\nshould be used. Two new unstable API functions are provided for efficient access to the value of PyLongObject\ns which fit into a single machine word: Custom allocators, set via\nPyMem_SetAllocator()\n, are now required to be thread-safe, regardless of memory domain. Allocators that don\u2019t have their own state, including \u201chooks\u201d, are not affected. If your custom allocator is not already thread-safe and you need guidance, please create a new GitHub issue and CC @ericsnowcurrently\n.\nDeprecated\u00b6\nIn accordance with PEP 699, the\nma_version_tag\nfield in PyDictObject\nis deprecated for extension modules. 
Accessing this field will generate a compiler warning at compile time. This field will be removed in Python 3.14. (Contributed by Ramvikrams and Kumar Aditya in gh-101193. PEP by Ken Jin.)Deprecate global configuration variable:\nPy_HashRandomizationFlag\n: usePyConfig.use_hash_seed\nandPyConfig.hash_seed\nPy_LegacyWindowsFSEncodingFlag\n: usePyPreConfig.legacy_windows_fs_encoding\nPy_LegacyWindowsStdioFlag\n: usePyConfig.legacy_windows_stdio\nPy_FileSystemDefaultEncoding\n: usePyConfig.filesystem_encoding\nPy_HasFileSystemDefaultEncoding\n: usePyConfig.filesystem_encoding\nPy_FileSystemDefaultEncodeErrors\n: usePyConfig.filesystem_errors\nPy_UTF8Mode\n: usePyPreConfig.utf8_mode\n(seePy_PreInitialize()\n)\nThe\nPy_InitializeFromConfig()\nAPI should be used withPyConfig\ninstead. (Contributed by Victor Stinner in gh-77782.)Creating\nimmutable types\nwith mutable bases is deprecated and will be disabled in Python 3.14. (gh-95388)The\nstructmember.h\nheader is deprecated, though it continues to be available and there are no plans to remove it.Its contents are now available just by including\nPython.h\n, with aPy\nprefix added if it was missing:Type macros like\nPy_T_INT\n,Py_T_DOUBLE\n, etc. (previouslyT_INT\n,T_DOUBLE\n, etc.)The flags\nPy_READONLY\n(previouslyREADONLY\n) andPy_AUDIT_READ\n(previously all uppercase)\nSeveral items are not exposed from\nPython.h\n:T_OBJECT\n(usePy_T_OBJECT_EX\n)T_NONE\n(previously undocumented, and pretty quirky)The macro\nWRITE_RESTRICTED\nwhich does nothing.The macros\nRESTRICTED\nandREAD_RESTRICTED\n, equivalents ofPy_AUDIT_READ\n.In some configurations,\n\nis not included fromPython.h\n. It should be included manually when usingoffsetof()\n.\nThe deprecated header continues to provide its original contents under the original names. 
Your old code can stay unchanged, unless the extra include and non-namespaced macros bother you greatly.\n(Contributed in gh-47146 by Petr Viktorin, based on earlier work by Alexander Belopolsky and Matthias Braun.)\nPyErr_Fetch()\nandPyErr_Restore()\nare deprecated. UsePyErr_GetRaisedException()\nandPyErr_SetRaisedException()\ninstead. (Contributed by Mark Shannon in gh-101578.)PyErr_Display()\nis deprecated. UsePyErr_DisplayException()\ninstead. (Contributed by Irit Katriel in gh-102755)._PyErr_ChainExceptions\nis deprecated. Use_PyErr_ChainExceptions1\ninstead. (Contributed by Irit Katriel in gh-102192.)Using\nPyType_FromSpec()\n,PyType_FromSpecWithBases()\norPyType_FromModuleAndSpec()\nto create a class whose metaclass overridestp_new\nis deprecated. Call the metaclass instead.\nPending removal in Python 3.14\u00b6\nThe\nma_version_tag\nfield inPyDictObject\nfor extension modules (PEP 699; gh-101193).Creating\nimmutable types\nwith mutable bases (gh-95388).\nPending removal in Python 3.15\u00b6\nThe\nPyImport_ImportModuleNoBlock()\n: UsePyImport_ImportModule()\ninstead.PyWeakref_GetObject()\nandPyWeakref_GET_OBJECT()\n: UsePyWeakref_GetRef()\ninstead. 
The pythoncapi-compat project can be used to getPyWeakref_GetRef()\non Python 3.12 and older.Py_UNICODE\ntype and thePy_UNICODE_WIDE\nmacro: Usewchar_t\ninstead.PyUnicode_AsDecodedObject()\n: UsePyCodec_Decode()\ninstead.PyUnicode_AsDecodedUnicode()\n: UsePyCodec_Decode()\ninstead; Note that some codecs (for example, \u201cbase64\u201d) may return a type other thanstr\n, such asbytes\n.PyUnicode_AsEncodedObject()\n: UsePyCodec_Encode()\ninstead.PyUnicode_AsEncodedUnicode()\n: UsePyCodec_Encode()\ninstead; Note that some codecs (for example, \u201cbase64\u201d) may return a type other thanbytes\n, such asstr\n.Python initialization functions, deprecated in Python 3.13:\nPy_GetPath()\n: UsePyConfig_Get(\"module_search_paths\")\n(sys.path\n) instead.Py_GetPrefix()\n: UsePyConfig_Get(\"base_prefix\")\n(sys.base_prefix\n) instead. UsePyConfig_Get(\"prefix\")\n(sys.prefix\n) if virtual environments need to be handled.Py_GetExecPrefix()\n: UsePyConfig_Get(\"base_exec_prefix\")\n(sys.base_exec_prefix\n) instead. 
UsePyConfig_Get(\"exec_prefix\")\n(sys.exec_prefix\n) if virtual environments need to be handled.Py_GetProgramFullPath()\n: UsePyConfig_Get(\"executable\")\n(sys.executable\n) instead.Py_GetProgramName()\n: UsePyConfig_Get(\"executable\")\n(sys.executable\n) instead.Py_GetPythonHome()\n: UsePyConfig_Get(\"home\")\nor thePYTHONHOME\nenvironment variable instead.\nThe pythoncapi-compat project can be used to get\nPyConfig_Get()\non Python 3.13 and older.Functions to configure Python\u2019s initialization, deprecated in Python 3.11:\nPySys_SetArgvEx()\n: SetPyConfig.argv\ninstead.PySys_SetArgv()\n: SetPyConfig.argv\ninstead.Py_SetProgramName()\n: SetPyConfig.program_name\ninstead.Py_SetPythonHome()\n: SetPyConfig.home\ninstead.PySys_ResetWarnOptions()\n: Clearsys.warnoptions\nandwarnings.filters\ninstead.\nThe\nPy_InitializeFromConfig()\nAPI should be used withPyConfig\ninstead.Global configuration variables:\nPy_DebugFlag\n: UsePyConfig.parser_debug\norPyConfig_Get(\"parser_debug\")\ninstead.Py_VerboseFlag\n: UsePyConfig.verbose\norPyConfig_Get(\"verbose\")\ninstead.Py_QuietFlag\n: UsePyConfig.quiet\norPyConfig_Get(\"quiet\")\ninstead.Py_InteractiveFlag\n: UsePyConfig.interactive\norPyConfig_Get(\"interactive\")\ninstead.Py_InspectFlag\n: UsePyConfig.inspect\norPyConfig_Get(\"inspect\")\ninstead.Py_OptimizeFlag\n: UsePyConfig.optimization_level\norPyConfig_Get(\"optimization_level\")\ninstead.Py_NoSiteFlag\n: UsePyConfig.site_import\norPyConfig_Get(\"site_import\")\ninstead.Py_BytesWarningFlag\n: UsePyConfig.bytes_warning\norPyConfig_Get(\"bytes_warning\")\ninstead.Py_FrozenFlag\n: UsePyConfig.pathconfig_warnings\norPyConfig_Get(\"pathconfig_warnings\")\ninstead.Py_IgnoreEnvironmentFlag\n: UsePyConfig.use_environment\norPyConfig_Get(\"use_environment\")\ninstead.Py_DontWriteBytecodeFlag\n: UsePyConfig.write_bytecode\norPyConfig_Get(\"write_bytecode\")\ninstead.Py_NoUserSiteDirectory\n: 
UsePyConfig.user_site_directory\norPyConfig_Get(\"user_site_directory\")\ninstead.Py_UnbufferedStdioFlag\n: UsePyConfig.buffered_stdio\norPyConfig_Get(\"buffered_stdio\")\ninstead.Py_HashRandomizationFlag\n: UsePyConfig.use_hash_seed\nandPyConfig.hash_seed\norPyConfig_Get(\"hash_seed\")\ninstead.Py_IsolatedFlag\n: UsePyConfig.isolated\norPyConfig_Get(\"isolated\")\ninstead.Py_LegacyWindowsFSEncodingFlag\n: UsePyPreConfig.legacy_windows_fs_encoding\norPyConfig_Get(\"legacy_windows_fs_encoding\")\ninstead.Py_LegacyWindowsStdioFlag\n: UsePyConfig.legacy_windows_stdio\norPyConfig_Get(\"legacy_windows_stdio\")\ninstead.Py_FileSystemDefaultEncoding\n,Py_HasFileSystemDefaultEncoding\n: UsePyConfig.filesystem_encoding\norPyConfig_Get(\"filesystem_encoding\")\ninstead.Py_FileSystemDefaultEncodeErrors\n: UsePyConfig.filesystem_errors\norPyConfig_Get(\"filesystem_errors\")\ninstead.Py_UTF8Mode\n: UsePyPreConfig.utf8_mode\norPyConfig_Get(\"utf8_mode\")\ninstead. (seePy_PreInitialize()\n)\nThe\nPy_InitializeFromConfig()\nAPI should be used withPyConfig\nto set these options. 
OrPyConfig_Get()\ncan be used to get these options at runtime.\nPending removal in Python 3.16\u00b6\nThe bundled copy of\nlibmpdec\n.\nPending removal in future versions\u00b6\nThe following APIs are deprecated and will be removed, although there is currently no date scheduled for their removal.\nPy_TPFLAGS_HAVE_FINALIZE\n: Unneeded since Python 3.8.PyErr_Fetch()\n: UsePyErr_GetRaisedException()\ninstead.PyErr_NormalizeException()\n: UsePyErr_GetRaisedException()\ninstead.PyErr_Restore()\n: UsePyErr_SetRaisedException()\ninstead.PyModule_GetFilename()\n: UsePyModule_GetFilenameObject()\ninstead.PyOS_AfterFork()\n: UsePyOS_AfterFork_Child()\ninstead.PySlice_GetIndicesEx()\n: UsePySlice_Unpack()\nandPySlice_AdjustIndices()\ninstead.PyUnicode_READY()\n: Unneeded since Python 3.12PyErr_Display()\n: UsePyErr_DisplayException()\ninstead._PyErr_ChainExceptions()\n: Use_PyErr_ChainExceptions1()\ninstead.PyBytesObject.ob_shash\nmember: callPyObject_Hash()\ninstead.Thread Local Storage (TLS) API:\nPyThread_create_key()\n: UsePyThread_tss_alloc()\ninstead.PyThread_delete_key()\n: UsePyThread_tss_free()\ninstead.PyThread_set_key_value()\n: UsePyThread_tss_set()\ninstead.PyThread_get_key_value()\n: UsePyThread_tss_get()\ninstead.PyThread_delete_key_value()\n: UsePyThread_tss_delete()\ninstead.PyThread_ReInitTLS()\n: Unneeded since Python 3.7.\nRemoved\u00b6\nRemove the\ntoken.h\nheader file. There was never any public tokenizer C API. Thetoken.h\nheader file was only designed to be used by Python internals. (Contributed by Victor Stinner in gh-92651.)Legacy Unicode APIs have been removed. See PEP 623 for detail.\nPyUnicode_WCHAR_KIND\nPyUnicode_AS_UNICODE()\nPyUnicode_AsUnicode()\nPyUnicode_AsUnicodeAndSize()\nPyUnicode_AS_DATA()\nPyUnicode_FromUnicode()\nPyUnicode_GET_SIZE()\nPyUnicode_GetSize()\nPyUnicode_GET_DATA_SIZE()\nRemove the\nPyUnicode_InternImmortal()\nfunction macro. 
(Contributed by Victor Stinner in gh-85858.)", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 23154}
+{"url": "https://docs.python.org/3/c-api/object.html", "title": "Object Protocol", "content": "Object Protocol\u00b6\n-\nPyObject *Py_GetConstant(unsigned int constant_id)\u00b6\n- Part of the Stable ABI since version 3.13.\nGet a strong reference to a constant.\nSet an exception and return\nNULL\nif constant_id is invalid. constant_id must be one of these constant identifiers:\nConstant Identifier\nValue\nReturned 
object\n-\nPy_CONSTANT_NONE\u00b6\n0\n-\nPy_CONSTANT_FALSE\u00b6\n1\n-\nPy_CONSTANT_TRUE\u00b6\n2\n-\nPy_CONSTANT_ELLIPSIS\u00b6\n3\n-\nPy_CONSTANT_NOT_IMPLEMENTED\u00b6\n4\n-\nPy_CONSTANT_ZERO\u00b6\n5\n0\n-\nPy_CONSTANT_ONE\u00b6\n6\n1\n-\nPy_CONSTANT_EMPTY_STR\u00b6\n7\n''\n-\nPy_CONSTANT_EMPTY_BYTES\u00b6\n8\nb''\n-\nPy_CONSTANT_EMPTY_TUPLE\u00b6\n9\n()\nNumeric values are only given for projects which cannot use the constant identifiers.\nAdded in version 3.13.\nCPython implementation detail: In CPython, all of these constants are immortal.\n-\nPy_CONSTANT_NONE\u00b6\n-\nPyObject *Py_GetConstantBorrowed(unsigned int constant_id)\u00b6\n- Part of the Stable ABI since version 3.13.\nSimilar to\nPy_GetConstant()\n, but return a borrowed reference.This function is primarily intended for backwards compatibility: using\nPy_GetConstant()\nis recommended for new code.The reference is borrowed from the interpreter, and is valid until the interpreter finalization.\nAdded in version 3.13.\n-\nPyObject *Py_NotImplemented\u00b6\nThe\nNotImplemented\nsingleton, used to signal that an operation is not implemented for the given type combination.\n-\nPy_RETURN_NOTIMPLEMENTED\u00b6\nProperly handle returning\nPy_NotImplemented\nfrom within a C function (that is, create a new strong reference toNotImplemented\nand return it).\n-\nPy_PRINT_RAW\u00b6\nFlag to be used with multiple functions that print the object (like\nPyObject_Print()\nandPyFile_WriteObject()\n). If passed, these functions use thestr()\nof the object instead of therepr()\n.\n-\nint PyObject_Print(PyObject *o, FILE *fp, int flags)\u00b6\nPrint an object o, on file fp. Returns\n-1\non error. The flags argument is used to enable certain printing options. 
The only option currently supported isPy_PRINT_RAW\n; if given, thestr()\nof the object is written instead of therepr()\n.\n-\nint PyObject_HasAttrWithError(PyObject *o, PyObject *attr_name)\u00b6\n- Part of the Stable ABI since version 3.13.\nReturns\n1\nif o has the attribute attr_name, and0\notherwise. This is equivalent to the Python expressionhasattr(o, attr_name)\n. On failure, return-1\n.Added in version 3.13.\n-\nint PyObject_HasAttrStringWithError(PyObject *o, const char *attr_name)\u00b6\n- Part of the Stable ABI since version 3.13.\nThis is the same as\nPyObject_HasAttrWithError()\n, but attr_name is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Added in version 3.13.\n-\nint PyObject_HasAttr(PyObject *o, PyObject *attr_name)\u00b6\n- Part of the Stable ABI.\nReturns\n1\nif o has the attribute attr_name, and0\notherwise. This function always succeeds.Note\nExceptions that occur when this calls\n__getattr__()\nand__getattribute__()\nmethods aren\u2019t propagated, but instead given tosys.unraisablehook()\n. For proper error handling, usePyObject_HasAttrWithError()\n,PyObject_GetOptionalAttr()\norPyObject_GetAttr()\ninstead.\n-\nint PyObject_HasAttrString(PyObject *o, const char *attr_name)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyObject_HasAttr()\n, but attr_name is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Note\nExceptions that occur when this calls\n__getattr__()\nand__getattribute__()\nmethods or while creating the temporarystr\nobject are silently ignored. For proper error handling, usePyObject_HasAttrStringWithError()\n,PyObject_GetOptionalAttrString()\norPyObject_GetAttrString()\ninstead.\n-\nPyObject *PyObject_GetAttr(PyObject *o, PyObject *attr_name)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nRetrieve an attribute named attr_name from object o. Returns the attribute value on success, or\nNULL\non failure. 
This is the equivalent of the Python expressiono.attr_name\n.If the missing attribute should not be treated as a failure, you can use\nPyObject_GetOptionalAttr()\ninstead.\n-\nPyObject *PyObject_GetAttrString(PyObject *o, const char *attr_name)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is the same as\nPyObject_GetAttr()\n, but attr_name is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.If the missing attribute should not be treated as a failure, you can use\nPyObject_GetOptionalAttrString()\ninstead.\n-\nint PyObject_GetOptionalAttr(PyObject *obj, PyObject *attr_name, PyObject **result);\u00b6\n- Part of the Stable ABI since version 3.13.\nVariant of\nPyObject_GetAttr()\nwhich doesn\u2019t raiseAttributeError\nif the attribute is not found.If the attribute is found, return\n1\nand set *result to a new strong reference to the attribute. If the attribute is not found, return0\nand set *result toNULL\n; theAttributeError\nis silenced. If an error other thanAttributeError\nis raised, return-1\nand set *result toNULL\n.Added in version 3.13.\n-\nint PyObject_GetOptionalAttrString(PyObject *obj, const char *attr_name, PyObject **result);\u00b6\n- Part of the Stable ABI since version 3.13.\nThis is the same as\nPyObject_GetOptionalAttr()\n, but attr_name is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.Added in version 3.13.\n-\nPyObject *PyObject_GenericGetAttr(PyObject *o, PyObject *name)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGeneric attribute getter function that is meant to be put into a type object\u2019s\ntp_getattro\nslot. It looks for a descriptor in the dictionary of classes in the object\u2019s MRO as well as an attribute in the object\u2019s__dict__\n(if present). As outlined in Implementing Descriptors, data descriptors take preference over instance attributes, while non-data descriptors don\u2019t. 
Otherwise, anAttributeError\nis raised.\n-\nint PyObject_SetAttr(PyObject *o, PyObject *attr_name, PyObject *v)\u00b6\n- Part of the Stable ABI.\nSet the value of the attribute named attr_name, for object o, to the value v. Raise an exception and return\n-1\non failure; return0\non success. This is the equivalent of the Python statemento.attr_name = v\n.If v is\nNULL\n, the attribute is deleted. This behaviour is deprecated in favour of usingPyObject_DelAttr()\n, but there are currently no plans to remove it.\n-\nint PyObject_SetAttrString(PyObject *o, const char *attr_name, PyObject *v)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyObject_SetAttr()\n, but attr_name is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.If v is\nNULL\n, the attribute is deleted, but this feature is deprecated in favour of usingPyObject_DelAttrString()\n.The number of different attribute names passed to this function should be kept small, usually by using a statically allocated string as attr_name. For attribute names that aren\u2019t known at compile time, prefer calling\nPyUnicode_FromString()\nandPyObject_SetAttr()\ndirectly. For more details, seePyUnicode_InternFromString()\n, which may be used internally to create a key object.\n-\nint PyObject_GenericSetAttr(PyObject *o, PyObject *name, PyObject *value)\u00b6\n- Part of the Stable ABI.\nGeneric attribute setter and deleter function that is meant to be put into a type object\u2019s\ntp_setattro\nslot. It looks for a data descriptor in the dictionary of classes in the object\u2019s MRO, and if found it takes preference over setting or deleting the attribute in the instance dictionary. Otherwise, the attribute is set or deleted in the object\u2019s__dict__\n(if present). 
On success,0\nis returned, otherwise anAttributeError\nis raised and-1\nis returned.\n-\nint PyObject_DelAttr(PyObject *o, PyObject *attr_name)\u00b6\n- Part of the Stable ABI since version 3.13.\nDelete attribute named attr_name, for object o. Returns\n-1\non failure. This is the equivalent of the Python statementdel o.attr_name\n.\n-\nint PyObject_DelAttrString(PyObject *o, const char *attr_name)\u00b6\n- Part of the Stable ABI since version 3.13.\nThis is the same as\nPyObject_DelAttr()\n, but attr_name is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.The number of different attribute names passed to this function should be kept small, usually by using a statically allocated string as attr_name. For attribute names that aren\u2019t known at compile time, prefer calling\nPyUnicode_FromString()\nandPyObject_DelAttr()\ndirectly. For more details, seePyUnicode_InternFromString()\n, which may be used internally to create a key object for lookup.\n-\nPyObject *PyObject_GenericGetDict(PyObject *o, void *context)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.10.\nA generic implementation for the getter of a\n__dict__\ndescriptor. It creates the dictionary if necessary.This function may also be called to get the\n__dict__\nof the object o. PassNULL\nfor context when calling it. Since this function may need to allocate memory for the dictionary, it may be more efficient to callPyObject_GetAttr()\nwhen accessing an attribute on the object.On failure, returns\nNULL\nwith an exception set.Added in version 3.3.\n-\nint PyObject_GenericSetDict(PyObject *o, PyObject *value, void *context)\u00b6\n- Part of the Stable ABI since version 3.7.\nA generic implementation for the setter of a\n__dict__\ndescriptor. This implementation does not allow the dictionary to be deleted.Added in version 3.3.\n-\nPyObject **_PyObject_GetDictPtr(PyObject *obj)\u00b6\nReturn a pointer to\n__dict__\nof the object obj. 
If there is no__dict__\n, returnNULL\nwithout setting an exception.This function may need to allocate memory for the dictionary, so it may be more efficient to call\nPyObject_GetAttr()\nwhen accessing an attribute on the object.\n-\nPyObject *PyObject_RichCompare(PyObject *o1, PyObject *o2, int opid)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCompare the values of o1 and o2 using the operation specified by opid, which must be one of\nPy_LT\n,Py_LE\n,Py_EQ\n,Py_NE\n,Py_GT\n, orPy_GE\n, corresponding to<\n,<=\n,==\n,!=\n,>\n, or>=\nrespectively. This is the equivalent of the Python expressiono1 op o2\n, whereop\nis the operator corresponding to opid. Returns the value of the comparison on success, orNULL\non failure.\n-\nint PyObject_RichCompareBool(PyObject *o1, PyObject *o2, int opid)\u00b6\n- Part of the Stable ABI.\nCompare the values of o1 and o2 using the operation specified by opid, like\nPyObject_RichCompare()\n, but returns-1\non error,0\nif the result is false,1\notherwise.\nNote\nIf o1 and o2 are the same object, PyObject_RichCompareBool()\nwill always return 1\nfor Py_EQ\nand 0\nfor Py_NE\n.\n-\nPyObject *PyObject_Format(PyObject *obj, PyObject *format_spec)\u00b6\n- Part of the Stable ABI.\nFormat obj using format_spec. This is equivalent to the Python expression\nformat(obj, format_spec)\n.format_spec may be\nNULL\n. In this case the call is equivalent toformat(obj)\n. Returns the formatted string on success,NULL\non failure.\n-\nPyObject *PyObject_Repr(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCompute a string representation of object o. Returns the string representation on success,\nNULL\non failure. This is the equivalent of the Python expressionrepr(o)\n. 
Called by therepr()\nbuilt-in function.Changed in version 3.4: This function now includes a debug assertion to help ensure that it does not silently discard an active exception.\n-\nPyObject *PyObject_ASCII(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nAs\nPyObject_Repr()\n, compute a string representation of object o, but escape the non-ASCII characters in the string returned byPyObject_Repr()\nwith\\x\n,\\u\nor\\U\nescapes. This generates a string similar to that returned byPyObject_Repr()\nin Python 2. Called by theascii()\nbuilt-in function.\n-\nPyObject *PyObject_Str(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCompute a string representation of object o. Returns the string representation on success,\nNULL\non failure. This is the equivalent of the Python expressionstr(o)\n. Called by thestr()\nbuilt-in function and, therefore, by theprint()\nfunction.Changed in version 3.4: This function now includes a debug assertion to help ensure that it does not silently discard an active exception.\n-\nPyObject *PyObject_Bytes(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCompute a bytes representation of object o.\nNULL\nis returned on failure and a bytes object on success. This is equivalent to the Python expressionbytes(o)\n, when o is not an integer. Unlikebytes(o)\n, a TypeError is raised when o is an integer instead of a zero-initialized bytes object.\n-\nint PyObject_IsSubclass(PyObject *derived, PyObject *cls)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif the class derived is identical to or derived from the class cls, otherwise return0\n. In case of an error, return-1\n.If cls is a tuple, the check will be done against every entry in cls. The result will be\n1\nwhen at least one of the checks returns1\n, otherwise it will be0\n.If cls has a\n__subclasscheck__()\nmethod, it will be called to determine the subclass status as described in PEP 3119. 
Otherwise, derived is a subclass of cls if it is a direct or indirect subclass, i.e. contained incls.__mro__\n.Normally only class objects, i.e. instances of\ntype\nor a derived class, are considered classes. However, objects can override this by having a__bases__\nattribute (which must be a tuple of base classes).\n-\nint PyObject_IsInstance(PyObject *inst, PyObject *cls)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif inst is an instance of the class cls or a subclass of cls, or0\nif not. On error, returns-1\nand sets an exception.If cls is a tuple, the check will be done against every entry in cls. The result will be\n1\nwhen at least one of the checks returns1\n, otherwise it will be0\n.If cls has a\n__instancecheck__()\nmethod, it will be called to determine the subclass status as described in PEP 3119. Otherwise, inst is an instance of cls if its class is a subclass of cls.An instance inst can override what is considered its class by having a\n__class__\nattribute.An object cls can override if it is considered a class, and what its base classes are, by having a\n__bases__\nattribute (which must be a tuple of base classes).\n-\nPy_hash_t PyObject_Hash(PyObject *o)\u00b6\n- Part of the Stable ABI.\nCompute and return the hash value of an object o. On failure, return\n-1\n. This is the equivalent of the Python expressionhash(o)\n.Changed in version 3.2: The return type is now Py_hash_t. This is a signed integer the same size as\nPy_ssize_t\n.\n-\nPy_hash_t PyObject_HashNotImplemented(PyObject *o)\u00b6\n- Part of the Stable ABI.\nSet a\nTypeError\nindicating thattype(o)\nis not hashable and return-1\n. This function receives special treatment when stored in atp_hash\nslot, allowing a type to explicitly indicate to the interpreter that it is not hashable.\n-\nint PyObject_IsTrue(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturns\n1\nif the object o is considered to be true, and0\notherwise. This is equivalent to the Python expressionnot not o\n. 
On failure, return -1.\n-\nint PyObject_Not(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturns 0 if the object o is considered to be true, and 1 otherwise. This is equivalent to the Python expression not o. On failure, return -1.\n-\nPyObject *PyObject_Type(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nWhen o is non-NULL, returns a type object corresponding to the object type of object o. On failure, raises SystemError and returns NULL. This is equivalent to the Python expression type(o). This function creates a new strong reference to the return value. There\u2019s really no reason to use this function instead of the Py_TYPE() function, which returns a pointer of type PyTypeObject*, except when a new strong reference is needed.\n-\nint PyObject_TypeCheck(PyObject *o, PyTypeObject *type)\u00b6\nReturn non-zero if the object o is of type type or a subtype of type, and 0 otherwise. Both parameters must be non-NULL.\n-\nPy_ssize_t PyObject_Size(PyObject *o)\u00b6\n-\nPy_ssize_t PyObject_Length(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturn the length of object o. If the object o provides either the sequence or the mapping protocol, the sequence length is returned. On error, -1 is returned. This is equivalent to the Python expression len(o).\n-\nPy_ssize_t PyObject_LengthHint(PyObject *o, Py_ssize_t defaultvalue)\u00b6\nReturn an estimated length for the object o. First try to return its actual length, then an estimate using __length_hint__(), and finally return the default value. On error return -1. This is equivalent to the Python expression operator.length_hint(o, defaultvalue).\nAdded in version 3.4.\n-\nPyObject *PyObject_GetItem(PyObject *o, PyObject *key)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn element of o corresponding to the object key or NULL on failure.
This is the equivalent of the Python expressiono[key]\n.\n-\nint PyObject_SetItem(PyObject *o, PyObject *key, PyObject *v)\u00b6\n- Part of the Stable ABI.\nMap the object key to the value v. Raise an exception and return\n-1\non failure; return0\non success. This is the equivalent of the Python statemento[key] = v\n. This function does not steal a reference to v.\n-\nint PyObject_DelItem(PyObject *o, PyObject *key)\u00b6\n- Part of the Stable ABI.\nRemove the mapping for the object key from the object o. Return\n-1\non failure. This is equivalent to the Python statementdel o[key]\n.\n-\nint PyObject_DelItemString(PyObject *o, const char *key)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyObject_DelItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nPyObject *PyObject_Dir(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is equivalent to the Python expression\ndir(o)\n, returning a (possibly empty) list of strings appropriate for the object argument, orNULL\nif there was an error. If the argument isNULL\n, this is like the Pythondir()\n, returning the names of the current locals; in this case, if no execution frame is active thenNULL\nis returned butPyErr_Occurred()\nwill return false.\n-\nPyObject *PyObject_GetIter(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is equivalent to the Python expression\niter(o)\n. It returns a new iterator for the object argument, or the object itself if the object is already an iterator. RaisesTypeError\nand returnsNULL\nif the object cannot be iterated.\n-\nPyObject *PyObject_SelfIter(PyObject *obj)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is equivalent to the Python\n__iter__(self): return self\nmethod. It is intended for iterator types, to be used in thePyTypeObject.tp_iter\nslot.\n-\nPyObject *PyObject_GetAIter(PyObject *o)\u00b6\n- Return value: New reference. 
Part of the Stable ABI since version 3.10.\nThis is the equivalent to the Python expression\naiter(o)\n. Takes anAsyncIterable\nobject and returns anAsyncIterator\nfor it. This is typically a new iterator but if the argument is anAsyncIterator\n, this returns itself. RaisesTypeError\nand returnsNULL\nif the object cannot be iterated.Added in version 3.10.\n-\nvoid *PyObject_GetTypeData(PyObject *o, PyTypeObject *cls)\u00b6\n- Part of the Stable ABI since version 3.12.\nGet a pointer to subclass-specific data reserved for cls.\nThe object o must be an instance of cls, and cls must have been created using negative\nPyType_Spec.basicsize\n. Python does not check this.On error, set an exception and return\nNULL\n.Added in version 3.12.\n-\nPy_ssize_t PyType_GetTypeDataSize(PyTypeObject *cls)\u00b6\n- Part of the Stable ABI since version 3.12.\nReturn the size of the instance memory space reserved for cls, i.e. the size of the memory\nPyObject_GetTypeData()\nreturns.This may be larger than requested using\n-PyType_Spec.basicsize\n; it is safe to use this larger size (e.g. withmemset()\n).The type cls must have been created using negative\nPyType_Spec.basicsize\n. 
Python does not check this.On error, set an exception and return a negative value.\nAdded in version 3.12.\n-\nvoid *PyObject_GetItemData(PyObject *o)\u00b6\nGet a pointer to per-item data for a class with\nPy_TPFLAGS_ITEMS_AT_END\n.On error, set an exception and return\nNULL\n.TypeError\nis raised if o does not havePy_TPFLAGS_ITEMS_AT_END\nset.Added in version 3.12.\n-\nint PyObject_VisitManagedDict(PyObject *obj, visitproc visit, void *arg)\u00b6\nVisit the managed dictionary of obj.\nThis function must only be called in a traverse function of the type which has the\nPy_TPFLAGS_MANAGED_DICT\nflag set.Added in version 3.13.\n-\nvoid PyObject_ClearManagedDict(PyObject *obj)\u00b6\nClear the managed dictionary of obj.\nThis function must only be called in a clear function of the type which has the\nPy_TPFLAGS_MANAGED_DICT\nflag set.Added in version 3.13.\n-\nint PyUnstable_Object_EnableDeferredRefcount(PyObject *obj)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nEnable deferred reference counting on obj, if supported by the runtime. In the free-threaded build, this allows the interpreter to avoid reference count adjustments to obj, which may improve multi-threaded performance. The tradeoff is that obj will only be deallocated by the tracing garbage collector, and not when the interpreter no longer has any references to it.\nThis function returns\n1\nif deferred reference counting is enabled on obj, and0\nif deferred reference counting is not supported or if the hint was ignored by the interpreter, such as when deferred reference counting is already enabled on obj. This function is thread-safe, and cannot fail.This function does nothing on builds with the GIL enabled, which do not support deferred reference counting. 
This also does nothing if obj is not an object tracked by the garbage collector (see gc.is_tracked() and PyObject_GC_IsTracked()).\nThis function is intended to be used soon after obj is created, by the code that creates it, such as in the object\u2019s tp_new slot.\nAdded in version 3.14.\n-\nint PyUnstable_Object_IsUniqueReferencedTemporary(PyObject *obj)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nCheck if obj is a unique temporary object. Returns 1 if obj is known to be a unique temporary object, and 0 otherwise. This function cannot fail, but the check is conservative, and may return 0 in some cases even if obj is a unique temporary object.\nIf an object is a unique temporary, it is guaranteed that the current code has the only reference to the object. For arguments to C functions, this should be used instead of checking if the reference count is 1. Starting with Python 3.14, the interpreter internally avoids some reference count modifications when loading objects onto the operands stack by borrowing references when possible, which means that a reference count of 1 by itself does not guarantee that a function argument is uniquely referenced.\nIn the example below, my_func is called with a unique temporary object as its argument:\nmy_func([1, 2, 3])\nIn the example below, my_func is not called with a unique temporary object as its argument, even if its refcount is 1:\nmy_list = [1, 2, 3]\nmy_func(my_list)\nSee also the function Py_REFCNT().\nAdded in version 3.14.\n-\nint PyUnstable_IsImmortal(PyObject *obj)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nThis function returns non-zero if obj is immortal, and zero otherwise. This function cannot fail.\nNote\nObjects that are immortal in one CPython version are not guaranteed to be immortal in another.\nAdded in version 3.14.\n-\nint PyUnstable_TryIncRef(PyObject *obj)\u00b6\n- This is Unstable API.
It may change without warning in minor releases.\nIncrements the reference count of obj if it is not zero. Returns\n1\nif the object\u2019s reference count was successfully incremented. Otherwise, this function returns0\n.PyUnstable_EnableTryIncRef()\nmust have been called earlier on obj or this function may spuriously return0\nin the free-threaded build.This function is logically equivalent to the following C code, except that it behaves atomically in the free-threaded build:\nif (Py_REFCNT(op) > 0) { Py_INCREF(op); return 1; } return 0;\nThis is intended as a building block for managing weak references without the overhead of a Python weak reference object.\nTypically, correct use of this function requires support from obj\u2019s deallocator (\ntp_dealloc\n). For example, the following sketch could be adapted to implement a \u201cweakmap\u201d that works like aWeakValueDictionary\nfor a specific type:PyMutex mutex; PyObject * add_entry(weakmap_key_type *key, PyObject *value) { PyUnstable_EnableTryIncRef(value); weakmap_type weakmap = ...; PyMutex_Lock(&mutex); weakmap_add_entry(weakmap, key, value); PyMutex_Unlock(&mutex); Py_RETURN_NONE; } PyObject * get_value(weakmap_key_type *key) { weakmap_type weakmap = ...; PyMutex_Lock(&mutex); PyObject *result = weakmap_find(weakmap, key); if (PyUnstable_TryIncRef(result)) { // `result` is safe to use PyMutex_Unlock(&mutex); return result; } // if we get here, `result` is starting to be garbage-collected, // but has not been removed from the weakmap yet PyMutex_Unlock(&mutex); return NULL; } // tp_dealloc function for weakmap values void value_dealloc(PyObject *value) { weakmap_type weakmap = ...; PyMutex_Lock(&mutex); weakmap_remove_value(weakmap, value); ... PyMutex_Unlock(&mutex); }\nAdded in version 3.14.\n-\nvoid PyUnstable_EnableTryIncRef(PyObject *obj)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nEnables subsequent uses of\nPyUnstable_TryIncRef()\non obj. 
The caller must hold a strong reference to obj when calling this.Added in version 3.14.\n-\nint PyUnstable_Object_IsUniquelyReferenced(PyObject *op)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nDetermine if op only has one reference.\nOn GIL-enabled builds, this function is equivalent to Py_REFCNT(op) == 1.\nOn a free-threaded build, this checks if op\u2019s reference count is equal to one and additionally checks if op is only used by this thread. Py_REFCNT(op) == 1 is not thread-safe on free-threaded builds; prefer this function.\nThe caller must hold an attached thread state, despite the fact that this function doesn\u2019t call into the Python interpreter. This function cannot fail.\nAdded in version 3.14.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 6470} +{"url": "https://docs.python.org/3/c-api/refcounting.html", "title": "Reference Counting", "content": "Reference Counting\u00b6\nThe functions and macros in this section are used for managing reference counts of Python objects.\n-\nPy_ssize_t Py_REFCNT(PyObject *o)\u00b6\n- Part of the Stable ABI since version 3.14.\nGet the reference count of the Python object o.\nNote that the returned value may not actually reflect how many references to the object are actually held. For example, some objects are immortal and have a very high refcount that does not reflect the actual number of references. Consequently, do not rely on the returned value to be accurate, other than a value of 0 or 1.\nUse the\nPy_SET_REFCNT()\nfunction to set an object reference count.Note\nOn free-threaded builds of Python, returning 1 isn\u2019t sufficient to determine if it\u2019s safe to treat o as having no access by other threads. 
Use PyUnstable_Object_IsUniquelyReferenced() for that instead.\nSee also the function PyUnstable_Object_IsUniqueReferencedTemporary().\nChanged in version 3.10: Py_REFCNT() is changed to an inline static function.\nChanged in version 3.11: The parameter type is no longer const PyObject*.\n-\nvoid Py_SET_REFCNT(PyObject *o, Py_ssize_t refcnt)\u00b6\nSet the object o reference counter to refcnt.\nOn a Python build with Free Threading, if refcnt is larger than UINT32_MAX, the object is made immortal.\nThis function has no effect on immortal objects.\nAdded in version 3.9.\nChanged in version 3.12: Immortal objects are not modified.\n-\nvoid Py_INCREF(PyObject *o)\u00b6\nIndicate taking a new strong reference to object o, indicating it is in use and should not be destroyed.\nThis function has no effect on immortal objects.\nThis function is usually used to convert a borrowed reference to a strong reference in-place. The Py_NewRef() function can be used to create a new strong reference.\nWhen done using the object, release it by calling Py_DECREF().\nThe object must not be NULL; if you aren\u2019t sure that it isn\u2019t NULL, use Py_XINCREF().\nDo not expect this function to actually modify o in any way.
For at least some objects, this function has no effect.\nChanged in version 3.12: Immortal objects are not modified.\n-\nvoid Py_XINCREF(PyObject *o)\u00b6\nSimilar to\nPy_INCREF()\n, but the object o can beNULL\n, in which case this has no effect.See also\nPy_XNewRef()\n.\n-\nPyObject *Py_NewRef(PyObject *o)\u00b6\n- Part of the Stable ABI since version 3.10.\nCreate a new strong reference to an object: call\nPy_INCREF()\non o and return the object o.When the strong reference is no longer needed,\nPy_DECREF()\nshould be called on it to release the reference.The object o must not be\nNULL\n; usePy_XNewRef()\nif o can beNULL\n.For example:\nPy_INCREF(obj); self->attr = obj;\ncan be written as:\nself->attr = Py_NewRef(obj);\nSee also\nPy_INCREF()\n.Added in version 3.10.\n-\nPyObject *Py_XNewRef(PyObject *o)\u00b6\n- Part of the Stable ABI since version 3.10.\nSimilar to\nPy_NewRef()\n, but the object o can be NULL.If the object o is\nNULL\n, the function just returnsNULL\n.Added in version 3.10.\n-\nvoid Py_DECREF(PyObject *o)\u00b6\nRelease a strong reference to object o, indicating the reference is no longer used.\nThis function has no effect on immortal objects.\nOnce the last strong reference is released (i.e. the object\u2019s reference count reaches 0), the object\u2019s type\u2019s deallocation function (which must not be\nNULL\n) is invoked.This function is usually used to delete a strong reference before exiting its scope.\nThe object must not be\nNULL\n; if you aren\u2019t sure that it isn\u2019tNULL\n, usePy_XDECREF()\n.Do not expect this function to actually modify o in any way. For at least some objects, this function has no effect.\nWarning\nThe deallocation function can cause arbitrary Python code to be invoked (e.g. when a class instance with a\n__del__()\nmethod is deallocated). While exceptions in such code are not propagated, the executed code has free access to all Python global variables. 
This means that any object that is reachable from a global variable should be in a consistent state beforePy_DECREF()\nis invoked. For example, code to delete an object from a list should copy a reference to the deleted object in a temporary variable, update the list data structure, and then callPy_DECREF()\nfor the temporary variable.Changed in version 3.12: Immortal objects are not modified.\n-\nvoid Py_XDECREF(PyObject *o)\u00b6\nSimilar to\nPy_DECREF()\n, but the object o can beNULL\n, in which case this has no effect. The same warning fromPy_DECREF()\napplies here as well.\n-\nvoid Py_CLEAR(PyObject *o)\u00b6\nRelease a strong reference for object o. The object may be\nNULL\n, in which case the macro has no effect; otherwise the effect is the same as forPy_DECREF()\n, except that the argument is also set toNULL\n. The warning forPy_DECREF()\ndoes not apply with respect to the object passed because the macro carefully uses a temporary variable and sets the argument toNULL\nbefore releasing the reference.It is a good idea to use this macro whenever releasing a reference to an object that might be traversed during garbage collection.\nChanged in version 3.12: The macro argument is now only evaluated once. If the argument has side effects, these are no longer duplicated.\n-\nvoid Py_IncRef(PyObject *o)\u00b6\n- Part of the Stable ABI.\nIndicate taking a new strong reference to object o. A function version of\nPy_XINCREF()\n. It can be used for runtime dynamic embedding of Python.\n-\nvoid Py_DecRef(PyObject *o)\u00b6\n- Part of the Stable ABI.\nRelease a strong reference to object o. A function version of\nPy_XDECREF()\n. 
It can be used for runtime dynamic embedding of Python.\n-\nPy_SETREF(dst, src)\u00b6\nMacro safely releasing a strong reference to object dst and setting dst to src.\nAs in case of\nPy_CLEAR()\n, \u201cthe obvious\u201d code can be deadly:Py_DECREF(dst); dst = src;\nThe safe way is:\nPy_SETREF(dst, src);\nThat arranges to set dst to src before releasing the reference to the old value of dst, so that any code triggered as a side-effect of dst getting torn down no longer believes dst points to a valid object.\nAdded in version 3.6.\nChanged in version 3.12: The macro arguments are now only evaluated once. If an argument has side effects, these are no longer duplicated.\n-\nPy_XSETREF(dst, src)\u00b6\nVariant of\nPy_SETREF\nmacro that usesPy_XDECREF()\ninstead ofPy_DECREF()\n.Added in version 3.6.\nChanged in version 3.12: The macro arguments are now only evaluated once. If an argument has side effects, these are no longer duplicated.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1578} +{"url": "https://docs.python.org/3/c-api/arg.html", "title": "Parsing arguments and building values", "content": "Parsing arguments and building values\u00b6\nThese functions are useful when creating your own extension functions and methods. Additional information and examples are available in Extending and Embedding the Python Interpreter.\nThe first three of these functions described, PyArg_ParseTuple()\n,\nPyArg_ParseTupleAndKeywords()\n, and PyArg_Parse()\n, all use format\nstrings which are used to tell the function about the expected arguments. The\nformat strings use the same syntax for each of these functions.\nParsing arguments\u00b6\nA format string consists of zero or more \u201cformat units.\u201d A format unit describes one Python object; it is usually a single character or a parenthesized sequence of format units. 
With a few exceptions, a format unit that is not a parenthesized sequence normally corresponds to a single address argument to these functions. In the following description, the quoted form is the format unit; the entry in (round) parentheses is the Python object type that matches the format unit; and the entry in [square] brackets is the type of the C variable(s) whose address should be passed.
Strings and buffers\u00b6
Note
On Python 3.12 and older, the macro PY_SSIZE_T_CLEAN must be defined before including Python.h to use all # variants of formats (s#, y#, etc.) explained below. This is not necessary on Python 3.13 and later.
These formats allow accessing an object as a contiguous chunk of memory. You don\u2019t have to provide raw storage for the returned unicode or bytes area.
Unless otherwise stated, buffers are not NUL-terminated.
There are three ways strings and buffers can be converted to C:
Formats such as y* and s* fill a Py_buffer structure. This locks the underlying buffer so that the caller can subsequently use the buffer even inside a Py_BEGIN_ALLOW_THREADS block without the risk of mutable data being resized or destroyed. As a result, you have to call PyBuffer_Release() after you have finished processing the data (or in any early abort case).
The es, es#, et and et# formats allocate the result buffer. You have to call PyMem_Free() after you have finished processing the data (or in any early abort case).
Other formats take a str or a read-only bytes-like object, such as bytes, and provide a const char * pointer to its buffer. In this case the buffer is \u201cborrowed\u201d: it is managed by the corresponding Python object, and shares the lifetime of this object. You won\u2019t have to release any memory yourself.
To ensure that the underlying buffer may be safely borrowed, the object\u2019s PyBufferProcs.bf_releasebuffer field must be NULL. This disallows common mutable objects such as bytearray, but also some read-only objects such as memoryview of bytes.
Besides this bf_releasebuffer requirement, there is no check to verify whether the input object is immutable (e.g. whether it would honor a request for a writable buffer, or whether another thread can mutate the data).
s (str) [const char *]
Convert a Unicode object to a C pointer to a character string. A pointer to an existing string is stored in the character pointer variable whose address you pass. The C string is NUL-terminated. The Python string must not contain embedded null code points; if it does, a ValueError exception is raised. Unicode objects are converted to C strings using 'utf-8' encoding. If this conversion fails, a UnicodeError is raised.
Note
This format does not accept bytes-like objects. If you want to accept filesystem paths and convert them to C character strings, it is preferable to use the O& format with PyUnicode_FSConverter() as converter.
Changed in version 3.5: Previously, TypeError was raised when embedded null code points were encountered in the Python string.
s* (str or bytes-like object) [Py_buffer]
This format accepts Unicode objects as well as bytes-like objects. It fills a Py_buffer structure provided by the caller. In this case the resulting C string may contain embedded NUL bytes. Unicode objects are converted to C strings using 'utf-8' encoding.
s# (str, read-only bytes-like object) [const char *, Py_ssize_t]
Like s*, except that it provides a borrowed buffer. The result is stored into two C variables, the first one a pointer to a C string, the second one its length. The string may contain embedded null bytes. Unicode objects are converted to C strings using 'utf-8' encoding.
z (str or None) [const char *]
Like s, but the Python object may also be None, in which case the C pointer is set to NULL.
z* (str, bytes-like object or None) [Py_buffer]
Like s*, but the Python object may also be None, in which case the buf member of the Py_buffer structure is set to NULL.
z# (str, read-only bytes-like object or None) [const char *, Py_ssize_t]
Like s#, but the Python object may also be None, in which case the C pointer is set to NULL.
y (read-only bytes-like object) [const char *]
This format converts a bytes-like object to a C pointer to a borrowed character string; it does not accept Unicode objects. The bytes buffer must not contain embedded null bytes; if it does, a ValueError exception is raised.
Changed in version 3.5: Previously, TypeError was raised when embedded null bytes were encountered in the bytes buffer.
y* (bytes-like object) [Py_buffer]
This variant on s* doesn\u2019t accept Unicode objects, only bytes-like objects. This is the recommended way to accept binary data.
y# (read-only bytes-like object) [const char *, Py_ssize_t]
This variant on s# doesn\u2019t accept Unicode objects, only bytes-like objects.
S (bytes) [PyBytesObject *]
Requires that the Python object is a bytes object, without attempting any conversion. Raises TypeError if the object is not a bytes object. The C variable may also be declared as PyObject*.
Y (bytearray) [PyByteArrayObject *]
Requires that the Python object is a bytearray object, without attempting any conversion. Raises TypeError if the object is not a bytearray object. The C variable may also be declared as PyObject*.
U (str) [PyObject *]
Requires that the Python object is a Unicode object, without attempting any conversion. Raises TypeError if the object is not a Unicode object.
The C variable may also be declared as PyObject*.
w* (read-write bytes-like object) [Py_buffer]
This format accepts any object which implements the read-write buffer interface. It fills a Py_buffer structure provided by the caller. The buffer may contain embedded null bytes. The caller has to call PyBuffer_Release() when it is done with the buffer.
es (str) [const char *encoding, char **buffer]
This variant on s is used for encoding Unicode into a character buffer. It only works for encoded data without embedded NUL bytes.
This format requires two arguments. The first is only used as input, and must be a const char* which points to the name of an encoding as a NUL-terminated string, or NULL, in which case 'utf-8' encoding is used. An exception is raised if the named encoding is not known to Python. The second argument must be a char**; the value of the pointer it references will be set to a buffer with the contents of the argument text. The text will be encoded in the encoding specified by the first argument.
PyArg_ParseTuple() will allocate a buffer of the needed size, copy the encoded data into this buffer and adjust *buffer to reference the newly allocated storage. The caller is responsible for calling PyMem_Free() to free the allocated buffer after use.
et (str, bytes or bytearray) [const char *encoding, char **buffer]
Same as es except that byte string objects are passed through without recoding them. Instead, the implementation assumes that the byte string object uses the encoding passed in as parameter.
es# (str) [const char *encoding, char **buffer, Py_ssize_t *buffer_length]
This variant on s# is used for encoding Unicode into a character buffer. Unlike the es format, this variant allows input data which contains NUL characters.
It requires three arguments. The first is only used as input, and must be a const char* which points to the name of an encoding as a NUL-terminated string, or NULL, in which case 'utf-8' encoding is used. An exception is raised if the named encoding is not known to Python. The second argument must be a char**; the value of the pointer it references will be set to a buffer with the contents of the argument text. The text will be encoded in the encoding specified by the first argument. The third argument must be a pointer to an integer; the referenced integer will be set to the number of bytes in the output buffer.
There are two modes of operation:
If *buffer points to a NULL pointer, the function will allocate a buffer of the needed size, copy the encoded data into this buffer and set *buffer to reference the newly allocated storage. The caller is responsible for calling PyMem_Free() to free the allocated buffer after usage.
If *buffer points to a non-NULL pointer (an already allocated buffer), PyArg_ParseTuple() will use this location as the buffer and interpret the initial value of *buffer_length as the buffer size. It will then copy the encoded data into the buffer and NUL-terminate it. If the buffer is not large enough, a ValueError will be set.
In both cases, *buffer_length is set to the length of the encoded data without the trailing NUL byte.
et# (str, bytes or bytearray) [const char *encoding, char **buffer, Py_ssize_t *buffer_length]
Same as es# except that byte string objects are passed through without recoding them. Instead, the implementation assumes that the byte string object uses the encoding passed in as parameter.
Changed in version 3.12: u, u#, Z, and Z# are removed because they used a legacy Py_UNICODE* representation.
Numbers\u00b6
These formats allow representing Python numbers or single characters as C numbers. Formats that require int, float or complex can also use the corresponding special methods __index__(), __float__() or __complex__() to convert the Python object to the required type.
For signed integer formats, OverflowError is raised if the value is out of range for the C type. For unsigned integer formats, no range checking is done \u2014 the most significant bits are silently truncated when the receiving field is too small to receive the value.
b (int) [unsigned char]
Convert a nonnegative Python integer to an unsigned tiny integer, stored in a C unsigned char.
B (int) [unsigned char]
Convert a Python integer to a tiny integer without overflow checking, stored in a C unsigned char.
h (int) [short int]
Convert a Python integer to a C short int.
H (int) [unsigned short int]
Convert a Python integer to a C unsigned short int, without overflow checking.
i (int) [int]
Convert a Python integer to a plain C int.
I (int) [unsigned int]
Convert a Python integer to a C unsigned int, without overflow checking.
l (int) [long int]
Convert a Python integer to a C long int.
k (int) [unsigned long]
Convert a Python integer to a C unsigned long without overflow checking.
Changed in version 3.14: Use __index__() if available.
L (int) [long long]
Convert a Python integer to a C long long.
K (int) [unsigned long long]
Convert a Python integer to a C unsigned long long without overflow checking.
Changed in version 3.14: Use __index__() if available.
n (int) [Py_ssize_t]
Convert a Python integer to a C Py_ssize_t.
c (bytes or bytearray of length 1) [char]
Convert a Python byte, represented as
a bytes or bytearray object of length 1, to a C char.
Changed in version 3.3: Allow bytearray objects.
C (str of length 1) [int]
Convert a Python character, represented as a str object of length 1, to a C int.
f (float) [float]
Convert a Python floating-point number to a C float.
d (float) [double]
Convert a Python floating-point number to a C double.
D (complex) [Py_complex]
Convert a Python complex number to a C Py_complex structure.
Other objects\u00b6
O (object) [PyObject *]
Store a Python object (without any conversion) in a C object pointer. The C program thus receives the actual object that was passed. A new strong reference to the object is not created (i.e. its reference count is not increased). The pointer stored is not NULL.
O! (object) [typeobject, PyObject *]
Store a Python object in a C object pointer. This is similar to O, but takes two C arguments: the first is the address of a Python type object, the second is the address of the C variable (of type PyObject*) into which the object pointer is stored. If the Python object does not have the required type, TypeError is raised.
O& (object) [converter, address]
Convert a Python object to a C variable through a converter function. This takes two arguments: the first is a function, the second is the address of a C variable (of arbitrary type), converted to void*. The converter function in turn is called as follows:
status = converter(object, address);
where object is the Python object to be converted and address is the void* argument that was passed to the PyArg_Parse* function. The returned status should be 1 for a successful conversion and 0 if the conversion has failed. When the conversion fails, the converter function should raise an exception and leave the content of address unmodified.
If the converter returns Py_CLEANUP_SUPPORTED, it may get called a second time if the argument parsing eventually fails, giving the converter a chance to release any memory that it had already allocated. In this second call, the object parameter will be NULL; address will have the same value as in the original call.
Examples of converters: PyUnicode_FSConverter() and PyUnicode_FSDecoder().
Changed in version 3.1: Py_CLEANUP_SUPPORTED was added.
p (bool) [int]
Tests the value passed in for truth (a boolean predicate) and converts the result to its equivalent C true/false integer value. Sets the int to 1 if the expression was true and 0 if it was false. This accepts any valid Python value. See Truth Value Testing for more information about how Python tests values for truth.
Added in version 3.3.
(items) (sequence) [matching-items]
The object must be a Python sequence (except str, bytes or bytearray) whose length is the number of format units in items. The C arguments must correspond to the individual format units in items. Format units for sequences may be nested.
If items contains format units which store a borrowed buffer (s, s#, z, z#, y, or y#) or a borrowed reference (S, Y, U, O, or O!), the object must be a Python tuple. The converter for the O& format unit in items must not store a borrowed buffer or a borrowed reference.
Deprecated since version 3.14: Non-tuple sequences are deprecated if items contains format units which store a borrowed buffer or a borrowed reference.
A few other characters have a meaning in a format string. These may not occur inside nested parentheses. They are:
|
Indicates that the remaining arguments in the Python argument list are optional. The C variables corresponding to optional arguments should be initialized to their default value \u2014 when an optional argument is not specified, PyArg_ParseTuple() does not touch the contents of the corresponding C variable(s).
$
PyArg_ParseTupleAndKeywords() only: Indicates that the remaining arguments in the Python argument list are keyword-only. Currently, all keyword-only arguments must also be optional arguments, so | must always be specified before $ in the format string.
Added in version 3.3.
:
The list of format units ends here; the string after the colon is used as the function name in error messages (the \u201cassociated value\u201d of the exception that PyArg_ParseTuple() raises).
;
The list of format units ends here; the string after the semicolon is used as the error message instead of the default error message. : and ; mutually exclude each other.
Note that any Python object references which are provided to the caller are borrowed references; do not release them (i.e. do not decrement their reference count)!
Additional arguments passed to these functions must be addresses of variables whose type is determined by the format string; these are used to store values from the input tuple. There are a few cases, as described in the list of format units above, where these parameters are used as input values; they should match what is specified for the corresponding format unit in that case.
For the conversion to succeed, the arg object must match the format and the format must be exhausted. On success, the PyArg_Parse* functions return true, otherwise they return false and raise an appropriate exception.
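These parsing semantics can be observed without writing a C extension, since ctypes.pythonapi exposes the interpreter\u2019s own C API. The following is an illustrative sketch only (real extension code calls PyArg_ParseTuple() from C); the format string "i|i:demo" and the name "demo" are made up for the example:

```python
import ctypes

# ctypes.pythonapi is a PyDLL: calls hold the GIL, and a Python
# exception left pending by the call is raised automatically.
api = ctypes.pythonapi
api.PyArg_ParseTuple.restype = ctypes.c_int
api.PyArg_ParseTuple.argtypes = [ctypes.py_object, ctypes.c_char_p]

x = ctypes.c_int(0)
y = ctypes.c_int(99)          # default for the optional second argument

# "i|i:demo": one required int, one optional int; "demo" names the
# function in error messages.
ok = api.PyArg_ParseTuple((7,), b"i|i:demo", ctypes.byref(x), ctypes.byref(y))
print(ok, x.value, y.value)   # 1 7 99 -- the optional slot is left untouched

# A type mismatch makes the parser return false and set TypeError,
# which PyDLL then surfaces as a raised exception.
try:
    api.PyArg_ParseTuple(("not an int",), b"i:demo", ctypes.byref(x))
except TypeError as exc:
    print(exc)
```

Note how the optional C variable keeps its caller-supplied default when the argument is absent, exactly as described for | above.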
When the\nPyArg_Parse*\nfunctions fail due to conversion failure in one\nof the format units, the variables at the addresses corresponding to that\nand the following format units are left untouched.\nAPI Functions\u00b6\n-\nint PyArg_ParseTuple(PyObject *args, const char *format, ...)\u00b6\n- Part of the Stable ABI.\nParse the parameters of a function that takes only positional parameters into local variables. Returns true on success; on failure, it returns false and raises the appropriate exception.\n-\nint PyArg_VaParse(PyObject *args, const char *format, va_list vargs)\u00b6\n- Part of the Stable ABI.\nIdentical to\nPyArg_ParseTuple()\n, except that it accepts a va_list rather than a variable number of arguments.\n-\nint PyArg_ParseTupleAndKeywords(PyObject *args, PyObject *kw, const char *format, char *const *keywords, ...)\u00b6\n- Part of the Stable ABI.\nParse the parameters of a function that takes both positional and keyword parameters into local variables. The keywords argument is a\nNULL\n-terminated array of keyword parameter names specified as null-terminated ASCII or UTF-8 encoded C strings. Empty names denote positional-only parameters. Returns true on success; on failure, it returns false and raises the appropriate exception.Note\nThe keywords parameter declaration is char *const* in C and const char *const* in C++. This can be overridden with the\nPY_CXX_CONST\nmacro.Changed in version 3.6: Added support for positional-only parameters.\nChanged in version 3.13: The keywords parameter has now type char *const* in C and const char *const* in C++, instead of char**. 
Added support for non-ASCII keyword parameter names.
- int PyArg_VaParseTupleAndKeywords(PyObject *args, PyObject *kw, const char *format, char *const *keywords, va_list vargs)\u00b6
- Part of the Stable ABI.
Identical to PyArg_ParseTupleAndKeywords(), except that it accepts a va_list rather than a variable number of arguments.
- int PyArg_ValidateKeywordArguments(PyObject*)\u00b6
- Part of the Stable ABI.
Ensure that the keys in the keywords argument dictionary are strings. This is only needed if PyArg_ParseTupleAndKeywords() is not used, since the latter already does this check.
Added in version 3.2.
- int PyArg_Parse(PyObject *args, const char *format, ...)\u00b6
- Part of the Stable ABI.
Parse the parameter of a function that takes a single positional parameter into a local variable. Returns true on success; on failure, it returns false and raises the appropriate exception.
Example:
// Function using METH_O calling convention
static PyObject*
my_function(PyObject *module, PyObject *arg)
{
    int value;
    if (!PyArg_Parse(arg, \"i:my_function\", &value)) {
        return NULL;
    }
    // ... use value ...
}
- int PyArg_UnpackTuple(PyObject *args, const char *name, Py_ssize_t min, Py_ssize_t max, ...)\u00b6
- Part of the Stable ABI.
A simpler form of parameter retrieval which does not use a format string to specify the types of the arguments. Functions which use this method to retrieve their parameters should be declared as METH_VARARGS in function or method tables. The tuple containing the actual parameters should be passed as args; it must actually be a tuple. The length of the tuple must be at least min and no more than max; min and max may be equal. Additional arguments must be passed to the function, each of which should be a pointer to a PyObject* variable; these will be filled in with the values from args; they will contain borrowed references.
The variables which correspond to optional parameters not given by args will not be filled in; these should be initialized by the caller. This function returns true on success and false if args is not a tuple or contains the wrong number of elements; an exception will be set if there was a failure.
This is an example of the use of this function, taken from the sources for the _weakref helper module for weak references:
static PyObject *
weakref_ref(PyObject *self, PyObject *args)
{
    PyObject *object;
    PyObject *callback = NULL;
    PyObject *result = NULL;

    if (PyArg_UnpackTuple(args, \"ref\", 1, 2, &object, &callback)) {
        result = PyWeakref_NewRef(object, callback);
    }
    return result;
}
The call to PyArg_UnpackTuple() in this example is entirely equivalent to this call to PyArg_ParseTuple():
PyArg_ParseTuple(args, \"O|O:ref\", &object, &callback)
- PY_CXX_CONST\u00b6
The value to be inserted, if any, before char *const* in the keywords parameter declaration of PyArg_ParseTupleAndKeywords() and PyArg_VaParseTupleAndKeywords(). Default empty for C and const for C++ (const char *const*). To override, define it to the desired value before including Python.h.
Added in version 3.13.
Building values\u00b6
- PyObject *Py_BuildValue(const char *format, ...)\u00b6
- Return value: New reference. Part of the Stable ABI.
Create a new value based on a format string similar to those accepted by the PyArg_Parse* family of functions and a sequence of values. Returns the value or NULL in the case of an error; an exception will be raised if NULL is returned.
Py_BuildValue() does not always build a tuple. It builds a tuple only if its format string contains two or more format units. If the format string is empty, it returns None; if it contains exactly one format unit, it returns whatever object is described by that format unit.
To force it to return a tuple of size 0 or one, parenthesize the format string.
When memory buffers are passed as parameters to supply data to build objects, as for the s and s# formats, the required data is copied. Buffers provided by the caller are never referenced by the objects created by Py_BuildValue(). In other words, if your code invokes malloc() and passes the allocated memory to Py_BuildValue(), your code is responsible for calling free() for that memory once Py_BuildValue() returns.
In the following description, the quoted form is the format unit; the entry in (round) parentheses is the Python object type that the format unit will return; and the entry in [square] brackets is the type of the C value(s) to be passed.
The characters space, tab, colon and comma are ignored in format strings (but not within format units such as s#). This can be used to make long format strings a tad more readable.
s (str or None) [const char *]
Convert a null-terminated C string to a Python str object using 'utf-8' encoding. If the C string pointer is NULL, None is used.
s# (str or None) [const char *, Py_ssize_t]
Convert a C string and its length to a Python str object using 'utf-8' encoding. If the C string pointer is NULL, the length is ignored and None is returned.
y (bytes) [const char *]
This converts a C string to a Python bytes object. If the C string pointer is NULL, None is returned.
y# (bytes) [const char *, Py_ssize_t]
This converts a C string and its length to a Python object. If the C string pointer is NULL, None is returned.
z (str or None) [const char *]
Same as s.
z# (str or None) [const char *, Py_ssize_t]
Same as s#.
u (str) [const wchar_t *]
Convert a null-terminated wchar_t buffer of Unicode (UTF-16 or UCS-4) data to a Python Unicode object. If the Unicode buffer pointer is NULL, None is returned.
u# (str) [const wchar_t *, Py_ssize_t]
Convert a Unicode (UTF-16 or UCS-4) data buffer and its length to a Python Unicode object. If the Unicode buffer pointer is NULL, the length is ignored and None is returned.
U (str or None) [const char *]
Same as s.
U# (str or None) [const char *, Py_ssize_t]
Same as s#.
i (int) [int]
Convert a plain C int to a Python integer object.
b (int) [char]
Convert a plain C char to a Python integer object.
h (int) [short int]
Convert a plain C short int to a Python integer object.
l (int) [long int]
Convert a C long int to a Python integer object.
B (int) [unsigned char]
Convert a C unsigned char to a Python integer object.
H (int) [unsigned short int]
Convert a C unsigned short int to a Python integer object.
I (int) [unsigned int]
Convert a C unsigned int to a Python integer object.
k (int) [unsigned long]
Convert a C unsigned long to a Python integer object.
L (int) [long long]
Convert a C long long to a Python integer object.
K (int) [unsigned long long]
Convert a C unsigned long long to a Python integer object.
n (int) [Py_ssize_t]
Convert a C Py_ssize_t to a Python integer.
p (bool) [int]
Convert a C int to a Python bool object.
Be aware that this format requires an int argument. Unlike most other contexts in C, variadic arguments are not coerced to a suitable type automatically. You can convert another type (for example, a pointer or a float) to a suitable int value using (x) ? 1 : 0 or !!x.
Added in version 3.14.
c (bytes of length 1) [char]
Convert a C int representing a byte to a Python bytes object of length 1.
C (str of length 1) [int]
Convert a C int representing a character to a Python str object of length 1.
d (float) [double]
Convert a C double to a Python floating-point number.
f (float) [float]
Convert a C float to a Python floating-point number.
D (complex) [Py_complex *]
Convert a C Py_complex structure to a Python complex number.
O (object) [PyObject *]
Pass a Python object untouched but create a new strong reference to it (i.e. its reference count is incremented by one). If the object passed in is a NULL pointer, it is assumed that this was caused because the call producing the argument found an error and set an exception. Therefore, Py_BuildValue() will return NULL but won\u2019t raise an exception. If no exception has been raised yet, SystemError is set.
S (object) [PyObject *]
Same as O.
N (object) [PyObject *]
Same as O, except it doesn\u2019t create a new strong reference. Useful when the object is created by a call to an object constructor in the argument list.
O& (object) [converter, anything]
Convert anything to a Python object through a converter function. The function is called with anything (which should be compatible with void*) as its argument and should return a \u201cnew\u201d Python object, or NULL if an error occurred.
(items) (tuple) [matching-items]
Convert a sequence of C values to a Python tuple with the same number of items.
[items] (list) [matching-items]
Convert a sequence of C values to a Python list with the same number of items.
{items} (dict) [matching-items]
Convert a sequence of C values to a Python dictionary. Each pair of consecutive C values adds one item to the dictionary, serving as key and value, respectively.
If there is an error in the format string, the SystemError exception is set and NULL returned.
- PyObject *Py_VaBuildValue(const char *format, va_list vargs)\u00b6
- Return value: New reference. Part of the Stable ABI.
Identical to Py_BuildValue(), except that it accepts a va_list rather than a variable number of arguments.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 6723}
{"url": "https://docs.python.org/3/c-api/call.html", "title": "Call Protocol", "content": "Call Protocol\u00b6
CPython supports two different calling protocols: tp_call and vectorcall.
The tp_call Protocol\u00b6
Instances of classes that set tp_call are callable. The signature of the slot is:
PyObject *tp_call(PyObject *callable, PyObject *args, PyObject *kwargs);
A call is made using a tuple for the positional arguments and a dict for the keyword arguments, similarly to callable(*args, **kwargs) in Python code. args must be non-NULL (use an empty tuple if there are no arguments) but kwargs may be NULL if there are no keyword arguments.
This convention is not only used by tp_call: tp_new and tp_init also pass arguments this way.
To call an object, use PyObject_Call() or another call API.
The Vectorcall Protocol\u00b6
Added in version 3.9.
The vectorcall protocol was introduced in PEP 590 as an additional protocol for making calls more efficient.
As a rule of thumb, CPython will prefer vectorcall for internal calls if the callable supports it.
However, this is not a hard rule. Additionally, some third-party extensions use tp_call directly (rather than using PyObject_Call()). Therefore, a class supporting vectorcall must also implement tp_call. Moreover, the callable must behave the same regardless of which protocol is used. The recommended way to achieve this is by setting tp_call to PyVectorcall_Call(). This bears repeating:
Warning
A class supporting vectorcall must also implement tp_call with the same semantics.
Changed in version 3.12: The Py_TPFLAGS_HAVE_VECTORCALL flag is now removed from a class when the class\u2019s __call__() method is reassigned. (This internally sets tp_call only, and thus may make it behave differently than the vectorcall function.) In earlier Python versions, vectorcall should only be used with immutable or static types.
A class should not implement vectorcall if that would be slower than tp_call. For example, if the callee needs to convert the arguments to an args tuple and kwargs dict anyway, then there is no point in implementing vectorcall.
Classes can implement the vectorcall protocol by enabling the Py_TPFLAGS_HAVE_VECTORCALL flag and setting tp_vectorcall_offset to the offset inside the object structure where a vectorcallfunc appears. This is a pointer to a function with the following signature:
- typedef PyObject *(*vectorcallfunc)(PyObject *callable, PyObject *const *args, size_t nargsf, PyObject *kwnames)\u00b6
- Part of the Stable ABI since version 3.12.
callable is the object being called.
- args is a C array consisting of the positional arguments followed by the values of the keyword arguments. This can be NULL if there are no arguments.
- nargsf is the number of positional arguments plus possibly the PY_VECTORCALL_ARGUMENTS_OFFSET flag. To get the actual number of positional arguments from nargsf, use PyVectorcall_NARGS().
- kwnames is a tuple containing the names of the keyword arguments; in other words, the keys of the kwargs dict. These names must be strings (instances of str or a subclass) and they must be unique. If there are no keyword arguments, then kwnames can instead be NULL.
- PY_VECTORCALL_ARGUMENTS_OFFSET\u00b6
- Part of the Stable ABI since version 3.12.
If this flag is set in a vectorcall nargsf argument, the callee is allowed to temporarily change args[-1]. In other words, args points to argument 1 (not 0) in the allocated vector. The callee must restore the value of args[-1] before returning.
For PyObject_VectorcallMethod(), this flag means instead that args[0] may be changed.
Whenever they can do so cheaply (without additional allocation), callers are encouraged to use PY_VECTORCALL_ARGUMENTS_OFFSET. Doing so will allow callables such as bound methods to make their onward calls (which include a prepended self argument) very efficiently.
Added in version 3.8.
To call an object that implements vectorcall, use a call API function as with any other callable. PyObject_Vectorcall() will usually be most efficient.
Recursion Control\u00b6
When using tp_call, callees do not need to worry about recursion: CPython uses Py_EnterRecursiveCall() and Py_LeaveRecursiveCall() for calls made using tp_call.
For efficiency, this is not the case for calls done using vectorcall: the callee should use Py_EnterRecursiveCall and Py_LeaveRecursiveCall if needed.
Vectorcall Support API\u00b6
- Py_ssize_t PyVectorcall_NARGS(size_t nargsf)\u00b6
- Part of the Stable ABI since version 3.12.
Given a vectorcall nargsf argument, return the actual number of arguments. Currently equivalent to:
(Py_ssize_t)(nargsf & ~PY_VECTORCALL_ARGUMENTS_OFFSET)
However, the function PyVectorcall_NARGS should be used to allow for future extensions.
Added in version 3.8.
- vectorcallfunc PyVectorcall_Function(PyObject *op)\u00b6
If op does not support the vectorcall protocol (either because the type does not or because the specific instance does not), return NULL. Otherwise, return the vectorcall function pointer stored in op. This function never raises an exception.
This is mostly useful to check whether or not op supports vectorcall, which can be done by checking PyVectorcall_Function(op) != NULL.
Added in version 3.9.
- PyObject *PyVectorcall_Call(PyObject *callable, PyObject *tuple, PyObject *dict)\u00b6
- Part of the Stable ABI since version 3.12.
Call callable\u2019s vectorcallfunc with positional and keyword arguments given in a tuple and dict, respectively.
This is a specialized function, intended to be put in the tp_call slot or be used in an implementation of tp_call. It does not check the Py_TPFLAGS_HAVE_VECTORCALL flag and it does not fall back to tp_call.
Added in version 3.8.
Object Calling API\u00b6
Various functions are available for calling a Python object. Each converts its arguments to a convention supported by the called object \u2013 either tp_call or vectorcall.
In order to do as little conversion as possible, pick one that best fits the format of data you have available.
The following table summarizes the available functions; please see individual documentation for details.
Function | callable | args | kwargs
---|---|---|---
PyObject_Call() | PyObject * | tuple | dict/NULL
PyObject_CallNoArgs() | PyObject * | \u2014 | \u2014
PyObject_CallOneArg() | PyObject * | 1 object | \u2014
PyObject_CallObject() | PyObject * | tuple/NULL | \u2014
PyObject_CallFunction() | PyObject * | format | \u2014
PyObject_CallMethod() | obj + char* name | format | \u2014
PyObject_CallFunctionObjArgs() | PyObject * | variadic | \u2014
PyObject_CallMethodObjArgs() | obj + name | variadic | \u2014
PyObject_CallMethodNoArgs() | obj + name | \u2014 | \u2014
PyObject_CallMethodOneArg() | obj + name | 1 object | \u2014
PyObject_Vectorcall() | PyObject * | vectorcall | vectorcall
PyObject_VectorcallDict() | PyObject * | vectorcall | dict/NULL
PyObject_VectorcallMethod() | arg + name | vectorcall | vectorcall
- PyObject *PyObject_Call(PyObject *callable, PyObject *args, PyObject *kwargs)\u00b6
- Return value: New reference. Part of the Stable ABI.
Call a callable Python object callable, with arguments given by the tuple args, and named arguments given by the dictionary kwargs.
args must not be NULL; use an empty tuple if no arguments are needed. If no named arguments are needed, kwargs can be NULL.
Return the result of the call on success, or raise an exception and return NULL on failure.
This is the equivalent of the Python expression: callable(*args, **kwargs).
- PyObject *PyObject_CallNoArgs(PyObject *callable)\u00b6
- Return value: New reference. Part of the Stable ABI since version 3.10.
Call a callable Python object callable without any arguments.
It is the most efficient way to call a callable Python object without any argument.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\nAdded in version 3.9.\n-\nPyObject *PyObject_CallOneArg(PyObject *callable, PyObject *arg)\u00b6\n- Return value: New reference.\nCall a callable Python object callable with exactly 1 positional argument arg and no keyword arguments.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\nAdded in version 3.9.\n-\nPyObject *PyObject_CallObject(PyObject *callable, PyObject *args)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCall a callable Python object callable, with arguments given by the tuple args. If no arguments are needed, then args can be NULL.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\nThis is the equivalent of the Python expression:\ncallable(*args)\n.\n-\nPyObject *PyObject_CallFunction(PyObject *callable, const char *format, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCall a callable Python object callable, with a variable number of C arguments. The C arguments are described using a\nPy_BuildValue()\nstyle format string. The format can be NULL, indicating that no arguments are provided.Return the result of the call on success, or raise an exception and return NULL on failure.\nThis is the equivalent of the Python expression:\ncallable(*args)\n.Note that if you only pass PyObject* args,\nPyObject_CallFunctionObjArgs()\nis a faster alternative.Changed in version 3.4: The type of format was changed from\nchar *\n.\n-\nPyObject *PyObject_CallMethod(PyObject *obj, const char *name, const char *format, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCall the method named name of object obj with a variable number of C arguments. 
The C arguments are described by a\nPy_BuildValue()\nformat string that should produce a tuple.The format can be NULL, indicating that no arguments are provided.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\nThis is the equivalent of the Python expression:\nobj.name(arg1, arg2, ...)\n.Note that if you only pass PyObject* args,\nPyObject_CallMethodObjArgs()\nis a faster alternative.Changed in version 3.4: The types of name and format were changed from\nchar *\n.\n-\nPyObject *PyObject_CallFunctionObjArgs(PyObject *callable, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCall a callable Python object callable, with a variable number of PyObject* arguments. The arguments are provided as a variable number of parameters followed by NULL.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\nThis is the equivalent of the Python expression:\ncallable(arg1, arg2, ...)\n.\n-\nPyObject *PyObject_CallMethodObjArgs(PyObject *obj, PyObject *name, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCall a method of the Python object obj, where the name of the method is given as a Python string object in name. It is called with a variable number of PyObject* arguments. 
The arguments are provided as a variable number of parameters followed by NULL.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\n-\nPyObject *PyObject_CallMethodNoArgs(PyObject *obj, PyObject *name)\u00b6\nCall a method of the Python object obj without arguments, where the name of the method is given as a Python string object in name.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\nAdded in version 3.9.\n-\nPyObject *PyObject_CallMethodOneArg(PyObject *obj, PyObject *name, PyObject *arg)\u00b6\nCall a method of the Python object obj with a single positional argument arg, where the name of the method is given as a Python string object in name.\nReturn the result of the call on success, or raise an exception and return NULL on failure.\nAdded in version 3.9.\n-\nPyObject *PyObject_Vectorcall(PyObject *callable, PyObject *const *args, size_t nargsf, PyObject *kwnames)\u00b6\n- Part of the Stable ABI since version 3.12.\nCall a callable Python object callable. The arguments are the same as for\nvectorcallfunc\n. If callable supports vectorcall, this directly calls the vectorcall function stored in callable.Return the result of the call on success, or raise an exception and return NULL on failure.\nAdded in version 3.8: as\n_PyObject_Vectorcall\nChanged in version 3.9: Renamed to the current name, without the leading underscore. The old provisional name is soft deprecated.\n-\nPyObject *PyObject_VectorcallDict(PyObject *callable, PyObject *const *args, size_t nargsf, PyObject *kwdict)\u00b6\nCall callable with positional arguments passed exactly as in the vectorcall protocol, but with keyword arguments passed as a dictionary kwdict. The args array contains only the positional arguments.\nRegardless of which protocol is used internally, a conversion of arguments needs to be done. 
Therefore, this function should only be used if the caller already has a dictionary ready to use for the keyword arguments, but not a tuple for the positional arguments.\nAdded in version 3.9.\n-\nPyObject *PyObject_VectorcallMethod(PyObject *name, PyObject *const *args, size_t nargsf, PyObject *kwnames)\u00b6\n- Part of the Stable ABI since version 3.12.\nCall a method using the vectorcall calling convention. The name of the method is given as a Python string name. The object whose method is called is args[0], and the args array starting at args[1] represents the arguments of the call. There must be at least one positional argument. nargsf is the number of positional arguments including args[0], plus\nPY_VECTORCALL_ARGUMENTS_OFFSET\nif the value ofargs[0]\nmay temporarily be changed. Keyword arguments can be passed just like inPyObject_Vectorcall()\n.If the object has the\nPy_TPFLAGS_METHOD_DESCRIPTOR\nfeature, this will call the unbound method object with the full args vector as arguments.Return the result of the call on success, or raise an exception and return NULL on failure.\nAdded in version 3.9.\nCall Support API\u00b6\n-\nint PyCallable_Check(PyObject *o)\u00b6\n- Part of the Stable ABI.\nDetermine if the object o is callable. Return\n1\nif the object is callable and0\notherwise. This function always succeeds.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3332} +{"url": "https://docs.python.org/3/howto/sorting.html", "title": "Sorting Techniques", "content": "Sorting Techniques\u00b6\n- Author:\nAndrew Dalke and Raymond Hettinger\nPython lists have a built-in list.sort()\nmethod that modifies the list\nin-place. There is also a sorted()\nbuilt-in function that builds a new\nsorted list from an iterable.\nIn this document, we explore the various techniques for sorting data using Python.\nSorting Basics\u00b6\nA simple ascending sort is very easy: just call the sorted()\nfunction. 
It\nreturns a new sorted list:\n>>> sorted([5, 2, 3, 1, 4])\n[1, 2, 3, 4, 5]\nYou can also use the list.sort()\nmethod. It modifies the list\nin-place (and returns None\nto avoid confusion). Usually it\u2019s less convenient\nthan sorted()\n- but if you don\u2019t need the original list, it\u2019s slightly\nmore efficient.\n>>> a = [5, 2, 3, 1, 4]\n>>> a.sort()\n>>> a\n[1, 2, 3, 4, 5]\nAnother difference is that the list.sort()\nmethod is only defined for\nlists. In contrast, the sorted()\nfunction accepts any iterable.\n>>> sorted({1: 'D', 2: 'B', 3: 'B', 4: 'E', 5: 'A'})\n[1, 2, 3, 4, 5]\nKey Functions\u00b6\nThe list.sort()\nmethod and the functions sorted()\n,\nmin()\n, max()\n, heapq.nsmallest()\n, and\nheapq.nlargest()\nhave a key parameter to specify a function (or\nother callable) to be called on each list element prior to making\ncomparisons.\nFor example, here\u2019s a case-insensitive string comparison using\nstr.casefold()\n:\n>>> sorted(\"This is a test string from Andrew\".split(), key=str.casefold)\n['a', 'Andrew', 'from', 'is', 'string', 'test', 'This']\nThe value of the key parameter should be a function (or other callable) that takes a single argument and returns a key to use for sorting purposes. This technique is fast because the key function is called exactly once for each input record.\nA common pattern is to sort complex objects using some of the object\u2019s indices as keys. For example:\n>>> student_tuples = [\n... ('john', 'A', 15),\n... ('jane', 'B', 12),\n... ('dave', 'B', 10),\n... ]\n>>> sorted(student_tuples, key=lambda student: student[2]) # sort by age\n[('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)]\nThe same technique works for objects with named attributes. For example:\n>>> class Student:\n... def __init__(self, name, grade, age):\n... self.name = name\n... self.grade = grade\n... self.age = age\n... def __repr__(self):\n... return repr((self.name, self.grade, self.age))\n>>> student_objects = [\n... 
Student('john', 'A', 15),\n... Student('jane', 'B', 12),\n... Student('dave', 'B', 10),\n... ]\n>>> sorted(student_objects, key=lambda student: student.age) # sort by age\n[('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)]\nObjects with named attributes can be made by a regular class as shown\nabove, or they can be instances of dataclass\nor\na named tuple.\nOperator Module Functions and Partial Function Evaluation\u00b6\nThe key function patterns shown above are very common, so Python provides\nconvenience functions to make accessor functions easier and faster. The\noperator\nmodule has itemgetter()\n,\nattrgetter()\n, and a methodcaller()\nfunction.\nUsing those functions, the above examples become simpler and faster:\n>>> from operator import itemgetter, attrgetter\n>>> sorted(student_tuples, key=itemgetter(2))\n[('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)]\n>>> sorted(student_objects, key=attrgetter('age'))\n[('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)]\nThe operator module functions allow multiple levels of sorting. For example, to sort by grade then by age:\n>>> sorted(student_tuples, key=itemgetter(1,2))\n[('john', 'A', 15), ('dave', 'B', 10), ('jane', 'B', 12)]\n>>> sorted(student_objects, key=attrgetter('grade', 'age'))\n[('john', 'A', 15), ('dave', 'B', 10), ('jane', 'B', 12)]\nThe functools\nmodule provides another helpful tool for making\nkey-functions. 
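The methodcaller() function listed above does not appear in this section's examples; as a small illustrative sketch (the word list here is hypothetical), it builds a key that invokes a named method on each element:

```python
from operator import methodcaller

# methodcaller('count', 'e') builds a callable roughly equivalent to
# lambda s: s.count('e'), so each word is keyed by its number of 'e's.
words = ['pepper', 'salt', 'cheese', 'bread']  # hypothetical sample data
by_e_count = sorted(words, key=methodcaller('count', 'e'))
print(by_e_count)  # ['salt', 'bread', 'pepper', 'cheese']
```

Like itemgetter() and attrgetter(), the callable produced by methodcaller() can be reused across multiple sorts.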
The partial()\nfunction can reduce the\narity of a multi-argument\nfunction making it suitable for use as a key-function.\n>>> from functools import partial\n>>> from unicodedata import normalize\n>>> names = 'Zo\u00eb \u00c5bj\u00f8rn N\u00fa\u00f1ez \u00c9lana Zeke Abe Nubia Eloise'.split()\n>>> sorted(names, key=partial(normalize, 'NFD'))\n['Abe', '\u00c5bj\u00f8rn', 'Eloise', '\u00c9lana', 'Nubia', 'N\u00fa\u00f1ez', 'Zeke', 'Zo\u00eb']\n>>> sorted(names, key=partial(normalize, 'NFC'))\n['Abe', 'Eloise', 'Nubia', 'N\u00fa\u00f1ez', 'Zeke', 'Zo\u00eb', '\u00c5bj\u00f8rn', '\u00c9lana']\nAscending and Descending\u00b6\nBoth list.sort()\nand sorted()\naccept a reverse parameter with a\nboolean value. This is used to flag descending sorts. For example, to get the\nstudent data in reverse age order:\n>>> sorted(student_tuples, key=itemgetter(2), reverse=True)\n[('john', 'A', 15), ('jane', 'B', 12), ('dave', 'B', 10)]\n>>> sorted(student_objects, key=attrgetter('age'), reverse=True)\n[('john', 'A', 15), ('jane', 'B', 12), ('dave', 'B', 10)]\nSort Stability and Complex Sorts\u00b6\nSorts are guaranteed to be stable. That means that when multiple records have the same key, their original order is preserved.\n>>> data = [('red', 1), ('blue', 1), ('red', 2), ('blue', 2)]\n>>> sorted(data, key=itemgetter(0))\n[('blue', 1), ('blue', 2), ('red', 1), ('red', 2)]\nNotice how the two records for blue retain their original order so that\n('blue', 1)\nis guaranteed to precede ('blue', 2)\n.\nThis wonderful property lets you build complex sorts in a series of sorting steps. 
For example, to sort the student data by descending grade and then ascending age, do the age sort first and then sort again using grade:\n>>> s = sorted(student_objects, key=attrgetter('age')) # sort on secondary key\n>>> sorted(s, key=attrgetter('grade'), reverse=True) # now sort on primary key, descending\n[('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)]\nThis can be abstracted out into a wrapper function that can take a list and tuples of field and order to sort them on multiple passes.\n>>> def multisort(xs, specs):\n... for key, reverse in reversed(specs):\n... xs.sort(key=attrgetter(key), reverse=reverse)\n... return xs\n>>> multisort(list(student_objects), (('grade', True), ('age', False)))\n[('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)]\nThe Timsort algorithm used in Python does multiple sorts efficiently because it can take advantage of any ordering already present in a dataset.\nDecorate-Sort-Undecorate\u00b6\nThis idiom is called Decorate-Sort-Undecorate after its three steps:\nFirst, the initial list is decorated with new values that control the sort order.\nSecond, the decorated list is sorted.\nFinally, the decorations are removed, creating a list that contains only the initial values in the new order.\nFor example, to sort the student data by grade using the DSU approach:\n>>> decorated = [(student.grade, i, student) for i, student in enumerate(student_objects)]\n>>> decorated.sort()\n>>> [student for grade, i, student in decorated] # undecorate\n[('john', 'A', 15), ('jane', 'B', 12), ('dave', 'B', 10)]\nThis idiom works because tuples are compared lexicographically; the first items are compared; if they are the same then the second items are compared, and so on.\nIt is not strictly necessary in all cases to include the index i in the decorated list, but including it gives two benefits:\nThe sort is stable \u2013 if two items have the same key, their order will be preserved in the sorted list.\nThe original items do not have to be 
comparable because the ordering of the decorated tuples will be determined by at most the first two items. So for example the original list could contain complex numbers which cannot be sorted directly.\nAnother name for this idiom is Schwartzian transform, after Randal L. Schwartz, who popularized it among Perl programmers.\nNow that Python sorting provides key-functions, this technique is not often needed.\nComparison Functions\u00b6\nUnlike key functions that return an absolute value for sorting, a comparison function computes the relative ordering for two inputs.\nFor example, a balance scale\ncompares two samples giving a relative ordering: lighter, equal, or heavier.\nLikewise, a comparison function such as cmp(a, b)\nwill return a negative\nvalue for less-than, zero if the inputs are equal, or a positive value for\ngreater-than.\nIt is common to encounter comparison functions when translating algorithms from\nother languages. Also, some libraries provide comparison functions as part of\ntheir API. For example, locale.strcoll()\nis a comparison function.\nTo accommodate those situations, Python provides\nfunctools.cmp_to_key\nto wrap the comparison function\nto make it usable as a key function:\nsorted(words, key=cmp_to_key(strcoll)) # locale-aware sort order\nStrategies For Unorderable Types and Values\u00b6\nA number of type and value issues can arise when sorting. 
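To make the cmp_to_key() wrapper above concrete, here is a minimal sketch with a hand-written three-way comparison function (the data and the comparison rule are hypothetical):

```python
from functools import cmp_to_key

def compare_len_then_alpha(a, b):
    # Classic comparison-function contract: negative for a < b,
    # zero for equal inputs, positive for a > b.
    if len(a) != len(b):
        return len(a) - len(b)
    return (a > b) - (a < b)

data = ['pear', 'fig', 'apple', 'kiwi']  # hypothetical sample data
result = sorted(data, key=cmp_to_key(compare_len_then_alpha))
print(result)  # ['fig', 'kiwi', 'pear', 'apple']
```

The wrapper turns the comparison function into a key object whose rich comparisons delegate to it, so the sort machinery itself still only uses <.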
Here are some strategies that can help:\nConvert non-comparable input types to strings prior to sorting:\n>>> data = ['twelve', '11', 10]\n>>> sorted(map(str, data))\n['10', '11', 'twelve']\nThis is needed because most cross-type comparisons raise a\nTypeError\n.\nRemove special values prior to sorting:\n>>> from math import isnan\n>>> from itertools import filterfalse\n>>> data = [3.3, float('nan'), 1.1, 2.2]\n>>> sorted(filterfalse(isnan, data))\n[1.1, 2.2, 3.3]\nThis is needed because the IEEE-754 standard specifies that, \u201cEvery NaN shall compare unordered with everything, including itself.\u201d\nLikewise, None\ncan be stripped from datasets as well:\n>>> data = [3.3, None, 1.1, 2.2]\n>>> sorted(x for x in data if x is not None)\n[1.1, 2.2, 3.3]\nThis is needed because None\nis not comparable to other types.\nConvert mapping types into sorted item lists before sorting:\n>>> data = [{'a': 1}, {'b': 2}]\n>>> sorted(data, key=lambda d: sorted(d.items()))\n[{'a': 1}, {'b': 2}]\nThis is needed because dict-to-dict comparisons raise a\nTypeError\n.\nConvert set types into sorted lists before sorting:\n>>> data = [{'a', 'b', 'c'}, {'b', 'c', 'd'}]\n>>> sorted(map(sorted, data))\n[['a', 'b', 'c'], ['b', 'c', 'd']]\nThis is needed because the elements contained in set types do not have a\ndeterministic order. For example, list({'a', 'b'})\nmay produce\neither ['a', 'b']\nor ['b', 'a']\n.\nOdds and Ends\u00b6\nFor locale aware sorting, use\nlocale.strxfrm()\nfor a key function orlocale.strcoll()\nfor a comparison function. This is necessary because \u201calphabetical\u201d sort orderings can vary across cultures even if the underlying alphabet is the same.The reverse parameter still maintains sort stability (so that records with equal keys retain the original order). 
Interestingly, that effect can be simulated without the parameter by using the builtin\nreversed()\nfunction twice:\n>>> data = [('red', 1), ('blue', 1), ('red', 2), ('blue', 2)]\n>>> standard_way = sorted(data, key=itemgetter(0), reverse=True)\n>>> double_reversed = list(reversed(sorted(reversed(data), key=itemgetter(0))))\n>>> assert standard_way == double_reversed\n>>> standard_way\n[('red', 1), ('red', 2), ('blue', 1), ('blue', 2)]\nThe sort routines use\n<\nwhen making comparisons between two objects. So, it is easy to add a standard sort order to a class by defining an\n__lt__()\nmethod:\n>>> Student.__lt__ = lambda self, other: self.age < other.age\n>>> sorted(student_objects)\n[('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)]\nHowever, note that\n<\ncan fall back to using\n__gt__()\nif\n__lt__()\nis not implemented (see\nobject.__lt__()\nfor details on the mechanics). To avoid surprises, PEP 8 recommends that all six comparison methods be implemented. The\ntotal_ordering()\ndecorator is provided to make that task easier.\nKey functions need not depend directly on the objects being sorted. A key function can also access external resources. For instance, if the student grades are stored in a dictionary, they can be used to sort a separate list of student names:\n>>> students = ['dave', 'john', 'jane']\n>>> newgrades = {'john': 'F', 'jane': 'A', 'dave': 'C'}\n>>> sorted(students, key=newgrades.__getitem__)\n['jane', 'dave', 'john']\nPartial Sorts\u00b6\nSome applications require only some of the data to be ordered. The standard library provides several tools that do less work than a full sort:\nmin()\nand\nmax()\nreturn the smallest and largest values, respectively. These functions make a single pass over the input data and require almost no auxiliary memory.\nheapq.nsmallest()\nand\nheapq.nlargest()\nreturn the n smallest and largest values, respectively. These functions make a single pass over the data keeping only n elements in memory at a time. 
For values of n that are small relative to the number of inputs, these functions make far fewer comparisons than a full sort.heapq.heappush()\nandheapq.heappop()\ncreate and maintain a partially sorted arrangement of data that keeps the smallest element at position0\n. These functions are suitable for implementing priority queues which are commonly used for task scheduling.", "code_snippets": [" ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 3127} +{"url": "https://docs.python.org/3/c-api/number.html", "title": "Number Protocol", "content": "Number Protocol\u00b6\n-\nint PyNumber_Check(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturns\n1\nif the object o provides numeric protocols, and false otherwise. This function always succeeds.Changed in version 3.8: Returns\n1\nif o is an index integer.\n-\nPyObject *PyNumber_Add(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of adding o1 and o2, or\nNULL\non failure. This is the equivalent of the Python expressiono1 + o2\n.\n-\nPyObject *PyNumber_Subtract(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of subtracting o2 from o1, or\nNULL\non failure. This is the equivalent of the Python expressiono1 - o2\n.\n-\nPyObject *PyNumber_Multiply(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of multiplying o1 and o2, or\nNULL\non failure. This is the equivalent of the Python expressiono1 * o2\n.\n-\nPyObject *PyNumber_MatrixMultiply(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nReturns the result of matrix multiplication on o1 and o2, or\nNULL\non failure. This is the equivalent of the Python expressiono1 @ o2\n.Added in version 3.5.\n-\nPyObject *PyNumber_FloorDivide(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturn the floor of o1 divided by o2, or\nNULL\non failure. This is the equivalent of the Python expressiono1 // o2\n.\n-\nPyObject *PyNumber_TrueDivide(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a reasonable approximation for the mathematical value of o1 divided by o2, or\nNULL\non failure. The return value is \u201capproximate\u201d because binary floating-point numbers are approximate; it is not possible to represent all real numbers in base two. This function can return a floating-point value when passed two integers. This is the equivalent of the Python expressiono1 / o2\n.\n-\nPyObject *PyNumber_Remainder(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the remainder of dividing o1 by o2, or\nNULL\non failure. This is the equivalent of the Python expressiono1 % o2\n.\n-\nPyObject *PyNumber_Divmod(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nSee the built-in function\ndivmod()\n. ReturnsNULL\non failure. This is the equivalent of the Python expressiondivmod(o1, o2)\n.\n-\nPyObject *PyNumber_Power(PyObject *o1, PyObject *o2, PyObject *o3)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nSee the built-in function\npow()\n. ReturnsNULL\non failure. This is the equivalent of the Python expressionpow(o1, o2, o3)\n, where o3 is optional. If o3 is to be ignored, passPy_None\nin its place (passingNULL\nfor o3 would cause an illegal memory access).\n-\nPyObject *PyNumber_Negative(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the negation of o on success, or\nNULL\non failure. This is the equivalent of the Python expression-o\n.\n-\nPyObject *PyNumber_Positive(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns o on success, or\nNULL\non failure. 
This is the equivalent of the Python expression+o\n.\n-\nPyObject *PyNumber_Absolute(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the absolute value of o, or\nNULL\non failure. This is the equivalent of the Python expressionabs(o)\n.\n-\nPyObject *PyNumber_Invert(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the bitwise negation of o on success, or\nNULL\non failure. This is the equivalent of the Python expression~o\n.\n-\nPyObject *PyNumber_Lshift(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of left shifting o1 by o2 on success, or\nNULL\non failure. This is the equivalent of the Python expressiono1 << o2\n.\n-\nPyObject *PyNumber_Rshift(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of right shifting o1 by o2 on success, or\nNULL\non failure. This is the equivalent of the Python expressiono1 >> o2\n.\n-\nPyObject *PyNumber_And(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the \u201cbitwise and\u201d of o1 and o2 on success and\nNULL\non failure. This is the equivalent of the Python expressiono1 & o2\n.\n-\nPyObject *PyNumber_Xor(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the \u201cbitwise exclusive or\u201d of o1 by o2 on success, or\nNULL\non failure. This is the equivalent of the Python expressiono1 ^ o2\n.\n-\nPyObject *PyNumber_Or(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the \u201cbitwise or\u201d of o1 and o2 on success, or\nNULL\non failure. This is the equivalent of the Python expressiono1 | o2\n.\n-\nPyObject *PyNumber_InPlaceAdd(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of adding o1 and o2, or\nNULL\non failure. 
The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 += o2\n.\n-\nPyObject *PyNumber_InPlaceSubtract(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of subtracting o2 from o1, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 -= o2\n.\n-\nPyObject *PyNumber_InPlaceMultiply(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of multiplying o1 and o2, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 *= o2\n.\n-\nPyObject *PyNumber_InPlaceMatrixMultiply(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nReturns the result of matrix multiplication on o1 and o2, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 @= o2\n.Added in version 3.5.\n-\nPyObject *PyNumber_InPlaceFloorDivide(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the mathematical floor of dividing o1 by o2, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 //= o2\n.\n-\nPyObject *PyNumber_InPlaceTrueDivide(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a reasonable approximation for the mathematical value of o1 divided by o2, or\nNULL\non failure. The return value is \u201capproximate\u201d because binary floating-point numbers are approximate; it is not possible to represent all real numbers in base two. This function can return a floating-point value when passed two integers. The operation is done in-place when o1 supports it. 
This is the equivalent of the Python statemento1 /= o2\n.\n-\nPyObject *PyNumber_InPlaceRemainder(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the remainder of dividing o1 by o2, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 %= o2\n.\n-\nPyObject *PyNumber_InPlacePower(PyObject *o1, PyObject *o2, PyObject *o3)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nSee the built-in function\npow()\n. ReturnsNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 **= o2\nwhen o3 isPy_None\n, or an in-place variant ofpow(o1, o2, o3)\notherwise. If o3 is to be ignored, passPy_None\nin its place (passingNULL\nfor o3 would cause an illegal memory access).\n-\nPyObject *PyNumber_InPlaceLshift(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of left shifting o1 by o2 on success, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 <<= o2\n.\n-\nPyObject *PyNumber_InPlaceRshift(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the result of right shifting o1 by o2 on success, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 >>= o2\n.\n-\nPyObject *PyNumber_InPlaceAnd(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the \u201cbitwise and\u201d of o1 and o2 on success and\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 &= o2\n.\n-\nPyObject *PyNumber_InPlaceXor(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturns the \u201cbitwise exclusive or\u201d of o1 by o2 on success, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 ^= o2\n.\n-\nPyObject *PyNumber_InPlaceOr(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the \u201cbitwise or\u201d of o1 and o2 on success, or\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python statemento1 |= o2\n.\n-\nPyObject *PyNumber_Long(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the o converted to an integer object on success, or\nNULL\non failure. This is the equivalent of the Python expressionint(o)\n.\n-\nPyObject *PyNumber_Float(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the o converted to a float object on success, or\nNULL\non failure. This is the equivalent of the Python expressionfloat(o)\n.\n-\nPyObject *PyNumber_Index(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the o converted to a Python int on success or\nNULL\nwith aTypeError\nexception raised on failure.Changed in version 3.10: The result always has exact type\nint\n. Previously, the result could have been an instance of a subclass ofint\n.\n-\nPyObject *PyNumber_ToBase(PyObject *n, int base)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturns the integer n converted to base base as a string. The base argument must be one of 2, 8, 10, or 16. For base 2, 8, or 16, the returned string is prefixed with a base marker of\n'0b'\n,'0o'\n, or'0x'\n, respectively. If n is not a Python int, it is converted withPyNumber_Index()\nfirst.\n-\nPy_ssize_t PyNumber_AsSsize_t(PyObject *o, PyObject *exc)\u00b6\n- Part of the Stable ABI.\nReturns o converted to a\nPy_ssize_t\nvalue if o can be interpreted as an integer. 
If the call fails, an exception is raised and-1\nis returned.If o can be converted to a Python int but the attempt to convert to a\nPy_ssize_t\nvalue would raise anOverflowError\n, then the exc argument is the type of exception that will be raised (usuallyIndexError\norOverflowError\n). If exc isNULL\n, then the exception is cleared and the value is clipped toPY_SSIZE_T_MIN\nfor a negative integer orPY_SSIZE_T_MAX\nfor a positive integer.\n-\nint PyIndex_Check(PyObject *o)\u00b6\n- Part of the Stable ABI since version 3.8.\nReturns\n1\nif o is an index integer (has thenb_index\nslot of thetp_as_number\nstructure filled in), and0\notherwise. This function always succeeds.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2811} +{"url": "https://docs.python.org/3/c-api/sequence.html", "title": "Sequence Protocol", "content": "Sequence Protocol\u00b6\n-\nint PySequence_Check(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif the object provides the sequence protocol, and0\notherwise. Note that it returns1\nfor Python classes with a__getitem__()\nmethod, unless they aredict\nsubclasses, since in general it is impossible to determine what type of keys the class supports. This function always succeeds.\n-\nPy_ssize_t PySequence_Size(PyObject *o)\u00b6\n-\nPy_ssize_t PySequence_Length(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturns the number of objects in sequence o on success, and\n-1\non failure. This is equivalent to the Python expressionlen(o)\n.\n-\nPyObject *PySequence_Concat(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the concatenation of o1 and o2 on success, and\nNULL\non failure. This is the equivalent of the Python expressiono1 + o2\n.\n-\nPyObject *PySequence_Repeat(PyObject *o, Py_ssize_t count)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the result of repeating sequence object o count times, or\nNULL\non failure. 
This is the equivalent of the Python expressiono * count\n.\n-\nPyObject *PySequence_InPlaceConcat(PyObject *o1, PyObject *o2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the concatenation of o1 and o2 on success, and\nNULL\non failure. The operation is done in-place when o1 supports it. This is the equivalent of the Python expressiono1 += o2\n.\n-\nPyObject *PySequence_InPlaceRepeat(PyObject *o, Py_ssize_t count)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the result of repeating sequence object o count times, or\nNULL\non failure. The operation is done in-place when o supports it. This is the equivalent of the Python expressiono *= count\n.\n-\nPyObject *PySequence_GetItem(PyObject *o, Py_ssize_t i)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the ith element of o, or\nNULL\non failure. This is the equivalent of the Python expressiono[i]\n.\n-\nPyObject *PySequence_GetSlice(PyObject *o, Py_ssize_t i1, Py_ssize_t i2)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the slice of sequence object o between i1 and i2, or\nNULL\non failure. This is the equivalent of the Python expressiono[i1:i2]\n.\n-\nint PySequence_SetItem(PyObject *o, Py_ssize_t i, PyObject *v)\u00b6\n- Part of the Stable ABI.\nAssign object v to the ith element of o. Raise an exception and return\n-1\non failure; return0\non success. This is the equivalent of the Python statemento[i] = v\n. This function does not steal a reference to v.If v is\nNULL\n, the element is deleted, but this feature is deprecated in favour of usingPySequence_DelItem()\n.\n-\nint PySequence_DelItem(PyObject *o, Py_ssize_t i)\u00b6\n- Part of the Stable ABI.\nDelete the ith element of object o. Returns\n-1\non failure. 
This is the equivalent of the Python statementdel o[i]\n.\n-\nint PySequence_SetSlice(PyObject *o, Py_ssize_t i1, Py_ssize_t i2, PyObject *v)\u00b6\n- Part of the Stable ABI.\nAssign the sequence object v to the slice in sequence object o from i1 to i2. This is the equivalent of the Python statement\no[i1:i2] = v\n.\n-\nint PySequence_DelSlice(PyObject *o, Py_ssize_t i1, Py_ssize_t i2)\u00b6\n- Part of the Stable ABI.\nDelete the slice in sequence object o from i1 to i2. Returns\n-1\non failure. This is the equivalent of the Python statementdel o[i1:i2]\n.\n-\nPy_ssize_t PySequence_Count(PyObject *o, PyObject *value)\u00b6\n- Part of the Stable ABI.\nReturn the number of occurrences of value in o, that is, return the number of keys for which\no[key] == value\n. On failure, return-1\n. This is equivalent to the Python expressiono.count(value)\n.\n-\nint PySequence_Contains(PyObject *o, PyObject *value)\u00b6\n- Part of the Stable ABI.\nDetermine if o contains value. If an item in o is equal to value, return\n1\n, otherwise return0\n. On error, return-1\n. This is equivalent to the Python expressionvalue in o\n.\n-\nint PySequence_In(PyObject *o, PyObject *value)\u00b6\n- Part of the Stable ABI.\nAlias for\nPySequence_Contains()\n.Deprecated since version 3.14: The function is soft deprecated and should no longer be used to write new code.\n-\nPy_ssize_t PySequence_Index(PyObject *o, PyObject *value)\u00b6\n- Part of the Stable ABI.\nReturn the first index i for which\no[i] == value\n. On error, return-1\n. This is equivalent to the Python expressiono.index(value)\n.\n-\nPyObject *PySequence_List(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a list object with the same contents as the sequence or iterable o, or\nNULL\non failure. The returned list is guaranteed to be new. This is equivalent to the Python expressionlist(o)\n.\n-\nPyObject *PySequence_Tuple(PyObject *o)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturn a tuple object with the same contents as the sequence or iterable o, or\nNULL\non failure. If o is a tuple, a new reference will be returned, otherwise a tuple will be constructed with the appropriate contents. This is equivalent to the Python expressiontuple(o)\n.\n-\nPyObject *PySequence_Fast(PyObject *o, const char *m)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the sequence or iterable o as an object usable by the other\nPySequence_Fast*\nfamily of functions. If the object is not a sequence or iterable, raisesTypeError\nwith m as the message text. ReturnsNULL\non failure.The\nPySequence_Fast*\nfunctions are thus named because they assume o is aPyTupleObject\nor aPyListObject\nand access the data fields of o directly.As a CPython implementation detail, if o is already a sequence or list, it will be returned.\n-\nPy_ssize_t PySequence_Fast_GET_SIZE(PyObject *o)\u00b6\nReturns the length of o, assuming that o was returned by\nPySequence_Fast()\nand that o is notNULL\n. The size can also be retrieved by callingPySequence_Size()\non o, butPySequence_Fast_GET_SIZE()\nis faster because it can assume o is a list or tuple.\n-\nPyObject *PySequence_Fast_GET_ITEM(PyObject *o, Py_ssize_t i)\u00b6\n- Return value: Borrowed reference.\nReturn the ith element of o, assuming that o was returned by\nPySequence_Fast()\n, o is notNULL\n, and that i is within bounds.\n-\nPyObject **PySequence_Fast_ITEMS(PyObject *o)\u00b6\nReturn the underlying array of PyObject pointers. Assumes that o was returned by\nPySequence_Fast()\nand o is notNULL\n.Note, if a list gets resized, the reallocation may relocate the items array. So, only use the underlying array pointer in contexts where the sequence cannot change.\n-\nPyObject *PySequence_ITEM(PyObject *o, Py_ssize_t i)\u00b6\n- Return value: New reference.\nReturn the ith element of o or\nNULL\non failure. 
Faster form ofPySequence_GetItem()\nbut without checking thatPySequence_Check()\non o is true and without adjustment for negative indices.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1657} +{"url": "https://docs.python.org/3/c-api/bytes.html", "title": "Bytes Objects", "content": "Bytes Objects\u00b6\nThese functions raise TypeError\nwhen expecting a bytes parameter and\ncalled with a non-bytes parameter.\n-\nPyTypeObject PyBytes_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python bytes type; it is the same object asbytes\nin the Python layer.\n-\nint PyBytes_Check(PyObject *o)\u00b6\nReturn true if the object o is a bytes object or an instance of a subtype of the bytes type. This function always succeeds.\n-\nint PyBytes_CheckExact(PyObject *o)\u00b6\nReturn true if the object o is a bytes object, but not an instance of a subtype of the bytes type. This function always succeeds.\n-\nPyObject *PyBytes_FromString(const char *v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new bytes object with a copy of the string v as value on success, and\nNULL\non failure. The parameter v must not beNULL\n; it will not be checked.\n-\nPyObject *PyBytes_FromStringAndSize(const char *v, Py_ssize_t len)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new bytes object with a copy of the string v as value and length len on success, and\nNULL\non failure. If v isNULL\n, the contents of the bytes object are uninitialized.\n-\nPyObject *PyBytes_FromFormat(const char *format, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nTake a C\nprintf()\n-style format string and a variable number of arguments, calculate the size of the resulting Python bytes object and return a bytes object with the values formatted into it. The variable arguments must be C types and must correspond exactly to the format characters in the format string. 
The following format characters are allowed:Format Characters\nType\nComment\n%%\nn/a\nThe literal % character.\n%c\nint\nA single byte, represented as a C int.\n%d\nint\nEquivalent to\nprintf(\"%d\")\n. [1]%u\nunsigned int\nEquivalent to\nprintf(\"%u\")\n. [1]%ld\nlong\nEquivalent to\nprintf(\"%ld\")\n. [1]%lu\nunsigned long\nEquivalent to\nprintf(\"%lu\")\n. [1]%zd\nPy_ssize_t\nEquivalent to\nprintf(\"%zd\")\n. [1]%zu\nsize_t\nEquivalent to\nprintf(\"%zu\")\n. [1]%i\nint\nEquivalent to\nprintf(\"%i\")\n. [1]%x\nint\nEquivalent to\nprintf(\"%x\")\n. [1]%s\nconst char*\nA null-terminated C character array.\n%p\nconst void*\nThe hex representation of a C pointer. Mostly equivalent to\nprintf(\"%p\")\nexcept that it is guaranteed to start with the literal0x\nregardless of what the platform\u2019sprintf\nyields.An unrecognized format character causes all the rest of the format string to be copied as-is to the result object, and any extra arguments discarded.\n-\nPyObject *PyBytes_FromFormatV(const char *format, va_list vargs)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nIdentical to\nPyBytes_FromFormat()\nexcept that it takes exactly two arguments.\n-\nPyObject *PyBytes_FromObject(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the bytes representation of object o that implements the buffer protocol.\n-\nPy_ssize_t PyBytes_Size(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturn the length of the bytes in bytes object o.\n-\nPy_ssize_t PyBytes_GET_SIZE(PyObject *o)\u00b6\nSimilar to\nPyBytes_Size()\n, but without error checking.\n-\nchar *PyBytes_AsString(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturn a pointer to the contents of o. The pointer refers to the internal buffer of o, which consists of\nlen(o) + 1\nbytes. The last byte in the buffer is always null, regardless of whether there are any other null bytes. 
The data must not be modified in any way, unless the object was just created usingPyBytes_FromStringAndSize(NULL, size)\n. It must not be deallocated. If o is not a bytes object at all,PyBytes_AsString()\nreturnsNULL\nand raisesTypeError\n.\n-\nchar *PyBytes_AS_STRING(PyObject *string)\u00b6\nSimilar to\nPyBytes_AsString()\n, but without error checking.\n-\nint PyBytes_AsStringAndSize(PyObject *obj, char **buffer, Py_ssize_t *length)\u00b6\n- Part of the Stable ABI.\nReturn the null-terminated contents of the object obj through the output variables buffer and length. Returns\n0\non success.If length is\nNULL\n, the bytes object may not contain embedded null bytes; if it does, the function returns-1\nand aValueError\nis raised.The buffer refers to an internal buffer of obj, which includes an additional null byte at the end (not counted in length). The data must not be modified in any way, unless the object was just created using\nPyBytes_FromStringAndSize(NULL, size)\n. It must not be deallocated. If obj is not a bytes object at all,PyBytes_AsStringAndSize()\nreturns-1\nand raisesTypeError\n.Changed in version 3.5: Previously,\nTypeError\nwas raised when embedded null bytes were encountered in the bytes object.\n-\nvoid PyBytes_Concat(PyObject **bytes, PyObject *newpart)\u00b6\n- Part of the Stable ABI.\nCreate a new bytes object in *bytes containing the contents of newpart appended to bytes; the caller will own the new reference. The reference to the old value of bytes will be stolen. If the new object cannot be created, the old reference to bytes will still be discarded and the value of *bytes will be set to\nNULL\n; the appropriate exception will be set.\n-\nvoid PyBytes_ConcatAndDel(PyObject **bytes, PyObject *newpart)\u00b6\n- Part of the Stable ABI.\nCreate a new bytes object in *bytes containing the contents of newpart appended to bytes. This version releases the strong reference to newpart (i.e. 
decrements its reference count).\n-\nPyObject *PyBytes_Join(PyObject *sep, PyObject *iterable)\u00b6\nSimilar to\nsep.join(iterable)\nin Python.sep must be Python\nbytes\nobject. (Note thatPyUnicode_Join()\nacceptsNULL\nseparator and treats it as a space, whereasPyBytes_Join()\ndoesn\u2019t acceptNULL\nseparator.)iterable must be an iterable object yielding objects that implement the buffer protocol.\nOn success, return a new\nbytes\nobject. On error, set an exception and returnNULL\n.Added in version 3.14.\n-\nint _PyBytes_Resize(PyObject **bytes, Py_ssize_t newsize)\u00b6\nResize a bytes object. newsize will be the new length of the bytes object. You can think of it as creating a new bytes object and destroying the old one, only more efficiently. Pass the address of an existing bytes object as an lvalue (it may be written into), and the new size desired. On success, *bytes holds the resized bytes object and\n0\nis returned; the address in *bytes may differ from its input value. If the reallocation fails, the original bytes object at *bytes is deallocated, *bytes is set toNULL\n,MemoryError\nis set, and-1\nis returned.\n-\nPyObject *PyBytes_Repr(PyObject *bytes, int smartquotes)\u00b6\n- Part of the Stable ABI.\nGet the string representation of bytes. This function is currently used to implement\nbytes.__repr__()\nin Python.This function does not do type checking; it is undefined behavior to pass bytes as a non-bytes object or\nNULL\n.If smartquotes is true, the representation will use a double-quoted string instead of single-quoted string when single-quotes are present in bytes. For example, the byte string\n'Python'\nwould be represented asb\"'Python'\"\nwhen smartquotes is true, orb'\\'Python\\''\nwhen it is false.On success, this function returns a strong reference to a\nstr\nobject containing the representation. 
On failure, this returnsNULL\nwith an exception set.\n-\nPyObject *PyBytes_DecodeEscape(const char *s, Py_ssize_t len, const char *errors, Py_ssize_t unicode, const char *recode_encoding)\u00b6\n- Part of the Stable ABI.\nUnescape a backslash-escaped string s. s must not be\nNULL\n. len must be the size of s.errors must be one of\n\"strict\"\n,\"replace\"\n, or\"ignore\"\n. If errors isNULL\n, then\"strict\"\nis used by default.On success, this function returns a strong reference to a Python\nbytes\nobject containing the unescaped string. On failure, this function returnsNULL\nwith an exception set.Changed in version 3.9: unicode and recode_encoding are now unused.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1919} +{"url": "https://docs.python.org/3/howto/ipaddress.html", "title": "An introduction to the ipaddress module", "content": "An introduction to the ipaddress module\u00b6\n- author:\nPeter Moody\n- author:\nNick Coghlan\nCreating Address/Network/Interface objects\u00b6\nSince ipaddress\nis a module for inspecting and manipulating IP addresses,\nthe first thing you\u2019ll want to do is create some objects. You can use\nipaddress\nto create objects from strings and integers.\nA Note on IP Versions\u00b6\nFor readers that aren\u2019t particularly familiar with IP addressing, it\u2019s important to know that the Internet Protocol (IP) is currently in the process of moving from version 4 of the protocol to version 6. 
This transition is occurring largely because version 4 of the protocol doesn\u2019t provide enough addresses to handle the needs of the whole world, especially given the increasing number of devices with direct connections to the internet.\nExplaining the details of the differences between the two versions of the protocol is beyond the scope of this introduction, but readers need to at least be aware that these two versions exist, and it will sometimes be necessary to force the use of one version or the other.\nIP Host Addresses\u00b6\nAddresses, often referred to as \u201chost addresses\u201d are the most basic unit\nwhen working with IP addressing. The simplest way to create addresses is\nto use the ipaddress.ip_address()\nfactory function, which automatically\ndetermines whether to create an IPv4 or IPv6 address based on the passed in\nvalue:\n>>> ipaddress.ip_address('192.0.2.1')\nIPv4Address('192.0.2.1')\n>>> ipaddress.ip_address('2001:DB8::1')\nIPv6Address('2001:db8::1')\nAddresses can also be created directly from integers. Values that will fit within 32 bits are assumed to be IPv4 addresses:\n>>> ipaddress.ip_address(3221225985)\nIPv4Address('192.0.2.1')\n>>> ipaddress.ip_address(42540766411282592856903984951653826561)\nIPv6Address('2001:db8::1')\nTo force the use of IPv4 or IPv6 addresses, the relevant classes can be invoked directly. This is particularly useful to force creation of IPv6 addresses for small integers:\n>>> ipaddress.ip_address(1)\nIPv4Address('0.0.0.1')\n>>> ipaddress.IPv4Address(1)\nIPv4Address('0.0.0.1')\n>>> ipaddress.IPv6Address(1)\nIPv6Address('::1')\nDefining Networks\u00b6\nHost addresses are usually grouped together into IP networks, so\nipaddress\nprovides a way to create, inspect and manipulate network\ndefinitions. IP network objects are constructed from strings that define the\nrange of host addresses that are part of that network. 
The simplest form\nfor that information is a \u201cnetwork address/network prefix\u201d pair, where the\nprefix defines the number of leading bits that are compared to determine\nwhether or not an address is part of the network and the network address\ndefines the expected value of those bits.\nAs for addresses, a factory function is provided that determines the correct IP version automatically:\n>>> ipaddress.ip_network('192.0.2.0/24')\nIPv4Network('192.0.2.0/24')\n>>> ipaddress.ip_network('2001:db8::0/96')\nIPv6Network('2001:db8::/96')\nNetwork objects cannot have any host bits set. The practical effect of this\nis that 192.0.2.1/24\ndoes not describe a network. Such definitions are\nreferred to as interface objects since the ip-on-a-network notation is\ncommonly used to describe network interfaces of a computer on a given network\nand are described further in the next section.\nBy default, attempting to create a network object with host bits set will\nresult in ValueError\nbeing raised. To request that the\nadditional bits instead be coerced to zero, the flag strict=False\ncan\nbe passed to the constructor:\n>>> ipaddress.ip_network('192.0.2.1/24')\nTraceback (most recent call last):\n...\nValueError: 192.0.2.1/24 has host bits set\n>>> ipaddress.ip_network('192.0.2.1/24', strict=False)\nIPv4Network('192.0.2.0/24')\nWhile the string form offers significantly more flexibility, networks can also be defined with integers, just like host addresses. 
In this case, the network is considered to contain only the single address identified by the integer, so the network prefix includes the entire network address:\n>>> ipaddress.ip_network(3221225984)\nIPv4Network('192.0.2.0/32')\n>>> ipaddress.ip_network(42540766411282592856903984951653826560)\nIPv6Network('2001:db8::/128')\nAs with addresses, creation of a particular kind of network can be forced by calling the class constructor directly instead of using the factory function.\nHost Interfaces\u00b6\nAs mentioned just above, if you need to describe an address on a particular\nnetwork, neither the address nor the network classes are sufficient.\nNotation like 192.0.2.1/24\nis commonly used by network engineers and the\npeople who write tools for firewalls and routers as shorthand for \u201cthe host\n192.0.2.1\non the network 192.0.2.0/24\n\u201d, Accordingly, ipaddress\nprovides a set of hybrid classes that associate an address with a particular\nnetwork. The interface for creation is identical to that for defining network\nobjects, except that the address portion isn\u2019t constrained to being a network\naddress.\n>>> ipaddress.ip_interface('192.0.2.1/24')\nIPv4Interface('192.0.2.1/24')\n>>> ipaddress.ip_interface('2001:db8::1/96')\nIPv6Interface('2001:db8::1/96')\nInteger inputs are accepted (as with networks), and use of a particular IP version can be forced by calling the relevant constructor directly.\nInspecting Address/Network/Interface Objects\u00b6\nYou\u2019ve gone to the trouble of creating an IPv(4|6)(Address|Network|Interface)\nobject, so you probably want to get information about it. 
ipaddress\ntries to make doing this easy and intuitive.\nExtracting the IP version:\n>>> addr4 = ipaddress.ip_address('192.0.2.1')\n>>> addr6 = ipaddress.ip_address('2001:db8::1')\n>>> addr6.version\n6\n>>> addr4.version\n4\nObtaining the network from an interface:\n>>> host4 = ipaddress.ip_interface('192.0.2.1/24')\n>>> host4.network\nIPv4Network('192.0.2.0/24')\n>>> host6 = ipaddress.ip_interface('2001:db8::1/96')\n>>> host6.network\nIPv6Network('2001:db8::/96')\nFinding out how many individual addresses are in a network:\n>>> net4 = ipaddress.ip_network('192.0.2.0/24')\n>>> net4.num_addresses\n256\n>>> net6 = ipaddress.ip_network('2001:db8::0/96')\n>>> net6.num_addresses\n4294967296\nIterating through the \u201cusable\u201d addresses on a network:\n>>> net4 = ipaddress.ip_network('192.0.2.0/24')\n>>> for x in net4.hosts():\n... print(x)\n192.0.2.1\n192.0.2.2\n192.0.2.3\n192.0.2.4\n...\n192.0.2.252\n192.0.2.253\n192.0.2.254\nObtaining the netmask (i.e. set bits corresponding to the network prefix) or the hostmask (any bits that are not part of the netmask):\n>>> net4 = ipaddress.ip_network('192.0.2.0/24')\n>>> net4.netmask\nIPv4Address('255.255.255.0')\n>>> net4.hostmask\nIPv4Address('0.0.0.255')\n>>> net6 = ipaddress.ip_network('2001:db8::0/96')\n>>> net6.netmask\nIPv6Address('ffff:ffff:ffff:ffff:ffff:ffff::')\n>>> net6.hostmask\nIPv6Address('::ffff:ffff')\nExploding or compressing the address:\n>>> addr6.exploded\n'2001:0db8:0000:0000:0000:0000:0000:0001'\n>>> addr6.compressed\n'2001:db8::1'\n>>> net6.exploded\n'2001:0db8:0000:0000:0000:0000:0000:0000/96'\n>>> net6.compressed\n'2001:db8::/96'\nWhile IPv4 doesn\u2019t support explosion or compression, the associated objects still provide the relevant properties so that version neutral code can easily ensure the most concise or most verbose form is used for IPv6 addresses while still correctly handling IPv4 addresses.\nNetworks as lists of Addresses\u00b6\nIt\u2019s sometimes useful to treat networks as lists. 
This means it is possible to index them like this:\n>>> net4[1]\nIPv4Address('192.0.2.1')\n>>> net4[-1]\nIPv4Address('192.0.2.255')\n>>> net6[1]\nIPv6Address('2001:db8::1')\n>>> net6[-1]\nIPv6Address('2001:db8::ffff:ffff')\nIt also means that network objects lend themselves to using the list membership test syntax like this:\nif address in network:\n# do something\nContainment testing is done efficiently based on the network prefix:\n>>> addr4 = ipaddress.ip_address('192.0.2.1')\n>>> addr4 in ipaddress.ip_network('192.0.2.0/24')\nTrue\n>>> addr4 in ipaddress.ip_network('192.0.3.0/24')\nFalse\nComparisons\u00b6\nipaddress\nprovides some simple, hopefully intuitive ways to compare\nobjects, where it makes sense:\n>>> ipaddress.ip_address('192.0.2.1') < ipaddress.ip_address('192.0.2.2')\nTrue\nA TypeError\nexception is raised if you try to compare objects of\ndifferent versions or different types.\nUsing IP Addresses with other modules\u00b6\nOther modules that use IP addresses (such as socket\n) usually won\u2019t\naccept objects from this module directly. Instead, they must be coerced to\nan integer or string that the other module will accept:\n>>> addr4 = ipaddress.ip_address('192.0.2.1')\n>>> str(addr4)\n'192.0.2.1'\n>>> int(addr4)\n3221225985\nGetting more detail when instance creation fails\u00b6\nWhen creating address/network/interface objects using the version-agnostic\nfactory functions, any errors will be reported as ValueError\nwith\na generic error message that simply says the passed in value was not\nrecognized as an object of that type. 
The lack of a specific error is\nbecause it\u2019s necessary to know whether the value is supposed to be IPv4\nor IPv6 in order to provide more detail on why it has been rejected.\nTo support use cases where it is useful to have access to this additional\ndetail, the individual class constructors actually raise the\nValueError\nsubclasses ipaddress.AddressValueError\nand\nipaddress.NetmaskValueError\nto indicate exactly which part of\nthe definition failed to parse correctly.\nThe error messages are significantly more detailed when using the class constructors directly. For example:\n>>> ipaddress.ip_address(\"192.168.0.256\")\nTraceback (most recent call last):\n...\nValueError: '192.168.0.256' does not appear to be an IPv4 or IPv6 address\n>>> ipaddress.IPv4Address(\"192.168.0.256\")\nTraceback (most recent call last):\n...\nipaddress.AddressValueError: Octet 256 (> 255) not permitted in '192.168.0.256'\n>>> ipaddress.ip_network(\"192.168.0.1/64\")\nTraceback (most recent call last):\n...\nValueError: '192.168.0.1/64' does not appear to be an IPv4 or IPv6 network\n>>> ipaddress.IPv4Network(\"192.168.0.1/64\")\nTraceback (most recent call last):\n...\nipaddress.NetmaskValueError: '64' is not a valid netmask\nHowever, both of the module specific exceptions have ValueError\nas their\nparent class, so if you\u2019re not concerned with the particular type of error,\nyou can still write code like the following:\ntry:\nnetwork = ipaddress.IPv4Network(address)\nexcept ValueError:\nprint('address/netmask is invalid for IPv4:', address)", "code_snippets": ["\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", ": ", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", " ", " ", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", 
"\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", "\n ", "\n", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", ": ", "\n", "\n", "\n", "\n", ": ", "\n\n", "\n", "\n", "\n", ": ", "\n", "\n", "\n", "\n", ": ", "\n", "\n ", " ", " ", "\n", " ", "\n ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 2615} +{"url": "https://docs.python.org/3/c-api/list.html", "title": "List Objects", "content": "List Objects\u00b6\n-\nPyTypeObject PyList_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python list type. This is the same object aslist\nin the Python layer.\n-\nint PyList_Check(PyObject *p)\u00b6\nReturn true if p is a list object or an instance of a subtype of the list type. This function always succeeds.\n-\nint PyList_CheckExact(PyObject *p)\u00b6\nReturn true if p is a list object, but not an instance of a subtype of the list type. This function always succeeds.\n-\nPyObject *PyList_New(Py_ssize_t len)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new list of length len on success, or\nNULL\non failure.Note\nIf len is greater than zero, the returned list object\u2019s items are set to\nNULL\n. Thus you cannot use abstract API functions such asPySequence_SetItem()\nor expose the object to Python code before setting all items to a real object withPyList_SetItem()\norPyList_SET_ITEM()\n. 
The following APIs are safe APIs before the list is fully initialized:PyList_SetItem()\nandPyList_SET_ITEM()\n.\n-\nPy_ssize_t PyList_Size(PyObject *list)\u00b6\n- Part of the Stable ABI.\nReturn the length of the list object in list; this is equivalent to\nlen(list)\non a list object.\n-\nPy_ssize_t PyList_GET_SIZE(PyObject *list)\u00b6\nSimilar to\nPyList_Size()\n, but without error checking.\n-\nPyObject *PyList_GetItemRef(PyObject *list, Py_ssize_t index)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.13.\nReturn the object at position index in the list pointed to by list. The position must be non-negative; indexing from the end of the list is not supported. If index is out of bounds (\n<0 or >=len(list)\n), returnNULL\nand set anIndexError\nexception.Added in version 3.13.\n-\nPyObject *PyList_GetItem(PyObject *list, Py_ssize_t index)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nLike\nPyList_GetItemRef()\n, but returns a borrowed reference instead of a strong reference.\n-\nPyObject *PyList_GET_ITEM(PyObject *list, Py_ssize_t i)\u00b6\n- Return value: Borrowed reference.\nSimilar to\nPyList_GetItem()\n, but without error checking.\n-\nint PyList_SetItem(PyObject *list, Py_ssize_t index, PyObject *item)\u00b6\n- Part of the Stable ABI.\nSet the item at index index in list to item. Return\n0\non success. If index is out of bounds, return-1\nand set anIndexError\nexception.Note\nThis function \u201csteals\u201d a reference to item and discards a reference to an item already in the list at the affected position.\n-\nvoid PyList_SET_ITEM(PyObject *list, Py_ssize_t i, PyObject *o)\u00b6\nMacro form of\nPyList_SetItem()\nwithout error checking. 
This is normally only used to fill in new lists where there is no previous content.Bounds checking is performed as an assertion if Python is built in debug mode or\nwith assertions\n.Note\nThis macro \u201csteals\u201d a reference to item, and, unlike\nPyList_SetItem()\n, does not discard a reference to any item that is being replaced; any reference in list at position i will be leaked.\n-\nint PyList_Insert(PyObject *list, Py_ssize_t index, PyObject *item)\u00b6\n- Part of the Stable ABI.\nInsert the item item into list list in front of index index. Return\n0\nif successful; return-1\nand set an exception if unsuccessful. Analogous tolist.insert(index, item)\n.\n-\nint PyList_Append(PyObject *list, PyObject *item)\u00b6\n- Part of the Stable ABI.\nAppend the object item at the end of list list. Return\n0\nif successful; return-1\nand set an exception if unsuccessful. Analogous tolist.append(item)\n.\n-\nPyObject *PyList_GetSlice(PyObject *list, Py_ssize_t low, Py_ssize_t high)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a list of the objects in list containing the objects between low and high. Return\nNULL\nand set an exception if unsuccessful. Analogous tolist[low:high]\n. Indexing from the end of the list is not supported.\n-\nint PyList_SetSlice(PyObject *list, Py_ssize_t low, Py_ssize_t high, PyObject *itemlist)\u00b6\n- Part of the Stable ABI.\nSet the slice of list between low and high to the contents of itemlist. Analogous to\nlist[low:high] = itemlist\n. The itemlist may beNULL\n, indicating the assignment of an empty list (slice deletion). Return0\non success,-1\non failure. Indexing from the end of the list is not supported.\n-\nint PyList_Extend(PyObject *list, PyObject *iterable)\u00b6\nExtend list with the contents of iterable. 
This is the same as\nPyList_SetSlice(list, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, iterable)\nand analogous tolist.extend(iterable)\norlist += iterable\n.Raise an exception and return\n-1\nif list is not alist\nobject. Return 0 on success.Added in version 3.13.\n-\nint PyList_Clear(PyObject *list)\u00b6\nRemove all items from list. This is the same as\nPyList_SetSlice(list, 0, PY_SSIZE_T_MAX, NULL)\nand analogous tolist.clear()\nordel list[:]\n.Raise an exception and return\n-1\nif list is not alist\nobject. Return 0 on success.Added in version 3.13.\n-\nint PyList_Sort(PyObject *list)\u00b6\n- Part of the Stable ABI.\nSort the items of list in place. Return\n0\non success,-1\non failure. This is equivalent tolist.sort()\n.\n-\nint PyList_Reverse(PyObject *list)\u00b6\n- Part of the Stable ABI.\nReverse the items of list in place. Return\n0\non success,-1\non failure. This is the equivalent oflist.reverse()\n.\n-\nPyObject *PyList_AsTuple(PyObject *list)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new tuple object containing the contents of list; equivalent to\ntuple(list)\n.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1331} +{"url": "https://docs.python.org/3/c-api/tuple.html", "title": "Tuple Objects", "content": "Tuple Objects\u00b6\n-\nPyTypeObject PyTuple_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python tuple type; it is the same object astuple\nin the Python layer.\n-\nint PyTuple_Check(PyObject *p)\u00b6\nReturn true if p is a tuple object or an instance of a subtype of the tuple type. This function always succeeds.\n-\nint PyTuple_CheckExact(PyObject *p)\u00b6\nReturn true if p is a tuple object, but not an instance of a subtype of the tuple type. This function always succeeds.\n-\nPyObject *PyTuple_New(Py_ssize_t len)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturn a new tuple object of size len, or\nNULL\nwith an exception set on failure.\n-\nPyObject *PyTuple_Pack(Py_ssize_t n, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new tuple object of size n, or\nNULL\nwith an exception set on failure. The tuple values are initialized to the subsequent n C arguments pointing to Python objects.PyTuple_Pack(2, a, b)\nis equivalent toPy_BuildValue(\"(OO)\", a, b)\n.\n-\nPy_ssize_t PyTuple_Size(PyObject *p)\u00b6\n- Part of the Stable ABI.\nTake a pointer to a tuple object, and return the size of that tuple. On error, return\n-1\nwith an exception set.\n-\nPy_ssize_t PyTuple_GET_SIZE(PyObject *p)\u00b6\nLike\nPyTuple_Size()\n, but without error checking.\n-\nPyObject *PyTuple_GetItem(PyObject *p, Py_ssize_t pos)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn the object at position pos in the tuple pointed to by p. If pos is negative or out of bounds, return\nNULL\nand set anIndexError\nexception.The returned reference is borrowed from the tuple p (that is: it is only valid as long as you hold a reference to p). To get a strong reference, use\nPy_NewRef(PyTuple_GetItem(...))\norPySequence_GetItem()\n.\n-\nPyObject *PyTuple_GET_ITEM(PyObject *p, Py_ssize_t pos)\u00b6\n- Return value: Borrowed reference.\nLike\nPyTuple_GetItem()\n, but does no checking of its arguments.\n-\nPyObject *PyTuple_GetSlice(PyObject *p, Py_ssize_t low, Py_ssize_t high)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the slice of the tuple pointed to by p between low and high, or\nNULL\nwith an exception set on failure.This is the equivalent of the Python expression\np[low:high]\n. Indexing from the end of the tuple is not supported.\n-\nint PyTuple_SetItem(PyObject *p, Py_ssize_t pos, PyObject *o)\u00b6\n- Part of the Stable ABI.\nInsert a reference to object o at position pos of the tuple pointed to by p. Return\n0\non success. 
If pos is out of bounds, return -1\nand set an IndexError\nexception. Note\nThis function \u201csteals\u201d a reference to o and discards a reference to an item already in the tuple at the affected position.\n-\nvoid PyTuple_SET_ITEM(PyObject *p, Py_ssize_t pos, PyObject *o)\u00b6\nLike\nPyTuple_SetItem()\n, but does no error checking, and should only be used to fill in brand new tuples. Bounds checking is performed as an assertion if Python is built in debug mode or\nwith assertions\n. Note\nThis function \u201csteals\u201d a reference to o, and, unlike\nPyTuple_SetItem()\n, does not discard a reference to any item that is being replaced; any reference in the tuple at position pos will be leaked. Warning\nThis macro should only be used on tuples that are newly created. Using this macro on a tuple that is already in use (or in other words, has a refcount > 1) could lead to undefined behavior.\n-\nint _PyTuple_Resize(PyObject **p, Py_ssize_t newsize)\u00b6\nCan be used to resize a tuple. newsize will be the new length of the tuple. Because tuples are supposed to be immutable, this should only be used if there is only one reference to the object. Do not use this if the tuple may already be known to some other part of the code. The tuple will always grow or shrink at the end. Think of this as destroying the old tuple and creating a new one, only more efficiently. Returns\n0\non success. Client code should never assume that the resulting value of *p\nwill be the same as before calling this function. If the object referenced by *p\nis replaced, the original *p\nis destroyed. On failure, returns -1\nand sets *p\nto NULL\n, and raises MemoryError\nor SystemError\n.\nStruct Sequence Objects\u00b6\nStruct sequence objects are the C equivalent of namedtuple()\nobjects, i.e. 
a sequence whose items can also be accessed through attributes.\nTo create a struct sequence, you first have to create a specific struct sequence\ntype.\n-\nPyTypeObject *PyStructSequence_NewType(PyStructSequence_Desc *desc)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a new struct sequence type from the data in desc, described below. Instances of the resulting type can be created with\nPyStructSequence_New()\n.Return\nNULL\nwith an exception set on failure.\n-\nvoid PyStructSequence_InitType(PyTypeObject *type, PyStructSequence_Desc *desc)\u00b6\nInitializes a struct sequence type type from desc in place.\n-\nint PyStructSequence_InitType2(PyTypeObject *type, PyStructSequence_Desc *desc)\u00b6\nLike\nPyStructSequence_InitType()\n, but returns0\non success and-1\nwith an exception set on failure.Added in version 3.4.\n-\ntype PyStructSequence_Desc\u00b6\n- Part of the Stable ABI (including all members).\nContains the meta information of a struct sequence type to create.\n-\nconst char *name\u00b6\nFully qualified name of the type; null-terminated UTF-8 encoded. The name must contain the module name.\n-\nconst char *doc\u00b6\nPointer to docstring for the type or\nNULL\nto omit.\n-\nPyStructSequence_Field *fields\u00b6\nPointer to\nNULL\n-terminated array with field names of the new type.\n-\nint n_in_sequence\u00b6\nNumber of fields visible to the Python side (if used as tuple).\n-\nconst char *name\u00b6\n-\ntype PyStructSequence_Field\u00b6\n- Part of the Stable ABI (including all members).\nDescribes a field of a struct sequence. As a struct sequence is modeled as a tuple, all fields are typed as PyObject*. 
The index in the\nfields\narray of thePyStructSequence_Desc\ndetermines which field of the struct sequence is described.-\nconst char *name\u00b6\nName for the field or\nNULL\nto end the list of named fields, set toPyStructSequence_UnnamedField\nto leave unnamed.\n-\nconst char *doc\u00b6\nField docstring or\nNULL\nto omit.\n-\nconst char *name\u00b6\n-\nconst char *const PyStructSequence_UnnamedField\u00b6\n- Part of the Stable ABI since version 3.11.\nSpecial value for a field name to leave it unnamed.\nChanged in version 3.9: The type was changed from\nchar *\n.\n-\nPyObject *PyStructSequence_New(PyTypeObject *type)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreates an instance of type, which must have been created with\nPyStructSequence_NewType()\n.Return\nNULL\nwith an exception set on failure.\n-\nPyObject *PyStructSequence_GetItem(PyObject *p, Py_ssize_t pos)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn the object at position pos in the struct sequence pointed to by p.\nBounds checking is performed as an assertion if Python is built in debug mode or\nwith assertions\n.\n-\nPyObject *PyStructSequence_GET_ITEM(PyObject *p, Py_ssize_t pos)\u00b6\n- Return value: Borrowed reference.\nAlias to\nPyStructSequence_GetItem()\n.Changed in version 3.13: Now implemented as an alias to\nPyStructSequence_GetItem()\n.\n-\nvoid PyStructSequence_SetItem(PyObject *p, Py_ssize_t pos, PyObject *o)\u00b6\n- Part of the Stable ABI.\nSets the field at index pos of the struct sequence p to value o. 
Like\nPyTuple_SET_ITEM()\n, this should only be used to fill in brand new instances.Bounds checking is performed as an assertion if Python is built in debug mode or\nwith assertions\n.Note\nThis function \u201csteals\u201d a reference to o.\n-\nvoid PyStructSequence_SET_ITEM(PyObject *p, Py_ssize_t *pos, PyObject *o)\u00b6\nAlias to\nPyStructSequence_SetItem()\n.Changed in version 3.13: Now implemented as an alias to\nPyStructSequence_SetItem()\n.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1906} +{"url": "https://docs.python.org/3/c-api/veryhigh.html", "title": "The Very High Level Layer", "content": "The Very High Level Layer\u00b6\nThe functions in this chapter will let you execute Python source code given in a file or a buffer, but they will not let you interact in a more detailed way with the interpreter.\nSeveral of these functions accept a start symbol from the grammar as a\nparameter. The available start symbols are Py_eval_input\n,\nPy_file_input\n, Py_single_input\n, and\nPy_func_type_input\n. These are described following the functions\nwhich accept them as parameters.\nNote also that several of these functions take FILE* parameters. One\nparticular issue which needs to be handled carefully is that the FILE\nstructure for different C libraries can be different and incompatible. 
Under\nWindows (at least), it is possible for dynamically linked extensions to actually\nuse different libraries, so care should be taken that FILE* parameters\nare only passed to these functions if it is certain that they were created by\nthe same library that the Python runtime is using.\n-\nint PyRun_AnyFile(FILE *fp, const char *filename)\u00b6\nThis is a simplified interface to\nPyRun_AnyFileExFlags()\nbelow, leaving closeit set to0\nand flags set toNULL\n.\n-\nint PyRun_AnyFileFlags(FILE *fp, const char *filename, PyCompilerFlags *flags)\u00b6\nThis is a simplified interface to\nPyRun_AnyFileExFlags()\nbelow, leaving the closeit argument set to0\n.\n-\nint PyRun_AnyFileEx(FILE *fp, const char *filename, int closeit)\u00b6\nThis is a simplified interface to\nPyRun_AnyFileExFlags()\nbelow, leaving the flags argument set toNULL\n.\n-\nint PyRun_AnyFileExFlags(FILE *fp, const char *filename, int closeit, PyCompilerFlags *flags)\u00b6\nIf fp refers to a file associated with an interactive device (console or terminal input or Unix pseudo-terminal), return the value of\nPyRun_InteractiveLoop()\n, otherwise return the result ofPyRun_SimpleFile()\n. filename is decoded from the filesystem encoding (sys.getfilesystemencoding()\n). If filename isNULL\n, this function uses\"???\"\nas the filename. If closeit is true, the file is closed beforePyRun_SimpleFileExFlags()\nreturns.\n-\nint PyRun_SimpleString(const char *command)\u00b6\nThis is a simplified interface to\nPyRun_SimpleStringFlags()\nbelow, leaving thePyCompilerFlags\n* argument set toNULL\n.\n-\nint PyRun_SimpleStringFlags(const char *command, PyCompilerFlags *flags)\u00b6\nExecutes the Python source code from command in the\n__main__\nmodule according to the flags argument. If__main__\ndoes not already exist, it is created. Returns0\non success or-1\nif an exception was raised. If there was an error, there is no way to get the exception information. 
For the meaning of flags, see below. Note that if an otherwise unhandled\nSystemExit\nis raised, this function will not return -1\n, but exit the process, as long as PyConfig.inspect\nis zero.\n-\nint PyRun_SimpleFile(FILE *fp, const char *filename)\u00b6\nThis is a simplified interface to\nPyRun_SimpleFileExFlags()\nbelow, leaving closeit set to 0\nand flags set to NULL\n.\n-\nint PyRun_SimpleFileEx(FILE *fp, const char *filename, int closeit)\u00b6\nThis is a simplified interface to\nPyRun_SimpleFileExFlags()\nbelow, leaving flags set to NULL\n.\n-\nint PyRun_SimpleFileExFlags(FILE *fp, const char *filename, int closeit, PyCompilerFlags *flags)\u00b6\nSimilar to\nPyRun_SimpleStringFlags()\n, but the Python source code is read from fp instead of an in-memory string. filename should be the name of the file; it is decoded from the filesystem encoding and error handler. If closeit is true, the file is closed before PyRun_SimpleFileExFlags()\nreturns. Note\nOn Windows, fp should be opened in binary mode (e.g.\nfopen(filename, \"rb\")\n). Otherwise, Python may not handle script files with LF line endings correctly.\n-\nint PyRun_InteractiveOneObject(FILE *fp, PyObject *filename, PyCompilerFlags *flags)\u00b6\nRead and execute a single statement from a file associated with an interactive device according to the flags argument. The user will be prompted using\nsys.ps1\nand sys.ps2\n. filename must be a Python str\nobject. Returns\n0\nwhen the input was executed successfully, -1\nif there was an exception, or an error code from the errcode.h\ninclude file distributed as part of Python if there was a parse error. 
(Note thaterrcode.h\nis not included byPython.h\n, so must be included specifically if needed.)\n-\nint PyRun_InteractiveOne(FILE *fp, const char *filename)\u00b6\nThis is a simplified interface to\nPyRun_InteractiveOneFlags()\nbelow, leaving flags set toNULL\n.\n-\nint PyRun_InteractiveOneFlags(FILE *fp, const char *filename, PyCompilerFlags *flags)\u00b6\nSimilar to\nPyRun_InteractiveOneObject()\n, but filename is a const char*, which is decoded from the filesystem encoding and error handler.\n-\nint PyRun_InteractiveLoop(FILE *fp, const char *filename)\u00b6\nThis is a simplified interface to\nPyRun_InteractiveLoopFlags()\nbelow, leaving flags set toNULL\n.\n-\nint PyRun_InteractiveLoopFlags(FILE *fp, const char *filename, PyCompilerFlags *flags)\u00b6\nRead and execute statements from a file associated with an interactive device until EOF is reached. The user will be prompted using\nsys.ps1\nandsys.ps2\n. filename is decoded from the filesystem encoding and error handler. Returns0\nat EOF or a negative number upon failure.\n-\nint (*PyOS_InputHook)(void)\u00b6\n- Part of the Stable ABI.\nCan be set to point to a function with the prototype\nint func(void)\n. The function will be called when Python\u2019s interpreter prompt is about to become idle and wait for user input from the terminal. The return value is ignored. Overriding this hook can be used to integrate the interpreter\u2019s prompt with other event loops, as done inModules/_tkinter.c\nin the Python source code.Changed in version 3.12: This function is only called from the main interpreter.\n-\nchar *(*PyOS_ReadlineFunctionPointer)(FILE*, FILE*, const char*)\u00b6\nCan be set to point to a function with the prototype\nchar *func(FILE *stdin, FILE *stdout, char *prompt)\n, overriding the default function used to read a single line of input at the interpreter\u2019s prompt. 
The function is expected to output the string prompt if it\u2019s notNULL\n, and then read a line of input from the provided standard input file, returning the resulting string. For example, Thereadline\nmodule sets this hook to provide line-editing and tab-completion features.The result must be a string allocated by\nPyMem_RawMalloc()\norPyMem_RawRealloc()\n, orNULL\nif an error occurred.Changed in version 3.4: The result must be allocated by\nPyMem_RawMalloc()\norPyMem_RawRealloc()\n, instead of being allocated byPyMem_Malloc()\norPyMem_Realloc()\n.Changed in version 3.12: This function is only called from the main interpreter.\n-\nPyObject *PyRun_String(const char *str, int start, PyObject *globals, PyObject *locals)\u00b6\n- Return value: New reference.\nThis is a simplified interface to\nPyRun_StringFlags()\nbelow, leaving flags set toNULL\n.\n-\nPyObject *PyRun_StringFlags(const char *str, int start, PyObject *globals, PyObject *locals, PyCompilerFlags *flags)\u00b6\n- Return value: New reference.\nExecute Python source code from str in the context specified by the objects globals and locals with the compiler flags specified by flags. globals must be a dictionary; locals can be any object that implements the mapping protocol. 
The parameter start specifies the start symbol and must be one of the available start symbols.\nReturns the result of executing the code as a Python object, or\nNULL\nif an exception was raised.\n-\nPyObject *PyRun_File(FILE *fp, const char *filename, int start, PyObject *globals, PyObject *locals)\u00b6\n- Return value: New reference.\nThis is a simplified interface to\nPyRun_FileExFlags()\nbelow, leaving closeit set to0\nand flags set toNULL\n.\n-\nPyObject *PyRun_FileEx(FILE *fp, const char *filename, int start, PyObject *globals, PyObject *locals, int closeit)\u00b6\n- Return value: New reference.\nThis is a simplified interface to\nPyRun_FileExFlags()\nbelow, leaving flags set toNULL\n.\n-\nPyObject *PyRun_FileFlags(FILE *fp, const char *filename, int start, PyObject *globals, PyObject *locals, PyCompilerFlags *flags)\u00b6\n- Return value: New reference.\nThis is a simplified interface to\nPyRun_FileExFlags()\nbelow, leaving closeit set to0\n.\n-\nPyObject *PyRun_FileExFlags(FILE *fp, const char *filename, int start, PyObject *globals, PyObject *locals, int closeit, PyCompilerFlags *flags)\u00b6\n- Return value: New reference.\nSimilar to\nPyRun_StringFlags()\n, but the Python source code is read from fp instead of an in-memory string. filename should be the name of the file, it is decoded from the filesystem encoding and error handler. If closeit is true, the file is closed beforePyRun_FileExFlags()\nreturns.\n-\nPyObject *Py_CompileString(const char *str, const char *filename, int start)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nThis is a simplified interface to\nPy_CompileStringFlags()\nbelow, leaving flags set to NULL\n.\n-\nPyObject *Py_CompileStringFlags(const char *str, const char *filename, int start, PyCompilerFlags *flags)\u00b6\n- Return value: New reference.\nThis is a simplified interface to\nPy_CompileStringExFlags()\nbelow, with optimize set to -1\n.\n-\nPyObject *Py_CompileStringObject(const char *str, PyObject *filename, int start, PyCompilerFlags *flags, int optimize)\u00b6\n- Return value: New reference.\nParse and compile the Python source code in str, returning the resulting code object. The start symbol is given by start; this can be used to constrain the code which can be compiled, and must be one of the available start symbols. The filename specified by filename is used to construct the code object and may appear in tracebacks or\nSyntaxError\nexception messages. This returns NULL\nif the code cannot be parsed or compiled. The integer optimize specifies the optimization level of the compiler; a value of\n-1\nselects the optimization level of the interpreter as given by -O\noptions. Explicit levels are 0\n(no optimization; __debug__\nis true), 1\n(asserts are removed, __debug__\nis false) or 2\n(docstrings are removed too). Added in version 3.4.\n-\nPyObject *Py_CompileStringExFlags(const char *str, const char *filename, int start, PyCompilerFlags *flags, int optimize)\u00b6\n- Return value: New reference.\nLike\nPy_CompileStringObject()\n, but filename is a byte string decoded from the filesystem encoding and error handler. Added in version 3.2.\n-\nPyObject *PyEval_EvalCode(PyObject *co, PyObject *globals, PyObject *locals)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is a simplified interface to\nPyEval_EvalCodeEx()\n, with just the code object, and global and local variables. 
The other arguments are set toNULL\n.\n-\nPyObject *PyEval_EvalCodeEx(PyObject *co, PyObject *globals, PyObject *locals, PyObject *const *args, int argcount, PyObject *const *kws, int kwcount, PyObject *const *defs, int defcount, PyObject *kwdefs, PyObject *closure)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEvaluate a precompiled code object, given a particular environment for its evaluation. This environment consists of a dictionary of global variables, a mapping object of local variables, arrays of arguments, keywords and defaults, a dictionary of default values for keyword-only arguments and a closure tuple of cells.\n-\nPyObject *PyEval_EvalFrame(PyFrameObject *f)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEvaluate an execution frame. This is a simplified interface to\nPyEval_EvalFrameEx()\n, for backward compatibility.\n-\nPyObject *PyEval_EvalFrameEx(PyFrameObject *f, int throwflag)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is the main, unvarnished function of Python interpretation. The code object associated with the execution frame f is executed, interpreting bytecode and executing calls as needed. The additional throwflag parameter can mostly be ignored - if true, then it causes an exception to immediately be thrown; this is used for the\nthrow()\nmethods of generator objects.Changed in version 3.4: This function now includes a debug assertion to help ensure that it does not silently discard an active exception.\n-\nint PyEval_MergeCompilerFlags(PyCompilerFlags *cf)\u00b6\nThis function changes the flags of the current evaluation frame, and returns true on success, false on failure.\n-\nstruct PyCompilerFlags\u00b6\nThis is the structure used to hold compiler flags. In cases where code is only being compiled, it is passed as\nint flags\n, and in cases where code is being executed, it is passed asPyCompilerFlags *flags\n. 
In this case,from __future__ import\ncan modify flags.Whenever\nPyCompilerFlags *flags\nisNULL\n,cf_flags\nis treated as equal to0\n, and any modification due tofrom __future__ import\nis discarded.-\nint cf_flags\u00b6\nCompiler flags.\n-\nint cf_feature_version\u00b6\ncf_feature_version is the minor Python version. It should be initialized to\nPY_MINOR_VERSION\n.The field is ignored by default, it is used if and only if\nPyCF_ONLY_AST\nflag is set incf_flags\n.\nChanged in version 3.8: Added cf_feature_version field.\nThe available compiler flags are accessible as macros:\n-\nPyCF_ALLOW_TOP_LEVEL_AWAIT\u00b6\n-\nPyCF_ONLY_AST\u00b6\n-\nPyCF_OPTIMIZED_AST\u00b6\n-\nPyCF_TYPE_COMMENTS\u00b6\nSee compiler flags in documentation of the\nast\nPython module, which exports these constants under the same names.\nThe \u201c\nPyCF\n\u201d flags above can be combined with \u201cCO_FUTURE\n\u201d flags such asCO_FUTURE_ANNOTATIONS\nto enable features normally selectable using future statements. See Code Object Flags for a complete list.-\nint cf_flags\u00b6\nAvailable start symbols\u00b6\n-\nint Py_eval_input\u00b6\nThe start symbol from the Python grammar for isolated expressions; for use with\nPy_CompileString()\n.\n-\nint Py_file_input\u00b6\nThe start symbol from the Python grammar for sequences of statements as read from a file or other source; for use with\nPy_CompileString()\n. This is the symbol to use when compiling arbitrarily long Python source code.\n-\nint Py_single_input\u00b6\nThe start symbol from the Python grammar for a single statement; for use with\nPy_CompileString()\n. This is the symbol used for the interactive interpreter loop.\n-\nint Py_func_type_input\u00b6\nThe start symbol from the Python grammar for a function type; for use with\nPy_CompileString()\n. 
This is used to parse \u201csignature type comments\u201d from PEP 484.This requires the\nPyCF_ONLY_AST\nflag to be set.See also\nAdded in version 3.8.\nStack Effects\u00b6\nSee also\n-\nPY_INVALID_STACK_EFFECT\u00b6\nSentinel value representing an invalid stack effect.\nThis is currently equivalent to\nINT_MAX\n.Added in version 3.8.\n-\nint PyCompile_OpcodeStackEffect(int opcode, int oparg)\u00b6\nCompute the stack effect of opcode with argument oparg.\nOn success, this function returns the stack effect; on failure, this returns\nPY_INVALID_STACK_EFFECT\n.Added in version 3.4.\n-\nint PyCompile_OpcodeStackEffectWithJump(int opcode, int oparg, int jump)\u00b6\nSimilar to\nPyCompile_OpcodeStackEffect()\n, but don\u2019t include the stack effect of jumping if jump is zero.If jump is\n0\n, this will not include the stack effect of jumping, but if jump is1\nor-1\n, this will include it.On success, this function returns the stack effect; on failure, this returns\nPY_INVALID_STACK_EFFECT\n.Added in version 3.8.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3717} +{"url": "https://docs.python.org/3/c-api/intro.html", "title": "Introduction", "content": "Introduction\u00b6\nThe Application Programmer\u2019s Interface to Python gives C and C++ programmers access to the Python interpreter at a variety of levels. The API is equally usable from C++, but for brevity it is generally referred to as the Python/C API. There are two fundamentally different reasons for using the Python/C API. The first reason is to write extension modules for specific purposes; these are C modules that extend the Python interpreter. This is probably the most common use. The second reason is to use Python as a component in a larger application; this technique is generally referred to as embedding Python in an application.\nWriting an extension module is a relatively well-understood process, where a \u201ccookbook\u201d approach works well. 
There are several tools that automate the process to some extent. While people have embedded Python in other applications since its early existence, the process of embedding Python is less straightforward than writing an extension.\nMany API functions are useful independent of whether you\u2019re embedding or extending Python; moreover, most applications that embed Python will need to provide a custom extension as well, so it\u2019s probably a good idea to become familiar with writing an extension before attempting to embed Python in a real application.\nLanguage version compatibility\u00b6\nPython\u2019s C API is compatible with C11 and C++11 versions of C and C++.\nThis is a lower limit: the C API does not require features from later C/C++ versions. You do not need to enable your compiler\u2019s \u201cc11 mode\u201d.\nCoding standards\u00b6\nIf you\u2019re writing C code for inclusion in CPython, you must follow the guidelines and standards defined in PEP 7. These guidelines apply regardless of the version of Python you are contributing to. Following these conventions is not necessary for your own third party extension modules, unless you eventually expect to contribute them to Python.\nInclude Files\u00b6\nAll function, type and macro definitions needed to use the Python/C API are included in your code by the following line:\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\nThis implies inclusion of the following standard headers: <stdio.h>, <string.h>, <errno.h>, <limits.h>, <assert.h> and <stdlib.h> (if available).\nNote\nSince Python may define some pre-processor definitions which affect the standard\nheaders on some systems, you must include Python.h\nbefore any standard\nheaders are included.\nIt is recommended to always define PY_SSIZE_T_CLEAN\nbefore including\nPython.h\n. See Parsing arguments and building values for a description of this macro.\nAll user visible names defined by Python.h (except those defined by the included\nstandard headers) have one of the prefixes Py\nor _Py\n. 
Names beginning\nwith _Py\nare for internal use by the Python implementation and should not be\nused by extension writers. Structure member names do not have a reserved prefix.\nNote\nUser code should never define names that begin with Py\nor _Py\n. This\nconfuses the reader, and jeopardizes the portability of the user code to\nfuture Python versions, which may define additional names beginning with one\nof these prefixes.\nThe header files are typically installed with Python. On Unix, these are\nlocated in the directories prefix/include/pythonversion/\nand\nexec_prefix/include/pythonversion/\n, where prefix\nand\nexec_prefix\nare defined by the corresponding parameters to Python\u2019s\nconfigure script and version is\n'%d.%d' % sys.version_info[:2]\n. On Windows, the headers are installed\nin prefix/include\n, where prefix\nis the installation\ndirectory specified to the installer.\nTo include the headers, place both directories (if different) on your compiler\u2019s\nsearch path for includes. Do not place the parent directories on the search\npath and then use #include <pythonX.Y/Python.h>\n; this will break on\nmulti-platform builds since the platform independent headers under\nprefix\ninclude the platform specific headers from\nexec_prefix\n.\nC++ users should note that although the API is defined entirely using C, the\nheader files properly declare the entry points to be extern \"C\"\n. As a result,\nthere is no need to do anything special to use the API from C++.\nUseful macros\u00b6\nSeveral useful macros are defined in the Python header files. Many are\ndefined closer to where they are useful (for example, Py_RETURN_NONE\n,\nPyMODINIT_FUNC\n).\nOthers of a more general utility are defined here. 
This is not necessarily a\ncomplete listing.\n-\nPy_CAN_START_THREADS\u00b6\nIf this macro is defined, then the current system is able to start threads.\nCurrently, all systems supported by CPython (per PEP 11), with the exception of some WebAssembly platforms, support starting threads.\nAdded in version 3.13.\n-\nPy_GETENV(s)\u00b6\nLike\ngetenv(s)\n, but returnsNULL\nif-E\nwas passed on the command line (seePyConfig.use_environment\n).\nDocstring macros\u00b6\n-\nPyDoc_STRVAR(name, str)\u00b6\nCreates a variable with name name that can be used in docstrings. If Python is built without docstrings (\n--without-doc-strings\n), the value will be an empty string.Example:\nPyDoc_STRVAR(pop_doc, \"Remove and return the rightmost element.\"); static PyMethodDef deque_methods[] = { // ... {\"pop\", (PyCFunction)deque_pop, METH_NOARGS, pop_doc}, // ... }\nExpands to\nPyDoc_VAR(name) = PyDoc_STR(str)\n.\n-\nPyDoc_STR(str)\u00b6\nExpands to the given input string, or an empty string if docstrings are disabled (\n--without-doc-strings\n).Example:\nstatic PyMethodDef pysqlite_row_methods[] = { {\"keys\", (PyCFunction)pysqlite_row_keys, METH_NOARGS, PyDoc_STR(\"Returns the keys of the row.\")}, {NULL, NULL} };\n-\nPyDoc_VAR(name)\u00b6\nDeclares a static character array variable with the given name. Expands to\nstatic const char name[]\nFor example:\nPyDoc_VAR(python_doc) = PyDoc_STR( \"A genus of constricting snakes in the Pythonidae family native \" \"to the tropics and subtropics of the Eastern Hemisphere.\");\nGeneral utility macros\u00b6\nThe following macros are for common tasks not specific to Python.\n-\nPy_UNUSED(arg)\u00b6\nUse this for unused arguments in a function definition to silence compiler warnings. 
Example:\nint func(int a, int Py_UNUSED(b)) { return a; }\n.Added in version 3.4.\n-\nPy_GCC_ATTRIBUTE(name)\u00b6\nUse a GCC attribute name, hiding it from compilers that don\u2019t support GCC attributes (such as MSVC).\nThis expands to\n__attribute__((name))\non a GCC compiler, and expands to nothing on compilers that don\u2019t support GCC attributes.\nNumeric utilities\u00b6\n-\nPy_ABS(x)\u00b6\nReturn the absolute value of\nx\n.The argument may be evaluated more than once. Consequently, do not pass an expression with side-effects directly to this macro.\nIf the result cannot be represented (for example, if\nx\nhasINT_MIN\nvalue for int type), the behavior is undefined.Corresponds roughly to\n((x) < 0 ? -(x) : (x))\nAdded in version 3.3.\n-\nPy_MAX(x, y)\u00b6\n-\nPy_MIN(x, y)\u00b6\nReturn the larger or smaller of the arguments, respectively.\nAny arguments may be evaluated more than once. Consequently, do not pass an expression with side-effects directly to this macro.\nPy_MAX\ncorresponds roughly to(((x) > (y)) ? (x) : (y))\n.Added in version 3.3.\n-\nPy_ARITHMETIC_RIGHT_SHIFT(type, integer, positions)\u00b6\nSimilar to\ninteger >> positions\n, but forces sign extension, as the C standard does not define whether a right-shift of a signed integer will perform sign extension or a zero-fill.integer should be any signed integer type. positions is the number of positions to shift to the right.\nBoth integer and positions can be evaluated more than once; consequently, avoid directly passing a function call or some other operation with side-effects to this macro. Instead, store the result as a variable and then pass it.\ntype is unused and only kept for backwards compatibility. Historically, type was used to cast integer.\nChanged in version 3.1: This macro is now valid for all signed integer types, not just those for which\nunsigned type\nis legal. 
As a result, type is no longer used.\n-\nPy_CHARMASK(c)\u00b6\nArgument must be a character or an integer in the range [-128, 127] or [0, 255]. This macro returns\nc\ncast to anunsigned char\n.\nAssertion utilities\u00b6\n-\nPy_UNREACHABLE()\u00b6\nUse this when you have a code path that cannot be reached by design. For example, in the\ndefault:\nclause in aswitch\nstatement for which all possible values are covered incase\nstatements. Use this in places where you might be tempted to put anassert(0)\norabort()\ncall.In release mode, the macro helps the compiler to optimize the code, and avoids a warning about unreachable code. For example, the macro is implemented with\n__builtin_unreachable()\non GCC in release mode.In debug mode, and on unsupported compilers, the macro expands to a call to\nPy_FatalError()\n.A use for\nPy_UNREACHABLE()\nis following a call to a function that never returns but that is not declared_Noreturn\n.If a code path is very unlikely code but can be reached under exceptional case, this macro must not be used. For example, under low memory condition or if a system call returns a value out of the expected range. In this case, it\u2019s better to report the error to the caller. If the error cannot be reported to caller,\nPy_FatalError()\ncan be used.Added in version 3.7.\n-\nPy_SAFE_DOWNCAST(value, larger, smaller)\u00b6\nCast value to type smaller from type larger, validating that no information was lost.\nOn release builds of Python, this is roughly equivalent to\n((smaller) value)\n(in C++,static_cast(value)\nwill be used instead).On debug builds (implying that\nPy_DEBUG\nis defined), this asserts that no information was lost with the cast from larger to smaller.value, larger, and smaller may all be evaluated more than once in the expression; consequently, do not pass an expression with side-effects directly to this macro.\n-\nPy_BUILD_ASSERT(cond)\u00b6\nAsserts a compile-time condition cond, as a statement. 
The build will fail if the condition is false or cannot be evaluated at compile time.\nCorresponds roughly to\nstatic_assert(cond)\non C23 and above.For example:\nPy_BUILD_ASSERT(sizeof(PyTime_t) == sizeof(int64_t));\nAdded in version 3.3.\n-\nPy_BUILD_ASSERT_EXPR(cond)\u00b6\nAsserts a compile-time condition cond, as an expression that evaluates to\n0\n. The build will fail if the condition is false or cannot be evaluated at compile time.For example:\n#define foo_to_char(foo) \\ ((char *)(foo) + Py_BUILD_ASSERT_EXPR(offsetof(struct foo, string) == 0))\nAdded in version 3.3.\nType size utilities\u00b6\n-\nPy_ARRAY_LENGTH(array)\u00b6\nCompute the length of a statically allocated C array at compile time.\nThe array argument must be a C array with a size known at compile time. Passing an array with an unknown size, such as a heap-allocated array, will result in a compilation error on some compilers, or otherwise produce incorrect results.\nThis is roughly equivalent to:\nsizeof(array) / sizeof((array)[0])\n-\nPy_MEMBER_SIZE(type, member)\u00b6\nReturn the size of a structure (type) member in bytes.\nCorresponds roughly to\nsizeof(((type *)NULL)->member)\n.Added in version 3.6.\nMacro definition utilities\u00b6\n-\nPy_FORCE_EXPANSION(X)\u00b6\nThis is equivalent to\nX\n, which is useful for token-pasting in macros, as macro expansions in X are forcefully evaluated by the preprocessor.\n-\nPy_STRINGIFY(x)\u00b6\nConvert\nx\nto a C string. For example,Py_STRINGIFY(123)\nreturns\"123\"\n.Added in version 3.4.\nDeclaration utilities\u00b6\nThe following macros can be used in declarations. They are most useful for defining the C API itself, and have limited use for extension authors. Most of them expand to compiler-specific spellings of common extensions to the C language.\n-\nPy_ALWAYS_INLINE\u00b6\nAsk the compiler to always inline a static inline function. 
The compiler can ignore it and decide to not inline the function.\nCorresponds to the\nalways_inline\nattribute in GCC and __forceinline\nin MSVC. It can be used to inline performance-critical static inline functions when building Python in debug mode with function inlining disabled. For example, MSVC disables function inlining when building in debug mode.\nBlindly marking a static inline function with Py_ALWAYS_INLINE can result in worse performance (for example, due to increased code size). The compiler is usually smarter than the developer at this cost/benefit analysis.\nIf Python is built in debug mode (if the\nPy_DEBUG\nmacro is defined), the Py_ALWAYS_INLINE\nmacro does nothing. It must be specified before the function return type. Usage:\nstatic inline Py_ALWAYS_INLINE int random(void) { return 4; }\nAdded in version 3.11.\n-\nPy_NO_INLINE\u00b6\nDisable inlining on a function. For example, it reduces the C stack consumption: useful on LTO+PGO builds which heavily inline code (see bpo-33720).\nCorresponds to the\nnoinline\nattribute/specification on GCC and MSVC. Usage:\nPy_NO_INLINE static int random(void) { return 4; }\nAdded in version 3.11.\n-\nPy_DEPRECATED(version)\u00b6\nUse this to declare APIs that were deprecated in a specific CPython version. The macro must be placed before the symbol name.\nExample:\nPy_DEPRECATED(3.8) PyAPI_FUNC(int) Py_OldFunction(void);\nChanged in version 3.8: MSVC support was added.\n-\nPy_LOCAL(type)\u00b6\nDeclare a function returning the specified type using a fast-calling qualifier for functions that are local to the current file. Semantically, this is equivalent to\nstatic type\n.\n-\nPy_LOCAL_SYMBOL\u00b6\nMacro used to declare a symbol as local to the shared library (hidden). On supported platforms, it ensures the symbol is not exported.\nOn compatible versions of GCC/Clang, it expands to\n__attribute__((visibility(\"hidden\")))\n.\n-\nPy_EXPORTED_SYMBOL\u00b6\nMacro used to declare a symbol (function or data) as exported. 
On Windows, this expands to\n__declspec(dllexport)\n. On compatible versions of GCC/Clang, it expands to __attribute__((visibility(\"default\")))\n. This macro is for defining the C API itself; extension modules should not use it.\n-\nPy_IMPORTED_SYMBOL\u00b6\nMacro used to declare a symbol as imported. On Windows, this expands to\n__declspec(dllimport)\n. This macro is for defining the C API itself; extension modules should not use it.\n-\nPyAPI_FUNC(type)\u00b6\nMacro used by CPython to declare a function as part of the C API. Its expansion depends on the platform and build configuration. This macro is intended for defining CPython\u2019s C API itself; extension modules should not use it for their own symbols.\n-\nPyAPI_DATA(type)\u00b6\nMacro used by CPython to declare a public global variable as part of the C API. Its expansion depends on the platform and build configuration. This macro is intended for defining CPython\u2019s C API itself; extension modules should not use it for their own symbols.\nOutdated macros\u00b6\nThe following macros have been used for features that have since been standardized in C11.\n-\nPy_ALIGNED(num)\u00b6\nSpecify alignment to num bytes on compilers that support it.\nConsider using the C11 standard\n_Alignas\nspecifier over this macro.\n-\nPy_LL(number)\u00b6\n-\nPy_ULL(number)\u00b6\nUse number as a\nlong long\nor unsigned long long\ninteger literal, respectively. Expands to number followed by\nLL\nor LLU\n, respectively, but will expand to some compiler-specific suffixes on some older compilers. Consider using the C99 standard suffixes\nLL\nand LLU\ndirectly.\n-\nPy_MEMCPY(dest, src, n)\u00b6\nThis is a soft deprecated alias to\nmemcpy()\n. 
Use memcpy()\ndirectly instead. Deprecated since version 3.14: The macro is soft deprecated.\n-\nPy_VA_COPY\u00b6\nThis is a soft deprecated alias to the C99-standard\nva_copy\nfunction. Historically, this would use a compiler-specific method to copy a\nva_list\n. Changed in version 3.6: This is now an alias to\nva_copy\n.\nObjects, Types and Reference Counts\u00b6\nMost Python/C API functions have one or more arguments as well as a return value\nof type PyObject*. This type is a pointer to an opaque data type\nrepresenting an arbitrary Python object. Since all Python object types are\ntreated the same way by the Python language in most situations (e.g.,\nassignments, scope rules, and argument passing), it is only fitting that they\nshould be represented by a single C type. Almost all Python objects live on the\nheap: you never declare an automatic or static variable of type\nPyObject\n; only pointer variables of type PyObject* can be\ndeclared. The sole exception is type objects; since these must never be\ndeallocated, they are typically static PyTypeObject\nobjects.\nAll Python objects (even Python integers) have a type and a\nreference count. An object\u2019s type determines what kind of object it is\n(e.g., an integer, a list, or a user-defined function; there are many more as\nexplained in The standard type hierarchy). For each of the well-known types there is a macro\nto check whether an object is of that type; for instance, PyList_Check(a)\nis\ntrue if (and only if) the object pointed to by a is a Python list.\nReference Counts\u00b6\nThe reference count is important because today\u2019s computers have a finite (and often severely limited) memory size; it counts how many different places there are that have a strong reference to an object. Such a place could be another object, or a global (or static) C variable, or a local variable in some C function. When the last strong reference to an object is released (i.e. 
its reference count becomes zero), the object is deallocated. If it contains references to other objects, those references are released. Those other objects may be deallocated in turn, if there are no more references to them, and so on. (There\u2019s an obvious problem with objects that reference each other here; for now, the solution is \u201cdon\u2019t do that.\u201d)\nReference counts are always manipulated explicitly. The normal way is\nto use the macro Py_INCREF()\nto take a new reference to an\nobject (i.e. increment its reference count by one),\nand Py_DECREF()\nto release that reference (i.e. decrement the\nreference count by one). The Py_DECREF()\nmacro\nis considerably more complex than the incref one, since it must check whether\nthe reference count becomes zero and then cause the object\u2019s deallocator to be\ncalled. The deallocator is a function pointer contained in the object\u2019s type\nstructure. The type-specific deallocator takes care of releasing references\nfor other objects contained in the object if this is a compound\nobject type, such as a list, as well as performing any additional finalization\nthat\u2019s needed. There\u2019s no chance that the reference count can overflow; at\nleast as many bits are used to hold the reference count as there are distinct\nmemory locations in virtual memory (assuming sizeof(Py_ssize_t) >= sizeof(void*)\n).\nThus, the reference count increment is a simple operation.\nIt is not necessary to hold a strong reference (i.e. increment the reference count) for every local variable that contains a pointer to an object. In theory, the object\u2019s reference count goes up by one when the variable is made to point to it and it goes down by one when the variable goes out of scope. However, these two cancel each other out, so at the end the reference count hasn\u2019t changed. The only real reason to use the reference count is to prevent the object from being deallocated as long as our variable is pointing to it. 
If we know that there is at least one other reference to the object that lives at least as long as our variable, there is no need to take a new strong reference (i.e. increment the reference count) temporarily. An important situation where this arises is in objects that are passed as arguments to C functions in an extension module that are called from Python; the call mechanism guarantees to hold a reference to every argument for the duration of the call.\nHowever, a common pitfall is to extract an object from a list and hold on to it\nfor a while without taking a new reference. Some other operation might\nconceivably remove the object from the list, releasing that reference,\nand possibly deallocating it. The real danger is that innocent-looking\noperations may invoke arbitrary Python code which could do this; there is a code\npath which allows control to flow back to the user from a Py_DECREF()\n, so\nalmost any operation is potentially dangerous.\nA safe approach is to always use the generic operations (functions whose name\nbegins with PyObject_\n, PyNumber_\n, PySequence_\nor PyMapping_\n).\nThese operations always create a new strong reference\n(i.e. increment the reference count) of the object they return.\nThis leaves the caller with the responsibility to call Py_DECREF()\nwhen\nthey are done with the result; this soon becomes second nature.\nReference Count Details\u00b6\nThe reference count behavior of functions in the Python/C API is best explained\nin terms of ownership of references. Ownership pertains to references, never\nto objects (objects are not owned: they are always shared). \u201cOwning a\nreference\u201d means being responsible for calling Py_DECREF on it when the\nreference is no longer needed. 
Ownership can also be transferred, meaning that\nthe code that receives ownership of the reference then becomes responsible for\neventually releasing it by calling Py_DECREF()\nor Py_XDECREF()\nwhen it\u2019s no longer needed\u2014or passing on this responsibility (usually to its\ncaller). When a function passes ownership of a reference on to its caller, the\ncaller is said to receive a new reference. When no ownership is transferred,\nthe caller is said to borrow the reference. Nothing needs to be done for a\nborrowed reference.\nConversely, when a calling function passes in a reference to an object, there are two possibilities: the function steals a reference to the object, or it does not. Stealing a reference means that when you pass a reference to a function, that function assumes that it now owns that reference, and you are not responsible for it any longer.\nFew functions steal references; the two notable exceptions are\nPyList_SetItem()\nand PyTuple_SetItem()\n, which steal a reference\nto the item (but not to the tuple or list into which the item is put!). These\nfunctions were designed to steal a reference because of a common idiom for\npopulating a tuple or list with newly created objects; for example, the code to\ncreate the tuple (1, 2, \"three\")\ncould look like this (forgetting about\nerror handling for the moment; a better way to code this is shown below):\nPyObject *t;\nt = PyTuple_New(3);\nPyTuple_SetItem(t, 0, PyLong_FromLong(1L));\nPyTuple_SetItem(t, 1, PyLong_FromLong(2L));\nPyTuple_SetItem(t, 2, PyUnicode_FromString(\"three\"));\nHere, PyLong_FromLong()\nreturns a new reference which is immediately\nstolen by PyTuple_SetItem()\n. 
When you want to keep using an object\nalthough the reference to it will be stolen, use Py_INCREF()\nto grab\nanother reference before calling the reference-stealing function.\nIncidentally, PyTuple_SetItem()\nis the only way to set tuple items;\nPySequence_SetItem()\nand PyObject_SetItem()\nrefuse to do this\nsince tuples are an immutable data type. You should only use\nPyTuple_SetItem()\nfor tuples that you are creating yourself.\nEquivalent code for populating a list can be written using PyList_New()\nand PyList_SetItem()\n.\nHowever, in practice, you will rarely use these ways of creating and populating\na tuple or list. There\u2019s a generic function, Py_BuildValue()\n, that can\ncreate most common objects from C values, directed by a format string.\nFor example, the above two blocks of code could be replaced by the following\n(which also takes care of the error checking):\nPyObject *tuple, *list;\ntuple = Py_BuildValue(\"(iis)\", 1, 2, \"three\");\nlist = Py_BuildValue(\"[iis]\", 1, 2, \"three\");\nIt is much more common to use PyObject_SetItem()\nand friends with items\nwhose references you are only borrowing, like arguments that were passed in to\nthe function you are writing. In that case, their behaviour regarding references\nis much saner, since you don\u2019t have to take a new reference just so you\ncan give that reference away (\u201chave it be stolen\u201d). For example, this function\nsets all items of a list (actually, any mutable sequence) to a given item:\nint\nset_all(PyObject *target, PyObject *item)\n{\nPy_ssize_t i, n;\nn = PyObject_Length(target);\nif (n < 0)\nreturn -1;\nfor (i = 0; i < n; i++) {\nPyObject *index = PyLong_FromSsize_t(i);\nif (!index)\nreturn -1;\nif (PyObject_SetItem(target, index, item) < 0) {\nPy_DECREF(index);\nreturn -1;\n}\nPy_DECREF(index);\n}\nreturn 0;\n}\nThe situation is slightly different for function return values. 
While passing\na reference to most functions does not change your ownership responsibilities\nfor that reference, many functions that return a reference to an object give\nyou ownership of the reference. The reason is simple: in many cases, the\nreturned object is created on the fly, and the reference you get is the only\nreference to the object. Therefore, the generic functions that return object\nreferences, like PyObject_GetItem()\nand PySequence_GetItem()\n,\nalways return a new reference (the caller becomes the owner of the reference).\nIt is important to realize that whether you own a reference returned by a\nfunction depends on which function you call only \u2014 the plumage (the type of\nthe object passed as an argument to the function) doesn\u2019t enter into it!\nThus, if you extract an item from a list using PyList_GetItem()\n, you\ndon\u2019t own the reference \u2014 but if you obtain the same item from the same list\nusing PySequence_GetItem()\n(which happens to take exactly the same\narguments), you do own a reference to the returned object.\nHere is an example of how you could write a function that computes the sum of\nthe items in a list of integers; once using PyList_GetItem()\n, and once\nusing PySequence_GetItem()\n.\nlong\nsum_list(PyObject *list)\n{\nPy_ssize_t i, n;\nlong total = 0, value;\nPyObject *item;\nn = PyList_Size(list);\nif (n < 0)\nreturn -1; /* Not a list */\nfor (i = 0; i < n; i++) {\nitem = PyList_GetItem(list, i); /* Can't fail */\nif (!PyLong_Check(item)) continue; /* Skip non-integers */\nvalue = PyLong_AsLong(item);\nif (value == -1 && PyErr_Occurred())\n/* Integer too big to fit in a C long, bail out */\nreturn -1;\ntotal += value;\n}\nreturn total;\n}\nlong\nsum_sequence(PyObject *sequence)\n{\nPy_ssize_t i, n;\nlong total = 0, value;\nPyObject *item;\nn = PySequence_Length(sequence);\nif (n < 0)\nreturn -1; /* Has no length */\nfor (i = 0; i < n; i++) {\nitem = PySequence_GetItem(sequence, i);\nif (item == NULL)\nreturn 
-1; /* Not a sequence, or other failure */\nif (PyLong_Check(item)) {\nvalue = PyLong_AsLong(item);\nPy_DECREF(item);\nif (value == -1 && PyErr_Occurred())\n/* Integer too big to fit in a C long, bail out */\nreturn -1;\ntotal += value;\n}\nelse {\nPy_DECREF(item); /* Discard reference ownership */\n}\n}\nreturn total;\n}\nTypes\u00b6\nThere are few other data types that play a significant role in the Python/C API; most are simple C types such as int, long, double and char*. A few structure types are used to describe static tables used to list the functions exported by a module or the data attributes of a new object type, and another is used to describe the value of a complex number. These will be discussed together with the functions that use them.\n-\ntype Py_ssize_t\u00b6\n- Part of the Stable ABI.\nA signed integral type such that\nsizeof(Py_ssize_t) == sizeof(size_t)\n. C99 doesn\u2019t define such a thing directly (size_t is an unsigned integral type). See PEP 353 for details.PY_SSIZE_T_MAX\nis the largest positive value of typePy_ssize_t\n.\nExceptions\u00b6\nThe Python programmer only needs to deal with exceptions if specific error handling is required; unhandled exceptions are automatically propagated to the caller, then to the caller\u2019s caller, and so on, until they reach the top-level interpreter, where they are reported to the user accompanied by a stack traceback.\nFor C programmers, however, error checking always has to be explicit. All\nfunctions in the Python/C API can raise exceptions, unless an explicit claim is\nmade otherwise in a function\u2019s documentation. In general, when a function\nencounters an error, it sets an exception, discards any object references that\nit owns, and returns an error indicator. If not documented otherwise, this\nindicator is either NULL\nor -1\n, depending on the function\u2019s return type.\nA few functions return a Boolean true/false result, with false indicating an\nerror. 
Very few functions return no explicit error indicator or have an\nambiguous return value, and require explicit testing for errors with\nPyErr_Occurred()\n. These exceptions are always explicitly documented.\nException state is maintained in per-thread storage (this is equivalent to\nusing global storage in an unthreaded application). A thread can be in one of\ntwo states: an exception has occurred, or not. The function\nPyErr_Occurred()\ncan be used to check for this: it returns a borrowed\nreference to the exception type object when an exception has occurred, and\nNULL\notherwise. There are a number of functions to set the exception state:\nPyErr_SetString()\nis the most common (though not the most general)\nfunction to set the exception state, and PyErr_Clear()\nclears the\nexception state.\nThe full exception state consists of three objects (all of which can be\nNULL\n): the exception type, the corresponding exception value, and the\ntraceback. These have the same meanings as the Python result of\nsys.exc_info()\n; however, they are not the same: the Python objects represent\nthe last exception being handled by a Python try\n\u2026\nexcept\nstatement, while the C level exception state only exists while\nan exception is being passed on between C functions until it reaches the Python\nbytecode interpreter\u2019s main loop, which takes care of transferring it to\nsys.exc_info()\nand friends.\nNote that starting with Python 1.5, the preferred, thread-safe way to access the\nexception state from Python code is to call the function sys.exc_info()\n,\nwhich returns the per-thread exception state for Python code. Also, the\nsemantics of both ways to access the exception state have changed so that a\nfunction which catches an exception will save and restore its thread\u2019s exception\nstate so as to preserve the exception state of its caller. 
This prevents common\nbugs in exception handling code caused by an innocent-looking function\noverwriting the exception being handled; it also reduces the often unwanted\nlifetime extension for objects that are referenced by the stack frames in the\ntraceback.\nAs a general principle, a function that calls another function to perform some task should check whether the called function raised an exception, and if so, pass the exception state on to its caller. It should discard any object references that it owns, and return an error indicator, but it should not set another exception \u2014 that would overwrite the exception that was just raised, and lose important information about the exact cause of the error.\nA simple example of detecting exceptions and passing them on is shown in the\nsum_sequence()\nexample above. It so happens that this example doesn\u2019t\nneed to clean up any owned references when it detects an error. The following\nexample function shows some error cleanup. First, to remind you why you like\nPython, we show the equivalent Python code:\ndef incr_item(dict, key):\ntry:\nitem = dict[key]\nexcept KeyError:\nitem = 0\ndict[key] = item + 1\nHere is the corresponding C code, in all its glory:\nint\nincr_item(PyObject *dict, PyObject *key)\n{\n/* Objects all initialized to NULL for Py_XDECREF */\nPyObject *item = NULL, *const_one = NULL, *incremented_item = NULL;\nint rv = -1; /* Return value initialized to -1 (failure) */\nitem = PyObject_GetItem(dict, key);\nif (item == NULL) {\n/* Handle KeyError only: */\nif (!PyErr_ExceptionMatches(PyExc_KeyError))\ngoto error;\n/* Clear the error and use zero: */\nPyErr_Clear();\nitem = PyLong_FromLong(0L);\nif (item == NULL)\ngoto error;\n}\nconst_one = PyLong_FromLong(1L);\nif (const_one == NULL)\ngoto error;\nincremented_item = PyNumber_Add(item, const_one);\nif (incremented_item == NULL)\ngoto error;\nif (PyObject_SetItem(dict, key, incremented_item) < 0)\ngoto error;\nrv = 0; /* Success */\n/* Continue 
with cleanup code */\nerror:\n/* Cleanup code, shared by success and failure path */\n/* Use Py_XDECREF() to ignore NULL references */\nPy_XDECREF(item);\nPy_XDECREF(const_one);\nPy_XDECREF(incremented_item);\nreturn rv; /* -1 for error, 0 for success */\n}\nThis example represents an endorsed use of the goto\nstatement in C!\nIt illustrates the use of PyErr_ExceptionMatches()\nand\nPyErr_Clear()\nto handle specific exceptions, and the use of\nPy_XDECREF()\nto dispose of owned references that may be NULL\n(note the\n'X'\nin the name; Py_DECREF()\nwould crash when confronted with a\nNULL\nreference). It is important that the variables used to hold owned\nreferences are initialized to NULL\nfor this to work; likewise, the proposed\nreturn value is initialized to -1\n(failure) and only set to success after\nthe final call made is successful.\nEmbedding Python\u00b6\nThe one important task that only embedders (as opposed to extension writers) of the Python interpreter have to worry about is the initialization, and possibly the finalization, of the Python interpreter. Most functionality of the interpreter can only be used after the interpreter has been initialized.\nThe basic initialization function is Py_Initialize()\n. This initializes\nthe table of loaded modules, and creates the fundamental modules\nbuiltins\n, __main__\n, and sys\n. 
It also\ninitializes the module search path (sys.path\n).\nPy_Initialize()\ndoes not set the \u201cscript argument list\u201d (sys.argv\n).\nIf this variable is needed by Python code that will be executed later,\nPyConfig.argv\nand PyConfig.parse_argv\nmust be set: see\nPython Initialization Configuration.\nOn most systems (in particular, on Unix and Windows, although the details are\nslightly different), Py_Initialize()\ncalculates the module search path\nbased upon its best guess for the location of the standard Python interpreter\nexecutable, assuming that the Python library is found in a fixed location\nrelative to the Python interpreter executable. In particular, it looks for a\ndirectory named lib/pythonX.Y\nrelative to the parent directory\nwhere the executable named python\nis found on the shell command search\npath (the environment variable PATH\n).\nFor instance, if the Python executable is found in\n/usr/local/bin/python\n, it will assume that the libraries are in\n/usr/local/lib/pythonX.Y\n. (In fact, this particular path is also\nthe \u201cfallback\u201d location, used when no executable file named python\nis\nfound along PATH\n.) The user can override this behavior by setting the\nenvironment variable PYTHONHOME\n, or by inserting additional directories in\nfront of the standard path by setting PYTHONPATH\n.\nThe embedding application can steer the search by setting\nPyConfig.program_name\nbefore calling\nPy_InitializeFromConfig()\n. Note that\nPYTHONHOME\nstill overrides this and PYTHONPATH\nis still\ninserted in front of the standard path. An application that requires total\ncontrol has to provide its own implementation of Py_GetPath()\n,\nPy_GetPrefix()\n, Py_GetExecPrefix()\n, and\nPy_GetProgramFullPath()\n(all defined in Modules/getpath.c\n).\nSometimes, it is desirable to \u201cuninitialize\u201d Python. 
For instance, the\napplication may want to start over (make another call to\nPy_Initialize()\n) or the application is simply done with its use of\nPython and wants to free memory allocated by Python. This can be accomplished\nby calling Py_FinalizeEx()\n. The function Py_IsInitialized()\nreturns\ntrue if Python is currently in the initialized state. More information about\nthese functions is given in a later chapter. Notice that Py_FinalizeEx()\ndoes not free all memory allocated by the Python interpreter, e.g. memory\nallocated by extension modules currently cannot be released.\nDebugging Builds\u00b6\nPython can be built with several macros to enable extra checks of the interpreter and extension modules. These checks tend to add a large amount of overhead to the runtime so they are not enabled by default.\nA full list of the various types of debugging builds is in the file\nMisc/SpecialBuilds.txt\nin the Python source distribution. Builds are\navailable that support tracing of reference counts, debugging the memory\nallocator, or low-level profiling of the main interpreter loop. Only the most\nfrequently used builds will be described in the remainder of this section.\n-\nPy_DEBUG\u00b6\nCompiling the interpreter with the Py_DEBUG\nmacro defined produces\nwhat is generally meant by a debug build of Python.\nPy_DEBUG\nis enabled in the Unix build by adding\n--with-pydebug\nto the ./configure\ncommand.\nIt is also implied by the presence of the\nnot-Python-specific _DEBUG\nmacro. When Py_DEBUG\nis enabled\nin the Unix build, compiler optimization is disabled.\nIn addition to the reference count debugging described below, extra checks are performed, see Python Debug Build.\nDefining Py_TRACE_REFS\nenables reference tracing\n(see the configure --with-trace-refs option\n).\nWhen defined, a circular doubly linked list of active objects is maintained by adding two extra\nfields to every PyObject\n. Total allocations are tracked as well. 
Upon\nexit, all existing references are printed. (In interactive mode this happens\nafter every statement run by the interpreter.)\nPlease refer to Misc/SpecialBuilds.txt\nin the Python source distribution\nfor more detailed information.\nRecommended third party tools\u00b6\nThe following third party tools offer both simpler and more sophisticated approaches to creating C, C++ and Rust extensions for Python:\nUsing tools such as these can help avoid writing code that is tightly bound to a particular version of CPython, avoid reference counting errors, and focus more on your own code than on using the CPython API. In general, new versions of Python can be supported by updating the tool, and your code will often use newer and more efficient APIs automatically. Some tools also support compiling for other implementations of Python from a single set of sources.\nThese projects are not supported by the same people who maintain Python, and issues need to be raised with the projects directly. Remember to check that the project is still maintained and supported, as the list above may become outdated.\nSee also\n- Python Packaging User Guide: Binary Extensions\nThe Python Packaging User Guide not only covers several available tools that simplify the creation of binary extensions, but also discusses the various reasons why creating an extension module may be desirable in the first place.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 9501} +{"url": "https://docs.python.org/3/using/mac.html", "title": "Using Python on macOS", "content": "5. Using Python on macOS\u00b6\nThis document aims to give an overview of macOS-specific behavior you should know about to get started with Python on Mac computers. Python on a Mac running macOS is very similar to Python on other Unix-derived platforms, but there are some differences in installation and some features.\nThere are various ways to obtain and install Python for macOS. 
Pre-built versions of the most recent versions of Python are available from a number of distributors. Much of this document describes use of the Pythons provided by the CPython release team for download from the python.org website. See Alternative Distributions for some other options.\n5.1. Using Python for macOS from python.org\n\u00b6\n5.1.1. Installation steps\u00b6\nFor current Python versions\n(other than those in security\nstatus), the release team produces a\nPython for macOS installer package for each new release.\nA list of available installers\nis available here.\nWe recommend using the most recent supported Python version where possible.\nCurrent installers provide a\nuniversal2 binary build\nof Python which runs natively on all Macs (Apple Silicon and Intel) that are\nsupported by a wide range of macOS versions,\ncurrently typically from at least macOS 10.15 Catalina on.\nThe downloaded file is a standard macOS installer package file (.pkg\n).\nFile integrity information (checksum, size, sigstore signature, etc) for each file is included\non the release download page. Installer packages and their contents are signed and notarized\nwith Python Software Foundation\nApple Developer ID certificates\nto meet macOS Gatekeeper requirements.\nFor a default installation, double-click on the downloaded installer package file. This should launch the standard macOS Installer app and display the first of several installer windows steps.\nClicking on the Continue button brings up the Read Me for this installer.\nBesides other important information, the Read Me documents which Python version is\ngoing to be installed and on what versions of macOS it is supported. You may need\nto scroll through to read the whole file. By default, this Read Me will also be\ninstalled in /Applications/Python 3.14/\nand available to read anytime.\nClicking on Continue proceeds to display the license for Python and for other included software. 
You will then need to Agree to the license terms before proceeding to the next step. This license file will also be installed and available to be read later.\nAfter the license terms are accepted, the next step is the Installation Type display. For most uses, the standard set of installation operations is appropriate.\nBy pressing the Customize button, you can choose to omit or select certain package components of the installer. Click on each package name to see a description of what it installs. To also install support for the optional free-threaded feature, see Installing Free-threaded Binaries.\nIn either case, clicking Install will begin the install process by asking\npermission to install new software. A macOS user name with Administrator\nprivilege\nis needed as the installed Python will be available to all users of the Mac.\nWhen the installation is complete, the Summary window will appear.\nDouble-click on the Install Certificates.command\nicon or file in the /Applications/Python 3.14/\nwindow to complete the\ninstallation.\nThis will open a temporary Terminal shell window that will use the new Python to download and install SSL root certificates for its use.\nIf Successfully installed certifi\nand update complete\nappears\nin the terminal window, the installation is complete.\nClose this terminal window and the installer window.\nA default install will include:\nA\nPython 3.14\nfolder in yourApplications\nfolder. In here you find IDLE, the development environment that is a standard part of official Python distributions; and Python Launcher, which handles double-clicking Python scripts from the macOS Finder.A framework\n/Library/Frameworks/Python.framework\n, which includes the Python executable and libraries. The installer adds this location to your shell path. To uninstall Python, you can remove these three things. 
Symlinks to the Python executable are placed in/usr/local/bin/\n.\nNote\nRecent versions of macOS include a python3 command in /usr/bin/python3\nthat links to a usually older and incomplete version of Python provided by and for use by\nthe Apple development tools, Xcode or the Command Line Tools for Xcode.\nYou should never modify or attempt to delete this installation, as it is\nApple-controlled and is used by Apple-provided or third-party software. If\nyou choose to install a newer Python version from python.org\n, you will have\ntwo different but functional Python installations on your computer that\ncan co-exist. The default installer options should ensure that its python3\nwill be used instead of the system python3.\n5.1.2. How to run a Python script\u00b6\nThere are two ways to invoke the Python interpreter.\nIf you are familiar with using a Unix shell in a terminal\nwindow, you can invoke python3.14\nor python3\noptionally\nfollowed by one or more command line options (described in Command line and environment).\nThe Python tutorial also has a useful section on\nusing Python interactively from a shell.\nYou can also invoke the interpreter through an integrated development environment. IDLE \u2014 Python editor and shell is a basic editor and interpreter environment which is included with the standard distribution of Python. IDLE includes a Help menu that allows you to access Python documentation. If you are completely new to Python, you can read the tutorial introduction in that document.\nThere are many other editors and IDEs available, see Editors and IDEs for more information.\nTo run a Python script file from the terminal window, you can invoke the interpreter with the name of the script file:\npython3.14\nmyscript.py\nTo run your script from the Finder, you can either:\nDrag it to Python Launcher.\nSelect Python Launcher as the default application to open your script (or any\n.py\nscript) through the Finder Info window and double-click it. 
Python Launcher has various preferences to control how your script is launched. Option-dragging allows you to change these for one invocation, or use its Preferences\nmenu to change things globally.\nBe aware that running the script directly from the macOS Finder might produce different results than when running from a terminal window as the script will not be run in the usual shell environment including any setting of environment variables in shell profiles. And, as with any other script or program, be certain of what you are about to run.\n5.2. Alternative Distributions\u00b6\nBesides the standard python.org\nPython for macOS installer, there are third-party\ndistributions for macOS that may include additional functionality.\nSome popular distributions and their key features:\n- ActivePython\nInstaller with multi-platform compatibility, documentation\n- Anaconda\nPopular scientific modules (such as numpy, scipy, and pandas) and the\nconda\npackage manager.\n- Homebrew\nPackage manager for macOS including multiple versions of Python and many third-party Python-based packages (including numpy, scipy, and pandas).\n- MacPorts\nAnother package manager for macOS including multiple versions of Python and many third-party Python-based packages. May include pre-built versions of Python and many packages for older versions of macOS.\nNote that distributions might not include the latest versions of Python or other libraries, and are not maintained or supported by the core Python team.\n5.3. Installing Additional Python Packages\u00b6\nRefer to the Python Packaging User Guide for more information.\n5.4. GUI Programming\u00b6\nThere are several options for building GUI applications on the Mac with Python.\nThe standard Python GUI toolkit is tkinter\n, based on the cross-platform\nTk toolkit (https://www.tcl.tk). A macOS-native version of Tk is included with\nthe installer.\nPyObjC is a Python binding to Apple\u2019s Objective-C/Cocoa framework. 
Information on PyObjC is available from pyobjc.\nA number of alternative macOS GUI toolkits are available including:\nPySide: Official Python bindings to the Qt GUI toolkit.\nPyQt: Alternative Python bindings to Qt.\nKivy: A cross-platform GUI toolkit that supports desktop and mobile platforms.\nToga: Part of the BeeWare Project; supports desktop, mobile, web and console apps.\nwxPython: A cross-platform toolkit that supports desktop operating systems.\n5.5. Advanced Topics\u00b6\n5.5.1. Installing Free-threaded Binaries\u00b6\nAdded in version 3.13.\nThe python.org\nPython for macOS\ninstaller package can optionally install an additional build of\nPython 3.14 that supports PEP 703, the free-threading feature\n(running with the global interpreter lock disabled).\nCheck the release page on python.org\nfor possible updated information.\nThe free-threaded mode is working and continues to be improved, but there is some additional overhead in single-threaded workloads compared to the regular build. Additionally, third-party packages, in particular ones with an extension module, may not be ready for use in a free-threaded build, and will re-enable the GIL. Therefore, the support for free-threading is not installed by default. It is packaged as a separate install option, available by clicking the Customize button on the Installation Type step of the installer as described above.\nIf the box next to the Free-threaded Python package name is checked,\na separate PythonT.framework\nwill also be installed\nalongside the normal Python.framework\nin /Library/Frameworks\n.\nThis configuration allows a free-threaded Python 3.14 build to co-exist\non your system with a traditional (GIL only) Python 3.14 build with\nminimal risk while installing or testing. 
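Whether a given interpreter is one of these free-threaded builds can be checked at runtime. A small sketch: the Py_GIL_DISABLED config variable is standard, while sys._is_gil_enabled() is a private helper added in 3.13 that may be absent on older interpreters, hence the guarded lookup.

```python
import sys
import sysconfig


def free_threading_status():
    """Report whether this interpreter is a free-threaded (PEP 703)
    build and whether the GIL is currently enabled."""
    # Py_GIL_DISABLED is 1 in a free-threaded build, 0 (or None) otherwise.
    ft_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
    # sys._is_gil_enabled() exists on 3.13+; on older interpreters the
    # GIL is always enabled, so fall back to True.
    gil_on = getattr(sys, "_is_gil_enabled", lambda: True)()
    return {"free_threaded_build": ft_build, "gil_enabled": gil_on}


print(free_threading_status())
```

On a traditional build this reports a False free_threaded_build with the GIL enabled; on python3.14t it reports a free-threaded build, with the GIL disabled unless a package has re-enabled it.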
This installation layout may\nchange in future releases.\nKnown cautions and limitations:\nThe UNIX command-line tools package, which is selected by default, will install links in\n/usr/local/bin\nfor python3.14t\n, the free-threaded interpreter, and python3.14t-config\n, a configuration utility which may be useful for package builders. Since /usr/local/bin\nis typically included in your shell PATH\n, in most cases no changes to your PATH\nenvironment variables should be needed to use python3.14t\n.\nFor this release, the Shell profile updater package and the\nUpdate Shell Profile.command\nin /Applications/Python 3.14/\ndo not support the free-threaded package.\nThe free-threaded build and the traditional build have separate search paths and separate\nsite-packages\ndirectories so, by default, if you need a package available in both builds, it may need to be installed in both. The free-threaded package will install a separate instance of pip for use with python3.14t\n.\nTo install a package using pip without a venv:\npython3.14t -m pip install <package_name>\nWhen working with multiple Python environments, it is usually safest and easiest to create and use virtual environments. This can avoid possible command name conflicts and confusion about which Python is in use:\npython3.14t -m venv <path_to_venv>\nthen activate.\nTo run a free-threaded version of IDLE:\npython3.14t -m idlelib\nThe interpreters in both builds respond to the same PYTHON* environment variables, which may have unexpected results, for example, if you have\nPYTHONPATH\nset in a shell profile. If necessary, there are command line options like -E\nto ignore these environment variables.\nThe free-threaded build links to the third-party shared libraries, such as\nOpenSSL\nand Tk\n, installed in the traditional framework. 
This means that both builds also share one set of trust certificates as installed by the Install Certificates.command script, thus it only needs to be run once.\nIf you cannot depend on the link in\n/usr/local/bin\npointing to the python.org\nfree-threaded python3.14t\n(for example, if you want to install your own version there or some other distribution does), you can explicitly set your shell PATH\nenvironment variable to include the PythonT\nframework bin\ndirectory:\nexport PATH=\"/Library/Frameworks/PythonT.framework/Versions/3.14/bin\":\"$PATH\"\nThe traditional framework installation by default does something similar, except for\nPython.framework\n. Be aware that having both framework bin\ndirectories in PATH\ncan lead to confusion if there are duplicate names like python3.14\nin both; which one is actually used depends on the order they appear in PATH\n. The which python3.x\nor which python3.xt\ncommands can show which path is being used. Using virtual environments can help avoid such ambiguities. Another option might be to create a shell alias to the desired interpreter, like:\nalias py3.14=\"/Library/Frameworks/Python.framework/Versions/3.14/bin/python3.14\"\nalias py3.14t=\"/Library/Frameworks/PythonT.framework/Versions/3.14/bin/python3.14t\"\n5.5.2. Installing using the command line\u00b6\nIf you want to use automation to install the python.org\ninstaller package\n(rather than by using the familiar macOS Installer GUI app),\nthe macOS command line installer utility lets you select non-default\noptions, too. 
If you are not familiar with installer, it can be\nsomewhat cryptic (see man installer for more information).\nAs an example, the following shell snippet shows one way to do it,\nusing the 3.14.0b2\nrelease and selecting the free-threaded interpreter\noption:\nRELEASE=\"python-3.14.0b2-macos11.pkg\"\n# download installer pkg\ncurl -O https://www.python.org/ftp/python/3.14.0/${RELEASE}\n# create installer choicechanges to customize the install:\n#    enable the PythonTFramework-3.14 package\n#    while accepting the other defaults (install all other packages)\ncat > ./choicechanges.plist <<EOF\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE plist PUBLIC \"-//Apple//DTD PLIST 1.0//EN\" \"http://www.apple.com/DTDs/PropertyList-1.0.dtd\">\n<plist version=\"1.0\">\n<array>\n<dict>\n<key>attributeSetting</key>\n<integer>1</integer>\n<key>choiceAttribute</key>\n<string>selected</string>\n<key>choiceIdentifier</key>\n<string>org.python.Python.PythonTFramework-3.14</string>\n</dict>\n</array>\n</plist>\nEOF\nsudo installer -pkg ./${RELEASE} -applyChoiceChangesXML ./choicechanges.plist -target /\nYou can then test that both installer builds are now available with something like:\n$ # test that the free-threaded interpreter was installed if the Unix Command Tools package was enabled\n$ /usr/local/bin/python3.14t -VV\nPython 3.14.0b2 free-threading build (v3.14.0b2:3a83b172af, Jun 5 2024, 12:57:31) [Clang 15.0.0 (clang-1500.3.9.4)]\n$ # and the traditional interpreter\n$ /usr/local/bin/python3.14 -VV\nPython 3.14.0b2 (v3.14.0b2:3a83b172af, Jun 5 2024, 12:50:24) [Clang 15.0.0 (clang-1500.3.9.4)]\n$ # test that they are also available without the prefix if /usr/local/bin is on $PATH\n$ python3.14t -VV\nPython 3.14.0b2 free-threading build (v3.14.0b2:3a83b172af, Jun 5 2024, 12:57:31) [Clang 15.0.0 (clang-1500.3.9.4)]\n$ python3.14 -VV\nPython 3.14.0b2 (v3.14.0b2:3a83b172af, Jun 5 2024, 12:50:24) [Clang 15.0.0 (clang-1500.3.9.4)]\nNote\nCurrent python.org\ninstallers only install to fixed locations like\n/Library/Frameworks/\n, /Applications\n, and /usr/local/bin\n.\nYou cannot use the installer -domain\noption to install to\nother locations.\n5.5.3. 
Distributing Python Applications\u00b6\nA range of tools exist for converting your Python code into a standalone distributable application:\npy2app: Supports creating macOS\n.app\nbundles from a Python project.\nBriefcase: Part of the BeeWare Project; a cross-platform packaging tool that supports creation of\n.app\nbundles on macOS, as well as managing signing and notarization.\nPyInstaller: A cross-platform packaging tool that creates a single file or folder as a distributable artifact.\n5.5.4. App Store Compliance\u00b6\nApps submitted for distribution through the macOS App Store must pass Apple\u2019s app review process. This process includes a set of automated validation rules that inspect the submitted application bundle for problematic code.\nThe Python standard library contains some code that is known to violate these automated rules. While these violations appear to be false positives, Apple\u2019s review rules cannot be challenged. Therefore, it is necessary to modify the Python standard library for an app to pass App Store review.\nThe Python source tree contains\na patch file that will remove\nall code that is known to cause issues with the App Store review process. This\npatch is applied automatically when CPython is configured with the\n--with-app-store-compliance\noption.\nThis patch is not normally required to use CPython on a Mac; nor is it required if you are distributing an app outside the macOS App Store. It is only required if you are using the macOS App Store as a distribution channel.\n5.6. Other Resources\u00b6\nThe python.org Help page has links to many useful resources. 
The Pythonmac-SIG mailing list is another support resource specifically for Python users and developers on the Mac.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4192} +{"url": "https://docs.python.org/3/c-api/exceptions.html", "title": "Exception Handling", "content": "Exception Handling\u00b6\nThe functions described in this chapter will let you handle and raise Python\nexceptions. It is important to understand some of the basics of Python\nexception handling. It works somewhat like the POSIX errno\nvariable:\nthere is a global indicator (per thread) of the last error that occurred. Most\nC API functions don\u2019t clear this on success, but will set it to indicate the\ncause of the error on failure. Most C API functions also return an error\nindicator, usually NULL\nif they are supposed to return a pointer, or -1\nif they return an integer (exception: the PyArg_*\nfunctions\nreturn 1\nfor success and 0\nfor failure).\nConcretely, the error indicator consists of three object pointers: the\nexception\u2019s type, the exception\u2019s value, and the traceback object. Any\nof those pointers can be NULL\nif non-set (although some combinations are\nforbidden, for example you can\u2019t have a non-NULL\ntraceback if the exception\ntype is NULL\n).\nWhen a function must fail because some function it called failed, it generally doesn\u2019t set the error indicator; the function it called already set it. It is responsible for either handling the error and clearing the exception or returning after cleaning up any resources it holds (such as object references or memory allocations); it should not continue normally if it is not prepared to handle the error. If returning due to an error, it is important to indicate to the caller that an error has been set. 
If the error is not handled or carefully propagated, additional calls into the Python/C API may not behave as intended and may fail in mysterious ways.\nNote\nThe error indicator is not the result of sys.exc_info()\n.\nThe former corresponds to an exception that is not yet caught (and is\ntherefore still propagating), while the latter returns an exception after\nit is caught (and has therefore stopped propagating).\nPrinting and clearing\u00b6\n-\nvoid PyErr_Clear()\u00b6\n- Part of the Stable ABI.\nClear the error indicator. If the error indicator is not set, there is no effect.\n-\nvoid PyErr_PrintEx(int set_sys_last_vars)\u00b6\n- Part of the Stable ABI.\nPrint a standard traceback to\nsys.stderr\nand clear the error indicator. Unless the error is a SystemExit\n, in that case no traceback is printed and the Python process will exit with the error code specified by the SystemExit\ninstance.\nCall this function only when the error indicator is set. Otherwise it will cause a fatal error!\nIf set_sys_last_vars is nonzero, the variable\nsys.last_exc\nis set to the printed exception. For backwards compatibility, the deprecated variables sys.last_type\n, sys.last_value\nand sys.last_traceback\nare also set to the type, value and traceback of this exception, respectively.\nChanged in version 3.12: The setting of\nsys.last_exc\nwas added.\n-\nvoid PyErr_Print()\u00b6\n- Part of the Stable ABI.\nAlias for\nPyErr_PrintEx(1)\n.\n-\nvoid PyErr_WriteUnraisable(PyObject *obj)\u00b6\n- Part of the Stable ABI.\nCall\nsys.unraisablehook()\nusing the current exception and obj argument.\nThis utility function prints a warning message to\nsys.stderr\nwhen an exception has been set but it is impossible for the interpreter to actually raise the exception. It is used, for example, when an exception occurs in an __del__()\nmethod.\nThe function is called with a single argument obj that identifies the context in which the unraisable exception occurred. 
If possible, the repr of obj will be printed in the warning message. If obj is\nNULL\n, only the traceback is printed.\nAn exception must be set when calling this function.\nChanged in version 3.4: Print a traceback. Print only traceback if obj is\nNULL\n.\nChanged in version 3.8: Use\nsys.unraisablehook()\n.\n-\nvoid PyErr_FormatUnraisable(const char *format, ...)\u00b6\nSimilar to\nPyErr_WriteUnraisable()\n, but the format and subsequent parameters help format the warning message; they have the same meaning and values as in PyUnicode_FromFormat()\n.\nPyErr_WriteUnraisable(obj)\nis roughly equivalent to PyErr_FormatUnraisable(\"Exception ignored in: %R\", obj)\n. If format is NULL\n, only the traceback is printed.\nAdded in version 3.13.\n-\nvoid PyErr_DisplayException(PyObject *exc)\u00b6\n- Part of the Stable ABI since version 3.12.\nPrint the standard traceback display of\nexc\nto sys.stderr\n, including chained exceptions and notes.\nAdded in version 3.12.\nRaising exceptions\u00b6\nThese functions help you set the current thread\u2019s error indicator.\nFor convenience, some of these functions will always return a\nNULL\npointer for use in a return\nstatement.\n-\nvoid PyErr_SetString(PyObject *type, const char *message)\u00b6\n- Part of the Stable ABI.\nThis is the most common way to set the error indicator. The first argument specifies the exception type; it is normally one of the standard exceptions, e.g.\nPyExc_RuntimeError\n. You need not create a new strong reference to it (e.g. with Py_INCREF()\n). The second argument is an error message; it is decoded from 'utf-8'\n.\n-\nvoid PyErr_SetObject(PyObject *type, PyObject *value)\u00b6\n- Part of the Stable ABI.\nThis function is similar to\nPyErr_SetString()\nbut lets you specify an arbitrary Python object for the \u201cvalue\u201d of the exception.\n-\nPyObject *PyErr_Format(PyObject *exception, const char *format, ...)\u00b6\n- Return value: Always NULL. 
Part of the Stable ABI.\nThis function sets the error indicator and returns\nNULL\n. exception should be a Python exception class. The format and subsequent parameters help format the error message; they have the same meaning and values as in PyUnicode_FromFormat()\n. format is an ASCII-encoded string.\n-\nPyObject *PyErr_FormatV(PyObject *exception, const char *format, va_list vargs)\u00b6\n- Return value: Always NULL. Part of the Stable ABI since version 3.5.\nSame as\nPyErr_Format()\n, but taking a va_list\nargument rather than a variable number of arguments.\nAdded in version 3.5.\n-\nvoid PyErr_SetNone(PyObject *type)\u00b6\n- Part of the Stable ABI.\nThis is a shorthand for\nPyErr_SetObject(type, Py_None)\n.\n-\nint PyErr_BadArgument()\u00b6\n- Part of the Stable ABI.\nThis is a shorthand for\nPyErr_SetString(PyExc_TypeError, message)\n, where message indicates that a built-in operation was invoked with an illegal argument. It is mostly for internal use.\n-\nPyObject *PyErr_NoMemory()\u00b6\n- Return value: Always NULL. Part of the Stable ABI.\nThis is a shorthand for\nPyErr_SetNone(PyExc_MemoryError)\n; it returns NULL\nso an object allocation function can write return PyErr_NoMemory();\nwhen it runs out of memory.\n-\nPyObject *PyErr_SetFromErrno(PyObject *type)\u00b6\n- Return value: Always NULL. Part of the Stable ABI.\nThis is a convenience function to raise an exception when a C library function has returned an error and set the C variable\nerrno\n. It constructs a tuple object whose first item is the integer errno\nvalue and whose second item is the corresponding error message (gotten from strerror()\n), and then calls PyErr_SetObject(type, object)\n. On Unix, when the errno\nvalue is EINTR\n, indicating an interrupted system call, this calls PyErr_CheckSignals()\n, and if that set the error indicator, leaves it set to that. 
The function always returns NULL\n, so a wrapper function around a system call can write return PyErr_SetFromErrno(type);\nwhen the system call returns an error.\n-\nPyObject *PyErr_SetFromErrnoWithFilenameObject(PyObject *type, PyObject *filenameObject)\u00b6\n- Return value: Always NULL. Part of the Stable ABI.\nSimilar to\nPyErr_SetFromErrno()\n, with the additional behavior that if filenameObject is not NULL\n, it is passed to the constructor of type as a third parameter. In the case of an OSError\nexception, this is used to define the filename\nattribute of the exception instance.\n-\nPyObject *PyErr_SetFromErrnoWithFilenameObjects(PyObject *type, PyObject *filenameObject, PyObject *filenameObject2)\u00b6\n- Return value: Always NULL. Part of the Stable ABI since version 3.7.\nSimilar to\nPyErr_SetFromErrnoWithFilenameObject()\n, but takes a second filename object, for raising errors when a function that takes two filenames fails.\nAdded in version 3.4.\n-\nPyObject *PyErr_SetFromErrnoWithFilename(PyObject *type, const char *filename)\u00b6\n- Return value: Always NULL. Part of the Stable ABI.\nSimilar to\nPyErr_SetFromErrnoWithFilenameObject()\n, but the filename is given as a C string. filename is decoded from the filesystem encoding and error handler.\n-\nPyObject *PyErr_SetFromWindowsErr(int ierr)\u00b6\n- Return value: Always NULL. Part of the Stable ABI on Windows since version 3.7.\nThis is a convenience function to raise\nOSError\n. If called with ierr of 0\n, the error code returned by a call to GetLastError()\nis used instead. It calls the Win32 function FormatMessage()\nto retrieve the Windows description of error code given by ierr or GetLastError()\n, then it constructs an OSError\nobject with the winerror\nattribute set to the error code, the strerror\nattribute set to the corresponding error message (gotten from FormatMessage()\n), and then calls PyErr_SetObject(PyExc_OSError, object)\n. 
This function always returns NULL\n.\nAvailability: Windows.\n-\nPyObject *PyErr_SetExcFromWindowsErr(PyObject *type, int ierr)\u00b6\n- Return value: Always NULL. Part of the Stable ABI on Windows since version 3.7.\nSimilar to\nPyErr_SetFromWindowsErr()\n, with an additional parameter specifying the exception type to be raised.\nAvailability: Windows.\n-\nPyObject *PyErr_SetFromWindowsErrWithFilename(int ierr, const char *filename)\u00b6\n- Return value: Always NULL. Part of the Stable ABI on Windows since version 3.7.\nSimilar to\nPyErr_SetFromWindowsErr()\n, with the additional behavior that if filename is not NULL\n, it is decoded from the filesystem encoding (os.fsdecode()\n) and passed to the constructor of OSError\nas a third parameter to be used to define the filename\nattribute of the exception instance.\nAvailability: Windows.\n-\nPyObject *PyErr_SetExcFromWindowsErrWithFilenameObject(PyObject *type, int ierr, PyObject *filename)\u00b6\n- Return value: Always NULL. Part of the Stable ABI on Windows since version 3.7.\nSimilar to\nPyErr_SetExcFromWindowsErr()\n, with the additional behavior that if filename is not NULL\n, it is passed to the constructor of OSError\nas a third parameter to be used to define the filename\nattribute of the exception instance.\nAvailability: Windows.\n-\nPyObject *PyErr_SetExcFromWindowsErrWithFilenameObjects(PyObject *type, int ierr, PyObject *filename, PyObject *filename2)\u00b6\n- Return value: Always NULL. Part of the Stable ABI on Windows since version 3.7.\nSimilar to\nPyErr_SetExcFromWindowsErrWithFilenameObject()\n, but accepts a second filename object.\nAvailability: Windows.\nAdded in version 3.4.\n-\nPyObject *PyErr_SetExcFromWindowsErrWithFilename(PyObject *type, int ierr, const char *filename)\u00b6\n- Return value: Always NULL. 
Part of the Stable ABI on Windows since version 3.7.\nSimilar to\nPyErr_SetFromWindowsErrWithFilename()\n, with an additional parameter specifying the exception type to be raised.\nAvailability: Windows.\n-\nPyObject *PyErr_SetImportError(PyObject *msg, PyObject *name, PyObject *path)\u00b6\n- Return value: Always NULL. Part of the Stable ABI since version 3.7.\nThis is a convenience function to raise\nImportError\n. msg will be set as the exception\u2019s message string. name and path, both of which can be NULL\n, will be set as the ImportError\n\u2019s respective name\nand path\nattributes.\nAdded in version 3.3.\n-\nPyObject *PyErr_SetImportErrorSubclass(PyObject *exception, PyObject *msg, PyObject *name, PyObject *path)\u00b6\n- Return value: Always NULL. Part of the Stable ABI since version 3.6.\nMuch like\nPyErr_SetImportError()\nbut this function allows for specifying a subclass of ImportError\nto raise.\nAdded in version 3.6.\n-\nvoid PyErr_SyntaxLocationObject(PyObject *filename, int lineno, int col_offset)\u00b6\nSet file, line, and offset information for the current exception. 
If the current exception is not a\nSyntaxError\n, then it sets additional attributes, which make the exception printing subsystem think the exception is a SyntaxError\n.\nAdded in version 3.4.\n-\nvoid PyErr_RangedSyntaxLocationObject(PyObject *filename, int lineno, int col_offset, int end_lineno, int end_col_offset)\u00b6\nSimilar to\nPyErr_SyntaxLocationObject()\n, but also sets the end_lineno and end_col_offset information for the current exception.\nAdded in version 3.10.\n-\nvoid PyErr_SyntaxLocationEx(const char *filename, int lineno, int col_offset)\u00b6\n- Part of the Stable ABI since version 3.7.\nLike\nPyErr_SyntaxLocationObject()\n, but filename is a byte string decoded from the filesystem encoding and error handler.\nAdded in version 3.2.\n-\nvoid PyErr_SyntaxLocation(const char *filename, int lineno)\u00b6\n- Part of the Stable ABI.\nLike\nPyErr_SyntaxLocationEx()\n, but the col_offset parameter is omitted.\n-\nvoid PyErr_BadInternalCall()\u00b6\n- Part of the Stable ABI.\nThis is a shorthand for\nPyErr_SetString(PyExc_SystemError, message)\n, where message indicates that an internal operation (e.g. a Python/C API function) was invoked with an illegal argument. It is mostly for internal use.\n-\nPyObject *PyErr_ProgramTextObject(PyObject *filename, int lineno)\u00b6\nGet the source line in filename at line lineno. filename should be a Python\nstr\nobject.\nOn success, this function returns a Python string object with the found line. On failure, this function returns\nNULL\nwithout an exception set.\n-\nPyObject *PyErr_ProgramText(const char *filename, int lineno)\u00b6\n- Part of the Stable ABI.\nSimilar to\nPyErr_ProgramTextObject()\n, but filename is a const char*, which is decoded with the filesystem encoding and error handler, instead of a Python object reference.\nIssuing warnings\u00b6\nUse these functions to issue warnings from C code. They mirror similar\nfunctions exported by the Python warnings\nmodule. 
They normally\nprint a warning message to sys.stderr; however, it is\nalso possible that the user has specified that warnings are to be turned into\nerrors, and in that case they will raise an exception. It is also possible that\nthe functions raise an exception because of a problem with the warning machinery.\nThe return value is 0\nif no exception is raised, or -1\nif an exception\nis raised. (It is not possible to determine whether a warning message is\nactually printed, nor what the reason is for the exception; this is\nintentional.) If an exception is raised, the caller should do its normal\nexception handling (for example, Py_DECREF()\nowned references and return\nan error value).\n-\nint PyErr_WarnEx(PyObject *category, const char *message, Py_ssize_t stack_level)\u00b6\n- Part of the Stable ABI.\nIssue a warning message. The category argument is a warning category (see below) or\nNULL\n; the message argument is a UTF-8 encoded string. stack_level is a positive number giving a number of stack frames; the warning will be issued from the currently executing line of code in that stack frame. A stack_level of 1 is the function calling PyErr_WarnEx()\n, 2 is the function above that, and so forth.\nWarning categories must be subclasses of\nPyExc_Warning\n; PyExc_Warning\nis a subclass of PyExc_Exception\n; the default warning category is PyExc_RuntimeWarning\n. The standard Python warning categories are available as global variables whose names are enumerated at Warning types.\nFor information about warning control, see the documentation for the\nwarnings\nmodule and the -W\noption in the command line documentation. There is no C API for warning control.\n-\nint PyErr_WarnExplicitObject(PyObject *category, PyObject *message, PyObject *filename, int lineno, PyObject *module, PyObject *registry)\u00b6\nIssue a warning message with explicit control over all warning attributes. 
This is a straightforward wrapper around the Python function\nwarnings.warn_explicit()\n; see there for more information. The module and registry arguments may be set to NULL\nto get the default effect described there.\nAdded in version 3.4.\n-\nint PyErr_WarnExplicit(PyObject *category, const char *message, const char *filename, int lineno, const char *module, PyObject *registry)\u00b6\n- Part of the Stable ABI.\nSimilar to\nPyErr_WarnExplicitObject()\nexcept that message and module are UTF-8 encoded strings, and filename is decoded from the filesystem encoding and error handler.\n-\nint PyErr_WarnFormat(PyObject *category, Py_ssize_t stack_level, const char *format, ...)\u00b6\n- Part of the Stable ABI.\nFunction similar to\nPyErr_WarnEx()\n, but use PyUnicode_FromFormat()\nto format the warning message. format is an ASCII-encoded string.\nAdded in version 3.2.\n-\nint PyErr_WarnExplicitFormat(PyObject *category, const char *filename, int lineno, const char *module, PyObject *registry, const char *format, ...)\u00b6\nSimilar to\nPyErr_WarnExplicit()\n, but uses PyUnicode_FromFormat()\nto format the warning message. format is an ASCII-encoded string.\nAdded in version 3.2.\n-\nint PyErr_ResourceWarning(PyObject *source, Py_ssize_t stack_level, const char *format, ...)\u00b6\n- Part of the Stable ABI since version 3.6.\nFunction similar to\nPyErr_WarnFormat()\n, but category is ResourceWarning\nand it passes source to warnings.WarningMessage\n.\nAdded in version 3.6.\nQuerying the error indicator\u00b6\n-\nPyObject *PyErr_Occurred()\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nTest whether the error indicator is set. If set, return the exception type (the first argument to the last call to one of the\nPyErr_Set*\nfunctions or to PyErr_Restore()\n). If not set, return NULL\n. 
You do not own a reference to the return value, so you do not need to Py_DECREF()\nit.\nThe caller must have an attached thread state.\nNote\nDo not compare the return value to a specific exception; use\nPyErr_ExceptionMatches()\ninstead, shown below. (The comparison could easily fail since the exception may be an instance instead of a class, in the case of a class exception, or it may be a subclass of the expected exception.)\n-\nint PyErr_ExceptionMatches(PyObject *exc)\u00b6\n- Part of the Stable ABI.\nEquivalent to\nPyErr_GivenExceptionMatches(PyErr_Occurred(), exc)\n. This should only be called when an exception is actually set; a memory access violation will occur if no exception has been raised.\n-\nint PyErr_GivenExceptionMatches(PyObject *given, PyObject *exc)\u00b6\n- Part of the Stable ABI.\nReturn true if the given exception matches the exception type in exc. If exc is a class object, this also returns true when given is an instance of a subclass. If exc is a tuple, all exception types in the tuple (and recursively in subtuples) are searched for a match.\n-\nPyObject *PyErr_GetRaisedException(void)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.12.\nReturn the exception currently being raised, clearing the error indicator at the same time. Return\nNULL\nif the error indicator is not set.\nThis function is used by code that needs to catch exceptions, or code that needs to save and restore the error indicator temporarily.\nFor example:\n{ PyObject *exc = PyErr_GetRaisedException(); /* ... code that might produce other errors ... 
*/ PyErr_SetRaisedException(exc); }\nSee also\nPyErr_GetHandledException()\n, to save the exception currently being handled.\nAdded in version 3.12.\n-\nvoid PyErr_SetRaisedException(PyObject *exc)\u00b6\n- Part of the Stable ABI since version 3.12.\nSet exc as the exception currently being raised, clearing the existing exception if one is set.\nWarning\nThis call steals a reference to exc, which must be a valid exception.\nAdded in version 3.12.\n-\nvoid PyErr_Fetch(PyObject **ptype, PyObject **pvalue, PyObject **ptraceback)\u00b6\n- Part of the Stable ABI.\nDeprecated since version 3.12: Use\nPyErr_GetRaisedException()\ninstead.\nRetrieve the error indicator into three variables whose addresses are passed. If the error indicator is not set, set all three variables to\nNULL\n. If it is set, it will be cleared and you own a reference to each object retrieved. The value and traceback object may be NULL\neven when the type object is not.\nNote\nThis function is normally only used by legacy code that needs to catch exceptions or save and restore the error indicator temporarily.\nFor example:\n{ PyObject *type, *value, *traceback; PyErr_Fetch(&type, &value, &traceback); /* ... code that might produce other errors ... */ PyErr_Restore(type, value, traceback); }\n-\nvoid PyErr_Restore(PyObject *type, PyObject *value, PyObject *traceback)\u00b6\n- Part of the Stable ABI.\nDeprecated since version 3.12: Use\nPyErr_SetRaisedException()\ninstead.\nSet the error indicator from the three objects, type, value, and traceback, clearing the existing exception if one is set. If the objects are\nNULL\n, the error indicator is cleared. Do not pass a NULL\ntype and non-NULL\nvalue or traceback. The exception type should be a class. Do not pass an invalid exception type or value. (Violating these rules will cause subtle problems later.) This call takes away a reference to each object: you must own a reference to each object before the call and after the call you no longer own these references. 
(If you don\u2019t understand this, don\u2019t use this function. I warned you.)Note\nThis function is normally only used by legacy code that needs to save and restore the error indicator temporarily. Use\nPyErr_Fetch()\nto save the current error indicator.\n-\nvoid PyErr_NormalizeException(PyObject **exc, PyObject **val, PyObject **tb)\u00b6\n- Part of the Stable ABI.\nDeprecated since version 3.12: Use\nPyErr_GetRaisedException()\ninstead, to avoid any possible de-normalization.Under certain circumstances, the values returned by\nPyErr_Fetch()\nbelow can be \u201cunnormalized\u201d, meaning that*exc\nis a class object but*val\nis not an instance of the same class. This function can be used to instantiate the class in that case. If the values are already normalized, nothing happens. The delayed normalization is implemented to improve performance.Note\nThis function does not implicitly set the\n__traceback__\nattribute on the exception value. If setting the traceback appropriately is desired, the following additional snippet is needed:if (tb != NULL) { PyException_SetTraceback(val, tb); }\n-\nPyObject *PyErr_GetHandledException(void)\u00b6\n- Part of the Stable ABI since version 3.11.\nRetrieve the active exception instance, as would be returned by\nsys.exception()\n. This refers to an exception that was already caught, not to an exception that was freshly raised. Returns a new reference to the exception orNULL\n. Does not modify the interpreter\u2019s exception state.Note\nThis function is not normally used by code that wants to handle exceptions. Rather, it can be used when code needs to save and restore the exception state temporarily. Use\nPyErr_SetHandledException()\nto restore or clear the exception state.Added in version 3.11.\n-\nvoid PyErr_SetHandledException(PyObject *exc)\u00b6\n- Part of the Stable ABI since version 3.11.\nSet the active exception, as known from\nsys.exception()\n. 
This refers to an exception that was already caught, not to an exception that was freshly raised. To clear the exception state, passNULL\n.Note\nThis function is not normally used by code that wants to handle exceptions. Rather, it can be used when code needs to save and restore the exception state temporarily. Use\nPyErr_GetHandledException()\nto get the exception state.Added in version 3.11.\n-\nvoid PyErr_GetExcInfo(PyObject **ptype, PyObject **pvalue, PyObject **ptraceback)\u00b6\n- Part of the Stable ABI since version 3.7.\nRetrieve the old-style representation of the exception info, as known from\nsys.exc_info()\n. This refers to an exception that was already caught, not to an exception that was freshly raised. Returns new references for the three objects, any of which may beNULL\n. Does not modify the exception info state. This function is kept for backwards compatibility. Prefer usingPyErr_GetHandledException()\n.Note\nThis function is not normally used by code that wants to handle exceptions. Rather, it can be used when code needs to save and restore the exception state temporarily. Use\nPyErr_SetExcInfo()\nto restore or clear the exception state.Added in version 3.3.\n-\nvoid PyErr_SetExcInfo(PyObject *type, PyObject *value, PyObject *traceback)\u00b6\n- Part of the Stable ABI since version 3.7.\nSet the exception info, as known from\nsys.exc_info()\n. This refers to an exception that was already caught, not to an exception that was freshly raised. This function steals the references of the arguments. To clear the exception state, passNULL\nfor all three arguments. This function is kept for backwards compatibility. Prefer usingPyErr_SetHandledException()\n.Note\nThis function is not normally used by code that wants to handle exceptions. Rather, it can be used when code needs to save and restore the exception state temporarily. 
Use\nPyErr_GetExcInfo()\nto read the exception state.Added in version 3.3.\nChanged in version 3.11: The\ntype\nandtraceback\narguments are no longer used and can be NULL. The interpreter now derives them from the exception instance (thevalue\nargument). The function still steals references of all three arguments.\nSignal Handling\u00b6\n-\nint PyErr_CheckSignals()\u00b6\n- Part of the Stable ABI.\nThis function interacts with Python\u2019s signal handling.\nIf the function is called from the main thread and under the main Python interpreter, it checks whether a signal has been sent to the processes and if so, invokes the corresponding signal handler. If the\nsignal\nmodule is supported, this can invoke a signal handler written in Python.The function attempts to handle all pending signals, and then returns\n0\n. However, if a Python signal handler raises an exception, the error indicator is set and the function returns-1\nimmediately (such that other pending signals may not have been handled yet: they will be on the nextPyErr_CheckSignals()\ninvocation).If the function is called from a non-main thread, or under a non-main Python interpreter, it does nothing and returns\n0\n.This function can be called by long-running C code that wants to be interruptible by user requests (such as by pressing Ctrl-C).\nNote\nThe default Python signal handler for\nSIGINT\nraises theKeyboardInterrupt\nexception.\n-\nvoid PyErr_SetInterrupt()\u00b6\n- Part of the Stable ABI.\nSimulate the effect of a\nSIGINT\nsignal arriving. This is equivalent toPyErr_SetInterruptEx(SIGINT)\n.Note\nThis function is async-signal-safe. It can be called without an attached thread state and from a C signal handler.\n-\nint PyErr_SetInterruptEx(int signum)\u00b6\n- Part of the Stable ABI since version 3.10.\nSimulate the effect of a signal arriving. 
The next time\nPyErr_CheckSignals()\nis called, the Python signal handler for the given signal number will be called.This function can be called by C code that sets up its own signal handling and wants Python signal handlers to be invoked as expected when an interruption is requested (for example when the user presses Ctrl-C to interrupt an operation).\nIf the given signal isn\u2019t handled by Python (it was set to\nsignal.SIG_DFL\norsignal.SIG_IGN\n), it will be ignored.If signum is outside of the allowed range of signal numbers,\n-1\nis returned. Otherwise,0\nis returned. The error indicator is never changed by this function.Note\nThis function is async-signal-safe. It can be called without an attached thread state and from a C signal handler.\nAdded in version 3.10.\n-\nint PySignal_SetWakeupFd(int fd)\u00b6\nThis utility function specifies a file descriptor to which the signal number is written as a single byte whenever a signal is received. fd must be non-blocking. It returns the previous such file descriptor.\nThe value\n-1\ndisables the feature; this is the initial state. This is equivalent tosignal.set_wakeup_fd()\nin Python, but without any error checking. fd should be a valid file descriptor. The function should only be called from the main thread.Changed in version 3.5: On Windows, the function now also supports socket handles.\nException Classes\u00b6\n-\nPyObject *PyErr_NewException(const char *name, PyObject *base, PyObject *dict)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis utility function creates and returns a new exception class. The name argument must be the name of the new exception, a C string of the form\nmodule.classname\n. The base and dict arguments are normallyNULL\n. 
This creates a class object derived fromException\n(accessible in C asPyExc_Exception\n).The\n__module__\nattribute of the new class is set to the first part (up to the last dot) of the name argument, and the class name is set to the last part (after the last dot). The base argument can be used to specify alternate base classes; it can either be only one class or a tuple of classes. The dict argument can be used to specify a dictionary of class variables and methods.\n-\nPyObject *PyErr_NewExceptionWithDoc(const char *name, const char *doc, PyObject *base, PyObject *dict)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nSame as\nPyErr_NewException()\n, except that the new exception class can easily be given a docstring: If doc is non-NULL\n, it will be used as the docstring for the exception class.Added in version 3.2.\n-\nint PyExceptionClass_Check(PyObject *ob)\u00b6\nReturn non-zero if ob is an exception class, zero otherwise. This function always succeeds.\n-\nconst char *PyExceptionClass_Name(PyObject *ob)\u00b6\n- Part of the Stable ABI since version 3.8.\nReturn\ntp_name\nof the exception class ob.\nException Objects\u00b6\n-\nint PyExceptionInstance_Check(PyObject *op)\u00b6\nReturn true if op is an instance of\nBaseException\n, false otherwise. This function always succeeds.\n-\nPyExceptionInstance_Class(op)\u00b6\nEquivalent to\nPy_TYPE(op)\n.\n-\nPyObject *PyException_GetTraceback(PyObject *ex)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the traceback associated with the exception as a new reference, as accessible from Python through the\n__traceback__\nattribute. If there is no traceback associated, this returnsNULL\n.\n-\nint PyException_SetTraceback(PyObject *ex, PyObject *tb)\u00b6\n- Part of the Stable ABI.\nSet the traceback associated with the exception to tb. Use\nPy_None\nto clear it.\n-\nPyObject *PyException_GetContext(PyObject *ex)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturn the context (another exception instance during whose handling ex was raised) associated with the exception as a new reference, as accessible from Python through the\n__context__\nattribute. If there is no context associated, this returnsNULL\n.\n-\nvoid PyException_SetContext(PyObject *ex, PyObject *ctx)\u00b6\n- Part of the Stable ABI.\nSet the context associated with the exception to ctx. Use\nNULL\nto clear it. There is no type check to make sure that ctx is an exception instance. This steals a reference to ctx.\n-\nPyObject *PyException_GetCause(PyObject *ex)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the cause (either an exception instance, or\nNone\n, set byraise ... from ...\n) associated with the exception as a new reference, as accessible from Python through the__cause__\nattribute.\n-\nvoid PyException_SetCause(PyObject *ex, PyObject *cause)\u00b6\n- Part of the Stable ABI.\nSet the cause associated with the exception to cause. Use\nNULL\nto clear it. There is no type check to make sure that cause is either an exception instance orNone\n. This steals a reference to cause.The\n__suppress_context__\nattribute is implicitly set toTrue\nby this function.\n-\nPyObject *PyException_GetArgs(PyObject *ex)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.12.\nReturn\nargs\nof exception ex.\n-\nvoid PyException_SetArgs(PyObject *ex, PyObject *args)\u00b6\n- Part of the Stable ABI since version 3.12.\nSet\nargs\nof exception ex to args.\n-\nPyObject *PyUnstable_Exc_PrepReraiseStar(PyObject *orig, PyObject *excs)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nImplement part of the interpreter\u2019s implementation of\nexcept*\n. orig is the original exception that was caught, and excs is the list of the exceptions that need to be raised. 
This list contains the unhandled part of orig, if any, as well as the exceptions that were raised from theexcept*\nclauses (so they have a different traceback from orig) and those that were reraised (and have the same traceback as orig). Return theExceptionGroup\nthat needs to be reraised in the end, orNone\nif there is nothing to reraise.Added in version 3.12.\nUnicode Exception Objects\u00b6\nThe following functions are used to create and modify Unicode exceptions from C.\n-\nPyObject *PyUnicodeDecodeError_Create(const char *encoding, const char *object, Py_ssize_t length, Py_ssize_t start, Py_ssize_t end, const char *reason)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a\nUnicodeDecodeError\nobject with the attributes encoding, object, length, start, end and reason. encoding and reason are UTF-8 encoded strings.\n-\nPyObject *PyUnicodeDecodeError_GetEncoding(PyObject *exc)\u00b6\n-\nPyObject *PyUnicodeEncodeError_GetEncoding(PyObject *exc)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the encoding attribute of the given exception object.\n-\nPyObject *PyUnicodeDecodeError_GetObject(PyObject *exc)\u00b6\n-\nPyObject *PyUnicodeEncodeError_GetObject(PyObject *exc)\u00b6\n-\nPyObject *PyUnicodeTranslateError_GetObject(PyObject *exc)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn the object attribute of the given exception object.\n-\nint PyUnicodeDecodeError_GetStart(PyObject *exc, Py_ssize_t *start)\u00b6\n-\nint PyUnicodeEncodeError_GetStart(PyObject *exc, Py_ssize_t *start)\u00b6\n-\nint PyUnicodeTranslateError_GetStart(PyObject *exc, Py_ssize_t *start)\u00b6\n- Part of the Stable ABI.\nGet the start attribute of the given exception object and place it into *start. start must not be\nNULL\n. Return0\non success,-1\non failure.If the\nUnicodeError.object\nis an empty sequence, the resulting start is0\n. 
Otherwise, it is clipped to[0, len(object) - 1]\n.See also\n-\nint PyUnicodeDecodeError_SetStart(PyObject *exc, Py_ssize_t start)\u00b6\n-\nint PyUnicodeEncodeError_SetStart(PyObject *exc, Py_ssize_t start)\u00b6\n-\nint PyUnicodeTranslateError_SetStart(PyObject *exc, Py_ssize_t start)\u00b6\n- Part of the Stable ABI.\nSet the start attribute of the given exception object to start. Return\n0\non success,-1\non failure.Note\nWhile passing a negative start does not raise an exception, the corresponding getters will not consider it as a relative offset.\n-\nint PyUnicodeDecodeError_GetEnd(PyObject *exc, Py_ssize_t *end)\u00b6\n-\nint PyUnicodeEncodeError_GetEnd(PyObject *exc, Py_ssize_t *end)\u00b6\n-\nint PyUnicodeTranslateError_GetEnd(PyObject *exc, Py_ssize_t *end)\u00b6\n- Part of the Stable ABI.\nGet the end attribute of the given exception object and place it into *end. end must not be\nNULL\n. Return0\non success,-1\non failure.If the\nUnicodeError.object\nis an empty sequence, the resulting end is0\n. Otherwise, it is clipped to[1, len(object)]\n.\n-\nint PyUnicodeDecodeError_SetEnd(PyObject *exc, Py_ssize_t end)\u00b6\n-\nint PyUnicodeEncodeError_SetEnd(PyObject *exc, Py_ssize_t end)\u00b6\n-\nint PyUnicodeTranslateError_SetEnd(PyObject *exc, Py_ssize_t end)\u00b6\n- Part of the Stable ABI.\nSet the end attribute of the given exception object to end. Return\n0\non success,-1\non failure.See also\n-\nPyObject *PyUnicodeDecodeError_GetReason(PyObject *exc)\u00b6\n-\nPyObject *PyUnicodeEncodeError_GetReason(PyObject *exc)\u00b6\n-\nPyObject *PyUnicodeTranslateError_GetReason(PyObject *exc)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReturn the reason attribute of the given exception object.\n-\nint PyUnicodeDecodeError_SetReason(PyObject *exc, const char *reason)\u00b6\n-\nint PyUnicodeEncodeError_SetReason(PyObject *exc, const char *reason)\u00b6\n-\nint PyUnicodeTranslateError_SetReason(PyObject *exc, const char *reason)\u00b6\n- Part of the Stable ABI.\nSet the reason attribute of the given exception object to reason. Return\n0\non success,-1\non failure.\nRecursion Control\u00b6\nThese two functions provide a way to perform safe recursive calls at the C level, both in the core and in extension modules. They are needed if the recursive code does not necessarily invoke Python code (which tracks its recursion depth automatically). They are also not needed for tp_call implementations because the call protocol takes care of recursion handling.\n-\nint Py_EnterRecursiveCall(const char *where)\u00b6\n- Part of the Stable ABI since version 3.9.\nMarks a point where a recursive C-level call is about to be performed.\nThe function then checks if the stack limit is reached. If this is the case, a\nRecursionError\nis set and a nonzero value is returned. Otherwise, zero is returned.where should be a UTF-8 encoded string such as\n\" in instance check\"\nto be concatenated to theRecursionError\nmessage caused by the recursion depth limit.See also\nThe\nPyUnstable_ThreadState_SetStackProtection()\nfunction.Changed in version 3.9: This function is now also available in the limited API.\n-\nvoid Py_LeaveRecursiveCall(void)\u00b6\n- Part of the Stable ABI since version 3.9.\nEnds a\nPy_EnterRecursiveCall()\n. Must be called once for each successful invocation ofPy_EnterRecursiveCall()\n.Changed in version 3.9: This function is now also available in the limited API.\nProperly implementing tp_repr\nfor container types requires\nspecial recursion handling. In addition to protecting the stack,\ntp_repr\nalso needs to track objects to prevent cycles. 
The\nfollowing two functions facilitate this functionality. Effectively,\nthese are the C equivalent to reprlib.recursive_repr()\n.\n-\nint Py_ReprEnter(PyObject *object)\u00b6\n- Part of the Stable ABI.\nCalled at the beginning of the\ntp_repr\nimplementation to detect cycles.\nIf the object has already been processed, the function returns a positive integer. In that case the\ntp_repr\nimplementation should return a string object indicating a cycle. As examples,\ndict\nobjects return\n{...}\nand\nlist\nobjects return\n[...]\n.\nThe function will return a negative integer if the recursion limit is reached. In that case the\ntp_repr\nimplementation should typically return\nNULL\n.\nOtherwise, the function returns zero and the\ntp_repr\nimplementation can continue normally.\n-\nvoid Py_ReprLeave(PyObject *object)\u00b6\n- Part of the Stable ABI.\nEnds a\nPy_ReprEnter()\n. Must be called once for each invocation of\nPy_ReprEnter()\nthat returns zero.\n-\nint Py_GetRecursionLimit(void)\u00b6\n- Part of the Stable ABI.\nGet the recursion limit for the current interpreter. It can be set with\nPy_SetRecursionLimit()\n. 
The recursion limit prevents the Python interpreter stack from growing infinitely. This function cannot fail, and the caller must hold an attached thread state.\nSee also\n-\nvoid Py_SetRecursionLimit(int new_limit)\u00b6\n- Part of the Stable ABI.\nSet the recursion limit for the current interpreter.\nThis function cannot fail, and the caller must hold an attached thread state.\nSee also\nException and warning types\u00b6\nAll standard Python exceptions and warning categories are available as global\nvariables whose names are PyExc_\nfollowed by the Python exception name.\nThese have the type PyObject*; they are all class objects.\nFor completeness, here are all the variables:\nException types\u00b6\nC name |\nPython name |\n|---|---|\n[table rows lost in extraction: one PyExc_* entry per standard exception type]\nAdded in version 3.3: PyExc_BlockingIOError\n, PyExc_BrokenPipeError\n,\nPyExc_ChildProcessError\n, PyExc_ConnectionError\n,\nPyExc_ConnectionAbortedError\n, PyExc_ConnectionRefusedError\n,\nPyExc_ConnectionResetError\n, PyExc_FileExistsError\n,\nPyExc_FileNotFoundError\n, PyExc_InterruptedError\n,\nPyExc_IsADirectoryError\n, PyExc_NotADirectoryError\n,\nPyExc_PermissionError\n, PyExc_ProcessLookupError\nand PyExc_TimeoutError\nwere introduced following PEP 3151.\nAdded in version 3.5: PyExc_StopAsyncIteration\nand PyExc_RecursionError\n.\nAdded in version 3.6: PyExc_ModuleNotFoundError\n.\nAdded in version 3.11: PyExc_BaseExceptionGroup\n.\nOSError aliases\u00b6\nThe following are compatibility aliases for PyExc_OSError\n.\nChanged in version 3.3: These aliases used to be separate exception types.\nC name |\nPython name |\nNotes |\n|---|---|---|\n[table rows lost in extraction]\nNotes:\nPyExc_WindowsError\nis only defined on 
Windows; protect code that\nuses this by testing that the preprocessor macro MS_WINDOWS\nis defined.\nWarning types\u00b6\nC name |\nPython name |\n|---|---|\n[table rows lost in extraction: one PyExc_*Warning entry per warning category]\nAdded in version 3.2: PyExc_ResourceWarning\n.\nAdded in version 3.10: PyExc_EncodingWarning\n.\nTracebacks\u00b6\n-\nPyTypeObject PyTraceBack_Type\u00b6\n- Part of the Stable ABI.\nType object for traceback objects. This is available as\ntypes.TracebackType\nin the Python layer.\n-\nint PyTraceBack_Check(PyObject *op)\u00b6\nReturn true if op is a traceback object, false otherwise. This function does not account for subtypes.\n-\nint PyTraceBack_Here(PyFrameObject *f)\u00b6\n- Part of the Stable ABI.\nReplace the\n__traceback__\nattribute on the current exception with a new traceback prepending f to the existing chain.\nCalling this function without an exception set is undefined behavior.\nThis function returns\n0\non success, and returns\n-1\nwith an exception set on failure.\n-\nint PyTraceBack_Print(PyObject *tb, PyObject *f)\u00b6\n- Part of the Stable ABI.\nWrite the traceback tb into the file f.\nThis function returns\n0\non success, and returns\n-1\nwith an exception set on failure.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 10174}
{"url": "https://docs.python.org/3/howto/perf_profiling.html", "title": "Python support for the Linux ", "content": "Python support for the Linux perf\nprofiler\u00b6\n- author:\nPablo Galindo\nThe Linux perf profiler\nis a very powerful tool that allows you to profile and obtain\ninformation about the performance of your application.\nperf\nalso has a very vibrant ecosystem of tools\nthat aid with the analysis of the data that it produces.\nThe main problem with using the perf\nprofiler with Python applications is that\nperf\nonly gets information about native symbols, that is, the names of\nfunctions and procedures written in C. 
This means that the names and file names\nof Python functions in your code will not appear in the output of perf\n.\nSince Python 3.12, the interpreter can run in a special mode that allows Python\nfunctions to appear in the output of the perf\nprofiler. When this mode is\nenabled, the interpreter will interpose a small piece of code compiled on the\nfly before the execution of every Python function and it will teach perf\nthe\nrelationship between this piece of code and the associated Python function using\nperf map files.\nNote\nSupport for the perf\nprofiler is currently only available for Linux on\nselect architectures. Check the output of the configure\nbuild step or\ncheck the output of python -m sysconfig | grep HAVE_PERF_TRAMPOLINE\nto see if your system is supported.\nFor example, consider the following script:\ndef foo(n):\nresult = 0\nfor _ in range(n):\nresult += 1\nreturn result\ndef bar(n):\nfoo(n)\ndef baz(n):\nbar(n)\nif __name__ == \"__main__\":\nbaz(1000000)\nWe can run perf\nto sample CPU stack traces at 9999 hertz:\n$ perf record -F 9999 -g -o perf.data python my_script.py\nThen we can use perf report\nto analyze the data:\n$ perf report --stdio -n -g\n# Children Self Samples Command Shared Object Symbol\n# ........ ........ ............ .......... .................. ..........................................\n#\n91.08% 0.00% 0 python.exe python.exe [.] 
_start\n|\n---_start\n|\n--90.71%--__libc_start_main\nPy_BytesMain\n|\n|--56.88%--pymain_run_python.constprop.0\n| |\n| |--56.13%--_PyRun_AnyFileObject\n| | _PyRun_SimpleFileObject\n| | |\n| | |--55.02%--run_mod\n| | | |\n| | | --54.65%--PyEval_EvalCode\n| | | _PyEval_EvalFrameDefault\n| | | PyObject_Vectorcall\n| | | _PyEval_Vector\n| | | _PyEval_EvalFrameDefault\n| | | PyObject_Vectorcall\n| | | _PyEval_Vector\n| | | _PyEval_EvalFrameDefault\n| | | PyObject_Vectorcall\n| | | _PyEval_Vector\n| | | |\n| | | |--51.67%--_PyEval_EvalFrameDefault\n| | | | |\n| | | | |--11.52%--_PyLong_Add\n| | | | | |\n| | | | | |--2.97%--_PyObject_Malloc\n...\nAs you can see, the Python functions are not shown in the output, only _PyEval_EvalFrameDefault\n(the function that evaluates the Python bytecode) shows up. Unfortunately that\u2019s not very useful because all Python\nfunctions use the same C function to evaluate bytecode so we cannot know which Python function corresponds to which\nbytecode-evaluating function.\nInstead, if we run the same experiment with perf\nsupport enabled we get:\n$ perf report --stdio -n -g\n# Children Self Samples Command Shared Object Symbol\n# ........ ........ ............ .......... .................. .....................................................................\n#\n90.58% 0.36% 1 python.exe python.exe [.] 
_start\n|\n---_start\n|\n--89.86%--__libc_start_main\nPy_BytesMain\n|\n|--55.43%--pymain_run_python.constprop.0\n| |\n| |--54.71%--_PyRun_AnyFileObject\n| | _PyRun_SimpleFileObject\n| | |\n| | |--53.62%--run_mod\n| | | |\n| | | --53.26%--PyEval_EvalCode\n| | | py:::/src/script.py\n| | | _PyEval_EvalFrameDefault\n| | | PyObject_Vectorcall\n| | | _PyEval_Vector\n| | | py::baz:/src/script.py\n| | | _PyEval_EvalFrameDefault\n| | | PyObject_Vectorcall\n| | | _PyEval_Vector\n| | | py::bar:/src/script.py\n| | | _PyEval_EvalFrameDefault\n| | | PyObject_Vectorcall\n| | | _PyEval_Vector\n| | | py::foo:/src/script.py\n| | | |\n| | | |--51.81%--_PyEval_EvalFrameDefault\n| | | | |\n| | | | |--13.77%--_PyLong_Add\n| | | | | |\n| | | | | |--3.26%--_PyObject_Malloc\nHow to enable perf\nprofiling support\u00b6\nperf\nprofiling support can be enabled either from the start using\nthe environment variable PYTHONPERFSUPPORT\nor the\n-X perf\noption,\nor dynamically using sys.activate_stack_trampoline()\nand\nsys.deactivate_stack_trampoline()\n.\nThe sys\nfunctions take precedence over the -X\noption,\nthe -X\noption takes precedence over the environment variable.\nExample, using the environment variable:\n$ PYTHONPERFSUPPORT=1 perf record -F 9999 -g -o perf.data python my_script.py\n$ perf report -g -i perf.data\nExample, using the -X\noption:\n$ perf record -F 9999 -g -o perf.data python -X perf my_script.py\n$ perf report -g -i perf.data\nExample, using the sys\nAPIs in file example.py\n:\nimport sys\nsys.activate_stack_trampoline(\"perf\")\ndo_profiled_stuff()\nsys.deactivate_stack_trampoline()\nnon_profiled_stuff()\n\u2026then:\n$ perf record -F 9999 -g -o perf.data python ./example.py\n$ perf report -g -i perf.data\nHow to obtain the best results\u00b6\nFor best results, Python should be compiled with\nCFLAGS=\"-fno-omit-frame-pointer -mno-omit-leaf-frame-pointer\"\nas this allows\nprofilers to unwind using only the frame pointer and not on DWARF debug\ninformation. 
This is because as the code that is interposed to allow perf\nsupport is dynamically generated it doesn\u2019t have any DWARF debugging information\navailable.\nYou can check if your system has been compiled with this flag by running:\n$ python -m sysconfig | grep 'no-omit-frame-pointer'\nIf you don\u2019t see any output it means that your interpreter has not been compiled with\nframe pointers and therefore it may not be able to show Python functions in the output\nof perf\n.\nHow to work without frame pointers\u00b6\nIf you are working with a Python interpreter that has been compiled without\nframe pointers, you can still use the perf\nprofiler, but the overhead will be\na bit higher because Python needs to generate unwinding information for every\nPython function call on the fly. Additionally, perf\nwill take more time to\nprocess the data because it will need to use the DWARF debugging information to\nunwind the stack and this is a slow process.\nTo enable this mode, you can use the environment variable\nPYTHON_PERF_JIT_SUPPORT\nor the -X perf_jit\noption,\nwhich will enable the JIT mode for the perf\nprofiler.\nNote\nDue to a bug in the perf\ntool, only perf\nversions higher than v6.8\nwill work with the JIT mode. The fix was also backported to the v6.7.2\nversion of the tool.\nNote that when checking the version of the perf\ntool (which can be done\nby running perf version\n) you must take into account that some distros\nadd some custom version numbers including a -\ncharacter. This means\nthat perf 6.7-3\nis not necessarily perf 6.7.3\n.\nWhen using the perf JIT mode, you need an extra step before you can run perf\nreport\n. 
You need to call the perf inject\ncommand to inject the JIT\ninformation into the perf.data\nfile.:\n$ perf record -F 9999 -g -k 1 --call-graph dwarf -o perf.data python -Xperf_jit my_script.py\n$ perf inject -i perf.data --jit --output perf.jit.data\n$ perf report -g -i perf.jit.data\nor using the environment variable:\n$ PYTHON_PERF_JIT_SUPPORT=1 perf record -F 9999 -g --call-graph dwarf -o perf.data python my_script.py\n$ perf inject -i perf.data --jit --output perf.jit.data\n$ perf report -g -i perf.jit.data\nperf inject --jit\ncommand will read perf.data\n,\nautomatically pick up the perf dump file that Python creates (in\n/tmp/perf-$PID.dump\n), and then create perf.jit.data\nwhich merges all the\nJIT information together. It should also create a lot of jitted-XXXX-N.so\nfiles in the current directory which are ELF images for all the JIT trampolines\nthat were created by Python.\nWarning\nWhen using --call-graph dwarf\n, the perf\ntool will take\nsnapshots of the stack of the process being profiled and save the\ninformation in the perf.data\nfile. By default, the size of the stack dump\nis 8192 bytes, but you can change the size by passing it after\na comma like --call-graph dwarf,16384\n.\nThe size of the stack dump is important because if the size is too small\nperf\nwill not be able to unwind the stack and the output will be\nincomplete. On the other hand, if the size is too big, then perf\nwon\u2019t\nbe able to sample the process as frequently as it would like as the overhead\nwill be higher.\nThe stack size is particularly important when profiling Python code compiled\nwith low optimization levels (like -O0\n), as these builds tend to have\nlarger stack frames. 
If you are compiling Python with -O0\nand not seeing\nPython functions in your profiling output, try increasing the stack dump\nsize to 65528 bytes (the maximum):\n$ perf record -F 9999 -g -k 1 --call-graph dwarf,65528 -o perf.data python -Xperf_jit my_script.py\nDifferent compilation flags can significantly impact stack sizes:\nBuilds with\n-O0\ntypically have much larger stack frames than those with-O1\nor higherAdding optimizations (\n-O1\n,-O2\n, etc.) typically reduces stack sizeFrame pointers (\n-fno-omit-frame-pointer\n) generally provide more reliable stack unwinding", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2222} +{"url": "https://docs.python.org/3/howto/free-threading-extensions.html", "title": "C API Extension Support for Free Threading", "content": "C API Extension Support for Free Threading\u00b6\nStarting with the 3.13 release, CPython has support for running with the global interpreter lock (GIL) disabled in a configuration called free threading. This document describes how to adapt C API extensions to support free threading.\nIdentifying the Free-Threaded Build in C\u00b6\nThe CPython C API exposes the Py_GIL_DISABLED\nmacro: in the free-threaded\nbuild it\u2019s defined to 1\n, and in the regular build it\u2019s not defined.\nYou can use it to enable code that only runs under the free-threaded build:\n#ifdef Py_GIL_DISABLED\n/* code that only runs in the free-threaded build */\n#endif\nNote\nOn Windows, this macro is not defined automatically, but must be specified\nto the compiler when building. 
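From Python code, the same build configuration is exposed through the sysconfig module; a sketch of a runtime check (the helper name is hypothetical):

```python
import sysconfig

def is_free_threaded() -> bool:
    # Py_GIL_DISABLED is 1 in a free-threaded build; in a regular build
    # the config variable is absent (None) or 0.
    return bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

print(is_free_threaded())
```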
The sysconfig.get_config_var()\nfunction\ncan be used to determine whether the current running interpreter had the\nmacro defined.\nModule Initialization\u00b6\nExtension modules need to explicitly indicate that they support running with the GIL disabled; otherwise importing the extension will raise a warning and enable the GIL at runtime.\nThere are two ways to indicate that an extension module supports running with the GIL disabled depending on whether the extension uses multi-phase or single-phase initialization.\nMulti-Phase Initialization\u00b6\nExtensions that use multi-phase initialization (i.e.,\nPyModuleDef_Init()\n) should add a Py_mod_gil\nslot in the\nmodule definition. If your extension supports older versions of CPython,\nyou should guard the slot with a PY_VERSION_HEX\ncheck.\nstatic struct PyModuleDef_Slot module_slots[] = {\n...\n#if PY_VERSION_HEX >= 0x030D0000\n{Py_mod_gil, Py_MOD_GIL_NOT_USED},\n#endif\n{0, NULL}\n};\nstatic struct PyModuleDef moduledef = {\nPyModuleDef_HEAD_INIT,\n.m_slots = module_slots,\n...\n};\nSingle-Phase Initialization\u00b6\nExtensions that use single-phase initialization (i.e.,\nPyModule_Create()\n) should call PyUnstable_Module_SetGIL()\nto\nindicate that they support running with the GIL disabled. 
The function is\nonly defined in the free-threaded build, so you should guard the call with\n#ifdef Py_GIL_DISABLED\nto avoid compilation errors in the regular build.\nstatic struct PyModuleDef moduledef = {\nPyModuleDef_HEAD_INIT,\n...\n};\nPyMODINIT_FUNC\nPyInit_mymodule(void)\n{\nPyObject *m = PyModule_Create(&moduledef);\nif (m == NULL) {\nreturn NULL;\n}\n#ifdef Py_GIL_DISABLED\nPyUnstable_Module_SetGIL(m, Py_MOD_GIL_NOT_USED);\n#endif\nreturn m;\n}\nGeneral API Guidelines\u00b6\nMost of the C API is thread-safe, but there are some exceptions.\nStruct Fields: Accessing fields in Python C API objects or structs directly is not thread-safe if the field may be concurrently modified.\nMacros: Accessor macros like\nPyList_GET_ITEM\n,PyList_SET_ITEM\n, and macros likePySequence_Fast_GET_SIZE\nthat use the object returned byPySequence_Fast()\ndo not perform any error checking or locking. These macros are not thread-safe if the container object may be modified concurrently.Borrowed References: C API functions that return borrowed references may not be thread-safe if the containing object is modified concurrently. See the section on borrowed references for more information.\nContainer Thread Safety\u00b6\nContainers like PyListObject\n,\nPyDictObject\n, and PySetObject\nperform internal locking\nin the free-threaded build. For example, the PyList_Append()\nwill\nlock the list before appending an item.\nPyDict_Next\n\u00b6\nA notable exception is PyDict_Next()\n, which does not lock the\ndictionary. You should use Py_BEGIN_CRITICAL_SECTION\nto protect\nthe dictionary while iterating over it if the dictionary may be concurrently\nmodified:\nPy_BEGIN_CRITICAL_SECTION(dict);\nPyObject *key, *value;\nPy_ssize_t pos = 0;\nwhile (PyDict_Next(dict, &pos, &key, &value)) {\n...\n}\nPy_END_CRITICAL_SECTION();\nBorrowed References\u00b6\nSome C API functions return borrowed references.\nThese APIs are not thread-safe if the containing object is modified\nconcurrently. 
For example, it\u2019s not safe to use PyList_GetItem()\nif the list may be modified concurrently.\nThe following table lists some borrowed reference APIs and their replacements that return strong references.\n| Borrowed reference API | Strong reference API |\n|---|---|\n| PyList_GetItem() | PyList_GetItemRef() |\n| PyDict_GetItem() | PyDict_GetItemRef() |\n| PyDict_GetItemWithError() | PyDict_GetItemRef() |\n| PyDict_Next() | none (see PyDict_Next) |\n| PyWeakref_GetObject() | PyWeakref_GetRef() |\nNot all APIs that return borrowed references are problematic. For\nexample, PyTuple_GetItem()\nis safe because tuples are immutable.\nSimilarly, not all uses of the above APIs are problematic. For example,\nPyDict_GetItem()\nis often used for parsing keyword argument\ndictionaries in function calls; those keyword argument dictionaries are\neffectively private (not accessible by other threads), so using borrowed\nreferences in that context is safe.\nSome of these functions were added in Python 3.13. You can use the pythoncapi-compat package to provide implementations of these functions for older Python versions.\nMemory Allocation APIs\u00b6\nPython\u2019s memory management C API provides functions in three different allocation domains: \u201craw\u201d, \u201cmem\u201d, and \u201cobject\u201d. For thread-safety, the free-threaded build requires that only Python objects are allocated using the object domain, and that all Python objects are allocated using that domain. This differs from the prior Python versions, where this was only a best practice and not a hard requirement.\nNote\nSearch for uses of PyObject_Malloc()\nin your\nextension and check that the allocated memory is used for Python objects.\nUse PyMem_Malloc()\nto allocate buffers instead of\nPyObject_Malloc()\n.\nThread State and GIL APIs\u00b6\nPython provides a set of functions and macros to manage thread state and the GIL, such as:\nPyEval_SaveThread()\nand PyEval_RestoreThread()\nPyGILState_Ensure()\nand PyGILState_Release()\nThese functions should still be used in the free-threaded build to manage\nthread state even when the GIL is disabled. 
For example, if you\ncreate a thread outside of Python, you must call PyGILState_Ensure()\nbefore calling into the Python API to ensure that the thread has a valid\nPython thread state.\nYou should continue to call PyEval_SaveThread()\nor\nPy_BEGIN_ALLOW_THREADS\naround blocking operations, such as I/O or\nlock acquisitions, to allow other threads to run the\ncyclic garbage collector.\nProtecting Internal Extension State\u00b6\nYour extension may have internal state that was previously protected by the GIL. You may need to add locking to protect this state. The approach will depend on your extension, but some common patterns include:\nCaches: global caches are a common source of shared state. Consider using a lock to protect the cache or disabling it in the free-threaded build if the cache is not critical for performance.\nGlobal State: global state may need to be protected by a lock or moved to thread local storage. C11 and C++11 provide the\nthread_local\nor_Thread_local\nfor thread-local storage.\nCritical Sections\u00b6\nIn the free-threaded build, CPython provides a mechanism called \u201ccritical sections\u201d to protect data that would otherwise be protected by the GIL. While extension authors may not interact with the internal critical section implementation directly, understanding their behavior is crucial when using certain C API functions or managing shared state in the free-threaded build.\nWhat Are Critical Sections?\u00b6\nConceptually, critical sections act as a deadlock avoidance layer built on\ntop of simple mutexes. Each thread maintains a stack of active critical\nsections. 
When a thread needs to acquire a lock associated with a critical\nsection (e.g., implicitly when calling a thread-safe C API function like\nPyDict_SetItem()\n, or explicitly using macros), it attempts to acquire\nthe underlying mutex.\nUsing Critical Sections\u00b6\nThe primary APIs for using critical sections are:\nPy_BEGIN_CRITICAL_SECTION\nandPy_END_CRITICAL_SECTION\n- For locking a single objectPy_BEGIN_CRITICAL_SECTION2\nandPy_END_CRITICAL_SECTION2\n- For locking two objects simultaneously\nThese macros must be used in matching pairs and must appear in the same C scope, since they establish a new local scope. These macros are no-ops in non-free-threaded builds, so they can be safely added to code that needs to support both build types.\nA common use of a critical section would be to lock an object while accessing an internal attribute of it. For example, if an extension type has an internal count field, you could use a critical section while reading or writing that field:\n// read the count, returns new reference to internal count value\nPyObject *result;\nPy_BEGIN_CRITICAL_SECTION(obj);\nresult = Py_NewRef(obj->count);\nPy_END_CRITICAL_SECTION();\nreturn result;\n// write the count, consumes reference from new_count\nPy_BEGIN_CRITICAL_SECTION(obj);\nobj->count = new_count;\nPy_END_CRITICAL_SECTION();\nHow Critical Sections Work\u00b6\nUnlike traditional locks, critical sections do not guarantee exclusive access throughout their entire duration. If a thread would block while holding a critical section (e.g., by acquiring another lock or performing I/O), the critical section is temporarily suspended\u2014all locks are released\u2014and then resumed when the blocking operation completes.\nThis behavior is similar to what happens with the GIL when a thread makes a blocking call. 
The key differences are:\nCritical sections operate on a per-object basis rather than globally\nCritical sections follow a stack discipline within each thread (the \u201cbegin\u201d and \u201cend\u201d macros enforce this since they must be paired and within the same scope)\nCritical sections automatically release and reacquire locks around potential blocking operations\nDeadlock Avoidance\u00b6\nCritical sections help avoid deadlocks in two ways:\nIf a thread tries to acquire a lock that\u2019s already held by another thread, it first suspends all of its active critical sections, temporarily releasing their locks\nWhen the blocking operation completes, only the top-most critical section is reacquired first\nThis means you cannot rely on nested critical sections to lock multiple objects\nat once, as the inner critical section may suspend the outer ones. Instead, use\nPy_BEGIN_CRITICAL_SECTION2\nto lock two objects simultaneously.\nNote that the locks described above are only PyMutex\nbased locks.\nThe critical section implementation does not know about or affect other locking\nmechanisms that might be in use, like POSIX mutexes. Also note that while\nblocking on any PyMutex\ncauses the critical sections to be\nsuspended, only the mutexes that are part of the critical sections are\nreleased. If PyMutex\nis used without a critical section, it will\nnot be released and therefore does not get the same deadlock avoidance.\nImportant Considerations\u00b6\nCritical sections may temporarily release their locks, allowing other threads to modify the protected data. Be careful about making assumptions about the state of the data after operations that might block.\nBecause locks can be temporarily released (suspended), entering a critical section does not guarantee exclusive access to the protected resource throughout the section\u2019s duration. 
If code within a critical section calls another function that blocks (e.g., acquires another lock, performs blocking I/O), all locks held by the thread via critical sections will be released. This is similar to how the GIL can be released during blocking calls.\nOnly the lock(s) associated with the most recently entered (top-most) critical section are guaranteed to be held at any given time. Locks for outer, nested critical sections might have been suspended.\nYou can lock at most two objects simultaneously with these APIs. If you need to lock more objects, you\u2019ll need to restructure your code.\nWhile critical sections will not deadlock if you attempt to lock the same object twice, they are less efficient than purpose-built reentrant locks for this use case.\nWhen using\nPy_BEGIN_CRITICAL_SECTION2\n, the order of the objects doesn\u2019t affect correctness (the implementation handles deadlock avoidance), but it\u2019s good practice to always lock objects in a consistent order.Remember that the critical section macros are primarily for protecting access to Python objects that might be involved in internal CPython operations susceptible to the deadlock scenarios described above. For protecting purely internal extension state, standard mutexes or other synchronization primitives might be more appropriate.\nBuilding Extensions for the Free-Threaded Build\u00b6\nC API extensions need to be built specifically for the free-threaded build.\nThe wheels, shared libraries, and binaries are indicated by a t\nsuffix.\npypa/manylinux supports the free-threaded build, with the\nt\nsuffix, such aspython3.13t\n.pypa/cibuildwheel supports the free-threaded build on Python 3.13 and 3.14. On Python 3.14, free-threaded wheels will be built by default. On Python 3.13, you will need to set CIBW_ENABLE to cpython-freethreading.\nLimited C API and Stable ABI\u00b6\nThe free-threaded build does not currently support the\nLimited C API or the stable ABI. 
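Since the Limited API is unavailable on the free-threaded build, the decision to request it can be computed from the build configuration. A minimal sketch, assuming a setuptools-based build: `ext_kwargs` and `limited_api_flag` are hypothetical names of mine, and the dict stands in for keyword arguments you would pass to `setuptools.Extension` (shown as a plain dict so the sketch does not require setuptools):

```python
import sysconfig

def limited_api_flag() -> bool:
    # Py_GIL_DISABLED is 1 on free-threaded builds, which do not support
    # the Limited C API; request py_limited_api only on other builds.
    return not sysconfig.get_config_var("Py_GIL_DISABLED")

# Hypothetical Extension kwargs; in a real setup.py you would forward
# py_limited_api=... to setuptools.Extension.
ext_kwargs = {"py_limited_api": limited_api_flag()}
```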
If you use\nsetuptools to build\nyour extension and currently set py_limited_api=True\nyou can use\npy_limited_api=not sysconfig.get_config_var(\"Py_GIL_DISABLED\")\nto opt out\nof the limited API when building with the free-threaded build.\nNote\nYou will need to build separate wheels specifically for the free-threaded build. If you currently use the stable ABI, you can continue to build a single wheel for multiple non-free-threaded Python versions.\nWindows\u00b6\nDue to a limitation of the official Windows installer, you will need to\nmanually define Py_GIL_DISABLED=1\nwhen building extensions from source.\nSee also\nPorting Extension Modules to Support Free-Threading: A community-maintained porting guide for extension authors.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3352} +{"url": "https://docs.python.org/3/howto/mro.html", "title": "The Python 2.3 Method Resolution Order", "content": "The Python 2.3 Method Resolution Order\u00b6\nNote\nThis is a historical document, provided as an appendix to the official documentation. The Method Resolution Order discussed here was introduced in Python 2.3, but it is still used in later versions \u2013 including Python 3.\n- Abstract:\nThis document is intended for Python programmers who want to understand the C3 Method Resolution Order used in Python 2.3. Although it is not intended for newbies, it is quite pedagogical with many worked out examples. I am not aware of other publicly available documents with the same scope, therefore it should be useful.\nDisclaimer:\nI donate this document to the Python Software Foundation, under the Python 2.3 license. As usual in these circumstances, I warn the reader that what follows should be correct, but I don\u2019t give any warranty. Use it at your own risk and peril!\nAcknowledgments:\nAll the people of the Python mailing list who sent me their support. 
Paul Foley who pointed out various imprecisions and made me to add the part on local precedence ordering. David Goodger for help with the formatting in reStructuredText. David Mertz for help with the editing. Finally, Guido van Rossum who enthusiastically added this document to the official Python 2.3 home-page.\nThe beginning\u00b6\nFelix qui potuit rerum cognoscere causas \u2013 Virgilius\nEverything started with a post by Samuele Pedroni to the Python development mailing list [1]. In his post, Samuele showed that the Python 2.2 method resolution order is not monotonic and he proposed to replace it with the C3 method resolution order. Guido agreed with his arguments and therefore now Python 2.3 uses C3. The C3 method itself has nothing to do with Python, since it was invented by people working on Dylan and it is described in a paper intended for lispers [2]. The present paper gives a (hopefully) readable discussion of the C3 algorithm for Pythonistas who want to understand the reasons for the change.\nFirst of all, let me point out that what I am going to say only applies to the new style classes introduced in Python 2.2: classic classes maintain their old method resolution order, depth first and then left to right. Therefore, there is no breaking of old code for classic classes; and even if in principle there could be breaking of code for Python 2.2 new style classes, in practice the cases in which the C3 resolution order differs from the Python 2.2 method resolution order are so rare that no real breaking of code is expected. Therefore:\nDon\u2019t be scared!\nMoreover, unless you make strong use of multiple inheritance and you have non-trivial hierarchies, you don\u2019t need to understand the C3 algorithm, and you can easily skip this paper. On the other hand, if you really want to know how multiple inheritance works, then this paper is for you. 
The good news is that things are not as complicated as you might expect.\nLet me begin with some basic definitions.\nGiven a class C in a complicated multiple inheritance hierarchy, it is a non-trivial task to specify the order in which methods are overridden, i.e. to specify the order of the ancestors of C.\nThe list of the ancestors of a class C, including the class itself, ordered from the nearest ancestor to the furthest, is called the class precedence list or the linearization of C.\nThe Method Resolution Order (MRO) is the set of rules that construct the linearization. In the Python literature, the idiom \u201cthe MRO of C\u201d is also used as a synonymous for the linearization of the class C.\nFor instance, in the case of single inheritance hierarchy, if C is a subclass of C1, and C1 is a subclass of C2, then the linearization of C is simply the list [C, C1 , C2]. However, with multiple inheritance hierarchies, the construction of the linearization is more cumbersome, since it is more difficult to construct a linearization that respects local precedence ordering and monotonicity.\nI will discuss the local precedence ordering later, but I can give the definition of monotonicity here. A MRO is monotonic when the following is true: if C1 precedes C2 in the linearization of C, then C1 precedes C2 in the linearization of any subclass of C. Otherwise, the innocuous operation of deriving a new class could change the resolution order of methods, potentially introducing very subtle bugs. Examples where this happens will be shown later.\nNot all classes admit a linearization. There are cases, in complicated hierarchies, where it is not possible to derive a class such that its linearization respects all the desired properties.\nHere I give an example of this situation. 
Consider the hierarchy\n>>> O = object\n>>> class X(O): pass\n>>> class Y(O): pass\n>>> class A(X,Y): pass\n>>> class B(Y,X): pass\nwhich can be represented with the following inheritance graph, where I\nhave denoted with O the object\nclass, which is the beginning of any\nhierarchy for new style classes:\n----------- | | | O | | / \\ | - X Y / | / | / | / |/ A B \\ / ?\nIn this case, it is not possible to derive a new class C from A and B, since X precedes Y in A, but Y precedes X in B, therefore the method resolution order would be ambiguous in C.\nPython 2.3 raises an exception in this situation (TypeError: MRO conflict among bases Y, X) forbidding the naive programmer from creating ambiguous hierarchies. Python 2.2 instead does not raise an exception, but chooses an ad hoc ordering (CABXYO in this case).\nThe C3 Method Resolution Order\u00b6\nLet me introduce a few simple notations which will be useful for the following discussion. I will use the shortcut notation:\nC1 C2 ... CN\nto indicate the list of classes [C1, C2, \u2026 , CN].\nThe head of the list is its first element:\nhead = C1\nwhereas the tail is the rest of the list:\ntail = C2 ... CN.\nI shall also use the notation:\nC + (C1 C2 ... CN) = C C1 C2 ... CN\nto denote the sum of the lists [C] + [C1, C2, \u2026 ,CN].\nNow I can explain how the MRO works in Python 2.3.\nConsider a class C in a multiple inheritance hierarchy, with C inheriting from the base classes B1, B2, \u2026 , BN. We want to compute the linearization L[C] of the class C. The rule is the following:\nthe linearization of C is the sum of C plus the merge of the linearizations of the parents and the list of the parents.\nIn symbolic notation:\nL[C(B1 ... BN)] = C + merge(L[B1] ... L[BN], B1 ... 
BN)\nIn particular, if C is the object\nclass, which has no parents, the\nlinearization is trivial:\nL[object] = object.\nHowever, in general one has to compute the merge according to the following prescription:\ntake the head of the first list, i.e L[B1][0]; if this head is not in the tail of any of the other lists, then add it to the linearization of C and remove it from the lists in the merge, otherwise look at the head of the next list and take it, if it is a good head. Then repeat the operation until all the class are removed or it is impossible to find good heads. In this case, it is impossible to construct the merge, Python 2.3 will refuse to create the class C and will raise an exception.\nThis prescription ensures that the merge operation preserves the ordering, if the ordering can be preserved. On the other hand, if the order cannot be preserved (as in the example of serious order disagreement discussed above) then the merge cannot be computed.\nThe computation of the merge is trivial if C has only one parent (single inheritance); in this case:\nL[C(B)] = C + merge(L[B],B) = C + L[B]\nHowever, in the case of multiple inheritance things are more cumbersome and I don\u2019t expect you can understand the rule without a couple of examples ;-)\nExamples\u00b6\nFirst example. 
Consider the following hierarchy:\n>>> O = object\n>>> class F(O): pass\n>>> class E(O): pass\n>>> class D(O): pass\n>>> class C(D,F): pass\n>>> class B(D,E): pass\n>>> class A(B,C): pass\nIn this case the inheritance graph can be drawn as:\n6 --- Level 3 | O | (more general) / --- \\ / | \\ | / | \\ | / | \\ | --- --- --- | Level 2 3 | D | 4| E | | F | 5 | --- --- --- | \\ \\ _ / | | \\ / \\ _ | | \\ / \\ | | --- --- | Level 1 1 | B | | C | 2 | --- --- | \\ / | \\ / \\ / --- Level 0 0 | A | (more specialized) ---\nThe linearizations of O,D,E and F are trivial:\nL[O] = O\nL[D] = D O\nL[E] = E O\nL[F] = F O\nThe linearization of B can be computed as:\nL[B] = B + merge(DO, EO, DE)\nWe see that D is a good head, therefore we take it and we are reduced to\ncompute merge(O,EO,E)\n. Now O is not a good head, since it is in the\ntail of the sequence EO. In this case the rule says that we have to\nskip to the next sequence. Then we see that E is a good head; we take\nit and we are reduced to compute merge(O,O)\nwhich gives O. Therefore:\nL[B] = B D E O\nUsing the same procedure one finds:\nL[C] = C + merge(DO,FO,DF)\n= C + D + merge(O,FO,F)\n= C + D + F + merge(O,O)\n= C D F O\nNow we can compute:\nL[A] = A + merge(BDEO,CDFO,BC)\n= A + B + merge(DEO,CDFO,C)\n= A + B + C + merge(DEO,DFO)\n= A + B + C + D + merge(EO,FO)\n= A + B + C + D + E + merge(O,FO)\n= A + B + C + D + E + F + merge(O,O)\n= A B C D E F O\nIn this example, the linearization is ordered in a pretty nice way according to the inheritance level, in the sense that lower levels (i.e. more specialized classes) have higher precedence (see the inheritance graph). 
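The merge prescription above can be run directly in modern Python. The sketch below is my own Python 3 translation of the `merge`/`mro` pair from the historical Python 2.2 script at the end of this document (the names are kept for continuity; the error message is mine), and for consistent hierarchies it should reproduce the linearizations that CPython's built-in C3 computes:

```python
# Python 3 sketch of the C3 merge prescription described above.
# Function names follow the historical script at the end of this document.

def merge(seqs):
    """Merge linearizations: repeatedly take a 'good head' (one not in the
    tail of any remaining list); fail if no good head can be found."""
    seqs = [list(s) for s in seqs]              # work on copies
    result = []
    while True:
        seqs = [s for s in seqs if s]           # drop exhausted lists
        if not seqs:
            return result
        for seq in seqs:                        # look for a good head
            head = seq[0]
            if not any(head in s[1:] for s in seqs):
                break
        else:
            raise TypeError("inconsistent hierarchy: no C3 linearization")
        result.append(head)
        for s in seqs:                          # remove the chosen head
            if s[0] == head:
                del s[0]

def mro(cls):
    """Class precedence list of cls: cls plus the merge of the parents'
    linearizations and the list of the parents."""
    return [cls] + merge([mro(base) for base in cls.__bases__]
                         + [list(cls.__bases__)])
```

For the first example, `mro(A)` yields A B C D E F followed by object, matching `A.__mro__` in Python 3.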
However, this is not the general case.\nI leave as an exercise for the reader to compute the linearization for my second example:\n>>> O = object\n>>> class F(O): pass\n>>> class E(O): pass\n>>> class D(O): pass\n>>> class C(D,F): pass\n>>> class B(E,D): pass\n>>> class A(B,C): pass\nThe only difference with the previous example is the change B(D,E) \u2013> B(E,D); however even such a little modification completely changes the ordering of the hierarchy:\n6 --- Level 3 | O | / --- \\ / | \\ / | \\ / | \\ --- --- --- Level 2 2 | E | 4 | D | | F | 5 --- --- --- \\ / \\ / \\ / \\ / \\ / \\ / --- --- Level 1 1 | B | | C | 3 --- --- \\ / \\ / --- Level 0 0 | A | ---\nNotice that the class E, which is in the second level of the hierarchy, precedes the class C, which is in the first level of the hierarchy, i.e. E is more specialized than C, even if it is in a higher level.\nA lazy programmer can obtain the MRO directly from Python 2.2, since in\nthis case it coincides with the Python 2.3 linearization. It is enough\nto invoke the mro()\nmethod of class A:\n>>> A.mro()\n[<class 'A'>, <class 'B'>, <class 'E'>,\n<class 'C'>, <class 'D'>, <class 'F'>,\n<class 'object'>]\nFinally, let me consider the example discussed in the first section, involving a serious order disagreement. In this case, it is straightforward to compute the linearizations of O, X, Y, A and B:\nL[O] = O L[X] = X O L[Y] = Y O L[A] = A X Y O L[B] = B Y X O\nHowever, it is impossible to compute the linearization for a class C that inherits from A and B:\nL[C] = C + merge(AXYO, BYXO, AB)\n= C + A + merge(XYO, BYXO, B)\n= C + A + B + merge(XYO, YXO)\nAt this point we cannot merge the lists XYO and YXO, since X is in the tail of YXO whereas Y is in the tail of XYO: therefore there are no good heads and the C3 algorithm stops. Python 2.3 raises an error and refuses to create the class C.\nBad Method Resolution Orders\u00b6\nA MRO is bad when it breaks such fundamental properties as local precedence ordering and monotonicity. 
In this section, I will show that both the MRO for classic classes and the MRO for new style classes in Python 2.2 are bad.\nIt is easier to start with the local precedence ordering. Consider the following example:\n>>> F=type('Food',(),{'remember2buy':'spam'})\n>>> E=type('Eggs',(F,),{'remember2buy':'eggs'})\n>>> G=type('GoodFood',(F,E),{}) # under Python 2.3 this is an error!\nwith inheritance diagram\nO | (buy spam) F | \\ | E (buy eggs) | / G (buy eggs or spam ?)\nWe see that class G inherits from F and E, with F before E: therefore we would expect the attribute G.remember2buy to be inherited by F.remember2buy and not by E.remember2buy: nevertheless Python 2.2 gives\n>>> G.remember2buy\n'eggs'\nThis is a breaking of local precedence ordering since the order in the local precedence list, i.e. the list of the parents of G, is not preserved in the Python 2.2 linearization of G:\nL[G,P22]= G E F object # F *follows* E\nOne could argue that the reason why F follows E in the Python 2.2 linearization is that F is less specialized than E, since F is the superclass of E; nevertheless the breaking of local precedence ordering is quite non-intuitive and error prone. This is particularly true since it is a different from old style classes:\n>>> class F: remember2buy='spam'\n>>> class E(F): remember2buy='eggs'\n>>> class G(F,E): pass\n>>> G.remember2buy\n'spam'\nIn this case the MRO is GFEF and the local precedence ordering is preserved.\nAs a general rule, hierarchies such as the previous one should be avoided, since it is unclear if F should override E or vice-versa. Python 2.3 solves the ambiguity by raising an exception in the creation of class G, effectively stopping the programmer from generating ambiguous hierarchies. The reason for that is that the C3 algorithm fails when the merge:\nmerge(FO,EFO,FE)\ncannot be computed, because F is in the tail of EFO and E is in the tail of FE.\nThe real solution is to design a non-ambiguous hierarchy, i.e. 
to derive G from E and F (the more specific first) and not from F and E; in this case the MRO is GEF without any doubt.\nO | F (spam) / | (eggs) E | \\ | G (eggs, no doubt)\nPython 2.3 forces the programmer to write good hierarchies (or, at least, less error-prone ones).\nOn a related note, let me point out that the Python 2.3 algorithm is smart enough to recognize obvious mistakes, as the duplication of classes in the list of parents:\n>>> class A(object): pass\n>>> class C(A,A): pass # error\nTraceback (most recent call last):\nFile \"\", line 1, in ?\nTypeError: duplicate base class A\nPython 2.2 (both for classic classes and new style classes) in this situation, would not raise any exception.\nFinally, I would like to point out two lessons we have learned from this example:\ndespite the name, the MRO determines the resolution order of attributes, not only of methods;\nthe default food for Pythonistas is spam ! (but you already knew that ;-)\nHaving discussed the issue of local precedence ordering, let me now consider the issue of monotonicity. My goal is to show that neither the MRO for classic classes nor that for Python 2.2 new style classes is monotonic.\nTo prove that the MRO for classic classes is non-monotonic is rather trivial, it is enough to look at the diamond diagram:\nC / \\ / \\ A B \\ / \\ / D\nOne easily discerns the inconsistency:\nL[B,P21] = B C # B precedes C : B's methods win\nL[D,P21] = D A C B C # B follows C : C's methods win!\nOn the other hand, there are no problems with the Python 2.2 and 2.3 MROs, they give both:\nL[D] = D A B C\nGuido points out in his essay [3] that the classic MRO is not so bad in\npractice, since one can typically avoids diamonds for classic classes.\nBut all new style classes inherit from object\n, therefore diamonds are\nunavoidable and inconsistencies shows up in every multiple inheritance\ngraph.\nThe MRO of Python 2.2 makes breaking monotonicity difficult, but not impossible. 
The following example, originally provided by Samuele Pedroni, shows that the MRO of Python 2.2 is non-monotonic:\n>>> class A(object): pass\n>>> class B(object): pass\n>>> class C(object): pass\n>>> class D(object): pass\n>>> class E(object): pass\n>>> class K1(A,B,C): pass\n>>> class K2(D,B,E): pass\n>>> class K3(D,A): pass\n>>> class Z(K1,K2,K3): pass\nHere are the linearizations according to the C3 MRO (the reader should verify these linearizations as an exercise and draw the inheritance diagram ;-)\nL[A] = A O\nL[B] = B O\nL[C] = C O\nL[D] = D O\nL[E] = E O\nL[K1]= K1 A B C O\nL[K2]= K2 D B E O\nL[K3]= K3 D A O\nL[Z] = Z K1 K2 K3 D A B C E O\nPython 2.2 gives exactly the same linearizations for A, B, C, D, E, K1, K2 and K3, but a different linearization for Z:\nL[Z,P22] = Z K1 K3 A K2 D B C E O\nIt is clear that this linearization is wrong, since A comes before D whereas in the linearization of K3 A comes after D. In other words, in K3 methods derived by D override methods derived by A, but in Z, which still is a subclass of K3, methods derived by A override methods derived by D! This is a violation of monotonicity. Moreover, the Python 2.2 linearization of Z is also inconsistent with local precedence ordering, since the local precedence list of the class Z is [K1, K2, K3] (K2 precedes K3), whereas in the linearization of Z K2 follows K3. These problems explain why the 2.2 rule has been dismissed in favor of the C3 rule.\nThe end\u00b6\nThis section is for the impatient reader, who skipped all the previous sections and jumped immediately to the end. This section is for the lazy programmer too, who didn\u2019t want to exercise her/his brain. 
Finally, it is for the programmer with some hubris, otherwise s/he would not be reading a paper on the C3 method resolution order in multiple inheritance hierarchies ;-) These three virtues taken all together (and not separately) deserve a prize: the prize is a short Python 2.2 script that allows you to compute the 2.3 MRO without risk to your brain. Simply change the last line to play with the various examples I have discussed in this paper:\n#\n\"\"\"C3 algorithm by Samuele Pedroni (with readability enhanced by me).\"\"\"\n\nclass __metaclass__(type):\n    \"All classes are metamagically modified to be nicely printed\"\n    __repr__ = lambda cls: cls.__name__\n\nclass ex_2:\n    \"Serious order disagreement\" #From Guido\n    class O: pass\n    class X(O): pass\n    class Y(O): pass\n    class A(X,Y): pass\n    class B(Y,X): pass\n    try:\n        class Z(A,B): pass #creates Z(A,B) in Python 2.2\n    except TypeError:\n        pass # Z(A,B) cannot be created in Python 2.3\n\nclass ex_5:\n    \"My first example\"\n    class O: pass\n    class F(O): pass\n    class E(O): pass\n    class D(O): pass\n    class C(D,F): pass\n    class B(D,E): pass\n    class A(B,C): pass\n\nclass ex_6:\n    \"My second example\"\n    class O: pass\n    class F(O): pass\n    class E(O): pass\n    class D(O): pass\n    class C(D,F): pass\n    class B(E,D): pass\n    class A(B,C): pass\n\nclass ex_9:\n    \"Difference between Python 2.2 MRO and C3\" #From Samuele\n    class O: pass\n    class A(O): pass\n    class B(O): pass\n    class C(O): pass\n    class D(O): pass\n    class E(O): pass\n    class K1(A,B,C): pass\n    class K2(D,B,E): pass\n    class K3(D,A): pass\n    class Z(K1,K2,K3): pass\n\ndef merge(seqs):\n    print '\\n\\nCPL[%s]=%s' % (seqs[0][0],seqs),\n    res = []; i=0\n    while 1:\n        nonemptyseqs=[seq for seq in seqs if seq]\n        if not nonemptyseqs: return res\n        i+=1; print '\\n',i,'round: candidates...',\n        for seq in nonemptyseqs: # find merge candidates among seq heads\n            cand = seq[0]; print ' ',cand,\n            nothead=[s for s in nonemptyseqs if cand in s[1:]]\n            if nothead: cand=None #reject candidate\n            else: break\n        if not cand: raise \"Inconsistent hierarchy\"\n        res.append(cand)\n        for seq in nonemptyseqs: # remove cand\n            if seq[0] == cand: del seq[0]\n\ndef mro(C):\n    \"Compute the class precedence list (mro) according to C3\"\n    return merge([[C]]+map(mro,C.__bases__)+[list(C.__bases__)])\n\ndef print_mro(C):\n    print '\\nMRO[%s]=%s' % (C,mro(C))\n    print '\\nP22 MRO[%s]=%s' % (C,C.mro())\n\nprint_mro(ex_9.Z)\n#\nThat\u2019s all folks,\nenjoy !", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4731}
+{"url": "https://docs.python.org/3/howto/free-threading-python.html", "title": "Python support for free threading", "content": "Python support for free threading\u00b6\nStarting with the 3.13 release, CPython has support for a build of Python called free threading where the global interpreter lock (GIL) is disabled. Free-threaded execution allows for full utilization of the available processing power by running threads in parallel on available CPU cores. While not all software will benefit from this automatically, programs designed with threading in mind will run faster on multi-core hardware.\nSome third-party packages, in particular ones with an extension module, may not be ready for use in a free-threaded build, and will re-enable the GIL.\nThis document describes the implications of free threading for Python code. 
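The C3 merge procedure quoted in the MRO record above translates naturally to Python 3. The following is an illustrative sketch (the function and class names here are my own, not the paper's), checked against CPython's built-in linearization:

```python
# Python 3 sketch of the C3 linearization described in the MRO paper above.
# An illustrative re-implementation, not the paper's Python 2.2 script.

def merge(seqs):
    """C3 merge: repeatedly pick a head that appears in no tail."""
    result = []
    while True:
        seqs = [s for s in seqs if s]          # drop exhausted sequences
        if not seqs:
            return result
        for s in seqs:                         # look for a good head
            cand = s[0]
            if not any(cand in other[1:] for other in seqs):
                break                          # cand is in no tail: take it
        else:
            raise TypeError("inconsistent hierarchy")
        result.append(cand)
        for s in seqs:                         # remove cand from the heads
            if s[0] == cand:
                del s[0]

def mro(cls):
    """Class precedence list of cls according to C3."""
    return merge([[cls]] + [mro(b) for b in cls.__bases__] + [list(cls.__bases__)])

class O: pass
class X(O): pass
class Y(O): pass
class A(X, Y): pass

print(mro(A) == list(A.__mro__))   # True: agrees with CPython's own MRO
```

The for/else plus `any(...)` replaces the paper's `nothead` list; the behavior is the same, but an inconsistent hierarchy raises `TypeError` instead of a Python 2 string exception.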
See C API Extension Support for Free Threading for information on how to write C extensions that support the free-threaded build.\nSee also\nPEP 703 \u2013 Making the Global Interpreter Lock Optional in CPython for an overall description of free-threaded Python.\nInstallation\u00b6\nStarting with Python 3.13, the official macOS and Windows installers optionally support installing free-threaded Python binaries. The installers are available at https://www.python.org/downloads/.\nFor information on other platforms, see the Installing a Free-Threaded Python, a community-maintained installation guide for installing free-threaded Python.\nWhen building CPython from source, the --disable-gil\nconfigure option\nshould be used to build a free-threaded Python interpreter.\nIdentifying free-threaded Python\u00b6\nTo check if the current interpreter supports free-threading, python -VV\nand sys.version\ncontain \u201cfree-threading build\u201d.\nThe new sys._is_gil_enabled()\nfunction can be used to check whether\nthe GIL is actually disabled in the running process.\nThe sysconfig.get_config_var(\"Py_GIL_DISABLED\")\nconfiguration variable can\nbe used to determine whether the build supports free threading. If the variable\nis set to 1\n, then the build supports free threading. This is the recommended\nmechanism for decisions related to the build configuration.\nThe global interpreter lock in free-threaded Python\u00b6\nFree-threaded builds of CPython support optionally running with the GIL enabled\nat runtime using the environment variable PYTHON_GIL\nor\nthe command-line option -X gil\n.\nThe GIL may also automatically be enabled when importing a C-API extension module that is not explicitly marked as supporting free threading. 
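The identification checks described above can be bundled into one helper. A sketch: `free_threading_status` is a hypothetical name, and on interpreters before 3.13 (where `sys._is_gil_enabled()` does not exist) it assumes the GIL is enabled:

```python
import sys
import sysconfig

def free_threading_status():
    """Summarize free-threading support using the documented checks (sketch)."""
    # Build-time: Py_GIL_DISABLED is 1 on builds that support free threading.
    build = sysconfig.get_config_var("Py_GIL_DISABLED") == 1
    # Runtime: sys._is_gil_enabled() exists on 3.13+; fall back to True
    # (GIL enabled) on older interpreters.
    gil = getattr(sys, "_is_gil_enabled", lambda: True)()
    return {"build_supports_free_threading": build, "gil_enabled": bool(gil)}

print(free_threading_status())
```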
A warning will be printed in this case.\nIn addition to individual package documentation, the following websites track the status of popular packages support for free threading:\nThread safety\u00b6\nThe free-threaded build of CPython aims to provide similar thread-safety\nbehavior at the Python level to the default GIL-enabled build. Built-in\ntypes like dict\n, list\n, and set\nuse internal locks\nto protect against concurrent modifications in ways that behave similarly to\nthe GIL. However, Python has not historically guaranteed specific behavior for\nconcurrent modifications to these built-in types, so this should be treated\nas a description of the current implementation, not a guarantee of current or\nfuture behavior.\nNote\nIt\u2019s recommended to use the threading.Lock\nor other synchronization\nprimitives instead of relying on the internal locks of built-in types, when\npossible.\nKnown limitations\u00b6\nThis section describes known limitations of the free-threaded CPython build.\nImmortalization\u00b6\nIn the free-threaded build, some objects are immortal. Immortal objects are not deallocated and have reference counts that are never modified. This is done to avoid reference count contention that would prevent efficient multi-threaded scaling.\nAs of the 3.14 release, immortalization is limited to:\nCode constants: numeric literals, string literals, and tuple literals composed of other constants.\nStrings interned by\nsys.intern()\n.\nFrame objects\u00b6\nIt is not safe to access frame.f_locals\nfrom a frame\nobject if that frame is currently executing in another thread, and doing so may\ncrash the interpreter.\nIterators\u00b6\nIt is generally not thread-safe to access the same iterator object from multiple threads concurrently, and threads may see duplicate or missing elements.\nSingle-threaded performance\u00b6\nThe free-threaded build has additional overhead when executing Python code compared to the default GIL-enabled build. 
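The recommendation above, preferring an explicit threading.Lock over the internal locks of built-in types, looks like this in practice (a minimal sketch; the counter target and thread count are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        # Guard the read-modify-write explicitly rather than relying on
        # the internal locking of built-in types.
        with lock:
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 on any build, GIL-enabled or free-threaded
```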
The amount of overhead depends on the workload and hardware. On the pyperformance benchmark suite, the average overhead ranges from about 1% on macOS aarch64 to 8% on x86-64 Linux systems.\nBehavioral changes\u00b6\nThis section describes CPython behavioural changes with the free-threaded build.\nContext variables\u00b6\nIn the free-threaded build, the flag thread_inherit_context\nis set to true by default which causes threads created with\nthreading.Thread\nto start with a copy of the\nContext()\nof the caller of\nstart()\n. In the default GIL-enabled build, the flag\ndefaults to false so threads start with an\nempty Context()\n.\nWarning filters\u00b6\nIn the free-threaded build, the flag context_aware_warnings\nis set to true by default. In the default GIL-enabled build, the flag defaults\nto false. If the flag is true then the warnings.catch_warnings\ncontext manager uses a context variable for warning filters. If the flag is\nfalse then catch_warnings\nmodifies the global filters list,\nwhich is not thread-safe. See the warnings\nmodule for more details.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1331} +{"url": "https://docs.python.org/3/reference/compound_stmts.html", "title": "Compound statements", "content": "8. Compound statements\u00b6\nCompound statements contain (groups of) other statements; they affect or control the execution of those other statements in some way. In general, compound statements span multiple lines, although in simple incarnations a whole compound statement may be contained in one line.\nThe if\n, while\nand for\nstatements implement\ntraditional control flow constructs. try\nspecifies exception\nhandlers and/or cleanup code for a group of statements, while the\nwith\nstatement allows the execution of initialization and\nfinalization code around a block of code. 
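Context managers such as warnings.catch_warnings, whose free-threaded scoping is discussed in the record above, are driven by the with statement; a minimal single-threaded sketch (it behaves the same on both builds):

```python
import warnings

def collect_visible():
    # Filter changes made inside catch_warnings are scoped to the context:
    # a context variable on free-threaded builds, the global list otherwise.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        warnings.warn("suppressed")        # filtered out, never shown
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        warnings.warn("visible")           # recorded by the context manager
        return [str(w.message) for w in caught]

print(collect_visible())  # ['visible']
```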
Function and class definitions are\nalso syntactically compound statements.\nA compound statement consists of one or more \u2018clauses.\u2019 A clause consists of a\nheader and a \u2018suite.\u2019 The clause headers of a particular compound statement are\nall at the same indentation level. Each clause header begins with a uniquely\nidentifying keyword and ends with a colon. A suite is a group of statements\ncontrolled by a clause. A suite can be one or more semicolon-separated simple\nstatements on the same line as the header, following the header\u2019s colon, or it\ncan be one or more indented statements on subsequent lines. Only the latter\nform of a suite can contain nested compound statements; the following is illegal,\nmostly because it wouldn\u2019t be clear to which if\nclause a following\nelse\nclause would belong:\nif test1: if test2: print(x)\nAlso note that the semicolon binds tighter than the colon in this context, so\nthat in the following example, either all or none of the print()\ncalls are\nexecuted:\nif x < y < z: print(x); print(y); print(z)\nSummarizing:\ncompound_stmt:if_stmt\n|while_stmt\n|for_stmt\n|try_stmt\n|with_stmt\n|match_stmt\n|funcdef\n|classdef\n|async_with_stmt\n|async_for_stmt\n|async_funcdef\nsuite:stmt_list\nNEWLINE | NEWLINE INDENTstatement\n+ DEDENT statement:stmt_list\nNEWLINE |compound_stmt\nstmt_list:simple_stmt\n(\";\"simple_stmt\n)* [\";\"]\nNote that statements always end in a NEWLINE\npossibly followed by a\nDEDENT\n. Also note that optional continuation clauses always begin with a\nkeyword that cannot start a statement, thus there are no ambiguities (the\n\u2018dangling else\n\u2019 problem is solved in Python by requiring nested\nif\nstatements to be indented).\nThe formatting of the grammar rules in the following sections places each clause on a separate line for clarity.\n8.1. 
The if\nstatement\u00b6\nThe if\nstatement is used for conditional execution:\nif_stmt: \"if\"assignment_expression\n\":\"suite\n(\"elif\"assignment_expression\n\":\"suite\n)* [\"else\" \":\"suite\n]\nIt selects exactly one of the suites by evaluating the expressions one by one\nuntil one is found to be true (see section Boolean operations for the definition of\ntrue and false); then that suite is executed (and no other part of the\nif\nstatement is executed or evaluated). If all expressions are\nfalse, the suite of the else\nclause, if present, is executed.\n8.2. The while\nstatement\u00b6\nThe while\nstatement is used for repeated execution as long as an\nexpression is true:\nwhile_stmt: \"while\"assignment_expression\n\":\"suite\n[\"else\" \":\"suite\n]\nThis repeatedly tests the expression and, if it is true, executes the first\nsuite; if the expression is false (which may be the first time it is tested) the\nsuite of the else\nclause, if present, is executed and the loop\nterminates.\nA break\nstatement executed in the first suite terminates the loop\nwithout executing the else\nclause\u2019s suite. A continue\nstatement executed in the first suite skips the rest of the suite and goes back\nto testing the expression.\n8.3. The for\nstatement\u00b6\nThe for\nstatement is used to iterate over the elements of a sequence\n(such as a string, tuple or list) or other iterable object:\nfor_stmt: \"for\"target_list\n\"in\"starred_expression_list\n\":\"suite\n[\"else\" \":\"suite\n]\nThe starred_expression_list\nexpression is evaluated\nonce; it should yield an iterable object. An iterator is\ncreated for that iterable. The first item provided by the iterator is then\nassigned to the target list using the standard rules for assignments\n(see Assignment statements), and the suite is executed. This repeats for each\nitem provided by the iterator. 
When the iterator is exhausted,\nthe suite in the else\nclause,\nif present, is executed, and the loop terminates.\nA break\nstatement executed in the first suite terminates the loop\nwithout executing the else\nclause\u2019s suite. A continue\nstatement executed in the first suite skips the rest of the suite and continues\nwith the next item, or with the else\nclause if there is no next\nitem.\nThe for-loop makes assignments to the variables in the target list. This overwrites all previous assignments to those variables including those made in the suite of the for-loop:\nfor i in range(10):\n    print(i)\n    i = 5              # this will not affect the for-loop\n                       # because i will be overwritten with the next\n                       # index in the range\nNames in the target list are not deleted when the loop is finished, but if the\nsequence is empty, they will not have been assigned to at all by the loop. Hint:\nthe built-in type range()\nrepresents immutable arithmetic sequences of integers.\nFor instance, iterating range(3)\nsuccessively yields 0, 1, and then 2.\nChanged in version 3.11: Starred elements are now allowed in the expression list.\n8.4. The try\nstatement\u00b6\nThe try\nstatement specifies exception handlers and/or cleanup code\nfor a group of statements:\ntry_stmt:try1_stmt\n|try2_stmt\n|try3_stmt\ntry1_stmt: \"try\" \":\"suite\n(\"except\" [expression\n[\"as\"identifier\n]] \":\"suite\n)+ [\"else\" \":\"suite\n] [\"finally\" \":\"suite\n] try2_stmt: \"try\" \":\"suite\n(\"except\" \"*\"expression\n[\"as\"identifier\n] \":\"suite\n)+ [\"else\" \":\"suite\n] [\"finally\" \":\"suite\n] try3_stmt: \"try\" \":\"suite\n\"finally\" \":\"suite\nAdditional information on exceptions can be found in section Exceptions,\nand information on using the raise\nstatement to generate exceptions\nmay be found in section The raise statement.\nChanged in version 3.14: Support for optionally dropping grouping parentheses when using multiple exception types. See PEP 758.\n8.4.1. 
except\nclause\u00b6\nThe except\nclause(s) specify one or more exception handlers. When no\nexception occurs in the try\nclause, no exception handler is executed.\nWhen an exception occurs in the try\nsuite, a search for an exception\nhandler is started. This search inspects the except\nclauses in turn\nuntil one is found that matches the exception.\nAn expression-less except\nclause, if present, must be last;\nit matches any exception.\nFor an except\nclause with an expression, the\nexpression must evaluate to an exception type or a tuple of exception types. Parentheses\ncan be dropped if multiple exception types are provided and the as\nclause is not used.\nThe raised exception matches an except\nclause whose expression evaluates\nto the class or a non-virtual base class of the exception object,\nor to a tuple that contains such a class.\nIf no except\nclause matches the exception,\nthe search for an exception handler\ncontinues in the surrounding code and on the invocation stack. [1]\nIf the evaluation of an expression\nin the header of an except\nclause raises an exception,\nthe original search for a handler is canceled and a search starts for\nthe new exception in the surrounding code and on the call stack (it is treated\nas if the entire try\nstatement raised the exception).\nWhen a matching except\nclause is found,\nthe exception is assigned to the target\nspecified after the as\nkeyword in that except\nclause,\nif present, and the except\nclause\u2019s suite is executed.\nAll except\nclauses must have an executable block.\nWhen the end of this block is reached, execution continues\nnormally after the entire try\nstatement.\n(This means that if two nested handlers exist for the same exception,\nand the exception occurs in the try\nclause of the inner handler,\nthe outer handler will not handle the exception.)\nWhen an exception has been assigned using as target\n, it is cleared at the\nend of the except\nclause. 
This is as if\nexcept E as N:\n    foo\nwas translated to\nexcept E as N:\n    try:\n        foo\n    finally:\n        del N\nThis means the exception must be assigned to a different name to be able to\nrefer to it after the except\nclause.\nExceptions are cleared because with the\ntraceback attached to them, they form a reference cycle with the stack frame,\nkeeping all locals in that frame alive until the next garbage collection occurs.\nBefore an except\nclause\u2019s suite is executed,\nthe exception is stored in the sys\nmodule, where it can be accessed\nfrom within the body of the except\nclause by calling\nsys.exception()\n. When leaving an exception handler, the exception\nstored in the sys\nmodule is reset to its previous value:\n>>> print(sys.exception())\nNone\n>>> try:\n...     raise TypeError\n... except:\n...     print(repr(sys.exception()))\n...     try:\n...         raise ValueError\n...     except:\n...         print(repr(sys.exception()))\n...     print(repr(sys.exception()))\n...\nTypeError()\nValueError()\nTypeError()\n>>> print(sys.exception())\nNone\n8.4.2. except*\nclause\u00b6\nThe except*\nclause(s) specify one or more handlers for groups of\nexceptions (BaseExceptionGroup\ninstances). A try\nstatement\ncan have either except\nor except*\nclauses, but not both.\nThe exception type for matching is mandatory in the case of except*\n,\nso except*:\nis a syntax error. The type is interpreted as in the case of\nexcept\n, but matching is performed on the exceptions contained in the\ngroup that is being handled. A TypeError\nis raised if a matching\ntype is a subclass of BaseExceptionGroup\n, because that would have\nambiguous semantics.\nWhen an exception group is raised in the try block, each except*\nclause splits (see split()\n) it into the subgroups\nof matching and non-matching exceptions. 
If the matching subgroup is not empty,\nit becomes the handled exception (the value returned from sys.exception()\n)\nand assigned to the target of the except*\nclause (if there is one).\nThen, the body of the except*\nclause executes. If the non-matching\nsubgroup is not empty, it is processed by the next except*\nin the\nsame manner. This continues until all exceptions in the group have been matched,\nor the last except*\nclause has run.\nAfter all except*\nclauses execute, the group of unhandled exceptions\nis merged with any exceptions that were raised or re-raised from within\nexcept*\nclauses. This merged exception group propagates on:\n>>> try:\n...     raise ExceptionGroup(\"eg\",\n...         [ValueError(1), TypeError(2), OSError(3), OSError(4)])\n... except* TypeError as e:\n...     print(f'caught {type(e)} with nested {e.exceptions}')\n... except* OSError as e:\n...     print(f'caught {type(e)} with nested {e.exceptions}')\n...\ncaught <class 'ExceptionGroup'> with nested (TypeError(2),)\ncaught <class 'ExceptionGroup'> with nested (OSError(3), OSError(4))\n  + Exception Group Traceback (most recent call last):\n  |   File \"<stdin>\", line 2, in <module>\n  |     raise ExceptionGroup(\"eg\",\n  |         [ValueError(1), TypeError(2), OSError(3), OSError(4)])\n  | ExceptionGroup: eg (1 sub-exception)\n  +-+---------------- 1 ----------------\n    | ValueError: 1\n    +------------------------------------\nIf the exception raised from the try\nblock is not an exception group\nand its type matches one of the except*\nclauses, it is caught and\nwrapped by an exception group with an empty message string. This ensures that the\ntype of the target e\nis consistently BaseExceptionGroup\n:\n>>> try:\n...     raise BlockingIOError\n... except* BlockingIOError as e:\n...     print(repr(e))\n...\nExceptionGroup('', (BlockingIOError(),))\nbreak\n, continue\nand return\ncannot appear in an except*\nclause.\n8.4.3. 
else\nclause\u00b6\nThe optional else\nclause is executed if the control flow leaves the\ntry\nsuite, no exception was raised, and no return\n,\ncontinue\n, or break\nstatement was executed. Exceptions in\nthe else\nclause are not handled by the preceding except\nclauses.\n8.4.4. finally\nclause\u00b6\nIf finally\nis present, it specifies a \u2018cleanup\u2019 handler. The\ntry\nclause is executed, including any except\nand else\nclauses.\nIf an exception occurs in any of the clauses and is not handled,\nthe exception is temporarily saved.\nThe finally\nclause is executed. If there is a saved exception\nit is re-raised at the end of the finally\nclause.\nIf the finally\nclause raises another exception, the saved exception\nis set as the context of the new exception.\nIf the finally\nclause executes a return\n, break\nor continue\nstatement, the saved exception is discarded. For example,\nthis function returns 42.\ndef f():\n    try:\n        1/0\n    finally:\n        return 42\nThe exception information is not available to the program during execution of\nthe finally\nclause.\nWhen a return\n, break\nor continue\nstatement is\nexecuted in the try\nsuite of a try\n\u2026finally\nstatement, the finally\nclause is also executed \u2018on the way out.\u2019\nThe return value of a function is determined by the last return\nstatement executed. Since the finally\nclause always executes, a\nreturn\nstatement executed in the finally\nclause will\nalways be the last one executed. The following function returns \u2018finally\u2019.\ndef foo():\n    try:\n        return 'try'\n    finally:\n        return 'finally'\nChanged in version 3.8: Prior to Python 3.8, a continue\nstatement was illegal in the\nfinally\nclause due to a problem with the implementation.\nChanged in version 3.14: The compiler emits a SyntaxWarning\nwhen a return\n,\nbreak\nor continue\nappears in a finally\nblock (see PEP 765).\n8.5. 
The with\nstatement\u00b6\nThe with\nstatement is used to wrap the execution of a block with\nmethods defined by a context manager (see section With Statement Context Managers).\nThis allows common try\n\u2026except\n\u2026finally\nusage patterns to be encapsulated for convenient reuse.\nwith_stmt: \"with\" ( \"(\"with_stmt_contents\n\",\"? \")\" |with_stmt_contents\n) \":\"suite\nwith_stmt_contents:with_item\n(\",\"with_item\n)* with_item:expression\n[\"as\"target\n]\nThe execution of the with\nstatement with one \u201citem\u201d proceeds as follows:\n1. The context expression (the expression given in the with_item\n) is evaluated to obtain a context manager.\n2. The context manager\u2019s __enter__()\nis loaded for later use.\n3. The context manager\u2019s __exit__()\nis loaded for later use.\n4. The context manager\u2019s __enter__()\nmethod is invoked.\n5. If a target was included in the with\nstatement, the return value from __enter__()\nis assigned to it.\nNote\nThe with\nstatement guarantees that if the __enter__()\nmethod returns without an error, then __exit__()\nwill always be called. Thus, if an error occurs during the assignment to the target list, it will be treated the same as an error occurring within the suite would be. See step 7 below.\n6. The suite is executed.\n7. The context manager\u2019s __exit__()\nmethod is invoked. If an exception caused the suite to be exited, its type, value, and traceback are passed as arguments to __exit__()\n. Otherwise, three None\narguments are supplied.\nIf the suite was exited due to an exception, and the return value from the __exit__()\nmethod was false, the exception is reraised. 
If the return value was true, the exception is suppressed, and execution continues with the statement following the with\nstatement.\nIf the suite was exited for any reason other than an exception, the return value from\n__exit__()\nis ignored, and execution proceeds at the normal location for the kind of exit that was taken.\nThe following code:\nwith EXPRESSION as TARGET:\n    SUITE\nis semantically equivalent to:\nmanager = (EXPRESSION)\nenter = manager.__enter__\nexit = manager.__exit__\nvalue = enter()\nhit_except = False\ntry:\n    TARGET = value\n    SUITE\nexcept:\n    hit_except = True\n    if not exit(*sys.exc_info()):\n        raise\nfinally:\n    if not hit_except:\n        exit(None, None, None)\nexcept that implicit special method lookup is used\nfor __enter__()\nand __exit__()\n.\nWith more than one item, the context managers are processed as if multiple\nwith\nstatements were nested:\nwith A() as a, B() as b:\n    SUITE\nis semantically equivalent to:\nwith A() as a:\n    with B() as b:\n        SUITE\nYou can also write multi-item context managers in multiple lines if the items are surrounded by parentheses. For example:\nwith (\n    A() as a,\n    B() as b,\n):\n    SUITE\nChanged in version 3.1: Support for multiple context expressions.\nChanged in version 3.10: Support for using grouping parentheses to break the statement in multiple lines.\n8.6. The match\nstatement\u00b6\nAdded in version 3.10.\nThe match statement is used for pattern matching. Syntax:\nmatch_stmt: 'match'subject_expr\n\":\" NEWLINE INDENTcase_block\n+ DEDENT subject_expr: `!star_named_expression` \",\" `!star_named_expressions`? | `!named_expression` case_block: 'case'patterns\n[guard\n] \":\" `!block`\nNote\nThis section uses single quotes to denote soft keywords.\nPattern matching takes a pattern as input (following case\n) and a subject\nvalue (following match\n). The pattern (which may contain subpatterns) is\nmatched against the subject value. 
The outcomes are:\nA match success or failure (also termed a pattern success or failure).\nPossible binding of matched values to a name. The prerequisites for this are further discussed below.\nThe match\nand case\nkeywords are soft keywords.\nSee also\n8.6.1. Overview\u00b6\nHere\u2019s an overview of the logical flow of a match statement:\nThe subject expression\nsubject_expr\nis evaluated and a resulting subject value obtained. If the subject expression contains a comma, a tuple is constructed using the standard rules.\nEach pattern in a\ncase_block\nis attempted to match with the subject value. The specific rules for success or failure are described below. The match attempt can also bind some or all of the standalone names within the pattern. The precise pattern binding rules vary per pattern type and are specified below. Name bindings made during a successful pattern match outlive the executed block and can be used after the match statement.\nNote\nDuring failed pattern matches, some subpatterns may succeed. Do not rely on bindings being made for a failed match. Conversely, do not rely on variables remaining unchanged after a failed match. The exact behavior is dependent on implementation and may vary. This is an intentional decision made to allow different implementations to add optimizations.\nIf the pattern succeeds, the corresponding guard (if present) is evaluated. In this case all name bindings are guaranteed to have happened.\nIf the guard evaluates as true or is missing, the\nblock\ninside case_block\nis executed.\nOtherwise, the next\ncase_block\nis attempted as described above.\nIf there are no further case blocks, the match statement is completed.\nNote\nUsers should generally never rely on a pattern being evaluated. Depending on implementation, the interpreter may cache values or use other optimizations which skip repeated evaluations.\nA sample match statement:\n>>> flag = False\n>>> match (100, 200):\n...     case (100, 300):  # Mismatch: 200 != 300\n...         print('Case 1')\n...     case (100, 200) if flag:  # Successful match, but guard fails\n...         print('Case 2')\n...     case (100, y):  # Matches and binds y to 200\n...         print(f'Case 3, y: {y}')\n...     case _:  # Pattern not attempted\n...         print('Case 4, I match anything!')\n...\nCase 3, y: 200\nIn this case, if flag\nis a guard. Read more about that in the next section.\n8.6.2. Guards\u00b6\nguard: \"if\" `!named_expression`\nA guard\n(which is part of the case\n) must succeed for code inside\nthe case\nblock to execute. It takes the form: if\nfollowed by an\nexpression.\nThe logical flow of a case\nblock with a guard\nfollows:\nCheck that the pattern in the\ncase\nblock succeeded. If the pattern failed, the guard\nis not evaluated and the next case\nblock is checked.\nIf the pattern succeeded, evaluate the\nguard\n.\nIf the guard\ncondition evaluates as true, the case block is selected.\nIf the guard\ncondition evaluates as false, the case block is not selected.\nIf the guard\nraises an exception during evaluation, the exception bubbles up.\nGuards are allowed to have side effects as they are expressions. Guard evaluation must proceed from the first to the last case block, one at a time, skipping case blocks whose pattern(s) don\u2019t all succeed. (I.e., guard evaluation must happen in order.) Guard evaluation must stop once a case block is selected.\n8.6.3. Irrefutable Case Blocks\u00b6\nAn irrefutable case block is a match-all case block. A match statement may have at most one irrefutable case block, and it must be last.\nA case block is considered irrefutable if it has no guard and its pattern is irrefutable. A pattern is considered irrefutable if we can prove from its syntax alone that it will always succeed. Only the following patterns are irrefutable:\nAS Patterns whose left-hand side is irrefutable\nOR Patterns containing at least one irrefutable pattern\nCapture Patterns\nWildcard Patterns\nparenthesized irrefutable patterns\n8.6.4. 
Patterns\u00b6\nNote\nThis section uses grammar notations beyond standard EBNF:\nthe notation\nSEP.RULE+\nis shorthand for RULE (SEP RULE)*\nthe notation\n!RULE\nis shorthand for a negative lookahead assertion\nThe top-level syntax for patterns\nis:\npatterns:open_sequence_pattern\n|pattern\npattern:as_pattern\n|or_pattern\nclosed_pattern: |literal_pattern\n|capture_pattern\n|wildcard_pattern\n|value_pattern\n|group_pattern\n|sequence_pattern\n|mapping_pattern\n|class_pattern\nThe descriptions below will include a description \u201cin simple terms\u201d of what a pattern does for illustration purposes (credits to Raymond Hettinger for a document that inspired most of the descriptions). Note that these descriptions are purely for illustration purposes and may not reflect the underlying implementation. Furthermore, they do not cover all valid forms.\n8.6.4.1. OR Patterns\u00b6\nAn OR pattern is two or more patterns separated by vertical\nbars |\n. Syntax:\nor_pattern: \"|\".closed_pattern\n+\nOnly the final subpattern may be irrefutable, and each subpattern must bind the same set of names to avoid ambiguity.\nAn OR pattern matches each of its subpatterns in turn to the subject value, until one succeeds. The OR pattern is then considered successful. Otherwise, if none of the subpatterns succeed, the OR pattern fails.\nIn simple terms, P1 | P2 | ...\nwill try to match P1\n, if it fails it will try to\nmatch P2\n, succeeding immediately if any succeeds, failing otherwise.\n8.6.4.2. AS Patterns\u00b6\nAn AS pattern matches an OR pattern on the left of the as\nkeyword against a subject. Syntax:\nas_pattern:or_pattern\n\"as\"capture_pattern\nIf the OR pattern fails, the AS pattern fails. Otherwise, the AS pattern binds\nthe subject to the name on the right of the as keyword and succeeds.\ncapture_pattern\ncannot be a _\n.\nIn simple terms P as NAME\nwill match with P\n, and on success it will\nset NAME = <subject>\n.\n8.6.4.3. 
Literal Patterns\u00b6\nA literal pattern corresponds to most literals in Python. Syntax:\nliteral_pattern:signed_number\n|signed_number\n\"+\" NUMBER |signed_number\n\"-\" NUMBER |strings\n| \"None\" | \"True\" | \"False\" signed_number: [\"-\"] NUMBER\nThe rule strings\nand the token NUMBER\nare defined in the\nstandard Python grammar. Triple-quoted strings are\nsupported. Raw strings and byte strings are supported. f-strings\nand t-strings are not supported.\nThe forms signed_number '+' NUMBER\nand signed_number '-' NUMBER\nare\nfor expressing complex numbers; they require a real number\non the left and an imaginary number on the right. E.g. 3 + 4j\n.\nIn simple terms, LITERAL\nwill succeed only if <subject> == LITERAL\n. For\nthe singletons None\n, True\nand False\n, the is\noperator is used.\n8.6.4.4. Capture Patterns\u00b6\nA capture pattern binds the subject value to a name. Syntax:\ncapture_pattern: !'_' NAME\nA single underscore _\nis not a capture pattern (this is what !'_'\nexpresses). It is instead treated as a\nwildcard_pattern\n.\nIn a given pattern, a given name can only be bound once. E.g.\ncase x, x: ...\nis invalid while case [x] | x: ...\nis allowed.\nCapture patterns always succeed. The binding follows scoping rules\nestablished by the assignment expression operator in PEP 572; the\nname becomes a local variable in the closest containing function scope unless\nthere\u2019s an applicable global\nor nonlocal\nstatement.\nIn simple terms NAME\nwill always succeed and it will set NAME = <subject>\n.\n8.6.4.5. Wildcard Patterns\u00b6\nA wildcard pattern always succeeds (matches anything) and binds no name. Syntax:\nwildcard_pattern: '_'\n_\nis a soft keyword within any pattern,\nbut only within patterns. It is an identifier, as usual, even within\nmatch\nsubject expressions, guard\ns, and case\nblocks.\nIn simple terms, _\nwill always succeed.\n8.6.4.6. Value Patterns\u00b6\nA value pattern represents a named value in Python. 
Syntax:\nvalue_pattern:attr\nattr:name_or_attr\n\".\" NAME name_or_attr:attr\n| NAME\nThe dotted name in the pattern is looked up using standard Python\nname resolution rules. The pattern succeeds if the\nvalue found compares equal to the subject value (using the ==\nequality\noperator).\nIn simple terms NAME1.NAME2\nwill succeed only if <subject> == NAME1.NAME2\nNote\nIf the same value occurs multiple times in the same match statement, the interpreter may cache the first value found and reuse it rather than repeat the same lookup. This cache is strictly tied to a given execution of a given match statement.\n8.6.4.7. Group Patterns\u00b6\nA group pattern allows users to add parentheses around patterns to emphasize the intended grouping. Otherwise, it has no additional syntax. Syntax:\ngroup_pattern: \"(\" pattern\n\")\"\nIn simple terms (P)\nhas the same effect as P\n.\n8.6.4.8. Sequence Patterns\u00b6\nA sequence pattern contains several subpatterns to be matched against sequence elements. The syntax is similar to the unpacking of a list or tuple.\nsequence_pattern: \"[\" [maybe_sequence_pattern\n] \"]\" | \"(\" [open_sequence_pattern\n] \")\" open_sequence_pattern:maybe_star_pattern\n\",\" [maybe_sequence_pattern\n] maybe_sequence_pattern: \",\".maybe_star_pattern\n+ \",\"? maybe_star_pattern:star_pattern\n|pattern\nstar_pattern: \"*\" (capture_pattern\n|wildcard_pattern\n)\nThere is no difference if parentheses or square brackets\nare used for sequence patterns (i.e. (...)\nvs [...]\n).\nNote\nA single pattern enclosed in parentheses without a trailing comma\n(e.g. (3 | 4)\n) is a group pattern.\nWhile a single pattern enclosed in square brackets (e.g. [3 | 4]\n) is\nstill a sequence pattern.\nAt most one star subpattern may be in a sequence pattern. The star subpattern may occur in any position. 
If no star subpattern is present, the sequence pattern is a fixed-length sequence pattern; otherwise it is a variable-length sequence pattern.\nThe following is the logical flow for matching a sequence pattern against a subject value:\nIf the subject value is not a sequence [2], the sequence pattern fails.\nIf the subject value is an instance of str, bytes or bytearray, the sequence pattern fails.\nThe subsequent steps depend on whether the sequence pattern is fixed or variable-length.\nIf the sequence pattern is fixed-length:\nIf the length of the subject sequence is not equal to the number of subpatterns, the sequence pattern fails.\nSubpatterns in the sequence pattern are matched to their corresponding items in the subject sequence from left to right. Matching stops as soon as a subpattern fails. If all subpatterns succeed in matching their corresponding item, the sequence pattern succeeds.\nOtherwise, if the sequence pattern is variable-length:\nIf the length of the subject sequence is less than the number of non-star subpatterns, the sequence pattern fails.\nThe leading non-star subpatterns are matched to their corresponding items as for fixed-length sequences.\nIf the previous step succeeds, the star subpattern matches a list formed of the remaining subject items, excluding the remaining items corresponding to non-star subpatterns following the star subpattern.\nRemaining non-star subpatterns are matched to their corresponding subject items, as for a fixed-length sequence.\nNote\nThe length of the subject sequence is obtained via len() (i.e. via the __len__() protocol). This length may be cached by the interpreter in a similar manner as value patterns.\nIn simple terms [P1, P2, P3, \u2026 , P<N>] matches only if all the following happens:\ncheck <subject> is a sequence\nlen(subject) == <N>\nP1 matches <subject>[0] (note that this match can also bind names)\nP2 matches <subject>[1] (note that this match can also bind names)\n\u2026 and so on for the corresponding pattern/element.\n8.6.4.9. 
Mapping Patterns\u00b6\nA mapping pattern contains one or more key-value patterns. The syntax is similar to the construction of a dictionary. Syntax:\nmapping_pattern: \"{\" [items_pattern] \"}\"\nitems_pattern: \",\".key_value_pattern+ \",\"?\nkey_value_pattern: (literal_pattern | value_pattern) \":\" pattern | double_star_pattern\ndouble_star_pattern: \"**\" capture_pattern\nAt most one double star pattern may be in a mapping pattern. The double star pattern must be the last subpattern in the mapping pattern.\nDuplicate keys in mapping patterns are disallowed. Duplicate literal keys will raise a SyntaxError. Two keys that otherwise have the same value will raise a ValueError at runtime.\nThe following is the logical flow for matching a mapping pattern against a subject value:\nIf the subject value is not a mapping [3], the mapping pattern fails.\nIf every key given in the mapping pattern is present in the subject mapping, and the pattern for each key matches the corresponding item of the subject mapping, the mapping pattern succeeds.\nIf duplicate keys are detected in the mapping pattern, the pattern is considered invalid. A SyntaxError is raised for duplicate literal values; a ValueError is raised for named keys of the same value.\nNote\nKey-value pairs are matched using the two-argument form of the mapping subject\u2019s get() method. Matched key-value pairs must already be present in the mapping, and not created on-the-fly via __missing__() or __getitem__().\nIn simple terms {KEY1: P1, KEY2: P2, ... } matches only if all the following happens:\ncheck <subject> is a mapping\nKEY1 in <subject>\nP1 matches <subject>[KEY1]\n\u2026 and so on for the corresponding KEY/pattern pair.\n8.6.4.10. Class Patterns\u00b6\nA class pattern represents a class and its positional and keyword arguments (if any). Syntax:\nclass_pattern: name_or_attr \"(\" [pattern_arguments \",\"?] 
\")\" pattern_arguments:positional_patterns\n[\",\"keyword_patterns\n] |keyword_patterns\npositional_patterns: \",\".pattern\n+ keyword_patterns: \",\".keyword_pattern\n+ keyword_pattern: NAME \"=\"pattern\nThe same keyword should not be repeated in class patterns.\nThe following is the logical flow for matching a class pattern against a subject value:\nIf\nname_or_attr\nis not an instance of the builtintype\n, raiseTypeError\n.If the subject value is not an instance of\nname_or_attr\n(tested viaisinstance()\n), the class pattern fails.If no pattern arguments are present, the pattern succeeds. Otherwise, the subsequent steps depend on whether keyword or positional argument patterns are present.\nFor a number of built-in types (specified below), a single positional subpattern is accepted which will match the entire subject; for these types keyword patterns also work as for other types.\nIf only keyword patterns are present, they are processed as follows, one by one:\nThe keyword is looked up as an attribute on the subject.\nIf this raises an exception other than\nAttributeError\n, the exception bubbles up.If this raises\nAttributeError\n, the class pattern has failed.Else, the subpattern associated with the keyword pattern is matched against the subject\u2019s attribute value. 
If this fails, the class pattern fails; if this succeeds, the match proceeds to the next keyword.\nIf all keyword patterns succeed, the class pattern succeeds.\nIf any positional patterns are present, they are converted to keyword patterns using the __match_args__ attribute on the class name_or_attr before matching:\nThe equivalent of getattr(cls, \"__match_args__\", ()) is called.\nIf this raises an exception, the exception bubbles up.\nIf the returned value is not a tuple, the conversion fails and TypeError is raised.\nIf there are more positional patterns than len(cls.__match_args__), TypeError is raised.\nOtherwise, positional pattern i is converted to a keyword pattern using __match_args__[i] as the keyword. __match_args__[i] must be a string; if not, TypeError is raised.\nIf there are duplicate keywords, TypeError is raised.\nOnce all positional patterns have been converted to keyword patterns, the match proceeds as if there were only keyword patterns.\nFor the following built-in types the handling of positional subpatterns is different: bool, bytearray, bytes, dict, float, frozenset, int, list, set, str, tuple.\nThese classes accept a single positional argument, and the pattern there is matched against the whole object rather than an attribute. For example int(0|1) matches the value 0, but not the value 0.0.\nIn simple terms CLS(P1, attr=P2) matches only if the following happens:\nisinstance(<subject>, CLS)\nconvert P1 to a keyword pattern using CLS.__match_args__\nFor each keyword argument attr=P2:\nhasattr(<subject>, \"attr\")\nP2 matches <subject>.attr\n\u2026 and so on for the corresponding keyword argument/pattern pair.\n8.7. 
Function definitions\u00b6\nA function definition defines a user-defined function object (see section The standard type hierarchy):\nfuncdef: [decorators\n] \"def\"funcname\n[type_params\n] \"(\" [parameter_list\n] \")\" [\"->\"expression\n] \":\"suite\ndecorators:decorator\n+ decorator: \"@\"assignment_expression\nNEWLINE parameter_list:defparameter\n(\",\"defparameter\n)* \",\" \"/\" [\",\" [parameter_list_no_posonly\n]] |parameter_list_no_posonly\nparameter_list_no_posonly:defparameter\n(\",\"defparameter\n)* [\",\" [parameter_list_starargs\n]] |parameter_list_starargs\nparameter_list_starargs: \"*\" [star_parameter\n] (\",\"defparameter\n)* [\",\" [parameter_star_kwargs\n]] | \"*\" (\",\"defparameter\n)+ [\",\" [parameter_star_kwargs\n]] |parameter_star_kwargs\nparameter_star_kwargs: \"**\"parameter\n[\",\"] parameter:identifier\n[\":\"expression\n] star_parameter:identifier\n[\":\" [\"*\"]expression\n] defparameter:parameter\n[\"=\"expression\n] funcname:identifier\nA function definition is an executable statement. Its execution binds the function name in the current local namespace to a function object (a wrapper around the executable code for the function). This function object contains a reference to the current global namespace as the global namespace to be used when the function is called.\nThe function definition does not execute the function body; this gets executed only when the function is called. [4]\nA function definition may be wrapped by one or more decorator expressions. Decorator expressions are evaluated when the function is defined, in the scope that contains the function definition. The result must be a callable, which is invoked with the function object as the only argument. The returned value is bound to the function name instead of the function object. Multiple decorators are applied in nested fashion. 
For example, the following code\n@f1(arg)\n@f2\ndef func(): pass\nis roughly equivalent to\ndef func(): pass\nfunc = f1(arg)(f2(func))\nexcept that the original function is not temporarily bound to the name func\n.\nChanged in version 3.9: Functions may be decorated with any valid\nassignment_expression\n. Previously, the grammar was\nmuch more restrictive; see PEP 614 for details.\nA list of type parameters may be given in square brackets\nbetween the function\u2019s name and the opening parenthesis for its parameter list.\nThis indicates to static type checkers that the function is generic. At runtime,\nthe type parameters can be retrieved from the function\u2019s\n__type_params__\nattribute. See Generic functions for more.\nChanged in version 3.12: Type parameter lists are new in Python 3.12.\nWhen one or more parameters have the form parameter =\nexpression, the function is said to have \u201cdefault parameter values.\u201d For a\nparameter with a default value, the corresponding argument may be\nomitted from a call, in which\ncase the parameter\u2019s default value is substituted. If a parameter has a default\nvalue, all following parameters up until the \u201c*\n\u201d must also have a default\nvalue \u2014 this is a syntactic restriction that is not expressed by the grammar.\nDefault parameter values are evaluated from left to right when the function\ndefinition is executed. This means that the expression is evaluated once, when\nthe function is defined, and that the same \u201cpre-computed\u201d value is used for each\ncall. This is especially important to understand when a default parameter value is a\nmutable object, such as a list or a dictionary: if the function modifies the\nobject (e.g. by appending an item to a list), the default parameter value is in effect\nmodified. This is generally not what was intended. 
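The mutable-default pitfall described above can be seen in a short sketch (append_item is a hypothetical name):

```python
def append_item(item, bucket=[]):  # the default list is created once, at def time
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] -- the same list object is reused across calls
```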
A way around this is to use\nNone\nas the default, and explicitly test for it in the body of the function,\ne.g.:\ndef whats_on_the_telly(penguin=None):\nif penguin is None:\npenguin = []\npenguin.append(\"property of the zoo\")\nreturn penguin\nFunction call semantics are described in more detail in section Calls. A\nfunction call always assigns values to all parameters mentioned in the parameter\nlist, either from positional arguments, from keyword arguments, or from default\nvalues. If the form \u201c*identifier\n\u201d is present, it is initialized to a tuple\nreceiving any excess positional parameters, defaulting to the empty tuple.\nIf the form \u201c**identifier\n\u201d is present, it is initialized to a new\nordered mapping receiving any excess keyword arguments, defaulting to a\nnew empty mapping of the same type. Parameters after \u201c*\n\u201d or\n\u201c*identifier\n\u201d are keyword-only parameters and may only be passed\nby keyword arguments. Parameters before \u201c/\n\u201d are positional-only parameters\nand may only be passed by positional arguments.\nChanged in version 3.8: The /\nfunction parameter syntax may be used to indicate positional-only\nparameters. See PEP 570 for details.\nParameters may have an annotation of the form \u201c: expression\n\u201d\nfollowing the parameter name. Any parameter may have an annotation, even those of the form\n*identifier\nor **identifier\n. (As a special case, parameters of the form\n*identifier\nmay have an annotation \u201c: *expression\n\u201d.) Functions may have \u201creturn\u201d annotation of\nthe form \u201c-> expression\n\u201d after the parameter list. These annotations can be\nany valid Python expression. The presence of annotations does not change the\nsemantics of a function. See Annotations for more information on annotations.\nChanged in version 3.11: Parameters of the form \u201c*identifier\n\u201d may have an annotation\n\u201c: *expression\n\u201d. 
See PEP 646.\nIt is also possible to create anonymous functions (functions not bound to a\nname), for immediate use in expressions. This uses lambda expressions, described in\nsection Lambdas. Note that the lambda expression is merely a shorthand for a\nsimplified function definition; a function defined in a \u201cdef\n\u201d\nstatement can be passed around or assigned to another name just like a function\ndefined by a lambda expression. The \u201cdef\n\u201d form is actually more powerful\nsince it allows the execution of multiple statements and annotations.\nProgrammer\u2019s note: Functions are first-class objects. A \u201cdef\n\u201d statement\nexecuted inside a function definition defines a local function that can be\nreturned or passed around. Free variables used in the nested function can\naccess the local variables of the function containing the def. See section\nNaming and binding for details.\nSee also\n- PEP 3107 - Function Annotations\nThe original specification for function annotations.\n- PEP 484 - Type Hints\nDefinition of a standard meaning for annotations: type hints.\n- PEP 526 - Syntax for Variable Annotations\nAbility to type hint variable declarations, including class variables and instance variables.\n- PEP 563 - Postponed Evaluation of Annotations\nSupport for forward references within annotations by preserving annotations in a string form at runtime instead of eager evaluation.\n- PEP 318 - Decorators for Functions and Methods\nFunction and method decorators were introduced. Class decorators were introduced in PEP 3129.\n8.8. Class definitions\u00b6\nA class definition defines a class object (see section The standard type hierarchy):\nclassdef: [decorators\n] \"class\"classname\n[type_params\n] [inheritance\n] \":\"suite\ninheritance: \"(\" [argument_list\n] \")\" classname:identifier\nA class definition is an executable statement. 
The inheritance list usually\ngives a list of base classes (see Metaclasses for more advanced uses), so\neach item in the list should evaluate to a class object which allows\nsubclassing. Classes without an inheritance list inherit, by default, from the\nbase class object\n; hence,\nclass Foo:\npass\nis equivalent to\nclass Foo(object):\npass\nThe class\u2019s suite is then executed in a new execution frame (see Naming and binding), using a newly created local namespace and the original global namespace. (Usually, the suite contains mostly function definitions.) When the class\u2019s suite finishes execution, its execution frame is discarded but its local namespace is saved. [5] A class object is then created using the inheritance list for the base classes and the saved local namespace for the attribute dictionary. The class name is bound to this class object in the original local namespace.\nThe order in which attributes are defined in the class body is preserved\nin the new class\u2019s __dict__\n. Note that this is reliable only right\nafter the class is created and only for classes that were defined using\nthe definition syntax.\nClass creation can be customized heavily using metaclasses.\nClasses can also be decorated: just like when decorating functions,\n@f1(arg)\n@f2\nclass Foo: pass\nis roughly equivalent to\nclass Foo: pass\nFoo = f1(arg)(f2(Foo))\nThe evaluation rules for the decorator expressions are the same as for function decorators. The result is then bound to the class name.\nChanged in version 3.9: Classes may be decorated with any valid\nassignment_expression\n. Previously, the grammar was\nmuch more restrictive; see PEP 614 for details.\nA list of type parameters may be given in square brackets\nimmediately after the class\u2019s name.\nThis indicates to static type checkers that the class is generic. At runtime,\nthe type parameters can be retrieved from the class\u2019s\n__type_params__\nattribute. 
See Generic classes for more.\nChanged in version 3.12: Type parameter lists are new in Python 3.12.\nProgrammer\u2019s note: Variables defined in the class definition are class\nattributes; they are shared by instances. Instance attributes can be set in a\nmethod with self.name = value\n. Both class and instance attributes are\naccessible through the notation \u201cself.name\n\u201d, and an instance attribute hides\na class attribute with the same name when accessed in this way. Class\nattributes can be used as defaults for instance attributes, but using mutable\nvalues there can lead to unexpected results. Descriptors\ncan be used to create instance variables with different implementation details.\nSee also\n- PEP 3115 - Metaclasses in Python 3000\nThe proposal that changed the declaration of metaclasses to the current syntax, and the semantics for how classes with metaclasses are constructed.\n- PEP 3129 - Class Decorators\nThe proposal that added class decorators. Function and method decorators were introduced in PEP 318.\n8.9. Coroutines\u00b6\nAdded in version 3.5.\n8.9.1. Coroutine function definition\u00b6\nasync_funcdef: [decorators\n] \"async\" \"def\"funcname\n\"(\" [parameter_list\n] \")\" [\"->\"expression\n] \":\"suite\nExecution of Python coroutines can be suspended and resumed at many points\n(see coroutine). await\nexpressions, async for\nand\nasync with\ncan only be used in the body of a coroutine function.\nFunctions defined with async def\nsyntax are always coroutine functions,\neven if they do not contain await\nor async\nkeywords.\nIt is a SyntaxError\nto use a yield from\nexpression inside the body\nof a coroutine function.\nAn example of a coroutine function:\nasync def func(param1, param2):\ndo_stuff()\nawait some_coroutine()\nChanged in version 3.7: await\nand async\nare now keywords; previously they were only\ntreated as such inside the body of a coroutine function.\n8.9.2. 
The async for\nstatement\u00b6\nasync_for_stmt: \"async\" for_stmt\nAn asynchronous iterable provides an __aiter__\nmethod that directly\nreturns an asynchronous iterator, which can call asynchronous code in\nits __anext__\nmethod.\nThe async for\nstatement allows convenient iteration over asynchronous\niterables.\nThe following code:\nasync for TARGET in ITER:\nSUITE\nelse:\nSUITE2\nIs semantically equivalent to:\niter = (ITER).__aiter__()\nrunning = True\nwhile running:\ntry:\nTARGET = await iter.__anext__()\nexcept StopAsyncIteration:\nrunning = False\nelse:\nSUITE\nelse:\nSUITE2\nexcept that implicit special method lookup is used\nfor __aiter__()\nand __anext__()\n.\nIt is a SyntaxError\nto use an async for\nstatement outside the\nbody of a coroutine function.\n8.9.3. The async with\nstatement\u00b6\nasync_with_stmt: \"async\" with_stmt\nAn asynchronous context manager is a context manager that is able to suspend execution in its enter and exit methods.\nThe following code:\nasync with EXPRESSION as TARGET:\nSUITE\nis semantically equivalent to:\nmanager = (EXPRESSION)\naenter = manager.__aenter__\naexit = manager.__aexit__\nvalue = await aenter()\nhit_except = False\ntry:\nTARGET = value\nSUITE\nexcept:\nhit_except = True\nif not await aexit(*sys.exc_info()):\nraise\nfinally:\nif not hit_except:\nawait aexit(None, None, None)\nexcept that implicit special method lookup is used\nfor __aenter__()\nand __aexit__()\n.\nIt is a SyntaxError\nto use an async with\nstatement outside the\nbody of a coroutine function.\nSee also\n- PEP 492 - Coroutines with async and await syntax\nThe proposal that made coroutines a proper standalone concept in Python, and added supporting syntax.\n8.10. Type parameter lists\u00b6\nAdded in version 3.12.\nChanged in version 3.13: Support for default values was added (see PEP 696).\ntype_params: \"[\"type_param\n(\",\"type_param\n)* \"]\" type_param:typevar\n|typevartuple\n|paramspec\ntypevar:identifier\n(\":\"expression\n)? 
(\"=\"expression\n)? typevartuple: \"*\"identifier\n(\"=\"expression\n)? paramspec: \"**\"identifier\n(\"=\"expression\n)?\nFunctions (including coroutines), classes and type aliases may contain a type parameter list:\ndef max[T](args: list[T]) -> T:\n...\nasync def amax[T](args: list[T]) -> T:\n...\nclass Bag[T]:\ndef __iter__(self) -> Iterator[T]:\n...\ndef add(self, arg: T) -> None:\n...\ntype ListOrSet[T] = list[T] | set[T]\nSemantically, this indicates that the function, class, or type alias is generic over a type variable. This information is primarily used by static type checkers, and at runtime, generic objects behave much like their non-generic counterparts.\nType parameters are declared in square brackets ([]\n) immediately\nafter the name of the function, class, or type alias. The type parameters\nare accessible within the scope of the generic object, but not elsewhere.\nThus, after a declaration def func[T](): pass\n, the name T\nis not available in\nthe module scope. Below, the semantics of generic objects are described\nwith more precision. The scope of type parameters is modeled with a special\nfunction (technically, an annotation scope) that\nwraps the creation of the generic object.\nGeneric functions, classes, and type aliases have a\n__type_params__\nattribute listing their type parameters.\nType parameters come in three kinds:\ntyping.TypeVar\n, introduced by a plain name (e.g.,T\n). Semantically, this represents a single type to a type checker.typing.TypeVarTuple\n, introduced by a name prefixed with a single asterisk (e.g.,*Ts\n). Semantically, this stands for a tuple of any number of types.typing.ParamSpec\n, introduced by a name prefixed with two asterisks (e.g.,**P\n). Semantically, this stands for the parameters of a callable.\ntyping.TypeVar\ndeclarations can define bounds and constraints with\na colon (:\n) followed by an expression. A single expression after the colon\nindicates a bound (e.g. T: int\n). 
Semantically, this means\nthat the typing.TypeVar\ncan only represent types that are a subtype of\nthis bound. A parenthesized tuple of expressions after the colon indicates a\nset of constraints (e.g. T: (str, bytes)\n). Each member of the tuple should be a\ntype (again, this is not enforced at runtime). Constrained type variables can only\ntake on one of the types in the list of constraints.\nFor typing.TypeVar\ns declared using the type parameter list syntax,\nthe bound and constraints are not evaluated when the generic object is created,\nbut only when the value is explicitly accessed through the attributes __bound__\nand __constraints__\n. To accomplish this, the bounds or constraints are\nevaluated in a separate annotation scope.\ntyping.TypeVarTuple\ns and typing.ParamSpec\ns cannot have bounds\nor constraints.\nAll three flavors of type parameters can also have a default value, which is used\nwhen the type parameter is not explicitly provided. This is added by appending\na single equals sign (=\n) followed by an expression. Like the bounds and\nconstraints of type variables, the default value is not evaluated when the\nobject is created, but only when the type parameter\u2019s __default__\nattribute\nis accessed. To this end, the default value is evaluated in a separate\nannotation scope. If no default value is specified\nfor a type parameter, the __default__\nattribute is set to the special\nsentinel object typing.NoDefault\n.\nThe following example indicates the full set of allowed type parameter declarations:\ndef overly_generic[\nSimpleTypeVar,\nTypeVarWithDefault = int,\nTypeVarWithBound: int,\nTypeVarWithConstraints: (str, bytes),\n*SimpleTypeVarTuple = (int, float),\n**SimpleParamSpec = (str, bytearray),\n](\na: SimpleTypeVar,\nb: TypeVarWithDefault,\nc: TypeVarWithBound,\nd: Callable[SimpleParamSpec, TypeVarWithConstraints],\n*e: SimpleTypeVarTuple,\n): ...\n8.10.1. 
Generic functions\u00b6\nGeneric functions are declared as follows:\ndef func[T](arg: T): ...\nThis syntax is equivalent to:\nannotation-def TYPE_PARAMS_OF_func():\nT = typing.TypeVar(\"T\")\ndef func(arg: T): ...\nfunc.__type_params__ = (T,)\nreturn func\nfunc = TYPE_PARAMS_OF_func()\nHere annotation-def\nindicates an annotation scope,\nwhich is not actually bound to any name at runtime. (One\nother liberty is taken in the translation: the syntax does not go through\nattribute access on the typing\nmodule, but creates an instance of\ntyping.TypeVar\ndirectly.)\nThe annotations of generic functions are evaluated within the annotation scope used for declaring the type parameters, but the function\u2019s defaults and decorators are not.\nThe following example illustrates the scoping rules for these cases, as well as for additional flavors of type parameters:\n@decorator\ndef func[T: int, *Ts, **P](*args: *Ts, arg: Callable[P, T] = some_default):\n...\nExcept for the lazy evaluation of the\nTypeVar\nbound, this is equivalent to:\nDEFAULT_OF_arg = some_default\nannotation-def TYPE_PARAMS_OF_func():\nannotation-def BOUND_OF_T():\nreturn int\n# In reality, BOUND_OF_T() is evaluated only on demand.\nT = typing.TypeVar(\"T\", bound=BOUND_OF_T())\nTs = typing.TypeVarTuple(\"Ts\")\nP = typing.ParamSpec(\"P\")\ndef func(*args: *Ts, arg: Callable[P, T] = DEFAULT_OF_arg):\n...\nfunc.__type_params__ = (T, Ts, P)\nreturn func\nfunc = decorator(TYPE_PARAMS_OF_func())\nThe capitalized names like DEFAULT_OF_arg\nare not actually\nbound at runtime.\n8.10.2. 
Generic classes\u00b6\nGeneric classes are declared as follows:\nclass Bag[T]: ...\nThis syntax is equivalent to:\nannotation-def TYPE_PARAMS_OF_Bag():\nT = typing.TypeVar(\"T\")\nclass Bag(typing.Generic[T]):\n__type_params__ = (T,)\n...\nreturn Bag\nBag = TYPE_PARAMS_OF_Bag()\nHere again annotation-def\n(not a real keyword) indicates an\nannotation scope, and the name\nTYPE_PARAMS_OF_Bag\nis not actually bound at runtime.\nGeneric classes implicitly inherit from typing.Generic\n.\nThe base classes and keyword arguments of generic classes are\nevaluated within the type scope for the type parameters,\nand decorators are evaluated outside that scope. This is illustrated\nby this example:\n@decorator\nclass Bag(Base[T], arg=T): ...\nThis is equivalent to:\nannotation-def TYPE_PARAMS_OF_Bag():\nT = typing.TypeVar(\"T\")\nclass Bag(Base[T], typing.Generic[T], arg=T):\n__type_params__ = (T,)\n...\nreturn Bag\nBag = decorator(TYPE_PARAMS_OF_Bag())\n8.10.3. Generic type aliases\u00b6\nThe type\nstatement can also be used to create a generic type alias:\ntype ListOrSet[T] = list[T] | set[T]\nExcept for the lazy evaluation of the value, this is equivalent to:\nannotation-def TYPE_PARAMS_OF_ListOrSet():\nT = typing.TypeVar(\"T\")\nannotation-def VALUE_OF_ListOrSet():\nreturn list[T] | set[T]\n# In reality, the value is lazily evaluated\nreturn typing.TypeAliasType(\"ListOrSet\", VALUE_OF_ListOrSet(), type_params=(T,))\nListOrSet = TYPE_PARAMS_OF_ListOrSet()\nHere, annotation-def\n(not a real keyword) indicates an\nannotation scope. The capitalized names\nlike TYPE_PARAMS_OF_ListOrSet\nare not actually bound at runtime.\n8.11. 
Annotations\u00b6\nChanged in version 3.14: Annotations are now lazily evaluated by default.\nVariables and function parameters may carry annotations, created by adding a colon after the name, followed by an expression:\nx: annotation = 1\ndef f(param: annotation): ...\nFunctions may also carry a return annotation following an arrow:\ndef f() -> annotation: ...\nAnnotations are conventionally used for type hints, but this\nis not enforced by the language, and in general annotations may contain arbitrary\nexpressions. The presence of annotations does not change the runtime semantics of\nthe code, except if some mechanism is used that introspects and uses the annotations\n(such as dataclasses\nor functools.singledispatch()\n).\nBy default, annotations are lazily evaluated in an annotation scope.\nThis means that they are not evaluated when the code containing the annotation is evaluated.\nInstead, the interpreter saves information that can be used to evaluate the annotation later\nif requested. 
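A sketch of annotation introspection (f is a hypothetical function; the evaluation timing differs across versions as described above, but the resulting mapping is the same):

```python
def f(param: int) -> str:
    return str(param)

# Before Python 3.14 the annotations are evaluated eagerly when the def
# statement executes; from 3.14 they are evaluated lazily, on first access.
# Either way, reading __annotations__ yields the evaluated values.
print(f.__annotations__)  # {'param': <class 'int'>, 'return': <class 'str'>}
```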
The annotationlib\nmodule provides tools for evaluating annotations.\nIf the future statement from __future__ import annotations\nis present,\nall annotations are instead stored as strings:\n>>> from __future__ import annotations\n>>> def f(param: annotation): ...\n>>> f.__annotations__\n{'param': 'annotation'}\nThis future statement will be deprecated and removed in a future version of Python,\nbut not before Python 3.13 reaches its end of life (see PEP 749).\nWhen it is used, introspection tools like\nannotationlib.get_annotations()\nand typing.get_type_hints()\nare\nless likely to be able to resolve annotations at runtime.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 13426} +{"url": "https://docs.python.org/3/using/cmdline.html", "title": "Command line and environment", "content": "1. 
Command line and environment\u00b6\nThe CPython interpreter scans the command line and the environment for various settings.\nCPython implementation detail: Other implementations\u2019 command line schemes may differ. See Alternate Implementations for further resources.\n1.1. Command line\u00b6\nWhen invoking Python, you may specify any of these options:\npython [-bBdEhiIOPqRsSuvVWx?] [-c command | -m module-name | script | - ] [args]\nThe most common use case is, of course, a simple invocation of a script:\npython myscript.py\n1.1.1. Interface options\u00b6\nThe interpreter interface resembles that of the UNIX shell, but provides some additional methods of invocation:\nWhen called with standard input connected to a tty device, it prompts for commands and executes them until an EOF (an end-of-file character, you can produce that with Ctrl-D on UNIX or Ctrl-Z, Enter on Windows) is read. For more on interactive mode, see Interactive Mode.\nWhen called with a file name argument or with a file as standard input, it reads and executes a script from that file.\nWhen called with a directory name argument, it reads and executes an appropriately named script from that directory.\nWhen called with\n-c command\n, it executes the Python statement(s) given as command. Here command may contain multiple statements separated by newlines. Leading whitespace is significant in Python statements!\nWhen called with\n-m module-name\n, the given module is located on the Python module path and executed as a script.\nIn non-interactive mode, the entire input is parsed before it is executed.\nAn interface option terminates the list of options consumed by the interpreter,\nall consecutive arguments will end up in sys.argv\n\u2013 note that the first\nelement, subscript zero (sys.argv[0]\n), is a string reflecting the program\u2019s\nsource.\n- -c \u00b6\nExecute the Python code in command. 
command can be one or more statements separated by newlines, with significant leading whitespace as in normal module code.\nIf this option is given, the first element of\nsys.argv\nwill be \"-c\"\nand the current directory will be added to the start of sys.path\n(allowing modules in that directory to be imported as top level modules).\nRaises an auditing event\ncpython.run_command\nwith argument command\n.\nChanged in version 3.14: command is automatically dedented before execution.\n- -m \u00b6\nSearch\nsys.path\nfor the named module and execute its contents as the __main__\nmodule.\nSince the argument is a module name, you must not give a file extension (\n.py\n). The module name should be a valid absolute Python module name, but the implementation may not always enforce this (e.g. it may allow you to use a name that includes a hyphen).\nPackage names (including namespace packages) are also permitted. When a package name is supplied instead of a normal module, the interpreter will execute\n<pkg>.__main__\nas the main module. This behaviour is deliberately similar to the handling of directories and zipfiles that are passed to the interpreter as the script argument.\nNote\nThis option cannot be used with built-in modules and extension modules written in C, since they do not have Python module files. However, it can still be used for precompiled modules, even if the original source file is not available.\nIf this option is given, the first element of\nsys.argv\nwill be the full path to the module file (while the module file is being located, the first element will be set to \"-m\"\n). As with the -c\noption, the current directory will be added to the start of sys.path\n. The -I\noption can be used to run the script in isolated mode where sys.path\ncontains neither the current directory nor the user\u2019s site-packages directory. All PYTHON*\nenvironment variables are ignored, too.\nMany standard library modules contain code that is invoked on their execution as a script. 
An example is the\ntimeit\nmodule:\npython -m timeit -s \"setup here\" \"benchmarked code here\"\npython -m timeit -h # for details\nRaises an auditing event\ncpython.run_module\nwith argument module-name\n.\nSee also\nrunpy.run_module()\nEquivalent functionality directly available to Python code\nPEP 338 \u2013 Executing modules as scripts\nChanged in version 3.1: Supply the package name to run a\n__main__\nsubmodule.\nChanged in version 3.4: namespace packages are also supported\n- -\nRead commands from standard input (\nsys.stdin\n). If standard input is a terminal, -i\nis implied.\nIf this option is given, the first element of\nsys.argv\nwill be \"-\"\nand the current directory will be added to the start of sys.path\n.\nRaises an auditing event\ncpython.run_stdin\nwith no arguments.\n-