{"url": "https://docs.python.org/3/copyright.html", "title": "Copyright", "content": "Copyright\u00b6\nPython and this documentation are:\nCopyright \u00a9 2001 Python Software Foundation. All rights reserved.\nCopyright \u00a9 2000 BeOpen.com. All rights reserved.\nCopyright \u00a9 1995-2000 Corporation for National Research Initiatives. All rights reserved.\nCopyright \u00a9 1991-1995 Stichting Mathematisch Centrum. All rights reserved.\nSee History and License for complete license and permissions information.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 100}
{"url": "https://docs.python.org/3/tutorial/appendix.html", "title": "Appendix", "content": "16. Appendix\u00b6\n16.1. Interactive Mode\u00b6\nThere are two variants of the interactive REPL. The classic basic interpreter is supported on all platforms with minimal line control capabilities.\nSince Python 3.13, a new interactive shell is used by default.\nThis one supports color, multiline editing, history browsing, and\npaste mode. To disable color, see Controlling color for\ndetails. Function keys provide some additional functionality.\nF1 enters the interactive help browser pydoc\n.\nF2 allows for browsing command-line history with neither output nor the\n>>> and \u2026 prompts. F3 enters \u201cpaste mode\u201d, which\nmakes pasting larger blocks of code easier. Press F3 to return to\nthe regular prompt.\nWhen using the new interactive shell, exit the shell by typing exit or quit. Adding call parentheses after those commands is not required.\nIf the new interactive shell is not desired, it can be disabled via\nthe PYTHON_BASIC_REPL\nenvironment variable.\n16.1.1. Error Handling\u00b6\nWhen an error occurs, the interpreter prints an error message and a stack trace.\nIn interactive mode, it then returns to the primary prompt; when input came from\na file, it exits with a nonzero exit status after printing the stack trace.\n(Exceptions handled by an except\nclause in a try\nstatement\nare not errors in this context.) Some errors are unconditionally fatal and\ncause an exit with a nonzero exit status; this applies to internal inconsistencies and\nsome cases of running out of memory. All error messages are written to the\nstandard error stream; normal output from executed commands is written to\nstandard output.\nTyping the interrupt character (usually Control-C or Delete) to the primary or\nsecondary prompt cancels the input and returns to the primary prompt. 
[1]\nTyping an interrupt while a command is executing raises the\nKeyboardInterrupt\nexception, which may be handled by a try\nstatement.\n16.1.2. Executable Python Scripts\u00b6\nOn BSD\u2019ish Unix systems, Python scripts can be made directly executable, like shell scripts, by putting the line\n#!/usr/bin/env python3\n(assuming that the interpreter is on the user\u2019s PATH\n) at the beginning\nof the script and giving the file an executable mode. The #!\nmust be the\nfirst two characters of the file. On some platforms, this first line must end\nwith a Unix-style line ending ('\\n'\n), not a Windows ('\\r\\n'\n) line\nending. Note that the hash, or pound, character, '#'\n, is used to start a\ncomment in Python.\nThe script can be given an executable mode, or permission, using the chmod command.\n$ chmod +x myscript.py\nOn Windows systems, there is no notion of an \u201cexecutable mode\u201d. The Python\ninstaller automatically associates .py\nfiles with python.exe\nso that\na double-click on a Python file will run it as a script. The extension can\nalso be .pyw\n, in that case, the console window that normally appears is\nsuppressed.\n16.1.3. The Interactive Startup File\u00b6\nWhen you use Python interactively, it is frequently handy to have some standard\ncommands executed every time the interpreter is started. You can do this by\nsetting an environment variable named PYTHONSTARTUP\nto the name of a\nfile containing your start-up commands. This is similar to the .profile\nfeature of the Unix shells.\nThis file is only read in interactive sessions, not when Python reads commands\nfrom a script, and not when /dev/tty\nis given as the explicit source of\ncommands (which otherwise behaves like an interactive session). It is executed\nin the same namespace where interactive commands are executed, so that objects\nthat it defines or imports can be used without qualification in the interactive\nsession. 
You can also change the prompts sys.ps1\nand sys.ps2\nin this\nfile.\nIf you want to read an additional start-up file from the current directory, you\ncan program this in the global start-up file using code like if\nos.path.isfile('.pythonrc.py'): exec(open('.pythonrc.py').read())\n.\nIf you want to use the startup file in a script, you must do this explicitly\nin the script:\nimport os\nfilename = os.environ.get('PYTHONSTARTUP')\nif filename and os.path.isfile(filename):\n    with open(filename) as fobj:\n        startup_file = fobj.read()\n    exec(startup_file)\n16.1.4. The Customization Modules\u00b6\nPython provides two hooks to let you customize it: sitecustomize and usercustomize. To see how it works, you first need to find the location of your user site-packages directory. Start Python and run this code:\n>>> import site\n>>> site.getusersitepackages()\n'/home/user/.local/lib/python3.x/site-packages'\nNow you can create a file named usercustomize.py\nin that directory and\nput anything you want in it. It will affect every invocation of Python, unless\nit is started with the -s\noption to disable the automatic import.\nsitecustomize works in the same way, but is typically created by an\nadministrator of the computer in the global site-packages directory, and is\nimported before usercustomize. See the documentation of the site\nmodule for more details.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1234}
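The appendix record above shows the pattern for reusing a PYTHONSTARTUP file inside a script. As a minimal sketch of how that pattern behaves end to end, the following writes a throwaway start-up file, points PYTHONSTARTUP at it, and runs the docs' read-and-exec pattern; the `greet` helper and the temporary file path are illustrative, not part of the original docs.

```python
# Sketch: emulate what the tutorial's read-and-exec pattern does with
# a PYTHONSTARTUP file. The greet() helper is a made-up example.
import os
import tempfile

# Write a throwaway start-up file that defines a helper function.
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write("def greet(name):\n    return f'hello, {name}'\n")
    startup_path = f.name

os.environ['PYTHONSTARTUP'] = startup_path

# The pattern from the tutorial: read the file named by PYTHONSTARTUP
# and exec() it in the current namespace.
filename = os.environ.get('PYTHONSTARTUP')
if filename and os.path.isfile(filename):
    with open(filename) as fobj:
        startup_file = fobj.read()
    exec(startup_file)

# Anything the start-up file defined is now usable without qualification,
# just as in an interactive session.
print(greet('world'))

os.unlink(startup_path)  # clean up the throwaway file
```

Note that a real interactive session runs the start-up file automatically; the explicit `exec()` is only needed when a non-interactive script wants the same definitions.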
{"url": "https://docs.python.org/3/tutorial/floatingpoint.html", "title": "Floating-Point Arithmetic: Issues and Limitations", "content": "15. Floating-Point Arithmetic: Issues and Limitations\u00b6\nFloating-point numbers are represented in computer hardware as base 2 (binary)\nfractions. For example, the decimal fraction 0.625\nhas value 6/10 + 2/100 + 5/1000, and in the same way the binary fraction 0.101\nhas value 1/2 + 0/4 + 1/8. These two fractions have identical values, the only\nreal difference being that the first is written in base 10 fractional notation,\nand the second in base 2.\nUnfortunately, most decimal fractions cannot be represented exactly as binary fractions. A consequence is that, in general, the decimal floating-point numbers you enter are only approximated by the binary floating-point numbers actually stored in the machine.\nThe problem is easier to understand at first in base 10. Consider the fraction 1/3. You can approximate that as a base 10 fraction:\n0.3\nor, better,\n0.33\nor, better,\n0.333\nand so on. No matter how many digits you\u2019re willing to write down, the result will never be exactly 1/3, but will be an increasingly better approximation of 1/3.\nIn the same way, no matter how many base 2 digits you\u2019re willing to use, the decimal value 0.1 cannot be represented exactly as a base 2 fraction. In base 2, 1/10 is the infinitely repeating fraction\n0.0001100110011001100110011001100110011001100110011...\nStop at any finite number of bits, and you get an approximation. On most\nmachines today, floats are approximated using a binary fraction with\nthe numerator using the first 53 bits starting with the most significant bit and\nwith the denominator as a power of two. In the case of 1/10, the binary fraction\nis 3602879701896397 / 2 ** 55\nwhich is close to but not exactly\nequal to the true value of 1/10.\nMany users are not aware of the approximation because of the way values are displayed. 
Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine. On most machines, if Python were to print the true decimal value of the binary approximation stored for 0.1, it would have to display:\n>>> 0.1\n0.1000000000000000055511151231257827021181583404541015625\nThat is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead:\n>>> 1 / 10\n0.1\nJust remember, even though the printed result looks like the exact value of 1/10, the actual stored value is the nearest representable binary fraction.\nInterestingly, there are many different decimal numbers that share the same\nnearest approximate binary fraction. For example, the numbers 0.1\nand\n0.10000000000000001\nand\n0.1000000000000000055511151231257827021181583404541015625\nare all\napproximated by 3602879701896397 / 2 ** 55\n. Since all of these decimal\nvalues share the same approximation, any one of them could be displayed\nwhile still preserving the invariant eval(repr(x)) == x\n.\nHistorically, the Python prompt and built-in repr()\nfunction would choose\nthe one with 17 significant digits, 0.10000000000000001\n. Starting with\nPython 3.1, Python (on most systems) is now able to choose the shortest of\nthese and simply display 0.1\n.\nNote that this is in the very nature of binary floating point: this is not a bug in Python, and it is not a bug in your code either. 
You\u2019ll see the same kind of thing in all languages that support your hardware\u2019s floating-point arithmetic (although some languages may not display the difference by default, or in all output modes).\nFor more pleasant output, you may wish to use string formatting to produce a limited number of significant digits:\n>>> format(math.pi, '.12g') # give 12 significant digits\n'3.14159265359'\n>>> format(math.pi, '.2f') # give 2 digits after the point\n'3.14'\n>>> repr(math.pi)\n'3.141592653589793'\nIt\u2019s important to realize that this is, in a real sense, an illusion: you\u2019re simply rounding the display of the true machine value.\nOne illusion may beget another. For example, since 0.1 is not exactly 1/10, summing three values of 0.1 may not yield exactly 0.3, either:\n>>> 0.1 + 0.1 + 0.1 == 0.3\nFalse\nAlso, since the 0.1 cannot get any closer to the exact value of 1/10 and\n0.3 cannot get any closer to the exact value of 3/10, then pre-rounding with\nround()\nfunction cannot help:\n>>> round(0.1, 1) + round(0.1, 1) + round(0.1, 1) == round(0.3, 1)\nFalse\nThough the numbers cannot be made closer to their intended exact values,\nthe math.isclose()\nfunction can be useful for comparing inexact values:\n>>> math.isclose(0.1 + 0.1 + 0.1, 0.3)\nTrue\nAlternatively, the round()\nfunction can be used to compare rough\napproximations:\n>>> round(math.pi, ndigits=2) == round(22 / 7, ndigits=2)\nTrue\nBinary floating-point arithmetic holds many surprises like this. The problem with \u201c0.1\u201d is explained in precise detail below, in the \u201cRepresentation Error\u201d section. See Examples of Floating Point Problems for a pleasant summary of how binary floating point works and the kinds of problems commonly encountered in practice. Also see The Perils of Floating Point for a more complete account of other common surprises.\nAs that says near the end, \u201cthere are no easy answers.\u201d Still, don\u2019t be unduly wary of floating point! 
The errors in Python float operations are inherited from the floating-point hardware, and on most machines are on the order of no more than 1 part in 2**53 per operation. That\u2019s more than adequate for most tasks, but you do need to keep in mind that it\u2019s not decimal arithmetic and that every float operation can suffer a new rounding error.\nWhile pathological cases do exist, for most casual use of floating-point\narithmetic you\u2019ll see the result you expect in the end if you simply round the\ndisplay of your final results to the number of decimal digits you expect.\nstr()\nusually suffices, and for finer control see the str.format()\nmethod\u2019s format specifiers in Format String Syntax.\nFor use cases which require exact decimal representation, try using the\ndecimal\nmodule which implements decimal arithmetic suitable for\naccounting applications and high-precision applications.\nAnother form of exact arithmetic is supported by the fractions\nmodule\nwhich implements arithmetic based on rational numbers (so numbers like\n1/3 can be represented exactly).\nIf you are a heavy user of floating-point operations you should take a look at the NumPy package and many other packages for mathematical and statistical operations supplied by the SciPy project.\nPython provides tools that may help on those rare occasions when you really\ndo want to know the exact value of a float. 
The\nfloat.as_integer_ratio()\nmethod expresses the value of a float as a\nfraction:\n>>> x = 3.14159\n>>> x.as_integer_ratio()\n(3537115888337719, 1125899906842624)\nSince the ratio is exact, it can be used to losslessly recreate the original value:\n>>> x == 3537115888337719 / 1125899906842624\nTrue\nThe float.hex()\nmethod expresses a float in hexadecimal (base\n16), again giving the exact value stored by your computer:\n>>> x.hex()\n'0x1.921f9f01b866ep+1'\nThis precise hexadecimal representation can be used to reconstruct the float value exactly:\n>>> x == float.fromhex('0x1.921f9f01b866ep+1')\nTrue\nSince the representation is exact, it is useful for reliably porting values across different versions of Python (platform independence) and exchanging data with other languages that support the same format (such as Java and C99).\nAnother helpful tool is the sum()\nfunction which helps mitigate\nloss-of-precision during summation. It uses extended precision for\nintermediate rounding steps as values are added onto a running total.\nThat can make a difference in overall accuracy so that the errors do not\naccumulate to the point where they affect the final total:\n>>> 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 == 1.0\nFalse\n>>> sum([0.1] * 10) == 1.0\nTrue\nThe math.fsum()\ngoes further and tracks all of the \u201clost digits\u201d\nas values are added onto a running total so that the result has only a\nsingle rounding. This is slower than sum()\nbut will be more\naccurate in uncommon cases where large magnitude inputs mostly cancel\neach other out leaving a final sum near zero:\n>>> arr = [-0.10430216751806065, -266310978.67179024, 143401161448607.16,\n... 
-143401161400469.7, 266262841.31058735, -0.003244936839808227]\n>>> float(sum(map(Fraction, arr))) # Exact summation with single rounding\n8.042173697819788e-13\n>>> math.fsum(arr) # Single rounding\n8.042173697819788e-13\n>>> sum(arr) # Multiple roundings in extended precision\n8.042178034628478e-13\n>>> total = 0.0\n>>> for x in arr:\n... total += x # Multiple roundings in standard precision\n...\n>>> total # Straight addition has no correct digits!\n-0.0051575902860057365\n15.1. Representation Error\u00b6\nThis section explains the \u201c0.1\u201d example in detail, and shows how you can perform an exact analysis of cases like this yourself. Basic familiarity with binary floating-point representation is assumed.\nRepresentation error refers to the fact that some (most, actually) decimal fractions cannot be represented exactly as binary (base 2) fractions. This is the chief reason why Python (or Perl, C, C++, Java, Fortran, and many others) often won\u2019t display the exact decimal number you expect.\nWhy is that? 1/10 is not exactly representable as a binary fraction. Since at least 2000, almost all machines use IEEE 754 binary floating-point arithmetic, and almost all platforms map Python floats to IEEE 754 binary64 \u201cdouble precision\u201d values. IEEE 754 binary64 values contain 53 bits of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the form J/2**N where J is an integer containing exactly 53 bits. Rewriting\n1 / 10 ~= J / (2**N)\nas\nJ ~= 2**N / 10\nand recalling that J has exactly 53 bits (is >= 2**52\nbut < 2**53\n),\nthe best value for N is 56:\n>>> 2**52 <= 2**56 // 10 < 2**53\nTrue\nThat is, 56 is the only value for N that leaves J with exactly 53 bits. 
The best possible value for J is then that quotient rounded:\n>>> q, r = divmod(2**56, 10)\n>>> r\n6\nSince the remainder is more than half of 10, the best approximation is obtained by rounding up:\n>>> q+1\n7205759403792794\nTherefore the best possible approximation to 1/10 in IEEE 754 double precision is:\n7205759403792794 / 2 ** 56\nDividing both the numerator and denominator by two reduces the fraction to:\n3602879701896397 / 2 ** 55\nNote that since we rounded up, this is actually a little bit larger than 1/10; if we had not rounded up, the quotient would have been a little bit smaller than 1/10. But in no case can it be exactly 1/10!\nSo the computer never \u201csees\u201d 1/10: what it sees is the exact fraction given above, the best IEEE 754 double approximation it can get:\n>>> 0.1 * 2 ** 55\n3602879701896397.0\nIf we multiply that fraction by 10**55, we can see the value out to 55 decimal digits:\n>>> 3602879701896397 * 10 ** 55 // 2 ** 55\n1000000000000000055511151231257827021181583404541015625\nmeaning that the exact number stored in the computer is equal to the decimal value 0.1000000000000000055511151231257827021181583404541015625. 
Instead of displaying the full decimal value, many languages (including older versions of Python) round the result to 17 significant digits:\n>>> format(0.1, '.17f')\n'0.10000000000000001'\nThe fractions\nand decimal\nmodules make these calculations\neasy:\n>>> from decimal import Decimal\n>>> from fractions import Fraction\n>>> Fraction.from_float(0.1)\nFraction(3602879701896397, 36028797018963968)\n>>> (0.1).as_integer_ratio()\n(3602879701896397, 36028797018963968)\n>>> Decimal.from_float(0.1)\nDecimal('0.1000000000000000055511151231257827021181583404541015625')\n>>> format(Decimal.from_float(0.1), '.17')\n'0.10000000000000001'", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2909}
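The "Representation Error" walk-through above can be verified as a single runnable script. This sketch reproduces each number the docs derive for 0.1 on IEEE 754 binary64: the rounded 53-bit numerator, the reduced fraction, and the exact stored decimal value.

```python
# Verify the docs' representation-error arithmetic for 0.1 (IEEE 754 binary64).
from decimal import Decimal

# Best numerator J for 1/10 with denominator 2**56: round the quotient up,
# because the remainder (6) is more than half of 10.
q, r = divmod(2**56, 10)
assert r == 6
j = q + 1
assert j == 7205759403792794

# Dividing numerator and denominator by two reduces the fraction,
# and that reduced fraction is exactly what the float stores.
assert (0.1).as_integer_ratio() == (j // 2, 2**55)
assert j // 2 == 3602879701896397

# The exact decimal value of the stored binary approximation:
exact = Decimal(0.1)
print(exact)
```

Running this confirms that the stored value is slightly larger than 1/10, exactly as the rounding-up step predicts.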
{"url": "https://docs.python.org/3/tutorial/interactive.html", "title": "Interactive Input Editing and History Substitution", "content": "14. Interactive Input Editing and History Substitution\u00b6\nSome versions of the Python interpreter support editing of the current input line and history substitution, similar to facilities found in the Korn shell and the GNU Bash shell. This is implemented using the GNU Readline library, which supports various styles of editing. This library has its own documentation which we won\u2019t duplicate here.\n14.1. Tab Completion and History Editing\u00b6\nCompletion of variable and module names is\nautomatically enabled at interpreter startup so\nthat the Tab key invokes the completion function; it looks at\nPython statement names, the current local variables, and the available\nmodule names. For dotted expressions such as string.a\n, it will evaluate\nthe expression up to the final '.'\nand then suggest completions from\nthe attributes of the resulting object. Note that this may execute\napplication-defined code if an object with a __getattr__()\nmethod\nis part of the expression. The default configuration also saves your\nhistory into a file named .python_history\nin your user directory.\nThe history will be available again during the next interactive interpreter\nsession.\n14.2. Alternatives to the Interactive Interpreter\u00b6\nThis facility is an enormous step forward compared to earlier versions of the\ninterpreter; however, some wishes are left: It would be nice if the proper\nindentation were suggested on continuation lines (the parser knows if an\nINDENT\ntoken is required next). The completion mechanism might\nuse the interpreter\u2019s symbol table. 
A command to check (or even suggest)\nmatching parentheses, quotes, etc., would also be useful.\nOne alternative enhanced interactive interpreter that has been around for quite some time is IPython, which features tab completion, object exploration and advanced history management. It can also be thoroughly customized and embedded into other applications. Another similar enhanced interactive environment is bpython.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 487}
{"url": "https://docs.python.org/3/tutorial/whatnow.html", "title": "What Now?", "content": "13. What Now?\u00b6\nReading this tutorial has probably reinforced your interest in using Python \u2014 you should be eager to apply Python to solving your real-world problems. Where should you go to learn more?\nThis tutorial is part of Python\u2019s documentation set. Some other documents in the set are:\n-\nYou should browse through this manual, which gives complete (though terse) reference material about types, functions, and the modules in the standard library. The standard Python distribution includes a lot of additional code. There are modules to read Unix mailboxes, retrieve documents via HTTP, generate random numbers, parse command-line options, compress data, and many other tasks. Skimming through the Library Reference will give you an idea of what\u2019s available.\nInstalling Python Modules explains how to install additional modules written by other Python users.\nThe Python Language Reference: A detailed explanation of Python\u2019s syntax and semantics. It\u2019s heavy reading, but is useful as a complete guide to the language itself.\nMore Python resources:\nhttps://www.python.org: The major Python website. It contains code, documentation, and pointers to Python-related pages around the web.\nhttps://docs.python.org: Fast access to Python\u2019s documentation.\nhttps://pypi.org: The Python Package Index, previously also nicknamed the Cheese Shop [1], is an index of user-created Python modules that are available for download. Once you begin releasing code, you can register it here so that others can find it.\nhttps://code.activestate.com/recipes/langs/python/: The Python Cookbook is a sizable collection of code examples, larger modules, and useful scripts. 
Particularly notable contributions are collected in a book also titled Python Cookbook (O\u2019Reilly & Associates, ISBN 0-596-00797-3.)\nhttps://pyvideo.org collects links to Python-related videos from conferences and user-group meetings.\nhttps://scipy.org: The Scientific Python project includes modules for fast array computations and manipulations plus a host of packages for such things as linear algebra, Fourier transforms, non-linear solvers, random number distributions, statistical analysis and the like.\nFor Python-related questions and problem reports, you can post to the newsgroup comp.lang.python, or send them to the mailing list at python-list@python.org. The newsgroup and mailing list are gatewayed, so messages posted to one will automatically be forwarded to the other. There are hundreds of postings a day, asking (and answering) questions, suggesting new features, and announcing new modules. Mailing list archives are available at https://mail.python.org/pipermail/.\nBefore posting, be sure to check the list of Frequently Asked Questions (also called the FAQ). The FAQ answers many of the questions that come up again and again, and may already contain the solution for your problem.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 716}
{"url": "https://docs.python.org/3/tutorial/venv.html", "title": "Virtual Environments and Packages", "content": "12. Virtual Environments and Packages\u00b6\n12.1. Introduction\u00b6\nPython applications will often use packages and modules that don\u2019t come as part of the standard library. Applications will sometimes need a specific version of a library, because the application may require that a particular bug has been fixed or the application may be written using an obsolete version of the library\u2019s interface.\nThis means it may not be possible for one Python installation to meet the requirements of every application. If application A needs version 1.0 of a particular module but application B needs version 2.0, then the requirements are in conflict and installing either version 1.0 or 2.0 will leave one application unable to run.\nThe solution for this problem is to create a virtual environment, a self-contained directory tree that contains a Python installation for a particular version of Python, plus a number of additional packages.\nDifferent applications can then use different virtual environments. To resolve the earlier example of conflicting requirements, application A can have its own virtual environment with version 1.0 installed while application B has another virtual environment with version 2.0. If application B requires a library be upgraded to version 3.0, this will not affect application A\u2019s environment.\n12.2. Creating Virtual Environments\u00b6\nThe module used to create and manage virtual environments is called\nvenv\n. 
venv\nwill install the Python version from which\nthe command was run (as reported by the --version\noption).\nFor instance, executing the command with python3.12\nwill install\nversion 3.12.\nTo create a virtual environment, decide upon a directory where you want to\nplace it, and run the venv\nmodule as a script with the directory path:\npython -m venv tutorial-env\nThis will create the tutorial-env\ndirectory if it doesn\u2019t exist,\nand also create directories inside it containing a copy of the Python\ninterpreter and various supporting files.\nA common directory location for a virtual environment is .venv\n.\nThis name keeps the directory typically hidden in your shell and thus\nout of the way while giving it a name that explains why the directory\nexists. It also prevents clashing with .env\nenvironment variable\ndefinition files that some tooling supports.\nOnce you\u2019ve created a virtual environment, you may activate it.\nOn Windows, run:\ntutorial-env\\Scripts\\activate\nOn Unix or MacOS, run:\nsource tutorial-env/bin/activate\n(This script is written for the bash shell. If you use the\ncsh or fish shells, there are alternate\nactivate.csh\nand activate.fish\nscripts you should use\ninstead.)\nActivating the virtual environment will change your shell\u2019s prompt to show what\nvirtual environment you\u2019re using, and modify the environment so that running\npython\nwill get you that particular version and installation of Python.\nFor example:\n$ source ~/envs/tutorial-env/bin/activate\n(tutorial-env) $ python\nPython 3.5.1 (default, May 6 2016, 10:59:36)\n...\n>>> import sys\n>>> sys.path\n['', '/usr/local/lib/python35.zip', ...,\n'~/envs/tutorial-env/lib/python3.5/site-packages']\n>>>\nTo deactivate a virtual environment, type:\ndeactivate\ninto the terminal.\n12.3. Managing Packages with pip\u00b6\nYou can install, upgrade, and remove packages using a program called\npip. By default pip\nwill install packages from the Python\nPackage Index. 
You can browse the Python\nPackage Index by going to it in your web browser.\npip\nhas a number of subcommands: \u201cinstall\u201d, \u201cuninstall\u201d,\n\u201cfreeze\u201d, etc. (Consult the Installing Python Modules guide for\ncomplete documentation for pip\n.)\nYou can install the latest version of a package by specifying a package\u2019s name:\n(tutorial-env) $ python -m pip install novas\nCollecting novas\nDownloading novas-3.1.1.3.tar.gz (136kB)\nInstalling collected packages: novas\nRunning setup.py install for novas\nSuccessfully installed novas-3.1.1.3\nYou can also install a specific version of a package by giving the\npackage name followed by ==\nand the version number:\n(tutorial-env) $ python -m pip install requests==2.6.0\nCollecting requests==2.6.0\nUsing cached requests-2.6.0-py2.py3-none-any.whl\nInstalling collected packages: requests\nSuccessfully installed requests-2.6.0\nIf you re-run this command, pip\nwill notice that the requested\nversion is already installed and do nothing. 
You can supply a\ndifferent version number to get that version, or you can run python\n-m pip install --upgrade\nto upgrade the package to the latest version:\n(tutorial-env) $ python -m pip install --upgrade requests\nCollecting requests\nInstalling collected packages: requests\nFound existing installation: requests 2.6.0\nUninstalling requests-2.6.0:\nSuccessfully uninstalled requests-2.6.0\nSuccessfully installed requests-2.7.0\npython -m pip uninstall\nfollowed by one or more package names will\nremove the packages from the virtual environment.\npython -m pip show\nwill display information about a particular package:\n(tutorial-env) $ python -m pip show requests\n---\nMetadata-Version: 2.0\nName: requests\nVersion: 2.7.0\nSummary: Python HTTP for Humans.\nHome-page: http://python-requests.org\nAuthor: Kenneth Reitz\nAuthor-email: me@kennethreitz.com\nLicense: Apache 2.0\nLocation: /Users/akuchling/envs/tutorial-env/lib/python3.4/site-packages\nRequires:\npython -m pip list\nwill display all of the packages installed in\nthe virtual environment:\n(tutorial-env) $ python -m pip list\nnovas (3.1.1.3)\nnumpy (1.9.2)\npip (7.0.3)\nrequests (2.7.0)\nsetuptools (16.0)\npython -m pip freeze\nwill produce a similar list of the installed packages,\nbut the output uses the format that python -m pip install\nexpects.\nA common convention is to put this list in a requirements.txt\nfile:\n(tutorial-env) $ python -m pip freeze > requirements.txt\n(tutorial-env) $ cat requirements.txt\nnovas==3.1.1.3\nnumpy==1.9.2\nrequests==2.7.0\nThe requirements.txt\ncan then be committed to version control and\nshipped as part of an application. 
Users can then install all the\nnecessary packages with install -r\n:\n(tutorial-env) $ python -m pip install -r requirements.txt\nCollecting novas==3.1.1.3 (from -r requirements.txt (line 1))\n...\nCollecting numpy==1.9.2 (from -r requirements.txt (line 2))\n...\nCollecting requests==2.7.0 (from -r requirements.txt (line 3))\n...\nInstalling collected packages: novas, numpy, requests\nRunning setup.py install for novas\nSuccessfully installed novas-3.1.1.3 numpy-1.9.2 requests-2.7.0\npip\nhas many more options. Consult the Installing Python Modules\nguide for complete documentation for pip\n. When you\u2019ve written\na package and want to make it available on the Python Package Index,\nconsult the Python packaging user guide.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1653}
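Activating a virtual environment, as described in the record above, changes which interpreter and site-packages `python` resolves to. A script can detect whether it is running inside a venv by comparing `sys.prefix` (the environment's prefix) with `sys.base_prefix` (the underlying installation); the helper name below is our own.

```python
# Sketch: detect an active virtual environment. Inside a venv,
# sys.prefix points at the environment directory while sys.base_prefix
# still points at the base Python installation, so the two differ.
import sys

def in_virtualenv() -> bool:
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```

This is handy in setup or diagnostic scripts that should refuse to install packages into the global interpreter by accident.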
{"url": "https://docs.python.org/3/tutorial/stdlib2.html", "title": "Brief Tour of the Standard Library \u2014 Part II", "content": "11. Brief Tour of the Standard Library \u2014 Part II\u00b6\nThis second tour covers more advanced modules that support professional programming needs. These modules rarely occur in small scripts.\n11.1. Output Formatting\u00b6\nThe reprlib\nmodule provides a version of repr()\ncustomized for\nabbreviated displays of large or deeply nested containers:\n>>> import reprlib\n>>> reprlib.repr(set('supercalifragilisticexpialidocious'))\n\"{'a', 'c', 'd', 'e', 'f', 'g', ...}\"\nThe pprint\nmodule offers more sophisticated control over printing both\nbuilt-in and user defined objects in a way that is readable by the interpreter.\nWhen the result is longer than one line, the \u201cpretty printer\u201d adds line breaks\nand indentation to more clearly reveal data structure:\n>>> import pprint\n>>> t = [[[['black', 'cyan'], 'white', ['green', 'red']], [['magenta',\n... 'yellow'], 'blue']]]\n...\n>>> pprint.pprint(t, width=30)\n[[[['black', 'cyan'],\n'white',\n['green', 'red']],\n[['magenta', 'yellow'],\n'blue']]]\nThe textwrap\nmodule formats paragraphs of text to fit a given screen\nwidth:\n>>> import textwrap\n>>> doc = \"\"\"The wrap() method is just like fill() except that it returns\n... a list of strings instead of one big string with newlines to separate\n... 
the wrapped lines.\"\"\"\n...\n>>> print(textwrap.fill(doc, width=40))\nThe wrap() method is just like fill()\nexcept that it returns a list of strings\ninstead of one big string with newlines\nto separate the wrapped lines.\nThe locale\nmodule accesses a database of culture specific data formats.\nThe grouping attribute of locale\u2019s format function provides a direct way of\nformatting numbers with group separators:\n>>> import locale\n>>> locale.setlocale(locale.LC_ALL, 'English_United States.1252')\n'English_United States.1252'\n>>> conv = locale.localeconv() # get a mapping of conventions\n>>> x = 1234567.8\n>>> locale.format_string(\"%d\", x, grouping=True)\n'1,234,567'\n>>> locale.format_string(\"%s%.*f\", (conv['currency_symbol'],\n... conv['frac_digits'], x), grouping=True)\n'$1,234,567.80'\n11.2. Templating\u00b6\nThe string\nmodule includes a versatile Template\nclass\nwith a simplified syntax suitable for editing by end-users. This allows users\nto customize their applications without having to alter the application.\nThe format uses placeholder names formed by $\nwith valid Python identifiers\n(alphanumeric characters and underscores). Surrounding the placeholder with\nbraces allows it to be followed by more alphanumeric letters with no intervening\nspaces. Writing $$\ncreates a single escaped $\n:\n>>> from string import Template\n>>> t = Template('${village}folk send $$10 to $cause.')\n>>> t.substitute(village='Nottingham', cause='the ditch fund')\n'Nottinghamfolk send $10 to the ditch fund.'\nThe substitute()\nmethod raises a KeyError\nwhen a\nplaceholder is not supplied in a dictionary or a keyword argument. 
For\nmail-merge style applications, user supplied data may be incomplete and the\nsafe_substitute()\nmethod may be more appropriate \u2014\nit will leave placeholders unchanged if data is missing:\n>>> t = Template('Return the $item to $owner.')\n>>> d = dict(item='unladen swallow')\n>>> t.substitute(d)\nTraceback (most recent call last):\n...\nKeyError: 'owner'\n>>> t.safe_substitute(d)\n'Return the unladen swallow to $owner.'\nTemplate subclasses can specify a custom delimiter. For example, a batch renaming utility for a photo browser may elect to use percent signs for placeholders such as the current date, image sequence number, or file format:\n>>> import time, os.path\n>>> photofiles = ['img_1074.jpg', 'img_1076.jpg', 'img_1077.jpg']\n>>> class BatchRename(Template):\n...     delimiter = '%'\n...\n>>> fmt = input('Enter rename style (%d-date %n-seqnum %f-format): ')\nEnter rename style (%d-date %n-seqnum %f-format): Ashley_%n%f\n>>> t = BatchRename(fmt)\n>>> date = time.strftime('%d%b%y')\n>>> for i, filename in enumerate(photofiles):\n...     base, ext = os.path.splitext(filename)\n...     newname = t.substitute(d=date, n=i, f=ext)\n...     print('{0} --> {1}'.format(filename, newname))\n...\nimg_1074.jpg --> Ashley_0.jpg\nimg_1076.jpg --> Ashley_1.jpg\nimg_1077.jpg --> Ashley_2.jpg\nAnother application for templating is separating program logic from the details of multiple output formats. This makes it possible to substitute custom templates for XML files, plain text reports, and HTML web reports.\n11.3. Working with Binary Data Record Layouts\u00b6\nThe struct\nmodule provides pack()\nand\nunpack()\nfunctions for working with variable length binary\nrecord formats. The following example shows\nhow to loop through header information in a ZIP file without using the\nzipfile\nmodule. Pack codes \"H\"\nand \"I\"\nrepresent two and four\nbyte unsigned numbers respectively. 
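The pack codes described here can be tried directly; a short sketch (not from the tutorial) packing and unpacking "I" and "H" values:

```python
import struct

# "<" selects little-endian byte order with standard sizes;
# "I" is a 4-byte unsigned int, "H" a 2-byte unsigned int.
packed = struct.pack('<IIH', 1, 2, 3)
print(len(packed))                 # 10 bytes: 4 + 4 + 2
print(struct.calcsize('<IIH'))     # 10
fields = struct.unpack('<IIH', packed)
print(fields)                      # (1, 2, 3)
```

calcsize() is the convenient way to confirm how many bytes a format string describes before slicing a buffer with it.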
The \"<\"\nindicates that they are\nstandard size and in little-endian byte order:\nimport struct\n\nwith open('myfile.zip', 'rb') as f:\n    data = f.read()\n\nstart = 0\nfor i in range(3):                      # show the first 3 file headers\n    start += 14\n    fields = struct.unpack('<IIIHH', data[start:start+16])\n    crc32, comp_size, uncomp_size, filenamesize, extra_size = fields\n\n    start += 16\n    filename = data[start:start+filenamesize]\n    start += filenamesize\n    extra = data[start:start+extra_size]\n    print(filename, hex(crc32), comp_size, uncomp_size)\n\n    start += extra_size + comp_size     # skip to the next header\n11.6. Weak References\u00b6\nPython does automatic memory management (reference counting for most objects and garbage collection to eliminate cycles). The memory is freed shortly after the last reference to it has been eliminated.\nThis approach works fine for most applications but occasionally there is a need to track objects only as long as they are being used by something else. Unfortunately, just tracking them creates a reference that makes them permanent. The weakref\nmodule provides tools for tracking objects without creating a reference. When the object is no longer needed, it is automatically removed from a weakref table and a callback is triggered for weakref objects. Typical applications include caching objects that are expensive to create:\n>>> import weakref, gc\n>>> class A:\n...     def __init__(self, value):\n...         self.value = value\n...     def __repr__(self):\n...         return str(self.value)\n...\n>>> a = A(10)                   # create a reference\n>>> d = weakref.WeakValueDictionary()\n>>> d['primary'] = a            # does not create a reference\n>>> d['primary']                # fetch the object if it is still alive\n10\n>>> del a                       # remove the one reference\n>>> gc.collect()                # run garbage collection right away\n0\n>>> d['primary']                # entry was automatically removed\nTraceback (most recent call last):\n  File \"<stdin>\", line 1, in <module>\n    d['primary']                # entry was automatically removed\n  File \"C:/python314/lib/weakref.py\", line 46, in __getitem__\n    o = self.data[key]()\nKeyError: 'primary'\n11.7. Tools for Working with Lists\u00b6\nMany data structure needs can be met with the built-in list type. However, sometimes there is a need for alternative implementations with different performance trade-offs.\nThe array\nmodule provides an array\nobject that is like\na list that stores only homogeneous data and stores it more compactly. The\nfollowing example shows an array of numbers stored as two byte unsigned binary\nnumbers (typecode \"H\"\n) rather than the usual 16 bytes per entry for regular\nlists of Python int objects:\n>>> from array import array\n>>> a = array('H', [4000, 10, 700, 22222])\n>>> sum(a)\n26932\n>>> a[1:3]\narray('H', [10, 700])\nThe collections\nmodule provides a deque\nobject\nthat is like a list with faster appends and pops from the left side but slower\nlookups in the middle. 
These objects are well suited for implementing queues\nand breadth first tree searches:\n>>> from collections import deque\n>>> d = deque([\"task1\", \"task2\", \"task3\"])\n>>> d.append(\"task4\")\n>>> print(\"Handling\", d.popleft())\nHandling task1\nunsearched = deque([starting_node])\n\ndef breadth_first_search(unsearched):\n    node = unsearched.popleft()\n    for m in gen_moves(node):\n        if is_goal(m):\n            return m\n        unsearched.append(m)\nIn addition to alternative list implementations, the library also offers other\ntools such as the bisect\nmodule with functions for manipulating sorted\nlists:\n>>> import bisect\n>>> scores = [(100, 'perl'), (200, 'tcl'), (400, 'lua'), (500, 'python')]\n>>> bisect.insort(scores, (300, 'ruby'))\n>>> scores\n[(100, 'perl'), (200, 'tcl'), (300, 'ruby'), (400, 'lua'), (500, 'python')]\nThe heapq\nmodule provides functions for implementing heaps based on\nregular lists. The lowest valued entry is always kept at position zero. This\nis useful for applications which repeatedly access the smallest element but do\nnot want to run a full list sort:\n>>> from heapq import heapify, heappop, heappush\n>>> data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]\n>>> heapify(data)                      # rearrange the list into heap order\n>>> heappush(data, -5)                 # add a new entry\n>>> [heappop(data) for i in range(3)]  # fetch the three smallest entries\n[-5, 0, 1]\n11.8. Decimal Floating-Point Arithmetic\u00b6\nThe decimal\nmodule offers a Decimal\ndatatype for\ndecimal floating-point arithmetic. 
Compared to the built-in float\nimplementation of binary floating point, the class is especially helpful for\nfinancial applications and other uses which require exact decimal representation,\ncontrol over precision,\ncontrol over rounding to meet legal or regulatory requirements,\ntracking of significant decimal places, or\napplications where the user expects the results to match calculations done by hand.\nFor example, calculating a 5% tax on a 70 cent phone charge gives different results in decimal floating point and binary floating point. The difference becomes significant if the results are rounded to the nearest cent:\n>>> from decimal import *\n>>> round(Decimal('0.70') * Decimal('1.05'), 2)\nDecimal('0.74')\n>>> round(.70 * 1.05, 2)\n0.73\nThe Decimal\nresult keeps a trailing zero, automatically\ninferring four place significance from multiplicands with two place\nsignificance. Decimal reproduces mathematics as done by hand and avoids\nissues that can arise when binary floating point cannot exactly represent\ndecimal quantities.\nExact representation enables the Decimal\nclass to perform\nmodulo calculations and equality tests that are unsuitable for binary floating\npoint:\n>>> Decimal('1.00') % Decimal('.10')\nDecimal('0.00')\n>>> 1.00 % 0.10\n0.09999999999999995\n>>> sum([Decimal('0.1')]*10) == Decimal('1.0')\nTrue\n>>> 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 == 1.0\nFalse\nThe decimal\nmodule provides arithmetic with as much precision as needed:\n>>> getcontext().prec = 36\n>>> Decimal(1) / Decimal(7)\nDecimal('0.142857142857142857142857142857142857')", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3436}
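Rounding a Decimal to a fixed number of places, as the financial examples above require, is done with the quantize() method and an explicit rounding mode. A brief sketch (not part of the tutorial text):

```python
from decimal import Decimal, ROUND_HALF_UP

total = Decimal('0.70') * Decimal('1.05')   # exact product: Decimal('0.7350')
cents = total.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
print(cents)                                # prints 0.74
```

Choosing the rounding mode explicitly (ROUND_HALF_UP, ROUND_HALF_EVEN, and so on) is what lets Decimal meet the "legal or regulatory" rounding rules mentioned above, instead of inheriting the platform float behavior.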
{"url": "https://docs.python.org/3/tutorial/stdlib.html", "title": "Brief Tour of the Standard Library", "content": "10. Brief Tour of the Standard Library\u00b6\n10.1. Operating System Interface\u00b6\nThe os\nmodule provides dozens of functions for interacting with the\noperating system:\n>>> import os\n>>> os.getcwd()      # Return the current working directory\n'C:\\\\Python314'\n>>> os.chdir('/server/accesslogs')   # Change current working directory\n>>> os.system('mkdir today')   # Run the command mkdir in the system shell\n0\nBe sure to use the import os\nstyle instead of from os import *\n. This\nwill keep os.open()\nfrom shadowing the built-in open()\nfunction which\noperates much differently.\nThe built-in dir()\nand help()\nfunctions are useful as interactive\naids for working with large modules like os\n:\n>>> import os\n>>> dir(os)\n<returns a list of all module functions>\n>>> help(os)\n<extensive manual page created from the module's docstrings>\nFor daily file and directory management tasks, the shutil\nmodule provides\na higher level interface that is easier to use:\n>>> import shutil\n>>> shutil.copyfile('data.db', 'archive.db')\n'archive.db'\n>>> shutil.move('/build/executables', 'installdir')\n'installdir'\n10.2. File Wildcards\u00b6\nThe glob\nmodule provides a function for making file lists from directory\nwildcard searches:\n>>> import glob\n>>> glob.glob('*.py')\n['primes.py', 'random.py', 'quote.py']\n10.3. Command Line Arguments\u00b6\nCommon utility scripts often need to process command line arguments. These\narguments are stored in the sys\nmodule\u2019s argv attribute as a list. For\ninstance, let\u2019s take the following demo.py\nfile:\n# File demo.py\nimport sys\nprint(sys.argv)\nHere is the output from running python demo.py one two three\nat the command\nline:\n['demo.py', 'one', 'two', 'three']\nThe argparse\nmodule provides a more sophisticated mechanism to process\ncommand line arguments. 
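The shutil and glob calls shown above can be combined in a few lines; a small sketch run inside a temporary directory so no real files are touched (the file names are invented for illustration):

```python
import glob
import os
import shutil
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Create a few empty files to work with.
    for name in ('a.py', 'b.py', 'notes.txt'):
        open(os.path.join(tmp, name), 'w').close()

    # glob matches against the wildcard pattern, like a shell would.
    matches = sorted(glob.glob(os.path.join(tmp, '*.py')))
    print([os.path.basename(m) for m in matches])   # ['a.py', 'b.py']

    # shutil gives the higher-level copy operation.
    shutil.copyfile(matches[0], os.path.join(tmp, 'backup.py'))
    listing = sorted(os.listdir(tmp))
    print(listing)
```

Everything is cleaned up automatically when the TemporaryDirectory context exits.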
The following script extracts one or more filenames\nand an optional number of lines to be displayed:\nimport argparse\n\nparser = argparse.ArgumentParser(\n    prog='top',\n    description='Show top lines from each file')\nparser.add_argument('filenames', nargs='+')\nparser.add_argument('-l', '--lines', type=int, default=10)\nargs = parser.parse_args()\nprint(args)\nWhen run at the command line with python top.py --lines=5 alpha.txt\nbeta.txt\n, the script sets args.lines\nto 5\nand args.filenames\nto ['alpha.txt', 'beta.txt']\n.\n10.4. Error Output Redirection and Program Termination\u00b6\nThe sys\nmodule also has attributes for stdin, stdout, and stderr.\nThe latter is useful for emitting warnings and error messages to make them\nvisible even when stdout has been redirected:\n>>> sys.stderr.write('Warning, log file not found starting a new one\\n')\nWarning, log file not found starting a new one\nThe most direct way to terminate a script is to use sys.exit()\n.\n10.5. String Pattern Matching\u00b6\nThe re\nmodule provides regular expression tools for advanced string\nprocessing. For complex matching and manipulation, regular expressions offer\nsuccinct, optimized solutions:\n>>> import re\n>>> re.findall(r'\\bf[a-z]*', 'which foot or hand fell fastest')\n['foot', 'fell', 'fastest']\n>>> re.sub(r'(\\b[a-z]+) \\1', r'\\1', 'cat in the the hat')\n'cat in the hat'\nWhen only simple capabilities are needed, string methods are preferred because they are easier to read and debug:\n>>> 'tea for too'.replace('too', 'two')\n'tea for two'\n10.6. 
Mathematics\u00b6\nThe math\nmodule gives access to the underlying C library functions for\nfloating-point math:\n>>> import math\n>>> math.cos(math.pi / 4)\n0.70710678118654757\n>>> math.log(1024, 2)\n10.0\nThe random\nmodule provides tools for making random selections:\n>>> import random\n>>> random.choice(['apple', 'pear', 'banana'])\n'apple'\n>>> random.sample(range(100), 10)   # sampling without replacement\n[30, 83, 16, 4, 8, 81, 41, 50, 18, 33]\n>>> random.random()    # random float from the interval [0.0, 1.0)\n0.17970987693706186\n>>> random.randrange(6)    # random integer chosen from range(6)\n4\nThe statistics\nmodule calculates basic statistical properties\n(the mean, median, variance, etc.) of numeric data:\n>>> import statistics\n>>> data = [2.75, 1.75, 1.25, 0.25, 0.5, 1.25, 3.5]\n>>> statistics.mean(data)\n1.6071428571428572\n>>> statistics.median(data)\n1.25\n>>> statistics.variance(data)\n1.3720238095238095\nThe SciPy project has many other modules for numerical computations.\n10.7. Internet Access\u00b6\nThere are a number of modules for accessing the internet and processing internet\nprotocols. Two of the simplest are urllib.request\nfor retrieving data\nfrom URLs and smtplib\nfor sending mail:\n>>> from urllib.request import urlopen\n>>> with urlopen('https://docs.python.org/3/') as response:\n...     for line in response:\n...         line = line.decode()             # Convert bytes to a str\n...         if 'updated' in line:\n...             print(line.rstrip())         # Remove trailing newline\n...\nLast updated on Nov 11, 2025 (20:11 UTC).\n>>> import smtplib\n>>> server = smtplib.SMTP('localhost')\n>>> server.sendmail('soothsayer@example.org', 'jcaesar@example.org',\n... \"\"\"To: jcaesar@example.org\n... From: soothsayer@example.org\n...\n... Beware the Ides of March.\n... \"\"\")\n>>> server.quit()\n(Note that the second example needs a mailserver running on localhost.)\n10.8. 
Dates and Times\u00b6\nThe datetime\nmodule supplies classes for manipulating dates and times in\nboth simple and complex ways. While date and time arithmetic is supported, the\nfocus of the implementation is on efficient member extraction for output\nformatting and manipulation. The module also supports objects that are timezone\naware.\n>>> # dates are easily constructed and formatted\n>>> from datetime import date\n>>> now = date.today()\n>>> now\ndatetime.date(2003, 12, 2)\n>>> now.strftime(\"%m-%d-%y. %d %b %Y is a %A on the %d day of %B.\")\n'12-02-03. 02 Dec 2003 is a Tuesday on the 02 day of December.'\n>>> # dates support calendar arithmetic\n>>> birthday = date(1964, 7, 31)\n>>> age = now - birthday\n>>> age.days\n14368\n10.9. Data Compression\u00b6\nCommon data archiving and compression formats are directly supported by modules\nincluding: zlib\n, gzip\n, bz2\n, lzma\n, zipfile\nand\ntarfile\n.\n>>> import zlib\n>>> s = b'witch which has which witches wrist watch'\n>>> len(s)\n41\n>>> t = zlib.compress(s)\n>>> len(t)\n37\n>>> zlib.decompress(t)\nb'witch which has which witches wrist watch'\n>>> zlib.crc32(s)\n226805979\n10.10. Performance Measurement\u00b6\nSome Python users develop a deep interest in knowing the relative performance of different approaches to the same problem. Python provides a measurement tool that answers those questions immediately.\nFor example, it may be tempting to use the tuple packing and unpacking feature\ninstead of the traditional approach to swapping arguments. The timeit\nmodule quickly demonstrates a modest performance advantage:\n>>> from timeit import Timer\n>>> Timer('t=a; a=b; b=t', 'a=1; b=2').timeit()\n0.57535828626024577\n>>> Timer('a,b = b,a', 'a=1; b=2').timeit()\n0.54962537085770791\nIn contrast to timeit\n\u2019s fine level of granularity, the profile\nand\npstats\nmodules provide tools for identifying time critical sections in\nlarger blocks of code.\n10.11. 
Quality Control\u00b6\nOne approach for developing high quality software is to write tests for each function as it is developed and to run those tests frequently during the development process.\nThe doctest\nmodule provides a tool for scanning a module and validating\ntests embedded in a program\u2019s docstrings. Test construction is as simple as\ncutting-and-pasting a typical call along with its results into the docstring.\nThis improves the documentation by providing the user with an example and it\nallows the doctest module to make sure the code remains true to the\ndocumentation:\ndef average(values):\n    \"\"\"Computes the arithmetic mean of a list of numbers.\n\n    >>> print(average([20, 30, 70]))\n    40.0\n    \"\"\"\n    return sum(values) / len(values)\n\nimport doctest\ndoctest.testmod()   # automatically validate the embedded tests\nThe unittest\nmodule is not as effortless as the doctest\nmodule,\nbut it allows a more comprehensive set of tests to be maintained in a separate\nfile:\nimport unittest\n\nclass TestStatisticalFunctions(unittest.TestCase):\n\n    def test_average(self):\n        self.assertEqual(average([20, 30, 70]), 40.0)\n        self.assertEqual(round(average([1, 5, 7]), 1), 4.3)\n        with self.assertRaises(ZeroDivisionError):\n            average([])\n        with self.assertRaises(TypeError):\n            average(20, 30, 70)\n\nunittest.main()  # Calling from the command line invokes all tests\n10.12. Batteries Included\u00b6\nPython has a \u201cbatteries included\u201d philosophy. This is best seen through the sophisticated and robust capabilities of its larger packages. For example:\nThe\nxmlrpc.client\nand xmlrpc.server\nmodules make implementing remote procedure calls into an almost trivial task. Despite the modules\u2019 names, no direct knowledge or handling of XML is needed. The\nemail\npackage is a library for managing email messages, including MIME and other RFC 5322-based message documents. 
Unlike smtplib\nand poplib\nwhich actually send and receive messages, the email package has a complete toolset for building or decoding complex message structures (including attachments) and for implementing internet encoding and header protocols. The\njson\npackage provides robust support for parsing this popular data interchange format. The csv\nmodule supports direct reading and writing of files in Comma-Separated Value format, commonly supported by databases and spreadsheets. XML processing is supported by the xml.etree.ElementTree\n, xml.dom\nand xml.sax\npackages. Together, these modules and packages greatly simplify data interchange between Python applications and other tools. The\nsqlite3\nmodule is a wrapper for the SQLite database library, providing a persistent database that can be updated and accessed using slightly nonstandard SQL syntax. Internationalization is supported by a number of modules including\ngettext\n, locale\n, and the codecs\npackage.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2426}
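The json support mentioned under "Batteries Included" amounts to a simple round trip between Python objects and text. A minimal sketch (the record contents are invented for illustration):

```python
import json

record = {'name': 'swallow', 'laden': False, 'airspeed': [9, 11]}

# dumps() serializes to a str; indent/sort_keys make the output stable
# and human-readable.
text = json.dumps(record, indent=2, sort_keys=True)
print(text)

# loads() parses the text back into the equivalent Python objects.
print(json.loads(text) == record)   # True
```

Note the type mapping: JSON false/true become Python False/True, JSON arrays become lists, and JSON objects become dicts with string keys.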
{"url": "https://docs.python.org/3/tutorial/classes.html", "title": "Classes", "content": "9. Classes\u00b6\nClasses provide a means of bundling data and functionality together. Creating a new class creates a new type of object, allowing new instances of that type to be made. Each class instance can have attributes attached to it for maintaining its state. Class instances can also have methods (defined by its class) for modifying its state.\nCompared with other programming languages, Python\u2019s class mechanism adds classes with a minimum of new syntax and semantics. It is a mixture of the class mechanisms found in C++ and Modula-3. Python classes provide all the standard features of Object Oriented Programming: the class inheritance mechanism allows multiple base classes, a derived class can override any methods of its base class or classes, and a method can call the method of a base class with the same name. Objects can contain arbitrary amounts and kinds of data. As is true for modules, classes partake of the dynamic nature of Python: they are created at runtime, and can be modified further after creation.\nIn C++ terminology, normally class members (including the data members) are public (except see below Private Variables), and all member functions are virtual. As in Modula-3, there are no shorthands for referencing the object\u2019s members from its methods: the method function is declared with an explicit first argument representing the object, which is provided implicitly by the call. As in Smalltalk, classes themselves are objects. This provides semantics for importing and renaming. Unlike C++ and Modula-3, built-in types can be used as base classes for extension by the user. Also, like in C++, most built-in operators with special syntax (arithmetic operators, subscripting etc.) can be redefined for class instances.\n(Lacking universally accepted terminology to talk about classes, I will make occasional use of Smalltalk and C++ terms. 
I would use Modula-3 terms, since its object-oriented semantics are closer to those of Python than C++, but I expect that few readers have heard of it.)\n9.1. A Word About Names and Objects\u00b6\nObjects have individuality, and multiple names (in multiple scopes) can be bound to the same object. This is known as aliasing in other languages. This is usually not appreciated on a first glance at Python, and can be safely ignored when dealing with immutable basic types (numbers, strings, tuples). However, aliasing has a possibly surprising effect on the semantics of Python code involving mutable objects such as lists, dictionaries, and most other types. This is usually used to the benefit of the program, since aliases behave like pointers in some respects. For example, passing an object is cheap since only a pointer is passed by the implementation; and if a function modifies an object passed as an argument, the caller will see the change \u2014 this eliminates the need for two different argument passing mechanisms as in Pascal.\n9.2. Python Scopes and Namespaces\u00b6\nBefore introducing classes, I first have to tell you something about Python\u2019s scope rules. Class definitions play some neat tricks with namespaces, and you need to know how scopes and namespaces work to fully understand what\u2019s going on. Incidentally, knowledge about this subject is useful for any advanced Python programmer.\nLet\u2019s begin with some definitions.\nA namespace is a mapping from names to objects. Most namespaces are currently\nimplemented as Python dictionaries, but that\u2019s normally not noticeable in any\nway (except for performance), and it may change in the future. Examples of\nnamespaces are: the set of built-in names (containing functions such as abs()\n, and\nbuilt-in exception names); the global names in a module; and the local names in\na function invocation. In a sense the set of attributes of an object also form\na namespace. 
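The claim that most namespaces are implemented as dictionaries can be observed directly with vars(). A short sketch (the Box class and colour attribute are invented for illustration):

```python
import builtins
import math

# A module's namespace is exposed as an ordinary dict via vars()/__dict__:
print(type(vars(math)) is dict)        # True
print(vars(math)['pi'] is math.pi)     # True

# Built-in names such as abs live in the builtins module's namespace:
print('abs' in vars(builtins))         # True

# An object's attributes form a namespace too:
class Box:
    pass

b = Box()
b.colour = 'red'
print(vars(b))                         # {'colour': 'red'}
```

The dict is an implementation detail, as the text notes, so treat this as a way to inspect namespaces rather than something to rely on for mutation.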
The important thing to know about namespaces is that there is\nabsolutely no relation between names in different namespaces; for instance, two\ndifferent modules may both define a function maximize\nwithout confusion \u2014\nusers of the modules must prefix it with the module name.\nBy the way, I use the word attribute for any name following a dot \u2014 for\nexample, in the expression z.real\n, real\nis an attribute of the object\nz\n. Strictly speaking, references to names in modules are attribute\nreferences: in the expression modname.funcname\n, modname\nis a module\nobject and funcname\nis an attribute of it. In this case there happens to be\na straightforward mapping between the module\u2019s attributes and the global names\ndefined in the module: they share the same namespace! [1]\nAttributes may be read-only or writable. In the latter case, assignment to\nattributes is possible. Module attributes are writable: you can write\nmodname.the_answer = 42\n. Writable attributes may also be deleted with the\ndel\nstatement. For example, del modname.the_answer\nwill remove\nthe attribute the_answer\nfrom the object named by modname\n.\nNamespaces are created at different moments and have different lifetimes. The\nnamespace containing the built-in names is created when the Python interpreter\nstarts up, and is never deleted. The global namespace for a module is created\nwhen the module definition is read in; normally, module namespaces also last\nuntil the interpreter quits. The statements executed by the top-level\ninvocation of the interpreter, either read from a script file or interactively,\nare considered part of a module called __main__\n, so they have their own\nglobal namespace. (The built-in names actually also live in a module; this is\ncalled builtins\n.)\nThe local namespace for a function is created when the function is called, and deleted when the function returns or raises an exception that is not handled within the function. 
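The modname.the_answer = 42 example above can be exercised without creating a real module file by building a throwaway module object with types.ModuleType (an assumption for illustration; any imported module behaves the same way):

```python
import types

modname = types.ModuleType('demo')     # a throwaway module object
modname.the_answer = 42                # module attributes are writable
print(modname.the_answer)              # 42
del modname.the_answer                 # ... and deletable with del
print(hasattr(modname, 'the_answer'))  # False
```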
(Actually, forgetting would be a better way to describe what actually happens.) Of course, recursive invocations each have their own local namespace.\nA scope is a textual region of a Python program where a namespace is directly accessible. \u201cDirectly accessible\u201d here means that an unqualified reference to a name attempts to find the name in the namespace.\nAlthough scopes are determined statically, they are used dynamically. At any time during execution, there are 3 or 4 nested scopes whose namespaces are directly accessible:\nthe innermost scope, which is searched first, contains the local names\nthe scopes of any enclosing functions, which are searched starting with the nearest enclosing scope, contain non-local, but also non-global names\nthe next-to-last scope contains the current module\u2019s global names\nthe outermost scope (searched last) is the namespace containing built-in names\nIf a name is declared global, then all references and assignments go directly to\nthe next-to-last scope containing the module\u2019s global names. To rebind variables\nfound outside of the innermost scope, the nonlocal\nstatement can be\nused; if not declared nonlocal, those variables are read-only (an attempt to\nwrite to such a variable will simply create a new local variable in the\ninnermost scope, leaving the identically named outer variable unchanged).\nUsually, the local scope references the local names of the (textually) current function. Outside functions, the local scope references the same namespace as the global scope: the module\u2019s namespace. Class definitions place yet another namespace in the local scope.\nIt is important to realize that scopes are determined textually: the global scope of a function defined in a module is that module\u2019s namespace, no matter from where or by what alias the function is called. 
On the other hand, the actual search for names is done dynamically, at run time \u2014 however, the language definition is evolving towards static name resolution, at \u201ccompile\u201d time, so don\u2019t rely on dynamic name resolution! (In fact, local variables are already determined statically.)\nA special quirk of Python is that \u2013 if no global\nor nonlocal\nstatement is in effect \u2013 assignments to names always go into the innermost scope.\nAssignments do not copy data \u2014 they just bind names to objects. The same is true\nfor deletions: the statement del x\nremoves the binding of x\nfrom the\nnamespace referenced by the local scope. In fact, all operations that introduce\nnew names use the local scope: in particular, import\nstatements and\nfunction definitions bind the module or function name in the local scope.\nThe global\nstatement can be used to indicate that particular\nvariables live in the global scope and should be rebound there; the\nnonlocal\nstatement indicates that particular variables live in\nan enclosing scope and should be rebound there.\n9.2.1. Scopes and Namespaces Example\u00b6\nThis is an example demonstrating how to reference the different scopes and\nnamespaces, and how global\nand nonlocal\naffect variable\nbinding:\ndef scope_test():\n    def do_local():\n        spam = \"local spam\"\n\n    def do_nonlocal():\n        nonlocal spam\n        spam = \"nonlocal spam\"\n\n    def do_global():\n        global spam\n        spam = \"global spam\"\n\n    spam = \"test spam\"\n    do_local()\n    print(\"After local assignment:\", spam)\n    do_nonlocal()\n    print(\"After nonlocal assignment:\", spam)\n    do_global()\n    print(\"After global assignment:\", spam)\n\nscope_test()\nprint(\"In global scope:\", spam)\nThe output of the example code is:\nAfter local assignment: test spam\nAfter nonlocal assignment: nonlocal spam\nAfter global assignment: nonlocal spam\nIn global scope: global spam\nNote how the local assignment (which is default) didn\u2019t change scope_test's\nbinding of spam. 
The nonlocal\nassignment changed scope_test's\nbinding of spam, and the global\nassignment changed the module-level\nbinding.\nYou can also see that there was no previous binding for spam before the\nglobal\nassignment.\n9.3. A First Look at Classes\u00b6\nClasses introduce a little bit of new syntax, three new object types, and some new semantics.\n9.3.1. Class Definition Syntax\u00b6\nThe simplest form of class definition looks like this:\nclass ClassName:\n    <statement-1>\n    .\n    .\n    .\n    <statement-N>\nClass definitions, like function definitions (def\nstatements) must be\nexecuted before they have any effect. (You could conceivably place a class\ndefinition in a branch of an if\nstatement, or inside a function.)\nIn practice, the statements inside a class definition will usually be function definitions, but other statements are allowed, and sometimes useful \u2014 we\u2019ll come back to this later. The function definitions inside a class normally have a peculiar form of argument list, dictated by the calling conventions for methods \u2014 again, this is explained later.\nWhen a class definition is entered, a new namespace is created, and used as the local scope \u2014 thus, all assignments to local variables go into this new namespace. In particular, function definitions bind the name of the new function here.\nWhen a class definition is left normally (via the end), a class object is\ncreated. This is basically a wrapper around the contents of the namespace\ncreated by the class definition; we\u2019ll learn more about class objects in the\nnext section. The original local scope (the one in effect just before the class\ndefinition was entered) is reinstated, and the class object is bound here to the\nclass name given in the class definition header (ClassName\nin the\nexample).\n9.3.2. Class Objects\u00b6\nClass objects support two kinds of operations: attribute references and instantiation.\nAttribute references use the standard syntax used for all attribute references\nin Python: obj.name\n. 
Valid attribute names are all the names that were in the class's namespace when the class object was created. So, if the class definition looked like this:

class MyClass:
    """A simple example class"""
    i = 12345

    def f(self):
        return 'hello world'

then MyClass.i and MyClass.f are valid attribute references, returning an integer and a function object, respectively. Class attributes can also be assigned to, so you can change the value of MyClass.i by assignment. __doc__ is also a valid attribute, returning the docstring belonging to the class: "A simple example class".

Class instantiation uses function notation. Just pretend that the class object is a parameterless function that returns a new instance of the class. For example (assuming the above class):

x = MyClass()

creates a new instance of the class and assigns this object to the local variable x.

The instantiation operation ("calling" a class object) creates an empty object. Many classes like to create objects with instances customized to a specific initial state. Therefore a class may define a special method named __init__(), like this:

def __init__(self):
    self.data = []

When a class defines an __init__() method, class instantiation automatically invokes __init__() for the newly created class instance. So in this example, a new, initialized instance can be obtained by:

x = MyClass()

Of course, the __init__() method may have arguments for greater flexibility. In that case, arguments given to the class instantiation operator are passed on to __init__(). For example,

>>> class Complex:
...     def __init__(self, realpart, imagpart):
...         self.r = realpart
...         self.i = imagpart
...
>>> x = Complex(3.0, -4.5)
>>> x.r, x.i
(3.0, -4.5)

9.3.3. Instance Objects¶

Now what can we do with instance objects? The only operations understood by instance objects are attribute references.
There are two kinds of valid attribute names: data attributes and methods.

Data attributes correspond to "instance variables" in Smalltalk, and to "data members" in C++. Data attributes need not be declared; like local variables, they spring into existence when they are first assigned to. For example, if x is the instance of MyClass created above, the following piece of code will print the value 16, without leaving a trace:

x.counter = 1
while x.counter < 10:
    x.counter = x.counter * 2
print(x.counter)
del x.counter

The other kind of instance attribute reference is a method. A method is a function that "belongs to" an object.

Valid method names of an instance object depend on its class. By definition, all attributes of a class that are function objects define corresponding methods of its instances. So in our example, x.f is a valid method reference, since MyClass.f is a function, but x.i is not, since MyClass.i is not. But x.f is not the same thing as MyClass.f — it is a method object, not a function object.

9.3.4. Method Objects¶

Usually, a method is called right after it is bound:

x.f()

If x = MyClass(), as above, this will return the string 'hello world'. However, it is not necessary to call a method right away: x.f is a method object, and can be stored away and called at a later time. For example:

xf = x.f
while True:
    print(xf())

will continue to print hello world until the end of time.

What exactly happens when a method is called? You may have noticed that x.f() was called without an argument above, even though the function definition for f() specified an argument.
What happened to the argument? Surely Python raises an exception when a function that requires an argument is called without any — even if the argument isn't actually used…

Actually, you may have guessed the answer: the special thing about methods is that the instance object is passed as the first argument of the function. In our example, the call x.f() is exactly equivalent to MyClass.f(x). In general, calling a method with a list of n arguments is equivalent to calling the corresponding function with an argument list that is created by inserting the method's instance object before the first argument.

In general, methods work as follows. When a non-data attribute of an instance is referenced, the instance's class is searched. If the name denotes a valid class attribute that is a function object, references to both the instance object and the function object are packed into a method object. When the method object is called with an argument list, a new argument list is constructed from the instance object and the argument list, and the function object is called with this new argument list.

9.3.5. Class and Instance Variables¶

Generally speaking, instance variables are for data unique to each instance and class variables are for attributes and methods shared by all instances of the class:

class Dog:
    kind = 'canine'         # class variable shared by all instances

    def __init__(self, name):
        self.name = name    # instance variable unique to each instance

>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.kind                  # shared by all dogs
'canine'
>>> e.kind                  # shared by all dogs
'canine'
>>> d.name                  # unique to d
'Fido'
>>> e.name                  # unique to e
'Buddy'

As discussed in A Word About Names and Objects, shared data can have possibly surprising effects involving mutable objects such as lists and dictionaries.
For example, the tricks list in the following code should not be used as a class variable because just a single list would be shared by all Dog instances:

class Dog:
    tricks = []             # mistaken use of a class variable

    def __init__(self, name):
        self.name = name

    def add_trick(self, trick):
        self.tricks.append(trick)

>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.add_trick('roll over')
>>> e.add_trick('play dead')
>>> d.tricks                # unexpectedly shared by all dogs
['roll over', 'play dead']

Correct design of the class should use an instance variable instead:

class Dog:
    def __init__(self, name):
        self.name = name
        self.tricks = []    # creates a new empty list for each dog

    def add_trick(self, trick):
        self.tricks.append(trick)

>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> d.add_trick('roll over')
>>> e.add_trick('play dead')
>>> d.tricks
['roll over']
>>> e.tricks
['play dead']

9.4. Random Remarks¶

If the same attribute name occurs in both an instance and in a class, then attribute lookup prioritizes the instance:

>>> class Warehouse:
...     purpose = 'storage'
...     region = 'west'
...
>>> w1 = Warehouse()
>>> print(w1.purpose, w1.region)
storage west
>>> w2 = Warehouse()
>>> w2.region = 'east'
>>> print(w2.purpose, w2.region)
storage east

Data attributes may be referenced by methods as well as by ordinary users ("clients") of an object. In other words, classes are not usable to implement pure abstract data types. In fact, nothing in Python makes it possible to enforce data hiding — it is all based upon convention. (On the other hand, the Python implementation, written in C, can completely hide implementation details and control access to an object if necessary; this can be used by extensions to Python written in C.)

Clients should use data attributes with care — clients may mess up invariants maintained by the methods by stamping on their data attributes.
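To illustrate that last point with a made-up class (not one from the tutorial): nothing in the language stops a client from reassigning a data attribute that every method depends on:

```python
class Stack:
    """Tiny illustrative class; its methods assume self.items is a list."""
    def __init__(self):
        self.items = []

    def push(self, x):
        self.items.append(x)

    def pop(self):
        return self.items.pop()

s = Stack()
s.push(1)
s.items = None    # a careless client stamps on the data attribute...
# ...after which s.push(2) would raise AttributeError, because the
# method's invariant (self.items is a list) has been destroyed
```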
Note that clients may add data attributes of their own to an instance object without affecting the validity of the methods, as long as name conflicts are avoided — again, a naming convention can save a lot of headaches here.

There is no shorthand for referencing data attributes (or other methods!) from within methods. I find that this actually increases the readability of methods: there is no chance of confusing local variables and instance variables when glancing through a method.

Often, the first argument of a method is called self. This is nothing more than a convention: the name self has absolutely no special meaning to Python. Note, however, that by not following the convention your code may be less readable to other Python programmers, and it is also conceivable that a class browser program might be written that relies upon such a convention.

Any function object that is a class attribute defines a method for instances of that class. It is not necessary that the function definition is textually enclosed in the class definition: assigning a function object to a local variable in the class is also ok. For example:

# Function defined outside the class
def f1(self, x, y):
    return min(x, x+y)

class C:
    f = f1

    def g(self):
        return 'hello world'

    h = g

Now f, g and h are all attributes of class C that refer to function objects, and consequently they are all methods of instances of C — h being exactly equivalent to g. Note that this practice usually only serves to confuse the reader of a program.

Methods may call other methods by using method attributes of the self argument:

class Bag:
    def __init__(self):
        self.data = []

    def add(self, x):
        self.data.append(x)

    def addtwice(self, x):
        self.add(x)
        self.add(x)

Methods may reference global names in the same way as ordinary functions. The global scope associated with a method is the module containing its definition. (A class is never used as a global scope.)
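A short sketch of this (the class and names are invented for illustration): a method can freely use a module imported at module level, because its global scope is the module containing the def, never the class body:

```python
import math   # imported into the module's global scope

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        # math is resolved in the enclosing module's global scope;
        # the class namespace is not searched for it
        return math.pi * self.radius ** 2
```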
While one rarely encounters a good reason for using global data in a method, there are many legitimate uses of the global scope: for one thing, functions and modules imported into the global scope can be used by methods, as well as functions and classes defined in it. Usually, the class containing the method is itself defined in this global scope, and in the next section we'll find some good reasons why a method would want to reference its own class.

Each value is an object, and therefore has a class (also called its type). It is stored as object.__class__.

9.5. Inheritance¶

Of course, a language feature would not be worthy of the name "class" without supporting inheritance. The syntax for a derived class definition looks like this:

class DerivedClassName(BaseClassName):
    <statement-1>
    .
    .
    .
    <statement-N>

The name BaseClassName must be defined in a namespace accessible from the scope containing the derived class definition. In place of a base class name, other arbitrary expressions are also allowed. This can be useful, for example, when the base class is defined in another module:

class DerivedClassName(modname.BaseClassName):

Execution of a derived class definition proceeds the same as for a base class. When the class object is constructed, the base class is remembered. This is used for resolving attribute references: if a requested attribute is not found in the class, the search proceeds to look in the base class. This rule is applied recursively if the base class itself is derived from some other class.

There's nothing special about instantiation of derived classes: DerivedClassName() creates a new instance of the class. Method references are resolved as follows: the corresponding class attribute is searched, descending down the chain of base classes if necessary, and the method reference is valid if this yields a function object.

Derived classes may override methods of their base classes.
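A minimal sketch of overriding, with class names invented for illustration:

```python
class Animal:
    def speak(self):
        return "..."

    def describe(self):
        # self.speak is looked up on the instance's class at call time,
        # so a derived class's override is the one that runs here
        return "it says " + self.speak()

class Dog(Animal):
    def speak(self):          # overrides Animal.speak
        return "woof"

print(Dog().describe())   # it says woof
```

Note that describe(), defined only in the base class, ends up calling the derived class's speak().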
Because methods have no special privileges when calling other methods of the same object, a method of a base class that calls another method defined in the same base class may end up calling a method of a derived class that overrides it. (For C++ programmers: all methods in Python are effectively virtual.)

An overriding method in a derived class may in fact want to extend rather than simply replace the base class method of the same name. There is a simple way to call the base class method directly: just call BaseClassName.methodname(self, arguments). This is occasionally useful to clients as well. (Note that this only works if the base class is accessible as BaseClassName in the global scope.)

Python has two built-in functions that work with inheritance:

Use isinstance() to check an instance's type: isinstance(obj, int) will be True only if obj.__class__ is int or some class derived from int.

Use issubclass() to check class inheritance: issubclass(bool, int) is True since bool is a subclass of int. However, issubclass(float, int) is False since float is not a subclass of int.

9.5.1. Multiple Inheritance¶

Python supports a form of multiple inheritance as well. A class definition with multiple base classes looks like this:

class DerivedClassName(Base1, Base2, Base3):
    <statement-1>
    .
    .
    .
    <statement-N>

For most purposes, in the simplest cases, you can think of the search for attributes inherited from a parent class as depth-first, left-to-right, not searching twice in the same class where there is an overlap in the hierarchy. Thus, if an attribute is not found in DerivedClassName, it is searched for in Base1, then (recursively) in the base classes of Base1, and if it is not found there, it is searched for in Base2, and so on.

In fact, it is slightly more complex than that; the method resolution order changes dynamically to support cooperative calls to super().
This approach is known in some other multiple-inheritance languages as call-next-method and is more powerful than the super call found in single-inheritance languages.

Dynamic ordering is necessary because all cases of multiple inheritance exhibit one or more diamond relationships (where at least one of the parent classes can be accessed through multiple paths from the bottommost class). For example, all classes inherit from object, so any case of multiple inheritance provides more than one path to reach object. To keep the base classes from being accessed more than once, the dynamic algorithm linearizes the search order in a way that preserves the left-to-right ordering specified in each class, that calls each parent only once, and that is monotonic (meaning that a class can be subclassed without affecting the precedence order of its parents). Taken together, these properties make it possible to design reliable and extensible classes with multiple inheritance. For more detail, see The Python 2.3 Method Resolution Order.

9.6. Private Variables¶

"Private" instance variables that cannot be accessed except from inside an object don't exist in Python. However, there is a convention that is followed by most Python code: a name prefixed with an underscore (e.g. _spam) should be treated as a non-public part of the API (whether it is a function, a method or a data member). It should be considered an implementation detail and subject to change without notice.

Since there is a valid use case for class-private members (namely to avoid name clashes with names defined by subclasses), there is limited support for such a mechanism, called name mangling. Any identifier of the form __spam (at least two leading underscores, at most one trailing underscore) is textually replaced with _classname__spam, where classname is the current class name with leading underscore(s) stripped.
This mangling is done without regard to the syntactic position of the identifier, as long as it occurs within the definition of a class.

See also: The private name mangling specifications, for details and special cases.

Name mangling is helpful for letting subclasses override methods without breaking intraclass method calls. For example:

class Mapping:
    def __init__(self, iterable):
        self.items_list = []
        self.__update(iterable)

    def update(self, iterable):
        for item in iterable:
            self.items_list.append(item)

    __update = update   # private copy of original update() method

class MappingSubclass(Mapping):

    def update(self, keys, values):
        # provides new signature for update()
        # but does not break __init__()
        for item in zip(keys, values):
            self.items_list.append(item)

The above example would work even if MappingSubclass were to introduce a __update identifier, since it is replaced with _Mapping__update in the Mapping class and _MappingSubclass__update in the MappingSubclass class respectively.

Note that the mangling rules are designed mostly to avoid accidents; it still is possible to access or modify a variable that is considered private. This can even be useful in special circumstances, such as in the debugger.

Notice that code passed to exec() or eval() does not consider the classname of the invoking class to be the current class; this is similar to the effect of the global statement, the effect of which is likewise restricted to code that is byte-compiled together. The same restriction applies to getattr(), setattr() and delattr(), as well as when referencing __dict__ directly.

9.7. Odds and Ends¶

Sometimes it is useful to have a data type similar to the Pascal "record" or C "struct", bundling together a few named data items.
The idiomatic approach is to use dataclasses for this purpose:

from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    dept: str
    salary: int

>>> john = Employee('john', 'computer lab', 1000)
>>> john.dept
'computer lab'
>>> john.salary
1000

A piece of Python code that expects a particular abstract data type can often be passed a class that emulates the methods of that data type instead. For instance, if you have a function that formats some data from a file object, you can define a class with methods read() and readline() that get the data from a string buffer instead, and pass it as an argument.

Instance method objects have attributes, too: m.__self__ is the instance object with the method m(), and m.__func__ is the function object corresponding to the method.

9.8. Iterators¶

By now you have probably noticed that most container objects can be looped over using a for statement:

for element in [1, 2, 3]:
    print(element)
for element in (1, 2, 3):
    print(element)
for key in {'one': 1, 'two': 2}:
    print(key)
for char in "123":
    print(char)
for line in open("myfile.txt"):
    print(line, end='')

This style of access is clear, concise, and convenient. The use of iterators pervades and unifies Python. Behind the scenes, the for statement calls iter() on the container object. The function returns an iterator object that defines the method __next__() which accesses elements in the container one at a time. When there are no more elements, __next__() raises a StopIteration exception which tells the for loop to terminate.
You can call the __next__() method using the next() built-in function; this example shows how it all works:

>>> s = 'abc'
>>> it = iter(s)
>>> it
<str_iterator object at 0x...>
>>> next(it)
'a'
>>> next(it)
'b'
>>> next(it)
'c'
>>> next(it)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    next(it)
StopIteration

Having seen the mechanics behind the iterator protocol, it is easy to add iterator behavior to your classes. Define an __iter__() method which returns an object with a __next__() method. If the class defines __next__(), then __iter__() can just return self:

class Reverse:
    """Iterator for looping over a sequence backwards."""
    def __init__(self, data):
        self.data = data
        self.index = len(data)

    def __iter__(self):
        return self

    def __next__(self):
        if self.index == 0:
            raise StopIteration
        self.index = self.index - 1
        return self.data[self.index]

>>> rev = Reverse('spam')
>>> iter(rev)
<__main__.Reverse object at 0x00A1DB50>
>>> for char in rev:
...     print(char)
...
m
a
p
s

9.9. Generators¶

Generators are a simple and powerful tool for creating iterators. They are written like regular functions but use the yield statement whenever they want to return data. Each time next() is called on it, the generator resumes where it left off (it remembers all the data values and which statement was last executed). An example shows that generators can be trivially easy to create:

def reverse(data):
    for index in range(len(data)-1, -1, -1):
        yield data[index]

>>> for char in reverse('golf'):
...     print(char)
...
f
l
o
g

Anything that can be done with generators can also be done with class-based iterators as described in the previous section. What makes generators so compact is that the __iter__() and __next__() methods are created automatically.

Another key feature is that the local variables and execution state are automatically saved between calls.
This made the function easier to write and much more clear than an approach using instance variables like self.index and self.data.

In addition to automatic method creation and saving program state, when generators terminate, they automatically raise StopIteration. In combination, these features make it easy to create iterators with no more effort than writing a regular function.

9.10. Generator Expressions¶

Some simple generators can be coded succinctly as expressions using a syntax similar to list comprehensions but with parentheses instead of square brackets. These expressions are designed for situations where the generator is used right away by an enclosing function. Generator expressions are more compact but less versatile than full generator definitions and tend to be more memory friendly than equivalent list comprehensions.

Examples:

>>> sum(i*i for i in range(10))                 # sum of squares
285
>>> xvec = [10, 20, 30]
>>> yvec = [7, 5, 3]
>>> sum(x*y for x, y in zip(xvec, yvec))        # dot product
260
>>> unique_words = set(word for line in page for word in line.split())
>>> valedictorian = max((student.gpa, student.name) for student in graduates)
>>> data = 'golf'
>>> list(data[i] for i in range(len(data)-1, -1, -1))
['f', 'l', 'o', 'g']

Footnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 8139}
{"url": "https://docs.python.org/3/faq/index.html", "title": "Python Frequently Asked Questions", "content": "Python Frequently Asked Questions¶\nGeneral Python FAQ\nProgramming FAQ\nDesign and History FAQ\nLibrary and Extension FAQ\nExtending/Embedding FAQ\nPython on Windows FAQ\nGraphic User Interface FAQ\n\u201cWhy is Python Installed on my Computer?\u201d FAQ", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 201}
{"url": "https://docs.python.org/3/installing/index.html", "title": "Installing Python Modules", "content": "Installing Python Modules¶

As a popular open source development project, Python has an active supporting community of contributors and users that also make their software available for other Python developers to use under open source license terms. This allows Python users to share and collaborate effectively, benefiting from the solutions others have already created to common (and sometimes even rare!) problems, as well as potentially contributing their own solutions to the common pool.

This guide covers the installation part of the process. For a guide to creating and sharing your own Python projects, refer to the Python packaging user guide.

Note: For corporate and other institutional users, be aware that many organisations have their own policies around using and contributing to open source software. Please take such policies into account when making use of the distribution and installation tools provided with Python.

Key terms¶

pip is the preferred installer program. Starting with Python 3.4, it is included by default with the Python binary installers.

A virtual environment is a semi-isolated Python environment that allows packages to be installed for use by a particular application, rather than being installed system wide.

venv is the standard tool for creating virtual environments, and has been part of Python since Python 3.3. Starting with Python 3.4, it defaults to installing pip into all created virtual environments.

virtualenv is a third party alternative (and predecessor) to venv.
It allows virtual environments to be used on versions of Python prior to 3.4, which either don't provide venv at all, or aren't able to automatically install pip into created environments.

The Python Package Index is a public repository of open source licensed packages made available for use by other Python users.

The Python Packaging Authority is the group of developers and documentation authors responsible for the maintenance and evolution of the standard packaging tools and the associated metadata and file format standards. They maintain a variety of tools, documentation, and issue trackers on GitHub.

distutils is the original build and distribution system first added to the Python standard library in 1998. While direct use of distutils is being phased out, it still laid the foundation for the current packaging and distribution infrastructure, and it not only remains part of the standard library, but its name lives on in other ways (such as the name of the mailing list used to coordinate Python packaging standards development).

Changed in version 3.5: The use of venv is now recommended for creating virtual environments.

Basic usage¶

The standard packaging tools are all designed to be used from the command line. The following command will install the latest version of a module and its dependencies from the Python Package Index:

python -m pip install SomePackage

Note: For POSIX users (including macOS and Linux users), the examples in this guide assume the use of a virtual environment. For Windows users, the examples in this guide assume that the option to adjust the system PATH environment variable was selected when installing Python.

It's also possible to specify an exact or minimum version directly on the command line.
When using comparator operators such as >, < or some other special character which get interpreted by the shell, the package name and the version should be enclosed within double quotes:

python -m pip install SomePackage==1.0.4    # specific version
python -m pip install "SomePackage>=1.0.4"  # minimum version

Normally, if a suitable module is already installed, attempting to install it again will have no effect. Upgrading existing modules must be requested explicitly:

python -m pip install --upgrade SomePackage

More information and resources regarding pip and its capabilities can be found in the Python Packaging User Guide.

Creation of virtual environments is done through the venv module. Installing packages into an active virtual environment uses the commands shown above.

How do I …?¶

These are quick answers or links for some common tasks.

… install pip in versions of Python prior to Python 3.4?¶

Python only started bundling pip with Python 3.4. For earlier versions, pip needs to be "bootstrapped" as described in the Python Packaging User Guide.

… install packages just for the current user?¶

Passing the --user option to python -m pip install will install a package just for the current user, rather than for all users of the system.

… install scientific Python packages?¶

A number of scientific Python packages have complex binary dependencies, and aren't currently easy to install using pip directly.
At this point in\ntime, it will often be easier for users to install these packages by\nother means\nrather than attempting to install them with pip\n.\n\u2026 work with multiple versions of Python installed in parallel?\u00b6\nOn Linux, macOS, and other POSIX systems, use the versioned Python commands\nin combination with the -m\nswitch to run the appropriate copy of\npip\n:\npython2 -m pip install SomePackage # default Python 2\npython2.7 -m pip install SomePackage # specifically Python 2.7\npython3 -m pip install SomePackage # default Python 3\npython3.4 -m pip install SomePackage # specifically Python 3.4\nAppropriately versioned pip\ncommands may also be available.\nOn Windows, use the py\nPython launcher in combination with the -m\nswitch:\npy -2 -m pip install SomePackage # default Python 2\npy -2.7 -m pip install SomePackage # specifically Python 2.7\npy -3 -m pip install SomePackage # default Python 3\npy -3.4 -m pip install SomePackage # specifically Python 3.4\nCommon installation issues\u00b6\nInstalling into the system Python on Linux\u00b6\nOn Linux systems, a Python installation will typically be included as part\nof the distribution. Installing into this Python installation requires\nroot access to the system, and may interfere with the operation of the\nsystem package manager and other components of the system if a component\nis unexpectedly upgraded using pip\n.\nOn such systems, it is often better to use a virtual environment or a\nper-user installation when installing packages with pip\n.\nPip not installed\u00b6\nIt is possible that pip\ndoes not get installed by default. 
One potential fix is:

python -m ensurepip --default-pip

There are also additional resources for installing pip.

Installing binary extensions¶
Python has typically relied heavily on source-based distribution, with end users expected to compile extension modules from source as part of the installation process.
With the introduction of support for the binary wheel format, and the ability to publish wheels for at least Windows and macOS through the Python Package Index, this problem is expected to diminish over time, as users are more regularly able to install pre-built extensions rather than needing to build them themselves.
Some of the solutions for installing scientific software that are not yet available as pre-built wheel files may also help with obtaining other binary extensions without needing to build them locally.
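When several interpreters are installed side by side, it can help to confirm which interpreter a script is actually running under before installing into it. A minimal standard-library sketch (the version threshold below just illustrates the "pip is bundled from 3.4" point made earlier):

```python
import sys

# The path of the interpreter currently running this script;
# "python -m pip" installs into exactly this interpreter's environment.
print(sys.executable)

# The interpreter's version as a comparable tuple, e.g. (3, 12, 1).
print(sys.version_info[:3])

# Guard against running under an unexpectedly old interpreter.
assert sys.version_info >= (3, 4), "pip is only bundled with Python 3.4+"
```

Invoking pip as "<interpreter> -m pip" rather than as a bare pip command guarantees the package lands in the interpreter you just checked.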
Source: https://docs.python.org/3/tutorial/errors.html

8. Errors and Exceptions¶
Until now error messages haven't been more than mentioned, but if you have tried out the examples you have probably seen some. There are (at least) two distinguishable kinds of errors: syntax errors and exceptions.

8.1. Syntax Errors¶
Syntax errors, also known as parsing errors, are perhaps the most common kind of complaint you get while you are still learning Python:

>>> while True print('Hello world')
  File "<stdin>", line 1
    while True print('Hello world')
               ^^^^^
SyntaxError: invalid syntax

The parser repeats the offending line and displays little arrows pointing at the place where the error was detected. Note that this is not always the place that needs to be fixed. In the example, the error is detected at the function print(), since a colon (':') is missing just before it.
The file name (<stdin> in our example) and line number are printed so you know where to look in case the input came from a file.

8.2. Exceptions¶
Even if a statement or expression is syntactically correct, it may cause an error when an attempt is made to execute it. Errors detected during execution are called exceptions and are not unconditionally fatal: you will soon learn how to handle them in Python programs. Most exceptions are not handled by programs, however, and result in error messages as shown here:

>>> 10 * (1/0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    10 * (1/0)
          ~^~
ZeroDivisionError: division by zero
>>> 4 + spam*3
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    4 + spam*3
        ^^^^
NameError: name 'spam' is not defined
>>> '2' + 2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    '2' + 2
    ~~~~^~~
TypeError: can only concatenate str (not "int") to str

The last line of the error message indicates what happened.
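Both kinds of errors can also be observed programmatically. As a minimal sketch, compile() reports the same SyntaxError the parser shows interactively, while runtime exceptions can be caught with try/except (handling is covered in detail below; the "&lt;example&gt;" filename is an arbitrary label):

```python
# Syntax errors surface when source code is parsed, e.g. via compile().
try:
    compile("while True print('Hello world')", "<example>", "exec")
except SyntaxError as err:
    print("syntax error on line", err.lineno)   # → syntax error on line 1

# Exceptions surface when syntactically valid code is executed.
try:
    10 * (1 / 0)
except ZeroDivisionError as err:
    print(type(err).__name__, "-", err)         # → ZeroDivisionError - division by zero
```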
Exceptions come in different types, and the type is printed as part of the message: the types in the example are ZeroDivisionError, NameError and TypeError.
The string printed as the exception type is the name of the built-in exception that occurred. This is true for all built-in exceptions, but need not be true for user-defined exceptions (although it is a useful convention). Standard exception names are built-in identifiers (not reserved keywords).
The rest of the line provides detail based on the type of exception and what caused it.
The preceding part of the error message shows the context where the exception occurred, in the form of a stack traceback. In general it contains a stack traceback listing source lines; however, it will not display lines read from standard input.
Built-in Exceptions lists the built-in exceptions and their meanings.

8.3. Handling Exceptions¶
It is possible to write programs that handle selected exceptions. Look at the following example, which asks the user for input until a valid integer has been entered, but allows the user to interrupt the program (using Control-C or whatever the operating system supports); note that a user-generated interruption is signalled by raising the KeyboardInterrupt exception.

>>> while True:
...     try:
...         x = int(input("Please enter a number: "))
...         break
...     except ValueError:
...         print("Oops! That was no valid number. Try again...")
...

The try statement works as follows.
First, the try clause (the statement(s) between the try and except keywords) is executed.
If no exception occurs, the except clause is skipped and execution of the try statement is finished.
If an exception occurs during execution of the try clause, the rest of the clause is skipped.
Then, if its type matches the exception named after the except keyword, the except clause is executed, and then execution continues after the try/except block.
If an exception occurs which does not match the exception named in the except clause, it is passed on to outer try statements; if no handler is found, it is an unhandled exception and execution stops with an error message.

A try statement may have more than one except clause, to specify handlers for different exceptions. At most one handler will be executed. Handlers only handle exceptions that occur in the corresponding try clause, not in other handlers of the same try statement. An except clause may name multiple exceptions as a parenthesized tuple, for example:

... except (RuntimeError, TypeError, NameError):
...     pass

A class in an except clause matches exceptions which are instances of the class itself or one of its derived classes (but not the other way around: an except clause listing a derived class does not match instances of its base classes). For example, the following code will print B, C, D in that order:

class B(Exception):
    pass

class C(B):
    pass

class D(C):
    pass

for cls in [B, C, D]:
    try:
        raise cls()
    except D:
        print("D")
    except C:
        print("C")
    except B:
        print("B")

Note that if the except clauses were reversed (with except B first), it would have printed B, B, B: the first matching except clause is triggered.

When an exception occurs, it may have associated values, also known as the exception's arguments. The presence and types of the arguments depend on the exception type.
The except clause may specify a variable after the exception name. The variable is bound to the exception instance which typically has an args attribute that stores the arguments. For convenience, built-in exception types define __str__() to print all the arguments without explicitly accessing .args.

>>> try:
...     raise Exception('spam', 'eggs')
... except Exception as inst:
...     print(type(inst))    # the exception type
...     print(inst.args)     # arguments stored in .args
...     print(inst)          # __str__ allows args to be printed directly,
...                          # but may be overridden in exception subclasses
...     x, y = inst.args     # unpack args
...     print('x =', x)
...     print('y =', y)
...
<class 'Exception'>
('spam', 'eggs')
('spam', 'eggs')
x = spam
y = eggs

The exception's __str__() output is printed as the last part ('detail') of the message for unhandled exceptions.
BaseException is the common base class of all exceptions. One of its subclasses, Exception, is the base class of all the non-fatal exceptions. Exceptions which are not subclasses of Exception are not typically handled, because they are used to indicate that the program should terminate. They include SystemExit, which is raised by sys.exit(), and KeyboardInterrupt, which is raised when a user wishes to interrupt the program.
Exception can be used as a wildcard that catches (almost) everything. However, it is good practice to be as specific as possible with the types of exceptions that we intend to handle, and to allow any unexpected exceptions to propagate on.
The most common pattern for handling Exception is to print or log the exception and then re-raise it (allowing a caller to handle the exception as well):

import sys

try:
    f = open('myfile.txt')
    s = f.readline()
    i = int(s.strip())
except OSError as err:
    print("OS error:", err)
except ValueError:
    print("Could not convert data to an integer.")
except Exception as err:
    print(f"Unexpected {err=}, {type(err)=}")
    raise

The try … except statement has an optional else clause, which, when present, must follow all except clauses.
It is useful for code that must be executed if the try clause does not raise an exception. For example:

for arg in sys.argv[1:]:
    try:
        f = open(arg, 'r')
    except OSError:
        print('cannot open', arg)
    else:
        print(arg, 'has', len(f.readlines()), 'lines')
        f.close()

The use of the else clause is better than adding additional code to the try clause because it avoids accidentally catching an exception that wasn't raised by the code being protected by the try … except statement.
Exception handlers do not handle only exceptions that occur immediately in the try clause, but also those that occur inside functions that are called (even indirectly) in the try clause. For example:

>>> def this_fails():
...     x = 1/0
...
>>> try:
...     this_fails()
... except ZeroDivisionError as err:
...     print('Handling run-time error:', err)
...
Handling run-time error: division by zero

8.4. Raising Exceptions¶
The raise statement allows the programmer to force a specified exception to occur. For example:

>>> raise NameError('HiThere')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    raise NameError('HiThere')
NameError: HiThere

The sole argument to raise indicates the exception to be raised. This must be either an exception instance or an exception class (a class that derives from BaseException, such as Exception or one of its subclasses). If an exception class is passed, it will be implicitly instantiated by calling its constructor with no arguments:

raise ValueError  # shorthand for 'raise ValueError()'

If you need to determine whether an exception was raised but don't intend to handle it, a simpler form of the raise statement allows you to re-raise the exception:

>>> try:
...     raise NameError('HiThere')
... except NameError:
...     print('An exception flew by!')
...     raise
...
An exception flew by!
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
    raise NameError('HiThere')
NameError: HiThere

8.5. Exception Chaining¶
If an unhandled exception occurs inside an except section, it will have the exception being handled attached to it and included in the error message:

>>> try:
...     open("database.sqlite")
... except OSError:
...     raise RuntimeError("unable to handle error")
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
    open("database.sqlite")
    ~~~~^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'database.sqlite'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 4, in <module>
    raise RuntimeError("unable to handle error")
RuntimeError: unable to handle error

To indicate that an exception is a direct consequence of another, the raise statement allows an optional from clause:

# exc must be exception instance or None.
raise RuntimeError from exc

This can be useful when you are transforming exceptions. For example:

>>> def func():
...     raise ConnectionError
...
>>> try:
...     func()
... except ConnectionError as exc:
...     raise RuntimeError('Failed to open database') from exc
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
    func()
    ~~~~^^
  File "<stdin>", line 2, in func
ConnectionError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 4, in <module>
    raise RuntimeError('Failed to open database') from exc
RuntimeError: Failed to open database

It also allows disabling automatic exception chaining using the from None idiom:

>>> try:
...     open('database.sqlite')
... except OSError:
...     raise RuntimeError from None
...
Traceback (most recent call last):
  File "<stdin>", line 4, in <module>
    raise RuntimeError from None
RuntimeError

For more information about chaining mechanics, see Built-in Exceptions.

8.6. User-defined Exceptions¶
Programs may name their own exceptions by creating a new exception class (see Classes for more about Python classes).
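A minimal sketch of such a class (the name DatabaseError and the query attribute here are illustrative, not from the tutorial): the subclass carries an extra attribute that a handler can inspect, while str() still shows the message passed to the base class.

```python
class DatabaseError(Exception):
    """Hypothetical application-specific error carrying extra context."""

    def __init__(self, message, query):
        super().__init__(message)   # args/__str__ come from Exception
        self.query = query          # extra attribute for handlers to inspect

try:
    raise DatabaseError("lookup failed", query="SELECT 1")
except DatabaseError as err:
    print(err)        # → lookup failed
    print(err.query)  # → SELECT 1
```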
Exceptions should typically be derived from the Exception class, either directly or indirectly.
Exception classes can be defined which do anything any other class can do, but are usually kept simple, often only offering a number of attributes that allow information about the error to be extracted by handlers for the exception.
Most exceptions are defined with names that end in "Error", similar to the naming of the standard exceptions.
Many standard modules define their own exceptions to report errors that may occur in functions they define.

8.7. Defining Clean-up Actions¶
The try statement has another optional clause which is intended to define clean-up actions that must be executed under all circumstances. For example:

>>> try:
...     raise KeyboardInterrupt
... finally:
...     print('Goodbye, world!')
...
Goodbye, world!
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
    raise KeyboardInterrupt
KeyboardInterrupt

If a finally clause is present, the finally clause will execute as the last task before the try statement completes. The finally clause runs whether or not the try statement produces an exception. The following points discuss more complex cases when an exception occurs:

If an exception occurs during execution of the try clause, the exception may be handled by an except clause. If the exception is not handled by an except clause, the exception is re-raised after the finally clause has been executed.
An exception could occur during execution of an except or else clause. Again, the exception is re-raised after the finally clause has been executed.
If the finally clause executes a break, continue or return statement, exceptions are not re-raised. This can be confusing and is therefore discouraged.
From version 3.14 the compiler emits a SyntaxWarning for it (see PEP 765).
If the try statement reaches a break, continue or return statement, the finally clause will execute just prior to the break, continue or return statement's execution.
If a finally clause includes a return statement, the returned value will be the one from the finally clause's return statement, not the value from the try clause's return statement. This can be confusing and is therefore discouraged. From version 3.14 the compiler emits a SyntaxWarning for it (see PEP 765).

For example:

>>> def bool_return():
...     try:
...         return True
...     finally:
...         return False
...
>>> bool_return()
False

A more complicated example:

>>> def divide(x, y):
...     try:
...         result = x / y
...     except ZeroDivisionError:
...         print("division by zero!")
...     else:
...         print("result is", result)
...     finally:
...         print("executing finally clause")
...
>>> divide(2, 1)
result is 2.0
executing finally clause
>>> divide(2, 0)
division by zero!
executing finally clause
>>> divide("2", "1")
executing finally clause
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    divide("2", "1")
    ~~~~~~^^^^^^^^^^
  File "<stdin>", line 3, in divide
    result = x / y
             ~~^~~
TypeError: unsupported operand type(s) for /: 'str' and 'str'

As you can see, the finally clause is executed in any event. The TypeError raised by dividing two strings is not handled by the except clause and is therefore re-raised after the finally clause has been executed.
In real world applications, the finally clause is useful for releasing external resources (such as files or network connections), regardless of whether the use of the resource was successful.

8.8. Predefined Clean-up Actions¶
Some objects define standard clean-up actions to be undertaken when the object is no longer needed, regardless of whether or not the operation using the object succeeded or failed.
Look at the following example, which tries to open a file and print its contents to the screen.

for line in open("myfile.txt"):
    print(line, end="")

The problem with this code is that it leaves the file open for an indeterminate amount of time after this part of the code has finished executing. This is not an issue in simple scripts, but can be a problem for larger applications. The with statement allows objects like files to be used in a way that ensures they are always cleaned up promptly and correctly.

with open("myfile.txt") as f:
    for line in f:
        print(line, end="")

After the statement is executed, the file f is always closed, even if a problem was encountered while processing the lines. Objects which, like files, provide predefined clean-up actions will indicate this in their documentation.

8.10. Enriching Exceptions with Notes¶
When an exception is created in order to be raised, it is usually initialized with information that describes the error that has occurred. There are cases where it is useful to add information after the exception was caught. For this purpose, exceptions have a method add_note(note) that accepts a string and adds it to the exception's notes list. The standard traceback rendering includes all notes, in the order they were added, after the exception.

>>> try:
...     raise TypeError('bad type')
... except Exception as e:
...     e.add_note('Add some information')
...     e.add_note('Add some more information')
...     raise
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
    raise TypeError('bad type')
TypeError: bad type
Add some information
Add some more information

For example, when collecting exceptions into an exception group, we may want to add context information for the individual errors. In the following, each exception in the group has a note indicating when this error has occurred.

>>> def f():
...     raise OSError('operation failed')
...
>>> excs = []
>>> for i in range(3):
...     try:
...         f()
...     except Exception as e:
...         e.add_note(f'Happened in Iteration {i+1}')
...         excs.append(e)
...
>>> raise ExceptionGroup('We have some problems', excs)
  + Exception Group Traceback (most recent call last):
  |   File "<stdin>", line 1, in <module>
  |     raise ExceptionGroup('We have some problems', excs)
  | ExceptionGroup: We have some problems (3 sub-exceptions)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "<stdin>", line 3, in <module>
    |     f()
    |     ~^^
    |   File "<stdin>", line 2, in f
    |     raise OSError('operation failed')
    | OSError: operation failed
    | Happened in Iteration 1
    +---------------- 2 ----------------
    | Traceback (most recent call last):
    |   File "<stdin>", line 3, in <module>
    |     f()
    |     ~^^
    |   File "<stdin>", line 2, in f
    |     raise OSError('operation failed')
    | OSError: operation failed
    | Happened in Iteration 2
    +---------------- 3 ----------------
    | Traceback (most recent call last):
    |   File "<stdin>", line 3, in <module>
    |     f()
    |     ~^^
    |   File "<stdin>", line 2, in f
    |     raise OSError('operation failed')
    | OSError: operation failed
    | Happened in Iteration 3
    +------------------------------------
Source: https://docs.python.org/3/howto/index.html

Python HOWTOs¶
Python HOWTOs are documents that cover a specific topic in-depth. Modeled on the Linux Documentation Project's HOWTO collection, this collection is an effort to foster documentation that's more detailed than the Python Library Reference.
The HOWTOs are grouped under three headings: General, Advanced development, and Debugging and profiling.
Source: https://docs.python.org/3/c-api/monitoring.html

Monitoring C API¶
Added in version 3.13.
An extension may need to interact with the event monitoring system. Subscribing to events and registering callbacks can be done via the Python API exposed in sys.monitoring.

Generating Execution Events¶
The functions below make it possible for an extension to fire monitoring events as it emulates the execution of Python code. Each of these functions accepts a PyMonitoringState struct which contains concise information about the activation state of events, as well as the event arguments, which include a PyObject* representing the code object, the instruction offset and sometimes additional, event-specific arguments (see sys.monitoring for details about the signatures of the different event callbacks).
The codelike argument should be an instance of types.CodeType or of a type that emulates it.
The VM disables tracing when firing an event, so there is no need for user code to do that.
Monitoring functions should not be called with an exception set, except those listed below as working with the current exception.

type PyMonitoringState¶
    Representation of the state of an event type.
    It is allocated by the user while its contents are maintained by the monitoring API functions described below.

All of the functions below return 0 on success and -1 (with an exception set) on error. See sys.monitoring for descriptions of the events.

int PyMonitoring_FirePyStartEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)¶
    Fire a PY_START event.

int PyMonitoring_FirePyResumeEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)¶
    Fire a PY_RESUME event.

int PyMonitoring_FirePyReturnEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *retval)¶
    Fire a PY_RETURN event.

int PyMonitoring_FirePyYieldEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *retval)¶
    Fire a PY_YIELD event.

int PyMonitoring_FireCallEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *callable, PyObject *arg0)¶
    Fire a CALL event.

int PyMonitoring_FireLineEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, int lineno)¶
    Fire a LINE event.

int PyMonitoring_FireJumpEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *target_offset)¶
    Fire a JUMP event.

int PyMonitoring_FireBranchLeftEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *target_offset)¶
    Fire a BRANCH_LEFT event.

int PyMonitoring_FireBranchRightEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *target_offset)¶
    Fire a BRANCH_RIGHT event.

int PyMonitoring_FireCReturnEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *retval)¶
    Fire a C_RETURN event.

int PyMonitoring_FirePyThrowEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)¶
    Fire a PY_THROW event with the current exception (as returned by PyErr_GetRaisedException()).

int PyMonitoring_FireRaiseEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)¶
    Fire a RAISE event with the current exception (as returned by PyErr_GetRaisedException()).

int PyMonitoring_FireCRaiseEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)¶
    Fire a C_RAISE event with the current exception (as returned by PyErr_GetRaisedException()).

int PyMonitoring_FireReraiseEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)¶
    Fire a RERAISE event with the current exception (as returned by PyErr_GetRaisedException()).

int PyMonitoring_FireExceptionHandledEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)¶
    Fire an EXCEPTION_HANDLED event with the current exception (as returned by PyErr_GetRaisedException()).

int PyMonitoring_FirePyUnwindEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset)¶
    Fire a PY_UNWIND event with the current exception (as returned by PyErr_GetRaisedException()).

int PyMonitoring_FireStopIterationEvent(PyMonitoringState *state, PyObject *codelike, int32_t offset, PyObject *value)¶
    Fire a STOP_ITERATION event. If value is an instance of StopIteration, it is used. Otherwise, a new StopIteration instance is created with value as its argument.

Managing the Monitoring State¶
Monitoring states can be managed with the help of monitoring scopes. A scope would typically correspond to a Python function.

int PyMonitoring_EnterScope(PyMonitoringState *state_array, uint64_t *version, const uint8_t *event_types, Py_ssize_t length)¶
    Enter a monitored scope. event_types is an array of the event IDs for events that may be fired from the scope.
    For example, the ID of a PY_START event is the value PY_MONITORING_EVENT_PY_START, which is numerically equal to the base-2 logarithm of sys.monitoring.events.PY_START.
    state_array is an array with a monitoring state entry for each event in event_types; it is allocated by the user but populated by PyMonitoring_EnterScope() with information about the activation state of the event. The size of event_types (and hence also of state_array) is given in length.
    The version argument is a pointer to a value which should be allocated by the user together with state_array and initialized to 0, and then set only by PyMonitoring_EnterScope() itself. It allows this function to determine whether event states have changed since the previous call, and to return quickly if they have not.
    The scopes referred to here are lexical scopes: a function, class or method. PyMonitoring_EnterScope() should be called whenever the lexical scope is entered. Scopes can be reentered, reusing the same state_array and version, in situations like when emulating a recursive Python function.
    When a code-like's execution is paused, such as when emulating a generator, the scope needs to be exited and re-entered.
    The macros for event_types are:

    PY_MONITORING_EVENT_BRANCH_LEFT¶
    PY_MONITORING_EVENT_BRANCH_RIGHT¶
    PY_MONITORING_EVENT_CALL¶
    PY_MONITORING_EVENT_C_RAISE¶
    PY_MONITORING_EVENT_C_RETURN¶
    PY_MONITORING_EVENT_EXCEPTION_HANDLED¶
    PY_MONITORING_EVENT_INSTRUCTION¶
    PY_MONITORING_EVENT_JUMP¶
    PY_MONITORING_EVENT_LINE¶
    PY_MONITORING_EVENT_PY_RESUME¶
    PY_MONITORING_EVENT_PY_RETURN¶
    PY_MONITORING_EVENT_PY_START¶
    PY_MONITORING_EVENT_PY_THROW¶
    PY_MONITORING_EVENT_PY_UNWIND¶
    PY_MONITORING_EVENT_PY_YIELD¶
    PY_MONITORING_EVENT_RAISE¶
    PY_MONITORING_EVENT_RERAISE¶
    PY_MONITORING_EVENT_STOP_ITERATION¶

int PyMonitoring_ExitScope(void)¶
    Exit the last scope that was entered with PyMonitoring_EnterScope().

int PY_MONITORING_IS_INSTRUMENTED_EVENT(uint8_t ev)¶
    Return true if the event corresponding to the event ID ev is a local event.
    Added in version 3.13.
    Deprecated since version 3.14: This function is soft deprecated.
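The C functions above mirror the Python-level sys.monitoring API mentioned at the top of this section. As a rough sketch of the equivalent Python-side subscription (sys.monitoring is available from Python 3.12; the tool id choice and the "demo-tool" name are arbitrary, not prescribed by the API):

```python
import sys

started = []  # records the co_name of each function that fires PY_START

def on_py_start(code, instruction_offset):
    # Callback signature for PY_START: (code object, instruction offset).
    started.append(code.co_name)

def demo():
    return 42

if sys.version_info >= (3, 12):
    mon = sys.monitoring
    TOOL_ID = 2  # tool ids 0-5 are available for use
    mon.use_tool_id(TOOL_ID, "demo-tool")
    mon.register_callback(TOOL_ID, mon.events.PY_START, on_py_start)
    mon.set_events(TOOL_ID, mon.events.PY_START)
    demo()
    mon.set_events(TOOL_ID, 0)  # stop receiving events
    mon.free_tool_id(TOOL_ID)
    print("demo" in started)
```

An extension emulating code execution would fire the corresponding PyMonitoring_Fire* functions itself instead of relying on the VM to do so.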
Source: https://docs.python.org/3/faq/installed.html

"Why is Python Installed on my Computer?" FAQ¶

What is Python?¶
Python is a programming language. It's used for many different applications. It's used in some high schools and colleges as an introductory programming language because Python is easy to learn, but it's also used by professional software developers at places such as Google, NASA, and Lucasfilm Ltd.
If you wish to learn more about Python, start with the Beginner's Guide to Python.

Why is Python installed on my machine?¶
If you find Python installed on your system but don't remember installing it, there are several possible ways it could have gotten there.
Perhaps another user on the computer wanted to learn programming and installed it; you'll have to figure out who's been using the machine and might have installed it.
A third-party application installed on the machine might have been written in Python and included a Python installation. There are many such applications, from GUI programs to network servers and administrative scripts.
Some Windows machines also have Python installed. At this writing we're aware of computers from Hewlett-Packard and Compaq that include Python. Apparently some of HP/Compaq's administrative tools are written in Python.
Many Unix-compatible operating systems, such as macOS and some Linux distributions, have Python installed by default; it's included in the base installation.

Can I delete Python?¶
That depends on where Python came from.
If someone installed it deliberately, you can remove it without hurting anything. On Windows, use the Add/Remove Programs icon in the Control Panel.
If Python was installed by a third-party application, you can also remove it, but that application will no longer work.
You should use that application\u2019s uninstaller rather than removing Python directly.\nIf Python came with your operating system, removing it is not recommended. If you remove it, whatever tools were written in Python will no longer run, and some of them might be important to you. Reinstalling the whole system would then be required to fix things again.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 518}
{"url": "https://docs.python.org/3/faq/gui.html", "title": null, "content": "Graphic User Interface FAQ\u00b6\nGeneral GUI Questions\u00b6\nWhat GUI toolkits exist for Python?\u00b6\nStandard builds of Python include an object-oriented interface to the Tcl/Tk widget set, called tkinter. This is probably the easiest to install (since it comes included with most binary distributions of Python) and use. For more info about Tk, including pointers to the source, see the Tcl/Tk home page. Tcl/Tk is fully portable to the macOS, Windows, and Unix platforms.\nDepending on what platform(s) you are aiming at, there are also several alternatives. A list of cross-platform and platform-specific GUI frameworks can be found on the python wiki.\nTkinter questions\u00b6\nHow do I freeze Tkinter applications?\u00b6\nFreeze is a tool to create stand-alone applications. When freezing Tkinter applications, the applications will not be truly stand-alone, as the application will still need the Tcl and Tk libraries.\nOne solution is to ship the application with the Tcl and Tk libraries, and point\nto them at run-time using the TCL_LIBRARY\nand TK_LIBRARY\nenvironment variables.\nVarious third-party freeze libraries such as py2exe and cx_Freeze have handling for Tkinter applications built-in.\nCan I have Tk events handled while waiting for I/O?\u00b6\nOn platforms other than Windows, yes, and you don\u2019t even\nneed threads! But you\u2019ll have to restructure your I/O\ncode a bit. Tk has the equivalent of Xt\u2019s XtAddInput()\ncall, which allows you\nto register a callback function which will be called from the Tk mainloop when\nI/O is possible on a file descriptor. 
See File Handlers.\nI can\u2019t get key bindings to work in Tkinter: why?\u00b6\nAn often-heard complaint is that event handlers bound\nto events with the bind()\nmethod\ndon\u2019t get handled even when the appropriate key is pressed.\nThe most common cause is that the widget to which the binding applies doesn\u2019t have \u201ckeyboard focus\u201d. Check out the Tk documentation for the focus command. Usually a widget is given the keyboard focus by clicking in it (but not for labels; see the takefocus option).", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 504}
{"url": "https://docs.python.org/3/faq/windows.html", "title": null, "content": "Python on Windows FAQ\u00b6\nHow do I run a Python program under Windows?\u00b6\nThis is not necessarily a straightforward question. If you are already familiar with running programs from the Windows command line then everything will seem obvious; otherwise, you might need a little more guidance.\nUnless you use some sort of integrated development environment, you will end up\ntyping Windows commands into what is referred to as a\n\u201cCommand prompt window\u201d. Usually you can create such a window from your\nsearch bar by searching for cmd\n. You should be able to recognize\nwhen you have started such a window because you will see a Windows \u201ccommand\nprompt\u201d, which usually looks like this:\nC:\\>\nThe letter may be different, and there might be other things after it, so you might just as easily see something like:\nD:\\YourName\\Projects\\Python>\ndepending on how your computer has been set up and what else you have recently done with it. Once you have started such a window, you are well on the way to running Python programs.\nYou need to realize that your Python scripts have to be processed by another program called the Python interpreter. The interpreter reads your script, compiles it into bytecodes, and then executes the bytecodes to run your program. So, how do you arrange for the interpreter to handle your Python?\nFirst, you need to make sure that your command window recognises the word\n\u201cpy\u201d as an instruction to start the interpreter. If you have opened a\ncommand window, you should try entering the command py\nand hitting\nreturn:\nC:\\Users\\YourName> py\nYou should then see something like:\nPython 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:04:45) [MSC v.1900 32 bit (Intel)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>>\nYou have started the interpreter in \u201cinteractive mode\u201d. 
That means you can enter Python statements or expressions interactively and have them executed or evaluated while you wait. This is one of Python\u2019s strongest features. Check it by entering a few expressions of your choice and seeing the results:\n>>> print(\"Hello\")\nHello\n>>> \"Hello\" * 3\n'HelloHelloHello'\nMany people use the interactive mode as a convenient yet highly programmable\ncalculator. When you want to end your interactive Python session,\ncall the exit()\nfunction or hold the Ctrl key down\nwhile you enter a Z, then hit the \u201cEnter\u201d key to get\nback to your Windows command prompt.\nYou may also find that you have a Start-menu entry that starts the interpreter in a new window, showing the >>>\nprompt. If so, the window will disappear\nafter you call the exit()\nfunction or enter the Ctrl-Z\ncharacter; Windows is running a single \u201cpython\u201d\ncommand in the window, and closes it when you terminate the interpreter.\nNow that we know the py\ncommand is recognized, you can give your\nPython script to it. You\u2019ll have to give either an absolute or a\nrelative path to the Python script. Let\u2019s say your Python script is\nlocated on your desktop and is named hello.py\n, and your command\nprompt is nicely opened in your home directory so you\u2019re seeing something\nsimilar to:\nC:\\Users\\YourName>\nSo now you\u2019ll ask the py\ncommand to give your script to Python by\ntyping py\nfollowed by your script path:\nC:\\Users\\YourName> py Desktop\\hello.py\nhello\nHow do I make Python scripts executable?\u00b6\nOn Windows, the standard Python installer already associates the .py\nextension with a file type (Python.File) and gives that file type an open\ncommand that runs the interpreter (D:\\Program Files\\Python\\python.exe \"%1\"\n%*\n). This is enough to make scripts executable from the command prompt as\n\u2018foo.py\u2019. 
If you\u2019d rather be able to execute the script by simply typing \u2018foo\u2019\nwith no extension, you need to add .py to the PATHEXT environment variable.\nWhy does Python sometimes take so long to start?\u00b6\nUsually Python starts very quickly on Windows, but occasionally there are bug reports that Python suddenly begins to take a long time to start up. This is made even more puzzling because Python will work fine on other Windows systems which appear to be configured identically.\nThe problem may be caused by a misconfiguration of virus checking software on the problem machine. Some virus scanners have been known to introduce startup overhead of two orders of magnitude when the scanner is configured to monitor all reads from the filesystem. Try checking the configuration of virus scanning software on your systems to ensure that they are indeed configured identically. McAfee, when configured to scan all file system read activity, is a particular offender.\nHow do I make an executable from a Python script?\u00b6\nSee How can I create a stand-alone binary from a Python script? for a list of tools that can be used to make executables.\nIs a *.pyd\nfile the same as a DLL?\u00b6\nYes, .pyd files are DLLs, but there are a few differences. If you have a DLL\nnamed foo.pyd\n, then it must have a function PyInit_foo()\n. You can then\nwrite Python \u201cimport foo\u201d, and Python will search for foo.pyd (as well as\nfoo.py, foo.pyc) and if it finds it, will attempt to call PyInit_foo()\nto\ninitialize it. You do not link your .exe with foo.lib, as that would cause\nWindows to require the DLL to be present.\nNote that the search path for foo.pyd is PYTHONPATH, not the same as the path\nthat Windows uses to search for foo.dll. Also, foo.pyd need not be present to\nrun your program, whereas if you linked your program with a DLL, the DLL is\nrequired. Of course, foo.pyd is required if you want to say import foo\n. 
In\na DLL, linkage is declared in the source code with __declspec(dllexport)\n.\nIn a .pyd, linkage is defined in a list of available functions.\nHow can I embed Python into a Windows application?\u00b6\nEmbedding the Python interpreter in a Windows app can be summarized as follows:\nDo not build Python into your .exe file directly. On Windows, Python must be a DLL to handle importing modules that are themselves DLLs. (This is the first key undocumented fact.) Instead, link to\npythonNN.dll\n; it is typically installed in C:\\Windows\\System\n. NN is the Python version, a number such as \u201c33\u201d for Python 3.3. You can link to Python in two different ways. Load-time linking means linking against\npythonNN.lib\n, while run-time linking means linking against pythonNN.dll\n. (General note: pythonNN.lib\nis the so-called \u201cimport lib\u201d corresponding to pythonNN.dll\n. It merely defines symbols for the linker.) Run-time linking greatly simplifies link options; everything happens at run time. Your code must load\npythonNN.dll\nusing the Windows LoadLibraryEx()\nroutine. The code must also use access routines and data in pythonNN.dll\n(that is, Python\u2019s C API\u2019s) using pointers obtained by the Windows GetProcAddress()\nroutine. Macros can make using these pointers transparent to any C code that calls routines in Python\u2019s C API. If you use SWIG, it is easy to create a Python \u201cextension module\u201d that will make the app\u2019s data and methods available to Python. SWIG will handle just about all the grungy details for you. The result is C code that you link into your .exe file (!) You do not have to create a DLL file, and this also simplifies linking.\nSWIG will create an init function (a C function) whose name depends on the name of the extension module. For example, if the name of the module is leo, the init function will be called initleo(). If you use SWIG shadow classes, as you should, the init function will be called initleoc(). 
This initializes a mostly hidden helper class used by the shadow class.\nThe reason you can link the C code in step 2 into your .exe file is that calling the initialization function is equivalent to importing the module into Python! (This is the second key undocumented fact.)\nIn short, you can use the following code to initialize the Python interpreter with your extension module.\n#include <Python.h>\n...\nPy_Initialize(); // Initialize Python.\ninitmyAppc(); // Initialize (import) the helper class.\nPyRun_SimpleString(\"import myApp\"); // Import the shadow class.\nThere are two problems with Python\u2019s C API which will become apparent if you use a compiler other than MSVC, the compiler used to build pythonNN.dll.\nProblem 1: The so-called \u201cVery High Level\u201d functions that take\nFILE *\narguments will not work in a multi-compiler environment because each compiler\u2019s notion of a struct FILE\nwill be different. From an implementation standpoint these are very low level functions.\nProblem 2: SWIG generates the following code when generating wrappers to void functions:\nPy_INCREF(Py_None);\n_resultobj = Py_None;\nreturn _resultobj;\nAlas, Py_None is a macro that expands to a reference to a complex data structure called _Py_NoneStruct inside pythonNN.dll. Again, this code will fail in a multi-compiler environment. Replace such code by:\nreturn Py_BuildValue(\"\");\nIt may be possible to use SWIG\u2019s\n%typemap\ncommand to make the change automatically, though I have not been able to get this to work (I\u2019m a complete SWIG newbie). Using a Python shell script to put up a Python interpreter window from inside your Windows app is not a good idea; the resulting window will be independent of your app\u2019s windowing system. Rather, you (or the wxPythonWindow class) should create a \u201cnative\u201d interpreter window. It is easy to connect that window to the Python interpreter. 
You can redirect Python\u2019s I/O to _any_ object that supports read and write, so all you need is a Python object (defined in your extension module) that contains read() and write() methods.\nHow do I keep editors from inserting tabs into my Python source?\u00b6\nThe FAQ does not recommend using tabs, and the Python style guide, PEP 8, recommends 4 spaces for distributed Python code; this is also the Emacs python-mode default.\nUnder any editor, mixing tabs and spaces is a bad idea. MSVC is no different in this respect, and is easily configured to use spaces: in the editor options, for file type \u201cDefault\u201d set \u201cTab size\u201d and \u201cIndent size\u201d to 4, and select the \u201cInsert spaces\u201d radio button.\nPython raises IndentationError\nor TabError\nif mixed tabs\nand spaces are causing problems in leading whitespace.\nYou may also run the tabnanny\nmodule to check a directory tree\nin batch mode.\nHow do I check for a keypress without blocking?\u00b6\nUse the msvcrt\nmodule. This is a standard Windows-specific extension module.\nIt defines a function kbhit()\nwhich checks whether a keyboard hit is\npresent, and getch()\nwhich gets one character without echoing it.\nHow do I solve the missing api-ms-win-crt-runtime-l1-1-0.dll error?\u00b6\nThis can occur on Python 3.5 and later when using Windows 8.1 or earlier without all updates having been installed. First ensure your operating system is supported and is up to date, and if that does not resolve the issue, visit the Microsoft support page for guidance on manually installing the C Runtime update.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2676}
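The IndentationError / TabError behaviour mentioned above can be demonstrated directly; a minimal sketch (the source string is made up for illustration):

```python
# Demonstration: CPython rejects ambiguous mixing of tabs and spaces
# in leading whitespace with TabError (a subclass of IndentationError).
source = (
    "if True:\n"
    "\tx = 1\n"        # first line indented with a tab
    "        y = 2\n"  # second line indented with spaces
)

try:
    compile(source, "<example>", "exec")
    caught = None
except TabError as exc:
    caught = exc

print(type(caught).__name__, "-", caught.msg)
```

The tabnanny module performs essentially this check over whole files or directory trees.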
{"url": "https://docs.python.org/3/faq/extending.html", "title": null, "content": "Extending/Embedding FAQ\u00b6\nCan I create my own functions in C?\u00b6\nYes, you can create built-in modules containing functions, variables, exceptions and even new types in C. This is explained in the document Extending and Embedding the Python Interpreter.\nMost intermediate or advanced Python books will also cover this topic.\nCan I create my own functions in C++?\u00b6\nYes, using the C compatibility features found in C++. Place extern \"C\" {\n... }\naround the Python include files and put extern \"C\"\nbefore each\nfunction that is going to be called by the Python interpreter. Global or static\nC++ objects with constructors are probably not a good idea.\nWriting C is hard; are there any alternatives?\u00b6\nThere are a number of alternatives to writing your own C extensions, depending on what you\u2019re trying to do. Recommended third party tools offer both simpler and more sophisticated approaches to creating C and C++ extensions for Python.\nHow can I execute arbitrary Python statements from C?\u00b6\nThe highest-level function to do this is PyRun_SimpleString()\nwhich takes\na single string argument to be executed in the context of the module\n__main__\nand returns 0\nfor success and -1\nwhen an exception occurred\n(including SyntaxError\n). If you want more control, use\nPyRun_String()\n; see the source for PyRun_SimpleString()\nin\nPython/pythonrun.c\n.\nHow can I evaluate an arbitrary Python expression from C?\u00b6\nCall the function PyRun_String()\nfrom the previous question with the\nstart symbol Py_eval_input\n; it parses an expression, evaluates it and\nreturns its value.\nHow do I extract C values from a Python object?\u00b6\nThat depends on the object\u2019s type. If it\u2019s a tuple, PyTuple_Size()\nreturns its length and PyTuple_GetItem()\nreturns the item at a specified\nindex. 
Lists have similar functions, PyList_Size()\nand\nPyList_GetItem()\n.\nFor bytes, PyBytes_Size()\nreturns its length and\nPyBytes_AsStringAndSize()\nprovides a pointer to its value and its\nlength. Note that Python bytes objects may contain null bytes so C\u2019s\nstrlen()\nshould not be used.\nTo test the type of an object, first make sure it isn\u2019t NULL\n, and then use\nPyBytes_Check()\n, PyTuple_Check()\n, PyList_Check()\n, etc.\nThere is also a high-level API to Python objects which is provided by the\nso-called \u2018abstract\u2019 interface \u2013 read Include/abstract.h\nfor further\ndetails. It allows interfacing with any kind of Python sequence using calls\nlike PySequence_Length()\n, PySequence_GetItem()\n, etc. as well\nas many other useful protocols such as numbers (PyNumber_Index()\net\nal.) and mappings in the PyMapping APIs.\nHow do I use Py_BuildValue() to create a tuple of arbitrary length?\u00b6\nYou can\u2019t. Use PyTuple_Pack()\ninstead.\nHow do I call an object\u2019s method from C?\u00b6\nThe PyObject_CallMethod()\nfunction can be used to call an arbitrary\nmethod of an object. The parameters are the object, the name of the method to\ncall, a format string like that used with Py_BuildValue()\n, and the\nargument values:\nPyObject *\nPyObject_CallMethod(PyObject *object, const char *method_name,\nconst char *arg_format, ...);\nThis works for any object that has methods \u2013 whether built-in or user-defined.\nYou are responsible for eventually Py_DECREF()\n\u2018ing the return value.\nTo call, e.g., a file object\u2019s \u201cseek\u201d method with arguments 10, 0 (assuming the file object pointer is \u201cf\u201d):\nres = PyObject_CallMethod(f, \"seek\", \"(ii)\", 10, 0);\nif (res == NULL) {\n... 
an exception occurred ...\n}\nelse {\nPy_DECREF(res);\n}\nNote that since PyObject_CallObject()\nalways wants a tuple for the\nargument list, to call a function without arguments, pass \u201c()\u201d for the format,\nand to call a function with one argument, surround the argument in parentheses,\ne.g. \u201c(i)\u201d.\nHow do I catch the output from PyErr_Print() (or anything that prints to stdout/stderr)?\u00b6\nIn Python code, define an object that supports the write()\nmethod. Assign\nthis object to sys.stdout\nand sys.stderr\n. Call print_error, or\njust allow the standard traceback mechanism to work. Then, the output will go\nwherever your write()\nmethod sends it.\nThe easiest way to do this is to use the io.StringIO\nclass:\n>>> import io, sys\n>>> sys.stdout = io.StringIO()\n>>> print('foo')\n>>> print('hello world!')\n>>> sys.stderr.write(sys.stdout.getvalue())\nfoo\nhello world!\nA custom object to do the same would look like this:\n>>> import io, sys\n>>> class StdoutCatcher(io.TextIOBase):\n...     def __init__(self):\n...         self.data = []\n...     def write(self, stuff):\n...         self.data.append(stuff)\n...\n>>> import sys\n>>> sys.stdout = StdoutCatcher()\n>>> print('foo')\n>>> print('hello world!')\n>>> sys.stderr.write(''.join(sys.stdout.data))\nfoo\nhello world!\nHow do I access a module written in Python from C?\u00b6\nYou can get a pointer to the module object as follows:\nmodule = PyImport_ImportModule(\"<modulename>\");\nIf the module hasn\u2019t been imported yet (i.e. it is not yet present in\nsys.modules\n), this initializes the module; otherwise it simply returns\nthe value of sys.modules[\"<modulename>\"]\n. Note that it doesn\u2019t enter the\nmodule into any namespace \u2013 it only ensures it has been initialized and is\nstored in sys.modules\n.\nYou can then access the module\u2019s attributes (i.e. 
any name defined in the module) as follows:\nattr = PyObject_GetAttrString(module, \"<attrname>\");\nCalling PyObject_SetAttrString()\nto assign to variables in the module\nalso works.\nHow do I interface to C++ objects from Python?\u00b6\nDepending on your requirements, there are many approaches. To do this manually, begin by reading the \u201cExtending and Embedding\u201d document. Realize that for the Python run-time system, there isn\u2019t a whole lot of difference between C and C++ \u2013 so the strategy of building a new Python type around a C structure (pointer) type will also work for C++ objects.\nFor C++ libraries, see Writing C is hard; are there any alternatives?.\nI added a module using the Setup file and the make fails; why?\u00b6\nSetup must end in a newline; if there is no newline there, the build process fails. (Fixing this requires some ugly shell script hackery, and this bug is so minor that it doesn\u2019t seem worth the effort.)\nHow do I debug an extension?\u00b6\nWhen using GDB with dynamically loaded extensions, you can\u2019t set a breakpoint in your extension until your extension is loaded.\nIn your .gdbinit\nfile (or interactively), add the command:\nbr _PyImport_LoadDynamicModule\nThen, when you run GDB:\n$ gdb /local/bin/python\n(gdb) run myscript.py\n(gdb) continue # repeat until your extension is loaded\n(gdb) finish # so that your extension is loaded\n(gdb) br myfunction.c:50\n(gdb) continue\nI want to compile a Python module on my Linux system, but some files are missing. Why?\u00b6\nMost packaged versions of Python omit some files required for compiling Python extensions.\nFor Red Hat, install the python3-devel RPM to get the necessary files.\nFor Debian, run apt-get install python3-dev\n.\nHow do I tell \u201cincomplete input\u201d from \u201cinvalid input\u201d?\u00b6\nSometimes you want to emulate the Python interactive interpreter\u2019s behavior, where it gives you a continuation prompt when the input is incomplete (e.g. 
you typed the start of an \u201cif\u201d statement or you didn\u2019t close your parentheses or triple string quotes), but it gives you a syntax error message immediately when the input is invalid.\nIn Python you can use the codeop\nmodule, which approximates the parser\u2019s\nbehavior sufficiently. IDLE uses this, for example.\nThe easiest way to do it in C is to call PyRun_InteractiveLoop()\n(perhaps\nin a separate thread) and let the Python interpreter handle the input for\nyou. You can also set the PyOS_ReadlineFunctionPointer()\nto point at your\ncustom input function. See Modules/readline.c\nand Parser/myreadline.c\nfor more hints.\nHow do I find undefined g++ symbols __builtin_new or __pure_virtual?\u00b6\nTo dynamically load g++ extension modules, you must recompile Python, relink it\nusing g++ (change LINKCC in the Python Modules Makefile), and link your\nextension module using g++ (e.g., g++ -shared -o mymodule.so mymodule.o\n).\nCan I create an object class with some methods implemented in C and others in Python (e.g. through inheritance)?\u00b6\nYes, you can inherit from built-in classes such as int\n, list\n,\ndict\n, etc.\nThe Boost Python Library (BPL, https://www.boost.org/libs/python/doc/index.html) provides a way of doing this from C++ (i.e. you can inherit from an extension class written in C++ using the BPL).", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2071}
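The codeop approach mentioned above can be sketched in a few lines: codeop.compile_command() returns a code object for complete input, None for incomplete input, and raises SyntaxError for invalid input (the input strings here are made up for illustration):

```python
import codeop

# Complete statement: returns a code object, ready to exec().
complete = codeop.compile_command("x = 1 + 2")

# Incomplete input (an unfinished "if"): returns None, meaning
# "keep reading" -- this is what drives the ... continuation prompt.
incomplete = codeop.compile_command("if x > 0:")

# Invalid input: raises SyntaxError immediately.
try:
    codeop.compile_command("if !x:")
    invalid = None
except SyntaxError as exc:
    invalid = exc

print(complete is not None, incomplete is None, isinstance(invalid, SyntaxError))
```

A read-eval-print loop built on this keeps appending lines to a buffer while compile_command() returns None, executes when it returns a code object, and reports the error when it raises.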
{"url": "https://docs.python.org/3/faq/library.html", "title": null, "content": "Library and Extension FAQ\u00b6\nGeneral Library Questions\u00b6\nHow do I find a module or application to perform task X?\u00b6\nCheck the Library Reference to see if there\u2019s a relevant standard library module. (Eventually you\u2019ll learn what\u2019s in the standard library and will be able to skip this step.)\nFor third-party packages, search the Python Package Index or try Google or another web search engine. Searching for \u201cPython\u201d plus a keyword or two for your topic of interest will usually find something helpful.\nWhere is the math.py (socket.py, regex.py, etc.) source file?\u00b6\nIf you can\u2019t find a source file for a module it may be a built-in or\ndynamically loaded module implemented in C, C++ or other compiled language.\nIn this case you may not have the source file or it may be something like\nmathmodule.c\n, somewhere in a C source directory (not on the Python Path).\nThere are (at least) three kinds of modules in Python:\nmodules written in Python (.py);\nmodules written in C and dynamically loaded (.dll, .pyd, .so, .sl, etc);\nmodules written in C and linked with the interpreter; to get a list of these, type:\nimport sys\nprint(sys.builtin_module_names)\nHow do I make a Python script executable on Unix?\u00b6\nYou need to do two things: the script file\u2019s mode must be executable and the\nfirst line must begin with #!\nfollowed by the path of the Python\ninterpreter.\nThe first is done by executing chmod +x scriptfile\nor perhaps chmod 755\nscriptfile\n.\nThe second can be done in a number of ways. The most straightforward way is to write\n#!/usr/local/bin/python\nas the very first line of your file, using the pathname for where the Python interpreter is installed on your platform.\nIf you would like the script to be independent of where the Python interpreter\nlives, you can use the env program. 
Almost all Unix variants support\nthe following, assuming the Python interpreter is in a directory on the user\u2019s\nPATH\n:\n#!/usr/bin/env python\nDon\u2019t do this for CGI scripts. The PATH\nvariable for CGI scripts is\noften very minimal, so you need to use the actual absolute pathname of the\ninterpreter.\nOccasionally, a user\u2019s environment is so full that the /usr/bin/env program fails; or there\u2019s no env program at all. In that case, you can try the following hack (due to Alex Rezinsky):\n#! /bin/sh\n\"\"\":\"\nexec python $0 ${1+\"$@\"}\n\"\"\"\nThe minor disadvantage is that this defines the script\u2019s __doc__ string. However, you can fix that by adding\n__doc__ = \"\"\"...Whatever...\"\"\"\nIs there a curses/termcap package for Python?\u00b6\nFor Unix variants: The standard Python source distribution comes with a curses module in the Modules subdirectory, though it\u2019s not compiled by default. (Note that this is not available in the Windows distribution \u2013 there is no curses module for Windows.)\nThe curses\nmodule supports basic curses features as well as many additional\nfunctions from ncurses and SYSV curses such as colour, alternative character set\nsupport, pads, and mouse support. This means the module isn\u2019t compatible with\noperating systems that only have BSD curses, but there don\u2019t seem to be any\ncurrently maintained OSes that fall into this category.\nIs there an equivalent to C\u2019s onexit() in Python?\u00b6\nThe atexit\nmodule provides a register function that is similar to C\u2019s\nonexit()\n.\nWhy don\u2019t my signal handlers work?\u00b6\nThe most common problem is that the signal handler is declared with the wrong argument list. It is called as\nhandler(signum, frame)\nso it should be declared with two parameters:\ndef handler(signum, frame):\n...\nCommon tasks\u00b6\nHow do I test a Python program or component?\u00b6\nPython comes with two testing frameworks. 
The doctest\nmodule finds\nexamples in the docstrings for a module and runs them, comparing the output with\nthe expected output given in the docstring.\nThe unittest\nmodule is a fancier testing framework modelled on Java and\nSmalltalk testing frameworks.\nTo make testing easier, you should use good modular design in your program. Your program should have almost all functionality encapsulated in either functions or class methods \u2013 and this sometimes has the surprising and delightful effect of making the program run faster (because local variable accesses are faster than global accesses). Furthermore the program should avoid depending on mutating global variables, since this makes testing much more difficult to do.\nThe \u201cglobal main logic\u201d of your program may be as simple as\nif __name__ == \"__main__\":\nmain_logic()\nat the bottom of the main module of your program.\nOnce your program is organized as a tractable collection of function and class behaviours, you should write test functions that exercise the behaviours. A test suite that automates a sequence of tests can be associated with each module. This sounds like a lot of work, but since Python is so terse and flexible it\u2019s surprisingly easy. You can make coding much more pleasant and fun by writing your test functions in parallel with the \u201cproduction code\u201d, since this makes it easy to find bugs and even design flaws earlier.\n\u201cSupport modules\u201d that are not intended to be the main module of a program may include a self-test of the module.\nif __name__ == \"__main__\":\nself_test()\nEven programs that interact with complex external interfaces may be tested when the external interfaces are unavailable by using \u201cfake\u201d interfaces implemented in Python.\nHow do I create documentation from doc strings?\u00b6\nThe pydoc\nmodule can create HTML from the doc strings in your Python\nsource code. 
An alternative for creating API documentation purely from\ndocstrings is epydoc. Sphinx can also include docstring content.\nHow do I get a single keypress at a time?\u00b6\nFor Unix variants there are several solutions. It\u2019s straightforward to do this using curses, but curses is a fairly large module to learn.\nThreads\u00b6\nHow do I program using threads?\u00b6\nBe sure to use the threading\nmodule and not the _thread\nmodule.\nThe threading\nmodule builds convenient abstractions on top of the\nlow-level primitives provided by the _thread\nmodule.\nNone of my threads seem to run: why?\u00b6\nAs soon as the main thread exits, all threads are killed. Your main thread is running too quickly, giving the threads no time to do any work.\nA simple fix is to add a sleep to the end of the program that\u2019s long enough for all the threads to finish:\nimport threading, time\ndef thread_task(name, n):\n    for i in range(n):\n        print(name, i)\nfor i in range(10):\n    T = threading.Thread(target=thread_task, args=(str(i), i))\n    T.start()\ntime.sleep(10) # <---------------------------!\nBut now (on many platforms) the threads don\u2019t run in parallel, but appear to run sequentially, one at a time! The reason is that the OS thread scheduler doesn\u2019t start a new thread until the previous thread is blocked.\nA simple fix is to add a tiny sleep to the start of the run function:\ndef thread_task(name, n):\n    time.sleep(0.001) # <--------------------!\n    for i in range(n):\n        print(name, i)\nfor i in range(10):\n    T = threading.Thread(target=thread_task, args=(str(i), i))\n    T.start()\ntime.sleep(10)\nInstead of trying to guess a good delay value for time.sleep()\n,\nit\u2019s better to use some kind of semaphore mechanism. 
One idea is to use the\nqueue\nmodule to create a queue object, let each thread append a token to\nthe queue when it finishes, and let the main thread read as many tokens from the\nqueue as there are threads.\nHow do I parcel out work among a bunch of worker threads?\u00b6\nThe easiest way is to use the concurrent.futures\nmodule,\nespecially the ThreadPoolExecutor\nclass.\nOr, if you want fine control over the dispatching algorithm, you can write\nyour own logic manually. Use the queue\nmodule to create a queue\ncontaining a list of jobs. The Queue\nclass maintains a\nlist of objects and has a .put(obj)\nmethod that adds items to the queue and\na .get()\nmethod to return them. The class will take care of the locking\nnecessary to ensure that each job is handed out exactly once.\nHere\u2019s a trivial example:\nimport threading, queue, time\n# The worker thread gets jobs off the queue. When the queue is empty, it\n# assumes there will be no more work and exits.\n# (Realistically workers will run until terminated.)\ndef worker():\n    print('Running worker')\n    time.sleep(0.1)\n    while True:\n        try:\n            arg = q.get(block=False)\n        except queue.Empty:\n            print('Worker', threading.current_thread(), end=' ')\n            print('queue empty')\n            break\n        else:\n            print('Worker', threading.current_thread(), end=' ')\n            print('running with argument', arg)\n            time.sleep(0.5)\n# Create queue\nq = queue.Queue()\n# Start a pool of 5 workers\nfor i in range(5):\n    t = threading.Thread(target=worker, name='worker %i' % (i+1))\n    t.start()\n# Begin adding work to the queue\nfor i in range(50):\n    q.put(i)\n# Give threads time to run\nprint('Main thread sleeping')\ntime.sleep(5)\nWhen run, this will produce the following output:\nRunning worker\nRunning worker\nRunning worker\nRunning worker\nRunning worker\nMain thread sleeping\nWorker running with argument 0\nWorker running with argument 1\nWorker running with argument 2\nWorker running with argument 3\nWorker running with argument 4\nWorker running with argument 
Worker running with argument 5
...

Consult the module’s documentation for more details; the Queue class provides a featureful interface.

What kinds of global value mutation are thread-safe?¶
A global interpreter lock (GIL) is used internally to ensure that only one thread runs in the Python VM at a time. In general, Python offers to switch among threads only between bytecode instructions; how frequently it switches can be set via sys.setswitchinterval(). Each bytecode instruction, and therefore all the C implementation code reached from each instruction, is atomic from the point of view of a Python program.

In theory, this means an exact accounting requires an exact understanding of the PVM bytecode implementation. In practice, it means that operations on shared variables of built-in data types (ints, lists, dicts, etc) that “look atomic” really are.

For example, the following operations are all atomic (L, L1, L2 are lists, D, D1, D2 are dicts, x, y are objects, i, j are ints):

L.append(x)
L1.extend(L2)
x = L[i]
x = L.pop()
L1[i:j] = L2
L.sort()
x = y
x.field = y
D[x] = y
D1.update(D2)
D.keys()

These aren’t:

i = i+1
L.append(L[-1])
L[i] = L[j]
D[x] = D[x] + 1

Operations that replace other objects may invoke those other objects’ __del__() method when their reference count reaches zero, and that can affect things. This is especially true for mass updates to dictionaries and lists. When in doubt, use a mutex!

Can’t we get rid of the Global Interpreter Lock?¶
The global interpreter lock (GIL) is often seen as a hindrance to Python’s deployment on high-end multiprocessor server machines, because a multi-threaded Python program effectively only uses one CPU, due to the insistence that (almost) all Python code can only run while the GIL is held.

With the approval of PEP 703, work is now underway to remove the GIL from the CPython implementation of Python.
Initially it will be implemented as an optional compiler flag when building the interpreter, and so separate builds will be available with and without the GIL. Long-term, the hope is to settle on a single build, once the performance implications of removing the GIL are fully understood. Python 3.13 is likely to be the first release containing this work, although it may not be completely functional in this release.

The current work to remove the GIL is based on a fork of Python 3.9 with the GIL removed by Sam Gross. Prior to that, in the days of Python 1.5, Greg Stein actually implemented a comprehensive patch set (the “free threading” patches) that removed the GIL and replaced it with fine-grained locking. Adam Olsen did a similar experiment in his python-safethread project. Unfortunately, both of these earlier experiments exhibited a sharp drop in single-thread performance (at least 30% slower), due to the amount of fine-grained locking necessary to compensate for the removal of the GIL. The Python 3.9 fork is the first attempt at removing the GIL with an acceptable performance impact.

The presence of the GIL in current Python releases doesn’t mean that you can’t make good use of Python on multi-CPU machines! You just have to be creative with dividing the work up between multiple processes rather than multiple threads. The ProcessPoolExecutor class in the new concurrent.futures module provides an easy way of doing so; the multiprocessing module provides a lower-level API in case you want more control over dispatching of tasks.

Judicious use of C extensions will also help; if you use a C extension to perform a time-consuming task, the extension can release the GIL while the thread of execution is in the C code and allow other threads to get some work done.
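The multi-process approach mentioned above can be sketched in a few lines; square here is just a hypothetical stand-in for a CPU-bound task:

```python
from concurrent.futures import ProcessPoolExecutor

def square(n):
    # Stand-in for CPU-bound work; each call may run in a separate
    # worker process, so no single GIL serializes the computation.
    return n * n

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=4) as pool:
        # map() distributes the inputs across the worker processes
        # and yields the results in input order.
        print(list(pool.map(square, range(10))))
```

The __main__ guard matters here: on platforms that spawn worker processes, each worker re-imports the main module, and the guard keeps the pool from being created recursively.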
Some standard library modules such as zlib and hashlib already do this.

An alternative approach to reducing the impact of the GIL is to make the GIL a per-interpreter-state lock rather than truly global. This was first implemented in Python 3.12 and is available in the C API; a Python interface to it is expected in Python 3.13. The main limitation at the moment is likely to be third-party extension modules, since these must be written with multiple interpreters in mind in order to be usable, and many older extension modules will not be.

Input and Output¶
How do I delete a file? (And other file questions…)¶
Use os.remove(filename) or os.unlink(filename); for documentation, see the os module. The two functions are identical; unlink() is simply the name of the Unix system call for this function.

To remove a directory, use os.rmdir(); use os.mkdir() to create one. os.makedirs(path) will create any intermediate directories in path that don’t exist. os.removedirs(path) will remove intermediate directories as long as they’re empty; if you want to delete an entire directory tree and its contents, use shutil.rmtree().

To rename a file, use os.rename(old_path, new_path).

To truncate a file, open it using f = open(filename, "rb+"), and use f.truncate(offset); offset defaults to the current seek position.
There\u2019s\nalso os.ftruncate(fd, offset)\nfor files opened with os.open()\n, where\nfd is the file descriptor (a small integer).\nThe shutil\nmodule also contains a number of functions to work on files\nincluding copyfile()\n, copytree()\n, and\nrmtree()\n.\nHow do I copy a file?\u00b6\nThe shutil\nmodule contains a copyfile()\nfunction.\nNote that on Windows NTFS volumes, it does not copy\nalternate data streams\nnor resource forks\non macOS HFS+ volumes, though both are now rarely used.\nIt also doesn\u2019t copy file permissions and metadata, though using\nshutil.copy2()\ninstead will preserve most (though not all) of it.\nHow do I read (or write) binary data?\u00b6\nTo read or write complex binary data formats, it\u2019s best to use the struct\nmodule. It allows you to take a string containing binary data (usually numbers)\nand convert it to Python objects; and vice versa.\nFor example, the following code reads two 2-byte integers and one 4-byte integer in big-endian format from a file:\nimport struct\nwith open(filename, \"rb\") as f:\ns = f.read(8)\nx, y, z = struct.unpack(\">hhl\", s)\nThe \u2018>\u2019 in the format string forces big-endian data; the letter \u2018h\u2019 reads one \u201cshort integer\u201d (2 bytes), and \u2018l\u2019 reads one \u201clong integer\u201d (4 bytes) from the string.\nFor data that is more regular (e.g. a homogeneous list of ints or floats),\nyou can also use the array\nmodule.\nI can\u2019t seem to use os.read() on a pipe created with os.popen(); why?\u00b6\nos.read()\nis a low-level function which takes a file descriptor, a small\ninteger representing the opened file. 
os.popen() creates a high-level file object, the same type returned by the built-in open() function. Thus, to read n bytes from a pipe p created with os.popen(), you need to use p.read(n).

How do I access the serial (RS232) port?¶
For Win32, OSX, Linux, BSD, Jython, IronPython:
For Unix, see a Usenet post by Mitch Chapman:

Why doesn’t closing sys.stdout (stdin, stderr) really close it?¶
Python file objects are a high-level layer of abstraction on low-level C file descriptors.

For most file objects you create in Python via the built-in open() function, f.close() marks the Python file object as being closed from Python’s point of view, and also arranges to close the underlying C file descriptor. This also happens automatically in f’s destructor, when f becomes garbage.

But stdin, stdout and stderr are treated specially by Python, because of the special status also given to them by C. Running sys.stdout.close() marks the Python-level file object as being closed, but does not close the associated C file descriptor.

To close the underlying C file descriptor for one of these three, you should first be sure that’s what you really want to do (e.g., you may confuse extension modules trying to do I/O). If it is, use os.close():

os.close(sys.stdin.fileno())
os.close(sys.stdout.fileno())
os.close(sys.stderr.fileno())

Or you can use the numeric constants 0, 1 and 2, respectively.

Network/Internet Programming¶
What WWW tools are there for Python?¶
See the chapters titled Internet Protocols and Support and Internet Data Handling in the Library Reference Manual.
Python has many modules that will help you build server-side and client-side web systems.

A summary of available frameworks is maintained by Paul Boddie at https://wiki.python.org/moin/WebProgramming.

What module should I use to help with generating HTML?¶
You can find a collection of useful links on the Web Programming wiki page.

How do I send mail from a Python script?¶
Use the standard library module smtplib.

Here’s a very simple interactive mail sender that uses it. This method will work on any host that supports an SMTP listener.

import sys, smtplib

fromaddr = input("From: ")
toaddrs = input("To: ").split(',')
print("Enter message, end with ^D:")
msg = ''
while True:
    line = sys.stdin.readline()
    if not line:
        break
    msg += line

# The actual mail send
server = smtplib.SMTP('localhost')
server.sendmail(fromaddr, toaddrs, msg)
server.quit()

A Unix-only alternative uses sendmail. The location of the sendmail program varies between systems; sometimes it is /usr/lib/sendmail, sometimes /usr/sbin/sendmail. The sendmail manual page will help you out. Here’s some sample code:

import os

SENDMAIL = "/usr/sbin/sendmail"  # sendmail location
p = os.popen("%s -t -i" % SENDMAIL, "w")
p.write("To: receiver@example.com\n")
p.write("Subject: test\n")
p.write("\n")  # blank line separating headers from body
p.write("Some text\n")
p.write("some more text\n")
sts = p.close()
if sts is not None:
    print("Sendmail exit status", sts)

How do I avoid blocking in the connect() method of a socket?¶
The select module is commonly used to help with asynchronous I/O on sockets.

To prevent the TCP connect from blocking, you can set the socket to non-blocking mode. Then when you do the connect(), you will either connect immediately (unlikely) or get an exception that contains the error number as .errno. errno.EINPROGRESS indicates that the connection is in progress, but hasn’t finished yet.
Different OSes will return different values, so you’re going to have to check what’s returned on your system.

You can use the connect_ex() method to avoid creating an exception. It will just return the errno value. To poll, you can call connect_ex() again later (0 or errno.EISCONN indicate that you’re connected), or you can pass this socket to select.select() to check if it’s writable.

Databases¶
Are there any interfaces to database packages in Python?¶
Yes.

Interfaces to disk-based hashes such as DBM and GDBM are also included with standard Python. There is also the sqlite3 module, which provides a lightweight disk-based relational database.

Support for most relational databases is available. See the DatabaseProgramming wiki page for details.

How do you implement persistent objects in Python?¶
The pickle library module solves this in a very general way (though you still can’t store things like open files, sockets or windows), and the shelve library module uses pickle and (g)dbm to create persistent mappings containing arbitrary Python objects.

Mathematics and Numerics¶
How do I generate random numbers in Python?¶
The standard module random implements a random number generator. Usage is simple:

import random
random.random()

This returns a random floating-point number in the range [0, 1).

There are also many other specialized generators in this module, such as:

randrange(a, b) chooses an integer in the range [a, b).
uniform(a, b) chooses a floating-point number in the range [a, b).
normalvariate(mean, sdev) samples the normal (Gaussian) distribution.

Some higher-level functions operate on sequences directly, such as:

choice(S) chooses a random element from a given sequence.
shuffle(L) shuffles a list in-place, i.e.
permutes it randomly.

There’s also a Random class you can instantiate to create independent multiple random number generators.
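A short sketch of such independent generators (the seed value 42 is arbitrary):

```python
import random

# Each Random instance has its own internal state, unaffected by
# the module-level functions or by other instances.
gen_a = random.Random(42)
gen_b = random.Random(42)

# Equal seeds produce identical, reproducible sequences.
seq_a = [gen_a.random() for _ in range(3)]
seq_b = [gen_b.random() for _ in range(3)]
assert seq_a == seq_b
```

This is useful when two parts of a program need reproducible streams that must not disturb each other.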
Design and History FAQ¶
Why does Python use indentation for grouping of statements?¶
Guido van Rossum believes that using indentation for grouping is extremely elegant and contributes a lot to the clarity of the average Python program. Most people learn to love this feature after a while.

Since there are no begin/end brackets there cannot be a disagreement between grouping perceived by the parser and the human reader. Occasionally C programmers will encounter a fragment of code like this:

if (x <= y)
        x++;
        y--;
z++;

Only the x++ statement is executed if the condition is true, but the indentation leads many to believe otherwise. Even experienced C programmers will sometimes stare at it a long time wondering why y is being decremented even for x > y.

Because there are no begin/end brackets, Python is much less prone to coding-style conflicts. In C there are many different ways to place the braces. After becoming used to reading and writing code using a particular style, it is normal to feel somewhat uneasy when reading (or being required to write) in a different one.

Many coding styles place begin/end brackets on a line by themselves. This makes programs considerably longer and wastes valuable screen space, making it harder to get a good overview of a program. Ideally, a function should fit on one screen (say, 20–30 lines). 20 lines of Python can do a lot more work than 20 lines of C. This is not solely due to the lack of begin/end brackets (the lack of declarations and the high-level data types are also responsible) but the indentation-based syntax certainly helps.

Why am I getting strange results with simple arithmetic operations?¶
See the next question.

Why are floating-point calculations so inaccurate?¶
Users are often surprised by results like this:

>>> 1.2 - 1.0
0.19999999999999996

and think it is a bug in Python.
It\u2019s not. This has little to do with Python, and much more to do with how the underlying platform handles floating-point numbers.\nThe float\ntype in CPython uses a C double\nfor storage. A\nfloat\nobject\u2019s value is stored in binary floating-point with a fixed\nprecision (typically 53 bits) and Python uses C operations, which in turn rely\non the hardware implementation in the processor, to perform floating-point\noperations. This means that as far as floating-point operations are concerned,\nPython behaves like many popular languages including C and Java.\nMany numbers that can be written easily in decimal notation cannot be expressed exactly in binary floating point. For example, after:\n>>> x = 1.2\nthe value stored for x\nis a (very good) approximation to the decimal value\n1.2\n, but is not exactly equal to it. On a typical machine, the actual\nstored value is:\n1.0011001100110011001100110011001100110011001100110011 (binary)\nwhich is exactly:\n1.1999999999999999555910790149937383830547332763671875 (decimal)\nThe typical precision of 53 bits provides Python floats with 15\u201316 decimal digits of accuracy.\nFor a fuller explanation, please see the floating-point arithmetic chapter in the Python tutorial.\nWhy are Python strings immutable?\u00b6\nThere are several advantages.\nOne is performance: knowing that a string is immutable means we can allocate space for it at creation time, and the storage requirements are fixed and unchanging. This is also one of the reasons for the distinction between tuples and lists.\nAnother advantage is that strings in Python are considered as \u201celemental\u201d as numbers. No amount of activity will change the value 8 to anything else, and in Python, no amount of activity will change the string \u201ceight\u201d to anything else.\nWhy must \u2018self\u2019 be used explicitly in method definitions and calls?\u00b6\nThe idea was borrowed from Modula-3. 
It turns out to be very useful, for a variety of reasons.

First, it’s more obvious that you are using a method or instance attribute instead of a local variable. Reading self.x or self.meth() makes it absolutely clear that an instance variable or method is used even if you don’t know the class definition by heart. In C++, you can sort of tell by the lack of a local variable declaration (assuming globals are rare or easily recognizable), but in Python, there are no local variable declarations, so you’d have to look up the class definition to be sure. Some C++ and Java coding standards call for instance attributes to have an m_ prefix, so this explicitness is still useful in those languages, too.

Second, it means that no special syntax is necessary if you want to explicitly reference or call the method from a particular class. In C++, if you want to use a method from a base class which is overridden in a derived class, you have to use the :: operator; in Python you can write baseclass.methodname(self, <argument list>). This is particularly useful for __init__() methods, and in general in cases where a derived class method wants to extend the base class method of the same name and thus has to call the base class method somehow.

Finally, for instance variables it solves a syntactic problem with assignment: since local variables in Python are (by definition!) those variables to which a value is assigned in a function body (and that aren’t explicitly declared global), there has to be some way to tell the interpreter that an assignment was meant to assign to an instance variable instead of to a local variable, and it should preferably be syntactic (for efficiency reasons). C++ does this through declarations, but Python doesn’t have declarations and it would be a pity having to introduce them just for this purpose. Using the explicit self.var solves this nicely.
Similarly, for using instance variables, having to write self.var means that references to unqualified names inside a method don’t have to search the instance’s directories. To put it another way, local variables and instance variables live in two different namespaces, and you need to tell Python which namespace to use.

Why can’t I use an assignment in an expression?¶
Starting in Python 3.8, you can!

Assignment expressions using the walrus operator := assign a variable in an expression:

while chunk := fp.read(200):
    print(chunk)

See PEP 572 for more information.

Why does Python use methods for some functionality (e.g. list.index()) but functions for other (e.g. len(list))?¶
As Guido said:

(a) For some operations, prefix notation just reads better than postfix; prefix (and infix!) operations have a long tradition in mathematics, which likes notations where the visuals help the mathematician thinking about a problem. Compare the ease with which we rewrite a formula like x*(a+b) into x*a + x*b to the clumsiness of doing the same thing using a raw OO notation.

(b) When I read code that says len(x) I know that it is asking for the length of something. This tells me two things: the result is an integer, and the argument is some kind of container. To the contrary, when I read x.len(), I have to already know that x is some kind of container implementing an interface or inheriting from a class that has a standard len().
Witness the confusion we occasionally have when a class that is not implementing a mapping has a get() or keys() method, or something that isn’t a file has a write() method.

—https://mail.python.org/pipermail/python-3000/2006-November/004643.html

Why is join() a string method instead of a list or tuple method?¶
Strings became much more like other standard types starting in Python 1.6, when methods were added which give the same functionality that has always been available using the functions of the string module. Most of these new methods have been widely accepted, but the one which appears to make some programmers feel uncomfortable is:

", ".join(['1', '2', '4', '8', '16'])

which gives the result:

"1, 2, 4, 8, 16"

There are two common arguments against this usage.

The first runs along the lines of: “It looks really ugly using a method of a string literal (string constant)”, to which the answer is that it might, but a string literal is just a fixed value. If the methods are to be allowed on names bound to strings there is no logical reason to make them unavailable on literals.

The second objection is typically cast as: “I am really telling a sequence to join its members together with a string constant”. Sadly, you aren’t. For some reason there seems to be much less difficulty with having split() as a string method, since in that case it is easy to see that

"1, 2, 4, 8, 16".split(", ")

is an instruction to a string literal to return the substrings delimited by the given separator (or, by default, arbitrary runs of white space).

join() is a string method because in using it you are telling the separator string to iterate over a sequence of strings and insert itself between adjacent elements.
This method can be used with any argument which obeys the rules for sequence objects, including any new classes you might define yourself. Similar methods exist for bytes and bytearray objects.

How fast are exceptions?¶
A try/except block is extremely efficient if no exceptions are raised. Actually catching an exception is expensive. In versions of Python prior to 2.0 it was common to use this idiom:

try:
    value = mydict[key]
except KeyError:
    mydict[key] = getvalue(key)
    value = mydict[key]

This only made sense when you expected the dict to have the key almost all the time. If that wasn’t the case, you coded it like this:

if key in mydict:
    value = mydict[key]
else:
    value = mydict[key] = getvalue(key)

For this specific case, you could also use value = dict.setdefault(key, getvalue(key)), but only if the getvalue() call is cheap enough because it is evaluated in all cases.

Why isn’t there a switch or case statement in Python?¶
In general, structured switch statements execute one block of code when an expression has a particular value or set of values. Since Python 3.10 one can easily match literal values, or constants within a namespace, with a match ... case statement. An older alternative is a sequence of if... elif... elif... else.

For cases where you need to choose from a very large number of possibilities, you can create a dictionary mapping case values to functions to call. For example:

functions = {'a': function_1,
             'b': function_2,
             'c': self.method_1}

func = functions[value]
func()

For calling methods on objects, you can simplify yet further by using the getattr() built-in to retrieve methods with a particular name:

class MyVisitor:
    def visit_a(self):
        ...

    def dispatch(self, value):
        method_name = 'visit_' + str(value)
        method = getattr(self, method_name)
        method()

It’s suggested that you use a prefix for the method names, such as visit_ in this example.
Without such a prefix, if values are coming from an untrusted source, an attacker would be able to call any method on your object.

Imitating switch with fallthrough, as with C’s switch-case-default, is possible, much harder, and less needed.

Can’t you emulate threads in the interpreter instead of relying on an OS-specific thread implementation?¶
Answer 1: Unfortunately, the interpreter pushes at least one C stack frame for each Python stack frame. Also, extensions can call back into Python at almost random moments. Therefore, a complete threads implementation requires thread support for C.

Answer 2: Fortunately, there is Stackless Python, which has a completely redesigned interpreter loop that avoids the C stack.

Why can’t lambda expressions contain statements?¶
Python lambda expressions cannot contain statements because Python’s syntactic framework can’t handle statements nested inside expressions. However, in Python, this is not a serious problem. Unlike lambda forms in other languages, where they add functionality, Python lambdas are only a shorthand notation if you’re too lazy to define a function.

Functions are already first-class objects in Python, and can be declared in a local scope. Therefore the only advantage of using a lambda instead of a locally defined function is that you don’t need to invent a name for the function; but that’s just a local variable to which the function object (which is exactly the same type of object that a lambda expression yields) is assigned!

Can Python be compiled to machine code, C or some other language?¶
Cython compiles a modified version of Python with optional annotations into C extensions. Nuitka is an up-and-coming compiler of Python into C++ code, aiming to support the full Python language.

How does Python manage memory?¶
The details of Python memory management depend on the implementation.
The standard implementation of Python, CPython, uses reference counting to detect inaccessible objects, and another mechanism to collect reference cycles, periodically executing a cycle detection algorithm which looks for inaccessible cycles and deletes the objects involved. The gc module provides functions to perform a garbage collection, obtain debugging statistics, and tune the collector’s parameters.

Other implementations (such as Jython or PyPy), however, can rely on a different mechanism such as a full-blown garbage collector. This difference can cause some subtle porting problems if your Python code depends on the behavior of the reference counting implementation.

In some Python implementations, the following code (which is fine in CPython) will probably run out of file descriptors:

for file in very_long_list_of_files:
    f = open(file)
    c = f.read(1)

Indeed, using CPython’s reference counting and destructor scheme, each new assignment to f closes the previous file. With a traditional GC, however, those file objects will only get collected (and closed) at varying and possibly long intervals.

If you want to write code that will work with any Python implementation, you should explicitly close the file or use the with statement; this will work regardless of memory management scheme:

for file in very_long_list_of_files:
    with open(file) as f:
        c = f.read(1)

Why doesn’t CPython use a more traditional garbage collection scheme?¶
For one thing, this is not a C standard feature and hence it’s not portable. (Yes, we know about the Boehm GC library. It has bits of assembler code for most common platforms, not for all of them, and although it is mostly transparent, it isn’t completely transparent; patches are required to get Python to work with it.)

Traditional GC also becomes a problem when Python is embedded into other applications.
While in a standalone Python it’s fine to replace the standard malloc() and free() with versions provided by the GC library, an application embedding Python may want to have its own substitute for malloc() and free(), and may not want Python’s. Right now, CPython works with anything that implements malloc() and free() properly.

Why isn’t all memory freed when CPython exits?¶
Objects referenced from the global namespaces of Python modules are not always deallocated when Python exits. This may happen if there are circular references. There are also certain bits of memory that are allocated by the C library that are impossible to free (e.g. a tool like Purify will complain about these). Python is, however, aggressive about cleaning up memory on exit and does try to destroy every single object.

If you want to force Python to delete certain things on deallocation, use the atexit module to run a function that will force those deletions.

Why are there separate tuple and list data types?¶
Lists and tuples, while similar in many respects, are generally used in fundamentally different ways. Tuples can be thought of as being similar to Pascal records or C structs; they’re small collections of related data which may be of different types which are operated on as a group. For example, a Cartesian coordinate is appropriately represented as a tuple of two or three numbers.

Lists, on the other hand, are more like arrays in other languages. They tend to hold a varying number of objects all of which have the same type and which are operated on one-by-one. For example, os.listdir('.') returns a list of strings representing the files in the current directory. Functions which operate on this output would generally not break if you added another file or two to the directory.

Tuples are immutable, meaning that once a tuple has been created, you can’t replace any of its elements with a new value.
Lists are mutable, meaning that you can always change a list’s elements. Only immutable elements can be used as dictionary keys, and hence only tuples and not lists can be used as keys.

How are lists implemented in CPython?¶
CPython’s lists are really variable-length arrays, not Lisp-style linked lists. The implementation uses a contiguous array of references to other objects, and keeps a pointer to this array and the array’s length in a list head structure.

This makes indexing a list a[i] an operation whose cost is independent of the size of the list or the value of the index.

When items are appended or inserted, the array of references is resized. Some cleverness is applied to improve the performance of appending items repeatedly; when the array must be grown, some extra space is allocated so the next few times don’t require an actual resize.

How are dictionaries implemented in CPython?¶
CPython’s dictionaries are implemented as resizable hash tables. Compared to B-trees, this gives better performance for lookup (the most common operation by far) under most circumstances, and the implementation is simpler.

Dictionaries work by computing a hash code for each key stored in the dictionary using the hash() built-in function. The hash code varies widely depending on the key and a per-process seed; for example, 'Python' could hash to -539294296 while 'python', a string that differs by a single bit, could hash to 1142331976. The hash code is then used to calculate a location in an internal array where the value will be stored. Assuming that you’re storing keys that all have different hash values, this means that dictionaries take constant time, O(1) in Big-O notation, to retrieve a key.

Why must dictionary keys be immutable?¶
The hash table implementation of dictionaries uses a hash value calculated from the key value to find the key.
If the key were a mutable object, its value could change, and thus its hash could also change. But since whoever changes the key object can't tell that it was being used as a dictionary key, it can't move the entry around in the dictionary. Then, when you try to look up the same object in the dictionary it won't be found because its hash value is different. If you tried to look up the old value it wouldn't be found either, because the value of the object found in that hash bin would be different.

If you want a dictionary indexed with a list, simply convert the list to a tuple first; the function tuple(L) creates a tuple with the same entries as the list L. Tuples are immutable and can therefore be used as dictionary keys.

Some unacceptable solutions that have been proposed:

- Hash lists by their address (object ID). This doesn't work because if you construct a new list with the same value it won't be found; e.g.:

      mydict = {[1, 2]: '12'}
      print(mydict[[1, 2]])

  would raise a KeyError exception because the id of the [1, 2] used in the second line differs from that in the first line. In other words, dictionary keys should be compared using ==, not using is.

- Make a copy when using a list as a key. This doesn't work because the list, being a mutable object, could contain a reference to itself, and then the copying code would run into an infinite loop.

- Allow lists as keys but tell the user not to modify them. This would allow a class of hard-to-track bugs in programs when you forgot or modified a list by accident. It also invalidates an important invariant of dictionaries: every value in d.keys() is usable as a key of the dictionary.

- Mark lists as read-only once they are used as a dictionary key. The problem is that it's not just the top-level object that could change its value; you could use a tuple containing a list as a key.
Entering anything as a key into a dictionary would require marking all objects reachable from there as read-only, and again, self-referential objects could cause an infinite loop.

There is a trick to get around this if you need to, but use it at your own risk: you can wrap a mutable structure inside a class instance which has both an __eq__() and a __hash__() method. You must then make sure that the hash value for all such wrapper objects that reside in a dictionary (or other hash based structure) remains fixed while the object is in the dictionary (or other structure).

    class ListWrapper:
        def __init__(self, the_list):
            self.the_list = the_list

        def __eq__(self, other):
            return self.the_list == other.the_list

        def __hash__(self):
            l = self.the_list
            result = 98767 - len(l) * 555
            for i, el in enumerate(l):
                try:
                    result = result + (hash(el) % 9999999) * 1001 + i
                except Exception:
                    result = (result % 7777777) + i * 333
            return result

Note that the hash computation is complicated by the possibility that some members of the list may be unhashable and also by the possibility of arithmetic overflow.

Furthermore it must always be the case that if o1 == o2 (i.e. o1.__eq__(o2) is True) then hash(o1) == hash(o2) (i.e. o1.__hash__() == o2.__hash__()), regardless of whether the object is in a dictionary or not. If you fail to meet these restrictions, dictionaries and other hash based structures will misbehave.

In the case of ListWrapper, whenever the wrapper object is in a dictionary the wrapped list must not change, to avoid anomalies. Don't do this unless you are prepared to think hard about the requirements and the consequences of not meeting them correctly. Consider yourself warned.

Why doesn't list.sort() return the sorted list?

In situations where performance matters, making a copy of the list just to sort it would be wasteful. Therefore, list.sort() sorts the list in place.
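The in-place behaviour, and the None return that signals it, can be seen directly:

```python
values = [3, 1, 2]
result = values.sort()   # sorts the existing list; no copy is made
print(result)            # None
print(values)            # [1, 2, 3]
```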
In order to remind you of that fact, it does not return the sorted list. This way, you won't be fooled into accidentally overwriting a list when you need a sorted copy but also need to keep the unsorted version around.

If you want to return a new list, use the built-in sorted() function instead. This function creates a new list from a provided iterable, sorts it and returns it. For example, here's how to iterate over the keys of a dictionary in sorted order:

    for key in sorted(mydict):
        ...  # do whatever with mydict[key]...

How do you specify and enforce an interface spec in Python?

An interface specification for a module as provided by languages such as C++ and Java describes the prototypes for the methods and functions of the module. Many feel that compile-time enforcement of interface specifications helps in the construction of large programs.

Python 2.6 adds an abc module that lets you define Abstract Base Classes (ABCs). You can then use isinstance() and issubclass() to check whether an instance or a class implements a particular ABC. The collections.abc module defines a set of useful ABCs such as Iterable, Container, and MutableMapping.

For Python, many of the advantages of interface specifications can be obtained by an appropriate test discipline for components.

A good test suite for a module can both provide a regression test and serve as a module interface specification and a set of examples. Many Python modules can be run as a script to provide a simple "self test." Even modules which use complex external interfaces can often be tested in isolation using trivial "stub" emulations of the external interface.
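A minimal sketch of the "stub" idea using the standard unittest module (the transport and function names here are made up for illustration):

```python
import unittest

def shout_greeting(transport):
    # In real code, `transport` might be a network client; for testing,
    # anything with a matching get() method will do.
    return transport.get('/greeting').upper()

class StubTransport:
    # Trivial stand-in for the complex external interface.
    def get(self, path):
        return 'hello'

class ShoutGreetingTest(unittest.TestCase):
    def test_uppercases_response(self):
        self.assertEqual(shout_greeting(StubTransport()), 'HELLO')

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ShoutGreetingTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())   # True
```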
The doctest and unittest modules or third-party test frameworks can be used to construct exhaustive test suites that exercise every line of code in a module.

An appropriate testing discipline can help build large complex applications in Python as well as having interface specifications would. In fact, it can be better because an interface specification cannot test certain properties of a program. For example, the list.append() method is expected to add new elements to the end of some internal list; an interface specification cannot test that your list.append() implementation will actually do this correctly, but it's trivial to check this property in a test suite.

Writing test suites is very helpful, and you might want to design your code to make it easily tested. One increasingly popular technique, test-driven development, calls for writing parts of the test suite first, before you write any of the actual code. Of course Python allows you to be sloppy and not write test cases at all.

Why is there no goto?

In the 1970s people realized that unrestricted goto could lead to messy "spaghetti" code that was hard to understand and revise. In a high-level language, it is also unneeded as long as there are ways to branch (in Python, with if statements and or, and, and if/else expressions) and loop (with while and for statements, possibly containing continue and break).

One can also use exceptions to provide a "structured goto" that works even across function calls. Many feel that exceptions can conveniently emulate all reasonable uses of the go or goto constructs of C, Fortran, and other languages. For example:

    class label(Exception): pass  # declare a label

    try:
        ...
        if condition: raise label()  # goto label
        ...
    except label:  # where to goto
        pass
    ...

This doesn't allow you to jump into the middle of a loop, but that's usually considered an abuse of goto anyway.
Use sparingly.

Why can't raw strings (r-strings) end with a backslash?

More precisely, they can't end with an odd number of backslashes: the unpaired backslash at the end escapes the closing quote character, leaving an unterminated string.

Raw strings were designed to ease creating input for processors (chiefly regular expression engines) that want to do their own backslash escape processing. Such processors consider an unmatched trailing backslash to be an error anyway, so raw strings disallow that. In return, they allow you to pass on the string quote character by escaping it with a backslash. These rules work well when r-strings are used for their intended purpose.

If you're trying to build Windows pathnames, note that all Windows system calls accept forward slashes too:

    f = open("/mydir/file.txt")  # works fine!

If you're trying to build a pathname for a DOS command, try e.g. one of

    dir = r"\this\is\my\dos\dir" "\\"
    dir = r"\this\is\my\dos\dir\ "[:-1]
    dir = "\\this\\is\\my\\dos\\dir\\"

Why doesn't Python have a "with" statement for attribute assignments?

Python has a with statement that wraps the execution of a block, calling code on the entrance and exit from the block. Some languages have a construct that looks like this:

    with obj:
        a = 1               # equivalent to obj.a = 1
        total = total + 1   # obj.total = obj.total + 1

In Python, such a construct would be ambiguous.

Other languages, such as Object Pascal, Delphi, and C++, use static types, so it's possible to know, in an unambiguous way, what member is being assigned to. This is the main point of static typing – the compiler always knows the scope of every variable at compile time.

Python uses dynamic types. It is impossible to know in advance which attribute will be referenced at runtime. Member attributes may be added or removed from objects on the fly.
This makes it impossible to know, from a simple reading, what attribute is being referenced: a local one, a global one, or a member attribute?

For instance, take the following incomplete snippet:

    def foo(a):
        with a:
            print(x)

The snippet assumes that a must have a member attribute called x. However, there is nothing in Python that tells the interpreter this. What should happen if a is, let us say, an integer? If there is a global variable named x, will it be used inside the with block? As you see, the dynamic nature of Python makes such choices much harder.

The primary benefit of with and similar language features (reduction of code volume) can, however, easily be achieved in Python by assignment. Instead of:

    function(args).mydict[index][index].a = 21
    function(args).mydict[index][index].b = 42
    function(args).mydict[index][index].c = 63

write this:

    ref = function(args).mydict[index][index]
    ref.a = 21
    ref.b = 42
    ref.c = 63

This also has the side-effect of increasing execution speed because name bindings are resolved at run-time in Python, and the second version only needs to perform the resolution once.

Similar proposals that would introduce syntax to further reduce code volume, such as using a 'leading dot', have been rejected in favour of explicitness (see https://mail.python.org/pipermail/python-ideas/2016-May/040070.html).

Why don't generators support the with statement?

For technical reasons, a generator used directly as a context manager would not work correctly. When, as is most common, a generator is used as an iterator run to completion, no closing is needed. When it is, wrap it as contextlib.closing(generator) in the with statement.

Why are colons required for the if/while/def/class statements?

The colon is required primarily to enhance readability (one of the results of the experimental ABC language).
Consider this:

    if a == b
        print(a)

versus

    if a == b:
        print(a)

Notice how the second one is slightly easier to read. Notice further how a colon sets off the example in this FAQ answer; it's a standard usage in English.

Another minor reason is that the colon makes it easier for editors with syntax highlighting; they can look for colons to decide when indentation needs to be increased instead of having to do a more elaborate parsing of the program text.

Why does Python allow commas at the end of lists and tuples?

Python lets you add a trailing comma at the end of lists, tuples, and dictionaries:

    [1, 2, 3,]
    ('a', 'b', 'c',)
    d = {
        "A": [1, 5],
        "B": [6, 7],  # last trailing comma is optional but good style
    }

There are several reasons to allow this.

When you have a literal value for a list, tuple, or dictionary spread across multiple lines, it's easier to add more elements because you don't have to remember to add a comma to the previous line. The lines can also be reordered without creating a syntax error.

Accidentally omitting the comma can lead to errors that are hard to diagnose. For example:

    x = [
        "fee",
        "fie"
        "foo",
        "fum"
    ]

This list looks like it has four elements, but it actually contains three: "fee", "fiefoo" and "fum".
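The surprise comes from implicit concatenation of adjacent string literals, which can be checked directly:

```python
# Adjacent string literals are concatenated at compile time:
merged = "fie" "foo"
print(merged)        # fiefoo

x = ["fee", "fie" "foo", "fum"]
print(len(x))        # 3
```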
Always adding the comma avoids this source of error. Allowing the trailing comma may also make programmatic code generation easier.
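For example, a code generator can emit every element line the same way, comma included, with no special case for the last item (emit_list is a hypothetical helper, not a standard function):

```python
def emit_list(name, items):
    # Every element line ends with a comma, so the loop body is uniform.
    lines = [f"{name} = ["]
    for item in items:
        lines.append(f"    {item!r},")
    lines.append("]")
    return "\n".join(lines)

source = emit_list("fruits", ["fee", "fie", "foo"])
print(source)
```

The generated text is itself valid Python and round-trips through exec().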
{"url": "https://docs.python.org/3/faq/programming.html", "title": null, "content": "Programming FAQ\u00b6\nGeneral Questions\u00b6\nIs there a source code level debugger with breakpoints, single-stepping, etc.?\u00b6\nYes.\nSeveral debuggers for Python are described below, and the built-in function\nbreakpoint()\nallows you to drop into any of them.\nThe pdb module is a simple but adequate console-mode debugger for Python. It is\npart of the standard Python library, and is documented in the Library\nReference Manual\n. You can also write your own debugger by using the code\nfor pdb as an example.\nThe IDLE interactive development environment, which is part of the standard Python distribution (normally available as Tools/scripts/idle3), includes a graphical debugger.\nPythonWin is a Python IDE that includes a GUI debugger based on pdb. The PythonWin debugger colors breakpoints and has quite a few cool features such as debugging non-PythonWin programs. PythonWin is available as part of pywin32 project and as a part of the ActivePython distribution.\nEric is an IDE built on PyQt and the Scintilla editing component.\ntrepan3k is a gdb-like debugger.\nVisual Studio Code is an IDE with debugging tools that integrates with version-control software.\nThere are a number of commercial Python IDEs that include graphical debuggers. They include:\nAre there tools to help find bugs or perform static analysis?\u00b6\nYes.\nPylint and Pyflakes do basic checking that will help you catch bugs sooner.\nStatic type checkers such as Mypy, Pyre, and Pytype can check type hints in Python source code.\nHow can I create a stand-alone binary from a Python script?\u00b6\nYou don\u2019t need the ability to compile Python to C code if all you want is a stand-alone program that users can download and run without having to install the Python distribution first. 
There are a number of tools that determine the set of modules required by a program and bind these modules together with a Python binary to produce a single executable.

One is to use the freeze tool, which is included in the Python source tree as Tools/freeze. It converts Python byte code to C arrays; with a C compiler you can embed all your modules into a new program, which is then linked with the standard Python modules.

It works by scanning your source recursively for import statements (in both forms) and looking for the modules in the standard Python path as well as in the source directory (for built-in modules). It then turns the bytecode for modules written in Python into C code (array initializers that can be turned into code objects using the marshal module) and creates a custom-made config file that only contains those built-in modules which are actually used in the program. It then compiles the generated C code and links it with the rest of the Python interpreter to form a self-contained binary which acts exactly like your script.

The following packages can help with the creation of console and GUI executables:

- Nuitka (cross-platform)
- PyInstaller (cross-platform)
- PyOxidizer (cross-platform)
- cx_Freeze (cross-platform)
- py2app (macOS only)
- py2exe (Windows only)

Are there coding standards or a style guide for Python programs?

Yes. The coding style required for standard library modules is documented as PEP 8.

Core Language

Why am I getting an UnboundLocalError when the variable has a value?

It can be a surprise to get the UnboundLocalError in previously working code when it is modified by adding an assignment statement somewhere in the body of a function.

This code:

    >>> x = 10
    >>> def bar():
    ...     print(x)
    ...
    >>> bar()
    10

works, but this code:

    >>> x = 10
    >>> def foo():
    ...     print(x)
    ...     x += 1

results in an UnboundLocalError:

    >>> foo()
    Traceback (most recent call last):
      ...
    UnboundLocalError: local variable 'x' referenced before assignment

This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope. Since the last statement in foo assigns a new value to x, the compiler recognizes it as a local variable. Consequently, when the earlier print(x) attempts to print the uninitialized local variable, an error results.

In the example above you can access the outer scope variable by declaring it global:

    >>> x = 10
    >>> def foobar():
    ...     global x
    ...     print(x)
    ...     x += 1
    ...
    >>> foobar()
    10

This explicit declaration is required in order to remind you that (unlike the superficially analogous situation with class and instance variables) you are actually modifying the value of the variable in the outer scope:

    >>> print(x)
    11

You can do a similar thing in a nested scope using the nonlocal keyword:

    >>> def foo():
    ...     x = 10
    ...     def bar():
    ...         nonlocal x
    ...         print(x)
    ...         x += 1
    ...     bar()
    ...     print(x)
    ...
    >>> foo()
    10
    11

What are the rules for local and global variables in Python?

In Python, variables that are only referenced inside a function are implicitly global. If a variable is assigned a value anywhere within the function's body, it's assumed to be a local unless explicitly declared as global.

Though a bit surprising at first, a moment's consideration explains this. On one hand, requiring global for assigned variables provides a bar against unintended side-effects. On the other hand, if global was required for all global references, you'd be using global all the time. You'd have to declare as global every reference to a built-in function or to a component of an imported module.
This clutter would defeat the usefulness of the global declaration for identifying side-effects.

Why do lambdas defined in a loop with different values all return the same result?

Assume you use a for loop to define a few different lambdas (or even plain functions), e.g.:

    >>> squares = []
    >>> for x in range(5):
    ...     squares.append(lambda: x**2)

This gives you a list that contains 5 lambdas that calculate x**2. You might expect that, when called, they would return, respectively, 0, 1, 4, 9, and 16. However, when you actually try you will see that they all return 16:

    >>> squares[2]()
    16
    >>> squares[4]()
    16

This happens because x is not local to the lambdas, but is defined in the outer scope, and it is accessed when the lambda is called, not when it is defined. At the end of the loop, the value of x is 4, so all the functions now return 4**2, i.e. 16. You can also verify this by changing the value of x and seeing how the results of the lambdas change:

    >>> x = 8
    >>> squares[2]()
    64

In order to avoid this, you need to save the values in variables local to the lambdas, so that they don't rely on the value of the global x:

    >>> squares = []
    >>> for x in range(5):
    ...     squares.append(lambda n=x: n**2)

Here, n=x creates a new variable n local to the lambda and computed when the lambda is defined so that it has the same value that x had at that point in the loop. This means that the value of n will be 0 in the first lambda, 1 in the second, 2 in the third, and so on. Therefore each lambda will now return the correct result:

    >>> squares[2]()
    4
    >>> squares[4]()
    16

Note that this behaviour is not peculiar to lambdas, but applies to regular functions too.

What are the "best practices" for using import in a module?

In general, don't use from modulename import *.
Doing so clutters the importer's namespace, and makes it much harder for linters to detect undefined names.

Import modules at the top of a file. Doing so makes it clear what other modules your code requires and avoids questions of whether the module name is in scope. Using one import per line makes it easy to add and delete module imports, but using multiple imports per line uses less screen space.

It's good practice if you import modules in the following order:

1. standard library modules – e.g. sys, os, argparse, re
2. third-party library modules (anything installed in Python's site-packages directory) – e.g. dateutil, requests, PIL.Image
3. locally developed modules

It is sometimes necessary to move imports to a function or class to avoid problems with circular imports. Gordon McMillan says:

    Circular imports are fine where both modules use the "import <module>" form of import. They fail when the 2nd module wants to grab a name out of the first ("from module import name") and the import is at the top level. That's because names in the 1st are not yet available, because the first module is busy importing the 2nd.

In this case, if the second module is only used in one function, then the import can easily be moved into that function. By the time the import is called, the first module will have finished initializing, and the second module can do its import.

It may also be necessary to move imports out of the top level of code if some of the modules are platform-specific. In that case, it may not even be possible to import all of the modules at the top of the file. In this case, importing the correct modules in the corresponding platform-specific code is a good option.

Only move imports into a local scope, such as inside a function definition, if it's necessary to solve a problem such as avoiding a circular import or you are trying to reduce the initialization time of a module.
This technique is especially helpful if many of the imports are unnecessary depending on how the program executes. You may also want to move imports into a function if the modules are only ever used in that function. Note that loading a module the first time may be expensive because of the one time initialization of the module, but loading a module multiple times is virtually free, costing only a couple of dictionary lookups. Even if the module name has gone out of scope, the module is probably available in sys.modules.

How can I pass optional or keyword parameters from one function to another?

Collect the arguments using the * and ** specifiers in the function's parameter list; this gives you the positional arguments as a tuple and the keyword arguments as a dictionary. You can then pass these arguments when calling another function by using * and **:

    def f(x, *args, **kwargs):
        ...
        kwargs['width'] = '14.3c'
        ...
        g(x, *args, **kwargs)

What is the difference between arguments and parameters?

Parameters are defined by the names that appear in a function definition, whereas arguments are the values actually passed to a function when calling it. Parameters define what kind of arguments a function can accept. For example, given the function definition:

    def func(foo, bar=None, **kwargs):
        pass

foo, bar and kwargs are parameters of func. However, when calling func, for example:

    func(42, bar=314, extra=somevar)

the values 42, 314, and somevar are arguments.

Why did changing list 'y' also change list 'x'?

If you wrote code like:

    >>> x = []
    >>> y = x
    >>> y.append(10)
    >>> y
    [10]
    >>> x
    [10]

you might be wondering why appending an element to y changed x too.

There are two factors that produce this result:

- Variables are simply names that refer to objects. Doing y = x doesn't create a copy of the list – it creates a new variable y that refers to the same object x refers to.
This means that there is only one object (the list), and both x and y refer to it.

- Lists are mutable, which means that you can change their content.

After the call to append(), the content of the mutable object has changed from [] to [10]. Since both the variables refer to the same object, using either name accesses the modified value [10].

If we instead assign an immutable object to x:

    >>> x = 5  # ints are immutable
    >>> y = x
    >>> x = x + 1  # 5 can't be mutated, we are creating a new object here
    >>> x
    6
    >>> y
    5

we can see that in this case x and y are not equal anymore. This is because integers are immutable, and when we do x = x + 1 we are not mutating the int 5 by incrementing its value; instead, we are creating a new object (the int 6) and assigning it to x (that is, changing which object x refers to). After this assignment we have two objects (the ints 6 and 5) and two variables that refer to them (x now refers to 6 but y still refers to 5).

Some operations (for example y.append(10) and y.sort()) mutate the object, whereas superficially similar operations (for example y = y + [10] and sorted(y)) create a new object. In general in Python (and in all cases in the standard library) a method that mutates an object will return None to help avoid getting the two types of operations confused. So if you mistakenly write y.sort() thinking it will give you a sorted copy of y, you'll instead end up with None, which will likely cause your program to generate an easily diagnosed error.

However, there is one class of operations where the same operation sometimes has different behaviors with different types: the augmented assignment operators.
For example, += mutates lists but not tuples or ints (a_list += [1, 2, 3] is equivalent to a_list.extend([1, 2, 3]) and mutates a_list, whereas some_tuple += (1, 2, 3) and some_int += 1 create new objects).

In other words:

- If we have a mutable object (list, dict, set, etc.), we can use some specific operations to mutate it and all the variables that refer to it will see the change.

- If we have an immutable object (str, int, tuple, etc.), all the variables that refer to it will always see the same value, but operations that transform that value into a new value always return a new object.

If you want to know if two variables refer to the same object or not, you can use the is operator, or the built-in function id().

How do I write a function with output parameters (call by reference)?

Remember that arguments are passed by assignment in Python. Since assignment just creates references to objects, there's no alias between an argument name in the caller and callee, and so no call-by-reference per se. You can achieve the desired effect in a number of ways.

By returning a tuple of the results:

    >>> def func1(a, b):
    ...     a = 'new-value'      # a and b are local names
    ...     b = b + 1            # assigned to new objects
    ...     return a, b          # return new values
    ...
    >>> x, y = 'old-value', 99
    >>> func1(x, y)
    ('new-value', 100)

This is almost always the clearest solution.

By using global variables. This isn't thread-safe, and is not recommended.

By passing a mutable (changeable in-place) object:

    >>> def func2(a):
    ...     a[0] = 'new-value'   # 'a' references a mutable list
    ...     a[1] = a[1] + 1      # changes a shared object
    ...
    >>> args = ['old-value', 99]
    >>> func2(args)
    >>> args
    ['new-value', 100]

By passing in a dictionary that gets mutated:

    >>> def func3(args):
    ...     args['a'] = 'new-value'      # args is a mutable dictionary
    ...     args['b'] = args['b'] + 1    # change it in-place
    ...
    >>> args = {'a': 'old-value', 'b': 99}
    >>> func3(args)
    >>> args
    {'a': 'new-value', 'b': 100}

Or bundle up values in a class instance:

    >>> class Namespace:
    ...     def __init__(self, /, **args):
    ...         for key, value in args.items():
    ...             setattr(self, key, value)
    ...
    >>> def func4(args):
    ...     args.a = 'new-value'     # args is a mutable Namespace
    ...     args.b = args.b + 1      # change object in-place
    ...
    >>> args = Namespace(a='old-value', b=99)
    >>> func4(args)
    >>> vars(args)
    {'a': 'new-value', 'b': 100}

There's almost never a good reason to get this complicated. Your best choice is to return a tuple containing the multiple results.

How do you make a higher order function in Python?

You have two choices: you can use nested scopes or you can use callable objects. For example, suppose you wanted to define linear(a,b) which returns a function f(x) that computes the value a*x+b. Using nested scopes:

    def linear(a, b):
        def result(x):
            return a * x + b
        return result

Or using a callable object:

    class linear:
        def __init__(self, a, b):
            self.a, self.b = a, b

        def __call__(self, x):
            return self.a * x + self.b

In both cases,

    taxes = linear(0.3, 2)

gives a callable object where taxes(10e6) == 0.3 * 10e6 + 2.

The callable object approach has the disadvantage that it is a bit slower and results in slightly longer code.
However, note that a collection of callables can share their signature via inheritance:

    class exponential(linear):
        # __init__ inherited
        def __call__(self, x):
            return self.a * (x ** self.b)

Objects can encapsulate state for several methods:

    class counter:
        value = 0

        def set(self, x):
            self.value = x

        def up(self):
            self.value = self.value + 1

        def down(self):
            self.value = self.value - 1

    count = counter()
    inc, dec, reset = count.up, count.down, count.set

Here inc(), dec() and reset() act like functions which share the same counting variable.

How do I copy an object in Python?

In general, try copy.copy() or copy.deepcopy() for the general case. Not all objects can be copied, but most can.

Some objects can be copied more easily. Dictionaries have a copy() method:

    newdict = olddict.copy()

Sequences can be copied by slicing:

    new_l = l[:]

How can I find the methods or attributes of an object?

For an instance x of a user-defined class, dir(x) returns an alphabetized list of the names containing the instance attributes and methods and attributes defined by its class.

How can my code discover the name of an object?

Generally speaking, it can't, because objects don't really have names. Essentially, assignment always binds a name to a value; the same is true of def and class statements, but in that case the value is a callable. Consider the following code:

    >>> class A:
    ...     pass
    ...
    >>> B = A
    >>> a = B()
    >>> b = a
    >>> print(b)
    <__main__.A object at 0x16D07CC>
    >>> print(a)
    <__main__.A object at 0x16D07CC>

Arguably the class has a name: even though it is bound to two names and invoked through the name B, the created instance is still reported as an instance of class A.
However, it is impossible to say whether the instance's name is a or b, since both names are bound to the same value.

Generally speaking, it should not be necessary for your code to "know the names" of particular values. Unless you are deliberately writing introspective programs, this is usually an indication that a change of approach might be beneficial.

In comp.lang.python, Fredrik Lundh once gave an excellent analogy in answer to this question:

The same way as you get the name of that cat you found on your porch: the cat (object) itself cannot tell you its name, and it doesn't really care – so the only way to find out what it's called is to ask all your neighbours (namespaces) if it's their cat (object)…

…and don't be surprised if you'll find that it's known by many names, or no name at all!

What's up with the comma operator's precedence?¶

Comma is not an operator in Python. Consider this session:

>>> "a" in "b", "a"
(False, 'a')

Since the comma is not an operator but a separator between expressions, the above is evaluated as if you had entered:

("a" in "b"), "a"

not:

"a" in ("b", "a")

The same is true of the various assignment operators (=, +=, etc.). They are not truly operators but syntactic delimiters in assignment statements.

Is there an equivalent of C's "?:" ternary operator?¶

Yes, there is. The syntax is as follows:

[on_true] if [expression] else [on_false]

x, y = 50, 25
small = x if x < y else y

Before this syntax was introduced in Python 2.5, a common idiom was to use logical operators:

[expression] and [on_true] or [on_false]

However, this idiom is unsafe, as it can give wrong results when on_true has a false boolean value. Therefore, it is always better to use the ... if ... else ... form.

Is it possible to write obfuscated one-liners in Python?¶

Yes.
Usually this is done by nesting lambda within lambda. See the following three examples, slightly adapted from Ulf Bartelt:

from functools import reduce

# Primes < 1000
print(list(filter(None,map(lambda y:y*reduce(lambda x,y:x*y!=0,
map(lambda x,y=y:y%x,range(2,int(pow(y,0.5)+1))),1),range(2,1000)))))

# First 10 Fibonacci numbers
print(list(map(lambda x,f=lambda x,f:(f(x-1,f)+f(x-2,f)) if x>1 else 1:
f(x,f), range(10))))

# Mandelbrot set
print((lambda Ru,Ro,Iu,Io,IM,Sx,Sy:reduce(lambda x,y:x+'\n'+y,map(lambda y,
Iu=Iu,Io=Io,Ru=Ru,Ro=Ro,Sy=Sy,L=lambda yc,Iu=Iu,Io=Io,Ru=Ru,Ro=Ro,i=IM,
Sx=Sx,Sy=Sy:reduce(lambda x,y:x+y,map(lambda x,xc=Ru,yc=yc,Ru=Ru,Ro=Ro,
i=i,Sx=Sx,F=lambda xc,yc,x,y,k,f=lambda xc,yc,x,y,k,f:(k<=0)or (x*x+y*y
>=4.0) or 1+f(xc,yc,x*x-y*y+xc,2.0*x*y+yc,k-1,f):f(xc,yc,x,y,k,f):chr(
64+F(Ru+x*(Ro-Ru)/Sx,yc,0,0,i)),range(Sx))):L(Iu+y*(Io-Iu)/Sy),range(Sy
))))(-2.1, 0.7, -1.2, 1.2, 30, 80, 24))
#    \___ ___/  \___ ___/  |   |   |__ lines on screen
#        V          V      |   |______ columns on screen
#        |          |      |__________ maximum of "iterations"
#        |          |_________________ range on y axis
#        |____________________________ range on x axis

Don't try this at home, kids!

What does the slash (/) in the parameter list of a function mean?¶

A slash in the argument list of a function denotes that the parameters prior to it are positional-only. Positional-only parameters are the ones without an externally usable name. Upon calling a function that accepts positional-only parameters, arguments are mapped to parameters based solely on their position. For example, divmod() is a function that accepts positional-only parameters. Its documentation looks like this:

>>> help(divmod)
Help on built-in function divmod in module builtins:

divmod(x, y, /)
    Return the tuple (x//y, x%y).  Invariant: div*y + mod == x.

The slash at the end of the parameter list means that both parameters are positional-only.
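The same slash syntax works in def statements for your own functions. A minimal sketch (clamp is a made-up example, not a built-in):

```python
def clamp(value, low, high, /):
    # All three parameters are positional-only because of the trailing slash:
    # callers cannot pass them by keyword.
    return max(low, min(value, high))

print(clamp(15, 0, 10))   # 10

try:
    clamp(value=15, low=0, high=10)
except TypeError as exc:
    print(type(exc).__name__)  # TypeError
```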
Thus, calling divmod() with keyword arguments would lead to an error:

>>> divmod(x=3, y=4)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: divmod() takes no keyword arguments

Numbers and strings¶

How do I specify hexadecimal and octal integers?¶

To specify an octal integer, precede the octal value with a zero, and then a lower or uppercase "o". For example, to set the variable "a" to the octal value "10" (8 in decimal), type:

>>> a = 0o10
>>> a
8

Hexadecimal is just as easy. Simply precede the hexadecimal number with a zero, and then a lower or uppercase "x". Hexadecimal digits can be specified in lower or uppercase. For example, in the Python interpreter:

>>> a = 0xa5
>>> a
165
>>> b = 0XB2
>>> b
178

Why does -22 // 10 return -3?¶

It's primarily driven by the desire that i % j have the same sign as j. If you want that, and also want:

i == (i // j) * j + (i % j)

then integer division has to return the floor. C also requires that identity to hold, and then compilers that truncate i // j need to make i % j have the same sign as i.

There are few real use cases for i % j when j is negative. When j is positive, there are many, and in virtually all of them it's more useful for i % j to be >= 0. If the clock says 10 now, what did it say 200 hours ago? -190 % 12 == 2 is useful; -190 % 12 == -10 is a bug waiting to bite.

How do I get int literal attribute instead of SyntaxError?¶

Trying to look up an int literal attribute in the normal manner gives a SyntaxError because the period is seen as a decimal point:

>>> 1.__class__
  File "<stdin>", line 1
    1.__class__
     ^
SyntaxError: invalid decimal literal

The solution is to separate the literal from the period with either a space or parentheses.

>>> 1 .__class__
<class 'int'>
>>> (1).__class__
<class 'int'>

How do I convert a string to a number?¶

For integers, use the built-in int() type constructor, e.g.
int('144')\n== 144\n. Similarly, float()\nconverts to a floating-point number,\ne.g. float('144') == 144.0\n.\nBy default, these interpret the number as decimal, so that int('0144') ==\n144\nholds true, and int('0x144')\nraises ValueError\n. int(string,\nbase)\ntakes the base to convert from as a second optional argument, so int(\n'0x144', 16) == 324\n. If the base is specified as 0, the number is interpreted\nusing Python\u2019s rules: a leading \u20180o\u2019 indicates octal, and \u20180x\u2019 indicates a hex\nnumber.\nDo not use the built-in function eval()\nif all you need is to convert\nstrings to numbers. eval()\nwill be significantly slower and it presents a\nsecurity risk: someone could pass you a Python expression that might have\nunwanted side effects. For example, someone could pass\n__import__('os').system(\"rm -rf $HOME\")\nwhich would erase your home\ndirectory.\neval()\nalso has the effect of interpreting numbers as Python expressions,\nso that e.g. eval('09')\ngives a syntax error because Python does not allow\nleading \u20180\u2019 in a decimal number (except \u20180\u2019).\nHow do I convert a number to a string?\u00b6\nTo convert, e.g., the number 144\nto the string '144'\n, use the built-in type\nconstructor str()\n. If you want a hexadecimal or octal representation, use\nthe built-in functions hex()\nor oct()\n. For fancy formatting, see\nthe f-strings and Format String Syntax sections,\ne.g. \"{:04d}\".format(144)\nyields\n'0144'\nand \"{:.3f}\".format(1.0/3.0)\nyields '0.333'\n.\nHow do I modify a string in place?\u00b6\nYou can\u2019t, because strings are immutable. In most situations, you should\nsimply construct a new string from the various parts you want to assemble\nit from. 
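For instance, "changing" one word of a string really means assembling a new string from slices of the old one; the original is untouched:

```python
s = "Hello, world"
# Build a new string from a slice of the old one plus new text.
t = s[:7] + "there!"
print(t)   # Hello, there!
print(s)   # Hello, world  (unchanged)
```

The same result can be had with s.replace("world", "there!"), which likewise returns a new string.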
However, if you need an object with the ability to modify in-place unicode data, try using an io.StringIO object or the array module:

>>> import io
>>> s = "Hello, world"
>>> sio = io.StringIO(s)
>>> sio.getvalue()
'Hello, world'
>>> sio.seek(7)
7
>>> sio.write("there!")
6
>>> sio.getvalue()
'Hello, there!'

>>> import array
>>> a = array.array('w', s)
>>> print(a)
array('w', 'Hello, world')
>>> a[0] = 'y'
>>> print(a)
array('w', 'yello, world')
>>> a.tounicode()
'yello, world'

How do I use strings to call functions/methods?¶

There are various techniques.

The best is to use a dictionary that maps strings to functions. The primary advantage of this technique is that the strings do not need to match the names of the functions. This is also the primary technique used to emulate a case construct:

def a(): pass
def b(): pass

dispatch = {'go': a, 'stop': b}  # Note lack of parens for funcs

dispatch[get_input()]()  # Note trailing parens to call function

Use the built-in function getattr():

import foo
getattr(foo, 'bar')()

Note that getattr() works on any object, including classes, class instances, modules, and so on.

This is used in several places in the standard library, like this:

class Foo:
    def do_foo(self):
        ...

    def do_bar(self):
        ...

f = getattr(foo_instance, 'do_' + opname)
f()

Use locals() to resolve the function name:

def myFunc():
    print("hello")

fname = "myFunc"
f = locals()[fname]
f()

Is there an equivalent of Perl's chomp() for removing trailing newlines from strings?¶

You can use S.rstrip("\r\n") to remove all occurrences of any line terminator from the end of the string S without removing other trailing whitespace. If the string S represents more than one line, with several empty lines at the end, the line terminators for all the blank lines will be removed:

>>> lines = ("line 1 \r\n"
...          "\r\n"
...          "\r\n")
>>> lines.rstrip("\n\r")
'line 1 '

Since this is typically only desired when reading text one line at a time, using S.rstrip() this way works well.

Is there a scanf() or sscanf() equivalent?¶

Not as such.

For simple input parsing, the easiest approach is usually to split the line into whitespace-delimited words using the split() method of string objects and then convert decimal strings to numeric values using int() or float(). split() supports an optional "sep" parameter which is useful if the line uses something other than whitespace as a separator.

For more complicated input parsing, regular expressions are more powerful than C's sscanf and better suited for the task.

What does UnicodeDecodeError or UnicodeEncodeError mean?¶

See the Unicode HOWTO.

Can I end a raw string with an odd number of backslashes?¶

A raw string ending with an odd number of backslashes will escape the string's quote:

>>> r'C:\this\will\not\work\'
  File "<stdin>", line 1
    r'C:\this\will\not\work\'
                            ^
SyntaxError: unterminated string literal (detected at line 1)

There are several workarounds for this. One is to use regular strings and double the backslashes:

>>> 'C:\\this\\will\\work\\'
'C:\\this\\will\\work\\'

Another is to concatenate a regular string containing an escaped backslash to the raw string:

>>> r'C:\this\will\work' '\\'
'C:\\this\\will\\work\\'

It is also possible to use os.path.join() to append a backslash on Windows:

>>> os.path.join(r'C:\this\will\work', '')
'C:\\this\\will\\work\\'

Note that while a backslash will "escape" a quote for the purposes of determining where the raw string ends, no escaping occurs when interpreting the value of the raw string.
That is, the backslash remains present in the value of the raw string:

>>> r'backslash\'preserved'
"backslash\\'preserved"

Also see the specification in the language reference.

Performance¶

My program is too slow. How do I speed it up?¶

That's a tough one, in general. First, here is a list of things to remember before diving further:

Performance characteristics vary across Python implementations. This FAQ focuses on CPython.

Behaviour can vary across operating systems, especially when talking about I/O or multi-threading.

You should always find the hot spots in your program before attempting to optimize any code (see the profile module).

Writing benchmark scripts will allow you to iterate quickly when searching for improvements (see the timeit module).

It is highly recommended to have good code coverage (through unit testing or any other technique) before potentially introducing regressions hidden in sophisticated optimizations.

That being said, there are many tricks to speed up Python code. Here are some general principles which go a long way towards reaching acceptable performance levels:

Making your algorithms faster (or changing to faster ones) can yield much larger benefits than trying to sprinkle micro-optimization tricks all over your code.

Use the right data structures. Study documentation for the Built-in Types and the collections module.

When the standard library provides a primitive for doing something, it is likely (although not guaranteed) to be faster than any alternative you may come up with. This is doubly true for primitives written in C, such as builtins and some extension types. For example, be sure to use either the list.sort() built-in method or the related sorted() function to do sorting (and see the Sorting Techniques for examples of moderately advanced usage).

Abstractions tend to create indirections and force the interpreter to work more.
If the levels of indirection outweigh the amount of useful work done, your program will be slower. You should avoid excessive abstraction, especially in the form of tiny functions or methods (which are also often detrimental to readability).

If you have reached the limit of what pure Python can allow, there are tools to take you further. For example, Cython can compile a slightly modified version of Python code into a C extension, and can be used on many different platforms. Cython can take advantage of compilation (and optional type annotations) to make your code significantly faster than when interpreted. If you are confident in your C programming skills, you can also write a C extension module yourself.

See also
The wiki page devoted to performance tips.

What is the most efficient way to concatenate many strings together?¶

str and bytes objects are immutable, therefore concatenating many strings together is inefficient as each concatenation creates a new object. In the general case, the total runtime cost is quadratic in the total string length.

To accumulate many str objects, the recommended idiom is to place them into a list and call str.join() at the end:

chunks = []
for s in my_strings:
    chunks.append(s)
result = ''.join(chunks)

(Another reasonably efficient idiom is to use io.StringIO.)

To accumulate many bytes objects, the recommended idiom is to extend a bytearray object using in-place concatenation (the += operator):

result = bytearray()
for b in my_bytes_objects:
    result += b

Sequences (Tuples/Lists)¶

How do I convert between tuples and lists?¶

The type constructor tuple(seq) converts any sequence (actually, any iterable) into a tuple with the same items in the same order. For example, tuple([1, 2, 3]) yields (1, 2, 3) and tuple('abc') yields ('a', 'b', 'c').
If the argument is a tuple, it does not make a copy but returns the same object, so it is cheap to call tuple() when you aren't sure that an object is already a tuple.

The type constructor list(seq) converts any sequence or iterable into a list with the same items in the same order. For example, list((1, 2, 3)) yields [1, 2, 3] and list('abc') yields ['a', 'b', 'c']. If the argument is a list, it makes a copy just like seq[:] would.

What's a negative index?¶

Python sequences are indexed with positive and negative numbers. For positive numbers, 0 is the first index, 1 is the second index, and so forth. For negative indices, -1 is the last index, -2 is the penultimate (next to last) index, and so forth. Think of seq[-n] as the same as seq[len(seq)-n].

Using negative indices can be very convenient. For example, S[:-1] is all of the string except for its last character, which is useful for removing the trailing newline from a string.

How do I iterate over a sequence in reverse order?¶

Use the reversed() built-in function:

for x in reversed(sequence):
    ...  # do something with x ...

This doesn't modify your original sequence; it returns a new iterator that walks the sequence in reverse order.

How do you remove duplicates from a list?¶

See the Python Cookbook for a long discussion of many ways to do this:

If you don't mind reordering the list, sort it and then scan from the end of the list, deleting duplicates as you go:

if mylist:
    mylist.sort()
    last = mylist[-1]
    for i in range(len(mylist)-2, -1, -1):
        if last == mylist[i]:
            del mylist[i]
        else:
            last = mylist[i]

If all elements of the list may be used as set keys (i.e.
they are all hashable) this is often faster\nmylist = list(set(mylist))\nThis converts the list into a set, thereby removing duplicates, and then back into a list.\nHow do you remove multiple items from a list?\u00b6\nAs with removing duplicates, explicitly iterating in reverse with a delete condition is one possibility. However, it is easier and faster to use slice replacement with an implicit or explicit forward iteration. Here are three variations:\nmylist[:] = filter(keep_function, mylist)\nmylist[:] = (x for x in mylist if keep_condition)\nmylist[:] = [x for x in mylist if keep_condition]\nThe list comprehension may be fastest.\nHow do you make an array in Python?\u00b6\nUse a list:\n[\"this\", 1, \"is\", \"an\", \"array\"]\nLists are equivalent to C or Pascal arrays in their time complexity; the primary difference is that a Python list can contain objects of many different types.\nThe array\nmodule also provides methods for creating arrays of fixed types\nwith compact representations, but they are slower to index than lists. Also\nnote that NumPy\nand other third party packages define array-like structures with\nvarious characteristics as well.\nTo get Lisp-style linked lists, you can emulate cons cells using tuples:\nlisp_list = (\"like\", (\"this\", (\"example\", None) ) )\nIf mutability is desired, you could use lists instead of tuples. Here the\nanalogue of a Lisp car is lisp_list[0]\nand the analogue of cdr is\nlisp_list[1]\n. 
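A brief sketch of walking such a cons-style list built from tuples:

```python
lisp_list = ("like", ("this", ("example", None)))

# Walk the chain of (car, cdr) pairs until the terminating None.
node = lisp_list
items = []
while node is not None:
    car, cdr = node          # car: the value, cdr: the rest of the list
    items.append(car)
    node = cdr
print(items)   # ['like', 'this', 'example']
```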
Only do this if you\u2019re sure you really need to, because it\u2019s\nusually a lot slower than using Python lists.\nHow do I create a multidimensional list?\u00b6\nYou probably tried to make a multidimensional array like this:\n>>> A = [[None] * 2] * 3\nThis looks correct if you print it:\n>>> A\n[[None, None], [None, None], [None, None]]\nBut when you assign a value, it shows up in multiple places:\n>>> A[0][0] = 5\n>>> A\n[[5, None], [5, None], [5, None]]\nThe reason is that replicating a list with *\ndoesn\u2019t create copies, it only\ncreates references to the existing objects. The *3\ncreates a list\ncontaining 3 references to the same list of length two. Changes to one row will\nshow in all rows, which is almost certainly not what you want.\nThe suggested approach is to create a list of the desired length first and then fill in each element with a newly created list:\nA = [None] * 3\nfor i in range(3):\nA[i] = [None] * 2\nThis generates a list containing 3 different lists of length two. 
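A quick check that the rows built by the loop above really are independent, unlike the *-replicated version:

```python
# Broken: three references to one and the same row
B = [[None] * 2] * 3
B[0][0] = 5
print(B)   # [[5, None], [5, None], [5, None]]

# Correct: each iteration creates a fresh row
A = [None] * 3
for i in range(3):
    A[i] = [None] * 2
A[0][0] = 5
print(A)   # [[5, None], [None, None], [None, None]]
```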
You can also use a list comprehension:

w, h = 2, 3
A = [[None] * w for i in range(h)]

Or, you can use an extension that provides a matrix datatype; NumPy is the best known.

How do I apply a method or function to a sequence of objects?¶

To call a method or function and accumulate the return values in a list, a list comprehension is an elegant solution:

result = [obj.method() for obj in mylist]

result = [function(obj) for obj in mylist]

To just run the method or function without saving the return values, a plain for loop will suffice:

for obj in mylist:
    obj.method()

for obj in mylist:
    function(obj)

Why does a_tuple[i] += ['item'] raise an exception when the addition works?¶

This is because of a combination of the fact that augmented assignment operators are assignment operators, and the difference between mutable and immutable objects in Python.

This discussion applies in general when augmented assignment operators are applied to elements of a tuple that point to mutable objects, but we'll use a list and += as our exemplar.

If you wrote:

>>> a_tuple = (1, 2)
>>> a_tuple[0] += 1
Traceback (most recent call last):
...
TypeError: 'tuple' object does not support item assignment

The reason for the exception should be immediately clear: 1 is added to the object a_tuple[0] points to (1), producing the result object, 2, but when we attempt to assign the result of the computation, 2, to element 0 of the tuple, we get an error because we can't change what an element of a tuple points to.

Under the covers, what this augmented assignment statement is doing is approximately this:

>>> result = a_tuple[0] + 1
>>> a_tuple[0] = result
Traceback (most recent call last):
...
TypeError: 'tuple' object does not support item assignment

It is the assignment part of the operation that produces the error, since a tuple is immutable.

When you write something like:

>>> a_tuple = (['foo'], 'bar')
>>> a_tuple[0] +=
['item']\nTraceback (most recent call last):\n...\nTypeError: 'tuple' object does not support item assignment\nThe exception is a bit more surprising, and even more surprising is the fact that even though there was an error, the append worked:\n>>> a_tuple[0]\n['foo', 'item']\nTo see why this happens, you need to know that (a) if an object implements an\n__iadd__()\nmagic method, it gets called when the +=\naugmented\nassignment\nis executed, and its return value is what gets used in the assignment statement;\nand (b) for lists, __iadd__()\nis equivalent to calling\nextend()\non the list and returning the list.\nThat\u2019s why we say that for lists, +=\nis a \u201cshorthand\u201d for list.extend()\n:\n>>> a_list = []\n>>> a_list += [1]\n>>> a_list\n[1]\nThis is equivalent to:\n>>> result = a_list.__iadd__([1])\n>>> a_list = result\nThe object pointed to by a_list has been mutated, and the pointer to the\nmutated object is assigned back to a_list\n. The end result of the\nassignment is a no-op, since it is a pointer to the same object that a_list\nwas previously pointing to, but the assignment still happens.\nThus, in our tuple example what is happening is equivalent to:\n>>> result = a_tuple[0].__iadd__(['item'])\n>>> a_tuple[0] = result\nTraceback (most recent call last):\n...\nTypeError: 'tuple' object does not support item assignment\nThe __iadd__()\nsucceeds, and thus the list is extended, but even though\nresult\npoints to the same object that a_tuple[0]\nalready points to,\nthat final assignment still results in an error, because tuples are immutable.\nI want to do a complicated sort: can you do a Schwartzian Transform in Python?\u00b6\nThe technique, attributed to Randal Schwartz of the Perl community, sorts the\nelements of a list by a metric which maps each element to its \u201csort value\u201d. 
In\nPython, use the key\nargument for the list.sort()\nmethod:\nIsorted = L[:]\nIsorted.sort(key=lambda s: int(s[10:15]))\nHow can I sort one list by values from another list?\u00b6\nMerge them into an iterator of tuples, sort the resulting list, and then pick out the element you want.\n>>> list1 = [\"what\", \"I'm\", \"sorting\", \"by\"]\n>>> list2 = [\"something\", \"else\", \"to\", \"sort\"]\n>>> pairs = zip(list1, list2)\n>>> pairs = sorted(pairs)\n>>> pairs\n[(\"I'm\", 'else'), ('by', 'sort'), ('sorting', 'to'), ('what', 'something')]\n>>> result = [x[1] for x in pairs]\n>>> result\n['else', 'sort', 'to', 'something']\nObjects\u00b6\nWhat is a class?\u00b6\nA class is the particular object type created by executing a class statement. Class objects are used as templates to create instance objects, which embody both the data (attributes) and code (methods) specific to a datatype.\nA class can be based on one or more other classes, called its base class(es). It\nthen inherits the attributes and methods of its base classes. This allows an\nobject model to be successively refined by inheritance. You might have a\ngeneric Mailbox\nclass that provides basic accessor methods for a mailbox,\nand subclasses such as MboxMailbox\n, MaildirMailbox\n, OutlookMailbox\nthat handle various specific mailbox formats.\nWhat is a method?\u00b6\nA method is a function on some object x\nthat you normally call as\nx.name(arguments...)\n. Methods are defined as functions inside the class\ndefinition:\nclass C:\ndef meth(self, arg):\nreturn arg * 2 + self.attribute\nWhat is self?\u00b6\nSelf is merely a conventional name for the first argument of a method. 
A method defined as meth(self, a, b, c) should be called as x.meth(a, b, c) for some instance x of the class in which the definition occurs; the called method will think it is called as meth(x, a, b, c).

See also Why must 'self' be used explicitly in method definitions and calls?.

How do I check if an object is an instance of a given class or of a subclass of it?¶

Use the built-in function isinstance(obj, cls). You can check if an object is an instance of any of a number of classes by providing a tuple instead of a single class, e.g. isinstance(obj, (class1, class2, ...)), and can also check whether an object is one of Python's built-in types, e.g. isinstance(obj, str) or isinstance(obj, (int, float, complex)).

Note that isinstance() also checks for virtual inheritance from an abstract base class. So, the test will return True for a registered class even if it hasn't directly or indirectly inherited from it. To test for "true inheritance", scan the MRO of the class:

from collections.abc import Mapping

class P:
    pass

class C(P):
    pass

Mapping.register(P)

>>> c = C()
>>> isinstance(c, C)        # direct
True
>>> isinstance(c, P)        # indirect
True
>>> isinstance(c, Mapping)  # virtual
True

# Actual inheritance chain
>>> type(c).__mro__
(<class 'C'>, <class 'P'>, <class 'object'>)

# Test for "true inheritance"
>>> Mapping in type(c).__mro__
False

Note that most programs do not use isinstance() on user-defined classes very often. If you are developing the classes yourself, a more proper object-oriented style is to define methods on the classes that encapsulate a particular behaviour, instead of checking the object's class and doing a different thing based on what class it is. For example, if you have a function that does something:

def search(obj):
    if isinstance(obj, Mailbox):
        ...  # code to search a mailbox
    elif isinstance(obj, Document):
        ...  # code to search a document
    elif ...

A better approach is to define a search() method on all the classes and just call it:

class Mailbox:
    def search(self):
        ...  # code to search a mailbox

class Document:
    def search(self):
        ...  # code to search a document

obj.search()

What is delegation?¶

Delegation is an object-oriented technique (also called a design pattern). Let's say you have an object x and want to change the behaviour of just one of its methods. You can create a new class that provides a new implementation of the method you're interested in changing and delegates all other methods to the corresponding method of x.

Python programmers can easily implement delegation. For example, the following class implements an object that behaves like a file but converts all written data to uppercase:

class UpperOut:
    def __init__(self, outfile):
        self._outfile = outfile

    def write(self, s):
        self._outfile.write(s.upper())

    def __getattr__(self, name):
        return getattr(self._outfile, name)

Here the UpperOut class redefines the write() method to convert the argument string to uppercase before calling the underlying self._outfile.write() method. All other methods are delegated to the underlying self._outfile object. The delegation is accomplished via the __getattr__() method; consult the language reference for more information about controlling attribute access.

Note that for more general cases delegation can get trickier. When attributes must be set as well as retrieved, the class must define a __setattr__() method too, and it must do so carefully.
The basic implementation of\n__setattr__()\nis roughly equivalent to the following:\nclass X:\n...\ndef __setattr__(self, name, value):\nself.__dict__[name] = value\n...\nMany __setattr__()\nimplementations call object.__setattr__()\nto set\nan attribute on self without causing infinite recursion:\nclass X:\ndef __setattr__(self, name, value):\n# Custom logic here...\nobject.__setattr__(self, name, value)\nAlternatively, it is possible to set attributes by inserting\nentries into self.__dict__\ndirectly.\nHow do I call a method defined in a base class from a derived class that extends it?\u00b6\nUse the built-in super()\nfunction:\nclass Derived(Base):\ndef meth(self):\nsuper().meth() # calls Base.meth\nIn the example, super()\nwill automatically determine the instance from\nwhich it was called (the self\nvalue), look up the method resolution\norder (MRO) with type(self).__mro__\n, and return the next in line after\nDerived\nin the MRO: Base\n.\nHow can I organize my code to make it easier to change the base class?\u00b6\nYou could assign the base class to an alias and derive from the alias. Then all you have to change is the value assigned to the alias. Incidentally, this trick is also handy if you want to decide dynamically (e.g. depending on availability of resources) which base class to use. Example:\nclass Base:\n...\nBaseAlias = Base\nclass Derived(BaseAlias):\n...\nHow do I create static class data and static class methods?\u00b6\nBoth static data and static methods (in the sense of C++ or Java) are supported in Python.\nFor static data, simply define a class attribute. 
To assign a new value to the attribute, you have to explicitly use the class name in the assignment:\nclass C:\ncount = 0 # number of times C.__init__ called\ndef __init__(self):\nC.count = C.count + 1\ndef getcount(self):\nreturn C.count # or return self.count\nc.count\nalso refers to C.count\nfor any c\nsuch that isinstance(c,\nC)\nholds, unless overridden by c\nitself or by some class on the base-class\nsearch path from c.__class__\nback to C\n.\nCaution: within a method of C, an assignment like self.count = 42\ncreates a\nnew and unrelated instance named \u201ccount\u201d in self\n\u2019s own dict. Rebinding of a\nclass-static data name must always specify the class whether inside a method or\nnot:\nC.count = 314\nStatic methods are possible:\nclass C:\n@staticmethod\ndef static(arg1, arg2, arg3):\n# No 'self' parameter!\n...\nHowever, a far more straightforward way to get the effect of a static method is via a simple module-level function:\ndef getcount():\nreturn C.count\nIf your code is structured so as to define one class (or tightly related class hierarchy) per module, this supplies the desired encapsulation.\nHow can I overload constructors (or methods) in Python?\u00b6\nThis answer actually applies to all methods, but the question usually comes up first in the context of constructors.\nIn C++ you\u2019d write\nclass C {\nC() { cout << \"No arguments\\n\"; }\nC(int i) { cout << \"Argument is \" << i << \"\\n\"; }\n}\nIn Python you have to write a single constructor that catches all cases using default arguments. 
For example:\nclass C:\ndef __init__(self, i=None):\nif i is None:\nprint(\"No arguments\")\nelse:\nprint(\"Argument is\", i)\nThis is not entirely equivalent, but close enough in practice.\nYou could also try a variable-length argument list, e.g.\ndef __init__(self, *args):\n...\nThe same approach works for all method definitions.\nI try to use __spam and I get an error about _SomeClassName__spam.\u00b6\nVariable names with double leading underscores are \u201cmangled\u201d to provide a simple\nbut effective way to define class private variables. Any identifier of the form\n__spam\n(at least two leading underscores, at most one trailing underscore)\nis textually replaced with _classname__spam\n, where classname\nis the\ncurrent class name with any leading underscores stripped.\nThe identifier can be used unchanged within the class, but to access it outside the class, the mangled name must be used:\nclass A:\ndef __one(self):\nreturn 1\ndef two(self):\nreturn 2 * self.__one()\nclass B(A):\ndef three(self):\nreturn 3 * self._A__one()\nfour = 4 * A()._A__one()\nIn particular, this does not guarantee privacy since an outside user can still deliberately access the private attribute; many Python programmers never bother to use private variable names at all.\nSee also\nThe private name mangling specifications for details and special cases.\nMy class defines __del__ but it is not called when I delete the object.\u00b6\nThere are several possible reasons for this.\nThe del\nstatement does not necessarily call __del__()\n\u2013 it simply\ndecrements the object\u2019s reference count, and if this reaches zero\n__del__()\nis called.\nIf your data structures contain circular links (e.g. a tree where each child has\na parent reference and each parent has a list of children) the reference counts\nwill never go back to zero. 
Once in a while Python runs an algorithm to detect such cycles, but the garbage collector might run some time after the last reference to your data structure vanishes, so your __del__() method may be called at an inconvenient and random time. This is inconvenient if you're trying to reproduce a problem. Worse, the order in which objects' __del__() methods are executed is arbitrary. You can run gc.collect() to force a collection, but there are pathological cases where objects will never be collected.

Despite the cycle collector, it's still a good idea to define an explicit close() method on objects to be called whenever you're done with them. The close() method can then remove attributes that refer to subobjects. Don't call __del__() directly – __del__() should call close() and close() should make sure that it can be called more than once for the same object.

Another way to avoid cyclical references is to use the weakref module, which allows you to point to objects without incrementing their reference count. Tree data structures, for instance, should use weak references for their parent and sibling references (if they need them!).

Finally, if your __del__() method raises an exception, a warning message is printed to sys.stderr.

How do I get a list of all instances of a given class?

Python does not keep track of all instances of a class (or of a built-in type). You can program the class's constructor to keep track of all instances by keeping a list of weak references to each instance.

Why does the result of id() appear to be not unique?

The id() builtin returns an integer that is guaranteed to be unique during the lifetime of the object. Since in CPython this is the object's memory address, it happens frequently that after an object is deleted from memory, the next freshly created object is allocated at the same position in memory.
This is illustrated by this example:

>>> id(1000)
13901272
>>> id(2000)
13901272

The two ids belong to different integer objects that are created before, and deleted immediately after, execution of the id() call. To be sure that objects whose id you want to examine are still alive, create another reference to the object:

>>> a = 1000; b = 2000
>>> id(a)
13901272
>>> id(b)
13891296

When can I rely on identity tests with the is operator?

The is operator tests for object identity. The test a is b is equivalent to id(a) == id(b).

The most important property of an identity test is that an object is always identical to itself; a is a always returns True. Identity tests are usually faster than equality tests. And unlike equality tests, identity tests are guaranteed to return a boolean True or False.

However, identity tests can only be substituted for equality tests when object identity is assured. Generally, there are three circumstances where identity is guaranteed:

- Assignments create new names but do not change object identity. After the assignment new = old, it is guaranteed that new is old.

- Putting an object in a container that stores object references does not change object identity. After the list assignment s[0] = x, it is guaranteed that s[0] is x.

- If an object is a singleton, it means that only one instance of that object can exist. After the assignments a = None and b = None, it is guaranteed that a is b because None is a singleton.

In most other circumstances, identity tests are inadvisable and equality tests are preferred.
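The three guarantees listed above (assignment, container storage, and the None singleton) can be checked directly with a few asserts, as a minimal sketch:

```python
# 1) Assignment creates a new name for the same object.
old = [1, 2, 3]
new = old
assert new is old

# 2) A container stores the object reference itself.
x = object()
s = [None]
s[0] = x
assert s[0] is x

# 3) None is a singleton, so every binding refers to the same object.
a = None
b = None
assert a is b

print("all three identity guarantees hold")
```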
In particular, identity tests should not be used to check constants such as int and str which aren't guaranteed to be singletons:

>>> a = 1000
>>> b = 500
>>> c = b + 500
>>> a is c
False

>>> a = 'Python'
>>> b = 'Py'
>>> c = b + 'thon'
>>> a is c
False

Likewise, new instances of mutable containers are never identical:

>>> a = []
>>> b = []
>>> a is b
False

In the standard library code, you will see several common patterns for correctly using identity tests:

- As recommended by PEP 8, an identity test is the preferred way to check for None. This reads like plain English in code and avoids confusion with other objects that may have boolean values that evaluate to false.

- Detecting optional arguments can be tricky when None is a valid input value. In those situations, you can create a singleton sentinel object guaranteed to be distinct from other objects. For example, here is how to implement a method that behaves like dict.pop():

  _sentinel = object()

  def pop(self, key, default=_sentinel):
      if key in self:
          value = self[key]
          del self[key]
          return value
      if default is _sentinel:
          raise KeyError(key)
      return default

- Container implementations sometimes need to augment equality tests with identity tests. This prevents the code from being confused by objects such as float('NaN') that are not equal to themselves. For example, here is the implementation of collections.abc.Sequence.__contains__():

  def __contains__(self, value):
      for v in self:
          if v is value or v == value:
              return True
      return False

How can a subclass control what data is stored in an immutable instance?

When subclassing an immutable type, override the __new__() method instead of the __init__() method.
The latter only runs after an instance is created, which is too late to alter data in an immutable instance.

All of these immutable classes have a different signature than their parent class:

from datetime import date

class FirstOfMonthDate(date):
    "Always choose the first day of the month"
    def __new__(cls, year, month, day):
        return super().__new__(cls, year, month, 1)

class NamedInt(int):
    "Allow text names for some numbers"
    xlat = {'zero': 0, 'one': 1, 'ten': 10}
    def __new__(cls, value):
        value = cls.xlat.get(value, value)
        return super().__new__(cls, value)

class TitleStr(str):
    "Convert str to name suitable for a URL path"
    def __new__(cls, s):
        s = s.lower().replace(' ', '-')
        s = ''.join([c for c in s if c.isalnum() or c == '-'])
        return super().__new__(cls, s)

The classes can be used like this:

>>> FirstOfMonthDate(2012, 2, 14)
FirstOfMonthDate(2012, 2, 1)
>>> NamedInt('ten')
10
>>> NamedInt(20)
20
>>> TitleStr('Blog: Why Python Rocks')
'blog-why-python-rocks'

How do I cache method calls?

The two principal tools for caching methods are functools.cached_property() and functools.lru_cache(). The former stores results at the instance level and the latter at the class level.

The cached_property approach only works with methods that do not take any arguments. It does not create a reference to the instance. The cached method result will be kept only as long as the instance is alive.

The advantage is that when an instance is no longer used, the cached method result will be released right away. The disadvantage is that if instances accumulate, so too will the accumulated method results. They can grow without bound.

The lru_cache approach works with methods that have hashable arguments. It creates a reference to the instance unless special efforts are made to pass in weak references.

The advantage of the least recently used algorithm is that the cache is bounded by the specified maxsize.
The disadvantage is that instances are kept alive until they age out of the cache or until the cache is cleared.

This example shows the various techniques:

class Weather:
    "Lookup weather information on a government website"

    def __init__(self, station_id):
        self._station_id = station_id
        # The _station_id is private and immutable

    def current_temperature(self):
        "Latest hourly observation"
        # Do not cache this because old results
        # can be out of date.

    @cached_property
    def location(self):
        "Return the longitude/latitude coordinates of the station"
        # Result only depends on the station_id

    @lru_cache(maxsize=20)
    def historic_rainfall(self, date, units='mm'):
        "Rainfall on a given date"
        # Depends on the station_id, date, and units.

The above example assumes that the station_id never changes. If the relevant instance attributes are mutable, the cached_property approach can't be made to work because it cannot detect changes to the attributes.

To make the lru_cache approach work when the station_id is mutable, the class needs to define the __eq__() and __hash__() methods so that the cache can detect relevant attribute updates:

class Weather:
    "Example with a mutable station identifier"

    def __init__(self, station_id):
        self.station_id = station_id

    def change_station(self, station_id):
        self.station_id = station_id

    def __eq__(self, other):
        return self.station_id == other.station_id

    def __hash__(self):
        return hash(self.station_id)

    @lru_cache(maxsize=20)
    def historic_rainfall(self, date, units='cm'):
        "Rainfall on a given date"
        # Depends on the station_id, date, and units.

Modules

How do I create a .pyc file?

When a module is imported for the first time (or when the source file has changed since the current compiled file was created) a .pyc file containing the compiled code should be created in a __pycache__ subdirectory of the directory containing the .py file.
The .pyc file will have a filename that starts with the same name as the .py file, and ends with .pyc, with a middle component that depends on the particular python binary that created it. (See PEP 3147 for details.)

One reason that a .pyc file may not be created is a permissions problem with the directory containing the source file, meaning that the __pycache__ subdirectory cannot be created. This can happen, for example, if you develop as one user but run as another, such as if you are testing with a web server.

Unless the PYTHONDONTWRITEBYTECODE environment variable is set, creation of a .pyc file is automatic if you're importing a module and Python has the ability (permissions, free space, etc.) to create a __pycache__ subdirectory and write the compiled module to that subdirectory.

Running Python on a top level script is not considered an import and no .pyc will be created. For example, if you have a top-level module foo.py that imports another module xyz.py, when you run foo (by typing python foo.py as a shell command), a .pyc will be created for xyz because xyz is imported, but no .pyc file will be created for foo since foo.py isn't being imported.

If you need to create a .pyc file for foo – that is, to create a .pyc file for a module that is not imported – you can, using the py_compile and compileall modules.

The py_compile module can manually compile any module. One way is to use the compile() function in that module interactively:

>>> import py_compile
>>> py_compile.compile('foo.py')

This will write the .pyc to a __pycache__ subdirectory in the same location as foo.py (or you can override that with the optional parameter cfile).

You can also automatically compile all files in a directory or directories using the compileall module.
You can do it from the shell prompt by running compileall.py and providing the path of a directory containing Python files to compile:

python -m compileall .

How do I find the current module name?

A module can find out its own module name by looking at the predefined global variable __name__. If this has the value '__main__', the program is running as a script. Many modules that are usually used by importing them also provide a command-line interface or a self-test, and only execute this code after checking __name__:

def main():
    print('Running test...')
    ...

if __name__ == '__main__':
    main()

How can I have modules that mutually import each other?

Suppose you have the following modules:

foo.py:

from bar import bar_var
foo_var = 1

bar.py:

from foo import foo_var
bar_var = 2

The problem is that the interpreter will perform the following steps:

1. main imports foo
2. Empty globals for foo are created
3. foo is compiled and starts executing
4. foo imports bar
5. Empty globals for bar are created
6. bar is compiled and starts executing
7. bar imports foo (which is a no-op since there already is a module named foo)
8. The import mechanism tries to read foo_var from foo globals, to set bar.foo_var = foo.foo_var

The last step fails, because Python isn't done with interpreting foo yet and the global symbol dictionary for foo is still empty.

The same thing happens when you use import foo, and then try to access foo.foo_var in global code.

There are (at least) three possible workarounds for this problem.

Guido van Rossum recommends avoiding all uses of from <module> import ..., and placing all code inside functions. Initializations of global variables and class variables should use constants or built-in functions only.
This means everything from an imported module is referenced as <module>.<name>.

Jim Roskind suggests performing steps in the following order in each module:

1. exports (globals, functions, and classes that don't need imported base classes)
2. import statements
3. active code (including globals that are initialized from imported values).

Van Rossum doesn't like this approach much because the imports appear in a strange place, but it does work.

Matthias Urlichs recommends restructuring your code so that the recursive import is not necessary in the first place.

These solutions are not mutually exclusive.

__import__('x.y.z') returns <module 'x'>; how do I get z?

Consider using the convenience function import_module() from importlib instead:

z = importlib.import_module('x.y.z')

When I edit an imported module and reimport it, the changes don't show up. Why does this happen?

For reasons of efficiency as well as consistency, Python only reads the module file on the first time a module is imported. If it didn't, in a program consisting of many modules where each one imports the same basic module, the basic module would be parsed and re-parsed many times. To force re-reading of a changed module, do this:

import importlib
import modname
importlib.reload(modname)

Warning: this technique is not 100% fool-proof. In particular, modules containing statements like

from modname import some_objects

will continue to work with the old version of the imported objects. If the module contains class definitions, existing class instances will not be updated to use the new class definition.
This can result in the following paradoxical behaviour:

>>> import importlib
>>> import cls
>>> c = cls.C()                  # Create an instance of C
>>> importlib.reload(cls)
<module 'cls' from 'cls.py'>
>>> isinstance(c, cls.C)         # isinstance is false?!?
False

The nature of the problem is made clear if you print out the "identity" of the class objects:

>>> hex(id(c.__class__))
'0x7352a0'
>>> hex(id(cls.C))
'0x4198d0'", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 16050}
{"url": "https://docs.python.org/3/faq/general.html", "title": null, "content": "General Python FAQ

General Information

What is Python?

Python is an interpreted, interactive, object-oriented programming language. It incorporates modules, exceptions, dynamic typing, very high level dynamic data types, and classes. It supports multiple programming paradigms beyond object-oriented programming, such as procedural and functional programming. Python combines remarkable power with very clear syntax. It has interfaces to many system calls and libraries, as well as to various window systems, and is extensible in C or C++. It is also usable as an extension language for applications that need a programmable interface. Finally, Python is portable: it runs on many Unix variants including Linux and macOS, and on Windows.

To find out more, start with The Python Tutorial. The Beginner's Guide to Python links to other introductory tutorials and resources for learning Python.

What is the Python Software Foundation?

The Python Software Foundation is an independent non-profit organization that holds the copyright on Python versions 2.1 and newer. The PSF's mission is to advance open source technology related to the Python programming language and to publicize the use of Python. The PSF's home page is at https://www.python.org/psf/.

Donations to the PSF are tax-exempt in the US. If you use Python and find it helpful, please contribute via the PSF donation page.

Are there copyright restrictions on the use of Python?

You can do anything you want with the source, as long as you leave the copyrights in and display those copyrights in any documentation about Python that you produce. If you honor the copyright rules, it's OK to use Python for commercial use, to sell copies of Python in source or binary form (modified or unmodified), or to sell products that incorporate Python in some form.
We would still like to know about all commercial use of Python, of course.

See the license page to find further explanations and the full text of the PSF License.

The Python logo is trademarked, and in certain cases permission is required to use it. Consult the Trademark Usage Policy for more information.

Why was Python created in the first place?

Here's a very brief summary of what started it all, written by Guido van Rossum:

I had extensive experience with implementing an interpreted language in the ABC group at CWI, and from working with this group I had learned a lot about language design. This is the origin of many Python features, including the use of indentation for statement grouping and the inclusion of very-high-level data types (although the details are all different in Python).

I had a number of gripes about the ABC language, but also liked many of its features. It was impossible to extend the ABC language (or its implementation) to remedy my complaints – in fact its lack of extensibility was one of its biggest problems. I had some experience with using Modula-2+ and talked with the designers of Modula-3 and read the Modula-3 report. Modula-3 is the origin of the syntax and semantics used for exceptions, and some other Python features.

I was working in the Amoeba distributed operating system group at CWI. We needed a better way to do system administration than by writing either C programs or Bourne shell scripts, since Amoeba had its own system call interface which wasn't easily accessible from the Bourne shell. My experience with error handling in Amoeba made me acutely aware of the importance of exceptions as a programming language feature.

It occurred to me that a scripting language with a syntax like ABC but with access to the Amoeba system calls would fill the need.
I realized that it would be foolish to write an Amoeba-specific language, so I decided that I needed a language that was generally extensible.

During the 1989 Christmas holidays, I had a lot of time on my hands, so I decided to give it a try. During the next year, while still mostly working on it in my own time, Python was used in the Amoeba project with increasing success, and the feedback from colleagues made me add many early improvements.

In February 1991, after just over a year of development, I decided to post to USENET. The rest is in the Misc/HISTORY file.

What is Python good for?

Python is a high-level general-purpose programming language that can be applied to many different classes of problems.

The language comes with a large standard library that covers areas such as string processing (regular expressions, Unicode, calculating differences between files), internet protocols (HTTP, FTP, SMTP, XML-RPC, POP, IMAP), software engineering (unit testing, logging, profiling, parsing Python code), and operating system interfaces (system calls, filesystems, TCP/IP sockets). Look at the table of contents for The Python Standard Library to get an idea of what's available. A wide variety of third-party extensions are also available. Consult the Python Package Index to find packages of interest to you.

How does the Python version numbering scheme work?

Python versions are numbered "A.B.C" or "A.B":

- A is the major version number – it is only incremented for really major changes in the language.
- B is the minor version number – it is incremented for less earth-shattering changes.
- C is the micro version number – it is incremented for each bugfix release.

Not all releases are bugfix releases. In the run-up to a new feature release, a series of development releases are made, denoted as alpha, beta, or release candidate.
Alphas are early releases in which interfaces aren't yet finalized; it's not unexpected to see an interface change between two alpha releases. Betas are more stable, preserving existing interfaces but possibly adding new modules, and release candidates are frozen, making no changes except as needed to fix critical bugs.

Alpha, beta and release candidate versions have an additional suffix:

- The suffix for an alpha version is "aN" for some small number N.
- The suffix for a beta version is "bN" for some small number N.
- The suffix for a release candidate version is "rcN" for some small number N.

In other words, all versions labeled 2.0aN precede the versions labeled 2.0bN, which precede versions labeled 2.0rcN, and those precede 2.0.

You may also find version numbers with a "+" suffix, e.g. "2.2+". These are unreleased versions, built directly from the CPython development repository. In practice, after a final minor release is made, the version is incremented to the next minor version, which becomes the "a0" version, e.g. "2.4a0".

See the Developer's Guide for more information about the development cycle, and PEP 387 to learn more about Python's backward compatibility policy. See also the documentation for sys.version, sys.hexversion, and sys.version_info.

How do I obtain a copy of the Python source?

The latest Python source distribution is always available from python.org, at https://www.python.org/downloads/. The latest development sources can be obtained at https://github.com/python/cpython/.

The source distribution is a gzipped tar file containing the complete C source, Sphinx-formatted documentation, Python library modules, example programs, and several useful pieces of freely distributable software.
The source will compile and run out of the box on most UNIX platforms.

Consult the Getting Started section of the Python Developer's Guide for more information on getting the source code and compiling it.

How do I get documentation on Python?

The standard documentation for the current stable version of Python is available at https://docs.python.org/3/. EPUB, plain text, and downloadable HTML versions are also available at https://docs.python.org/3/download.html.

The documentation is written in reStructuredText and processed by the Sphinx documentation tool. The reStructuredText source for the documentation is part of the Python source distribution.

I've never programmed before. Is there a Python tutorial?

There are numerous tutorials and books available. The standard documentation includes The Python Tutorial.

Consult the Beginner's Guide to find information for beginning Python programmers, including lists of tutorials.

Is there a newsgroup or mailing list devoted to Python?

There is a newsgroup, comp.lang.python, and a mailing list, python-list. The newsgroup and mailing list are gatewayed into each other – if you can read news it's unnecessary to subscribe to the mailing list. comp.lang.python is high-traffic, receiving hundreds of postings every day, and Usenet readers are often more able to cope with this volume.

Announcements of new software releases and events can be found in comp.lang.python.announce, a low-traffic moderated list that receives about five postings per day. It's available as the python-announce mailing list.

More info about other mailing lists and newsgroups can be found at https://www.python.org/community/lists/.

How do I get a beta test version of Python?

Alpha and beta releases are available from https://www.python.org/downloads/.
All releases are announced on the comp.lang.python and comp.lang.python.announce newsgroups and on the Python home page at https://www.python.org/; an RSS feed of news is available.

You can also access the development version of Python through Git. See The Python Developer's Guide for details.

How do I submit bug reports and patches for Python?

To report a bug or submit a patch, use the issue tracker at https://github.com/python/cpython/issues.

For more information on how Python is developed, consult the Python Developer's Guide.

Are there any published articles about Python that I can reference?

It's probably best to cite your favorite book about Python.

The very first article about Python was written in 1991 and is now quite outdated.

Guido van Rossum and Jelke de Boer, "Interactively Testing Remote Servers Using the Python Programming Language", CWI Quarterly, Volume 4, Issue 4 (December 1991), Amsterdam, pp 283–303.

Are there any books on Python?

Yes, there are many, and more are being published. See the python.org wiki at https://wiki.python.org/moin/PythonBooks for a list.

You can also search online bookstores for "Python" and filter out the Monty Python references; or perhaps search for "Python" and "language".

Where in the world is www.python.org located?

The Python project's infrastructure is located all over the world and is managed by the Python Infrastructure Team.

Why is it called Python?

When he began implementing Python, Guido van Rossum was also reading the published scripts from "Monty Python's Flying Circus", a BBC comedy series from the 1970s. Van Rossum thought he needed a name that was short, unique, and slightly mysterious, so he decided to call the language Python.

Do I have to like "Monty Python's Flying Circus"?

No, but it helps.
:)\nPython in the real world\u00b6\nHow stable is Python?\u00b6\nVery stable. New, stable releases have been coming out roughly every 6 to 18 months since 1991, and this seems likely to continue. As of version 3.9, Python will have a new feature release every 12 months (PEP 602).\nThe developers issue bugfix releases of older versions, so the stability of existing releases gradually improves. Bugfix releases, indicated by a third component of the version number (e.g. 3.5.3, 3.6.2), are managed for stability; only fixes for known problems are included in a bugfix release, and it\u2019s guaranteed that interfaces will remain the same throughout a series of bugfix releases.\nThe latest stable releases can always be found on the Python download page. Python 3.x is the recommended version and supported by most widely used libraries. Python 2.x is not maintained anymore.\nHow many people are using Python?\u00b6\nThere are probably millions of users, though it\u2019s difficult to obtain an exact count.\nPython is available for free download, so there are no sales figures, and it\u2019s available from many different sites and packaged with many Linux distributions, so download statistics don\u2019t tell the whole story either.\nThe comp.lang.python newsgroup is very active, but not all Python users post to the group or even read it.\nHave any significant projects been done in Python?\u00b6\nSee https://www.python.org/about/success for a list of projects that use Python. Consulting the proceedings for past Python conferences will reveal contributions from many different companies and organizations.\nHigh-profile Python projects include the Mailman mailing list manager and the Zope application server. Several Linux distributions, most notably Red Hat, have written part or all of their installer and system administration software in Python. 
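The bugfix numbering described above (a third version component such as 3.5.3 or 3.6.2) can be handled mechanically; a minimal sketch, where the helper name is invented for illustration:

```python
def split_version(version: str) -> tuple[int, int, int]:
    """Split a 'major.minor.micro' string such as '3.6.2' into integers.

    The third component (micro) identifies the bugfix release within a
    stable series; interfaces stay the same across a bugfix series.
    """
    major, minor, micro = (int(part) for part in version.split("."))
    return major, minor, micro
```

At runtime, the same information about the running interpreter is available as sys.version_info.major, .minor, and .micro.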
Companies that use Python internally include Google, Yahoo, and Lucasfilm Ltd.\nWhat new developments are expected for Python in the future?\u00b6\nSee https://peps.python.org/ for the Python Enhancement Proposals (PEPs). PEPs are design documents describing a suggested new feature for Python, providing a concise technical specification and a rationale. Look for a PEP titled \u201cPython X.Y Release Schedule\u201d, where X.Y is a version that hasn\u2019t been publicly released yet.\nNew development is discussed on the python-dev mailing list.\nIs it reasonable to propose incompatible changes to Python?\u00b6\nIn general, no. There are already millions of lines of Python code around the world, so any change in the language that invalidates more than a very small fraction of existing programs has to be frowned upon. Even if you can provide a conversion program, there\u2019s still the problem of updating all documentation; many books have been written about Python, and we don\u2019t want to invalidate them all at a single stroke.\nProviding a gradual upgrade path is necessary if a feature has to be changed. PEP 5 describes the procedure followed for introducing backward-incompatible changes while minimizing disruption for users.\nIs Python a good language for beginning programmers?\u00b6\nYes.\nIt is still common to start students with a procedural and statically typed language such as Pascal, C, or a subset of C++ or Java. Students may be better served by learning Python as their first language. Python has a very simple and consistent syntax and a large standard library and, most importantly, using Python in a beginning programming course lets students concentrate on important programming skills such as problem decomposition and data type design. With Python, students can be quickly introduced to basic concepts such as loops and procedures. 
They can probably even work with user-defined objects in their very first course.\nFor a student who has never programmed before, using a statically typed language seems unnatural. It presents additional complexity that the student must master and slows the pace of the course. The students are trying to learn to think like a computer, decompose problems, design consistent interfaces, and encapsulate data. While learning to use a statically typed language is important in the long term, it is not necessarily the best topic to address in the students\u2019 first programming course.\nMany other aspects of Python make it a good first language. Like Java, Python has a large standard library so that students can be assigned programming projects very early in the course that do something. Assignments aren\u2019t restricted to the standard four-function calculator and check balancing programs. By using the standard library, students can gain the satisfaction of working on realistic applications as they learn the fundamentals of programming. Using the standard library also teaches students about code reuse. Third-party modules such as PyGame are also helpful in extending the students\u2019 reach.\nPython\u2019s interactive interpreter enables students to test language features while they\u2019re programming. They can keep a window with the interpreter running while they enter their program\u2019s source in another window. 
If they can't remember the methods for a list, they can do something like this:
>>> L = []
>>> dir(L)
['__add__', '__class__', '__contains__', '__delattr__', '__delitem__',
'__dir__', '__doc__', '__eq__', '__format__', '__ge__',
'__getattribute__', '__getitem__', '__gt__', '__hash__', '__iadd__',
'__imul__', '__init__', '__iter__', '__le__', '__len__', '__lt__',
'__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__',
'__repr__', '__reversed__', '__rmul__', '__setattr__', '__setitem__',
'__sizeof__', '__str__', '__subclasshook__', 'append', 'clear',
'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove',
'reverse', 'sort']
>>> [d for d in dir(L) if '__' not in d]
['append', 'clear', 'copy', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']
>>> help(L.append)
Help on built-in function append:
append(...)
L.append(object) -> None -- append object to end
>>> L.append(1)
>>> L
[1]
With the interpreter, documentation is never far from the student as they are programming.
There are also good IDEs for Python. IDLE is a cross-platform IDE for Python that is written in Python using Tkinter. Emacs users will be happy to know that there is a very good Python mode for Emacs. All of these programming environments provide syntax highlighting, auto-indenting, and access to the interactive interpreter while coding. Consult the Python wiki for a full list of Python editing environments.
If you want to discuss Python's use in education, you may be interested in joining the edu-sig mailing list.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4345}
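The dir()-filtering idiom shown in the interpreter session above can be captured in a small helper (the function name is ours, not part of the standard library):

```python
def public_methods(obj) -> list[str]:
    """Return obj's non-underscore attribute names, as dir() reports them."""
    return [name for name in dir(obj) if not name.startswith("_")]
```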
{"url": "https://docs.python.org/3/howto/remote_debugging.html", "title": "Remote debugging attachment protocol", "content": "Remote debugging attachment protocol\u00b6\nThis protocol enables external tools to attach to a running CPython process and execute Python code remotely.\nMost platforms require elevated privileges to attach to another Python process.\nDisabling remote debugging\u00b6\nTo disable remote debugging support, use any of the following:\nSet the\nPYTHON_DISABLE_REMOTE_DEBUG\nenvironment variable to1\nbefore starting the interpreter.Use the\n-X disable_remote_debug\ncommand-line option.Compile Python with the\n--without-remote-debug\nbuild flag.\nPermission requirements\u00b6\nAttaching to a running Python process for remote debugging requires elevated privileges on most platforms. The specific requirements and troubleshooting steps depend on your operating system:\nLinux\nThe tracer process must have the CAP_SYS_PTRACE\ncapability or equivalent\nprivileges. You can only trace processes you own and can signal. Tracing may\nfail if the process is already being traced, or if it is running with\nset-user-ID or set-group-ID. Security modules like Yama may further restrict\ntracing.\nTo temporarily relax ptrace restrictions (until reboot), run:\necho 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope\nNote\nDisabling ptrace_scope\nreduces system hardening and should only be done\nin trusted environments.\nIf running inside a container, use --cap-add=SYS_PTRACE\nor\n--privileged\n, and run as root if needed.\nTry re-running the command with elevated privileges:\nsudo -E !!\nmacOS\nTo attach to another process, you typically need to run your debugging tool\nwith elevated privileges. 
This can be done by using sudo\nor running as\nroot.\nEven when attaching to processes you own, macOS may block debugging unless the debugger is run with root privileges due to system security restrictions.\nWindows\nTo attach to another process, you usually need to run your debugging tool with administrative privileges. Start the command prompt or terminal as Administrator.\nSome processes may still be inaccessible even with Administrator rights,\nunless you have the SeDebugPrivilege\nprivilege enabled.\nTo resolve file or folder access issues, adjust the security permissions:\nRight-click the file or folder and select Properties.\nGo to the Security tab to view users and groups with access.\nClick Edit to modify permissions.\nSelect your user account.\nIn Permissions, check Read or Full control as needed.\nClick Apply, then OK to confirm.\nNote\nEnsure you\u2019ve satisfied all Permission requirements before proceeding.\nThis section describes the low-level protocol that enables external tools to inject and execute a Python script within a running CPython process.\nThis mechanism forms the basis of the sys.remote_exec()\nfunction, which\ninstructs a remote Python process to execute a .py\nfile. However, this\nsection does not document the usage of that function. Instead, it provides a\ndetailed explanation of the underlying protocol, which takes as input the\npid\nof a target Python process and the path to a Python source file to be\nexecuted. This information supports independent reimplementation of the\nprotocol, regardless of programming language.\nWarning\nThe execution of the injected script depends on the interpreter reaching a safe evaluation point. As a result, execution may be delayed depending on the runtime state of the target process.\nOnce injected, the script is executed by the interpreter within the target process the next time a safe evaluation point is reached. 
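For orientation, the high-level entry point built on this protocol looks roughly as follows. This is a sketch, not something guaranteed to run everywhere: sys.remote_exec() exists only from Python 3.14 on, attaching usually needs elevated privileges, and the payload here is a placeholder we made up for the demo.

```python
import subprocess
import sys
import tempfile
import textwrap


def build_payload() -> str:
    """Return the source of a script to inject; what it does is up to the tool."""
    return textwrap.dedent("""\
        import sys
        print("injected", file=sys.stderr)
    """)


def inject_into_child() -> None:
    # Spawn a throwaway Python process to act as the debug target.
    child = subprocess.Popen(
        [sys.executable, "-c", "import time; time.sleep(10)"]
    )
    try:
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(build_payload())
        if hasattr(sys, "remote_exec"):  # Python 3.14+ only
            sys.remote_exec(child.pid, f.name)
    finally:
        child.kill()
```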
This approach enables remote execution capabilities without modifying the behavior or structure of the running Python application.\nSubsequent sections provide a step-by-step description of the protocol, including techniques for locating interpreter structures in memory, safely accessing internal fields, and triggering code execution. Platform-specific variations are noted where applicable, and example implementations are included to clarify each operation.\nLocating the PyRuntime structure\u00b6\nCPython places the PyRuntime\nstructure in a dedicated binary section to\nhelp external tools find it at runtime. The name and format of this section\nvary by platform. For example, .PyRuntime\nis used on ELF systems, and\n__DATA,__PyRuntime\nis used on macOS. Tools can find the offset of this\nstructure by examining the binary on disk.\nThe PyRuntime\nstructure contains CPython\u2019s global interpreter state and\nprovides access to other internal data, including the list of interpreters,\nthread states, and debugger support fields.\nTo work with a remote Python process, a debugger must first find the memory\naddress of the PyRuntime\nstructure in the target process. 
This address can't be hardcoded or calculated from a symbol name, because it depends on where the operating system loaded the binary.
The method for finding PyRuntime depends on the platform, but the steps are the same in general:
1. Find the base address where the Python binary or shared library was loaded in the target process.
2. Use the on-disk binary to locate the offset of the .PyRuntime section.
3. Add the section offset to the base address to compute the address in memory.
The sections below explain how to do this on each supported platform and include example code.
Linux (ELF)
To find the PyRuntime structure on Linux:
1. Read the process's memory map (for example, /proc/<pid>/maps) to find the address where the Python executable or libpython was loaded.
2. Parse the ELF section headers in the binary to get the offset of the .PyRuntime section.
3. Add that offset to the base address from step 1 to get the memory address of PyRuntime.
The following is an example implementation:

    def find_py_runtime_linux(pid: int) -> int:
        # Step 1: Try to find the Python executable in memory
        binary_path, base_address = find_mapped_binary(
            pid, name_contains="python"
        )
        # Step 2: Fallback to shared library if executable is not found
        if binary_path is None:
            binary_path, base_address = find_mapped_binary(
                pid, name_contains="libpython"
            )
        # Step 3: Parse ELF headers to get .PyRuntime section offset
        section_offset = parse_elf_section_offset(
            binary_path, ".PyRuntime"
        )
        # Step 4: Compute PyRuntime address in memory
        return base_address + section_offset

On Linux systems, there are two main approaches to read memory from another process. The first is through the /proc filesystem, specifically by reading from /proc/[pid]/mem, which provides direct access to the process's memory. This requires appropriate permissions: either being the same user as the target process or having root access.
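The find_mapped_binary() helper used in the example is hypothetical; on Linux its core can be implemented by parsing the text of /proc/<pid>/maps. A sketch that works on any maps-formatted text, assuming the usual six-column layout ("start-end perms offset dev inode path"):

```python
def find_mapped_binary(maps_text: str, name_contains: str):
    """Return (path, base_address) of the first mapping whose backing file
    path contains name_contains, or (None, None) if nothing matches.

    maps_text is the content of /proc/<pid>/maps.
    """
    for line in maps_text.splitlines():
        parts = line.split(maxsplit=5)
        # Only mappings backed by a file have a sixth (path) column.
        if len(parts) == 6 and name_contains in parts[5]:
            base_address = int(parts[0].split("-")[0], 16)
            return parts[5], base_address
    return None, None
```

In a real tool, maps_text would come from reading /proc/<pid>/maps for the target pid.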
The second approach is using the\nprocess_vm_readv()\nsystem call which provides a more efficient way to copy\nmemory between processes. While ptrace\u2019s PTRACE_PEEKTEXT\noperation can also be\nused to read memory, it is significantly slower as it only reads one word at a\ntime and requires multiple context switches between the tracer and tracee\nprocesses.\nFor parsing ELF sections, the process involves reading and interpreting the ELF file format structures from the binary file on disk. The ELF header contains a pointer to the section header table. Each section header contains metadata about a section including its name (stored in a separate string table), offset, and size. To find a specific section like .PyRuntime, you need to walk through these headers and match the section name. The section header then provides the offset where that section exists in the file, which can be used to calculate its runtime address when the binary is loaded into memory.\nYou can read more about the ELF file format in the ELF specification.\nmacOS (Mach-O)\nTo find the PyRuntime\nstructure on macOS:\nCall\ntask_for_pid()\nto get themach_port_t\ntask port for the target process. This handle is needed to read memory using APIs likemach_vm_read_overwrite\nandmach_vm_region\n.Scan the memory regions to find the one containing the Python executable or\nlibpython\n.Load the binary file from disk and parse the Mach-O headers to find the section named\nPyRuntime\nin the__DATA\nsegment. 
On macOS, symbol names are automatically prefixed with an underscore, so thePyRuntime\nsymbol appears as_PyRuntime\nin the symbol table, but the section name is not affected.\nThe following is an example implementation:\ndef find_py_runtime_macos(pid: int) -> int:\n# Step 1: Get access to the process's memory\nhandle = get_memory_access_handle(pid)\n# Step 2: Try to find the Python executable in memory\nbinary_path, base_address = find_mapped_binary(\nhandle, name_contains=\"python\"\n)\n# Step 3: Fallback to libpython if the executable is not found\nif binary_path is None:\nbinary_path, base_address = find_mapped_binary(\nhandle, name_contains=\"libpython\"\n)\n# Step 4: Parse Mach-O headers to get __DATA,__PyRuntime section offset\nsection_offset = parse_macho_section_offset(\nbinary_path, \"__DATA\", \"__PyRuntime\"\n)\n# Step 5: Compute the PyRuntime address in memory\nreturn base_address + section_offset\nOn macOS, accessing another process\u2019s memory requires using Mach-O specific APIs\nand file formats. The first step is obtaining a task_port\nhandle via\ntask_for_pid()\n, which provides access to the target process\u2019s memory space.\nThis handle enables memory operations through APIs like\nmach_vm_read_overwrite()\n.\nThe process memory can be examined using mach_vm_region()\nto scan through the\nvirtual memory space, while proc_regionfilename()\nhelps identify which binary\nfiles are loaded at each memory region. When the Python binary or library is\nfound, its Mach-O headers need to be parsed to locate the PyRuntime\nstructure.\nThe Mach-O format organizes code and data into segments and sections. The\nPyRuntime\nstructure lives in a section named __PyRuntime\nwithin the\n__DATA\nsegment. The actual runtime address calculation involves finding the\n__TEXT\nsegment which serves as the binary\u2019s base address, then locating the\n__DATA\nsegment containing our target section. 
The final address is computed by combining the base address with the appropriate section offsets from the Mach-O headers.
Note that accessing another process's memory on macOS typically requires elevated privileges: either root access or special security entitlements granted to the debugging process.
Windows (PE)
To find the PyRuntime structure on Windows:
1. Use the ToolHelp API to enumerate all modules loaded in the target process. This is done using functions such as CreateToolhelp32Snapshot, Module32First, and Module32Next.
2. Identify the module corresponding to python.exe or pythonXY.dll, where X and Y are the major and minor version numbers of the Python version, and record its base address.
3. Locate the PyRuntim section. Due to the PE format's 8-character limit on section names (defined as IMAGE_SIZEOF_SHORT_NAME), the original name PyRuntime is truncated. This section contains the PyRuntime structure.
4. Retrieve the section's relative virtual address (RVA) and add it to the base address of the module.
The following is an example implementation:

    def find_py_runtime_windows(pid: int) -> int:
        # Step 1: Try to find the Python executable in memory
        binary_path, base_address = find_loaded_module(
            pid, name_contains="python"
        )
        # Step 2: Fallback to shared pythonXY.dll if the executable is not
        # found
        if binary_path is None:
            binary_path, base_address = find_loaded_module(
                pid, name_contains="python3"
            )
        # Step 3: Parse PE section headers to get the RVA of the PyRuntime
        # section. The section name appears as "PyRuntim" due to the
        # 8-character limit defined by the PE format (IMAGE_SIZEOF_SHORT_NAME).
        section_rva = parse_pe_section_offset(binary_path, "PyRuntim")
        # Step 4: Compute PyRuntime address in memory
        return base_address + section_rva

On Windows, accessing another process's memory requires using Windows API functions like CreateToolhelp32Snapshot() and Module32First()/Module32Next() to enumerate loaded modules. The OpenProcess() function provides a handle to access the target process's memory space, enabling memory operations through ReadProcessMemory().
The process memory can be examined by enumerating loaded modules to find the Python binary or DLL. When found, its PE headers need to be parsed to locate the PyRuntime structure.
The PE format organizes code and data into sections. The PyRuntime structure lives in a section named "PyRuntim" (truncated from "PyRuntime" due to PE's 8-character name limit). The actual runtime address calculation involves finding the module's base address from the module entry, then locating our target section in the PE headers. The final address is computed by combining the base address with the section's virtual address from the PE section headers.
Note that accessing another process's memory on Windows typically requires appropriate privileges: either administrative access or the SeDebugPrivilege privilege granted to the debugging process.
Reading _Py_DebugOffsets¶
Once the address of the PyRuntime structure has been determined, the next step is to read the _Py_DebugOffsets structure located at the beginning of the PyRuntime block.
This structure provides version-specific field offsets that are needed to safely read interpreter and thread state memory.
These offsets vary between CPython versions and must be checked before use to ensure they are compatible.\nTo read and check the debug offsets, follow these steps:\nRead memory from the target process starting at the\nPyRuntime\naddress, covering the same number of bytes as the_Py_DebugOffsets\nstructure. This structure is located at the very start of thePyRuntime\nmemory block. Its layout is defined in CPython\u2019s internal headers and stays the same within a given minor version, but may change in major versions.Check that the structure contains valid data:\nThe\ncookie\nfield must match the expected debug marker.The\nversion\nfield must match the version of the Python interpreter used by the debugger.If either the debugger or the target process is using a pre-release version (for example, an alpha, beta, or release candidate), the versions must match exactly.\nThe\nfree_threaded\nfield must have the same value in both the debugger and the target process.\nIf the structure is valid, the offsets it contains can be used to locate fields in memory. 
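The deserialization step can be sketched with the struct module. The field layout and cookie value below are stand-ins invented for this demo; the real layout is defined by CPython's internal headers and must be matched exactly for the interpreter version being targeted:

```python
import struct
from typing import NamedTuple

# Invented demo layout: 8-byte cookie, uint64 version, uint8 flag (little-endian).
DEBUG_OFFSETS_HEADER = struct.Struct("<8sQB")


class DebugOffsets(NamedTuple):
    cookie: bytes
    version: int
    free_threaded: bool


def parse_debug_offsets(data: bytes) -> DebugOffsets:
    """Deserialize the start of the PyRuntime block (demo layout only)."""
    cookie, version, free_threaded = DEBUG_OFFSETS_HEADER.unpack_from(data, 0)
    return DebugOffsets(cookie, version, bool(free_threaded))
```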
If any check fails, the debugger should stop the operation to avoid reading memory in the wrong format.
The following is an example implementation that reads and checks _Py_DebugOffsets:

    def read_debug_offsets(pid: int, py_runtime_addr: int) -> DebugOffsets:
        # Step 1: Read memory from the target process at the PyRuntime address
        data = read_process_memory(
            pid, address=py_runtime_addr, size=DEBUG_OFFSETS_SIZE
        )
        # Step 2: Deserialize the raw bytes into a _Py_DebugOffsets structure
        debug_offsets = parse_debug_offsets(data)
        # Step 3: Validate the contents of the structure
        if debug_offsets.cookie != EXPECTED_COOKIE:
            raise RuntimeError("Invalid or missing debug cookie")
        if debug_offsets.version != LOCAL_PYTHON_VERSION:
            raise RuntimeError(
                "Mismatch between caller and target Python versions"
            )
        if debug_offsets.free_threaded != LOCAL_FREE_THREADED:
            raise RuntimeError("Mismatch in free-threaded configuration")
        return debug_offsets

Warning
Process suspension recommended
To avoid race conditions and ensure memory consistency, it is strongly recommended that the target process be suspended before performing any operations that read or write internal interpreter state. The Python runtime may concurrently mutate interpreter data structures, such as creating or destroying threads, during normal execution. This can result in invalid memory reads or writes.
A debugger may suspend execution by attaching to the process with ptrace or by sending a SIGSTOP signal. Execution should only be resumed after debugger-side memory operations are complete.
Note
Some tools, such as profilers or sampling-based debuggers, may operate on a running process without suspension. In such cases, tools must be explicitly designed to handle partially updated or inconsistent memory.
For most debugger implementations, suspending the process remains the safest and most robust approach.\nLocating the interpreter and thread state\u00b6\nBefore code can be injected and executed in a remote Python process, the\ndebugger must choose a thread in which to schedule execution. This is necessary\nbecause the control fields used to perform remote code injection are located in\nthe _PyRemoteDebuggerSupport\nstructure, which is embedded in a\nPyThreadState\nobject. These fields are modified by the debugger to request\nexecution of injected scripts.\nThe PyThreadState\nstructure represents a thread running inside a Python\ninterpreter. It maintains the thread\u2019s evaluation context and contains the\nfields required for debugger coordination. Locating a valid PyThreadState\nis therefore a key prerequisite for triggering execution remotely.\nA thread is typically selected based on its role or ID. In most cases, the main thread is used, but some tools may target a specific thread by its native thread ID. Once the target thread is chosen, the debugger must locate both the interpreter and the associated thread state structures in memory.\nThe relevant internal structures are defined as follows:\nPyInterpreterState\nrepresents an isolated Python interpreter instance. Each interpreter maintains its own set of imported modules, built-in state, and thread state list. Although most Python applications use a single interpreter, CPython supports multiple interpreters in the same process.PyThreadState\nrepresents a thread running within an interpreter. It contains execution state and the control fields used by the debugger.\nTo locate a thread:\nUse the offset\nruntime_state.interpreters_head\nto obtain the address of the first interpreter in thePyRuntime\nstructure. This is the entry point to the linked list of active interpreters.Use the offset\ninterpreter_state.threads_main\nto access the main thread state associated with the selected interpreter. 
This is typically the most reliable thread to target.Optionally, use the offset\ninterpreter_state.threads_head\nto iterate through the linked list of all thread states. EachPyThreadState\nstructure contains anative_thread_id\nfield, which may be compared to a target thread ID to find a specific thread.Once a valid\nPyThreadState\nhas been found, its address can be used in later steps of the protocol, such as writing debugger control fields and scheduling execution.\nThe following is an example implementation that locates the main thread state:\ndef find_main_thread_state(\npid: int, py_runtime_addr: int, debug_offsets: DebugOffsets,\n) -> int:\n# Step 1: Read interpreters_head from PyRuntime\ninterp_head_ptr = (\npy_runtime_addr + debug_offsets.runtime_state.interpreters_head\n)\ninterp_addr = read_pointer(pid, interp_head_ptr)\nif interp_addr == 0:\nraise RuntimeError(\"No interpreter found in the target process\")\n# Step 2: Read the threads_main pointer from the interpreter\nthreads_main_ptr = (\ninterp_addr + debug_offsets.interpreter_state.threads_main\n)\nthread_state_addr = read_pointer(pid, threads_main_ptr)\nif thread_state_addr == 0:\nraise RuntimeError(\"Main thread state is not available\")\nreturn thread_state_addr\nThe following example demonstrates how to locate a thread by its native thread ID:\ndef find_thread_by_id(\npid: int,\ninterp_addr: int,\ndebug_offsets: DebugOffsets,\ntarget_tid: int,\n) -> int:\n# Start at threads_head and walk the linked list\nthread_ptr = read_pointer(\npid,\ninterp_addr + debug_offsets.interpreter_state.threads_head\n)\nwhile thread_ptr:\nnative_tid_ptr = (\nthread_ptr + debug_offsets.thread_state.native_thread_id\n)\nnative_tid = read_int(pid, native_tid_ptr)\nif native_tid == target_tid:\nreturn thread_ptr\nthread_ptr = read_pointer(\npid,\nthread_ptr + debug_offsets.thread_state.next\n)\nraise RuntimeError(\"Thread with the given ID was not found\")\nOnce a valid thread state has been located, the debugger can 
proceed with modifying its control fields and scheduling execution, as described in the next section.\nWriting control information\u00b6\nOnce a valid PyThreadState\nstructure has been identified, the debugger may\nmodify control fields within it to schedule the execution of a specified Python\nscript. These control fields are checked periodically by the interpreter, and\nwhen set correctly, they trigger the execution of remote code at a safe point\nin the evaluation loop.\nEach PyThreadState\ncontains a _PyRemoteDebuggerSupport\nstructure used\nfor communication between the debugger and the interpreter. The locations of\nits fields are defined by the _Py_DebugOffsets\nstructure and include the\nfollowing:\ndebugger_script_path\n: A fixed-size buffer that holds the full path to a Python source file (.py\n). This file must be accessible and readable by the target process when execution is triggered.debugger_pending_call\n: An integer flag. Setting this to1\ntells the interpreter that a script is ready to be executed.eval_breaker\n: A field checked by the interpreter during execution. Setting bit 5 (_PY_EVAL_PLEASE_STOP_BIT\n, value1U << 5\n) in this field causes the interpreter to pause and check for debugger activity.\nTo complete the injection, the debugger must perform the following steps:\nWrite the full script path into the\ndebugger_script_path\nbuffer.Set\ndebugger_pending_call\nto1\n.Read the current value of\neval_breaker\n, set bit 5 (_PY_EVAL_PLEASE_STOP_BIT\n), and write the updated value back. 
This signals the interpreter to check for debugger activity.\nThe following is an example implementation:\ndef inject_script(\npid: int,\nthread_state_addr: int,\ndebug_offsets: DebugOffsets,\nscript_path: str\n) -> None:\n# Compute the base offset of _PyRemoteDebuggerSupport\nsupport_base = (\nthread_state_addr +\ndebug_offsets.debugger_support.remote_debugger_support\n)\n# Step 1: Write the script path into debugger_script_path\nscript_path_ptr = (\nsupport_base +\ndebug_offsets.debugger_support.debugger_script_path\n)\nwrite_string(pid, script_path_ptr, script_path)\n# Step 2: Set debugger_pending_call to 1\npending_ptr = (\nsupport_base +\ndebug_offsets.debugger_support.debugger_pending_call\n)\nwrite_int(pid, pending_ptr, 1)\n# Step 3: Set _PY_EVAL_PLEASE_STOP_BIT (bit 5, value 1 << 5) in\n# eval_breaker\neval_breaker_ptr = (\nthread_state_addr +\ndebug_offsets.debugger_support.eval_breaker\n)\nbreaker = read_int(pid, eval_breaker_ptr)\nbreaker |= (1 << 5)\nwrite_int(pid, eval_breaker_ptr, breaker)\nOnce these fields are set, the debugger may resume the process (if it was suspended). The interpreter will process the request at the next safe evaluation point, load the script from disk, and execute it.\nIt is the responsibility of the debugger to ensure that the script file remains present and accessible to the target process during execution.\nNote\nScript execution is asynchronous. The script file cannot be deleted immediately after injection. The debugger should wait until the injected script has produced an observable effect before removing the file. This effect depends on what the script is designed to do. For example, a debugger might wait until the remote process connects back to a socket before removing the script. 
Once such an effect is observed, it is safe to assume the file is no longer needed.
Summary¶
To inject and execute a Python script in a remote process:
1. Locate the PyRuntime structure in the target process's memory.
2. Read and validate the _Py_DebugOffsets structure at the beginning of PyRuntime.
3. Use the offsets to locate a valid PyThreadState.
4. Write the path to a Python script into debugger_script_path.
5. Set the debugger_pending_call flag to 1.
6. Set _PY_EVAL_PLEASE_STOP_BIT in the eval_breaker field.
7. Resume the process (if suspended). The script will execute at the next safe evaluation point.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 5877}
{"url": "https://docs.python.org/3/tutorial/inputoutput.html", "title": "Input and Output", "content": "7. Input and Output\u00b6\nThere are several ways to present the output of a program; data can be printed in a human-readable form, or written to a file for future use. This chapter will discuss some of the possibilities.\n7.1. Fancier Output Formatting\u00b6\nSo far we\u2019ve encountered two ways of writing values: expression statements and\nthe print()\nfunction. (A third way is using the write()\nmethod\nof file objects; the standard output file can be referenced as sys.stdout\n.\nSee the Library Reference for more information on this.)\nOften you\u2019ll want more control over the formatting of your output than simply printing space-separated values. There are several ways to format output.\nTo use formatted string literals, begin a string with f or F before the opening quotation mark or triple quotation mark. Inside this string, you can write a Python expression between { and } characters that can refer to variables or literal values.\n>>> year = 2016\n>>> event = 'Referendum'\n>>> f'Results of the {year} {event}'\n'Results of the 2016 Referendum'\nThe str.format()\nmethod of strings requires more manual effort. You\u2019ll still use { and } to mark where a variable will be substituted and can provide detailed formatting directives, but you\u2019ll also need to provide the information to be formatted. In the following code block there are two examples of how to format variables:\n>>> yes_votes = 42_572_654\n>>> total_votes = 85_705_149\n>>> percentage = yes_votes / total_votes\n>>> '{:-9} YES votes {:2.2%}'.format(yes_votes, percentage)\n' 42572654 YES votes 49.67%'\nNotice how the yes_votes\nare padded with spaces and a negative sign only for negative numbers. 
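A minimal sketch to make that padding and sign behavior visible in isolation (the variable names here are illustrative, not from the tutorial):

```python
# Width 9 with the '-' sign option (the default): positive numbers are
# right-aligned with space padding only, negative numbers get a minus
# sign inside the same 9-character field.
padded_positive = '{:-9}'.format(42572654)
padded_negative = '{:-9}'.format(-42572654)
print(repr(padded_positive))  # ' 42572654'
print(repr(padded_negative))  # '-42572654'
```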
The example also prints percentage\nmultiplied by 100, with 2 decimal places and followed by a percent sign (see Format Specification Mini-Language for details).\nFinally, you can do all the string handling yourself by using string slicing and concatenation operations to create any layout you can imagine. The string type has some methods that perform useful operations for padding strings to a given column width.\nWhen you don\u2019t need fancy output but just want a quick display of some\nvariables for debugging purposes, you can convert any value to a string with\nthe repr()\nor str()\nfunctions.\nThe str()\nfunction is meant to return representations of values which are\nfairly human-readable, while repr()\nis meant to generate representations\nwhich can be read by the interpreter (or will force a SyntaxError\nif\nthere is no equivalent syntax). For objects which don\u2019t have a particular\nrepresentation for human consumption, str()\nwill return the same value as\nrepr()\n. Many values, such as numbers or structures like lists and\ndictionaries, have the same representation using either function. 
Strings, in\nparticular, have two distinct representations.\nSome examples:\n>>> s = 'Hello, world.'\n>>> str(s)\n'Hello, world.'\n>>> repr(s)\n\"'Hello, world.'\"\n>>> str(1/7)\n'0.14285714285714285'\n>>> x = 10 * 3.25\n>>> y = 200 * 200\n>>> s = 'The value of x is ' + repr(x) + ', and y is ' + repr(y) + '...'\n>>> print(s)\nThe value of x is 32.5, and y is 40000...\n>>> # The repr() of a string adds string quotes and backslashes:\n>>> hello = 'hello, world\\n'\n>>> hellos = repr(hello)\n>>> print(hellos)\n'hello, world\\n'\n>>> # The argument to repr() may be any Python object:\n>>> repr((x, y, ('spam', 'eggs')))\n\"(32.5, 40000, ('spam', 'eggs'))\"\nThe string\nmodule contains support for a simple templating approach\nbased upon regular expressions, via string.Template\n.\nThis offers yet another way to substitute values into strings,\nusing placeholders like $x\nand replacing them with values from a dictionary.\nThis syntax is easy to use, although it offers much less control for formatting.\n7.1.1. Formatted String Literals\u00b6\nFormatted string literals (also called f-strings for\nshort) let you include the value of Python expressions inside a string by\nprefixing the string with f\nor F\nand writing expressions as\n{expression}\n.\nAn optional format specifier can follow the expression. This allows greater control over how the value is formatted. The following example rounds pi to three places after the decimal:\n>>> import math\n>>> print(f'The value of pi is approximately {math.pi:.3f}.')\nThe value of pi is approximately 3.142.\nPassing an integer after the ':'\nwill cause that field to be a minimum\nnumber of characters wide. This is useful for making columns line up.\n>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 7678}\n>>> for name, phone in table.items():\n... 
print(f'{name:10} ==> {phone:10d}')\n...\nSjoerd ==> 4127\nJack ==> 4098\nDcab ==> 7678\nOther modifiers can be used to convert the value before it is formatted.\n'!a'\napplies ascii()\n, '!s'\napplies str()\n, and '!r'\napplies repr()\n:\n>>> animals = 'eels'\n>>> print(f'My hovercraft is full of {animals}.')\nMy hovercraft is full of eels.\n>>> print(f'My hovercraft is full of {animals!r}.')\nMy hovercraft is full of 'eels'.\nThe =\nspecifier can be used to expand an expression to the text of the\nexpression, an equal sign, then the representation of the evaluated expression:\n>>> bugs = 'roaches'\n>>> count = 13\n>>> area = 'living room'\n>>> print(f'Debugging {bugs=} {count=} {area=}')\nDebugging bugs='roaches' count=13 area='living room'\nSee self-documenting expressions for more information\non the =\nspecifier. For a reference on these format specifications, see\nthe reference guide for the Format Specification Mini-Language.\n7.1.2. The String format() Method\u00b6\nBasic usage of the str.format()\nmethod looks like this:\n>>> print('We are the {} who say \"{}!\"'.format('knights', 'Ni'))\nWe are the knights who say \"Ni!\"\nThe brackets and characters within them (called format fields) are replaced with\nthe objects passed into the str.format()\nmethod. A number in the\nbrackets can be used to refer to the position of the object passed into the\nstr.format()\nmethod.\n>>> print('{0} and {1}'.format('spam', 'eggs'))\nspam and eggs\n>>> print('{1} and {0}'.format('spam', 'eggs'))\neggs and spam\nIf keyword arguments are used in the str.format()\nmethod, their values\nare referred to by using the name of the argument.\n>>> print('This {food} is {adjective}.'.format(\n... food='spam', adjective='absolutely horrible'))\nThis spam is absolutely horrible.\nPositional and keyword arguments can be arbitrarily combined:\n>>> print('The story of {0}, {1}, and {other}.'.format('Bill', 'Manfred',\n... 
other='Georg'))\nThe story of Bill, Manfred, and Georg.\nIf you have a really long format string that you don\u2019t want to split up, it\nwould be nice if you could reference the variables to be formatted by name\ninstead of by position. This can be done by simply passing the dict and using\nsquare brackets '[]'\nto access the keys.\n>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678}\n>>> print('Jack: {0[Jack]:d}; Sjoerd: {0[Sjoerd]:d}; '\n... 'Dcab: {0[Dcab]:d}'.format(table))\nJack: 4098; Sjoerd: 4127; Dcab: 8637678\nThis could also be done by passing the table\ndictionary as keyword arguments with the **\nnotation.\n>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678}\n>>> print('Jack: {Jack:d}; Sjoerd: {Sjoerd:d}; Dcab: {Dcab:d}'.format(**table))\nJack: 4098; Sjoerd: 4127; Dcab: 8637678\nThis is particularly useful in combination with the built-in function\nvars()\n, which returns a dictionary containing all local variables:\n>>> table = {k: str(v) for k, v in vars().items()}\n>>> message = \" \".join([f'{k}: ' + '{' + k +'};' for k in table.keys()])\n>>> print(message.format(**table))\n__name__: __main__; __doc__: None; __package__: None; __loader__: ...\nAs an example, the following lines produce a tidily aligned set of columns giving integers and their squares and cubes:\n>>> for x in range(1, 11):\n... print('{0:2d} {1:3d} {2:4d}'.format(x, x*x, x*x*x))\n...\n1 1 1\n2 4 8\n3 9 27\n4 16 64\n5 25 125\n6 36 216\n7 49 343\n8 64 512\n9 81 729\n10 100 1000\nFor a complete overview of string formatting with str.format()\n, see\nFormat String Syntax.\n7.1.3. Manual String Formatting\u00b6\nHere\u2019s the same table of squares and cubes, formatted manually:\n>>> for x in range(1, 11):\n... print(repr(x).rjust(2), repr(x*x).rjust(3), end=' ')\n... # Note use of 'end' on previous line\n... 
print(repr(x*x*x).rjust(4))\n...\n1 1 1\n2 4 8\n3 9 27\n4 16 64\n5 25 125\n6 36 216\n7 49 343\n8 64 512\n9 81 729\n10 100 1000\n(Note that the one space between each column was added by the\nway print()\nworks: it always adds spaces between its arguments.)\nThe str.rjust()\nmethod of string objects right-justifies a string in a\nfield of a given width by padding it with spaces on the left. There are\nsimilar methods str.ljust()\nand str.center()\n. These methods do\nnot write anything, they just return a new string. If the input string is too\nlong, they don\u2019t truncate it, but return it unchanged; this will mess up your\ncolumn lay-out but that\u2019s usually better than the alternative, which would be\nlying about a value. (If you really want truncation you can always add a\nslice operation, as in x.ljust(n)[:n]\n.)\nThere is another method, str.zfill()\n, which pads a numeric string on the\nleft with zeros. It understands about plus and minus signs:\n>>> '12'.zfill(5)\n'00012'\n>>> '-3.14'.zfill(7)\n'-003.14'\n>>> '3.14159265359'.zfill(5)\n'3.14159265359'\n7.1.4. Old string formatting\u00b6\nThe % operator (modulo) can also be used for string formatting.\nGiven format % values\n(where format is a string),\n%\nconversion specifications in format are replaced with\nzero or more elements of values.\nThis operation is commonly known as string\ninterpolation. For example:\n>>> import math\n>>> print('The value of pi is approximately %5.3f.' % math.pi)\nThe value of pi is approximately 3.142.\nMore information can be found in the printf-style String Formatting section.\n7.2. Reading and Writing Files\u00b6\nopen()\nreturns a file object, and is most commonly used with\ntwo positional arguments and one keyword argument:\nopen(filename, mode, encoding=None)\n>>> f = open('workfile', 'w', encoding=\"utf-8\")\nThe first argument is a string containing the filename. 
The second argument is\nanother string containing a few characters describing the way in which the file\nwill be used. mode can be 'r'\nwhen the file will only be read, 'w'\nfor only writing (an existing file with the same name will be erased), and\n'a'\nopens the file for appending; any data written to the file is\nautomatically added to the end. 'r+'\nopens the file for both reading and\nwriting. The mode argument is optional; 'r'\nwill be assumed if it\u2019s\nomitted.\nNormally, files are opened in text mode, meaning that you read and write\nstrings from and to the file, which are encoded in a specific encoding.\nIf encoding is not specified, the default is platform dependent\n(see open()\n).\nBecause UTF-8 is the modern de-facto standard, encoding=\"utf-8\"\nis\nrecommended unless you know that you need to use a different encoding.\nAppending a 'b'\nto the mode opens the file in binary mode.\nBinary mode data is read and written as bytes\nobjects.\nYou cannot specify encoding when opening a file in binary mode.\nIn text mode, the default when reading is to convert platform-specific line\nendings (\\n\non Unix, \\r\\n\non Windows) to just \\n\n. When writing in\ntext mode, the default is to convert occurrences of \\n\nback to\nplatform-specific line endings. This behind-the-scenes modification\nto file data is fine for text files, but will corrupt binary data like that in\nJPEG\nor EXE\nfiles. Be very careful to use binary mode when\nreading and writing such files.\nIt is good practice to use the with\nkeyword when dealing\nwith file objects. The advantage is that the file is properly closed\nafter its suite finishes, even if an exception is raised at some\npoint. Using with\nis also much shorter than writing\nequivalent try\n-finally\nblocks:\n>>> with open('workfile', encoding=\"utf-8\") as f:\n... 
read_data = f.read()\n>>> # We can check that the file has been automatically closed.\n>>> f.closed\nTrue\nIf you\u2019re not using the with\nkeyword, then you should call\nf.close()\nto close the file and immediately free up any system\nresources used by it.\nWarning\nCalling f.write()\nwithout using the with\nkeyword or calling\nf.close()\nmight result in the arguments\nof f.write()\nnot being completely written to the disk, even if the\nprogram exits successfully.\nAfter a file object is closed, either by a with\nstatement\nor by calling f.close()\n, attempts to use the file object will\nautomatically fail.\n>>> f.close()\n>>> f.read()\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nValueError: I/O operation on closed file.\n7.2.1. Methods of File Objects\u00b6\nThe rest of the examples in this section will assume that a file object called\nf\nhas already been created.\nTo read a file\u2019s contents, call f.read(size)\n, which reads some quantity of\ndata and returns it as a string (in text mode) or bytes object (in binary mode).\nsize is an optional numeric argument. When size is omitted or negative, the\nentire contents of the file will be read and returned; it\u2019s your problem if the\nfile is twice as large as your machine\u2019s memory. Otherwise, at most size\ncharacters (in text mode) or size bytes (in binary mode) are read and returned.\nIf the end of the file has been reached, f.read()\nwill return an empty\nstring (''\n).\n>>> f.read()\n'This is the entire file.\\n'\n>>> f.read()\n''\nf.readline()\nreads a single line from the file; a newline character (\\n\n)\nis left at the end of the string, and is only omitted on the last line of the\nfile if the file doesn\u2019t end in a newline. 
This makes the return value\nunambiguous; if f.readline()\nreturns an empty string, the end of the file\nhas been reached, while a blank line is represented by '\\n'\n, a string\ncontaining only a single newline.\n>>> f.readline()\n'This is the first line of the file.\\n'\n>>> f.readline()\n'Second line of the file\\n'\n>>> f.readline()\n''\nFor reading lines from a file, you can loop over the file object. This is memory efficient, fast, and leads to simple code:\n>>> for line in f:\n... print(line, end='')\n...\nThis is the first line of the file.\nSecond line of the file\nIf you want to read all the lines of a file in a list you can also use\nlist(f)\nor f.readlines()\n.\nf.write(string)\nwrites the contents of string to the file, returning\nthe number of characters written.\n>>> f.write('This is a test\\n')\n15\nOther types of objects need to be converted \u2013 either to a string (in text mode) or a bytes object (in binary mode) \u2013 before writing them:\n>>> value = ('the answer', 42)\n>>> s = str(value) # convert the tuple to string\n>>> f.write(s)\n18\nf.tell()\nreturns an integer giving the file object\u2019s current position in the file\nrepresented as number of bytes from the beginning of the file when in binary mode and\nan opaque number when in text mode.\nTo change the file object\u2019s position, use f.seek(offset, whence)\n. The position is computed\nfrom adding offset to a reference point; the reference point is selected by\nthe whence argument. A whence value of 0 measures from the beginning\nof the file, 1 uses the current file position, and 2 uses the end of the file as\nthe reference point. 
whence can be omitted and defaults to 0, using the\nbeginning of the file as the reference point.\n>>> f = open('workfile', 'rb+')\n>>> f.write(b'0123456789abcdef')\n16\n>>> f.seek(5) # Go to the 6th byte in the file\n5\n>>> f.read(1)\nb'5'\n>>> f.seek(-3, 2) # Go to the 3rd byte before the end\n13\n>>> f.read(1)\nb'd'\nIn text files (those opened without a b\nin the mode string), only seeks\nrelative to the beginning of the file are allowed (the exception being seeking\nto the very file end with seek(0, 2)\n) and the only valid offset values are\nthose returned from the f.tell()\n, or zero. Any other offset value produces\nundefined behaviour.\nFile objects have some additional methods, such as isatty()\nand\ntruncate()\nwhich are less frequently used; consult the Library\nReference for a complete guide to file objects.\n7.2.2. Saving structured data with json\n\u00b6\nStrings can easily be written to and read from a file. Numbers take a bit more\neffort, since the read()\nmethod only returns strings, which will have to\nbe passed to a function like int()\n, which takes a string like '123'\nand returns its numeric value 123. When you want to save more complex data\ntypes like nested lists and dictionaries, parsing and serializing by hand\nbecomes complicated.\nRather than having users constantly writing and debugging code to save\ncomplicated data types to files, Python allows you to use the popular data\ninterchange format called JSON (JavaScript Object Notation). The standard module called json\ncan take Python\ndata hierarchies, and convert them to string representations; this process is\ncalled serializing. Reconstructing the data from the string representation\nis called deserializing. Between serializing and deserializing, the\nstring representing the object may have been stored in a file or data, or\nsent over a network connection to some distant machine.\nNote\nThe JSON format is commonly used by modern applications to allow for data exchange. 
Many programmers are already familiar with it, which makes it a good choice for interoperability.\nIf you have an object x\n, you can view its JSON string representation with a\nsimple line of code:\n>>> import json\n>>> x = [1, 'simple', 'list']\n>>> json.dumps(x)\n'[1, \"simple\", \"list\"]'\nAnother variant of the dumps()\nfunction, called dump()\n,\nsimply serializes the object to a text file. So if f\nis a\ntext file object opened for writing, we can do this:\njson.dump(x, f)\nTo decode the object again, if f\nis a binary file or\ntext file object which has been opened for reading:\nx = json.load(f)\nNote\nJSON files must be encoded in UTF-8. Use encoding=\"utf-8\"\nwhen opening a\nJSON file as a text file for both reading and writing.\nThis simple serialization technique can handle lists and dictionaries, but\nserializing arbitrary class instances in JSON requires a bit of extra effort.\nThe reference for the json\nmodule contains an explanation of this.\nSee also\npickle\n- the pickle module\nContrary to JSON, pickle is a protocol which allows the serialization of arbitrarily complex Python objects. As such, it is specific to Python and cannot be used to communicate with applications written in other languages. 
It is also insecure by default: deserializing pickle data coming from an untrusted source can execute arbitrary code, if the data was crafted by a skilled attacker.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4570}
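Putting the pieces of the json discussion together, here is a small end-to-end sketch (the file name and sample data are arbitrary examples, not from the tutorial):

```python
import json
import os
import tempfile

data = {'answer': 42, 'items': ['simple', 'list'], 'nested': {'a': [1, 2]}}

# Serialize to a string with dumps() and reconstruct with loads().
text = json.dumps(data)
assert json.loads(text) == data

# Serialize to a UTF-8 text file with dump(), then reconstruct with load().
path = os.path.join(tempfile.mkdtemp(), 'data.json')
with open(path, 'w', encoding='utf-8') as f:
    json.dump(data, f)
with open(path, encoding='utf-8') as f:
    restored = json.load(f)

# Lists and dicts round-trip exactly; tuples would come back as lists.
assert restored == data
print(restored['answer'])  # 42
```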
{"url": "https://docs.python.org/3/using/configure.html", "title": "Configure Python", "content": "3. Configure Python\u00b6\n3.1. Build Requirements\u00b6\nTo build CPython, you will need:\nA C11 compiler. Optional C11 features are not required.\nOn Windows, Microsoft Visual Studio 2017 or later is required.\nSupport for IEEE 754 floating-point numbers and floating-point Not-a-Number (NaN).\nSupport for threads.\nChanged in version 3.5: On Windows, Visual Studio 2015 or later is now required.\nChanged in version 3.6: Selected C99 features, like <stdint.h> and static inline\nfunctions,\nare now required.\nChanged in version 3.7: Thread support is now required.\nChanged in version 3.11: C11 compiler, IEEE 754 and NaN support are now required. On Windows, Visual Studio 2017 or later is required.\nSee also PEP 7 \u201cStyle Guide for C Code\u201d and PEP 11 \u201cCPython platform support\u201d.\n3.1.1. Requirements for optional modules\u00b6\nSome optional modules of the standard library require third-party libraries installed for development (for example, header files must be available).\nMissing requirements are reported in the configure\noutput.\nModules that are missing due to missing dependencies are listed near the end\nof the make\noutput,\nsometimes using an internal name, for example, _ctypes\nfor the ctypes\nmodule.\nIf you distribute a CPython interpreter without optional modules, it\u2019s best practice to advise users, who generally expect that standard library modules are available.\nDependencies to build optional modules are:\nDependency | Minimum version | Python module\n(table rows not recoverable)\nNote that the table does not include all optional modules; in particular,\nplatform-specific modules like winreg\nare not listed here.\nSee also\nThe devguide includes a full list of dependencies required to build all modules and 
instructions on how to install them on common platforms.\n--with-system-expat\nallows building with an external libexpat library.\nChanged in version 3.1: Tcl/Tk version 8.3.1 is now required for tkinter\n.\nChanged in version 3.5: Tcl/Tk version 8.4 is now required for tkinter\n.\nChanged in version 3.10: OpenSSL 1.1.1 is now required for hashlib\nand ssl\n.\nSQLite 3.7.15 is now required for sqlite3\n.\nChanged in version 3.11: Tcl/Tk version 8.5.12 is now required for tkinter\n.\nChanged in version 3.13: SQLite 3.15.2 is now required for sqlite3\n.\n3.2. Generated files\u00b6\nTo reduce build dependencies, Python source code contains multiple generated files. Commands to regenerate all generated files:\nmake regen-all\nmake regen-stdlib-module-names\nmake regen-limited-abi\nmake regen-configure\nThe Makefile.pre.in\nfile documents generated files, their inputs, and tools used\nto regenerate them. Search for regen-*\nmake targets.\n3.2.1. configure script\u00b6\nThe make regen-configure\ncommand regenerates the aclocal.m4\nfile and\nthe configure\nscript using the Tools/build/regen-configure.sh\nshell\nscript which uses an Ubuntu container to get the same tools versions and have a\nreproducible output.\nThe container is optional; the following command can be run locally:\nautoreconf -ivf -Werror\nThe generated files can change depending on the exact versions of the\ntools used.\nThe container that CPython uses has\nAutoconf 2.72,\naclocal\nfrom Automake 1.16.5,\nand pkg-config 1.8.1.\nChanged in version 3.13: Autoconf 2.71 and aclocal 1.16.5 are now used to regenerate\nconfigure\n.\nChanged in version 3.14: Autoconf 2.72 is now used to regenerate configure\n.\n3.3. Configure Options\u00b6\nList all configure\nscript options using:\n./configure --help\nSee also the Misc/SpecialBuilds.txt\nin the Python source distribution.\n3.3.1. 
General Options\u00b6\n- --enable-loadable-sqlite-extensions\u00b6\nSupport loadable extensions in the\n_sqlite\nextension module (default is no) of the sqlite3\nmodule.\nSee the\nsqlite3.Connection.enable_load_extension()\nmethod of the sqlite3\nmodule.\nAdded in version 3.6.\n- --enable-big-digits=[15|30]\u00b6\nDefine the size in bits of Python\nint\ndigits: 15 or 30 bits.\nBy default, the digit size is 30.\nDefine\nPYLONG_BITS_IN_DIGIT\nto 15 or 30.\n- --with-suffix=SUFFIX\u00b6\nSet the Python executable suffix to SUFFIX.\nThe default suffix is\n.exe\non Windows and macOS (python.exe\nexecutable), .js\non Emscripten node, .html\non Emscripten browser, .wasm\non WASI, and an empty string on other platforms (python\nexecutable).\nChanged in version 3.11: The default suffix on WASM platform is one of\n.js\n, .html\nor .wasm\n.\n- --with-tzpath=\u00b6\nSelect the default time zone search path for\nzoneinfo.TZPATH\n. See the Compile-time configuration of the zoneinfo\nmodule.\nDefault:\n/usr/share/zoneinfo:/usr/lib/zoneinfo:/usr/share/lib/zoneinfo:/etc/zoneinfo\n.\nSee the\nos.pathsep\npath separator.\nAdded in version 3.9.\n- --without-decimal-contextvar\u00b6\nBuild the\n_decimal\nextension module using a thread-local context rather than a coroutine-local context (default), see the decimal\nmodule.\nSee\ndecimal.HAVE_CONTEXTVAR\nand the contextvars\nmodule.\nAdded in version 3.9.\n- --with-dbmliborder=\u00b6\nOverride the order to check db backends for the\ndbm\nmodule.\nA valid value is a colon (\n:\n) separated string with the backend names:\nndbm\n; gdbm\n; bdb\n.\n- --without-c-locale-coercion\u00b6\nDisable C locale coercion to a UTF-8 based locale (enabled by default).\nDon\u2019t define the\nPY_COERCE_C_LOCALE\nmacro.\nSee\nPYTHONCOERCECLOCALE\nand PEP 538.\n- --with-platlibdir=DIRNAME\u00b6\nPython library directory name (default is\nlib\n).\nFedora and SuSE use\nlib64\non 64-bit platforms.\nSee\nsys.platlibdir\n.\nAdded in version 3.9.\n- --with-wheel-pkg-dir=PATH\u00b6\nDirectory of wheel 
packages used by the\nensurepip\nmodule (none by default).\nSome Linux distribution packaging policies recommend against bundling dependencies. For example, Fedora installs wheel packages in the\n/usr/share/python-wheels/\ndirectory and doesn\u2019t install the ensurepip._bundled\npackage.\nAdded in version 3.10.\n- --with-pkg-config=[check|yes|no]\u00b6\nWhether configure should use pkg-config to detect build dependencies.\ncheck\n(default): pkg-config is optional\nyes\n: pkg-config is mandatory\nno\n: configure does not use pkg-config even when present\nAdded in version 3.11.\n- --enable-pystats\u00b6\nTurn on internal Python performance statistics gathering.\nBy default, statistics gathering is off. Use the\npython3 -X pystats\ncommand or set the PYTHONSTATS=1\nenvironment variable to turn on statistics gathering at Python startup.\nAt Python exit, dump statistics if statistics gathering was on and not cleared.\nEffects:\nAdd\n-X pystats\ncommand line option.\nAdd\nPYTHONSTATS\nenvironment variable.\nDefine the\nPy_STATS\nmacro.\nAdd functions to the\nsys\nmodule:\nsys._stats_on()\n: Turns on statistics gathering.\nsys._stats_off()\n: Turns off statistics gathering.\nsys._stats_clear()\n: Clears the statistics.\nsys._stats_dump()\n: Dumps statistics to file, and clears the statistics.\nThe statistics will be dumped to an arbitrary (probably unique) file in\n/tmp/py_stats/\n(Unix) or C:\temp\py_stats\\n(Windows). 
If that directory does not exist, results will be printed on stderr.\nUse\nTools/scripts/summarize_stats.py\nto read the stats.\nStatistics:\nOpcode:\nSpecialization: success, failure, hit, deferred, miss, deopt, failures;\nExecution count;\nPair count.\nCall:\nInlined Python calls;\nPyEval calls;\nFrames pushed;\nFrame object created;\nEval calls: vector, generator, legacy, function VECTORCALL, build class, slot, function \u201cex\u201d, API, method.\nObject:\nincref and decref;\ninterpreter incref and decref;\nallocations: all, 512 bytes, 4 kiB, big;\nfree;\nto/from free lists;\ndictionary materialized/dematerialized;\ntype cache;\noptimization attempts;\noptimization traces created/executed;\nuops executed.\nGarbage collector:\nGarbage collections;\nObjects visited;\nObjects collected.\nAdded in version 3.11.\n- --disable-gil\u00b6\nEnables support for running Python without the global interpreter lock (GIL): free-threaded build.\nDefines the\nPy_GIL_DISABLED\nmacro and adds \"t\"\nto sys.abiflags\n.\nSee Free-threaded CPython for more detail.\nAdded in version 3.13.\n- --enable-experimental-jit=[no|yes|yes-off|interpreter]\u00b6\nIndicate how to integrate the experimental just-in-time compiler.\nno\n: Don\u2019t build the JIT.\nyes\n: Enable the JIT. To disable it at runtime, set the environment variable PYTHON_JIT=0\n.\nyes-off\n: Build the JIT, but disable it by default. To enable it at runtime, set the environment variable PYTHON_JIT=1\n.\ninterpreter\n: Enable the \u201cJIT interpreter\u201d (only useful for those debugging the JIT itself). To disable it at runtime, set the environment variable PYTHON_JIT=0\n.\n--enable-experimental-jit=no\nis the default behavior if the option is not provided, and --enable-experimental-jit\nis shorthand for --enable-experimental-jit=yes\n. 
See Tools/jit/README.md\nfor more information, including how to install the necessary build-time dependencies.\nNote\nWhen building CPython with JIT enabled, ensure that your system has Python 3.11 or later installed.\nAdded in version 3.13.\n- PKG_CONFIG\u00b6\nPath to the\npkg-config\nutility.\n- PKG_CONFIG_LIBDIR\u00b6\n- PKG_CONFIG_PATH\u00b6\npkg-config\noptions.\n3.3.2. C compiler options\u00b6\n- CC\u00b6\nC compiler command.\n- CFLAGS\u00b6\nC compiler flags.\n- CPP\u00b6\nC preprocessor command.\n- CPPFLAGS\u00b6\nC preprocessor flags, e.g.\n-Iinclude_dir\n.\n3.3.3. Linker options\u00b6\n- LDFLAGS\u00b6\nLinker flags, e.g.\n-Llibrary_directory\n.\n- LIBS\u00b6\nLibraries to pass to the linker, e.g.\n-llibrary\n.\n- MACHDEP\u00b6\nName for machine-dependent library files.\n3.3.4. Options for third-party dependencies\u00b6\nAdded in version 3.11.\n- BZIP2_CFLAGS\u00b6\n- BZIP2_LIBS\u00b6\nC compiler and linker flags to link Python to\nlibbz2\n, used by the bz2\nmodule, overriding\npkg-config\n.\n- CURSES_CFLAGS\u00b6\n- CURSES_LIBS\u00b6\nC compiler and linker flags for\nlibncurses\nor libncursesw\n, used by the curses\nmodule, overriding\npkg-config\n.\n- GDBM_CFLAGS\u00b6\n- GDBM_LIBS\u00b6\nC compiler and linker flags for\ngdbm\n.\n- LIBEDIT_CFLAGS\u00b6\n- LIBEDIT_LIBS\u00b6\nC compiler and linker flags for\nlibedit\n, used by the readline\nmodule, overriding\npkg-config\n.\n- LIBFFI_CFLAGS\u00b6\n- LIBMPDEC_CFLAGS\u00b6\n- LIBMPDEC_LIBS\u00b6\nC compiler and linker flags for\nlibmpdec\n, used by the decimal\nmodule, overriding\npkg-config\n.\nNote\nThese environment variables have no effect unless\n--with-system-libmpdec\nis specified.\n- LIBLZMA_CFLAGS\u00b6\n- LIBREADLINE_CFLAGS\u00b6\n- LIBREADLINE_LIBS\u00b6\nC compiler and linker flags for\nlibreadline\n, used by the readline\nmodule, overriding\npkg-config\n.\n- LIBSQLITE3_CFLAGS\u00b6\n- LIBSQLITE3_LIBS\u00b6\nC compiler and linker flags for\nlibsqlite3\n, used by the sqlite3\nmodule, overriding\npkg-config\n.\n- LIBUUID_CFLAGS\u00b6\n- 
LIBZSTD_CFLAGS\u00b6\n- LIBZSTD_LIBS\u00b6\nC compiler and linker flags for\nlibzstd\n, used bycompression.zstd\nmodule, overridingpkg-config\n.Added in version 3.14.\n- PANEL_CFLAGS\u00b6\n- PANEL_LIBS\u00b6\nC compiler and linker flags for PANEL, overriding\npkg-config\n.C compiler and linker flags for\nlibpanel\norlibpanelw\n, used bycurses.panel\nmodule, overridingpkg-config\n.\n- TCLTK_CFLAGS\u00b6\n- TCLTK_LIBS\u00b6\nC compiler and linker flags for TCLTK, overriding\npkg-config\n.\n- ZLIB_CFLAGS\u00b6\n3.3.5. WebAssembly Options\u00b6\n- --enable-wasm-dynamic-linking\u00b6\nTurn on dynamic linking support for WASM.\nDynamic linking enables\ndlopen\n. File size of the executable increases due to limited dead code elimination and additional features.Added in version 3.11.\n- --enable-wasm-pthreads\u00b6\nTurn on pthreads support for WASM.\nAdded in version 3.11.\n3.3.6. Install Options\u00b6\n- --prefix=PREFIX\u00b6\nInstall architecture-independent files in PREFIX. On Unix, it defaults to\n/usr/local\n.This value can be retrieved at runtime using\nsys.prefix\n.As an example, one can use\n--prefix=\"$HOME/.local/\"\nto install a Python in its home directory.\n- --exec-prefix=EPREFIX\u00b6\nInstall architecture-dependent files in EPREFIX, defaults to\n--prefix\n.This value can be retrieved at runtime using\nsys.exec_prefix\n.\n3.3.7. Performance options\u00b6\nConfiguring Python using --enable-optimizations --with-lto\n(PGO + LTO) is\nrecommended for best performance. The experimental --enable-bolt\nflag can\nalso be used to improve performance.\n- --enable-optimizations\u00b6\nEnable Profile Guided Optimization (PGO) using\nPROFILE_TASK\n(disabled by default).The C compiler Clang requires\nllvm-profdata\nprogram for PGO. 
On macOS, GCC also requires it: GCC is just an alias to Clang on macOS.Disable also semantic interposition in libpython if\n--enable-shared\nand GCC is used: add-fno-semantic-interposition\nto the compiler and linker flags.Note\nDuring the build, you may encounter compiler warnings about profile data not being available for some source files. These warnings are harmless, as only a subset of the code is exercised during profile data acquisition. To disable these warnings on Clang, manually suppress them by adding\n-Wno-profile-instr-unprofiled\ntoCFLAGS\n.Added in version 3.6.\nChanged in version 3.10: Use\n-fno-semantic-interposition\non GCC.\n- PROFILE_TASK\u00b6\nEnvironment variable used in the Makefile: Python command line arguments for the PGO generation task.\nDefault:\n-m test --pgo --timeout=$(TESTTIMEOUT)\n.Added in version 3.8.\nChanged in version 3.13: Task failure is no longer ignored silently.\n- --with-lto=[full|thin|no|yes]\u00b6\nEnable Link Time Optimization (LTO) in any build (disabled by default).\nThe C compiler Clang requires\nllvm-ar\nfor LTO (ar\non macOS), as well as an LTO-aware linker (ld.gold\norlld\n).Added in version 3.6.\nAdded in version 3.11: To use ThinLTO feature, use\n--with-lto=thin\non Clang.Changed in version 3.12: Use ThinLTO as the default optimization policy on Clang if the compiler accepts the flag.\n- --enable-bolt\u00b6\nEnable usage of the BOLT post-link binary optimizer (disabled by default).\nBOLT is part of the LLVM project but is not always included in their binary distributions. This flag requires that\nllvm-bolt\nandmerge-fdata\nare available.BOLT is still a fairly new project so this flag should be considered experimental for now. Because this tool operates on machine code its success is dependent on a combination of the build environment + the other optimization configure args + the CPU architecture, and not all combinations are supported. BOLT versions before LLVM 16 are known to crash BOLT under some scenarios. 
Use of LLVM 16 or newer for BOLT optimization is strongly encouraged.\nThe\nBOLT_INSTRUMENT_FLAGS\nandBOLT_APPLY_FLAGS\nconfigure variables can be defined to override the default set of arguments for llvm-bolt to instrument and apply BOLT data to binaries, respectively.Added in version 3.12.\n- BOLT_APPLY_FLAGS\u00b6\nArguments to\nllvm-bolt\nwhen creating a BOLT optimized binary.Added in version 3.12.\n- BOLT_INSTRUMENT_FLAGS\u00b6\nArguments to\nllvm-bolt\nwhen instrumenting binaries.Added in version 3.12.\n- --with-computed-gotos\u00b6\nEnable computed gotos in evaluation loop (enabled by default on supported compilers).\n- --with-tail-call-interp\u00b6\nEnable interpreters using tail calls in CPython. If enabled, enabling PGO (\n--enable-optimizations\n) is highly recommended. This option specifically requires a C compiler with proper tail call support, and the preserve_none calling convention. For example, Clang 19 and newer supports this feature.Added in version 3.14.\n- --without-mimalloc\u00b6\nDisable the fast mimalloc allocator (enabled by default).\nSee also\nPYTHONMALLOC\nenvironment variable.\n- --without-pymalloc\u00b6\nDisable the specialized Python memory allocator pymalloc (enabled by default).\nSee also\nPYTHONMALLOC\nenvironment variable.\n- --without-doc-strings\u00b6\nDisable static documentation strings to reduce the memory footprint (enabled by default). Documentation strings defined in Python are not affected.\nDon\u2019t define the\nWITH_DOC_STRINGS\nmacro.See the\nPyDoc_STRVAR()\nmacro.\n- --enable-profiling\u00b6\nEnable C-level code profiling with\ngprof\n(disabled by default).\n- --with-strict-overflow\u00b6\nAdd\n-fstrict-overflow\nto the C compiler flags (by default we add-fno-strict-overflow\ninstead).\n- --without-remote-debug\u00b6\nDeactivate remote debugging support described in PEP 768 (enabled by default). 
When this flag is provided, the code that allows the interpreter to schedule the execution of a Python file in a separate process, as described in PEP 768, is not compiled. This includes both the functionality to schedule code to be executed and the functionality to receive code to be executed.

- Py_REMOTE_DEBUG¶
This macro is defined by default, unless Python is configured with --without-remote-debug.
Note that even if the macro is defined, remote debugging may not be available (for example, on an incompatible platform).
Added in version 3.14.

3.3.8. Python Debug Build¶

A debug build is Python built with the --with-pydebug configure option.

Effects of a debug build:
Display all warnings by default: the list of default warning filters is empty in the warnings module.
Add d to sys.abiflags.
Add the sys.gettotalrefcount() function.
Add the -X showrefcount command line option.
Add the -d command line option and the PYTHONDEBUG environment variable to debug the parser.
Add support for the __lltrace__ variable: enable low-level tracing in the bytecode evaluation loop if the variable is defined.
Install debug hooks on memory allocators to detect buffer overflows and other memory errors.
Define the Py_DEBUG and Py_REF_DEBUG macros.
Add runtime checks: code surrounded by #ifdef Py_DEBUG and #endif. Enable assert(...) and _PyObject_ASSERT(...) assertions: don't set the NDEBUG macro (see also the --with-assertions configure option).
Main runtime checks:Add sanity checks on the function arguments.\nUnicode and int objects are created with their memory filled with a pattern to detect usage of uninitialized objects.\nEnsure that functions which can clear or replace the current exception are not called with an exception raised.\nCheck that deallocator functions don\u2019t change the current exception.\nThe garbage collector (\ngc.collect()\nfunction) runs some basic checks on objects consistency.The\nPy_SAFE_DOWNCAST()\nmacro checks for integer underflow and overflow when downcasting from wide types to narrow types.\nSee also the Python Development Mode and the\n--with-trace-refs\nconfigure option.\nChanged in version 3.8: Release builds and debug builds are now ABI compatible: defining the\nPy_DEBUG\nmacro no longer implies the Py_TRACE_REFS\nmacro (see the\n--with-trace-refs\noption).\n3.3.9. Debug options\u00b6\n- --with-pydebug\u00b6\nBuild Python in debug mode: define the\nPy_DEBUG\nmacro (disabled by default).\n- --with-trace-refs\u00b6\nEnable tracing references for debugging purpose (disabled by default).\nEffects:\nDefine the\nPy_TRACE_REFS\nmacro.Add\nsys.getobjects()\nfunction.Add\nPYTHONDUMPREFS\nenvironment variable.\nThe\nPYTHONDUMPREFS\nenvironment variable can be used to dump objects and reference counts still alive at Python exit.Statically allocated objects are not traced.\nAdded in version 3.8.\nChanged in version 3.13: This build is now ABI compatible with release build and debug build.\n- --with-assertions\u00b6\nBuild with C assertions enabled (default is no):\nassert(...);\nand_PyObject_ASSERT(...);\n.If set, the\nNDEBUG\nmacro is not defined in theOPT\ncompiler variable.See also the\n--with-pydebug\noption (debug build) which also enables assertions.Added in version 3.6.\n- --with-valgrind\u00b6\nEnable Valgrind support (default is no).\n- --with-dtrace\u00b6\nEnable DTrace support (default is no).\nSee Instrumenting CPython with DTrace and SystemTap.\nAdded in version 
3.6.

- --with-address-sanitizer¶
Enable the AddressSanitizer memory error detector, asan (default is no). To improve ASan's detection capabilities, you may also want to combine this with --without-pymalloc to disable the specialized small-object allocator, whose allocations are not tracked by ASan.
Added in version 3.6.

- --with-memory-sanitizer¶
Enable the MemorySanitizer allocation error detector, msan (default is no).
Added in version 3.6.

- --with-undefined-behavior-sanitizer¶
Enable the UndefinedBehaviorSanitizer undefined behaviour detector, ubsan (default is no).
Added in version 3.6.

- --with-thread-sanitizer¶
Enable the ThreadSanitizer data race detector, tsan (default is no).
Added in version 3.13.

3.3.10. Linker options¶

- --enable-shared¶
Enable building a shared Python library: libpython (default is no).

- --without-static-libpython¶
Do not build libpythonMAJOR.MINOR.a and do not install python.o (built and enabled by default).
Added in version 3.10.

3.3.11. Libraries options¶

- --with-libs='lib1 ...'¶
Link against additional libraries (default is no).

- --with-system-expat¶
Build the pyexpat module using an installed expat library (default is no).

- --with-system-libmpdec¶
Build the _decimal extension module using an installed mpdecimal library; see the decimal module (default is yes).
Added in version 3.3.
Changed in version 3.13: Default to using the installed mpdecimal library.
Changed in version 3.15: A bundled copy of the library will no longer be selected implicitly if an installed mpdecimal library is not found.
In Python 3.15 only, it can still be selected explicitly using--with-system-libmpdec=no\nor--without-system-libmpdec\n.Deprecated since version 3.13, will be removed in version 3.16: A copy of the\nmpdecimal\nlibrary sources will no longer be distributed with Python 3.16.See also\n- --with-readline=readline|editline\u00b6\nDesignate a backend library for the\nreadline\nmodule.readline: Use readline as the backend.\neditline: Use editline as the backend.\nAdded in version 3.10.\n- --without-readline\u00b6\nDon\u2019t build the\nreadline\nmodule (built by default).Don\u2019t define the\nHAVE_LIBREADLINE\nmacro.Added in version 3.10.\n- --with-libm=STRING\u00b6\nOverride\nlibm\nmath library to STRING (default is system-dependent).\n- --with-libc=STRING\u00b6\nOverride\nlibc\nC library to STRING (default is system-dependent).\n- --with-openssl=DIR\u00b6\nRoot of the OpenSSL directory.\nAdded in version 3.7.\n- --with-openssl-rpath=[no|auto|DIR]\u00b6\nSet runtime library directory (rpath) for OpenSSL libraries:\nno\n(default): don\u2019t set rpath;auto\n: auto-detect rpath from--with-openssl\nandpkg-config\n;DIR: set an explicit rpath.\nAdded in version 3.10.\n3.3.12. 
Security Options¶

- --with-hash-algorithm=[fnv|siphash13|siphash24]¶
Select the hash algorithm used in Python/pyhash.c:
siphash13 (default);
siphash24;
fnv.
Added in version 3.4.
Added in version 3.11: siphash13 was added and is the new default.

- --with-builtin-hashlib-hashes=md5,sha1,sha256,sha512,sha3,blake2¶
Built-in hash modules:
md5;
sha1;
sha256;
sha512;
sha3 (with shake);
blake2.
Added in version 3.9.

- --with-ssl-default-suites=[python|openssl|STRING]¶
Override the OpenSSL default cipher suites string:
python (default): use Python's preferred selection;
openssl: leave OpenSSL's defaults untouched;
STRING: use a custom string.
See the ssl module.
Added in version 3.7.
Changed in version 3.10: The settings python and STRING also set TLS 1.2 as the minimum protocol version.

- --disable-safety¶
Disable the compiler options recommended by OpenSSF for security that have no performance overhead. By default, CPython is built with these options and no slowdown; when this option is given, CPython is built without the compiler options listed below.
The following compiler options are disabled by --disable-safety:
-fstack-protector-strong: enable run-time checks for stack-based buffer overflows.
-Wtrampolines: enable warnings about trampolines that require executable stacks.
Added in version 3.14.

- --enable-slower-safety¶
Enable the compiler options recommended by OpenSSF for security that do incur runtime overhead. By default, CPython is built without these slower hardening options.
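The hash algorithm selected with --with-hash-algorithm can be read back at runtime from sys.hash_info, a small sketch:

```python
import sys

# sys.hash_info describes the compiled-in string/bytes hash algorithm.
print(sys.hash_info.algorithm)   # "siphash13" on default builds since 3.11
print(sys.hash_info.hash_bits)   # width of the hash in bits
```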
When this option is enabled, CPython will be built with the compiler options listed below.\nThe following compiler options are enabled with\n--enable-slower-safety\n:-D_FORTIFY_SOURCE=3: Fortify sources with compile- and run-time checks for unsafe libc usage and buffer overflows.\nAdded in version 3.14.\n3.3.13. macOS Options\u00b6\nSee Mac/README.rst.\n- --enable-universalsdk\u00b6\n- --enable-universalsdk=SDKDIR\u00b6\nCreate a universal binary build. SDKDIR specifies which macOS SDK should be used to perform the build (default is no).\n- --enable-framework\u00b6\n- --enable-framework=INSTALLDIR\u00b6\nCreate a Python.framework rather than a traditional Unix install. Optional INSTALLDIR specifies the installation path (default is no).\n- --with-universal-archs=ARCH\u00b6\nSpecify the kind of universal binary that should be created. This option is only valid when\n--enable-universalsdk\nis set.Options:\nuniversal2\n(x86-64 and arm64);32-bit\n(PPC and i386);64-bit\n(PPC64 and x86-64);3-way\n(i386, PPC and x86-64);intel\n(i386 and x86-64);intel-32\n(i386);intel-64\n(x86-64);all\n(PPC, i386, PPC64 and x86-64).\nNote that values for this configuration item are not the same as the identifiers used for universal binary wheels on macOS. See the Python Packaging User Guide for details on the packaging platform compatibility tags used on macOS\n- --with-framework-name=FRAMEWORK\u00b6\nSpecify the name for the python framework on macOS only valid when\n--enable-framework\nis set (default:Python\n).\n- --with-app-store-compliance\u00b6\n- --with-app-store-compliance=PATCH-FILE\u00b6\nThe Python standard library contains strings that are known to trigger automated inspection tool errors when submitted for distribution by the macOS and iOS App Stores. If enabled, this option will apply the list of patches that are known to correct app store compliance. A custom patch file can also be specified. This option is disabled by default.\nAdded in version 3.13.\n3.3.14. 
iOS Options\u00b6\nSee iOS/README.rst.\n- --enable-framework=INSTALLDIR\u00b6\nCreate a Python.framework. Unlike macOS, the INSTALLDIR argument specifying the installation path is mandatory.\n- --with-framework-name=FRAMEWORK\u00b6\nSpecify the name for the framework (default:\nPython\n).\n3.3.15. Cross Compiling Options\u00b6\nCross compiling, also known as cross building, can be used to build Python for another CPU architecture or platform. Cross compiling requires a Python interpreter for the build platform. The version of the build Python must match the version of the cross compiled host Python.\n- --build=BUILD\u00b6\nconfigure for building on BUILD, usually guessed by config.guess.\n- --host=HOST\u00b6\ncross-compile to build programs to run on HOST (target platform)\n- --with-build-python=path/to/python\u00b6\npath to build\npython\nbinary for cross compilingAdded in version 3.11.\n- CONFIG_SITE=file\u00b6\nAn environment variable that points to a file with configure overrides.\nExample config.site file:\n# config.site-aarch64 ac_cv_buggy_getaddrinfo=no ac_cv_file__dev_ptmx=yes ac_cv_file__dev_ptc=no\n- HOSTRUNNER\u00b6\nProgram to run CPython for the host platform for cross-compilation.\nAdded in version 3.11.\nCross compiling example:\nCONFIG_SITE=config.site-aarch64 ../configure \\\n--build=x86_64-pc-linux-gnu \\\n--host=aarch64-unknown-linux-gnu \\\n--with-build-python=../x86_64/python\n3.4. Python Build System\u00b6\n3.4.1. Main files of the build system\u00b6\nconfigure.ac\n=>configure\n;Makefile.pre.in\n=>Makefile\n(created byconfigure\n);pyconfig.h\n(created byconfigure\n);Modules/Setup\n: C extensions built by the Makefile usingModule/makesetup\nshell script;\n3.4.2. 
Main build steps\u00b6\nC files (\n.c\n) are built as object files (.o\n).A static\nlibpython\nlibrary (.a\n) is created from objects files.python.o\nand the staticlibpython\nlibrary are linked into the finalpython\nprogram.C extensions are built by the Makefile (see\nModules/Setup\n).\n3.4.3. Main Makefile targets\u00b6\n3.4.3.1. make\u00b6\nFor the most part, when rebuilding after editing some code or\nrefreshing your checkout from upstream, all you need to do is execute\nmake\n, which (per Make\u2019s semantics) builds the default target, the\nfirst one defined in the Makefile. By tradition (including in the\nCPython project) this is usually the all\ntarget. The\nconfigure\nscript expands an autoconf\nvariable,\n@DEF_MAKE_ALL_RULE@\nto describe precisely which targets make\nall\nwill build. The three choices are:\nprofile-opt\n(configured with--enable-optimizations\n)build_wasm\n(chosen if the host platform matcheswasm32-wasi*\norwasm32-emscripten\n)build_all\n(configured without explicitly using either of the others)\nDepending on the most recent source file changes, Make will rebuild\nany targets (object files and executables) deemed out-of-date,\nincluding running configure\nagain if necessary. Source/target\ndependencies are many and maintained manually however, so Make\nsometimes doesn\u2019t have all the information necessary to correctly\ndetect all targets which need to be rebuilt. Depending on which\ntargets aren\u2019t rebuilt, you might experience a number of problems. If\nyou have build or test problems which you can\u2019t otherwise explain,\nmake clean && make\nshould work around most dependency problems, at\nthe expense of longer build times.\n3.4.3.2. make platform\u00b6\nBuild the python\nprogram, but don\u2019t build the standard library\nextension modules. This generates a file named platform\nwhich\ncontains a single line describing the details of the build platform,\ne.g., macosx-14.3-arm64-3.12\nor linux-x86_64-3.13\n.\n3.4.3.3. 
make profile-opt\u00b6\nBuild Python using profile-guided optimization (PGO). You can use the\nconfigure --enable-optimizations\noption to make this the\ndefault target of the make\ncommand (make all\nor just\nmake\n).\n3.4.3.4. make clean\u00b6\nRemove built files.\n3.4.3.5. make distclean\u00b6\nIn addition to the work done by make clean\n, remove files\ncreated by the configure script. configure\nwill have to be run\nbefore building again. [6]\n3.4.3.6. make install\u00b6\nBuild the all\ntarget and install Python.\n3.4.3.7. make test\u00b6\nBuild the all\ntarget and run the Python test suite with the\n--fast-ci\noption without GUI tests. Variables:\nTESTOPTS\n: additional regrtest command-line options.TESTPYTHONOPTS\n: additional Python command-line options.TESTTIMEOUT\n: timeout in seconds (default: 10 minutes).\n3.4.3.8. make ci\u00b6\nThis is similar to make test\n, but uses the -ugui\nto also run GUI tests.\nAdded in version 3.14.\n3.4.3.9. make buildbottest\u00b6\nThis is similar to make test\n, but uses the --slow-ci\noption and default timeout of 20 minutes, instead of --fast-ci\noption.\n3.4.3.10. make regen-all\u00b6\nRegenerate (almost) all generated files. These include (but are not\nlimited to) bytecode cases, and parser generator file.\nmake regen-stdlib-module-names\nand autoconf\nmust be run\nseparately for the remaining generated files.\n3.4.4. 
C extensions\u00b6\nSome C extensions are built as built-in modules, like the sys\nmodule.\nThey are built with the Py_BUILD_CORE_BUILTIN\nmacro defined.\nBuilt-in modules have no __file__\nattribute:\n>>> import sys\n>>> sys\n\n>>> sys.__file__\nTraceback (most recent call last):\nFile \"\", line 1, in \nAttributeError: module 'sys' has no attribute '__file__'\nOther C extensions are built as dynamic libraries, like the _asyncio\nmodule.\nThey are built with the Py_BUILD_CORE_MODULE\nmacro defined.\nExample on Linux x86-64:\n>>> import _asyncio\n>>> _asyncio\n\n>>> _asyncio.__file__\n'/usr/lib64/python3.9/lib-dynload/_asyncio.cpython-39-x86_64-linux-gnu.so'\nModules/Setup\nis used to generate Makefile targets to build C extensions.\nAt the beginning of the files, C extensions are built as built-in modules.\nExtensions defined after the *shared*\nmarker are built as dynamic libraries.\nThe PyAPI_FUNC()\n, PyAPI_DATA()\nand\nPyMODINIT_FUNC\nmacros of Include/exports.h\nare defined\ndifferently depending if the Py_BUILD_CORE_MODULE\nmacro is defined:\nUse\nPy_EXPORTED_SYMBOL\nif thePy_BUILD_CORE_MODULE\nis definedUse\nPy_IMPORTED_SYMBOL\notherwise.\nIf the Py_BUILD_CORE_BUILTIN\nmacro is used by mistake on a C extension\nbuilt as a shared library, its PyInit_xxx()\nfunction is not exported,\ncausing an ImportError\non import.\n3.5. Compiler and linker flags\u00b6\nOptions set by the ./configure\nscript and environment variables and used by\nMakefile\n.\n3.5.1. 
Preprocessor flags\u00b6\n- CONFIGURE_CPPFLAGS\u00b6\nValue of\nCPPFLAGS\nvariable passed to the./configure\nscript.Added in version 3.6.\n- CPPFLAGS\u00b6\n(Objective) C/C++ preprocessor flags, e.g.\n-Iinclude_dir\nif you have headers in a nonstandard directory include_dir.Both\nCPPFLAGS\nandLDFLAGS\nneed to contain the shell\u2019s value to be able to build extension modules using the directories specified in the environment variables.\n- BASECPPFLAGS\u00b6\nAdded in version 3.4.\n- PY_CPPFLAGS\u00b6\nExtra preprocessor flags added for building the interpreter object files.\nDefault:\n$(BASECPPFLAGS) -I. -I$(srcdir)/Include $(CONFIGURE_CPPFLAGS) $(CPPFLAGS)\n.Added in version 3.2.\n3.5.2. Compiler flags\u00b6\n- CC\u00b6\nC compiler command.\nExample:\ngcc -pthread\n.\n- CXX\u00b6\nC++ compiler command.\nExample:\ng++ -pthread\n.\n- CFLAGS\u00b6\nC compiler flags.\n- CFLAGS_NODIST\u00b6\nCFLAGS_NODIST\nis used for building the interpreter and stdlib C extensions. Use it when a compiler flag should not be part ofCFLAGS\nonce Python is installed (gh-65320).In particular,\nCFLAGS\nshould not contain:the compiler flag\n-I\n(for setting the search path for include files). The-I\nflags are processed from left to right, and any flags inCFLAGS\nwould take precedence over user- and package-supplied-I\nflags.hardening flags such as\n-Werror\nbecause distributions cannot control whether packages installed by users conform to such heightened standards.\nAdded in version 3.5.\n- COMPILEALL_OPTS\u00b6\nOptions passed to the\ncompileall\ncommand line when building PYC files inmake install\n. 
Default:-j0\n.Added in version 3.12.\n- EXTRA_CFLAGS\u00b6\nExtra C compiler flags.\n- CONFIGURE_CFLAGS_NODIST\u00b6\nValue of\nCFLAGS_NODIST\nvariable passed to the./configure\nscript.Added in version 3.5.\n- BASECFLAGS\u00b6\nBase compiler flags.\n- OPT\u00b6\nOptimization flags.\n- CFLAGS_ALIASING\u00b6\nStrict or non-strict aliasing flags used to compile\nPython/dtoa.c\n.Added in version 3.7.\n- CCSHARED\u00b6\nCompiler flags used to build a shared library.\nFor example,\n-fPIC\nis used on Linux and on BSD.\n- CFLAGSFORSHARED\u00b6\nExtra C flags added for building the interpreter object files.\nDefault:\n$(CCSHARED)\nwhen--enable-shared\nis used, or an empty string otherwise.\n- PY_CFLAGS\u00b6\nDefault:\n$(BASECFLAGS) $(OPT) $(CONFIGURE_CFLAGS) $(CFLAGS) $(EXTRA_CFLAGS)\n.\n- PY_CFLAGS_NODIST\u00b6\nDefault:\n$(CONFIGURE_CFLAGS_NODIST) $(CFLAGS_NODIST) -I$(srcdir)/Include/internal\n.Added in version 3.5.\n- PY_STDMODULE_CFLAGS\u00b6\nC flags used for building the interpreter object files.\nDefault:\n$(PY_CFLAGS) $(PY_CFLAGS_NODIST) $(PY_CPPFLAGS) $(CFLAGSFORSHARED)\n.Added in version 3.7.\n- PY_CORE_CFLAGS\u00b6\nDefault:\n$(PY_STDMODULE_CFLAGS) -DPy_BUILD_CORE\n.Added in version 3.2.\n- PY_BUILTIN_MODULE_CFLAGS\u00b6\nCompiler flags to build a standard library extension module as a built-in module, like the\nposix\nmodule.Default:\n$(PY_STDMODULE_CFLAGS) -DPy_BUILD_CORE_BUILTIN\n.Added in version 3.8.\n- PURIFY\u00b6\nPurify command. Purify is a memory debugger program.\nDefault: empty string (not used).\n3.5.3. Linker flags\u00b6\n- LINKCC\u00b6\nLinker command used to build programs like\npython\nand_testembed\n.Default:\n$(PURIFY) $(CC)\n.\n- CONFIGURE_LDFLAGS\u00b6\nValue of\nLDFLAGS\nvariable passed to the./configure\nscript.Avoid assigning\nCFLAGS\n,LDFLAGS\n, etc. 
so users can use them on the command line to append to these values without stomping the pre-set values.
Added in version 3.2.

- LDFLAGS_NODIST¶
LDFLAGS_NODIST is used in the same manner as CFLAGS_NODIST. Use it when a linker flag should not be part of LDFLAGS once Python is installed (gh-65320).
In particular, LDFLAGS should not contain:
the compiler flag -L (for setting the search path for libraries). The -L flags are processed from left to right, and any flags in LDFLAGS would take precedence over user- and package-supplied -L flags.

- CONFIGURE_LDFLAGS_NODIST¶
Value of the LDFLAGS_NODIST variable passed to the ./configure script.
Added in version 3.8.

- LDFLAGS¶
Linker flags, e.g. -Llib_dir if you have libraries in a nonstandard directory lib_dir.
Both CPPFLAGS and LDFLAGS need to contain the shell's value to be able to build extension modules using the directories specified in the environment variables.

- LIBS¶
Linker flags to pass libraries to the linker when linking the Python executable.
Example: -lrt.

- LDSHARED¶
Command to build a shared library.
Default: @LDSHARED@ $(PY_LDFLAGS).

- BLDSHARED¶
Command to build the libpython shared library.
Default: @BLDSHARED@ $(PY_CORE_LDFLAGS).

- PY_LDFLAGS¶
Default: $(CONFIGURE_LDFLAGS) $(LDFLAGS).

- PY_LDFLAGS_NODIST¶
Default: $(CONFIGURE_LDFLAGS_NODIST) $(LDFLAGS_NODIST).
Added in version 3.8.

- PY_CORE_LDFLAGS¶
Linker flags used for building the interpreter object files.
Added in version 3.8.

Footnotes

[6] git clean -fdx is an even more extreme way to "clean" your checkout. It removes all files not known to Git. When bug hunting using git bisect, this is recommended between probes to guarantee a completely clean build. Use with care, as it will delete all files not checked into Git, including your new, uncommitted work.
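Many of the ./configure results and Makefile variables described above are recorded in the installed Python and can be queried through the sysconfig module. A sketch (variable availability differs by platform, and get_config_var returns None for names a build does not record):

```python
import sysconfig

# Build-time settings recorded by ./configure and the Makefile.
for name in ("CC", "CFLAGS", "LDFLAGS", "Py_DEBUG", "CONFIG_ARGS"):
    print(name, "=", sysconfig.get_config_var(name))
```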
6. Modules¶

If you quit from the Python interpreter and enter it again, the definitions you have made (functions and variables) are lost. Therefore, if you want to write a somewhat longer program, you are better off using a text editor to prepare the input for the interpreter and running it with that file as input instead. This is known as creating a script. As your program gets longer, you may want to split it into several files for easier maintenance. You may also want to use a handy function that you've written in several programs without copying its definition into each program.

To support this, Python has a way to put definitions in a file and use them in a script or in an interactive instance of the interpreter. Such a file is called a module; definitions from a module can be imported into other modules or into the main module (the collection of variables that you have access to in a script executed at the top level and in calculator mode).

A module is a file containing Python definitions and statements. The file name is the module name with the suffix .py appended. Within a module, the module's name (as a string) is available as the value of the global variable __name__.
For instance, use your favorite text editor to create a file called fibo.py in the current directory with the following contents:

# Fibonacci numbers module

def fib(n):
    """Write Fibonacci series up to n."""
    a, b = 0, 1
    while a < n:
        print(a, end=' ')
        a, b = b, a+b
    print()

def fib2(n):
    """Return Fibonacci series up to n."""
    result = []
    a, b = 0, 1
    while a < n:
        result.append(a)
        a, b = b, a+b
    return result

Now enter the Python interpreter and import this module with the following command:

>>> import fibo

This does not add the names of the functions defined in fibo directly to the current namespace (see Python Scopes and Namespaces for more details); it only adds the module name fibo there. Using the module name you can access the functions:

>>> fibo.fib(1000)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987
>>> fibo.fib2(100)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
>>> fibo.__name__
'fibo'

If you intend to use a function often you can assign it to a local name:

>>> fib = fibo.fib
>>> fib(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377

6.1. More on Modules¶

A module can contain executable statements as well as function definitions. These statements are intended to initialize the module. They are executed only the first time the module name is encountered in an import statement. [1] (They are also run if the file is executed as a script.)

Each module has its own private namespace, which is used as the global namespace by all functions defined in the module. Thus, the author of a module can use global variables in the module without worrying about accidental clashes with a user's global variables. On the other hand, if you know what you are doing you can touch a module's global variables with the same notation used to refer to its functions, modname.itemname.

Modules can import other modules.
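The workflow above can be exercised end to end from a single script. A minimal sketch that writes the fibo module to a temporary directory (so it does not assume any particular current directory), puts that directory on the module search path, and imports it:

```python
import importlib
import os
import sys
import tempfile
import textwrap

# The fib2 function from the fibo.py example above.
source = textwrap.dedent("""\
    def fib2(n):
        \"\"\"Return Fibonacci series up to n.\"\"\"
        result = []
        a, b = 0, 1
        while a < n:
            result.append(a)
            a, b = b, a + b
        return result
""")

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "fibo.py"), "w") as f:
        f.write(source)
    sys.path.insert(0, d)                    # make the directory searchable
    fibo = importlib.import_module("fibo")   # same effect as "import fibo"
    print(fibo.fib2(100))                    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
    sys.path.remove(d)
```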
It is customary but not required to place all\nimport\nstatements at the beginning of a module (or script, for that\nmatter). The imported module names, if placed at the top level of a module\n(outside any functions or classes), are added to the module\u2019s global namespace.\nThere is a variant of the import\nstatement that imports names from a\nmodule directly into the importing module\u2019s namespace. For example:\n>>> from fibo import fib, fib2\n>>> fib(500)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377\nThis does not introduce the module name from which the imports are taken in the\nlocal namespace (so in the example, fibo\nis not defined).\nThere is even a variant to import all names that a module defines:\n>>> from fibo import *\n>>> fib(500)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377\nThis imports all names except those beginning with an underscore (_\n).\nIn most cases Python programmers do not use this facility since it introduces\nan unknown set of names into the interpreter, possibly hiding some things\nyou have already defined.\nNote that in general the practice of importing *\nfrom a module or package is\nfrowned upon, since it often causes poorly readable code. However, it is okay to\nuse it to save typing in interactive sessions.\nIf the module name is followed by as\n, then the name\nfollowing as\nis bound directly to the imported module.\n>>> import fibo as fib\n>>> fib.fib(500)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377\nThis is effectively importing the module in the same way that import fibo\nwill do, with the only difference of it being available as fib\n.\nIt can also be used when utilising from\nwith similar effects:\n>>> from fibo import fib as fibonacci\n>>> fibonacci(500)\n0 1 1 2 3 5 8 13 21 34 55 89 144 233 377\nNote\nFor efficiency reasons, each module is only imported once per interpreter\nsession. 
Therefore, if you change your modules, you must restart the interpreter – or, if it's just one module you want to test interactively, use `importlib.reload()`, e.g. `import importlib; importlib.reload(modulename)`.

### 6.1.1. Executing modules as scripts

When you run a Python module with

```
python fibo.py <arguments>
```

the code in the module will be executed, just as if you imported it, but with the `__name__` set to `"__main__"`. That means that by adding this code at the end of your module:

```python
if __name__ == "__main__":
    import sys
    fib(int(sys.argv[1]))
```

you can make the file usable as a script as well as an importable module, because the code that parses the command line only runs if the module is executed as the "main" file:

```
$ python fibo.py 50
0 1 1 2 3 5 8 13 21 34
```

If the module is imported, the code is not run:

```
>>> import fibo
>>>
```

This is often used either to provide a convenient user interface to a module, or for testing purposes (running the module as a script executes a test suite).

### 6.1.2. The Module Search Path

When a module named `spam` is imported, the interpreter first searches for a built-in module with that name. These module names are listed in `sys.builtin_module_names`. If not found, it then searches for a file named `spam.py` in a list of directories given by the variable `sys.path`. `sys.path` is initialized from these locations:

- The directory containing the input script (or the current directory when no file is specified).
- `PYTHONPATH` (a list of directory names, with the same syntax as the shell variable `PATH`).
- The installation-dependent default (by convention including a `site-packages` directory, handled by the `site` module).

More details are at The initialization of the sys.path module search path.

**Note:** On file systems which support symlinks, the directory containing the input script is calculated after the symlink is followed.
In other words the directory containing the symlink is not added to the module search path.

After initialization, Python programs can modify `sys.path`. The directory containing the script being run is placed at the beginning of the search path, ahead of the standard library path. This means that scripts in that directory will be loaded instead of modules of the same name in the library directory. This is an error unless the replacement is intended. See section Standard Modules for more information.

### 6.1.3. "Compiled" Python files

To speed up loading modules, Python caches the compiled version of each module in the `__pycache__` directory under the name `module.version.pyc`, where the version encodes the format of the compiled file; it generally contains the Python version number. For example, in CPython release 3.3 the compiled version of spam.py would be cached as `__pycache__/spam.cpython-33.pyc`. This naming convention allows compiled modules from different releases and different versions of Python to coexist.

Python checks the modification date of the source against the compiled version to see if it's out of date and needs to be recompiled. This is a completely automatic process. Also, the compiled modules are platform-independent, so the same library can be shared among systems with different architectures.

Python does not check the cache in two circumstances. First, it always recompiles and does not store the result for the module that's loaded directly from the command line. Second, it does not check the cache if there is no source module. To support a non-source (compiled only) distribution, the compiled module must be in the source directory, and there must not be a source module.

Some tips for experts:

- You can use the `-O` or `-OO` switches on the Python command to reduce the size of a compiled module. The `-O` switch removes assert statements, the `-OO` switch removes both assert statements and `__doc__` strings. Since some programs may rely on having these available, you should only use this option if you know what you're doing. "Optimized" modules have an `opt-` tag and are usually smaller. Future releases may change the effects of optimization.
- A program doesn't run any faster when it is read from a `.pyc` file than when it is read from a `.py` file; the only thing that's faster about `.pyc` files is the speed with which they are loaded.
- The module `compileall` can create .pyc files for all modules in a directory.
- There is more detail on this process, including a flow chart of the decisions, in PEP 3147.

## 6.2. Standard Modules

Python comes with a library of standard modules, described in a separate document, the Python Library Reference ("Library Reference" hereafter). Some modules are built into the interpreter; these provide access to operations that are not part of the core of the language but are nevertheless built in, either for efficiency or to provide access to operating system primitives such as system calls. The set of such modules is a configuration option which also depends on the underlying platform. For example, the `winreg` module is only provided on Windows systems. One particular module deserves some attention: `sys`, which is built into every Python interpreter. The variables `sys.ps1` and `sys.ps2` define the strings used as primary and secondary prompts:

```
>>> import sys
>>> sys.ps1
'>>> '
>>> sys.ps2
'... '
>>> sys.ps1 = 'C> '
C> print('Yuck!')
Yuck!
C>
```

These two variables are only defined if the interpreter is in interactive mode.

The variable `sys.path` is a list of strings that determines the interpreter's search path for modules. It is initialized to a default path taken from the environment variable `PYTHONPATH`, or from a built-in default if `PYTHONPATH` is not set. You can modify it using standard list operations:

```
>>> import sys
>>> sys.path.append('/ufs/guido/lib/python')
```

## 6.3. The dir() Function

The built-in function `dir()` is used to find out which names a module defines. It returns a sorted list of strings:

```
>>> import fibo, sys
>>> dir(fibo)
['__name__', 'fib', 'fib2']
>>> dir(sys)
['__breakpointhook__', '__displayhook__', '__doc__', '__excepthook__',
 '__interactivehook__', '__loader__', '__name__', '__package__', '__spec__',
 '__stderr__', '__stdin__', '__stdout__', '__unraisablehook__',
 '_clear_type_cache', '_current_frames', '_debugmallocstats', '_framework',
 '_getframe', '_git', '_home', '_xoptions', 'abiflags', 'addaudithook',
 'api_version', 'argv', 'audit', 'base_exec_prefix', 'base_prefix',
 'breakpointhook', 'builtin_module_names', 'byteorder', 'call_tracing',
 'callstats', 'copyright', 'displayhook', 'dont_write_bytecode', 'exc_info',
 'excepthook', 'exec_prefix', 'executable', 'exit', 'flags', 'float_info',
 'float_repr_style', 'get_asyncgen_hooks', 'get_coroutine_origin_tracking_depth',
 'getallocatedblocks', 'getdefaultencoding', 'getdlopenflags',
 'getfilesystemencodeerrors', 'getfilesystemencoding', 'getprofile',
 'getrecursionlimit', 'getrefcount', 'getsizeof', 'getswitchinterval',
 'gettrace', 'hash_info', 'hexversion', 'implementation', 'int_info',
 'intern', 'is_finalizing', 'last_traceback', 'last_type', 'last_value',
 'maxsize', 'maxunicode', 'meta_path', 'modules', 'path', 'path_hooks',
 'path_importer_cache', 'platform', 'prefix', 'ps1', 'ps2', 'pycache_prefix',
 'set_asyncgen_hooks', 'set_coroutine_origin_tracking_depth', 'setdlopenflags',
 'setprofile', 'setrecursionlimit', 'setswitchinterval', 'settrace', 'stderr',
 'stdin', 'stdout', 'thread_info', 'unraisablehook', 'version', 'version_info',
 'warnoptions']
```

Without arguments, `dir()` lists the names you have defined currently:

```
>>> a = [1, 2, 3, 4, 5]
>>> import fibo
>>> fib = fibo.fib
>>> dir()
['__builtins__', '__name__', 'a', 'fib', 'fibo', 'sys']
```

Note that it lists all types of names: variables, modules, functions, etc.

`dir()` does not list the names of built-in functions and variables. If you want a list of those, they are defined in the standard module `builtins`:

```
>>> import builtins
>>> dir(builtins)
['ArithmeticError', 'AssertionError', 'AttributeError', 'BaseException',
 'BlockingIOError', 'BrokenPipeError', 'BufferError', 'BytesWarning',
 'ChildProcessError', 'BrokenPipeError', 'ConnectionError',
 'ConnectionRefusedError', 'ConnectionResetError', 'DeprecationWarning',
 'EOFError', 'Ellipsis', 'EnvironmentError', 'Exception', 'False',
 'FileExistsError', 'FileNotFoundError', 'FloatingPointError',
 'FutureWarning', 'GeneratorExit', 'IOError', 'ImportError',
 'ImportWarning', 'IndentationError', 'IndexError', 'InterruptedError',
 'IsADirectoryError', 'KeyError', 'KeyboardInterrupt', 'LookupError',
 'MemoryError', 'NameError', 'None', 'NotADirectoryError', 'NotImplemented',
 'NotImplementedError', 'OSError', 'OverflowError',
 'PendingDeprecationWarning', 'PermissionError', 'ProcessLookupError',
 'ReferenceError', 'ResourceWarning', 'RuntimeError', 'RuntimeWarning',
 'StopIteration', 'SyntaxError', 'SyntaxWarning', 'SystemError',
 'SystemExit', 'TabError', 'TimeoutError', 'True', 'TypeError',
 'UnboundLocalError', 'UnicodeDecodeError', 'UnicodeEncodeError',
 'UnicodeError', 'UnicodeTranslateError', 'UnicodeWarning', 'UserWarning',
 'ValueError', 'Warning', 'ZeroDivisionError', '_', '__build_class__',
 '__debug__', '__doc__', '__import__', '__name__', '__package__', 'abs',
 'all', 'any', 'ascii', 'bin', 'bool', 'bytearray', 'bytes', 'callable',
 'chr', 'classmethod', 'compile', 'complex', 'copyright', 'credits',
 'delattr', 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'exec', 'exit',
 'filter', 'float', 'format', 'frozenset', 'getattr', 'globals', 'hasattr',
 'hash', 'help', 'hex', 'id', 'input', 'int', 'isinstance', 'issubclass',
 'iter', 'len', 'license', 'list', 'locals', 'map', 'max', 'memoryview',
 'min', 'next', 'object', 'oct', 'open', 'ord', 'pow', 'print', 'property',
 'quit', 'range', 'repr', 'reversed', 'round', 'set', 'setattr', 'slice',
 'sorted', 'staticmethod', 'str', 'sum', 'super', 'tuple', 'type', 'vars',
 'zip']
```

## 6.4. Packages

Packages are a way of structuring Python's module namespace by using "dotted module names". For example, the module name `A.B` designates a submodule named `B` in a package named `A`. Just like the use of modules saves the authors of different modules from having to worry about each other's global variable names, the use of dotted module names saves the authors of multi-module packages like NumPy or Pillow from having to worry about each other's module names.

Suppose you want to design a collection of modules (a "package") for the uniform handling of sound files and sound data. There are many different sound file formats (usually recognized by their extension, for example: `.wav`, `.aiff`, `.au`), so you may need to create and maintain a growing collection of modules for the conversion between the various file formats. There are also many different operations you might want to perform on sound data (such as mixing, adding echo, applying an equalizer function, creating an artificial stereo effect), so in addition you will be writing a never-ending stream of modules to perform these operations.
Here\u2019s a possible structure for\nyour package (expressed in terms of a hierarchical filesystem):\nsound/ Top-level package\n__init__.py Initialize the sound package\nformats/ Subpackage for file format conversions\n__init__.py\nwavread.py\nwavwrite.py\naiffread.py\naiffwrite.py\nauread.py\nauwrite.py\n...\neffects/ Subpackage for sound effects\n__init__.py\necho.py\nsurround.py\nreverse.py\n...\nfilters/ Subpackage for filters\n__init__.py\nequalizer.py\nvocoder.py\nkaraoke.py\n...\nWhen importing the package, Python searches through the directories on\nsys.path\nlooking for the package subdirectory.\nThe __init__.py\nfiles are required to make Python treat directories\ncontaining the file as packages (unless using a namespace package, a\nrelatively advanced feature). This prevents directories with a common name,\nsuch as string\n, from unintentionally hiding valid modules that occur later\non the module search path. In the simplest case, __init__.py\ncan just be\nan empty file, but it can also execute initialization code for the package or\nset the __all__\nvariable, described later.\nUsers of the package can import individual modules from the package, for example:\nimport sound.effects.echo\nThis loads the submodule sound.effects.echo\n. 
It must be referenced with its full name.

```python
sound.effects.echo.echofilter(input, output, delay=0.7, atten=4)
```

An alternative way of importing the submodule is:

```python
from sound.effects import echo
```

This also loads the submodule `echo`, and makes it available without its package prefix, so it can be used as follows:

```python
echo.echofilter(input, output, delay=0.7, atten=4)
```

Yet another variation is to import the desired function or variable directly:

```python
from sound.effects.echo import echofilter
```

Again, this loads the submodule `echo`, but this makes its function `echofilter()` directly available:

```python
echofilter(input, output, delay=0.7, atten=4)
```

Note that when using `from package import item`, the item can be either a submodule (or subpackage) of the package, or some other name defined in the package, like a function, class or variable. The `import` statement first tests whether the item is defined in the package; if not, it assumes it is a module and attempts to load it. If it fails to find it, an `ImportError` exception is raised.

Contrarily, when using syntax like `import item.subitem.subsubitem`, each item except for the last must be a package; the last item can be a module or a package but can't be a class or function or variable defined in the previous item.

### 6.4.1. Importing * From a Package

Now what happens when the user writes `from sound.effects import *`? Ideally, one would hope that this somehow goes out to the filesystem, finds which submodules are present in the package, and imports them all. This could take a long time and importing sub-modules might have unwanted side-effects that should only happen when the sub-module is explicitly imported.

The only solution is for the package author to provide an explicit index of the package.
The `import` statement uses the following convention: if a package's `__init__.py` code defines a list named `__all__`, it is taken to be the list of module names that should be imported when `from package import *` is encountered. It is up to the package author to keep this list up-to-date when a new version of the package is released. Package authors may also decide not to support it, if they don't see a use for importing * from their package. For example, the file `sound/effects/__init__.py` could contain the following code:

```python
__all__ = ["echo", "surround", "reverse"]
```

This would mean that `from sound.effects import *` would import the three named submodules of the `sound.effects` package.

Be aware that submodules might become shadowed by locally defined names. For example, if you added a `reverse` function to the `sound/effects/__init__.py` file, the `from sound.effects import *` would only import the two submodules `echo` and `surround`, but not the `reverse` submodule, because it is shadowed by the locally defined `reverse` function:

```python
__all__ = [
    "echo",      # refers to the 'echo.py' file
    "surround",  # refers to the 'surround.py' file
    "reverse",   # !!! refers to the 'reverse' function now !!!
]

def reverse(msg: str):  # <-- this name shadows the 'reverse.py' submodule
    return msg[::-1]    #     in the case of a 'from sound.effects import *'
```

If `__all__` is not defined, the statement `from sound.effects import *` does not import all submodules from the package `sound.effects` into the current namespace; it only ensures that the package `sound.effects` has been imported (possibly running any initialization code in `__init__.py`) and then imports whatever names are defined in the package. This includes any names defined (and submodules explicitly loaded) by `__init__.py`. It also includes any submodules of the package that were explicitly loaded by previous `import` statements.
Consider this code:

```python
import sound.effects.echo
import sound.effects.surround
from sound.effects import *
```

In this example, the `echo` and `surround` modules are imported in the current namespace because they are defined in the `sound.effects` package when the `from...import` statement is executed. (This also works when `__all__` is defined.)

Although certain modules are designed to export only names that follow certain patterns when you use `import *`, it is still considered bad practice in production code.

Remember, there is nothing wrong with using `from package import specific_submodule`! In fact, this is the recommended notation unless the importing module needs to use submodules with the same name from different packages.

### 6.4.2. Intra-package References

When packages are structured into subpackages (as with the `sound` package in the example), you can use absolute imports to refer to submodules of sibling packages. For example, if the module `sound.filters.vocoder` needs to use the `echo` module in the `sound.effects` package, it can use `from sound.effects import echo`.

You can also write relative imports, with the `from module import name` form of import statement. These imports use leading dots to indicate the current and parent packages involved in the relative import. From the `surround` module for example, you might use:

```python
from . import echo
from .. import formats
from ..filters import equalizer
```

Note that relative imports are based on the name of the current module's package. Since the main module does not have a package, modules intended for use as the main module of a Python application must always use absolute imports.

### 6.4.3. Packages in Multiple Directories

Packages support one more special attribute, `__path__`. This is initialized to be a sequence of strings containing the name of the directory holding the package's `__init__.py` before the code in that file is executed.
This variable can be modified; doing so affects future searches for modules and subpackages contained in the package.

While this feature is not often needed, it can be used to extend the set of modules found in a package.

### Footnotes
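As a quick illustration of `__path__`, it can simply be inspected on any package; a plain (non-package) module has no such attribute. A sketch using the standard library's `json`, which happens to be a package:

```python
import json   # 'json' is a package: a directory containing an __init__.py
import math   # 'math' is a plain module, not a package

# __path__ lists the directories searched for json's submodules
# (e.g. json.decoder); modifying this list changes future searches.
print(list(json.__path__))

assert hasattr(json, "__path__")
assert not hasattr(math, "__path__")
```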
# 5. Data Structures

This chapter describes some things you've learned about already in more detail, and adds some new things as well.

## 5.1. More on Lists

The list data type has some more methods. Here are all of the methods of list objects:

- `list.append(x)`: Add an item to the end of the list. Similar to `a[len(a):] = [x]`.
- `list.extend(iterable)`: Extend the list by appending all the items from the iterable. Similar to `a[len(a):] = iterable`.
- `list.insert(i, x)`: Insert an item at a given position. The first argument is the index of the element before which to insert, so `a.insert(0, x)` inserts at the front of the list, and `a.insert(len(a), x)` is equivalent to `a.append(x)`.
- `list.remove(x)`: Remove the first item from the list whose value is equal to x. It raises a `ValueError` if there is no such item.
- `list.pop([i])`: Remove the item at the given position in the list, and return it. If no index is specified, `a.pop()` removes and returns the last item in the list. It raises an `IndexError` if the list is empty or the index is outside the list range.
- `list.clear()`: Remove all items from the list. Similar to `del a[:]`.
- `list.index(x[, start[, end]])`: Return zero-based index of the first occurrence of x in the list. Raises a `ValueError` if there is no such item. The optional arguments start and end are interpreted as in the slice notation and are used to limit the search to a particular subsequence of the list. The returned index is computed relative to the beginning of the full sequence rather than the start argument.
- `list.count(x)`: Return the number of times x appears in the list.
- `list.sort(*, key=None, reverse=False)`: Sort the items of the list in place (the arguments can be used for sort customization, see `sorted()` for their explanation).
- `list.reverse()`: Reverse the elements of the list in place.
- `list.copy()`: Return a shallow copy of the list. Similar to `a[:]`.

An example that uses most of the list methods:

```
>>> fruits = ['orange', 'apple', 'pear', 'banana', 'kiwi', 'apple', 'banana']
>>> fruits.count('apple')
2
>>> fruits.count('tangerine')
0
>>> fruits.index('banana')
3
>>> fruits.index('banana', 4)  # Find next banana starting at position 4
6
>>> fruits.reverse()
>>> fruits
['banana', 'apple', 'kiwi', 'banana', 'pear', 'apple', 'orange']
>>> fruits.append('grape')
>>> fruits
['banana', 'apple', 'kiwi', 'banana', 'pear', 'apple', 'orange', 'grape']
>>> fruits.sort()
>>> fruits
['apple', 'apple', 'banana', 'banana', 'grape', 'kiwi', 'orange', 'pear']
>>> fruits.pop()
'pear'
```

You might have noticed that methods like `insert`, `remove` or `sort` that only modify the list have no return value printed – they return the default `None`. [1] This is a design principle for all mutable data structures in Python.

Another thing you might notice is that not all data can be sorted or compared. For instance, `[None, 'hello', 10]` doesn't sort because integers can't be compared to strings and `None` can't be compared to other types. Also, there are some types that don't have a defined ordering relation. For example, `3+4j < 5+7j` isn't a valid comparison.

### 5.1.1. Using Lists as Stacks

The list methods make it very easy to use a list as a stack, where the last element added is the first element retrieved ("last-in, first-out"). To add an item to the top of the stack, use `append()`.
To retrieve an item from the top of the stack, use `pop()` without an explicit index. For example:

```
>>> stack = [3, 4, 5]
>>> stack.append(6)
>>> stack.append(7)
>>> stack
[3, 4, 5, 6, 7]
>>> stack.pop()
7
>>> stack
[3, 4, 5, 6]
>>> stack.pop()
6
>>> stack.pop()
5
>>> stack
[3, 4]
```

### 5.1.2. Using Lists as Queues

It is also possible to use a list as a queue, where the first element added is the first element retrieved ("first-in, first-out"); however, lists are not efficient for this purpose. While appends and pops from the end of list are fast, doing inserts or pops from the beginning of a list is slow (because all of the other elements have to be shifted by one).

To implement a queue, use `collections.deque` which was designed to have fast appends and pops from both ends. For example:

```
>>> from collections import deque
>>> queue = deque(["Eric", "John", "Michael"])
>>> queue.append("Terry")           # Terry arrives
>>> queue.append("Graham")          # Graham arrives
>>> queue.popleft()                 # The first to arrive now leaves
'Eric'
>>> queue.popleft()                 # The second to arrive now leaves
'John'
>>> queue                           # Remaining queue in order of arrival
deque(['Michael', 'Terry', 'Graham'])
```

### 5.1.3. List Comprehensions

List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.

For example, assume we want to create a list of squares, like:

```
>>> squares = []
>>> for x in range(10):
...     squares.append(x**2)
...
>>> squares
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Note that this creates (or overwrites) a variable named `x` that still exists after the loop completes.
We can calculate the list of squares without any side effects using:

```python
squares = list(map(lambda x: x**2, range(10)))
```

or, equivalently:

```python
squares = [x**2 for x in range(10)]
```

which is more concise and readable.

A list comprehension consists of brackets containing an expression followed by a `for` clause, then zero or more `for` or `if` clauses. The result will be a new list resulting from evaluating the expression in the context of the `for` and `if` clauses which follow it. For example, this listcomp combines the elements of two lists if they are not equal:

```
>>> [(x, y) for x in [1,2,3] for y in [3,1,4] if x != y]
[(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]
```

and it's equivalent to:

```
>>> combs = []
>>> for x in [1,2,3]:
...     for y in [3,1,4]:
...         if x != y:
...             combs.append((x, y))
...
>>> combs
[(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]
```

Note how the order of the `for` and `if` statements is the same in both these snippets.

If the expression is a tuple (e.g. the `(x, y)` in the previous example), it must be parenthesized.

```
>>> vec = [-4, -2, 0, 2, 4]
>>> # create a new list with the values doubled
>>> [x*2 for x in vec]
[-8, -4, 0, 4, 8]
>>> # filter the list to exclude negative numbers
>>> [x for x in vec if x >= 0]
[0, 2, 4]
>>> # apply a function to all the elements
>>> [abs(x) for x in vec]
[4, 2, 0, 2, 4]
>>> # call a method on each element
>>> freshfruit = ['  banana', '  loganberry ', 'passion fruit  ']
>>> [weapon.strip() for weapon in freshfruit]
['banana', 'loganberry', 'passion fruit']
>>> # create a list of 2-tuples like (number, square)
>>> [(x, x**2) for x in range(6)]
[(0, 0), (1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]
>>> # the tuple must be parenthesized, otherwise an error is raised
>>> [x, x**2 for x in range(6)]
  File "<stdin>", line 1
    [x, x**2 for x in range(6)]
     ^^^^^^^
SyntaxError: did you forget parentheses around the comprehension target?
>>> # flatten a list using a listcomp with two 'for'
>>> vec = [[1,2,3], [4,5,6], [7,8,9]]
>>> [num for elem in vec for num in elem]
[1, 2, 3, 4, 5, 6, 7, 8, 9]
```

List comprehensions can contain complex expressions and nested functions:

```
>>> from math import pi
>>> [str(round(pi, i)) for i in range(1, 6)]
['3.1', '3.14', '3.142', '3.1416', '3.14159']
```

### 5.1.4. Nested List Comprehensions

The initial expression in a list comprehension can be any arbitrary expression, including another list comprehension.

Consider the following example of a 3x4 matrix implemented as a list of 3 lists of length 4:

```
>>> matrix = [
...     [1, 2, 3, 4],
...     [5, 6, 7, 8],
...     [9, 10, 11, 12],
... ]
```

The following list comprehension will transpose rows and columns:

```
>>> [[row[i] for row in matrix] for i in range(4)]
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
```

As we saw in the previous section, the inner list comprehension is evaluated in the context of the `for` that follows it, so this example is equivalent to:

```
>>> transposed = []
>>> for i in range(4):
...     transposed.append([row[i] for row in matrix])
...
>>> transposed
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
```

which, in turn, is the same as:

```
>>> transposed = []
>>> for i in range(4):
...     # the following 3 lines implement the nested listcomp
...     transposed_row = []
...     for row in matrix:
...         transposed_row.append(row[i])
...     transposed.append(transposed_row)
...
>>> transposed
[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]
```

In the real world, you should prefer built-in functions to complex flow statements. The `zip()` function would do a great job for this use case:

```
>>> list(zip(*matrix))
[(1, 5, 9), (2, 6, 10), (3, 7, 11), (4, 8, 12)]
```

See Unpacking Argument Lists for details on the asterisk in this line.

## 5.2. The del statement

There is a way to remove an item from a list given its index instead of its value: the `del` statement. This differs from the `pop()` method which returns a value. The `del` statement can also be used to remove slices from a list or clear the entire list (which we did earlier by assignment of an empty list to the slice). For example:

```
>>> a = [-1, 1, 66.25, 333, 333, 1234.5]
>>> del a[0]
>>> a
[1, 66.25, 333, 333, 1234.5]
>>> del a[2:4]
>>> a
[1, 66.25, 1234.5]
>>> del a[:]
>>> a
[]
```

`del` can also be used to delete entire variables:

```
>>> del a
```

Referencing the name `a` hereafter is an error (at least until another value is assigned to it). We'll find other uses for `del` later.

## 5.3. Tuples and Sequences

We saw that lists and strings have many common properties, such as indexing and slicing operations.
They are two examples of sequence data types (see Sequence Types — list, tuple, range). Since Python is an evolving language, other sequence data types may be added. There is also another standard sequence data type: the tuple.

A tuple consists of a number of values separated by commas, for instance:

```
>>> t = 12345, 54321, 'hello!'
>>> t[0]
12345
>>> t
(12345, 54321, 'hello!')
>>> # Tuples may be nested:
>>> u = t, (1, 2, 3, 4, 5)
>>> u
((12345, 54321, 'hello!'), (1, 2, 3, 4, 5))
>>> # Tuples are immutable:
>>> t[0] = 88888
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
>>> # but they can contain mutable objects:
>>> v = ([1, 2, 3], [3, 2, 1])
>>> v
([1, 2, 3], [3, 2, 1])
```

As you see, on output tuples are always enclosed in parentheses, so that nested tuples are interpreted correctly; they may be input with or without surrounding parentheses, although often parentheses are necessary anyway (if the tuple is part of a larger expression). It is not possible to assign to the individual items of a tuple, however it is possible to create tuples which contain mutable objects, such as lists.

Though tuples may seem similar to lists, they are often used in different situations and for different purposes. Tuples are immutable, and usually contain a heterogeneous sequence of elements that are accessed via unpacking (see later in this section) or indexing (or even by attribute in the case of `namedtuples`). Lists are mutable, and their elements are usually homogeneous and are accessed by iterating over the list.

A special problem is the construction of tuples containing 0 or 1 items: the syntax has some extra quirks to accommodate these. Empty tuples are constructed by an empty pair of parentheses; a tuple with one item is constructed by following a value with a comma (it is not sufficient to enclose a single value in parentheses). Ugly, but effective.
For example:

```
>>> empty = ()
>>> singleton = 'hello',    # <-- note trailing comma
>>> len(empty)
0
>>> len(singleton)
1
>>> singleton
('hello',)
```

The statement `t = 12345, 54321, 'hello!'` is an example of tuple packing: the values `12345`, `54321` and `'hello!'` are packed together in a tuple. The reverse operation is also possible:

```
>>> x, y, z = t
```

This is called, appropriately enough, sequence unpacking and works for any sequence on the right-hand side. Sequence unpacking requires that there are as many variables on the left side of the equals sign as there are elements in the sequence. Note that multiple assignment is really just a combination of tuple packing and sequence unpacking.

## 5.4. Sets

Python also includes a data type for sets. A set is an unordered collection with no duplicate elements. Basic uses include membership testing and eliminating duplicate entries. Set objects also support mathematical operations like union, intersection, difference, and symmetric difference.

Curly braces or the `set()` function can be used to create sets.
Note: to create an empty set you have to use set(), not {}; the latter creates an empty dictionary, a data structure that we discuss in the next section.

Here is a brief demonstration:

>>> basket = {'apple', 'orange', 'apple', 'pear', 'orange', 'banana'}
>>> print(basket)                      # show that duplicates have been removed
{'orange', 'banana', 'pear', 'apple'}
>>> 'orange' in basket                 # fast membership testing
True
>>> 'crabgrass' in basket
False

>>> # Demonstrate set operations on unique letters from two words
>>>
>>> a = set('abracadabra')
>>> b = set('alacazam')
>>> a                                  # unique letters in a
{'a', 'r', 'b', 'c', 'd'}
>>> a - b                              # letters in a but not in b
{'r', 'd', 'b'}
>>> a | b                              # letters in a or b or both
{'a', 'c', 'r', 'd', 'b', 'm', 'z', 'l'}
>>> a & b                              # letters in both a and b
{'a', 'c'}
>>> a ^ b                              # letters in a or b but not both
{'r', 'd', 'b', 'm', 'z', 'l'}

Similarly to list comprehensions, set comprehensions are also supported:

>>> a = {x for x in 'abracadabra' if x not in 'abc'}
>>> a
{'r', 'd'}

5.5. Dictionaries¶

Another useful data type built into Python is the dictionary (see Mapping Types — dict). Dictionaries are sometimes found in other languages as "associative memories" or "associative arrays". Unlike sequences, which are indexed by a range of numbers, dictionaries are indexed by keys, which can be any immutable type; strings and numbers can always be keys. Tuples can be used as keys if they contain only strings, numbers, or tuples; if a tuple contains any mutable object either directly or indirectly, it cannot be used as a key. You can't use lists as keys, since lists can be modified in place using index assignments, slice assignments, or methods like append() and extend().

It is best to think of a dictionary as a set of key: value pairs, with the requirement that the keys are unique (within one dictionary). A pair of braces creates an empty dictionary: {}.
Placing a comma-separated list of key:value pairs within the braces adds initial key:value pairs to the dictionary; this is also the way dictionaries are written on output.

The main operations on a dictionary are storing a value with some key and extracting the value given the key. It is also possible to delete a key:value pair with del. If you store using a key that is already in use, the old value associated with that key is forgotten.

Extracting a value for a non-existent key by subscripting (d[key]) raises a KeyError. To avoid getting this error when trying to access a possibly non-existent key, use the get() method instead, which returns None (or a specified default value) if the key is not in the dictionary.

Performing list(d) on a dictionary returns a list of all the keys used in the dictionary, in insertion order (if you want it sorted, just use sorted(d) instead). To check whether a single key is in the dictionary, use the in keyword.

Here is a small example using a dictionary:

>>> tel = {'jack': 4098, 'sape': 4139}
>>> tel['guido'] = 4127
>>> tel
{'jack': 4098, 'sape': 4139, 'guido': 4127}
>>> tel['jack']
4098
>>> tel['irv']
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: 'irv'
>>> print(tel.get('irv'))
None
>>> del tel['sape']
>>> tel['irv'] = 4127
>>> tel
{'jack': 4098, 'guido': 4127, 'irv': 4127}
>>> list(tel)
['jack', 'guido', 'irv']
>>> sorted(tel)
['guido', 'irv', 'jack']
>>> 'guido' in tel
True
>>> 'jack' not in tel
False

The dict() constructor builds dictionaries directly from sequences of key-value pairs:

>>> dict([('sape', 4139), ('guido', 4127), ('jack', 4098)])
{'sape': 4139, 'guido': 4127, 'jack': 4098}

In addition, dict comprehensions can be used to create dictionaries from arbitrary key and value expressions:

>>> {x: x**2 for x in (2, 4, 6)}
{2: 4, 4: 16, 6: 36}

When the keys are simple strings, it is sometimes easier to specify pairs using keyword arguments:

>>> dict(sape=4139, guido=4127, jack=4098)
{'sape': 4139, 'guido': 4127, 'jack': 4098}

5.6. Looping Techniques¶

When looping through dictionaries, the key and corresponding value can be retrieved at the same time using the items() method.

>>> knights = {'gallahad': 'the pure', 'robin': 'the brave'}
>>> for k, v in knights.items():
...     print(k, v)
...
gallahad the pure
robin the brave

When looping through a sequence, the position index and corresponding value can be retrieved at the same time using the enumerate() function.

>>> for i, v in enumerate(['tic', 'tac', 'toe']):
...     print(i, v)
...
0 tic
1 tac
2 toe

To loop over two or more sequences at the same time, the entries can be paired with the zip() function.

>>> questions = ['name', 'quest', 'favorite color']
>>> answers = ['lancelot', 'the holy grail', 'blue']
>>> for q, a in zip(questions, answers):
...     print('What is your {0}? It is {1}.'.format(q, a))
...
What is your name? It is lancelot.
What is your quest? It is the holy grail.
What is your favorite color? It is blue.

To loop over a sequence in reverse, first specify the sequence in a forward direction and then call the reversed() function.

>>> for i in reversed(range(1, 10, 2)):
...     print(i)
...
9
7
5
3
1

To loop over a sequence in sorted order, use the sorted() function which returns a new sorted list while leaving the source unaltered.

>>> basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
>>> for i in sorted(basket):
...     print(i)
...
apple
apple
banana
orange
orange
pear

Using set() on a sequence eliminates duplicate elements. The use of sorted() in combination with set() over a sequence is an idiomatic way to loop over unique elements of the sequence in sorted order.

>>> basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
>>> for f in sorted(set(basket)):
...     print(f)
...
apple
banana
orange
pear

It is sometimes tempting to change a list while you are looping over it; however, it is often simpler and safer to create a new list instead.

>>> import math
>>> raw_data = [56.2, float('NaN'), 51.7, 55.3, 52.5, float('NaN'), 47.8]
>>> filtered_data = []
>>> for value in raw_data:
...     if not math.isnan(value):
...         filtered_data.append(value)
...
>>> filtered_data
[56.2, 51.7, 55.3, 52.5, 47.8]

5.7. More on Conditions¶

The conditions used in while and if statements can contain any operators, not just comparisons.

The comparison operators in and not in are membership tests that determine whether a value is in (or not in) a container. The operators is and is not compare whether two objects are really the same object. All comparison operators have the same priority, which is lower than that of all numerical operators.

Comparisons can be chained. For example, a < b == c tests whether a is less than b and moreover b equals c.

Comparisons may be combined using the Boolean operators and and or, and the outcome of a comparison (or of any other Boolean expression) may be negated with not. These have lower priorities than comparison operators; between them, not has the highest priority and or the lowest, so that A and not B or C is equivalent to (A and (not B)) or C. As always, parentheses can be used to express the desired composition.

The Boolean operators and and or are so-called short-circuit operators: their arguments are evaluated from left to right, and evaluation stops as soon as the outcome is determined. For example, if A and C are true but B is false, A and B and C does not evaluate the expression C. When used as a general value and not as a Boolean, the return value of a short-circuit operator is the last evaluated argument.

It is possible to assign the result of a comparison or other Boolean expression to a variable.
For example,

>>> string1, string2, string3 = '', 'Trondheim', 'Hammer Dance'
>>> non_null = string1 or string2 or string3
>>> non_null
'Trondheim'

Note that in Python, unlike C, assignment inside expressions must be done explicitly with the walrus operator :=. This avoids a common class of problems encountered in C programs: typing = in an expression when == was intended.

5.8. Comparing Sequences and Other Types¶

Sequence objects typically may be compared to other objects with the same sequence type. The comparison uses lexicographical ordering: first the first two items are compared, and if they differ this determines the outcome of the comparison; if they are equal, the next two items are compared, and so on, until either sequence is exhausted. If two items to be compared are themselves sequences of the same type, the lexicographical comparison is carried out recursively. If all items of two sequences compare equal, the sequences are considered equal. If one sequence is an initial sub-sequence of the other, the shorter sequence is the smaller (lesser) one. Lexicographical ordering for strings uses the Unicode code point number to order individual characters. Some examples of comparisons between sequences of the same type:

(1, 2, 3) < (1, 2, 4)
[1, 2, 3] < [1, 2, 4]
'ABC' < 'C' < 'Pascal' < 'Python'
(1, 2, 3, 4) < (1, 2, 4)
(1, 2) < (1, 2, -1)
(1, 2, 3) == (1.0, 2.0, 3.0)
(1, 2, ('aa', 'ab')) < (1, 2, ('abc', 'a'), 4)

Note that comparing objects of different types with < or > is legal provided that the objects have appropriate comparison methods. For example, mixed numeric types are compared according to their numeric value, so 0 equals 0.0, etc.
Otherwise, rather than providing an arbitrary ordering, the interpreter will raise a TypeError exception.
{"url": "https://docs.python.org/3/c-api/typeobj.html", "title": "Type Object Structures", "content": "Type Object Structures\u00b6\nPerhaps one of the most important structures of the Python object system is the\nstructure that defines a new type: the PyTypeObject\nstructure. Type\nobjects can be handled using any of the PyObject_*\nor\nPyType_*\nfunctions, but do not offer much that\u2019s interesting to most\nPython applications. These objects are fundamental to how objects behave, so\nthey are very important to the interpreter itself and to any extension module\nthat implements new types.\nType objects are fairly large compared to most of the standard types. The reason for the size is that each type object stores a large number of values, mostly C function pointers, each of which implements a small part of the type\u2019s functionality. The fields of the type object are examined in detail in this section. The fields will be described in the order in which they occur in the structure.\nIn addition to the following quick reference, the Examples\nsection provides at-a-glance insight into the meaning and use of\nPyTypeObject\n.\nQuick Reference\u00b6\n\u201ctp slots\u201d\u00b6\nPyTypeObject Slot [1] |\nspecial methods/attrs |\nInfo [2] |\n||||\n|---|---|---|---|---|---|---|\nO |\nT |\nD |\nI |\n|||\n |\nconst char * |\n__name__ |\nX |\nX |\n||\nX |\nX |\nX |\n||||\nX |\nX |\n|||||\nX |\nX |\nX |\n||||\nX |\nX |\n|||||\n__getattribute__, __getattr__ |\nG |\n|||||\n__setattr__, __delattr__ |\nG |\n|||||\n% |\n||||||\n__repr__ |\nX |\nX |\nX |\n|||\n% |\n||||||\n% |\n||||||\n% |\n||||||\n__hash__ |\nX |\nG |\n||||\n__call__ |\nX |\nX |\n||||\n__str__ |\nX |\nX |\n||||\n__getattribute__, __getattr__ |\nX |\nX |\nG |\n|||\n__setattr__, __delattr__ |\nX |\nX |\nG |\n|||\n% |\n||||||\nunsigned long |\nX |\nX |\n? |\n|||\nconst char * |\n__doc__ |\nX |\nX |\n|||\nX |\nG |\n|||||\nX |\nG |\n|||||\n__lt__, __le__, __eq__, __ne__, __gt__, __ge__ |\nX |\nG |\n||||\nX |\n? 
|\n|||||\n__iter__ |\nX |\n|||||\n__next__ |\nX |\n|||||\n|\nX |\nX |\n||||\n|\nX |\n|||||\n|\nX |\nX |\n||||\n__base__ |\nX |\n|||||\n|\n__dict__ |\n? |\n||||\n__get__ |\nX |\n|||||\n__set__, __delete__ |\nX |\n|||||\nX |\n? |\n|||||\n__init__ |\nX |\nX |\nX |\n|||\nX |\n? |\n? |\n||||\n__new__ |\nX |\nX |\n? |\n? |\n||\nX |\nX |\n? |\n? |\n|||\nX |\nX |\n|||||\n< |\n|\n__bases__ |\n~ |\n|||\n< |\n|\n__mro__ |\n~ |\n|||\n[ |\n|\n|||||\nvoid * |\n__subclasses__ |\n|||||\n|\n||||||\n( |\n||||||\nunsigned int |\n||||||\n__del__ |\nX |\n|||||\nunsigned char |\nsub-slots\u00b6\nSlot |\nspecial methods |\n|\n|---|---|---|\n__await__ |\n||\n__aiter__ |\n||\n__anext__ |\n||\n__add__ __radd__ |\n||\n__iadd__ |\n||\n__sub__ __rsub__ |\n||\n__isub__ |\n||\n__mul__ __rmul__ |\n||\n__imul__ |\n||\n__mod__ __rmod__ |\n||\n__imod__ |\n||\n__divmod__ __rdivmod__ |\n||\n__pow__ __rpow__ |\n||\n__ipow__ |\n||\n__neg__ |\n||\n__pos__ |\n||\n__abs__ |\n||\n__bool__ |\n||\n__invert__ |\n||\n__lshift__ __rlshift__ |\n||\n__ilshift__ |\n||\n__rshift__ __rrshift__ |\n||\n__irshift__ |\n||\n__and__ __rand__ |\n||\n__iand__ |\n||\n__xor__ __rxor__ |\n||\n__ixor__ |\n||\n__or__ __ror__ |\n||\n__ior__ |\n||\n__int__ |\n||\nvoid * |\n||\n__float__ |\n||\n__floordiv__ |\n||\n__ifloordiv__ |\n||\n__truediv__ |\n||\n__itruediv__ |\n||\n__index__ |\n||\n__matmul__ __rmatmul__ |\n||\n__imatmul__ |\n||\n__len__ |\n||\n__getitem__ |\n||\n__setitem__, __delitem__ |\n||\n__len__ |\n||\n__add__ |\n||\n__mul__ |\n||\n__getitem__ |\n||\n__setitem__ __delitem__ |\n||\n__contains__ |\n||\n__iadd__ |\n||\n__imul__ |\n||\n__buffer__ |\n||\n__release_buffer__ |\nslot typedefs\u00b6\ntypedef |\nParameter Types |\nReturn Type |\n|---|---|---|\n|\n||\n|\nvoid |\n|\nvoid * |\nvoid |\n|\nint |\n||\n|\n||\nint |\n||\n|\n|\n|\nPyObject *const char *\n|\n|\n|\nint |\n||\n|\n||\nint |\n||\n|\n||\nint |\n||\n|\nPy_hash_t |\n|\n|\n||\n|\n|\n|\n|\n|\n|\n|\n||\nint |\n||\nvoid |\n||\n|\nint |\n|\nPyObject * 
|\n|\n|\n|\n||\n|\n||\n|\n||\nint |\n||\nint |\n||\nint |\nSee Slot Type typedefs below for more detail.\nPyTypeObject Definition\u00b6\nThe structure definition for PyTypeObject\ncan be found in\nInclude/cpython/object.h\n. For convenience of reference, this repeats the\ndefinition found there:\ntypedef struct _typeobject {\nPyObject_VAR_HEAD\nconst char *tp_name; /* For printing, in format \".\" */\nPy_ssize_t tp_basicsize, tp_itemsize; /* For allocation */\n/* Methods to implement standard operations */\ndestructor tp_dealloc;\nPy_ssize_t tp_vectorcall_offset;\ngetattrfunc tp_getattr;\nsetattrfunc tp_setattr;\nPyAsyncMethods *tp_as_async; /* formerly known as tp_compare (Python 2)\nor tp_reserved (Python 3) */\nreprfunc tp_repr;\n/* Method suites for standard classes */\nPyNumberMethods *tp_as_number;\nPySequenceMethods *tp_as_sequence;\nPyMappingMethods *tp_as_mapping;\n/* More standard operations (here for binary compatibility) */\nhashfunc tp_hash;\nternaryfunc tp_call;\nreprfunc tp_str;\ngetattrofunc tp_getattro;\nsetattrofunc tp_setattro;\n/* Functions to access object as input/output buffer */\nPyBufferProcs *tp_as_buffer;\n/* Flags to define presence of optional/expanded features */\nunsigned long tp_flags;\nconst char *tp_doc; /* Documentation string */\n/* Assigned meaning in release 2.0 */\n/* call function for all accessible objects */\ntraverseproc tp_traverse;\n/* delete references to contained objects */\ninquiry tp_clear;\n/* Assigned meaning in release 2.1 */\n/* rich comparisons */\nrichcmpfunc tp_richcompare;\n/* weak reference enabler */\nPy_ssize_t tp_weaklistoffset;\n/* Iterators */\ngetiterfunc tp_iter;\niternextfunc tp_iternext;\n/* Attribute descriptor and subclassing stuff */\nPyMethodDef *tp_methods;\nPyMemberDef *tp_members;\nPyGetSetDef *tp_getset;\n// Strong reference on a heap type, borrowed reference on a static type\nPyTypeObject *tp_base;\nPyObject *tp_dict;\ndescrgetfunc tp_descr_get;\ndescrsetfunc tp_descr_set;\nPy_ssize_t 
tp_dictoffset;
    initproc tp_init;
    allocfunc tp_alloc;
    newfunc tp_new;
    freefunc tp_free; /* Low-level free-memory routine */
    inquiry tp_is_gc; /* For PyObject_IS_GC */
    PyObject *tp_bases;
    PyObject *tp_mro; /* method resolution order */
    PyObject *tp_cache; /* no longer used */
    void *tp_subclasses; /* for static builtin types this is an index */
    PyObject *tp_weaklist; /* not used for static builtin types */
    destructor tp_del;

    /* Type attribute cache version tag. Added in version 2.6.
     * If zero, the cache is invalid and must be initialized.
     */
    unsigned int tp_version_tag;

    destructor tp_finalize;
    vectorcallfunc tp_vectorcall;

    /* bitset of which type-watchers care about this type */
    unsigned char tp_watched;

    /* Number of tp_version_tag values used.
     * Set to _Py_ATTR_CACHE_UNUSED if the attribute cache is
     * disabled for this type (e.g. due to custom MRO entries).
     * Otherwise, limited to MAX_VERSIONS_PER_CLASS (defined elsewhere).
     */
    uint16_t tp_versions_used;
} PyTypeObject;

PyObject Slots¶

The type object structure extends the PyVarObject structure. The ob_size field is used for dynamic types (created by type_new(), usually called from a class statement). Note that PyType_Type (the metatype) initializes tp_itemsize, which means that its instances (i.e. type objects) must have the ob_size field.

- Py_ssize_t PyObject.ob_refcnt¶

The type object's reference count is initialized to 1 by the PyObject_HEAD_INIT macro. Note that for statically allocated type objects, the type's instances (objects whose ob_type points back to the type) do not count as references. But for dynamically allocated type objects, the instances do count as references.

Inheritance:
This field is not inherited by subtypes.

- PyTypeObject *PyObject.ob_type¶

This is the type's type, in other words its metatype. It is initialized by the argument to the PyObject_HEAD_INIT macro, and its value should normally be &PyType_Type.
However, for dynamically loadable extension modules that must be usable on Windows (at least), the compiler complains that this is not a valid initializer. Therefore, the convention is to pass NULL to the PyObject_HEAD_INIT macro and to initialize this field explicitly at the start of the module's initialization function, before doing anything else. This is typically done like this:

Foo_Type.ob_type = &PyType_Type;

This should be done before any instances of the type are created. PyType_Ready() checks if ob_type is NULL, and if so, initializes it to the ob_type field of the base class. PyType_Ready() will not change this field if it is non-zero.

Inheritance:
This field is inherited by subtypes.

PyVarObject Slots¶

- Py_ssize_t PyVarObject.ob_size¶

For statically allocated type objects, this should be initialized to zero. For dynamically allocated type objects, this field has a special internal meaning.

This field should be accessed using the Py_SIZE() macro.

Inheritance:
This field is not inherited by subtypes.

PyTypeObject Slots¶

Each slot has a section describing inheritance. If PyType_Ready() may set a value when the field is set to NULL then there will also be a "Default" section. (Note that many fields set on PyBaseObject_Type and PyType_Type effectively act as defaults.)

- const char *PyTypeObject.tp_name¶

Pointer to a NUL-terminated string containing the name of the type. For types that are accessible as module globals, the string should be the full module name, followed by a dot, followed by the type name; for built-in types, it should be just the type name. If the module is a submodule of a package, the full package name is part of the full module name.
For example, a type named T defined in module M in subpackage Q in package P should have the tp_name initializer "P.Q.M.T".

For dynamically allocated type objects, this should just be the type name, and the module name explicitly stored in the type dict as the value for key '__module__'.

For statically allocated type objects, the tp_name field should contain a dot. Everything before the last dot is made accessible as the __module__ attribute, and everything after the last dot is made accessible as the __name__ attribute.

If no dot is present, the entire tp_name field is made accessible as the __name__ attribute, and the __module__ attribute is undefined (unless explicitly set in the dictionary, as explained above). This means your type will be impossible to pickle. Additionally, it will not be listed in module documentations created with pydoc.

This field must not be NULL. It is the only required field in PyTypeObject() (other than potentially tp_itemsize).

Inheritance:
This field is not inherited by subtypes.

- Py_ssize_t PyTypeObject.tp_basicsize¶
- Py_ssize_t PyTypeObject.tp_itemsize¶

These fields allow calculating the size in bytes of instances of the type.

There are two kinds of types: types with fixed-length instances have a zero tp_itemsize field, types with variable-length instances have a non-zero tp_itemsize field. For a type with fixed-length instances, all instances have the same size, given in tp_basicsize. (Exceptions to this rule can be made using PyUnstable_Object_GC_NewWithExtraData().)

For a type with variable-length instances, the instances must have an ob_size field, and the instance size is tp_basicsize plus N times tp_itemsize, where N is the "length" of the object.

Functions like PyObject_NewVar() will take the value of N as an argument, and store it in the instance's ob_size field. Note that the ob_size field may later be used for other purposes.
For example, int instances use the bits of ob_size in an implementation-defined way; the underlying storage and its size should be accessed using PyLong_Export().

Note
The ob_size field should be accessed using the Py_SIZE() and Py_SET_SIZE() macros.

Also, the presence of an ob_size field in the instance layout doesn't mean that the instance structure is variable-length. For example, the list type has fixed-length instances, yet those instances have an ob_size field. (As with int, avoid reading lists' ob_size directly. Call PyList_Size() instead.)

The tp_basicsize includes the size needed for data of the type's tp_base, plus any extra data needed by each instance.

The correct way to set tp_basicsize is to use the sizeof operator on the struct used to declare the instance layout. This struct must include the struct used to declare the base type. In other words, tp_basicsize must be greater than or equal to the base's tp_basicsize.

Since every type is a subtype of object, this struct must include PyObject or PyVarObject (depending on whether ob_size should be included). These are usually defined by the macro PyObject_HEAD or PyObject_VAR_HEAD, respectively.

The basic size does not include the GC header size, as that header is not part of PyObject_HEAD.

For cases where the struct used to declare the base type is unknown, see PyType_Spec.basicsize and PyType_FromMetaclass().

Notes about alignment:

tp_basicsize must be a multiple of _Alignof(PyObject). When using sizeof on a struct that includes PyObject_HEAD, as recommended, the compiler ensures this. When not using a C struct, or when using compiler extensions like __attribute__((packed)), it is up to you.

If the variable items require a particular alignment, tp_basicsize and tp_itemsize must each be a multiple of that alignment.
For example, if a type's variable part stores a double, it is your responsibility that both fields are a multiple of _Alignof(double).

Inheritance:
These fields are inherited separately by subtypes. (That is, if the field is set to zero, PyType_Ready() will copy the value from the base type, indicating that the instances do not need additional storage.)

If the base type has a non-zero tp_itemsize, it is generally not safe to set tp_itemsize to a different non-zero value in a subtype (though this depends on the implementation of the base type).

- destructor PyTypeObject.tp_dealloc¶

The corresponding slot ID Py_tp_dealloc is part of the Stable ABI.

A pointer to the instance destructor function. The function signature is:

void tp_dealloc(PyObject *self);

The destructor function should remove all references which the instance owns (e.g., call Py_CLEAR()), free all memory buffers owned by the instance, and call the type's tp_free function to free the object itself.

If you may call functions that may set the error indicator, you must use PyErr_GetRaisedException() and PyErr_SetRaisedException() to ensure you don't clobber a preexisting error indicator (the deallocation could have occurred while processing a different error):

static void
foo_dealloc(foo_object *self)
{
    PyObject *exc = PyErr_GetRaisedException();
    ...
    PyErr_SetRaisedException(exc);
}

The dealloc handler itself must not raise an exception; if it hits an error case it should call PyErr_FormatUnraisable() to log (and clear) an unraisable exception.

No guarantees are made about when an object is destroyed, except:

- Python will destroy an object immediately or some time after the final reference to the object is deleted, unless its finalizer (tp_finalize) subsequently resurrects the object.
- An object will not be destroyed while it is being automatically finalized (tp_finalize) or automatically cleared (tp_clear).

CPython currently destroys an object immediately from Py_DECREF() when the new reference count is zero, but this may change in a future version.

It is recommended to call PyObject_CallFinalizerFromDealloc() at the beginning of tp_dealloc to guarantee that the object is always finalized before destruction.

If the type supports garbage collection (the Py_TPFLAGS_HAVE_GC flag is set), the destructor should call PyObject_GC_UnTrack() before clearing any member fields.

It is permissible to call tp_clear from tp_dealloc to reduce code duplication and to guarantee that the object is always cleared before destruction. Beware that tp_clear might have already been called.

If the type is heap allocated (Py_TPFLAGS_HEAPTYPE), the deallocator should release the owned reference to its type object (via Py_DECREF()) after calling the type deallocator. See the example code below:

static void
foo_dealloc(PyObject *op)
{
    foo_object *self = (foo_object *) op;
    PyObject_GC_UnTrack(self);
    Py_CLEAR(self->ref);
    Py_TYPE(self)->tp_free(self);
}

tp_dealloc must leave the exception status unchanged.
If it needs to call something that might raise an exception, the exception state must be backed up first and restored later (after logging any exceptions with PyErr_WriteUnraisable()).

Example:

static void
foo_dealloc(PyObject *self)
{
    PyObject *exc = PyErr_GetRaisedException();

    if (PyObject_CallFinalizerFromDealloc(self) < 0) {
        // self was resurrected.
        goto done;
    }

    PyTypeObject *tp = Py_TYPE(self);

    if (tp->tp_flags & Py_TPFLAGS_HAVE_GC) {
        PyObject_GC_UnTrack(self);
    }

    // Optional, but convenient to avoid code duplication.
    if (tp->tp_clear && tp->tp_clear(self) < 0) {
        PyErr_WriteUnraisable(self);
    }

    // Any additional destruction goes here.

    tp->tp_free(self);
    self = NULL;  // In case PyErr_WriteUnraisable() is called below.

    if (tp->tp_flags & Py_TPFLAGS_HEAPTYPE) {
        Py_CLEAR(tp);
    }

done:
    // Optional, if something was called that might have raised an
    // exception.
    if (PyErr_Occurred()) {
        PyErr_WriteUnraisable(self);
    }
    PyErr_SetRaisedException(exc);
}

tp_dealloc may be called from any Python thread, not just the thread which created the object (if the object becomes part of a refcount cycle, that cycle might be collected by a garbage collection on any thread). This is not a problem for Python API calls, since the thread on which tp_dealloc is called has an attached thread state. However, if the object being destroyed in turn destroys objects from some other C library, care should be taken to ensure that destroying those objects on the thread which called tp_dealloc will not violate any assumptions of the library.

Inheritance:
This field is inherited by subtypes.

See also
Object Life Cycle for details about how this slot relates to other slots.

- Py_ssize_t PyTypeObject.tp_vectorcall_offset¶

An optional offset to a per-instance function that implements calling the object using the vectorcall protocol, a more efficient alternative to the simpler tp_call.

This field is only used if the flag Py_TPFLAGS_HAVE_VECTORCALL is set.
If so, this must be a positive integer containing the offset in the instance of a vectorcallfunc pointer.

The vectorcallfunc pointer may be NULL, in which case the instance behaves as if Py_TPFLAGS_HAVE_VECTORCALL was not set: calling the instance falls back to tp_call.

Any class that sets Py_TPFLAGS_HAVE_VECTORCALL must also set tp_call and make sure its behaviour is consistent with the vectorcallfunc function. This can be done by setting tp_call to PyVectorcall_Call().

Changed in version 3.8: Before version 3.8, this slot was named tp_print. In Python 2.x, it was used for printing to a file. In Python 3.0 to 3.7, it was unused.

Changed in version 3.12: Before version 3.12, it was not recommended for mutable heap types to implement the vectorcall protocol. When a user sets __call__ in Python code, only tp_call is updated, likely making it inconsistent with the vectorcall function. Since 3.12, setting __call__ will disable vectorcall optimization by clearing the Py_TPFLAGS_HAVE_VECTORCALL flag.

Inheritance:

This field is always inherited. However, the Py_TPFLAGS_HAVE_VECTORCALL flag is not always inherited. If it's not set, then the subclass won't use vectorcall, except when PyVectorcall_Call() is explicitly called.

-
getattrfunc PyTypeObject.tp_getattr

The corresponding slot ID Py_tp_getattr is part of the Stable ABI.

An optional pointer to the get-attribute-string function.

This field is deprecated.
When it is defined, it should point to a function that acts the same as the tp_getattro function, but taking a C string instead of a Python string object to give the attribute name.

Inheritance:

Group: tp_getattr, tp_getattro

This field is inherited by subtypes together with tp_getattro: a subtype inherits both tp_getattr and tp_getattro from its base type when the subtype's tp_getattr and tp_getattro are both NULL.

-
setattrfunc PyTypeObject.tp_setattr

The corresponding slot ID Py_tp_setattr is part of the Stable ABI.

An optional pointer to the function for setting and deleting attributes.

This field is deprecated. When it is defined, it should point to a function that acts the same as the tp_setattro function, but taking a C string instead of a Python string object to give the attribute name.

Inheritance:

Group: tp_setattr, tp_setattro

This field is inherited by subtypes together with tp_setattro: a subtype inherits both tp_setattr and tp_setattro from its base type when the subtype's tp_setattr and tp_setattro are both NULL.

-
PyAsyncMethods *PyTypeObject.tp_as_async

Pointer to an additional structure that contains fields relevant only to objects which implement awaitable and asynchronous iterator protocols at the C-level. See Async Object Structures for details.

Added in version 3.5: Formerly known as tp_compare and tp_reserved.

Inheritance:

The tp_as_async field is not inherited, but the contained fields are inherited individually.

-
reprfunc PyTypeObject.tp_repr

The corresponding slot ID Py_tp_repr is part of the Stable ABI.

An optional pointer to a function that implements the built-in function repr().

The signature is the same as for PyObject_Repr():

    PyObject *tp_repr(PyObject *self);

The function must return a string or a Unicode object. Ideally, this function should return a string that, when passed to eval(), given a suitable environment, returns an object with the same value.
If this is not feasible, it should return a string starting with '<' and ending with '>' from which both the type and the value of the object can be deduced.

Inheritance:

This field is inherited by subtypes.

Default:

When this field is not set, a string of the form <%s object at %p> is returned, where %s is replaced by the type name, and %p by the object's memory address.

-
PyNumberMethods *PyTypeObject.tp_as_number

Pointer to an additional structure that contains fields relevant only to objects which implement the number protocol. These fields are documented in Number Object Structures.

Inheritance:

The tp_as_number field is not inherited, but the contained fields are inherited individually.

-
PySequenceMethods *PyTypeObject.tp_as_sequence

Pointer to an additional structure that contains fields relevant only to objects which implement the sequence protocol. These fields are documented in Sequence Object Structures.

Inheritance:

The tp_as_sequence field is not inherited, but the contained fields are inherited individually.

-
PyMappingMethods *PyTypeObject.tp_as_mapping

Pointer to an additional structure that contains fields relevant only to objects which implement the mapping protocol. These fields are documented in Mapping Object Structures.

Inheritance:

The tp_as_mapping field is not inherited, but the contained fields are inherited individually.

-
hashfunc PyTypeObject.tp_hash

The corresponding slot ID Py_tp_hash is part of the Stable ABI.

An optional pointer to a function that implements the built-in function hash().

The signature is the same as for PyObject_Hash():

    Py_hash_t tp_hash(PyObject *);

The value -1 should not be returned as a normal return value; when an error occurs during the computation of the hash value, the function should set an exception and return -1.

When this field is not set (and tp_richcompare is not set), an attempt to take the hash of the object raises TypeError.
This is the same as setting it to PyObject_HashNotImplemented().

This field can be set explicitly to PyObject_HashNotImplemented() to block inheritance of the hash method from a parent type. This is interpreted as the equivalent of __hash__ = None at the Python level, causing isinstance(o, collections.abc.Hashable) to correctly return False. Note that the converse is also true: setting __hash__ = None on a class at the Python level will result in the tp_hash slot being set to PyObject_HashNotImplemented().

Inheritance:

Group: tp_hash, tp_richcompare

This field is inherited by subtypes together with tp_richcompare: a subtype inherits both tp_richcompare and tp_hash when the subtype's tp_richcompare and tp_hash are both NULL.

Default:

-
ternaryfunc PyTypeObject.tp_call

The corresponding slot ID Py_tp_call is part of the Stable ABI.

An optional pointer to a function that implements calling the object. This should be NULL if the object is not callable. The signature is the same as for PyObject_Call():

    PyObject *tp_call(PyObject *self, PyObject *args, PyObject *kwargs);

Inheritance:

This field is inherited by subtypes.

-
reprfunc PyTypeObject.tp_str

The corresponding slot ID Py_tp_str is part of the Stable ABI.

An optional pointer to a function that implements the built-in operation str(). (Note that str is a type now, and str() calls the constructor for that type. This constructor calls PyObject_Str() to do the actual work, and PyObject_Str() will call this handler.)

The signature is the same as for PyObject_Str():

    PyObject *tp_str(PyObject *self);

The function must return a string or a Unicode object.
It should be a "friendly" string representation of the object, as this is the representation that will be used, among other things, by the print() function.

Inheritance:

This field is inherited by subtypes.

Default:

When this field is not set, PyObject_Repr() is called to return a string representation.

-
getattrofunc PyTypeObject.tp_getattro

The corresponding slot ID Py_tp_getattro is part of the Stable ABI.

An optional pointer to the get-attribute function.

The signature is the same as for PyObject_GetAttr():

    PyObject *tp_getattro(PyObject *self, PyObject *attr);

It is usually convenient to set this field to PyObject_GenericGetAttr(), which implements the normal way of looking for object attributes.

Inheritance:

Group: tp_getattr, tp_getattro

This field is inherited by subtypes together with tp_getattr: a subtype inherits both tp_getattr and tp_getattro from its base type when the subtype's tp_getattr and tp_getattro are both NULL.

Default:

-
setattrofunc PyTypeObject.tp_setattro

The corresponding slot ID Py_tp_setattro is part of the Stable ABI.

An optional pointer to the function for setting and deleting attributes.

The signature is the same as for PyObject_SetAttr():

    int tp_setattro(PyObject *self, PyObject *attr, PyObject *value);

In addition, setting value to NULL to delete an attribute must be supported. It is usually convenient to set this field to PyObject_GenericSetAttr(), which implements the normal way of setting object attributes.

Inheritance:

Group: tp_setattr, tp_setattro

This field is inherited by subtypes together with tp_setattr: a subtype inherits both tp_setattr and tp_setattro from its base type when the subtype's tp_setattr and tp_setattro are both NULL.

-
PyBufferProcs *PyTypeObject.tp_as_buffer

Pointer to an additional structure that contains fields relevant only to objects which implement the buffer interface.
These fields are documented in Buffer Object Structures.

Inheritance:

The tp_as_buffer field is not inherited, but the contained fields are inherited individually.

-
unsigned long PyTypeObject.tp_flags

This field is a bit mask of various flags. Some flags indicate variant semantics for certain situations; others are used to indicate that certain fields in the type object (or in the extension structures referenced via tp_as_number, tp_as_sequence, tp_as_mapping, and tp_as_buffer) that were historically not always present are valid; if such a flag bit is clear, the type fields it guards must not be accessed and must be considered to have a zero or NULL value instead.

Inheritance:

Inheritance of this field is complicated. Most flag bits are inherited individually, i.e. if the base type has a flag bit set, the subtype inherits this flag bit. The flag bits that pertain to extension structures are strictly inherited if the extension structure is inherited, i.e. the base type's value of the flag bit is copied into the subtype together with a pointer to the extension structure. The Py_TPFLAGS_HAVE_GC flag bit is inherited together with the tp_traverse and tp_clear fields, i.e. if the Py_TPFLAGS_HAVE_GC flag bit is clear in the subtype and the tp_traverse and tp_clear fields in the subtype exist and have NULL values.

Default:

PyBaseObject_Type uses Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE.

Bit Masks:

The following bit masks are currently defined; these can be ORed together using the | operator to form the value of the tp_flags field. The macro PyType_HasFeature() takes a type and a flags value, tp and f, and checks whether tp->tp_flags & f is non-zero.

-
Py_TPFLAGS_HEAPTYPE

This bit is set when the type object itself is allocated on the heap, for example, types created dynamically using PyType_FromSpec().
In this case, the ob_type field of its instances is considered a reference to the type, and the type object is INCREF'ed when a new instance is created, and DECREF'ed when an instance is destroyed (this does not apply to instances of subtypes; only the type referenced by the instance's ob_type gets INCREF'ed or DECREF'ed). Heap types should also support garbage collection as they can form a reference cycle with their own module object.

Inheritance:

???

-
Py_TPFLAGS_BASETYPE

Part of the Stable ABI.

This bit is set when the type can be used as the base type of another type. If this bit is clear, the type cannot be subtyped (similar to a "final" class in Java).

Inheritance:

???

-
Py_TPFLAGS_READY

This bit is set when the type object has been fully initialized by PyType_Ready().

Inheritance:

???

-
Py_TPFLAGS_READYING

This bit is set while PyType_Ready() is in the process of initializing the type object.

Inheritance:

???

-
Py_TPFLAGS_HAVE_GC

Part of the Stable ABI.

This bit is set when the object supports garbage collection. If this bit is set, memory for new instances (see tp_alloc) must be allocated using PyObject_GC_New or PyType_GenericAlloc() and deallocated (see tp_free) using PyObject_GC_Del(). More information in section Supporting Cyclic Garbage Collection.

Inheritance:

Group: Py_TPFLAGS_HAVE_GC, tp_traverse, tp_clear

The Py_TPFLAGS_HAVE_GC flag bit is inherited together with the tp_traverse and tp_clear fields, i.e. if the Py_TPFLAGS_HAVE_GC flag bit is clear in the subtype and the tp_traverse and tp_clear fields in the subtype exist and have NULL values.

-
Py_TPFLAGS_DEFAULT

Part of the Stable ABI.

This is a bitmask of all the bits that pertain to the existence of certain fields in the type object and its extension structures.
Currently, it includes the following bits: Py_TPFLAGS_HAVE_STACKLESS_EXTENSION.

Inheritance:

???

-
Py_TPFLAGS_METHOD_DESCRIPTOR

Part of the Stable ABI since version 3.8.

This bit indicates that objects behave like unbound methods.

If this flag is set for type(meth), then:

- meth.__get__(obj, cls)(*args, **kwds) (with obj not None) must be equivalent to meth(obj, *args, **kwds).
- meth.__get__(None, cls)(*args, **kwds) must be equivalent to meth(*args, **kwds).

This flag enables an optimization for typical method calls like obj.meth(): it avoids creating a temporary "bound method" object for obj.meth.

Added in version 3.8.

Inheritance:

This flag is never inherited by types without the Py_TPFLAGS_IMMUTABLETYPE flag set. For extension types, it is inherited whenever tp_descr_get is inherited.

-
Py_TPFLAGS_MANAGED_DICT

This bit indicates that instances of the class have a __dict__ attribute, and that the space for the dictionary is managed by the VM.

If this flag is set, Py_TPFLAGS_HAVE_GC should also be set.

The type traverse function must call PyObject_VisitManagedDict() and its clear function must call PyObject_ClearManagedDict().

Added in version 3.12.

Inheritance:

This flag is inherited unless the tp_dictoffset field is set in a superclass.

-
Py_TPFLAGS_MANAGED_WEAKREF

This bit indicates that instances of the class should be weakly referenceable.

Added in version 3.12.

Inheritance:

This flag is inherited unless the tp_weaklistoffset field is set in a superclass.

-
Py_TPFLAGS_ITEMS_AT_END

Part of the Stable ABI since version 3.12.

Only usable with variable-size types, i.e.
ones with non-zero tp_itemsize.

Indicates that the variable-sized portion of an instance of this type is at the end of the instance's memory area, at an offset of Py_TYPE(obj)->tp_basicsize (which may be different in each subclass).

When setting this flag, be sure that all superclasses either use this memory layout, or are not variable-sized. Python does not check this.

Added in version 3.12.

Inheritance:

This flag is inherited.

-
Py_TPFLAGS_LONG_SUBCLASS
-
Py_TPFLAGS_LIST_SUBCLASS
-
Py_TPFLAGS_TUPLE_SUBCLASS
-
Py_TPFLAGS_BYTES_SUBCLASS
-
Py_TPFLAGS_UNICODE_SUBCLASS
-
Py_TPFLAGS_DICT_SUBCLASS
-
Py_TPFLAGS_BASE_EXC_SUBCLASS
-
Py_TPFLAGS_TYPE_SUBCLASS

Functions such as PyLong_Check() will call PyType_FastSubclass() with one of these flags to quickly determine if a type is a subclass of a built-in type; such specific checks are faster than a generic check, like PyObject_IsInstance(). Custom types that inherit from built-ins should have their tp_flags set appropriately, or the code that interacts with such types will behave differently depending on what kind of check is used.

-
Py_TPFLAGS_HAVE_FINALIZE

This bit is set when the tp_finalize slot is present in the type structure.

Added in version 3.4.

Deprecated since version 3.8: This flag isn't necessary anymore, as the interpreter assumes the tp_finalize slot is always present in the type structure.

-
Py_TPFLAGS_HAVE_VECTORCALL

Part of the Stable ABI since version 3.12.

This bit is set when the class implements the vectorcall protocol. See tp_vectorcall_offset for details.

Inheritance:

This bit is inherited if tp_call is also inherited.

Added in version 3.8: as _Py_TPFLAGS_HAVE_VECTORCALL

Changed in version 3.9: Renamed to the current name, without the leading underscore.
The old provisional name is soft deprecated.

Changed in version 3.12: This flag is now removed from a class when the class's __call__() method is reassigned. This flag can now be inherited by mutable classes.

-
Py_TPFLAGS_IMMUTABLETYPE

This bit is set for type objects that are immutable: type attributes cannot be set nor deleted.

PyType_Ready() automatically applies this flag to static types.

Inheritance:

This flag is not inherited.

Added in version 3.10.

-
Py_TPFLAGS_DISALLOW_INSTANTIATION

Disallow creating instances of the type: set tp_new to NULL and don't create the __new__ key in the type dictionary.

The flag must be set before creating the type, not after. For example, it must be set before PyType_Ready() is called on the type.

The flag is set automatically on static types if tp_base is NULL or &PyBaseObject_Type and tp_new is NULL.

Inheritance:

This flag is not inherited. However, subclasses will not be instantiable unless they provide a non-NULL tp_new (which is only possible via the C API).

Note

To disallow instantiating a class directly but allow instantiating its subclasses (e.g. for an abstract base class), do not use this flag. Instead, make tp_new only succeed for subclasses.

Added in version 3.10.

-
Py_TPFLAGS_MAPPING

This bit indicates that instances of the class may match mapping patterns when used as the subject of a match block.
It is automatically set when registering or subclassing collections.abc.Mapping, and unset when registering collections.abc.Sequence.

Note

Py_TPFLAGS_MAPPING and Py_TPFLAGS_SEQUENCE are mutually exclusive; it is an error to enable both flags simultaneously.

Inheritance:

This flag is inherited by types that do not already set Py_TPFLAGS_SEQUENCE.

See also

PEP 634 – Structural Pattern Matching: Specification

Added in version 3.10.

-
Py_TPFLAGS_SEQUENCE

This bit indicates that instances of the class may match sequence patterns when used as the subject of a match block. It is automatically set when registering or subclassing collections.abc.Sequence, and unset when registering collections.abc.Mapping.

Note

Py_TPFLAGS_MAPPING and Py_TPFLAGS_SEQUENCE are mutually exclusive; it is an error to enable both flags simultaneously.

Inheritance:

This flag is inherited by types that do not already set Py_TPFLAGS_MAPPING.

See also

PEP 634 – Structural Pattern Matching: Specification

Added in version 3.10.

-
Py_TPFLAGS_VALID_VERSION_TAG

Internal. Do not set or unset this flag. To indicate that a class has changed, call PyType_Modified().

Warning

This flag is present in header files, but is not used. It will be removed in a future version of CPython.

-
const char *PyTypeObject.tp_doc

The corresponding slot ID Py_tp_doc is part of the Stable ABI.

An optional pointer to a NUL-terminated C string giving the docstring for this type object. This is exposed as the __doc__ attribute on the type and instances of the type.

Inheritance:

This field is not inherited by subtypes.

-
traverseproc PyTypeObject.tp_traverse

The corresponding slot ID Py_tp_traverse is part of the Stable ABI.

An optional pointer to a traversal function for the garbage collector. This is only used if the Py_TPFLAGS_HAVE_GC flag bit is set.
The signature is:

    int tp_traverse(PyObject *self, visitproc visit, void *arg);

More information about Python's garbage collection scheme can be found in section Supporting Cyclic Garbage Collection.

The tp_traverse pointer is used by the garbage collector to detect reference cycles. A typical implementation of a tp_traverse function simply calls Py_VISIT() on each of the instance's members that are Python objects that the instance owns. For example, this is function local_traverse() from the _thread extension module:

    static int
    local_traverse(PyObject *op, visitproc visit, void *arg)
    {
        localobject *self = (localobject *) op;
        Py_VISIT(self->args);
        Py_VISIT(self->kw);
        Py_VISIT(self->dict);
        return 0;
    }

Note that Py_VISIT() is called only on those members that can participate in reference cycles. Although there is also a self->key member, it can only be NULL or a Python string and therefore cannot be part of a reference cycle.

On the other hand, even if you know a member can never be part of a cycle, as a debugging aid you may want to visit it anyway just so the gc module's get_referents() function will include it.

Heap types (Py_TPFLAGS_HEAPTYPE) must visit their type with:

    Py_VISIT(Py_TYPE(self));

It is only needed since Python 3.9. To support Python 3.8 and older, this line must be conditional:

    #if PY_VERSION_HEX >= 0x03090000
        Py_VISIT(Py_TYPE(self));
    #endif

If the Py_TPFLAGS_MANAGED_DICT bit is set in the tp_flags field, the traverse function must call PyObject_VisitManagedDict() like this:

    PyObject_VisitManagedDict((PyObject*)self, visit, arg);

Warning

When implementing tp_traverse, only the members that the instance owns (by having strong references to them) must be visited.
For instance, if an object supports weak references via the tp_weaklist slot, the pointer supporting the linked list (what tp_weaklist points to) must not be visited, as the instance does not directly own the weak references to itself (the weak reference list is there to support the weak reference machinery, but the instance has no strong reference to the elements inside it, as they are allowed to be removed even if the instance is still alive).

Warning

The traversal function must not have any side effects. It must not modify the reference counts of any Python objects nor create or destroy any Python objects.

Note that Py_VISIT() requires the visit and arg parameters to local_traverse() to have these specific names; don't name them just anything.

Instances of heap-allocated types hold a reference to their type. Their traversal function must therefore either visit Py_TYPE(self), or delegate this responsibility by calling tp_traverse of another heap-allocated type (such as a heap-allocated superclass). If they do not, the type object may not be garbage-collected.

Note

The tp_traverse function can be called from any thread.

Changed in version 3.9: Heap-allocated types are expected to visit Py_TYPE(self) in tp_traverse. In earlier versions of Python, due to bug 40217, doing this may lead to crashes in subclasses.

Inheritance:

Group: Py_TPFLAGS_HAVE_GC, tp_traverse, tp_clear

This field is inherited by subtypes together with tp_clear and the Py_TPFLAGS_HAVE_GC flag bit: the flag bit, tp_traverse, and tp_clear are all inherited from the base type if they are all zero in the subtype.

-
inquiry PyTypeObject.tp_clear

The corresponding slot ID Py_tp_clear is part of the Stable ABI.

An optional pointer to a clear function. The signature is:

    int tp_clear(PyObject *);

The purpose of this function is to break reference cycles that are causing a cyclic isolate so that the objects can be safely destroyed.
A cleared object is a partially destroyed object; the object is not obligated to satisfy design invariants held during normal use.

tp_clear does not need to delete references to objects that can't participate in reference cycles, such as Python strings or Python integers. However, it may be convenient to clear all references, and write the type's tp_dealloc function to invoke tp_clear to avoid code duplication. (Beware that tp_clear might have already been called. Prefer calling idempotent functions like Py_CLEAR().)

Any non-trivial cleanup should be performed in tp_finalize instead of tp_clear.

Note

If tp_clear fails to break a reference cycle then the objects in the cyclic isolate may remain indefinitely uncollectable ("leak"). See gc.garbage.

Note

Referents (direct and indirect) might have already been cleared; they are not guaranteed to be in a consistent state.

Note

The tp_clear function can be called from any thread.

Note

An object is not guaranteed to be automatically cleared before its destructor (tp_dealloc) is called.

This function differs from the destructor (tp_dealloc) in the following ways:

- The purpose of clearing an object is to remove references to other objects that might participate in a reference cycle. The purpose of the destructor, on the other hand, is a superset: it must release all resources it owns, including references to objects that cannot participate in a reference cycle (e.g., integers) as well as the object's own memory (by calling tp_free).
- When tp_clear is called, other objects might still hold references to the object being cleared. Because of this, tp_clear must not deallocate the object's own memory (tp_free). The destructor, on the other hand, is only called when no (strong) references exist, and as such, must safely destroy the object itself by deallocating it.
- tp_clear might never be automatically called.
An object's destructor, on the other hand, will be automatically called some time after the object becomes unreachable (i.e., either there are no references to the object or the object is a member of a cyclic isolate).

No guarantees are made about when, if, or how often Python automatically clears an object, except:

- Python will not automatically clear an object if it is reachable, i.e., there is a reference to it and it is not a member of a cyclic isolate.
- Python will not automatically clear an object if it has not been automatically finalized (see tp_finalize). (If the finalizer resurrected the object, the object may or may not be automatically finalized again before it is cleared.)
- If an object is a member of a cyclic isolate, Python will not automatically clear it if any member of the cyclic isolate has not yet been automatically finalized (tp_finalize).
- Python will not destroy an object until after any automatic calls to its tp_clear function have returned. This ensures that the act of breaking a reference cycle does not invalidate the self pointer while tp_clear is still executing.
- Python will not automatically call tp_clear multiple times concurrently.

CPython currently only automatically clears objects as needed to break reference cycles in a cyclic isolate, but future versions might clear objects regularly before their destruction.

Taken together, all tp_clear functions in the system must combine to break all reference cycles. This is subtle, and if in any doubt supply a tp_clear function. For example, the tuple type does not implement a tp_clear function, because it's possible to prove that no reference cycle can be composed entirely of tuples. Therefore the tp_clear functions of other types are responsible for breaking any cycle containing a tuple.
This isn't immediately obvious, and there's rarely a good reason to avoid implementing tp_clear.

Implementations of tp_clear should drop the instance's references to those of its members that may be Python objects, and set its pointers to those members to NULL, as in the following example:

    static int
    local_clear(PyObject *op)
    {
        localobject *self = (localobject *) op;
        Py_CLEAR(self->key);
        Py_CLEAR(self->args);
        Py_CLEAR(self->kw);
        Py_CLEAR(self->dict);
        return 0;
    }

The Py_CLEAR() macro should be used, because clearing references is delicate: the reference to the contained object must not be released (via Py_DECREF()) until after the pointer to the contained object is set to NULL. This is because releasing the reference may cause the contained object to become trash, triggering a chain of reclamation activity that may include invoking arbitrary Python code (due to finalizers, or weakref callbacks, associated with the contained object). If it's possible for such code to reference self again, it's important that the pointer to the contained object be NULL at that time, so that self knows the contained object can no longer be used.
The Py_CLEAR() macro performs the operations in a safe order.

If the Py_TPFLAGS_MANAGED_DICT bit is set in the tp_flags field, the clear function must call PyObject_ClearManagedDict() like this:

    PyObject_ClearManagedDict((PyObject*)self);

More information about Python's garbage collection scheme can be found in section Supporting Cyclic Garbage Collection.

Inheritance:

Group: Py_TPFLAGS_HAVE_GC, tp_traverse, tp_clear

This field is inherited by subtypes together with tp_traverse and the Py_TPFLAGS_HAVE_GC flag bit: the flag bit, tp_traverse, and tp_clear are all inherited from the base type if they are all zero in the subtype.

See also

Object Life Cycle for details about how this slot relates to other slots.

-
richcmpfunc PyTypeObject.tp_richcompare

The corresponding slot ID Py_tp_richcompare is part of the Stable ABI.

An optional pointer to the rich comparison function, whose signature is:

    PyObject *tp_richcompare(PyObject *self, PyObject *other, int op);

The first parameter is guaranteed to be an instance of the type that is defined by PyTypeObject.

The function should return the result of the comparison (usually Py_True or Py_False). If the comparison is undefined, it must return Py_NotImplemented; if another error occurred, it must return NULL and set an exception condition.

The following constants are defined to be used as the third argument for tp_richcompare and for PyObject_RichCompare():

Constant    Comparison
Py_LT       <
Py_LE       <=
Py_EQ       ==
Py_NE       !=
Py_GT       >
Py_GE       >=

The following macro is defined to ease writing rich comparison functions:

-
Py_RETURN_RICHCOMPARE(VAL_A, VAL_B, op)

Return Py_True or Py_False from the function, depending on the result of a comparison. VAL_A and VAL_B must be orderable by C comparison operators (for example, they may be C ints or floats).
The third argument specifies the requested operation, as for PyObject_RichCompare().

The returned value is a new strong reference.

On error, sets an exception and returns NULL from the function.

Added in version 3.7.

Inheritance:

Group: tp_hash, tp_richcompare

This field is inherited by subtypes together with tp_hash: a subtype inherits tp_richcompare and tp_hash when the subtype's tp_richcompare and tp_hash are both NULL.

Default:

PyBaseObject_Type provides a tp_richcompare implementation, which may be inherited. However, if only tp_hash is defined, not even the inherited function is used and instances of the type will not be able to participate in any comparisons.

-
Py_ssize_t PyTypeObject.tp_weaklistoffset

While this field is still supported, Py_TPFLAGS_MANAGED_WEAKREF should be used instead, if at all possible.

If the instances of this type are weakly referenceable, this field is greater than zero and contains the offset in the instance structure of the weak reference list head (ignoring the GC header, if present); this offset is used by PyObject_ClearWeakRefs() and the PyWeakref_* functions. The instance structure needs to include a field of type PyObject* which is initialized to NULL.

Do not confuse this field with tp_weaklist; that is the list head for weak references to the type object itself.

It is an error to set both the Py_TPFLAGS_MANAGED_WEAKREF bit and tp_weaklistoffset.

Inheritance:

This field is inherited by subtypes, but see the rules listed below. A subtype may override this offset; this means that the subtype uses a different weak reference list head than the base type.
Since the list head is always found via tp_weaklistoffset, this should not be a problem.

Default:

If the Py_TPFLAGS_MANAGED_WEAKREF bit is set in the tp_flags field, then tp_weaklistoffset will be set to a negative value, to indicate that it is unsafe to use this field.

-
getiterfunc PyTypeObject.tp_iter

The corresponding slot ID Py_tp_iter is part of the Stable ABI.

An optional pointer to a function that returns an iterator for the object. Its presence normally signals that the instances of this type are iterable (although sequences may be iterable without this function).

This function has the same signature as PyObject_GetIter():

    PyObject *tp_iter(PyObject *self);

Inheritance:

This field is inherited by subtypes.

-
iternextfunc PyTypeObject.tp_iternext

The corresponding slot ID Py_tp_iternext is part of the Stable ABI.

An optional pointer to a function that returns the next item in an iterator. The signature is:

    PyObject *tp_iternext(PyObject *self);

When the iterator is exhausted, it must return NULL; a StopIteration exception may or may not be set. When another error occurs, it must return NULL too.
Its presence signals that the instances of this type are iterators.

Iterator types should also define the tp_iter function, and that function should return the iterator instance itself (not a new iterator instance).

This function has the same signature as PyIter_Next().

Inheritance:
This field is inherited by subtypes.

struct PyMethodDef *PyTypeObject.tp_methods

The corresponding slot ID Py_tp_methods is part of the Stable ABI.

An optional pointer to a static NULL-terminated array of PyMethodDef structures, declaring regular methods of this type.

For each entry in the array, an entry is added to the type's dictionary (see tp_dict below) containing a method descriptor.

Inheritance:
This field is not inherited by subtypes (methods are inherited through a different mechanism).

struct PyMemberDef *PyTypeObject.tp_members

The corresponding slot ID Py_tp_members is part of the Stable ABI.

An optional pointer to a static NULL-terminated array of PyMemberDef structures, declaring regular data members (fields or slots) of instances of this type.

For each entry in the array, an entry is added to the type's dictionary (see tp_dict below) containing a member descriptor.

Inheritance:
This field is not inherited by subtypes (members are inherited through a different mechanism).

struct PyGetSetDef *PyTypeObject.tp_getset

The corresponding slot ID Py_tp_getset is part of the Stable ABI.

An optional pointer to a static NULL-terminated array of PyGetSetDef structures, declaring computed attributes of instances of this type.

For each entry in the array, an entry is added to the type's dictionary (see tp_dict below) containing a getset descriptor.

Inheritance:
This field is not inherited by subtypes (computed attributes are inherited through a different mechanism).

PyTypeObject *PyTypeObject.tp_base

The corresponding slot ID Py_tp_base is part of the Stable ABI.

An optional pointer to a base type from which
type properties are inherited. At this level, only single inheritance is supported; multiple inheritance requires dynamically creating a type object by calling the metatype.

Note: Slot initialization is subject to the rules of initializing globals. C99 requires the initializers to be "address constants". Function designators like PyType_GenericNew(), with implicit conversion to a pointer, are valid C99 address constants.

However, the unary '&' operator applied to a non-static variable like PyBaseObject_Type is not required to produce an address constant. Compilers may support this (gcc does), MSVC does not. Both compilers are strictly standard conforming in this particular behavior.

Consequently, tp_base should be set in the extension module's init function.

Inheritance:
This field is not inherited by subtypes (obviously).

Default:
This field defaults to &PyBaseObject_Type (which to Python programmers is known as the type object).

PyObject *PyTypeObject.tp_dict

The type's dictionary is stored here by PyType_Ready().

This field should normally be initialized to NULL before PyType_Ready() is called; it may also be initialized to a dictionary containing initial attributes for the type. Once PyType_Ready() has initialized the type, extra attributes for the type may be added to this dictionary only if they don't correspond to overloaded operations (like __add__()). Once initialization for the type has finished, this field should be treated as read-only.

Some types may not store their dictionary in this slot. Use PyType_GetDict() to retrieve the dictionary for an arbitrary type.

Changed in version 3.12: Internals detail: For static builtin types, this is always NULL. Instead, the dict for such types is stored on PyInterpreterState.
Use PyType_GetDict() to get the dict for an arbitrary type.

Inheritance:
This field is not inherited by subtypes (though the attributes defined in here are inherited through a different mechanism).

Default:
If this field is NULL, PyType_Ready() will assign a new dictionary to it.

Warning: It is not safe to use PyDict_SetItem() on or otherwise modify tp_dict with the dictionary C-API.

descrgetfunc PyTypeObject.tp_descr_get

The corresponding slot ID Py_tp_descr_get is part of the Stable ABI.

An optional pointer to a "descriptor get" function. The function signature is:

PyObject *tp_descr_get(PyObject *self, PyObject *obj, PyObject *type);

Inheritance:
This field is inherited by subtypes.

descrsetfunc PyTypeObject.tp_descr_set

The corresponding slot ID Py_tp_descr_set is part of the Stable ABI.

An optional pointer to a function for setting and deleting a descriptor's value. The function signature is:

int tp_descr_set(PyObject *self, PyObject *obj, PyObject *value);

The value argument is set to NULL to delete the value.

Inheritance:
This field is inherited by subtypes.

Py_ssize_t PyTypeObject.tp_dictoffset

While this field is still supported, Py_TPFLAGS_MANAGED_DICT should be used instead, if at all possible.

If the instances of this type have a dictionary containing instance variables, this field is non-zero and contains the offset in the instances of the type of the instance variable dictionary; this offset is used by PyObject_GenericGetAttr().

Do not confuse this field with tp_dict; that is the dictionary for attributes of the type object itself.

The value specifies the offset of the dictionary from the start of the instance structure.

The tp_dictoffset should be regarded as write-only. To get the pointer to the dictionary call PyObject_GenericGetDict().
Calling PyObject_GenericGetDict() may need to allocate memory for the dictionary, so it may be more efficient to call PyObject_GetAttr() when accessing an attribute on the object.

It is an error to set both the Py_TPFLAGS_MANAGED_DICT bit and tp_dictoffset.

Inheritance:
This field is inherited by subtypes. A subtype should not override this offset; doing so could be unsafe, if C code tries to access the dictionary at the previous offset. To properly support inheritance, use Py_TPFLAGS_MANAGED_DICT.

Default:
This slot has no default. For static types, if the field is NULL then no __dict__ gets created for instances.

If the Py_TPFLAGS_MANAGED_DICT bit is set in the tp_flags field, then tp_dictoffset will be set to -1, to indicate that it is unsafe to use this field.

initproc PyTypeObject.tp_init

The corresponding slot ID Py_tp_init is part of the Stable ABI.

An optional pointer to an instance initialization function.

This function corresponds to the __init__() method of classes. Like __init__(), it is possible to create an instance without calling __init__(), and it is possible to reinitialize an instance by calling its __init__() method again.

The function signature is:

int tp_init(PyObject *self, PyObject *args, PyObject *kwds);

The self argument is the instance to be initialized; the args and kwds arguments represent positional and keyword arguments of the call to __init__().

The tp_init function, if not NULL, is called when an instance is created normally by calling its type, after the type's tp_new function has returned an instance of the type.
If the tp_new function returns an instance of some other type that is not a subtype of the original type, no tp_init function is called; if tp_new returns an instance of a subtype of the original type, the subtype's tp_init is called.

Returns 0 on success; returns -1 and sets an exception on error.

Inheritance:
This field is inherited by subtypes.

Default:
For static types this field does not have a default.

allocfunc PyTypeObject.tp_alloc

The corresponding slot ID Py_tp_alloc is part of the Stable ABI.

An optional pointer to an instance allocation function. The function signature is:

PyObject *tp_alloc(PyTypeObject *self, Py_ssize_t nitems);

Inheritance:
Static subtypes inherit this slot, which will be PyType_GenericAlloc() if inherited from object. Heap subtypes do not inherit this slot.

Default:
For heap subtypes, this field is always set to PyType_GenericAlloc(). For static subtypes, this slot is inherited (see above).

newfunc PyTypeObject.tp_new

The corresponding slot ID Py_tp_new is part of the Stable ABI.

An optional pointer to an instance creation function. The function signature is:

PyObject *tp_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds);

The subtype argument is the type of the object being created; the args and kwds arguments represent positional and keyword arguments of the call to the type. Note that subtype doesn't have to equal the type whose tp_new function is called; it may be a subtype of that type (but not an unrelated type).

The tp_new function should call subtype->tp_alloc(subtype, nitems) to allocate space for the object, and then do only as much further initialization as is absolutely necessary. Initialization that can safely be ignored or repeated should be placed in the tp_init handler.
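This division of labor has the same shape as __new__ and __init__ at the Python level: __new__ allocates and does only essential setup, __init__ does the repeatable part and may be called again to reinitialize. A Python-level sketch; the Point class and its _created attribute are hypothetical:

```python
class Point:
    def __new__(cls, *args, **kwargs):
        # Analogue of tp_new: obtain the instance and do only the
        # initialization that is absolutely necessary.
        self = super().__new__(cls)
        self._created = True
        return self

    def __init__(self, x=0, y=0):
        # Analogue of tp_init: initialization that can safely be
        # repeated; calling __init__ again reinitializes the instance.
        self.x = x
        self.y = y

p = Point(1, 2)
p.__init__(3, 4)  # reinitialization, as the text describes
```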
A good rule of thumb is that for immutable types, all initialization should take place in tp_new, while for mutable types, most initialization should be deferred to tp_init.

Set the Py_TPFLAGS_DISALLOW_INSTANTIATION flag to disallow creating instances of the type in Python.

Inheritance:
This field is inherited by subtypes, except it is not inherited by static types whose tp_base is NULL or &PyBaseObject_Type.

Default:
For static types this field has no default. This means if the slot is defined as NULL, the type cannot be called to create new instances; presumably there is some other way to create instances, like a factory function.

freefunc PyTypeObject.tp_free

The corresponding slot ID Py_tp_free is part of the Stable ABI.

An optional pointer to an instance deallocation function. Its signature is:

void tp_free(void *self);

This function must free the memory allocated by tp_alloc.

Inheritance:
Static subtypes inherit this slot, which will be PyObject_Free() if inherited from object. Exception: If the type supports garbage collection (i.e., the Py_TPFLAGS_HAVE_GC flag is set in tp_flags) and it would inherit PyObject_Free(), then this slot is not inherited but instead defaults to PyObject_GC_Del().

Heap subtypes do not inherit this slot.

Default:
For heap subtypes, this slot defaults to a deallocator suitable to match PyType_GenericAlloc() and the value of the Py_TPFLAGS_HAVE_GC flag. For static subtypes, this slot is inherited (see above).

inquiry PyTypeObject.tp_is_gc

The corresponding slot ID Py_tp_is_gc is part of the Stable ABI.

An optional pointer to a function called by the garbage collector.

The garbage collector needs to know whether a particular object is collectible or not. Normally, it is sufficient to look at the object's type's tp_flags field, and check the Py_TPFLAGS_HAVE_GC flag bit.
But some types have a mixture of statically and dynamically allocated instances, and the statically allocated instances are not collectible. Such types should define this function; it should return 1 for a collectible instance, and 0 for a non-collectible instance. The signature is:

int tp_is_gc(PyObject *self);

(The only example of this is types themselves. The metatype, PyType_Type, defines this function to distinguish between statically and dynamically allocated types.)

Inheritance:
This field is inherited by subtypes.

Default:
This slot has no default. If this field is NULL, Py_TPFLAGS_HAVE_GC is used as the functional equivalent.

PyObject *PyTypeObject.tp_bases

The corresponding slot ID Py_tp_bases is part of the Stable ABI.

Tuple of base types.

This field should be set to NULL and treated as read-only. Python will fill it in when the type is initialized.

For dynamically created classes, the Py_tp_bases slot can be used instead of the bases argument of PyType_FromSpecWithBases(). The argument form is preferred.

Warning: Multiple inheritance does not work well for statically defined types. If you set tp_bases to a tuple, Python will not raise an error, but some slots will only be inherited from the first base.

Inheritance:
This field is not inherited.

PyObject *PyTypeObject.tp_mro

Tuple containing the expanded set of base types, starting with the type itself and ending with object, in Method Resolution Order.

This field should be set to NULL and treated as read-only. Python will fill it in when the type is initialized.

Inheritance:
This field is not inherited; it is calculated fresh by PyType_Ready().

PyObject *PyTypeObject.tp_cache

Unused. Internal use only.

Inheritance:
This field is not inherited.

void *PyTypeObject.tp_subclasses

A collection of subclasses. Internal use only.
May be an invalid pointer.

To get a list of subclasses, call the Python method __subclasses__().

Changed in version 3.12: For some types, this field does not hold a valid PyObject*. The type was changed to void* to indicate this.

Inheritance:
This field is not inherited.

PyObject *PyTypeObject.tp_weaklist

Weak reference list head, for weak references to this type object. Not inherited. Internal use only.

Changed in version 3.12: Internals detail: For the static builtin types this is always NULL, even if weakrefs are added. Instead, the weakrefs for each are stored on PyInterpreterState. Use the public C-API or the internal _PyObject_GET_WEAKREFS_LISTPTR() macro to avoid the distinction.

Inheritance:
This field is not inherited.

destructor PyTypeObject.tp_del

The corresponding slot ID Py_tp_del is part of the Stable ABI.

This field is deprecated. Use tp_finalize instead.

unsigned int PyTypeObject.tp_version_tag

Used to index into the method cache. Internal use only.

Inheritance:
This field is not inherited.

destructor PyTypeObject.tp_finalize

The corresponding slot ID Py_tp_finalize is part of the Stable ABI since version 3.5.

An optional pointer to an instance finalization function. This is the C implementation of the __del__() special method. Its signature is:

void tp_finalize(PyObject *self);

The primary purpose of finalization is to perform any non-trivial cleanup that must be performed before the object is destroyed, while the object and any other objects it directly or indirectly references are still in a consistent state. The finalizer is allowed to execute arbitrary Python code.

Before Python automatically finalizes an object, some of the object's direct or indirect referents might have themselves been automatically finalized.
However, none of the referents will have been automatically cleared (tp_clear) yet.

Other non-finalized objects might still be using a finalized object, so the finalizer must leave the object in a sane state (e.g., invariants are still met).

Note: After Python automatically finalizes an object, Python might start automatically clearing (tp_clear) the object and its referents (direct and indirect). Cleared objects are not guaranteed to be in a consistent state; a finalized object must be able to tolerate cleared referents.

Note: An object is not guaranteed to be automatically finalized before its destructor (tp_dealloc) is called. It is recommended to call PyObject_CallFinalizerFromDealloc() at the beginning of tp_dealloc to guarantee that the object is always finalized before destruction.

Note: The tp_finalize function can be called from any thread, although the GIL will be held.

Note: The tp_finalize function can be called during shutdown, after some global variables have been deleted. See the documentation of the __del__() method for details.

When Python finalizes an object, it behaves like the following algorithm:

Python might mark the object as finalized. Currently, Python always marks objects whose type supports garbage collection (i.e., the Py_TPFLAGS_HAVE_GC flag is set in tp_flags) and never marks other types of objects; this might change in a future version.

If the object is not marked as finalized and its tp_finalize finalizer function is non-NULL, the finalizer function is called.

If the finalizer function was called and the finalizer made the object reachable (i.e., there is a reference to the object and it is not a member of a cyclic isolate), then the finalizer is said to have resurrected the object.
It is unspecified whether the finalizer can also resurrect the object by adding a new reference to the object that does not make it reachable, i.e., the object is (still) a member of a cyclic isolate.

If the finalizer resurrected the object, the object's pending destruction is canceled and the object's finalized mark might be removed if present. Currently, Python never removes the finalized mark; this might change in a future version.

Automatic finalization refers to any finalization performed by Python except via calls to PyObject_CallFinalizer() or PyObject_CallFinalizerFromDealloc(). No guarantees are made about when, if, or how often an object is automatically finalized, except:

- Python will not automatically finalize an object if it is reachable, i.e., there is a reference to it and it is not a member of a cyclic isolate.
- Python will not automatically finalize an object if finalizing it would not mark the object as finalized. Currently, this applies to objects whose type does not support garbage collection, i.e., the Py_TPFLAGS_HAVE_GC flag is not set.
Such objects can still be manually finalized by calling PyObject_CallFinalizer() or PyObject_CallFinalizerFromDealloc().
- Python will not automatically finalize any two members of a cyclic isolate concurrently.
- Python will not automatically finalize an object after it has automatically cleared (tp_clear) the object.
- If an object is a member of a cyclic isolate, Python will not automatically finalize it after automatically clearing (see tp_clear) any other member.
- Python will automatically finalize every member of a cyclic isolate before it automatically clears (see tp_clear) any of them.
- If Python is going to automatically clear an object (tp_clear), it will automatically finalize the object first.

Python currently only automatically finalizes objects that are members of a cyclic isolate, but future versions might finalize objects regularly before their destruction.

To manually finalize an object, do not call this function directly; call PyObject_CallFinalizer() or PyObject_CallFinalizerFromDealloc() instead.

tp_finalize should leave the current exception status unchanged. The recommended way to write a non-trivial finalizer is to back up the exception at the beginning by calling PyErr_GetRaisedException() and restore the exception at the end by calling PyErr_SetRaisedException(). If an exception is encountered in the middle of the finalizer, log and clear it with PyErr_WriteUnraisable() or PyErr_FormatUnraisable(). For example:

static void
foo_finalize(PyObject *self)
{
    // Save the current exception, if any.
    PyObject *exc = PyErr_GetRaisedException();

    // ...

    if (do_something_that_might_raise() != success_indicator) {
        PyErr_WriteUnraisable(self);
        goto done;
    }

done:
    // Restore the saved exception. This silently discards any exception
    // raised above, so be sure to call PyErr_WriteUnraisable first if
    // necessary.
    PyErr_SetRaisedException(exc);
}

Inheritance:
This field is inherited by subtypes.

Added in version 3.4.

Changed in version 3.8: Before version 3.8 it was necessary to set the Py_TPFLAGS_HAVE_FINALIZE flags bit in order for this field to be used. This is no longer required.

See also: PEP 442: "Safe object finalization"; Object Life Cycle for details about how this slot relates to other slots.

vectorcallfunc PyTypeObject.tp_vectorcall

The corresponding slot ID Py_tp_vectorcall is part of the Stable ABI since version 3.14.

A vectorcall function to use for calls of this type object (rather than instances). In other words, tp_vectorcall can be used to optimize type.__call__, which typically returns a new instance of type.

As with any vectorcall function, if tp_vectorcall is NULL, the tp_call protocol (Py_TYPE(type)->tp_call) is used instead.

Note: The vectorcall protocol requires that the vectorcall function has the same behavior as the corresponding tp_call. This means that type->tp_vectorcall must match the behavior of Py_TYPE(type)->tp_call.

Specifically, if type uses the default metaclass, type->tp_vectorcall must behave the same as PyType_Type->tp_call, which:

- calls type->tp_new,
- if the result is a subclass of type, calls type->tp_init on the result of tp_new, and
- returns the result of tp_new.

Typically, tp_vectorcall is overridden to optimize this process for specific tp_new and tp_init. When doing this for user-subclassable types, note that both can be overridden (using __new__() and __init__(), respectively).

Inheritance:
This field is never inherited.

Added in version 3.9: (the field exists since 3.8 but it's only used since 3.9)

unsigned char PyTypeObject.tp_watched

Internal.
Do not use.

Added in version 3.12.

Static Types

Traditionally, types defined in C code are static, that is, a static PyTypeObject structure is defined directly in code and initialized using PyType_Ready().

This results in types that are limited relative to types defined in Python:

- Static types are limited to one base, i.e. they cannot use multiple inheritance.
- Static type objects (but not necessarily their instances) are immutable. It is not possible to add or modify the type object's attributes from Python.
- Static type objects are shared across sub-interpreters, so they should not include any subinterpreter-specific state.

Also, since PyTypeObject is only part of the Limited API as an opaque struct, any extension modules using static types must be compiled for a specific Python minor version.

Heap Types

An alternative to static types is heap-allocated types, or heap types for short, which correspond closely to classes created by Python's class statement. Heap types have the Py_TPFLAGS_HEAPTYPE flag set.

This is done by filling a PyType_Spec structure and calling PyType_FromSpec(), PyType_FromSpecWithBases(), PyType_FromModuleAndSpec(), or PyType_FromMetaclass().

Number Object Structures

type PyNumberMethods

This structure holds pointers to the functions which an object uses to implement the number protocol.
Each function is used by the function of similar name documented in the Number Protocol section.

Here is the structure definition:

typedef struct {
    binaryfunc nb_add;
    binaryfunc nb_subtract;
    binaryfunc nb_multiply;
    binaryfunc nb_remainder;
    binaryfunc nb_divmod;
    ternaryfunc nb_power;
    unaryfunc nb_negative;
    unaryfunc nb_positive;
    unaryfunc nb_absolute;
    inquiry nb_bool;
    unaryfunc nb_invert;
    binaryfunc nb_lshift;
    binaryfunc nb_rshift;
    binaryfunc nb_and;
    binaryfunc nb_xor;
    binaryfunc nb_or;
    unaryfunc nb_int;
    void *nb_reserved;
    unaryfunc nb_float;
    binaryfunc nb_inplace_add;
    binaryfunc nb_inplace_subtract;
    binaryfunc nb_inplace_multiply;
    binaryfunc nb_inplace_remainder;
    ternaryfunc nb_inplace_power;
    binaryfunc nb_inplace_lshift;
    binaryfunc nb_inplace_rshift;
    binaryfunc nb_inplace_and;
    binaryfunc nb_inplace_xor;
    binaryfunc nb_inplace_or;
    binaryfunc nb_floor_divide;
    binaryfunc nb_true_divide;
    binaryfunc nb_inplace_floor_divide;
    binaryfunc nb_inplace_true_divide;
    unaryfunc nb_index;
    binaryfunc nb_matrix_multiply;
    binaryfunc nb_inplace_matrix_multiply;
} PyNumberMethods;

Note: Binary and ternary functions must check the type of all their operands, and implement the necessary conversions (at least one of the operands is an instance of the defined type). If the operation is not defined for the given operands, binary and ternary functions must return Py_NotImplemented; if another error occurred they must return NULL and set an exception.

Note: The nb_reserved field should always be NULL.
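The Py_NotImplemented convention for binary functions maps directly to returning NotImplemented from Python-level operator methods, which lets the interpreter go on to try the other operand's slot. A Python-level sketch; the Metres class is hypothetical:

```python
class Metres:
    """Illustrates the Py_NotImplemented convention for nb_add-style slots."""

    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # nb_add must check its operand types and, for operand
        # combinations it does not define, return Py_NotImplemented
        # (not raise) so the other operand's slot can be tried.
        if isinstance(other, Metres):
            return Metres(self.value + other.value)
        return NotImplemented

    __radd__ = __add__

print((Metres(1) + Metres(2)).value)  # 3
```

If neither operand's slot handles the combination, the interpreter raises TypeError on its own.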
It was previously called nb_long, and was renamed in Python 3.0.1.

binaryfunc PyNumberMethods.nb_add
The corresponding slot ID Py_nb_add is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_subtract
The corresponding slot ID Py_nb_subtract is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_multiply
The corresponding slot ID Py_nb_multiply is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_remainder
The corresponding slot ID Py_nb_remainder is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_divmod
The corresponding slot ID Py_nb_divmod is part of the Stable ABI.

ternaryfunc PyNumberMethods.nb_power
The corresponding slot ID Py_nb_power is part of the Stable ABI.

unaryfunc PyNumberMethods.nb_negative
The corresponding slot ID Py_nb_negative is part of the Stable ABI.

unaryfunc PyNumberMethods.nb_positive
The corresponding slot ID Py_nb_positive is part of the Stable ABI.

unaryfunc PyNumberMethods.nb_absolute
The corresponding slot ID Py_nb_absolute is part of the Stable ABI.

inquiry PyNumberMethods.nb_bool
The corresponding slot ID Py_nb_bool is part of the Stable ABI.

unaryfunc PyNumberMethods.nb_invert
The corresponding slot ID Py_nb_invert is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_lshift
The corresponding slot ID Py_nb_lshift is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_rshift
The corresponding slot ID Py_nb_rshift is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_and
The corresponding slot ID Py_nb_and is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_xor
The corresponding slot ID Py_nb_xor is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_or
The corresponding slot ID Py_nb_or is part of the Stable ABI.

unaryfunc PyNumberMethods.nb_int
The corresponding slot ID Py_nb_int is
part of the Stable ABI.

void *PyNumberMethods.nb_reserved

unaryfunc PyNumberMethods.nb_float
The corresponding slot ID Py_nb_float is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_inplace_add
The corresponding slot ID Py_nb_inplace_add is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_inplace_subtract
The corresponding slot ID Py_nb_inplace_subtract is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_inplace_multiply
The corresponding slot ID Py_nb_inplace_multiply is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_inplace_remainder
The corresponding slot ID Py_nb_inplace_remainder is part of the Stable ABI.

ternaryfunc PyNumberMethods.nb_inplace_power
The corresponding slot ID Py_nb_inplace_power is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_inplace_lshift
The corresponding slot ID Py_nb_inplace_lshift is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_inplace_rshift
The corresponding slot ID Py_nb_inplace_rshift is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_inplace_and
The corresponding slot ID Py_nb_inplace_and is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_inplace_xor
The corresponding slot ID Py_nb_inplace_xor is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_inplace_or
The corresponding slot ID Py_nb_inplace_or is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_floor_divide
The corresponding slot ID Py_nb_floor_divide is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_true_divide
The corresponding slot ID Py_nb_true_divide is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_inplace_floor_divide
The corresponding slot ID Py_nb_inplace_floor_divide is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_inplace_true_divide
The corresponding slot
ID Py_nb_inplace_true_divide is part of the Stable ABI.

unaryfunc PyNumberMethods.nb_index
The corresponding slot ID Py_nb_index is part of the Stable ABI.

binaryfunc PyNumberMethods.nb_matrix_multiply
The corresponding slot ID Py_nb_matrix_multiply is part of the Stable ABI since version 3.5.

binaryfunc PyNumberMethods.nb_inplace_matrix_multiply
The corresponding slot ID Py_nb_inplace_matrix_multiply is part of the Stable ABI since version 3.5.

Mapping Object Structures

type PyMappingMethods

This structure holds pointers to the functions which an object uses to implement the mapping protocol. It has three members:

lenfunc PyMappingMethods.mp_length

The corresponding slot ID Py_mp_length is part of the Stable ABI.

This function is used by PyMapping_Size() and PyObject_Size(), and has the same signature. This slot may be set to NULL if the object has no defined length.

binaryfunc PyMappingMethods.mp_subscript

The corresponding slot ID Py_mp_subscript is part of the Stable ABI.

This function is used by PyObject_GetItem() and PySequence_GetSlice(), and has the same signature as PyObject_GetItem(). This slot must be filled for the PyMapping_Check() function to return 1; it can be NULL otherwise.

objobjargproc PyMappingMethods.mp_ass_subscript

The corresponding slot ID Py_mp_ass_subscript is part of the Stable ABI.

This function is used by PyObject_SetItem(), PyObject_DelItem(), PySequence_SetSlice() and PySequence_DelSlice(). It has the same signature as PyObject_SetItem(), but v can also be set to NULL to delete an item.
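At the Python level, the three mapping slots surface as __len__, __getitem__, and __setitem__/__delitem__ (deletion being the v == NULL case of mp_ass_subscript). A minimal sketch; the Record class is hypothetical:

```python
class Record:
    """Minimal mapping; mirrors mp_length/mp_subscript/mp_ass_subscript."""

    def __init__(self):
        self._data = {}

    def __len__(self):                  # mp_length
        return len(self._data)

    def __getitem__(self, key):         # mp_subscript
        return self._data[key]

    def __setitem__(self, key, value):  # mp_ass_subscript with a value
        self._data[key] = value

    def __delitem__(self, key):         # mp_ass_subscript with v == NULL
        del self._data[key]

r = Record()
r["a"] = 1
del r["a"]
```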
If this slot is NULL, the object does not support item assignment and deletion.

Sequence Object Structures

type PySequenceMethods

This structure holds pointers to the functions which an object uses to implement the sequence protocol.

lenfunc PySequenceMethods.sq_length

The corresponding slot ID Py_sq_length is part of the Stable ABI.

This function is used by PySequence_Size() and PyObject_Size(), and has the same signature. It is also used for handling negative indices via the sq_item and the sq_ass_item slots.

binaryfunc PySequenceMethods.sq_concat

The corresponding slot ID Py_sq_concat is part of the Stable ABI.

This function is used by PySequence_Concat() and has the same signature. It is also used by the + operator, after trying the numeric addition via the nb_add slot.

ssizeargfunc PySequenceMethods.sq_repeat

The corresponding slot ID Py_sq_repeat is part of the Stable ABI.

This function is used by PySequence_Repeat() and has the same signature. It is also used by the * operator, after trying numeric multiplication via the nb_multiply slot.

ssizeargfunc PySequenceMethods.sq_item

The corresponding slot ID Py_sq_item is part of the Stable ABI.

This function is used by PySequence_GetItem() and has the same signature. It is also used by PyObject_GetItem(), after trying the subscription via the mp_subscript slot. This slot must be filled for the PySequence_Check() function to return 1; it can be NULL otherwise.

Negative indexes are handled as follows: if the sq_length slot is filled, it is called and the sequence length is used to compute a positive index which is passed to sq_item. If sq_length is NULL, the index is passed as is to the function.

ssizeobjargproc PySequenceMethods.sq_ass_item

The corresponding slot ID Py_sq_ass_item is part of the Stable ABI.

This function is used by PySequence_SetItem() and has the same signature.
It is also used by PyObject_SetItem() and PyObject_DelItem(), after trying the item assignment and deletion via the mp_ass_subscript slot. This slot may be left to NULL if the object does not support item assignment and deletion.

objobjproc PySequenceMethods.sq_contains

The corresponding slot ID Py_sq_contains is part of the Stable ABI.

This function may be used by PySequence_Contains() and has the same signature. This slot may be left to NULL; in this case PySequence_Contains() simply traverses the sequence until it finds a match.

binaryfunc PySequenceMethods.sq_inplace_concat

The corresponding slot ID Py_sq_inplace_concat is part of the Stable ABI.

This function is used by PySequence_InPlaceConcat() and has the same signature. It should modify its first operand, and return it. This slot may be left to NULL; in this case PySequence_InPlaceConcat() will fall back to PySequence_Concat(). It is also used by the augmented assignment +=, after trying numeric in-place addition via the nb_inplace_add slot.

ssizeargfunc PySequenceMethods.sq_inplace_repeat

The corresponding slot ID Py_sq_inplace_repeat is part of the Stable ABI.

This function is used by PySequence_InPlaceRepeat() and has the same signature. It should modify its first operand, and return it. This slot may be left to NULL; in this case PySequence_InPlaceRepeat() will fall back to PySequence_Repeat(). It is also used by the augmented assignment *=, after trying numeric in-place multiplication via the nb_inplace_multiply slot.

Buffer Object Structures

type PyBufferProcs

This structure holds pointers to the functions required by the Buffer protocol.
The protocol defines how an exporter object can expose its internal data to consumer objects.

- getbufferproc PyBufferProcs.bf_getbuffer
  The corresponding slot ID Py_bf_getbuffer is part of the Stable ABI since version 3.11. The signature of this function is:

      int (PyObject *exporter, Py_buffer *view, int flags);

  Handle a request to exporter to fill in view as specified by flags. Except for point (3), an implementation of this function MUST take these steps:

  1. Check if the request can be met. If not, raise BufferError, set view->obj to NULL and return -1.
  2. Fill in the requested fields.
  3. Increment an internal counter for the number of exports.
  4. Set view->obj to exporter and increment view->obj.
  5. Return 0.

  If exporter is part of a chain or tree of buffer providers, two main schemes can be used:

  - Re-export: Each member of the tree acts as the exporting object and sets view->obj to a new reference to itself.
  - Redirect: The buffer request is redirected to the root object of the tree. Here, view->obj will be a new reference to the root object.

  The individual fields of view are described in section Buffer structure; the rules for how an exporter must react to specific requests are in section Buffer request types.

  All memory pointed to in the Py_buffer structure belongs to the exporter and must remain valid until there are no consumers left. format, shape, strides, suboffsets and internal are read-only for the consumer.

  PyBuffer_FillInfo() provides an easy way of exposing a simple bytes buffer while dealing correctly with all request types.

  PyObject_GetBuffer() is the interface for the consumer that wraps this function.

- releasebufferproc PyBufferProcs.bf_releasebuffer
  The corresponding slot ID Py_bf_releasebuffer is part of the Stable ABI since version 3.11. The signature of this function is:

      void (PyObject *exporter, Py_buffer *view);

  Handle a request to release the resources of the buffer.
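From the consumer side, the request/release pair implemented by bf_getbuffer and bf_releasebuffer is what memoryview drives. A small Python-level sketch of the observable behaviour (it illustrates the protocol's effects, not the C slots themselves):

```python
data = bytearray(b"hello")
view = memoryview(data)      # consumer side: PyObject_GetBuffer
assert view.format == 'B'    # view fields are read-only for the consumer
assert view.nbytes == 5
try:
    data += b"!"             # exporter refuses to resize while exported
    raise AssertionError("resize should have failed")
except BufferError:
    pass
view.release()               # consumer side: PyBuffer_Release
data += b"!"                 # with no exports left, resizing works again
assert bytes(data) == b"hello!"
```

The BufferError above is the exporter enforcing that exported memory must remain valid until there are no consumers left.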
If no resources need to be released, PyBufferProcs.bf_releasebuffer may be NULL. Otherwise, a standard implementation of this function will take these optional steps:

  1. Decrement an internal counter for the number of exports.
  2. If the counter is 0, free all memory associated with view.

  The exporter MUST use the internal field to keep track of buffer-specific resources. This field is guaranteed to remain constant, while a consumer MAY pass a copy of the original buffer as the view argument.

  This function MUST NOT decrement view->obj, since that is done automatically in PyBuffer_Release() (this scheme is useful for breaking reference cycles).

  PyBuffer_Release() is the interface for the consumer that wraps this function.

Async Object Structures

Added in version 3.5.

- type PyAsyncMethods
  This structure holds pointers to the functions required to implement awaitable and asynchronous iterator objects.

  Here is the structure definition:

      typedef struct {
          unaryfunc am_await;
          unaryfunc am_aiter;
          unaryfunc am_anext;
          sendfunc am_send;
      } PyAsyncMethods;

- unaryfunc PyAsyncMethods.am_await
  The corresponding slot ID Py_am_await is part of the Stable ABI since version 3.5. The signature of this function is:

      PyObject *am_await(PyObject *self);

  The returned object must be an iterator, i.e. PyIter_Check() must return 1 for it. This slot may be set to NULL if an object is not an awaitable.

- unaryfunc PyAsyncMethods.am_aiter
  The corresponding slot ID Py_am_aiter is part of the Stable ABI since version 3.5. The signature of this function is:

      PyObject *am_aiter(PyObject *self);

  Must return an asynchronous iterator object.
See __anext__() for details. This slot may be set to NULL if an object does not implement the asynchronous iteration protocol.

- unaryfunc PyAsyncMethods.am_anext
  The corresponding slot ID Py_am_anext is part of the Stable ABI since version 3.5. The signature of this function is:

      PyObject *am_anext(PyObject *self);

  Must return an awaitable object. See __anext__() for details. This slot may be set to NULL.

- sendfunc PyAsyncMethods.am_send
  The corresponding slot ID Py_am_send is part of the Stable ABI since version 3.10. The signature of this function is:

      PySendResult am_send(PyObject *self, PyObject *arg, PyObject **result);

  See PyIter_Send() for details. This slot may be set to NULL.

  Added in version 3.10.

Slot Type typedefs

- typedef PyObject *(*allocfunc)(PyTypeObject *cls, Py_ssize_t nitems)
  Part of the Stable ABI. The purpose of this function is to separate memory allocation from memory initialization. It should return a pointer to a block of memory of adequate length for the instance, suitably aligned, and initialized to zeros, but with ob_refcnt set to 1 and ob_type set to the type argument.
If the type's tp_itemsize is non-zero, the object's ob_size field should be initialized to nitems and the length of the allocated memory block should be tp_basicsize + nitems*tp_itemsize, rounded up to a multiple of sizeof(void*); otherwise, nitems is not used and the length of the block should be tp_basicsize.

  This function should not do any other instance initialization, not even to allocate additional memory; that should be done by tp_new.

- typedef void (*destructor)(PyObject*)
  Part of the Stable ABI.

- typedef PyObject *(*newfunc)(PyTypeObject*, PyObject*, PyObject*)
  Part of the Stable ABI. See tp_new.

- typedef PyObject *(*reprfunc)(PyObject*)
  Part of the Stable ABI. See tp_repr.

- typedef PyObject *(*getattrfunc)(PyObject *self, char *attr)
  Part of the Stable ABI. Return the value of the named attribute for the object.

- typedef int (*setattrfunc)(PyObject *self, char *attr, PyObject *value)
  Part of the Stable ABI. Set the value of the named attribute for the object. The value argument is set to NULL to delete the attribute.

- typedef PyObject *(*getattrofunc)(PyObject *self, PyObject *attr)
  Part of the Stable ABI. Return the value of the named attribute for the object. See tp_getattro.

- typedef int (*setattrofunc)(PyObject *self, PyObject *attr, PyObject *value)
  Part of the Stable ABI. Set the value of the named attribute for the object.
The value argument is set to NULL to delete the attribute. See tp_setattro.

- typedef PyObject *(*descrgetfunc)(PyObject*, PyObject*, PyObject*)
  Part of the Stable ABI. See tp_descr_get.

- typedef int (*descrsetfunc)(PyObject*, PyObject*, PyObject*)
  Part of the Stable ABI. See tp_descr_set.

- typedef Py_hash_t (*hashfunc)(PyObject*)
  Part of the Stable ABI. See tp_hash.

- typedef PyObject *(*richcmpfunc)(PyObject*, PyObject*, int)
  Part of the Stable ABI. See tp_richcompare.

- typedef PyObject *(*getiterfunc)(PyObject*)
  Part of the Stable ABI. See tp_iter.

- typedef PyObject *(*iternextfunc)(PyObject*)
  Part of the Stable ABI. See tp_iternext.

- typedef Py_ssize_t (*lenfunc)(PyObject*)
  Part of the Stable ABI.

- typedef int (*getbufferproc)(PyObject*, Py_buffer*, int)
  Part of the Stable ABI since version 3.12.

- typedef void (*releasebufferproc)(PyObject*, Py_buffer*)
  Part of the Stable ABI since version 3.12.

- typedef PyObject *(*unaryfunc)(PyObject*)
  Part of the Stable ABI.

- typedef PyObject *(*binaryfunc)(PyObject*, PyObject*)
  Part of the Stable ABI.

- typedef PyObject *(*ssizeargfunc)(PyObject*, Py_ssize_t)
  Part of the Stable ABI.

- typedef int (*ssizeobjargproc)(PyObject*, Py_ssize_t, PyObject*)
  Part of the Stable ABI.

- typedef int (*objobjproc)(PyObject*, PyObject*)
  Part of the Stable ABI.

- typedef int (*objobjargproc)(PyObject*, PyObject*, PyObject*)
  Part of the Stable ABI.

Examples

The following are simple examples of Python type definitions. They include common usage you may encounter. Some demonstrate tricky corner cases.
For more examples, practical info, and a tutorial, see Defining Extension Types: Tutorial and Defining Extension Types: Assorted Topics.

A basic static type:

    typedef struct {
        PyObject_HEAD
        const char *data;
    } MyObject;

    static PyTypeObject MyObject_Type = {
        PyVarObject_HEAD_INIT(NULL, 0)
        .tp_name = "mymod.MyObject",
        .tp_basicsize = sizeof(MyObject),
        .tp_doc = PyDoc_STR("My objects"),
        .tp_new = myobj_new,
        .tp_dealloc = (destructor)myobj_dealloc,
        .tp_repr = (reprfunc)myobj_repr,
    };

You may also find older code (especially in the CPython code base) with a more verbose initializer:

    static PyTypeObject MyObject_Type = {
        PyVarObject_HEAD_INIT(NULL, 0)
        "mymod.MyObject",             /* tp_name */
        sizeof(MyObject),             /* tp_basicsize */
        0,                            /* tp_itemsize */
        (destructor)myobj_dealloc,    /* tp_dealloc */
        0,                            /* tp_vectorcall_offset */
        0,                            /* tp_getattr */
        0,                            /* tp_setattr */
        0,                            /* tp_as_async */
        (reprfunc)myobj_repr,         /* tp_repr */
        0,                            /* tp_as_number */
        0,                            /* tp_as_sequence */
        0,                            /* tp_as_mapping */
        0,                            /* tp_hash */
        0,                            /* tp_call */
        0,                            /* tp_str */
        0,                            /* tp_getattro */
        0,                            /* tp_setattro */
        0,                            /* tp_as_buffer */
        0,                            /* tp_flags */
        PyDoc_STR("My objects"),      /* tp_doc */
        0,                            /* tp_traverse */
        0,                            /* tp_clear */
        0,                            /* tp_richcompare */
        0,                            /* tp_weaklistoffset */
        0,                            /* tp_iter */
        0,                            /* tp_iternext */
        0,                            /* tp_methods */
        0,                            /* tp_members */
        0,                            /* tp_getset */
        0,                            /* tp_base */
        0,                            /* tp_dict */
        0,                            /* tp_descr_get */
        0,                            /* tp_descr_set */
        0,                            /* tp_dictoffset */
        0,                            /* tp_init */
        0,                            /* tp_alloc */
        myobj_new,                    /* tp_new */
    };

A type that supports weakrefs, instance dicts, and hashing:

    typedef struct {
        PyObject_HEAD
        const char *data;
    } MyObject;

    static PyTypeObject MyObject_Type = {
        PyVarObject_HEAD_INIT(NULL, 0)
        .tp_name = "mymod.MyObject",
        .tp_basicsize = sizeof(MyObject),
        .tp_doc = PyDoc_STR("My objects"),
        .tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE |
                    Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_MANAGED_DICT |
                    Py_TPFLAGS_MANAGED_WEAKREF,
        .tp_new = myobj_new,
        .tp_traverse = (traverseproc)myobj_traverse,
        .tp_clear = (inquiry)myobj_clear,
        .tp_alloc = PyType_GenericNew,
        .tp_dealloc = (destructor)myobj_dealloc,
        .tp_repr = (reprfunc)myobj_repr,
        .tp_hash = (hashfunc)myobj_hash,
        .tp_richcompare = PyBaseObject_Type.tp_richcompare,
    };

A str subclass that cannot be subclassed and cannot be called to create instances (e.g. uses a separate factory func) using the Py_TPFLAGS_DISALLOW_INSTANTIATION flag:

    typedef struct {
        PyUnicodeObject raw;
        char *extra;
    } MyStr;

    static PyTypeObject MyStr_Type = {
        PyVarObject_HEAD_INIT(NULL, 0)
        .tp_name = "mymod.MyStr",
        .tp_basicsize = sizeof(MyStr),
        .tp_base = NULL,  // set to &PyUnicode_Type in module init
        .tp_doc = PyDoc_STR("my custom str"),
        .tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_DISALLOW_INSTANTIATION,
        .tp_repr = (reprfunc)myobj_repr,
    };

The simplest static type with fixed-length instances:

    typedef struct {
        PyObject_HEAD
    } MyObject;

    static PyTypeObject MyObject_Type = {
        PyVarObject_HEAD_INIT(NULL, 0)
        .tp_name = "mymod.MyObject",
    };

The simplest static type with variable-length instances:

    typedef struct {
        PyObject_VAR_HEAD
        const char *data[1];
    } MyObject;

    static PyTypeObject MyObject_Type = {
        PyVarObject_HEAD_INIT(NULL, 0)
        .tp_name = "mymod.MyObject",
        .tp_basicsize = sizeof(MyObject) - sizeof(char *),
        .tp_itemsize = sizeof(char *),
    };
4. More Control Flow Tools

As well as the while statement just introduced, Python uses a few more that we will encounter in this chapter.

4.1. if Statements

Perhaps the most well-known statement type is the if statement. For example:

    >>> x = int(input("Please enter an integer: "))
    Please enter an integer: 42
    >>> if x < 0:
    ...     x = 0
    ...     print('Negative changed to zero')
    ... elif x == 0:
    ...     print('Zero')
    ... elif x == 1:
    ...     print('Single')
    ... else:
    ...     print('More')
    ...
    More

There can be zero or more elif parts, and the else part is optional. The keyword 'elif' is short for 'else if', and is useful to avoid excessive indentation. An if ... elif ... elif ... sequence is a substitute for the switch or case statements found in other languages.

If you're comparing the same value to several constants, or checking for specific types or attributes, you may also find the match statement useful. For more details see match Statements.

4.2. for Statements

The for statement in Python differs a bit from what you may be used to in C or Pascal. Rather than always iterating over an arithmetic progression of numbers (like in Pascal), or giving the user the ability to define both the iteration step and halting condition (as C), Python's for statement iterates over the items of any sequence (a list or a string), in the order that they appear in the sequence. For example (no pun intended):

    >>> # Measure some strings:
    >>> words = ['cat', 'window', 'defenestrate']
    >>> for w in words:
    ...     print(w, len(w))
    ...
    cat 3
    window 6
    defenestrate 12

Code that modifies a collection while iterating over that same collection can be tricky to get right.
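A minimal sketch of the pitfall: CPython's dict iterators detect the mutation and raise RuntimeError rather than silently misbehaving:

```python
users = {'Hans': 'active', 'Eleonore': 'inactive'}
try:
    for user, status in users.items():
        if status == 'inactive':
            del users[user]        # mutating the dict mid-iteration
except RuntimeError as exc:
    print(exc)   # "dictionary changed size during iteration"
```

Lists do not raise here, but deleting while iterating can silently skip elements, which is the same trap in a quieter form.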
Instead, it is usually more straightforward to loop over a copy of the collection or to create a new collection:

    # Create a sample collection
    users = {'Hans': 'active', 'Éléonore': 'inactive', '景太郎': 'active'}

    # Strategy: Iterate over a copy
    for user, status in users.copy().items():
        if status == 'inactive':
            del users[user]

    # Strategy: Create a new collection
    active_users = {}
    for user, status in users.items():
        if status == 'active':
            active_users[user] = status

4.3. The range() Function

If you do need to iterate over a sequence of numbers, the built-in function range() comes in handy. It generates arithmetic progressions:

    >>> for i in range(5):
    ...     print(i)
    ...
    0
    1
    2
    3
    4

The given end point is never part of the generated sequence; range(10) generates 10 values, the legal indices for items of a sequence of length 10. It is possible to let the range start at another number, or to specify a different increment (even negative; sometimes this is called the 'step'):

    >>> list(range(5, 10))
    [5, 6, 7, 8, 9]
    >>> list(range(0, 10, 3))
    [0, 3, 6, 9]
    >>> list(range(-10, -100, -30))
    [-10, -40, -70]

To iterate over the indices of a sequence, you can combine range() and len() as follows:

    >>> a = ['Mary', 'had', 'a', 'little', 'lamb']
    >>> for i in range(len(a)):
    ...     print(i, a[i])
    ...
    0 Mary
    1 had
    2 a
    3 little
    4 lamb

In most such cases, however, it is convenient to use the enumerate() function; see Looping Techniques.

A strange thing happens if you just print a range:

    >>> range(10)
    range(0, 10)

In many ways the object returned by range() behaves as if it is a list, but in fact it isn't.
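That space saving is easy to observe: a range over a million numbers is a small constant-size object, yet it supports len(), indexing, and membership tests (a quick sketch; exact object sizes are CPython implementation details):

```python
import sys

r = range(1_000_000)
assert len(r) == 1_000_000
assert r[999_999] == 999_999    # items are computed, not stored
assert 500_000 in r             # membership test is plain arithmetic
# The range object stays tiny; even a 1000-element list is far bigger:
assert sys.getsizeof(r) < sys.getsizeof(list(range(1_000)))
```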
It is an object which returns the successive items of the desired sequence when you iterate over it, but it doesn't really make the list, thus saving space.

We say such an object is iterable, that is, suitable as a target for functions and constructs that expect something from which they can obtain successive items until the supply is exhausted. We have seen that the for statement is such a construct, while an example of a function that takes an iterable is sum():

    >>> sum(range(4))  # 0 + 1 + 2 + 3
    6

Later we will see more functions that return iterables and take iterables as arguments. In chapter Data Structures, we will discuss list() in more detail.

4.4. break and continue Statements

The break statement breaks out of the innermost enclosing for or while loop:

    >>> for n in range(2, 10):
    ...     for x in range(2, n):
    ...         if n % x == 0:
    ...             print(f"{n} equals {x} * {n//x}")
    ...             break
    ...
    4 equals 2 * 2
    6 equals 2 * 3
    8 equals 2 * 4
    9 equals 3 * 3

The continue statement continues with the next iteration of the loop:

    >>> for num in range(2, 10):
    ...     if num % 2 == 0:
    ...         print(f"Found an even number {num}")
    ...         continue
    ...     print(f"Found an odd number {num}")
    ...
    Found an even number 2
    Found an odd number 3
    Found an even number 4
    Found an odd number 5
    Found an even number 6
    Found an odd number 7
    Found an even number 8
    Found an odd number 9

4.5. else Clauses on Loops

In a for or while loop the break statement may be paired with an else clause. If the loop finishes without executing the break, the else clause executes.

In a for loop, the else clause is executed after the loop finishes its final iteration, that is, if no break occurred. In a while loop, it's executed after the loop's condition becomes false. In either kind of loop, the else clause is not executed if the loop was terminated by a break.
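The while variant can be sketched just as compactly; here the else body runs because the condition became false without a break:

```python
n, divisor = 5, 2
while divisor < n:
    if n % divisor == 0:
        break             # a break here would skip the else clause
    divisor += 1
else:
    print(n, "is prime")  # runs: the condition became false, no break
```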
Of course, other ways of ending the loop early, such as a return or a raised exception, will also skip execution of the else clause.

This is exemplified in the following for loop, which searches for prime numbers:

    >>> for n in range(2, 10):
    ...     for x in range(2, n):
    ...         if n % x == 0:
    ...             print(n, 'equals', x, '*', n//x)
    ...             break
    ...     else:
    ...         # loop fell through without finding a factor
    ...         print(n, 'is a prime number')
    ...
    2 is a prime number
    3 is a prime number
    4 equals 2 * 2
    5 is a prime number
    6 equals 2 * 3
    7 is a prime number
    8 equals 2 * 4
    9 equals 3 * 3

(Yes, this is the correct code. Look closely: the else clause belongs to the for loop, not the if statement.)

One way to think of the else clause is to imagine it paired with the if inside the loop. As the loop executes, it will run a sequence like if/if/if/else. The if is inside the loop, encountered a number of times. If the condition is ever true, a break will happen. If the condition is never true, the else clause outside the loop will execute.

When used with a loop, the else clause has more in common with the else clause of a try statement than it does with that of if statements: a try statement's else clause runs when no exception occurs, and a loop's else clause runs when no break occurs. For more on the try statement and exceptions, see Handling Exceptions.

4.6. pass Statements

The pass statement does nothing. It can be used when a statement is required syntactically but the program requires no action. For example:

    >>> while True:
    ...     pass  # Busy-wait for keyboard interrupt (Ctrl+C)
    ...

This is commonly used for creating minimal classes:

    >>> class MyEmptyClass:
    ...     pass
    ...

Another place pass can be used is as a place-holder for a function or conditional body when you are working on new code, allowing you to keep thinking at a more abstract level. The pass is silently ignored:

    >>> def initlog(*args):
    ...     pass  # Remember to implement this!
    ...

For this last case, many people use the ellipsis literal ... instead of pass. This use has no special meaning to Python, and is not part of the language definition (you could use any constant expression here), but ... is used conventionally as a placeholder body as well. See The Ellipsis Object.

4.7. match Statements

A match statement takes an expression and compares its value to successive patterns given as one or more case blocks. This is superficially similar to a switch statement in C, Java or JavaScript (and many other languages), but it's more similar to pattern matching in languages like Rust or Haskell. Only the first pattern that matches gets executed and it can also extract components (sequence elements or object attributes) from the value into variables. If no case matches, none of the branches is executed.

The simplest form compares a subject value against one or more literals:

    def http_error(status):
        match status:
            case 400:
                return "Bad request"
            case 404:
                return "Not found"
            case 418:
                return "I'm a teapot"
            case _:
                return "Something's wrong with the internet"

Note the last block: the "variable name" _ acts as a wildcard and never fails to match.

You can combine several literals in a single pattern using | ("or"):

    case 401 | 403 | 404:
        return "Not allowed"

Patterns can look like unpacking assignments, and can be used to bind variables:

    # point is an (x, y) tuple
    match point:
        case (0, 0):
            print("Origin")
        case (0, y):
            print(f"Y={y}")
        case (x, 0):
            print(f"X={x}")
        case (x, y):
            print(f"X={x}, Y={y}")
        case _:
            raise ValueError("Not a point")

Study that one carefully! The first pattern has two literals, and can be thought of as an extension of the literal pattern shown above. But the next two patterns combine a literal and a variable, and the variable binds a value from the subject (point).
The fourth pattern captures two values, which makes it conceptually similar to the unpacking assignment (x, y) = point.

If you are using classes to structure your data you can use the class name followed by an argument list resembling a constructor, but with the ability to capture attributes into variables:

    class Point:
        def __init__(self, x, y):
            self.x = x
            self.y = y

    def where_is(point):
        match point:
            case Point(x=0, y=0):
                print("Origin")
            case Point(x=0, y=y):
                print(f"Y={y}")
            case Point(x=x, y=0):
                print(f"X={x}")
            case Point():
                print("Somewhere else")
            case _:
                print("Not a point")

You can use positional parameters with some builtin classes that provide an ordering for their attributes (e.g. dataclasses). You can also define a specific position for attributes in patterns by setting the __match_args__ special attribute in your classes. If it's set to ("x", "y"), the following patterns are all equivalent (and all bind the y attribute to the var variable):

    Point(1, var)
    Point(1, y=var)
    Point(x=1, y=var)
    Point(y=var, x=1)

A recommended way to read patterns is to look at them as an extended form of what you would put on the left of an assignment, to understand which variables would be set to what.

Only the standalone names (like var above) are assigned to by a match statement. Dotted names (like foo.bar), attribute names (the x= and y= above) or class names (recognized by the "(...)" next to them like Point above) are never assigned to.

Patterns can be arbitrarily nested.
For example, if we have a short list of Points, with __match_args__ added, we could match it like this:

    class Point:
        __match_args__ = ('x', 'y')
        def __init__(self, x, y):
            self.x = x
            self.y = y

    match points:
        case []:
            print("No points")
        case [Point(0, 0)]:
            print("The origin")
        case [Point(x, y)]:
            print(f"Single point {x}, {y}")
        case [Point(0, y1), Point(0, y2)]:
            print(f"Two on the Y axis at {y1}, {y2}")
        case _:
            print("Something else")

We can add an if clause to a pattern, known as a "guard". If the guard is false, match goes on to try the next case block. Note that value capture happens before the guard is evaluated:

    match point:
        case Point(x, y) if x == y:
            print(f"Y=X at {x}")
        case Point(x, y):
            print("Not on the diagonal")

Several other key features of this statement:

- Like unpacking assignments, tuple and list patterns have exactly the same meaning and actually match arbitrary sequences. An important exception is that they don't match iterators or strings.
- Sequence patterns support extended unpacking: [x, y, *rest] and (x, y, *rest) work similar to unpacking assignments. The name after * may also be _, so (x, y, *_) matches a sequence of at least two items without binding the remaining items.
- Mapping patterns: {"bandwidth": b, "latency": l} captures the "bandwidth" and "latency" values from a dictionary. Unlike sequence patterns, extra keys are ignored. An unpacking like **rest is also supported. (But **_ would be redundant, so it is not allowed.)
- Subpatterns may be captured using the as keyword: case (Point(x1, y1), Point(x2, y2) as p2): ... will capture the second element of the input as p2 (as long as the input is a sequence of two points).
- Most literals are compared by equality; however, the singletons True, False and None are compared by identity.
- Patterns may use named constants.
These must be dotted names to prevent them from being interpreted as capture variables:

    from enum import Enum

    class Color(Enum):
        RED = 'red'
        GREEN = 'green'
        BLUE = 'blue'

    color = Color(input("Enter your choice of 'red', 'blue' or 'green': "))

    match color:
        case Color.RED:
            print("I see red!")
        case Color.GREEN:
            print("Grass is green")
        case Color.BLUE:
            print("I'm feeling the blues :(")

For a more detailed explanation and additional examples, you can look into PEP 636 which is written in a tutorial format.

4.8. Defining Functions

We can create a function that writes the Fibonacci series to an arbitrary boundary:

    >>> def fib(n):    # write Fibonacci series less than n
    ...     """Print a Fibonacci series less than n."""
    ...     a, b = 0, 1
    ...     while a < n:
    ...         print(a, end=' ')
    ...         a, b = b, a+b
    ...     print()
    ...
    >>> # Now call the function we just defined:
    >>> fib(2000)
    0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597

The keyword def introduces a function definition. It must be followed by the function name and the parenthesized list of formal parameters. The statements that form the body of the function start at the next line, and must be indented.

The first statement of the function body can optionally be a string literal; this string literal is the function's documentation string, or docstring. (More about docstrings can be found in the section Documentation Strings.) There are tools which use docstrings to automatically produce online or printed documentation, or to let the user interactively browse through code; it's good practice to include docstrings in code that you write, so make a habit of it.

The execution of a function introduces a new symbol table used for the local variables of the function.
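For instance, a plain reference finds a global name, while an assignment makes the name local unless it is declared global; a minimal sketch:

```python
x = 10

def read_global():
    return x      # reference only: found in the global symbol table

def shadow():
    x = 99        # assignment makes x local to shadow()
    return x

def rebind():
    global x      # named in a global statement, so assignable
    x = 11

assert read_global() == 10
assert shadow() == 99 and x == 10   # the module-level x is untouched
rebind()
assert x == 11                      # now the global was reassigned
```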
More precisely, all variable assignments in a function store the value in the local symbol table; whereas variable references first look in the local symbol table, then in the local symbol tables of enclosing functions, then in the global symbol table, and finally in the table of built-in names. Thus, global variables and variables of enclosing functions cannot be directly assigned a value within a function (unless, for global variables, named in a global statement, or, for variables of enclosing functions, named in a nonlocal statement), although they may be referenced.

The actual parameters (arguments) to a function call are introduced in the local symbol table of the called function when it is called; thus, arguments are passed using call by value (where the value is always an object reference, not the value of the object). [1] When a function calls another function, or calls itself recursively, a new local symbol table is created for that call.

A function definition associates the function name with the function object in the current symbol table. The interpreter recognizes the object pointed to by that name as a user-defined function. Other names can also point to that same function object and can also be used to access the function:

    >>> fib

    >>> f = fib
    >>> f(100)
    0 1 1 2 3 5 8 13 21 34 55 89

Coming from other languages, you might object that fib is not a function but a procedure since it doesn't return a value. In fact, even functions without a return statement do return a value, albeit a rather boring one. This value is called None (it's a built-in name). Writing the value None is normally suppressed by the interpreter if it would be the only value written. You can see it if you really want to using print():

    >>> fib(0)
    >>> print(fib(0))
    None

It is simple to write a function that returns a list of the numbers of the Fibonacci series, instead of printing it:

    >>> def fib2(n):   # return Fibonacci series up to n
    ...     """Return a list containing the Fibonacci series up to n."""
    ...     result = []
    ...     a, b = 0, 1
    ...     while a < n:
    ...         result.append(a)    # see below
    ...         a, b = b, a+b
    ...     return result
    ...
    >>> f100 = fib2(100)    # call it
    >>> f100                # write the result
    [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

This example, as usual, demonstrates some new Python features:

- The return statement returns with a value from a function. return without an expression argument returns None. Falling off the end of a function also returns None.
- The statement result.append(a) calls a method of the list object result. A method is a function that 'belongs' to an object and is named obj.methodname, where obj is some object (this may be an expression), and methodname is the name of a method that is defined by the object's type. Different types define different methods. Methods of different types may have the same name without causing ambiguity. (It is possible to define your own object types and methods, using classes; see Classes.) The method append() shown in the example is defined for list objects; it adds a new element at the end of the list. In this example it is equivalent to result = result + [a], but more efficient.

4.9. More on Defining Functions

It is also possible to define functions with a variable number of arguments. There are three forms, which can be combined.

4.9.1. Default Argument Values

The most useful form is to specify a default value for one or more arguments. This creates a function that can be called with fewer arguments than it is defined to allow.
For example:

    def ask_ok(prompt, retries=4, reminder='Please try again!'):
        while True:
            reply = input(prompt)
            if reply in {'y', 'ye', 'yes'}:
                return True
            if reply in {'n', 'no', 'nop', 'nope'}:
                return False
            retries = retries - 1
            if retries < 0:
                raise ValueError('invalid user response')
            print(reminder)

This function can be called in several ways:

- giving only the mandatory argument: ask_ok('Do you really want to quit?')
- giving one of the optional arguments: ask_ok('OK to overwrite the file?', 2)
- or even giving all arguments: ask_ok('OK to overwrite the file?', 2, 'Come on, only yes or no!')

This example also introduces the in keyword. This tests whether or not a sequence contains a certain value.

The default values are evaluated at the point of function definition in the defining scope, so that

    i = 5

    def f(arg=i):
        print(arg)

    i = 6
    f()

will print 5.

Important warning: The default value is evaluated only once. This makes a difference when the default is a mutable object such as a list, dictionary, or instances of most classes. For example, the following function accumulates the arguments passed to it on subsequent calls:

    def f(a, L=[]):
        L.append(a)
        return L

    print(f(1))
    print(f(2))
    print(f(3))

This will print

    [1]
    [1, 2]
    [1, 2, 3]

If you don't want the default to be shared between subsequent calls, you can write the function like this instead:

    def f(a, L=None):
        if L is None:
            L = []
        L.append(a)
        return L

4.9.2. Keyword Arguments

Functions can also be called using keyword arguments of the form kwarg=value. For instance, the following function:

    def parrot(voltage, state='a stiff', action='voom', type='Norwegian Blue'):
        print("-- This parrot wouldn't", action, end=' ')
        print("if you put", voltage, "volts through it.")
        print("-- Lovely plumage, the", type)
        print("-- It's", state, "!")

accepts one required argument (voltage) and three optional arguments (state, action, and type).
This function can be called in any of the following ways:

    parrot(1000)                                          # 1 positional argument
    parrot(voltage=1000)                                  # 1 keyword argument
    parrot(voltage=1000000, action='VOOOOOM')             # 2 keyword arguments
    parrot(action='VOOOOOM', voltage=1000000)             # 2 keyword arguments
    parrot('a million', 'bereft of life', 'jump')         # 3 positional arguments
    parrot('a thousand', state='pushing up the daisies')  # 1 positional, 1 keyword

but all the following calls would be invalid:

    parrot()                     # required argument missing
    parrot(voltage=5.0, 'dead')  # non-keyword argument after a keyword argument
    parrot(110, voltage=220)     # duplicate value for the same argument
    parrot(actor='John Cleese')  # unknown keyword argument

In a function call, keyword arguments must follow positional arguments. All the keyword arguments passed must match one of the arguments accepted by the function (e.g. actor is not a valid argument for the parrot function), and their order is not important. This also includes non-optional arguments (e.g. parrot(voltage=1000) is valid too). No argument may receive a value more than once. Here's an example that fails due to this restriction:

    >>> def function(a):
    ...     pass
    ...
    >>> function(0, a=0)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: function() got multiple values for argument 'a'

When a final formal parameter of the form **name is present, it receives a dictionary (see Mapping Types — dict) containing all keyword arguments except for those corresponding to a formal parameter. This may be combined with a formal parameter of the form *name (described in the next subsection) which receives a tuple containing the positional arguments beyond the formal parameter list. (*name must occur before **name.)
For example, if we define a function like this:

def cheeseshop(kind, *arguments, **keywords):
    print("-- Do you have any", kind, "?")
    print("-- I'm sorry, we're all out of", kind)
    for arg in arguments:
        print(arg)
    print("-" * 40)
    for kw in keywords:
        print(kw, ":", keywords[kw])

It could be called like this:

cheeseshop("Limburger", "It's very runny, sir.",
           "It's really very, VERY runny, sir.",
           shopkeeper="Michael Palin",
           client="John Cleese",
           sketch="Cheese Shop Sketch")

and of course it would print:

-- Do you have any Limburger ?
-- I'm sorry, we're all out of Limburger
It's very runny, sir.
It's really very, VERY runny, sir.
----------------------------------------
shopkeeper : Michael Palin
client : John Cleese
sketch : Cheese Shop Sketch

Note that the order in which the keyword arguments are printed is guaranteed to match the order in which they were provided in the function call.

4.9.3. Special parameters

By default, arguments may be passed to a Python function either by position or explicitly by keyword. For readability and performance, it makes sense to restrict the way arguments can be passed so that a developer need only look at the function definition to determine if items are passed by position, by position or keyword, or by keyword.

A function definition may look like:

def f(pos1, pos2, /, pos_or_kwd, *, kwd1, kwd2):
      -----------    ----------     ----------
        |             |                  |
        |        Positional or keyword   |
        |                                - Keyword only
         -- Positional only

where / and * are optional. If used, these symbols indicate the kind of parameter by how the arguments may be passed to the function: positional-only, positional-or-keyword, and keyword-only. Keyword parameters are also referred to as named parameters.

4.9.3.1. Positional-or-Keyword Arguments

If / and * are not present in the function definition, arguments may be passed to a function by position or by keyword.

4.9.3.2. Positional-Only Parameters

Looking at this in a bit more detail, it is possible to mark certain parameters as positional-only. If positional-only, the parameters' order matters, and the parameters cannot be passed by keyword. Positional-only parameters are placed before a / (forward-slash). The / is used to logically separate the positional-only parameters from the rest of the parameters. If there is no / in the function definition, there are no positional-only parameters.

Parameters following the / may be positional-or-keyword or keyword-only.

4.9.3.3. Keyword-Only Arguments

To mark parameters as keyword-only, indicating the parameters must be passed by keyword argument, place an * in the arguments list just before the first keyword-only parameter.

4.9.3.4. Function Examples

Consider the following example function definitions, paying close attention to the markers / and *:

>>> def standard_arg(arg):
...     print(arg)
...
>>> def pos_only_arg(arg, /):
...     print(arg)
...
>>> def kwd_only_arg(*, arg):
...     print(arg)
...
>>> def combined_example(pos_only, /, standard, *, kwd_only):
...     print(pos_only, standard, kwd_only)
...

The first function definition, standard_arg, the most familiar form, places no restrictions on the calling convention and arguments may be passed by position or keyword:

>>> standard_arg(2)
2
>>> standard_arg(arg=2)
2

The second function pos_only_arg is restricted to only use positional parameters, as there is a / in the function definition:

>>> pos_only_arg(1)
1
>>> pos_only_arg(arg=1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: pos_only_arg() got some positional-only arguments passed as keyword arguments: 'arg'

The third function kwd_only_arg only allows keyword arguments, as indicated by a * in the function definition:

>>> kwd_only_arg(3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: kwd_only_arg() takes 0 positional arguments but 1 was given
>>> kwd_only_arg(arg=3)
3

And the last uses all three calling conventions in the same function definition:

>>> combined_example(1, 2, 3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: combined_example() takes 2 positional arguments but 3 were given
>>> combined_example(1, 2, kwd_only=3)
1 2 3
>>> combined_example(1, standard=2, kwd_only=3)
1 2 3
>>> combined_example(pos_only=1, standard=2, kwd_only=3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: combined_example() got some positional-only arguments passed as keyword arguments: 'pos_only'

Finally, consider this function definition, which has a potential collision between the positional argument name and **kwds, which has name as a key:

def foo(name, **kwds):
    return 'name' in kwds

There is no possible call that will make it return True, as the keyword 'name' will always bind to the first parameter.
For example:

>>> foo(1, **{'name': 2})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: foo() got multiple values for argument 'name'
>>>

But using / (positional-only arguments), it is possible, since it allows name as a positional argument and 'name' as a key in the keyword arguments:

>>> def foo(name, /, **kwds):
...     return 'name' in kwds
...
>>> foo(1, **{'name': 2})
True

In other words, the names of positional-only parameters can be used in **kwds without ambiguity.

4.9.3.5. Recap

The use case will determine which parameters to use in the function definition:

def f(pos1, pos2, /, pos_or_kwd, *, kwd1, kwd2):

As guidance:

Use positional-only if you want the name of the parameters to not be available to the user. This is useful when parameter names have no real meaning, if you want to enforce the order of the arguments when the function is called, or if you need to take some positional parameters and arbitrary keywords.

Use keyword-only when names have meaning and the function definition is more understandable by being explicit with names, or when you want to prevent users relying on the position of the argument being passed.

For an API, use positional-only to prevent breaking API changes if the parameter's name is modified in the future.

4.9.4. Arbitrary Argument Lists

Finally, the least frequently used option is to specify that a function can be called with an arbitrary number of arguments. These arguments will be wrapped up in a tuple (see Tuples and Sequences). Before the variable number of arguments, zero or more normal arguments may occur.

def write_multiple_items(file, separator, *args):
    file.write(separator.join(args))

Normally, these variadic arguments will be last in the list of formal parameters, because they scoop up all remaining input arguments that are passed to the function.
Any formal parameters which occur after the *args parameter are 'keyword-only' arguments, meaning that they can only be used as keywords rather than positional arguments.

>>> def concat(*args, sep="/"):
...     return sep.join(args)
...
>>> concat("earth", "mars", "venus")
'earth/mars/venus'
>>> concat("earth", "mars", "venus", sep=".")
'earth.mars.venus'

4.9.5. Unpacking Argument Lists

The reverse situation occurs when the arguments are already in a list or tuple but need to be unpacked for a function call requiring separate positional arguments. For instance, the built-in range() function expects separate start and stop arguments. If they are not available separately, write the function call with the *-operator to unpack the arguments out of a list or tuple:

>>> list(range(3, 6))   # normal call with separate arguments
[3, 4, 5]
>>> args = [3, 6]
>>> list(range(*args))  # call with arguments unpacked from a list
[3, 4, 5]

In the same fashion, dictionaries can deliver keyword arguments with the **-operator:

>>> def parrot(voltage, state='a stiff', action='voom'):
...     print("-- This parrot wouldn't", action, end=' ')
...     print("if you put", voltage, "volts through it.", end=' ')
...     print("E's", state, "!")
...
>>> d = {"voltage": "four million", "state": "bleedin' demised", "action": "VOOM"}
>>> parrot(**d)
-- This parrot wouldn't VOOM if you put four million volts through it. E's bleedin' demised !

4.9.6. Lambda Expressions

Small anonymous functions can be created with the lambda keyword. This function returns the sum of its two arguments: lambda a, b: a+b. Lambda functions can be used wherever function objects are required. They are syntactically restricted to a single expression. Semantically, they are just syntactic sugar for a normal function definition.
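To make the "syntactic sugar" point concrete, here is a small side-by-side sketch (the names add_lambda and add_def are invented for this example, not taken from the tutorial):

```python
# A lambda expression and an equivalent def produce interchangeable
# function objects; the lambda form is simply more compact.
add_lambda = lambda a, b: a + b

def add_def(a, b):
    return a + b

print(add_lambda(2, 3))  # 5
print(add_def(2, 3))     # 5
```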
Like nested function definitions, lambda functions can reference variables from the containing scope:

>>> def make_incrementor(n):
...     return lambda x: x + n
...
>>> f = make_incrementor(42)
>>> f(0)
42
>>> f(1)
43

The above example uses a lambda expression to return a function. Another use is to pass a small function as an argument. For instance, list.sort() takes a sorting key function key, which can be a lambda function:

>>> pairs = [(1, 'one'), (2, 'two'), (3, 'three'), (4, 'four')]
>>> pairs.sort(key=lambda pair: pair[1])
>>> pairs
[(4, 'four'), (1, 'one'), (3, 'three'), (2, 'two')]

4.9.7. Documentation Strings

Here are some conventions about the content and formatting of documentation strings.

The first line should always be a short, concise summary of the object's purpose. For brevity, it should not explicitly state the object's name or type, since these are available by other means (except if the name happens to be a verb describing a function's operation). This line should begin with a capital letter and end with a period.

If there are more lines in the documentation string, the second line should be blank, visually separating the summary from the rest of the description. The following lines should be one or more paragraphs describing the object's calling conventions, its side effects, etc.

The Python parser strips indentation from multi-line string literals when they serve as module, class, or function docstrings.

Here is an example of a multi-line docstring:

>>> def my_function():
...     """Do nothing, but document it.
...
...     No, really, it doesn't do anything:
...
...     >>> my_function()
...     >>>
...     """
...     pass
...
>>> print(my_function.__doc__)
Do nothing, but document it.

No, really, it doesn't do anything:

>>> my_function()
>>>

4.9.8. Function Annotations

Function annotations are completely optional metadata information about the types used by user-defined functions (see PEP 3107 and PEP 484 for more information).

Annotations are stored in the __annotations__ attribute of the function as a dictionary and have no effect on any other part of the function. Parameter annotations are defined by a colon after the parameter name, followed by an expression evaluating to the value of the annotation. Return annotations are defined by a literal ->, followed by an expression, between the parameter list and the colon denoting the end of the def statement. The following example has a required argument, an optional argument, and the return value annotated:

>>> def f(ham: str, eggs: str = 'eggs') -> str:
...     print("Annotations:", f.__annotations__)
...     print("Arguments:", ham, eggs)
...     return ham + ' and ' + eggs
...
>>> f('spam')
Annotations: {'ham': <class 'str'>, 'return': <class 'str'>, 'eggs': <class 'str'>}
Arguments: spam eggs
'spam and eggs'

4.10. Intermezzo: Coding Style

Now that you are about to write longer, more complex pieces of Python, it is a good time to talk about coding style. Most languages can be written (or more concisely, formatted) in different styles; some are more readable than others. Making it easy for others to read your code is always a good idea, and adopting a nice coding style helps tremendously for that.

For Python, PEP 8 has emerged as the style guide that most projects adhere to; it promotes a very readable and eye-pleasing coding style. Every Python developer should read it at some point; here are the most important points extracted for you:

Use 4-space indentation, and no tabs. 4 spaces are a good compromise between small indentation (allows greater nesting depth) and large indentation (easier to read).
Tabs introduce confusion, and are best left out.

Wrap lines so that they don't exceed 79 characters. This helps users with small displays and makes it possible to have several code files side-by-side on larger displays.

Use blank lines to separate functions and classes, and larger blocks of code inside functions.

When possible, put comments on a line of their own.

Use docstrings.

Use spaces around operators and after commas, but not directly inside bracketing constructs: a = f(1, 2) + g(3, 4).

Name your classes and functions consistently; the convention is to use UpperCamelCase for classes and lowercase_with_underscores for functions and methods. Always use self as the name for the first method argument (see A First Look at Classes for more on classes and methods).

Don't use fancy encodings if your code is meant to be used in international environments. Python's default, UTF-8, or even plain ASCII work best in any case.

Likewise, don't use non-ASCII characters in identifiers if there is only the slightest chance people speaking a different language will read or maintain the code.
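As a small illustrative sketch (the function and its names are invented for this example, not taken from PEP 8 itself), code following the points above might look like:

```python
# 4-space indentation, lowercase_with_underscores naming, a docstring,
# spaces around operators and after commas, and a comment on its own line.
def scaled_sum(values, factor=1):
    """Return the sum of values, multiplied by factor."""
    total = 0
    for value in values:
        total = total + value
    # Scale once, after the loop.
    return total * factor


print(scaled_sum([1, 2, 3], factor=10))  # prints 60
```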
3. An Informal Introduction to Python

In the following examples, input and output are distinguished by the presence or absence of prompts (>>> and …): to repeat the example, you must type everything after the prompt, when the prompt appears; lines that do not begin with a prompt are output from the interpreter. Note that a secondary prompt on a line by itself in an example means you must type a blank line; this is used to end a multi-line command.

You can use the "Copy" button (it appears in the upper-right corner when hovering over or tapping a code example), which strips prompts and omits output, to copy and paste the input lines into your interpreter.

Many of the examples in this manual, even those entered at the interactive prompt, include comments. Comments in Python start with the hash character, #, and extend to the end of the physical line. A comment may appear at the start of a line or following whitespace or code, but not within a string literal. A hash character within a string literal is just a hash character. Since comments are to clarify code and are not interpreted by Python, they may be omitted when typing in examples.

Some examples:

# this is the first comment
spam = 1  # and this is the second comment
          # ... and now a third!
text = "# This is not a comment because it's inside quotes."

3.1. Using Python as a Calculator

Let's try some simple Python commands. Start the interpreter and wait for the primary prompt, >>>. (It shouldn't take long.)

3.1.1. Numbers

The interpreter acts as a simple calculator: you can type an expression into it and it will write the value.
Expression syntax is straightforward: the operators +, -, * and / can be used to perform arithmetic; parentheses (()) can be used for grouping. For example:

>>> 2 + 2
4
>>> 50 - 5*6
20
>>> (50 - 5*6) / 4
5.0
>>> 8 / 5  # division always returns a floating-point number
1.6

The integer numbers (e.g. 2, 4, 20) have type int, the ones with a fractional part (e.g. 5.0, 1.6) have type float. We will see more about numeric types later in the tutorial.

Division (/) always returns a float. To do floor division and get an integer result you can use the // operator; to calculate the remainder you can use %:

>>> 17 / 3  # classic division returns a float
5.666666666666667
>>>
>>> 17 // 3  # floor division discards the fractional part
5
>>> 17 % 3  # the % operator returns the remainder of the division
2
>>> 5 * 3 + 2  # floored quotient * divisor + remainder
17

With Python, it is possible to use the ** operator to calculate powers [1]:

>>> 5 ** 2  # 5 squared
25
>>> 2 ** 7  # 2 to the power of 7
128

The equal sign (=) is used to assign a value to a variable. Afterwards, no result is displayed before the next interactive prompt:

>>> width = 20
>>> height = 5 * 9
>>> width * height
900

If a variable is not "defined" (assigned a value), trying to use it will give you an error:

>>> n  # try to access an undefined variable
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'n' is not defined

There is full support for floating point; operators with mixed type operands convert the integer operand to floating point:

>>> 4 * 3.75 - 1
14.0

In interactive mode, the last printed expression is assigned to the variable _.
This means that when you are using Python as a desk calculator, it is somewhat easier to continue calculations, for example:

>>> tax = 12.5 / 100
>>> price = 100.50
>>> price * tax
12.5625
>>> price + _
113.0625
>>> round(_, 2)
113.06

This variable should be treated as read-only by the user. Don't explicitly assign a value to it — you would create an independent local variable with the same name, masking the built-in variable with its magic behavior.

In addition to int and float, Python supports other types of numbers, such as Decimal and Fraction. Python also has built-in support for complex numbers, and uses the j or J suffix to indicate the imaginary part (e.g. 3+5j).

3.1.2. Text

Python can manipulate text (represented by type str, so-called "strings") as well as numbers. This includes characters "!", words "rabbit", names "Paris", sentences "Got your back.", etc. "Yay! :)". They can be enclosed in single quotes ('...') or double quotes ("...") with the same result [2].

>>> 'spam eggs'  # single quotes
'spam eggs'
>>> "Paris rabbit got your back :)! Yay!"  # double quotes
'Paris rabbit got your back :)! Yay!'
>>> '1975'  # digits and numerals enclosed in quotes are also strings
'1975'

To quote a quote, we need to "escape" it, by preceding it with \. Alternatively, we can use the other type of quotation marks:

>>> 'doesn\'t'  # use \' to escape the single quote...
"doesn't"
>>> "doesn't"  # ...or use double quotes instead
"doesn't"
>>> '"Yes," they said.'
'"Yes," they said.'
>>> "\"Yes,\" they said."
'"Yes," they said.'
>>> '"Isn\'t," they said.'
'"Isn\'t," they said.'

In the Python shell, the string definition and output string can look different.
The print() function produces a more readable output, by omitting the enclosing quotes and by printing escaped and special characters:

>>> s = 'First line.\nSecond line.'  # \n means newline
>>> s  # without print(), special characters are included in the string
'First line.\nSecond line.'
>>> print(s)  # with print(), special characters are interpreted, so \n produces a new line
First line.
Second line.

If you don't want characters prefaced by \ to be interpreted as special characters, you can use raw strings by adding an r before the first quote:

>>> print('C:\some\name')  # here \n means newline!
C:\some
ame
>>> print(r'C:\some\name')  # note the r before the quote
C:\some\name

There is one subtle aspect to raw strings: a raw string may not end in an odd number of \ characters; see the FAQ entry for more information and workarounds.

String literals can span multiple lines. One way is using triple-quotes: """...""" or '''...'''. End-of-line characters are automatically included in the string, but it's possible to prevent this by adding a \ at the end of the line. In the following example, the initial newline is not included:

>>> print("""\
... Usage: thingy [OPTIONS]
...      -h                        Display this usage message
...      -H hostname               Hostname to connect to
... """)
Usage: thingy [OPTIONS]
     -h                        Display this usage message
     -H hostname               Hostname to connect to

>>>

Strings can be concatenated (glued together) with the + operator, and repeated with *:

>>> # 3 times 'un', followed by 'ium'
>>> 3 * 'un' + 'ium'
'unununium'

Two or more string literals (i.e. the ones enclosed between quotes) next to each other are automatically concatenated.

>>> 'Py' 'thon'
'Python'

This feature is particularly useful when you want to break long strings:

>>> text = ('Put several strings within parentheses '
...         'to have them joined together.')
>>> text
'Put several strings within parentheses to have them joined together.'

This only works with two literals though, not with variables or expressions:

>>> prefix = 'Py'
>>> prefix 'thon'  # can't concatenate a variable and a string literal
  File "<stdin>", line 1
    prefix 'thon'
           ^^^^^^
SyntaxError: invalid syntax
>>> ('un' * 3) 'ium'
  File "<stdin>", line 1
    ('un' * 3) 'ium'
               ^^^^^
SyntaxError: invalid syntax

If you want to concatenate variables or a variable and a literal, use +:

>>> prefix + 'thon'
'Python'

Strings can be indexed (subscripted), with the first character having index 0. There is no separate character type; a character is simply a string of size one:

>>> word = 'Python'
>>> word[0]  # character in position 0
'P'
>>> word[5]  # character in position 5
'n'

Indices may also be negative numbers, to start counting from the right:

>>> word[-1]  # last character
'n'
>>> word[-2]  # second-last character
'o'
>>> word[-6]
'P'

Note that since -0 is the same as 0, negative indices start from -1.

In addition to indexing, slicing is also supported. While indexing is used to obtain individual characters, slicing allows you to obtain a substring:

>>> word[0:2]  # characters from position 0 (included) to 2 (excluded)
'Py'
>>> word[2:5]  # characters from position 2 (included) to 5 (excluded)
'tho'

Slice indices have useful defaults; an omitted first index defaults to zero, an omitted second index defaults to the size of the string being sliced.

>>> word[:2]   # characters from the beginning to position 2 (excluded)
'Py'
>>> word[4:]   # characters from position 4 (included) to the end
'on'
>>> word[-2:]  # characters from the second-last (included) to the end
'on'

Note how the start is always included, and the end always excluded.
This makes sure that s[:i] + s[i:] is always equal to s:

>>> word[:2] + word[2:]
'Python'
>>> word[:4] + word[4:]
'Python'

One way to remember how slices work is to think of the indices as pointing between characters, with the left edge of the first character numbered 0. Then the right edge of the last character of a string of n characters has index n, for example:

 +---+---+---+---+---+---+
 | P | y | t | h | o | n |
 +---+---+---+---+---+---+
 0   1   2   3   4   5   6
-6  -5  -4  -3  -2  -1

The first row of numbers gives the position of the indices 0…6 in the string; the second row gives the corresponding negative indices. The slice from i to j consists of all characters between the edges labeled i and j, respectively.

For non-negative indices, the length of a slice is the difference of the indices, if both are within bounds. For example, the length of word[1:3] is 2.

Attempting to use an index that is too large will result in an error:

>>> word[42]  # the word only has 6 characters
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: string index out of range

However, out of range slice indexes are handled gracefully when used for slicing:

>>> word[4:42]
'on'
>>> word[42:]
''

Python strings cannot be changed — they are immutable.
Therefore, assigning to an indexed position in the string results in an error:

>>> word[0] = 'J'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' object does not support item assignment
>>> word[2:] = 'py'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' object does not support item assignment

If you need a different string, you should create a new one:

>>> 'J' + word[1:]
'Jython'
>>> word[:2] + 'py'
'Pypy'

The built-in function len() returns the length of a string:

>>> s = 'supercalifragilisticexpialidocious'
>>> len(s)
34

See also

- Text Sequence Type — str: Strings are examples of sequence types, and support the common operations supported by such types.
- String Methods: Strings support a large number of methods for basic transformations and searching.
- f-strings: String literals that have embedded expressions.
- Format String Syntax: Information about string formatting with str.format().
- printf-style String Formatting: The old formatting operations invoked when strings are the left operand of the % operator are described in more detail here.

3.1.3. Lists

Python knows a number of compound data types, used to group together other values. The most versatile is the list, which can be written as a list of comma-separated values (items) between square brackets. Lists might contain items of different types, but usually the items all have the same type.

>>> squares = [1, 4, 9, 16, 25]
>>> squares
[1, 4, 9, 16, 25]

Like strings (and all other built-in sequence types), lists can be indexed and sliced:

>>> squares[0]  # indexing returns the item
1
>>> squares[-1]
25
>>> squares[-3:]  # slicing returns a new list
[9, 16, 25]

Lists also support operations like concatenation:

>>> squares + [36, 49, 64, 81, 100]
[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]

Unlike strings, which are immutable, lists are a mutable type, i.e.
it is possible to change their content:\n>>> cubes = [1, 8, 27, 65, 125] # something's wrong here\n>>> 4 ** 3 # the cube of 4 is 64, not 65!\n64\n>>> cubes[3] = 64 # replace the wrong value\n>>> cubes\n[1, 8, 27, 64, 125]\nYou can also add new items at the end of the list, by using\nthe list.append()\nmethod (we will see more about methods later):\n>>> cubes.append(216) # add the cube of 6\n>>> cubes.append(7 ** 3) # and the cube of 7\n>>> cubes\n[1, 8, 27, 64, 125, 216, 343]\nSimple assignment in Python never copies data. When you assign a list to a variable, the variable refers to the existing list. Any changes you make to the list through one variable will be seen through all other variables that refer to it.:\n>>> rgb = [\"Red\", \"Green\", \"Blue\"]\n>>> rgba = rgb\n>>> id(rgb) == id(rgba) # they reference the same object\nTrue\n>>> rgba.append(\"Alph\")\n>>> rgb\n[\"Red\", \"Green\", \"Blue\", \"Alph\"]\nAll slice operations return a new list containing the requested elements. This means that the following slice returns a shallow copy of the list:\n>>> correct_rgba = rgba[:]\n>>> correct_rgba[-1] = \"Alpha\"\n>>> correct_rgba\n[\"Red\", \"Green\", \"Blue\", \"Alpha\"]\n>>> rgba\n[\"Red\", \"Green\", \"Blue\", \"Alph\"]\nAssignment to slices is also possible, and this can even change the size of the list or clear it entirely:\n>>> letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g']\n>>> letters\n['a', 'b', 'c', 'd', 'e', 'f', 'g']\n>>> # replace some values\n>>> letters[2:5] = ['C', 'D', 'E']\n>>> letters\n['a', 'b', 'C', 'D', 'E', 'f', 'g']\n>>> # now remove them\n>>> letters[2:5] = []\n>>> letters\n['a', 'b', 'f', 'g']\n>>> # clear the list by replacing all the elements with an empty list\n>>> letters[:] = []\n>>> letters\n[]\nThe built-in function len()\nalso applies to lists:\n>>> letters = ['a', 'b', 'c', 'd']\n>>> len(letters)\n4\nIt is possible to nest lists (create lists containing other lists), for example:\n>>> a = ['a', 'b', 'c']\n>>> n = [1, 2, 3]\n>>> x 
= [a, n]\n>>> x\n[['a', 'b', 'c'], [1, 2, 3]]\n>>> x[0]\n['a', 'b', 'c']\n>>> x[0][1]\n'b'\n3.2. First Steps Towards Programming\u00b6\nOf course, we can use Python for more complicated tasks than adding two and two together. For instance, we can write an initial sub-sequence of the Fibonacci series as follows:\n>>> # Fibonacci series:\n>>> # the sum of two elements defines the next\n>>> a, b = 0, 1\n>>> while a < 10:\n...     print(a)\n...     a, b = b, a+b\n...\n0\n1\n1\n2\n3\n5\n8\nThis example introduces several new features.\nThe first line contains a multiple assignment: the variables a and b simultaneously get the new values 0 and 1. On the last line this is used again, demonstrating that the expressions on the right-hand side are all evaluated first before any of the assignments take place. The right-hand side expressions are evaluated from the left to the right.\nThe while loop executes as long as the condition (here: a < 10) remains true. In Python, like in C, any non-zero integer value is true; zero is false. The condition may also be a string or list value, in fact any sequence; anything with a non-zero length is true, empty sequences are false. The test used in the example is a simple comparison. The standard comparison operators are written the same as in C: < (less than), > (greater than), == (equal to), <= (less than or equal to), >= (greater than or equal to) and != (not equal to).\nThe body of the loop is indented: indentation is Python\u2019s way of grouping statements. At the interactive prompt, you have to type a tab or space(s) for each indented line. In practice you will prepare more complicated input for Python with a text editor; all decent text editors have an auto-indent facility. When a compound statement is entered interactively, it must be followed by a blank line to indicate completion (since the parser cannot guess when you have typed the last line). 
Note that each line within a basic block must be indented by the same amount.\nThe print() function writes the value of the argument(s) it is given. It differs from just writing the expression you want to write (as we did earlier in the calculator examples) in the way it handles multiple arguments, floating-point quantities, and strings. Strings are printed without quotes, and a space is inserted between items, so you can format things nicely, like this:\n>>> i = 256*256\n>>> print('The value of i is', i)\nThe value of i is 65536\nThe keyword argument end can be used to avoid the newline after the output, or end the output with a different string:\n>>> a, b = 0, 1\n>>> while a < 1000:\n...     print(a, end=',')\n...     a, b = b, a+b\n...\n0,1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,\nFootnotes", "code_snippets": [
"\n", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", "\n ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n ", "\n ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", " ", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", " ", " ", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", "\n", "\n", "\n", " ", "\n", "\n", "\n", "\n", " ", "\n", "\n", " ", " ", " ", " ", " ", " ", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", "\n", " ", " ", " ", "\n", "\n", "\n", " ", "\n", " ", " ", " ", "\n", "\n", "\n", " ", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", " ", "\n", "\n", "\n", "\n", " ", " ", " ", " ", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", " ", " ", " ", "\n", "\n", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", "\n", " ", " ", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", "\n", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 4159}
{"url": "https://docs.python.org/3/tutorial/interpreter.html", "title": "Using the Python Interpreter", "content": "2. Using the Python Interpreter\u00b6\n2.1. Invoking the Interpreter\u00b6\nThe Python interpreter is usually installed as /usr/local/bin/python3.14\non those machines where it is available; putting /usr/local/bin\nin your\nUnix shell\u2019s search path makes it possible to start it by typing the command:\npython3.14\nto the shell. [1] Since the choice of the directory where the interpreter lives\nis an installation option, other places are possible; check with your local\nPython guru or system administrator. (E.g., /usr/local/python\nis a\npopular alternative location.)\nOn Windows machines where you have installed Python from the Microsoft Store, the python3.14\ncommand will be available. If you have\nthe py.exe launcher installed, you can use the py\ncommand. See Python install manager for other ways to launch Python.\nTyping an end-of-file character (Control-D on Unix, Control-Z on\nWindows) at the primary prompt causes the interpreter to exit with a zero exit\nstatus. 
If that doesn\u2019t work, you can exit the interpreter by typing the\nfollowing command: quit()\n.\nThe interpreter\u2019s line-editing features include interactive editing, history\nsubstitution and code completion on most systems.\nPerhaps the quickest check to see whether command line editing is supported is\ntyping a word in on the Python prompt, then pressing Left arrow (or Control-b).\nIf the cursor moves, you have command line editing; see Appendix\nInteractive Input Editing and History Substitution for an introduction to the keys.\nIf nothing appears to happen, or if a sequence like ^[[D\nor ^B\nappears,\ncommand line editing isn\u2019t available; you\u2019ll only be able to use\nbackspace to remove characters from the current line.\nThe interpreter operates somewhat like the Unix shell: when called with standard input connected to a tty device, it reads and executes commands interactively; when called with a file name argument or with a file as standard input, it reads and executes a script from that file.\nA second way of starting the interpreter is python -c command [arg] ...\n,\nwhich executes the statement(s) in command, analogous to the shell\u2019s\n-c\noption. Since Python statements often contain spaces or other\ncharacters that are special to the shell, it is usually advised to quote\ncommand in its entirety.\nSome Python modules are also useful as scripts. These can be invoked using\npython -m module [arg] ...\n, which executes the source file for module as\nif you had spelled out its full name on the command line.\nWhen a script file is used, it is sometimes useful to be able to run the script\nand enter interactive mode afterwards. This can be done by passing -i\nbefore the script.\nAll command line options are described in Command line and environment.\n2.1.1. 
Argument Passing\u00b6\nWhen known to the interpreter, the script name and additional arguments\nthereafter are turned into a list of strings and assigned to the argv\nvariable in the sys\nmodule. You can access this list by executing import\nsys\n. The length of the list is at least one; when no script and no arguments\nare given, sys.argv[0]\nis an empty string. When the script name is given as\n'-'\n(meaning standard input), sys.argv[0]\nis set to '-'\n. When\n-c\ncommand is used, sys.argv[0]\nis set to '-c'\n. When\n-m\nmodule is used, sys.argv[0]\nis set to the full name of the\nlocated module. Options found after -c\ncommand or -m\nmodule are not consumed by the Python interpreter\u2019s option processing but\nleft in sys.argv\nfor the command or module to handle.\n2.1.2. Interactive Mode\u00b6\nWhen commands are read from a tty, the interpreter is said to be in interactive\nmode. In this mode it prompts for the next command with the primary prompt,\nusually three greater-than signs (>>>\n); for continuation lines it prompts\nwith the secondary prompt, by default three dots (...\n). The interpreter\nprints a welcome message stating its version number and a copyright notice\nbefore printing the first prompt:\n$ python3.14\nPython 3.14 (default, April 4 2024, 09:25:04)\n[GCC 10.2.0] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>>\nContinuation lines are needed when entering a multi-line construct. As an\nexample, take a look at this if\nstatement:\n>>> the_world_is_flat = True\n>>> if the_world_is_flat:\n... print(\"Be careful not to fall off!\")\n...\nBe careful not to fall off!\nFor more on interactive mode, see Interactive Mode.\n2.2. The Interpreter and Its Environment\u00b6\n2.2.1. Source Code Encoding\u00b6\nBy default, Python source files are treated as encoded in UTF-8. 
In that encoding, characters of most languages in the world can be used simultaneously in string literals, identifiers and comments \u2014 although the standard library only uses ASCII characters for identifiers, a convention that any portable code should follow. To display all these characters properly, your editor must recognize that the file is UTF-8, and it must use a font that supports all the characters in the file.\nTo declare an encoding other than the default one, a special comment line should be added as the first line of the file. The syntax is as follows:\n# -*- coding: encoding -*-\nwhere encoding is one of the valid codecs\nsupported by Python.\nFor example, to declare that Windows-1252 encoding is to be used, the first line of your source code file should be:\n# -*- coding: cp1252 -*-\nOne exception to the first line rule is when the source code starts with a UNIX \u201cshebang\u201d line. In this case, the encoding declaration should be added as the second line of the file. For example:\n#!/usr/bin/env python3\n# -*- coding: cp1252 -*-\nFootnotes", "code_snippets": [" ", " ", "\n", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 1385}
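The sys.argv behaviour described under Argument Passing can be checked with a quick sketch that spawns the interpreter with the -c option; spawning via subprocess and the sample arguments "one"/"two" are illustrative choices, not part of the page:

```python
import subprocess
import sys

# With -c, sys.argv[0] is set to '-c'; arguments after the command are
# not consumed by the interpreter but left in sys.argv for the command.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv)", "one", "two"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # ['-c', 'one', 'two']
```

Replacing `-c` with `-m somemodule` would instead set `sys.argv[0]` to the full name of the located module, as the page states.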
{"url": "https://docs.python.org/3/library/security_warnings.html", "title": "Security Considerations", "content": "Security Considerations\u00b6\nThe following modules have specific security considerations:\nhashlib\n: all constructors take a \u201cusedforsecurity\u201d keyword-only argument disabling known insecure and blocked algorithmshttp.server\nis not suitable for production use, only implementing basic security checks. See the security considerations.random\nshouldn\u2019t be used for security purposes, usesecrets\ninsteadshelve\n: shelve is based on pickle and thus unsuitable for dealing with untrusted sourcestempfile\n: mktemp is deprecated due to vulnerability to race conditionszipfile\n: maliciously prepared .zip files can cause disk volume exhaustion\nThe -I\ncommand line option can be used to run Python in isolated\nmode. When it cannot be used, the -P\noption or the\nPYTHONSAFEPATH\nenvironment variable can be used to not prepend a\npotentially unsafe path to sys.path\nsuch as the current directory, the\nscript\u2019s directory or an empty string.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 230}
{"url": "https://docs.python.org/3/c-api/init.html", "title": "Initialization, Finalization, and Threads", "content": "Initialization, Finalization, and Threads\u00b6\nSee Python Initialization Configuration for details on how to configure the interpreter prior to initialization.\nBefore Python Initialization\u00b6\nIn an application embedding Python, the Py_Initialize()\nfunction must\nbe called before using any other Python/C API functions; with the exception of\na few functions and the global configuration variables.\nThe following functions can be safely called before Python is initialized:\nFunctions that initialize the interpreter:\nthe runtime pre-initialization functions covered in Python Initialization Configuration\nConfiguration functions:\nPyInitFrozenExtensions()\nthe configuration functions covered in Python Initialization Configuration\nInformative functions:\nUtilities:\nthe status reporting and utility functions covered in Python Initialization Configuration\nMemory allocators:\nSynchronization:\nNote\nDespite their apparent similarity to some of the functions listed above,\nthe following functions should not be called before the interpreter has\nbeen initialized: Py_EncodeLocale()\n, Py_GetPath()\n,\nPy_GetPrefix()\n, Py_GetExecPrefix()\n,\nPy_GetProgramFullPath()\n, Py_GetPythonHome()\n,\nPy_GetProgramName()\n, PyEval_InitThreads()\n, and\nPy_RunMain()\n.\nGlobal configuration variables\u00b6\nPython has variables for the global configuration to control different features and options. By default, these flags are controlled by command line options.\nWhen a flag is set by an option, the value of the flag is the number of times\nthat the option was set. 
For example, -b sets Py_BytesWarningFlag to 1 and -bb sets Py_BytesWarningFlag to 2.\n-\nint Py_BytesWarningFlag\u00b6\nThis API is kept for backward compatibility: setting PyConfig.bytes_warning should be used instead, see Python Initialization Configuration.\nIssue a warning when comparing bytes or bytearray with str, or bytes with int. Issue an error if greater or equal to 2.\nSet by the -b option.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_DebugFlag\u00b6\nThis API is kept for backward compatibility: setting PyConfig.parser_debug should be used instead, see Python Initialization Configuration.\nTurn on parser debugging output (for experts only, depending on compilation options).\nSet by the -d option and the PYTHONDEBUG environment variable.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_DontWriteBytecodeFlag\u00b6\nThis API is kept for backward compatibility: setting PyConfig.write_bytecode should be used instead, see Python Initialization Configuration.\nIf set to non-zero, Python won\u2019t try to write .pyc files on the import of source modules.\nSet by the -B option and the PYTHONDONTWRITEBYTECODE environment variable.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_FrozenFlag\u00b6\nThis API is kept for backward compatibility: setting PyConfig.pathconfig_warnings should be used instead, see Python Initialization Configuration.\nSuppress error messages when calculating the module search path in Py_GetPath().\nPrivate flag used by _freeze_module and frozenmain programs.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_HashRandomizationFlag\u00b6\nThis API is kept for backward compatibility: setting PyConfig.hash_seed and PyConfig.use_hash_seed should be used instead, see Python Initialization Configuration.\nSet to 1 if the PYTHONHASHSEED environment variable is set to a non-empty string.\nIf the flag is non-zero, read 
the PYTHONHASHSEED environment variable to initialize the secret hash seed.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_IgnoreEnvironmentFlag\u00b6\nThis API is kept for backward compatibility: setting PyConfig.use_environment should be used instead, see Python Initialization Configuration.\nIgnore all PYTHON* environment variables, e.g. PYTHONPATH and PYTHONHOME, that might be set.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_InspectFlag\u00b6\nThis API is kept for backward compatibility: setting PyConfig.inspect should be used instead, see Python Initialization Configuration.\nWhen a script is passed as first argument or the -c option is used, enter interactive mode after executing the script or the command, even when sys.stdin does not appear to be a terminal.\nSet by the -i option and the PYTHONINSPECT environment variable.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_InteractiveFlag\u00b6\nThis API is kept for backward compatibility: setting PyConfig.interactive should be used instead, see Python Initialization Configuration.\nSet by the -i option.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_IsolatedFlag\u00b6\nThis API is kept for backward compatibility: setting PyConfig.isolated should be used instead, see Python Initialization Configuration.\nRun Python in isolated mode. 
In isolated mode sys.path contains neither the script\u2019s directory nor the user\u2019s site-packages directory.\nSet by the -I option.\nAdded in version 3.4.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_LegacyWindowsFSEncodingFlag\u00b6\nThis API is kept for backward compatibility: setting PyPreConfig.legacy_windows_fs_encoding should be used instead, see Python Initialization Configuration.\nIf the flag is non-zero, use the mbcs encoding with replace error handler, instead of the UTF-8 encoding with surrogatepass error handler, for the filesystem encoding and error handler.\nSet to 1 if the PYTHONLEGACYWINDOWSFSENCODING environment variable is set to a non-empty string.\nSee PEP 529 for more details.\nAvailability: Windows.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_LegacyWindowsStdioFlag\u00b6\nThis API is kept for backward compatibility: setting PyConfig.legacy_windows_stdio should be used instead, see Python Initialization Configuration.\nIf the flag is non-zero, use io.FileIO instead of io._WindowsConsoleIO for sys standard streams.\nSet to 1 if the PYTHONLEGACYWINDOWSSTDIO environment variable is set to a non-empty string.\nSee PEP 528 for more details.\nAvailability: Windows.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_NoSiteFlag\u00b6\nThis API is kept for backward compatibility: setting PyConfig.site_import should be used instead, see Python Initialization Configuration.\nDisable the import of the module site and the site-dependent manipulations of sys.path that it entails. 
Also disable these manipulations if site is explicitly imported later (call site.main() if you want them to be triggered).\nSet by the -S option.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_NoUserSiteDirectory\u00b6\nThis API is kept for backward compatibility: setting PyConfig.user_site_directory should be used instead, see Python Initialization Configuration.\nDon\u2019t add the user site-packages directory to sys.path.\nSet by the -s and -I options, and the PYTHONNOUSERSITE environment variable.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_OptimizeFlag\u00b6\nThis API is kept for backward compatibility: setting PyConfig.optimization_level should be used instead, see Python Initialization Configuration.\nSet by the -O option and the PYTHONOPTIMIZE environment variable.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_QuietFlag\u00b6\nThis API is kept for backward compatibility: setting PyConfig.quiet should be used instead, see Python Initialization Configuration.\nDon\u2019t display the copyright and version messages even in interactive mode.\nSet by the -q option.\nAdded in version 3.2.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_UnbufferedStdioFlag\u00b6\nThis API is kept for backward compatibility: setting PyConfig.buffered_stdio should be used instead, see Python Initialization Configuration.\nForce the stdout and stderr streams to be unbuffered.\nSet by the -u option and the PYTHONUNBUFFERED environment variable.\nDeprecated since version 3.12, will be removed in version 3.15.\n-\nint Py_VerboseFlag\u00b6\nThis API is kept for backward compatibility: setting PyConfig.verbose should be used instead, see Python Initialization Configuration.\nPrint a message each time a module is initialized, showing the place (filename or built-in module) from which it is loaded. 
If greater or equal to 2, print a message for each file that is checked for when searching for a module. Also provides information on module cleanup at exit.\nSet by the -v option and the PYTHONVERBOSE environment variable.\nDeprecated since version 3.12, will be removed in version 3.15.\nInitializing and finalizing the interpreter\u00b6\n-\nvoid Py_Initialize()\u00b6\n- Part of the Stable ABI.\nInitialize the Python interpreter. In an application embedding Python, this should be called before using any other Python/C API functions; see Before Python Initialization for the few exceptions.\nThis initializes the table of loaded modules (sys.modules), and creates the fundamental modules builtins, __main__ and sys. It also initializes the module search path (sys.path). It does not set sys.argv; use the Python Initialization Configuration API for that. This is a no-op when called for a second time (without calling Py_FinalizeEx() first). There is no return value; it is a fatal error if the initialization fails.\nUse Py_InitializeFromConfig() to customize the Python Initialization Configuration.\nNote\nOn Windows, changes the console mode from O_TEXT to O_BINARY, which will also affect non-Python uses of the console using the C Runtime.\n-\nvoid Py_InitializeEx(int initsigs)\u00b6\n- Part of the Stable ABI.\nThis function works like Py_Initialize() if initsigs is 1. 
If initsigs is0\n, it skips initialization registration of signal handlers, which may be useful when CPython is embedded as part of a larger application.Use\nPy_InitializeFromConfig()\nto customize the Python Initialization Configuration.\n-\nPyStatus Py_InitializeFromConfig(const PyConfig *config)\u00b6\nInitialize Python from config configuration, as described in Initialization with PyConfig.\nSee the Python Initialization Configuration section for details on pre-initializing the interpreter, populating the runtime configuration structure, and querying the returned status structure.\n-\nint Py_IsInitialized()\u00b6\n- Part of the Stable ABI.\nReturn true (nonzero) when the Python interpreter has been initialized, false (zero) if not. After\nPy_FinalizeEx()\nis called, this returns false untilPy_Initialize()\nis called again.\n-\nint Py_IsFinalizing()\u00b6\n- Part of the Stable ABI since version 3.13.\nReturn true (non-zero) if the main Python interpreter is shutting down. Return false (zero) otherwise.\nAdded in version 3.13.\n-\nint Py_FinalizeEx()\u00b6\n- Part of the Stable ABI since version 3.6.\nUndo all initializations made by\nPy_Initialize()\nand subsequent use of Python/C API functions, and destroy all sub-interpreters (seePy_NewInterpreter()\nbelow) that were created and not yet destroyed since the last call toPy_Initialize()\n. This is a no-op when called for a second time (without callingPy_Initialize()\nagain first).Since this is the reverse of\nPy_Initialize()\n, it should be called in the same thread with the same interpreter active. That means the main thread and the main interpreter. This should never be called whilePy_RunMain()\nis running.Normally the return value is\n0\n. If there were errors during finalization (flushing buffered data),-1\nis returned.Note that Python will do a best effort at freeing all memory allocated by the Python interpreter. 
Therefore, any C-Extension should make sure to correctly clean up all of the previously allocated PyObjects before using them in subsequent calls to\nPy_Initialize()\n. Otherwise it could introduce vulnerabilities and incorrect behavior.This function is provided for a number of reasons. An embedding application might want to restart Python without having to restart the application itself. An application that has loaded the Python interpreter from a dynamically loadable library (or DLL) might want to free all memory allocated by Python before unloading the DLL. During a hunt for memory leaks in an application a developer might want to free all memory allocated by Python before exiting from the application.\nBugs and caveats: The destruction of modules and objects in modules is done in random order; this may cause destructors (\n__del__()\nmethods) to fail when they depend on other objects (even functions) or modules. Dynamically loaded extension modules loaded by Python are not unloaded. Small amounts of memory allocated by the Python interpreter may not be freed (if you find a leak, please report it). Memory tied up in circular references between objects is not freed. Interned strings will all be deallocated regardless of their reference count. Some memory allocated by extension modules may not be freed. Some extensions may not work properly if their initialization routine is called more than once; this can happen if an application callsPy_Initialize()\nandPy_FinalizeEx()\nmore than once.Py_FinalizeEx()\nmust not be called recursively from within itself. 
Therefore, it must not be called by any code that may be run as part of the interpreter shutdown process, such asatexit\nhandlers, object finalizers, or any code that may be run while flushing the stdout and stderr files.Raises an auditing event\ncpython._PySys_ClearAuditHooks\nwith no arguments.Added in version 3.6.\n-\nvoid Py_Finalize()\u00b6\n- Part of the Stable ABI.\nThis is a backwards-compatible version of\nPy_FinalizeEx()\nthat disregards the return value.\n-\nint Py_BytesMain(int argc, char **argv)\u00b6\n- Part of the Stable ABI since version 3.8.\nSimilar to\nPy_Main()\nbut argv is an array of bytes strings, allowing the calling application to delegate the text decoding step to the CPython runtime.Added in version 3.8.\n-\nint Py_Main(int argc, wchar_t **argv)\u00b6\n- Part of the Stable ABI.\nThe main program for the standard interpreter, encapsulating a full initialization/finalization cycle, as well as additional behaviour to implement reading configurations settings from the environment and command line, and then executing\n__main__\nin accordance with Command line.This is made available for programs which wish to support the full CPython command line interface, rather than just embedding a Python runtime in a larger application.\nThe argc and argv parameters are similar to those which are passed to a C program\u2019s\nmain()\nfunction, except that the argv entries are first converted towchar_t\nusingPy_DecodeLocale()\n. 
It is also important to note that the argument list entries may be modified to point to strings other than those passed in (however, the contents of the strings pointed to by the argument list are not modified).The return value is\n2\nif the argument list does not represent a valid Python command line, and otherwise the same asPy_RunMain()\n.In terms of the CPython runtime configuration APIs documented in the runtime configuration section (and without accounting for error handling),\nPy_Main\nis approximately equivalent to:PyConfig config; PyConfig_InitPythonConfig(&config); PyConfig_SetArgv(&config, argc, argv); Py_InitializeFromConfig(&config); PyConfig_Clear(&config); Py_RunMain();\nIn normal usage, an embedding application will call this function instead of calling\nPy_Initialize()\n,Py_InitializeEx()\norPy_InitializeFromConfig()\ndirectly, and all settings will be applied as described elsewhere in this documentation. If this function is instead called after a preceding runtime initialization API call, then exactly which environmental and command line configuration settings will be updated is version dependent (as it depends on which settings correctly support being modified after they have already been set once when the runtime was first initialized).\n-\nint Py_RunMain(void)\u00b6\nExecutes the main module in a fully configured CPython runtime.\nExecutes the command (\nPyConfig.run_command\n), the script (PyConfig.run_filename\n) or the module (PyConfig.run_module\n) specified on the command line or in the configuration. 
If none of these values are set, runs the interactive Python prompt (REPL) using the__main__\nmodule\u2019s global namespace.If\nPyConfig.inspect\nis not set (the default), the return value will be0\nif the interpreter exits normally (that is, without raising an exception), the exit status of an unhandledSystemExit\n, or1\nfor any other unhandled exception.If\nPyConfig.inspect\nis set (such as when the-i\noption is used), rather than returning when the interpreter exits, execution will instead resume in an interactive Python prompt (REPL) using the__main__\nmodule\u2019s global namespace. If the interpreter exited with an exception, it is immediately raised in the REPL session. The function return value is then determined by the way the REPL session terminates:0\n,1\n, or the status of aSystemExit\n, as specified above.This function always finalizes the Python interpreter before it returns.\nSee Python Configuration for an example of a customized Python that always runs in isolated mode using\nPy_RunMain()\n.\n-\nint PyUnstable_AtExit(PyInterpreterState *interp, void (*func)(void*), void *data)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nRegister an\natexit\ncallback for the target interpreter interp. This is similar toPy_AtExit()\n, but takes an explicit interpreter and data pointer for the callback.There must be an attached thread state for interp.\nAdded in version 3.13.\nProcess-wide parameters\u00b6\n-\nvoid Py_SetProgramName(const wchar_t *name)\u00b6\n- Part of the Stable ABI.\nThis API is kept for backward compatibility: setting\nPyConfig.program_name\nshould be used instead, see Python Initialization Configuration.This function should be called before\nPy_Initialize()\nis called for the first time, if it is called at all. It tells the interpreter the value of theargv[0]\nargument to themain()\nfunction of the program (converted to wide characters). 
This is used byPy_GetPath()\nand some other functions below to find the Python run-time libraries relative to the interpreter executable. The default value is'python'\n. The argument should point to a zero-terminated wide character string in static storage whose contents will not change for the duration of the program\u2019s execution. No code in the Python interpreter will change the contents of this storage.Use\nPy_DecodeLocale()\nto decode a bytes string to get a wchar_t* string.Deprecated since version 3.11, will be removed in version 3.15.\n-\nwchar_t *Py_GetProgramName()\u00b6\n- Part of the Stable ABI.\nReturn the program name set with\nPyConfig.program_name\n, or the default. The returned string points into static storage; the caller should not modify its value.This function should not be called before\nPy_Initialize()\n, otherwise it returnsNULL\n.Changed in version 3.10: It now returns\nNULL\nif called beforePy_Initialize()\n.Deprecated since version 3.13, will be removed in version 3.15: Use\nPyConfig_Get(\"executable\")\n(sys.executable\n) instead.\n-\nwchar_t *Py_GetPrefix()\u00b6\n- Part of the Stable ABI.\nReturn the prefix for installed platform-independent files. This is derived through a number of complicated rules from the program name set with\nPyConfig.program_name\nand some environment variables; for example, if the program name is'/usr/local/bin/python'\n, the prefix is'/usr/local'\n. The returned string points into static storage; the caller should not modify its value. This corresponds to the prefix variable in the top-levelMakefile\nand the--prefix\nargument to the configure script at build time. The value is available to Python code assys.base_prefix\n. It is only useful on Unix. 
See also the next function.

This function should not be called before Py_Initialize(); otherwise it returns NULL.

Changed in version 3.10: It now returns NULL if called before Py_Initialize().

Deprecated since version 3.13, will be removed in version 3.15: Use PyConfig_Get("base_prefix") (sys.base_prefix) instead. Use PyConfig_Get("prefix") (sys.prefix) if virtual environments need to be handled.

- wchar_t *Py_GetExecPrefix()¶
- Part of the Stable ABI.

Return the exec-prefix for installed platform-dependent files. This is derived through a number of complicated rules from the program name set with PyConfig.program_name and some environment variables; for example, if the program name is '/usr/local/bin/python', the exec-prefix is '/usr/local'. The returned string points into static storage; the caller should not modify its value. This corresponds to the exec_prefix variable in the top-level Makefile and the --exec-prefix argument to the configure script at build time. The value is available to Python code as sys.base_exec_prefix. It is only useful on Unix.

Background: The exec-prefix differs from the prefix when platform-dependent files (such as executables and shared libraries) are installed in a different directory tree. In a typical installation, platform-dependent files may be installed in the /usr/local/plat subtree while platform-independent files may be installed in /usr/local.

Generally speaking, a platform is a combination of hardware and software families, e.g. Sparc machines running the Solaris 2.x operating system are considered the same platform, but Intel machines running Solaris 2.x are another platform, and Intel machines running Linux are yet another platform. Different major revisions of the same operating system generally also form different platforms.
Non-Unix operating systems are a different story; the installation strategies on those systems are so different that the prefix and exec-prefix are meaningless, and set to the empty string. Note that compiled Python bytecode files are platform independent (but not independent from the Python version by which they were compiled!).

System administrators will know how to configure the mount or automount programs to share /usr/local between platforms while having /usr/local/plat be a different filesystem for each platform.

This function should not be called before Py_Initialize(); otherwise it returns NULL.

Changed in version 3.10: It now returns NULL if called before Py_Initialize().

Deprecated since version 3.13, will be removed in version 3.15: Use PyConfig_Get("base_exec_prefix") (sys.base_exec_prefix) instead. Use PyConfig_Get("exec_prefix") (sys.exec_prefix) if virtual environments need to be handled.

- wchar_t *Py_GetProgramFullPath()¶
- Part of the Stable ABI.

Return the full program name of the Python executable; this is computed as a side-effect of deriving the default module search path from the program name (set by PyConfig.program_name). The returned string points into static storage; the caller should not modify its value. The value is available to Python code as sys.executable.

This function should not be called before Py_Initialize(); otherwise it returns NULL.

Changed in version 3.10: It now returns NULL if called before Py_Initialize().

Deprecated since version 3.13, will be removed in version 3.15: Use PyConfig_Get("executable") (sys.executable) instead.

- wchar_t *Py_GetPath()¶
- Part of the Stable ABI.

Return the default module search path; this is computed from the program name (set by PyConfig.program_name) and some environment variables. The returned string consists of a series of directory names separated by a platform-dependent delimiter character.
The delimiter character is ':' on Unix and macOS, ';' on Windows. The returned string points into static storage; the caller should not modify its value. The list sys.path is initialized with this value on interpreter startup; it can be (and usually is) modified later to change the search path for loading modules.

This function should not be called before Py_Initialize(); otherwise it returns NULL.

Changed in version 3.10: It now returns NULL if called before Py_Initialize().

Deprecated since version 3.13, will be removed in version 3.15: Use PyConfig_Get("module_search_paths") (sys.path) instead.

- const char *Py_GetVersion()¶
- Part of the Stable ABI.

Return the version of this Python interpreter. This is a string that looks something like

"3.0a5+ (py3k:63103M, May 12 2008, 00:53:55) \n[GCC 4.2.3]"

The first word (up to the first space character) is the current Python version; the first characters are the major and minor version separated by a period. The returned string points into static storage; the caller should not modify its value. The value is available to Python code as sys.version.

See also the Py_Version constant.

- const char *Py_GetPlatform()¶
- Part of the Stable ABI.

Return the platform identifier for the current platform. On Unix, this is formed from the "official" name of the operating system, converted to lower case, followed by the major revision number; e.g., for Solaris 2.x, which is also known as SunOS 5.x, the value is 'sunos5'. On macOS, it is 'darwin'. On Windows, it is 'win32'. The returned string points into static storage; the caller should not modify its value.
The value is available to Python code as sys.platform.

- const char *Py_GetCopyright()¶
- Part of the Stable ABI.

Return the official copyright string for the current Python version, for example

'Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam'

The returned string points into static storage; the caller should not modify its value. The value is available to Python code as sys.copyright.

- const char *Py_GetCompiler()¶
- Part of the Stable ABI.

Return an indication of the compiler used to build the current Python version, in square brackets, for example:

"[GCC 2.7.2.2]"

The returned string points into static storage; the caller should not modify its value. The value is available to Python code as part of the variable sys.version.

- const char *Py_GetBuildInfo()¶
- Part of the Stable ABI.

Return information about the sequence number and build date and time of the current Python interpreter instance, for example

"#67, Aug 1 1997, 22:34:28"

The returned string points into static storage; the caller should not modify its value. The value is available to Python code as part of the variable sys.version.

- void PySys_SetArgvEx(int argc, wchar_t **argv, int updatepath)¶
- Part of the Stable ABI.

This API is kept for backward compatibility: setting PyConfig.argv, PyConfig.parse_argv and PyConfig.safe_path should be used instead; see Python Initialization Configuration.

Set sys.argv based on argc and argv. These parameters are similar to those passed to the program's main() function, with the difference that the first entry should refer to the script file to be executed rather than the executable hosting the Python interpreter. If there isn't a script that will be run, the first entry in argv can be an empty string. If this function fails to initialize sys.argv, a fatal condition is signalled using Py_FatalError().

If updatepath is zero, this is all the function does.
If updatepath is non-zero, the function also modifies sys.path according to the following algorithm:

If the name of an existing script is passed in argv[0], the absolute path of the directory where the script is located is prepended to sys.path.

Otherwise (that is, if argc is 0 or argv[0] doesn't point to an existing file name), an empty string is prepended to sys.path, which is the same as prepending the current working directory (".").

Use Py_DecodeLocale() to decode a bytes string to get a wchar_t* string.

See also the PyConfig.orig_argv and PyConfig.argv members of the Python Initialization Configuration.

Note

It is recommended that applications embedding the Python interpreter for purposes other than executing a single script pass 0 as updatepath, and update sys.path themselves if desired. See CVE-2008-5983.

On versions before 3.1.3, you can achieve the same effect by manually popping the first sys.path element after having called PySys_SetArgv(), for example using:

PyRun_SimpleString("import sys; sys.path.pop(0)\n");

Added in version 3.1.3.

Deprecated since version 3.11, will be removed in version 3.15.

- void PySys_SetArgv(int argc, wchar_t **argv)¶
- Part of the Stable ABI.

This API is kept for backward compatibility: setting PyConfig.argv and PyConfig.parse_argv should be used instead; see Python Initialization Configuration.

This function works like PySys_SetArgvEx() with updatepath set to 1, unless the Python interpreter was started with the -I option.

Use Py_DecodeLocale() to decode a bytes string to get a wchar_t* string.

See also the PyConfig.orig_argv and PyConfig.argv members of the Python Initialization Configuration.

Changed in version 3.4: The updatepath value depends on -I.

Deprecated since version 3.11, will be removed in version 3.15.

- void Py_SetPythonHome(const wchar_t *home)¶
- Part of the Stable ABI.

This API is kept for backward compatibility: setting PyConfig.home should be used instead; see Python
Initialization Configuration.

Set the default "home" directory, that is, the location of the standard Python libraries. See PYTHONHOME for the meaning of the argument string. The argument should point to a zero-terminated character string in static storage whose contents will not change for the duration of the program's execution. No code in the Python interpreter will change the contents of this storage.

Use Py_DecodeLocale() to decode a bytes string to get a wchar_t* string.

Deprecated since version 3.11, will be removed in version 3.15.

- wchar_t *Py_GetPythonHome()¶
- Part of the Stable ABI.

Return the default "home", that is, the value set by PyConfig.home, or the value of the PYTHONHOME environment variable if it is set.

This function should not be called before Py_Initialize(); otherwise it returns NULL.

Changed in version 3.10: It now returns NULL if called before Py_Initialize().

Deprecated since version 3.13, will be removed in version 3.15: Use PyConfig_Get("home") or the PYTHONHOME environment variable instead.

Thread State and the Global Interpreter Lock¶

Unless on a free-threaded build of CPython, the Python interpreter is not fully thread-safe. In order to support multi-threaded Python programs, there's a global lock, called the global interpreter lock or GIL, that must be held by the current thread before it can safely access Python objects. Without the lock, even the simplest operations could cause problems in a multi-threaded program: for example, when two threads simultaneously increment the reference count of the same object, the reference count could end up being incremented only once instead of twice.

Therefore, the rule exists that only the thread that has acquired the GIL may operate on Python objects or call Python/C API functions. In order to emulate concurrency of execution, the interpreter regularly tries to switch threads (see sys.setswitchinterval()).
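The switching behaviour is observable from pure Python, as is the effect of blocking calls such as time.sleep(), which detach the thread state while they wait; a minimal sketch (timings are approximate):

```python
import sys
import threading
import time

# The interval consulted by the interpreter's periodic thread switch.
print("switch interval:", sys.getswitchinterval())  # 0.005 by default

# Because a blocking call like time.sleep() releases the lock, two
# sleeping threads overlap instead of running one after the other.
def pause():
    time.sleep(0.2)

start = time.monotonic()
workers = [threading.Thread(target=pause) for _ in range(2)]
for t in workers:
    t.start()
for t in workers:
    t.join()
elapsed = time.monotonic() - start
print(f"two parallel 0.2 s sleeps took {elapsed:.2f} s")
```

If the sleeps were serialized by the lock, the total would be at least 0.4 s; in practice it stays close to 0.2 s.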
The lock is also released around potentially blocking I/O operations like reading or writing a file, so that other Python threads can run in the meantime.

The Python interpreter keeps some thread-specific bookkeeping information inside a data structure called PyThreadState, known as a thread state. Each OS thread has a thread-local pointer to a PyThreadState; a thread state referenced by this pointer is considered to be attached.

A thread can only have one attached thread state at a time. An attached thread state is typically analogous with holding the GIL, except on free-threaded builds. On builds with the GIL enabled, attaching a thread state will block until the GIL can be acquired. However, even on builds with the GIL disabled, it is still required to have an attached thread state to call most of the C API.

In general, there will always be an attached thread state when using Python's C API. Only in some specific cases (such as in a Py_BEGIN_ALLOW_THREADS block) will the thread not have an attached thread state. If uncertain, check if PyThreadState_GetUnchecked() returns NULL.

Detaching the thread state from extension code¶

Most extension code manipulating the thread state has the following simple structure:

Save the thread state in a local variable.
... Do some blocking I/O operation ...
Restore the thread state from the local variable.

This is so common that a pair of macros exists to simplify it:

Py_BEGIN_ALLOW_THREADS
... Do some blocking I/O operation ...
Py_END_ALLOW_THREADS

The Py_BEGIN_ALLOW_THREADS macro opens a new block and declares a hidden local variable; the Py_END_ALLOW_THREADS macro closes the block.

The block above expands to the following code:

PyThreadState *_save;
_save = PyEval_SaveThread();
... Do some blocking I/O operation ...
PyEval_RestoreThread(_save);

Here is how these functions work: the attached thread state holds the GIL for the entire interpreter.
When detaching the attached thread state, the GIL is released, allowing other threads to attach a thread state to their own thread, thereby acquiring the GIL and starting to execute. The pointer to the prior attached thread state is stored as a local variable.

Upon reaching Py_END_ALLOW_THREADS, the thread state that was previously attached is passed to PyEval_RestoreThread(). This function will block until another thread detaches its thread state, thus allowing the old thread state to be re-attached and the C API to be called again.

For free-threaded builds, the GIL is normally out of the question, but detaching the thread state is still required for blocking I/O and long operations. The difference is that threads don't have to wait for the GIL to be released to attach their thread state, allowing true multi-core parallelism.

Note

Calling system I/O functions is the most common use case for detaching the thread state, but it can also be useful before calling long-running computations which don't need access to Python objects, such as compression or cryptographic functions operating over memory buffers. For example, the standard zlib and hashlib modules detach the thread state when compressing or hashing data.

Non-Python created threads¶

When threads are created using the dedicated Python APIs (such as the threading module), a thread state is automatically associated to them and the code shown above is therefore correct. However, when threads are created from C (for example by a third-party library with its own thread management), they don't hold the GIL, because they don't have an attached thread state.

If you need to call Python code from these threads (often this will be part of a callback API provided by the aforementioned third-party library), you must first register these threads with the interpreter by creating an attached thread state before you can start using the Python/C API.
When you are done, you should detach the thread state, and finally free it.

The PyGILState_Ensure() and PyGILState_Release() functions do all of the above automatically. The typical idiom for calling into Python from a C thread is:

PyGILState_STATE gstate;
gstate = PyGILState_Ensure();

/* Perform Python actions here. */
result = CallSomeFunction();
/* evaluate result or handle exception */

/* Release the thread. No Python API allowed beyond this point. */
PyGILState_Release(gstate);

Note that the PyGILState_* functions assume there is only one global interpreter (created automatically by Py_Initialize()). Python supports the creation of additional interpreters (using Py_NewInterpreter()), but mixing multiple interpreters and the PyGILState_* API is unsupported. This is because PyGILState_Ensure() and similar functions default to attaching a thread state for the main interpreter, meaning that the thread can't safely interact with the calling subinterpreter.

Supporting subinterpreters in non-Python threads¶

If you would like to support subinterpreters with non-Python created threads, you must use the PyThreadState_* API instead of the traditional PyGILState_* API. In particular, you must store the interpreter state from the calling function and pass it to PyThreadState_New(), which will ensure that the thread state is targeting the correct interpreter:

/* The return value of PyInterpreterState_Get() from the
   function that created this thread. */
PyInterpreterState *interp = ThreadData->interp;
PyThreadState *tstate = PyThreadState_New(interp);
PyThreadState_Swap(tstate);

/* GIL of the subinterpreter is now held.
   Perform Python actions here. */
result = CallSomeFunction();
/* evaluate result or handle exception */

/* Destroy the thread state. No Python API allowed beyond this point.
*/
PyThreadState_Clear(tstate);
PyThreadState_DeleteCurrent();

Cautions about fork()¶

Another important thing to note about threads is their behaviour in the face of the C fork() call. On most systems with fork(), after a process forks only the thread that issued the fork will exist. This has a concrete impact both on how locks must be handled and on all stored state in CPython's runtime.

The fact that only the "current" thread remains means any locks held by other threads will never be released. Python solves this for os.fork() by acquiring the locks it uses internally before the fork, and releasing them afterwards. In addition, it resets any Lock objects in the child. When extending or embedding Python, there is no way to inform Python of additional (non-Python) locks that need to be acquired before or reset after a fork. OS facilities such as pthread_atfork() would need to be used to accomplish the same thing. Additionally, when extending or embedding Python, calling fork() directly rather than through os.fork() (and returning to or calling into Python) may result in a deadlock by one of Python's internal locks being held by a thread that is defunct after the fork. PyOS_AfterFork_Child() tries to reset the necessary locks, but is not always able to.

The fact that all other threads go away also means that CPython's runtime state there must be cleaned up properly, which os.fork() does. This means finalizing all other PyThreadState objects belonging to the current interpreter and all other PyInterpreterState objects.
Due to this and the special nature of the "main" interpreter, fork() should only be called in that interpreter's "main" thread, where the CPython global runtime was originally initialized. The only exception is if exec() will be called immediately after.

Cautions regarding runtime finalization¶

In the late stage of interpreter shutdown, after attempting to wait for non-daemon threads to exit (though this can be interrupted by KeyboardInterrupt) and running the atexit functions, the runtime is marked as finalizing: Py_IsFinalizing() and sys.is_finalizing() return true. At this point, only the finalization thread that initiated finalization (typically the main thread) is allowed to acquire the GIL.

If any thread, other than the finalization thread, attempts to attach a thread state during finalization, either explicitly or implicitly, the thread enters a permanently blocked state where it remains until the program exits. In most cases this is harmless, but this can result in deadlock if a later stage of finalization attempts to acquire a lock owned by the blocked thread, or otherwise waits on the blocked thread.

Gross? Yes. This prevents random crashes and/or unexpectedly skipped C++ finalizations further up the call stack when such threads were forcibly exited here in CPython 3.13 and earlier. The CPython runtime thread state C APIs have never had any error reporting or handling expectations at thread state attachment time that would've allowed for graceful exit from this situation.
Changing that would require new stable C APIs and rewriting the majority of C code in the CPython ecosystem to use those with error handling.

High-level API¶

These are the most commonly used types and functions when writing C extension code, or when embedding the Python interpreter:

- type PyInterpreterState¶
- Part of the Limited API (as an opaque struct).

This data structure represents the state shared by a number of cooperating threads. Threads belonging to the same interpreter share their module administration and a few other internal items. There are no public members in this structure.

Threads belonging to different interpreters initially share nothing, except process state like available memory, open file descriptors and such. The global interpreter lock is also shared by all threads, regardless of to which interpreter they belong.

Changed in version 3.12: PEP 684 introduced the possibility of a per-interpreter GIL. See Py_NewInterpreterFromConfig().

- type PyThreadState¶
- Part of the Limited API (as an opaque struct).

This data structure represents the state of a single thread. The only public data member is:

- PyInterpreterState *interp¶

This thread's interpreter state.

- void PyEval_InitThreads()¶
- Part of the Stable ABI.

Deprecated function which does nothing. In Python 3.6 and older, this function created the GIL if it didn't exist.

Changed in version 3.9: The function now does nothing.

Changed in version 3.7: This function is now called by Py_Initialize(), so you don't have to call it yourself anymore.

Changed in version 3.2: This function cannot be called before Py_Initialize() anymore.

Deprecated since version 3.9.

- PyThreadState *PyEval_SaveThread()¶
- Part of the Stable ABI.

Detach the attached thread state and return it.
The thread will have no thread state upon returning.

- void PyEval_RestoreThread(PyThreadState *tstate)¶
- Part of the Stable ABI.

Set the attached thread state to tstate. The passed thread state should not be attached, otherwise deadlock ensues. tstate will be attached upon returning.

Note

Calling this function from a thread when the runtime is finalizing will hang the thread until the program exits, even if the thread was not created by Python. Refer to Cautions regarding runtime finalization for more details.

Changed in version 3.14: Hangs the current thread, rather than terminating it, if called while the interpreter is finalizing.

- PyThreadState *PyThreadState_Get()¶
- Part of the Stable ABI.

Return the attached thread state. If the thread has no attached thread state (such as when inside a Py_BEGIN_ALLOW_THREADS block), then this issues a fatal error (so that the caller needn't check for NULL).

See also PyThreadState_GetUnchecked().

- PyThreadState *PyThreadState_GetUnchecked()¶

Similar to PyThreadState_Get(), but don't kill the process with a fatal error if it is NULL.
The caller is responsible for checking if the result is NULL.

Added in version 3.13: In Python 3.5 to 3.12, the function was private and known as _PyThreadState_UncheckedGet().

- PyThreadState *PyThreadState_Swap(PyThreadState *tstate)¶
- Part of the Stable ABI.

Set the attached thread state to tstate, and return the thread state that was attached prior to calling. This function is safe to call without an attached thread state; it will simply return NULL indicating that there was no prior thread state.

Note

Similar to PyGILState_Ensure(), this function will hang the thread if the runtime is finalizing.

The following functions use thread-local storage, and are not compatible with sub-interpreters:

- type PyGILState_STATE¶
- Part of the Stable ABI.

The type of the value returned by PyGILState_Ensure() and passed to PyGILState_Release().

- enumerator PyGILState_LOCKED¶

The GIL was already held when PyGILState_Ensure() was called.

- enumerator PyGILState_UNLOCKED¶

The GIL was not held when PyGILState_Ensure() was called.

- PyGILState_STATE PyGILState_Ensure()¶
- Part of the Stable ABI.

Ensure that the current thread is ready to call the Python C API regardless of the current state of Python, or of the attached thread state. This may be called as many times as desired by a thread as long as each call is matched with a call to PyGILState_Release(). In general, other thread-related APIs may be used between PyGILState_Ensure() and PyGILState_Release() calls as long as the thread state is restored to its previous state before the Release(). For example, normal usage of the Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS macros is acceptable.

The return value is an opaque "handle" to the attached thread state when PyGILState_Ensure() was called, and must be passed to PyGILState_Release() to ensure Python is left in the same state.
Even though recursive calls are allowed, these handles cannot be shared: each unique call to PyGILState_Ensure() must save the handle for its call to PyGILState_Release().

When the function returns, there will be an attached thread state and the thread will be able to call arbitrary Python code. Failure is a fatal error.

Warning

Calling this function when the runtime is finalizing is unsafe. Doing so will either hang the thread until the program ends, or fully crash the interpreter in rare cases. Refer to Cautions regarding runtime finalization for more details.

Changed in version 3.14: Hangs the current thread, rather than terminating it, if called while the interpreter is finalizing.

- void PyGILState_Release(PyGILState_STATE)¶
- Part of the Stable ABI.

Release any resources previously acquired. After this call, Python's state will be the same as it was prior to the corresponding PyGILState_Ensure() call (but generally this state will be unknown to the caller, hence the use of the GILState API).

Every call to PyGILState_Ensure() must be matched by a call to PyGILState_Release() on the same thread.

- PyThreadState *PyGILState_GetThisThreadState()¶
- Part of the Stable ABI.

Get the attached thread state for this thread. May return NULL if no GILState API has been used on the current thread. Note that the main thread always has such a thread-state, even if no auto-thread-state call has been made on the main thread. This is mainly a helper/diagnostic function.

Note

This function may return non-NULL even when the thread state is detached. Prefer PyThreadState_Get() or PyThreadState_GetUnchecked() for most cases.

- int PyGILState_Check()¶

Return 1 if the current thread is holding the GIL and 0 otherwise. This function can be called from any thread at any time. Only if it has had its thread state initialized via PyGILState_Ensure() will it return 1. This is mainly a helper/diagnostic function.
It can be useful for example in callback contexts or memory allocation functions when knowing that the GIL is locked can allow the caller to perform sensitive actions or otherwise behave differently.

Note

If the current Python process has ever created a subinterpreter, this function will always return 1. Prefer PyThreadState_GetUnchecked() for most cases.

Added in version 3.4.

The following macros are normally used without a trailing semicolon; look for example usage in the Python source distribution.

- Py_BEGIN_ALLOW_THREADS¶
- Part of the Stable ABI.

This macro expands to { PyThreadState *_save; _save = PyEval_SaveThread();. Note that it contains an opening brace; it must be matched with a following Py_END_ALLOW_THREADS macro. See above for further discussion of this macro.

- Py_END_ALLOW_THREADS¶
- Part of the Stable ABI.

This macro expands to PyEval_RestoreThread(_save); }. Note that it contains a closing brace; it must be matched with an earlier Py_BEGIN_ALLOW_THREADS macro. See above for further discussion of this macro.

- Py_BLOCK_THREADS¶
- Part of the Stable ABI.

This macro expands to PyEval_RestoreThread(_save);: it is equivalent to Py_END_ALLOW_THREADS without the closing brace.

- Py_UNBLOCK_THREADS¶
- Part of the Stable ABI.

This macro expands to _save = PyEval_SaveThread();: it is equivalent to Py_BEGIN_ALLOW_THREADS without the opening brace and variable declaration.

Low-level API¶

All of the following functions must be called after Py_Initialize().

Changed in version 3.7: Py_Initialize() now initializes the GIL and sets an attached thread state.

- PyInterpreterState *PyInterpreterState_New()¶
- Part of the Stable ABI.

Create a new interpreter state object.
An attached thread state is not needed, but may optionally exist if it is necessary to serialize calls to this function.

Raises an auditing event cpython.PyInterpreterState_New with no arguments.

- void PyInterpreterState_Clear(PyInterpreterState *interp)¶
- Part of the Stable ABI.

Reset all information in an interpreter state object. There must be an attached thread state for the interpreter.

Raises an auditing event cpython.PyInterpreterState_Clear with no arguments.

- void PyInterpreterState_Delete(PyInterpreterState *interp)¶
- Part of the Stable ABI.

Destroy an interpreter state object. There should not be an attached thread state for the target interpreter. The interpreter state must have been reset with a previous call to PyInterpreterState_Clear().

- PyThreadState *PyThreadState_New(PyInterpreterState *interp)¶
- Part of the Stable ABI.

Create a new thread state object belonging to the given interpreter object. An attached thread state is not needed.

- void PyThreadState_Clear(PyThreadState *tstate)¶
- Part of the Stable ABI.

Reset all information in a thread state object. tstate must be attached.

Changed in version 3.9: This function now calls the PyThreadState.on_delete callback. Previously, that happened in PyThreadState_Delete().

Changed in version 3.13: The PyThreadState.on_delete callback was removed.

- void PyThreadState_Delete(PyThreadState *tstate)¶
- Part of the Stable ABI.

Destroy a thread state object. tstate should not be attached to any thread.
tstate must have been reset with a previous call to PyThreadState_Clear().

- void PyThreadState_DeleteCurrent(void)¶

Detach the attached thread state (which must have been reset with a previous call to PyThreadState_Clear()) and then destroy it. No thread state will be attached upon returning.

- PyFrameObject *PyThreadState_GetFrame(PyThreadState *tstate)¶
- Part of the Stable ABI since version 3.10.

Get the current frame of the Python thread state tstate. Return a strong reference. Return NULL if no frame is currently executing.

See also PyEval_GetFrame().

tstate must not be NULL, and must be attached.

Added in version 3.9.

- uint64_t PyThreadState_GetID(PyThreadState *tstate)¶
- Part of the Stable ABI since version 3.10.

Get the unique thread state identifier of the Python thread state tstate. tstate must not be NULL, and must be attached.

Added in version 3.9.

- PyInterpreterState *PyThreadState_GetInterpreter(PyThreadState *tstate)¶
- Part of the Stable ABI since version 3.10.

Get the interpreter of the Python thread state tstate. tstate must not be NULL, and must be attached.

Added in version 3.9.

- void PyThreadState_EnterTracing(PyThreadState *tstate)¶

Suspend tracing and profiling in the Python thread state tstate. Resume them using the PyThreadState_LeaveTracing() function.

Added in version 3.11.

- void PyThreadState_LeaveTracing(PyThreadState *tstate)¶

Resume tracing and profiling in the Python thread state tstate suspended by the PyThreadState_EnterTracing() function.

See also the PyEval_SetTrace() and PyEval_SetProfile() functions.

Added in version 3.11.

- int PyUnstable_ThreadState_SetStackProtection(PyThreadState *tstate, void *stack_start_addr, size_t stack_size)¶
- This is Unstable API. It may change without warning in minor releases.

Set the stack protection start address and stack protection size of a Python thread state. On success, return 0.
On failure, set an exception and return-1\n.CPython implements recursion control for C code by raising\nRecursionError\nwhen it notices that the machine execution stack is close to overflow. See for example thePy_EnterRecursiveCall()\nfunction. For this, it needs to know the location of the current thread\u2019s stack, which it normally gets from the operating system. When the stack is changed, for example using context switching techniques like the Boost library\u2019sboost::context\n, you must callPyUnstable_ThreadState_SetStackProtection()\nto inform CPython of the change.Call\nPyUnstable_ThreadState_SetStackProtection()\neither before or after changing the stack. Do not call any other Python C API between the call and the stack change.See\nPyUnstable_ThreadState_ResetStackProtection()\nfor undoing this operation.Added in version 3.14.1:\nWarning\nThis function was added in a bugfix release, and extensions that use it will be incompatible with Python 3.14.0. Most packaging tools for Python are not able to handle this incompatibility automatically, and will need explicit configuration. When using PyPA standards (wheels and source distributions), specify\nRequires-Python: != 3.14.0.*\nin core metadata.\n-\nvoid PyUnstable_ThreadState_ResetStackProtection(PyThreadState *tstate)\u00b6\n- This is Unstable API. It may change without warning in minor releases.\nReset the stack protection start address and stack protection size of a Python thread state to the operating system defaults.\nSee\nPyUnstable_ThreadState_SetStackProtection()\nfor an explanation.Added in version 3.14.1:\nWarning\nThis function was added in a bugfix release, and extensions that use it will be incompatible with Python 3.14.0. Most packaging tools for Python are not able to handle this incompatibility automatically, and will need explicit configuration. 
When using PyPA standards (wheels and source distributions), specify\nRequires-Python: != 3.14.0.*\nin core metadata.\n-\nPyInterpreterState *PyInterpreterState_Get(void)\u00b6\n- Part of the Stable ABI since version 3.9.\nGet the current interpreter.\nIssue a fatal error if there is no attached thread state. It cannot return NULL.\nAdded in version 3.9.\n-\nint64_t PyInterpreterState_GetID(PyInterpreterState *interp)\u00b6\n- Part of the Stable ABI since version 3.7.\nReturn the interpreter\u2019s unique ID. If there was any error in doing so then\n-1\nis returned and an error is set.The caller must have an attached thread state.\nAdded in version 3.7.\n-\nPyObject *PyInterpreterState_GetDict(PyInterpreterState *interp)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI since version 3.8.\nReturn a dictionary in which interpreter-specific data may be stored. If this function returns\nNULL\nthen no exception has been raised and the caller should assume no interpreter-specific dict is available.This is not a replacement for\nPyModule_GetState()\n, which extensions should use to store interpreter-specific state information.The returned dictionary is borrowed from the interpreter and is valid until interpreter shutdown.\nAdded in version 3.8.\n-\ntypedef PyObject *(*_PyFrameEvalFunction)(PyThreadState *tstate, _PyInterpreterFrame *frame, int throwflag)\u00b6\nType of a frame evaluation function.\nThe throwflag parameter is used by the\nthrow()\nmethod of generators: if non-zero, handle the current exception.Changed in version 3.9: The function now takes a tstate parameter.\nChanged in version 3.11: The frame parameter changed from\nPyFrameObject*\nto_PyInterpreterFrame*\n.\n-\n_PyFrameEvalFunction _PyInterpreterState_GetEvalFrameFunc(PyInterpreterState *interp)\u00b6\nGet the frame evaluation function.\nSee the PEP 523 \u201cAdding a frame evaluation API to CPython\u201d.\nAdded in version 3.9.\n-\nvoid 
_PyInterpreterState_SetEvalFrameFunc(PyInterpreterState *interp, _PyFrameEvalFunction eval_frame)\u00b6\nSet the frame evaluation function.\nSee the PEP 523 \u201cAdding a frame evaluation API to CPython\u201d.\nAdded in version 3.9.\n-\nPyObject *PyThreadState_GetDict()\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn a dictionary in which extensions can store thread-specific state information. Each extension should use a unique key to use to store state in the dictionary. It is okay to call this function when no thread state is attached. If this function returns\nNULL\n, no exception has been raised and the caller should assume no thread state is attached.\n-\nint PyThreadState_SetAsyncExc(unsigned long id, PyObject *exc)\u00b6\n- Part of the Stable ABI.\nAsynchronously raise an exception in a thread. The id argument is the thread id of the target thread; exc is the exception object to be raised. This function does not steal any references to exc. To prevent naive misuse, you must write your own C extension to call this. Must be called with an attached thread state. Returns the number of thread states modified; this is normally one, but will be zero if the thread id isn\u2019t found. If exc is\nNULL\n, the pending exception (if any) for the thread is cleared. This raises no exceptions.Changed in version 3.7: The type of the id parameter changed from long to unsigned long.\n-\nvoid PyEval_AcquireThread(PyThreadState *tstate)\u00b6\n- Part of the Stable ABI.\nAttach tstate to the current thread, which must not be\nNULL\nor already attached.The calling thread must not already have an attached thread state.\nNote\nCalling this function from a thread when the runtime is finalizing will hang the thread until the program exits, even if the thread was not created by Python. 
Refer to Cautions regarding runtime finalization for more details.\nChanged in version 3.8: Updated to be consistent with\nPyEval_RestoreThread()\n,Py_END_ALLOW_THREADS()\n, andPyGILState_Ensure()\n, and terminate the current thread if called while the interpreter is finalizing.Changed in version 3.14: Hangs the current thread, rather than terminating it, if called while the interpreter is finalizing.\nPyEval_RestoreThread()\nis a higher-level function which is always available (even when threads have not been initialized).\n-\nvoid PyEval_ReleaseThread(PyThreadState *tstate)\u00b6\n- Part of the Stable ABI.\nDetach the attached thread state. The tstate argument, which must not be\nNULL\n, is only used to check that it represents the attached thread state \u2014 if it isn\u2019t, a fatal error is reported.PyEval_SaveThread()\nis a higher-level function which is always available (even when threads have not been initialized).\nSub-interpreter support\u00b6\nWhile in most uses, you will only embed a single Python interpreter, there are cases where you need to create several independent interpreters in the same process and perhaps even in the same thread. Sub-interpreters allow you to do that.\nThe \u201cmain\u201d interpreter is the first one created when the runtime initializes.\nIt is usually the only Python interpreter in a process. Unlike sub-interpreters,\nthe main interpreter has unique process-global responsibilities like signal\nhandling. It is also responsible for execution during runtime initialization and\nis usually the active interpreter during runtime finalization. The\nPyInterpreterState_Main()\nfunction returns a pointer to its state.\nYou can switch between sub-interpreters using the PyThreadState_Swap()\nfunction. You can create and destroy them using the following functions:\n-\ntype PyInterpreterConfig\u00b6\nStructure containing most parameters to configure a sub-interpreter. 
Its values are used only in\nPy_NewInterpreterFromConfig()\nand never modified by the runtime.Added in version 3.12.\nStructure fields:\n-\nint use_main_obmalloc\u00b6\nIf this is\n0\nthen the sub-interpreter will use its own \u201cobject\u201d allocator state. Otherwise it will use (share) the main interpreter\u2019s.If this is\n0\nthencheck_multi_interp_extensions\nmust be1\n(non-zero). If this is1\nthengil\nmust not bePyInterpreterConfig_OWN_GIL\n.\n-\nint allow_fork\u00b6\nIf this is\n0\nthen the runtime will not support forking the process in any thread where the sub-interpreter is currently active. Otherwise fork is unrestricted.Note that the\nsubprocess\nmodule still works when fork is disallowed.\n-\nint allow_exec\u00b6\nIf this is\n0\nthen the runtime will not support replacing the current process via exec (e.g.os.execv()\n) in any thread where the sub-interpreter is currently active. Otherwise exec is unrestricted.Note that the\nsubprocess\nmodule still works when exec is disallowed.\n-\nint allow_threads\u00b6\nIf this is\n0\nthen the sub-interpreter\u2019sthreading\nmodule won\u2019t create threads. Otherwise threads are allowed.\n-\nint allow_daemon_threads\u00b6\nIf this is\n0\nthen the sub-interpreter\u2019sthreading\nmodule won\u2019t create daemon threads. Otherwise daemon threads are allowed (as long asallow_threads\nis non-zero).\n-\nint check_multi_interp_extensions\u00b6\nIf this is\n0\nthen all extension modules may be imported, including legacy (single-phase init) modules, in any thread where the sub-interpreter is currently active. Otherwise only multi-phase init extension modules (see PEP 489) may be imported. (Also seePy_mod_multiple_interpreters\n.)This must be\n1\n(non-zero) ifuse_main_obmalloc\nis0\n.\n-\nint gil\u00b6\nThis determines the operation of the GIL for the sub-interpreter. 
It may be one of the following:
-
PyInterpreterConfig_DEFAULT_GIL¶
Use the default selection (PyInterpreterConfig_SHARED_GIL).
-
PyInterpreterConfig_SHARED_GIL¶
Use (share) the main interpreter's GIL.
-
PyInterpreterConfig_OWN_GIL¶
Use the sub-interpreter's own GIL.
If this is PyInterpreterConfig_OWN_GIL then PyInterpreterConfig.use_main_obmalloc must be 0.
-
PyStatus Py_NewInterpreterFromConfig(PyThreadState **tstate_p, const PyInterpreterConfig *config)¶
Create a new sub-interpreter. This is an (almost) totally separate environment for the execution of Python code. In particular, the new interpreter has separate, independent versions of all imported modules, including the fundamental modules builtins, __main__ and sys. The table of loaded modules (sys.modules) and the module search path (sys.path) are also separate. The new environment has no sys.argv variable. It has new standard I/O stream file objects sys.stdin, sys.stdout and sys.stderr (however these refer to the same underlying file descriptors).
The given config controls the options with which the interpreter is initialized.
Upon success, tstate_p will be set to the first thread state created in the new sub-interpreter. This thread state is attached. Note that no actual thread is created; see the discussion of thread states below. If creation of the new interpreter is unsuccessful, tstate_p is set to NULL; no exception is set since the exception state is stored in the attached thread state, which might not exist.
Like all other Python/C API functions, an attached thread state must be present before calling this function, but it might be detached upon returning. On success, the returned thread state will be attached. If the sub-interpreter is created with its own GIL then the attached thread state of the calling interpreter will be detached.
When the function returns, the new interpreter's thread state will be attached to the current thread and the previous interpreter's attached thread state will remain detached.
Added in version 3.12.
Sub-interpreters are most effective when isolated from each other, with certain functionality restricted:

    PyInterpreterConfig config = {
        .use_main_obmalloc = 0,
        .allow_fork = 0,
        .allow_exec = 0,
        .allow_threads = 1,
        .allow_daemon_threads = 0,
        .check_multi_interp_extensions = 1,
        .gil = PyInterpreterConfig_OWN_GIL,
    };
    PyThreadState *tstate = NULL;
    PyStatus status = Py_NewInterpreterFromConfig(&tstate, &config);
    if (PyStatus_Exception(status)) {
        Py_ExitStatusException(status);
    }

Note that the config is used only briefly and does not get modified. During initialization the config's values are converted into various PyInterpreterState values. A read-only copy of the config may be stored internally on the PyInterpreterState.
Extension modules are shared between (sub-)interpreters as follows:
For modules using multi-phase initialization, e.g. PyModule_FromDefAndSpec(), a separate module object is created and initialized for each interpreter. Only C-level static and global variables are shared between these module objects.
For modules using single-phase initialization, e.g. PyModule_Create(), the first time a particular extension is imported, it is initialized normally, and a (shallow) copy of its module's dictionary is squirreled away. When the same extension is imported by another (sub-)interpreter, a new module is initialized and filled with the contents of this copy; the extension's init function is not called.
Objects in the module\u2019s dictionary thus end up shared across (sub-)interpreters, which might cause unwanted behavior (see Bugs and caveats below).Note that this is different from what happens when an extension is imported after the interpreter has been completely re-initialized by calling\nPy_FinalizeEx()\nandPy_Initialize()\n; in that case, the extension\u2019sinitmodule\nfunction is called again. As with multi-phase initialization, this means that only C-level static and global variables are shared between these modules.\n-\nPyThreadState *Py_NewInterpreter(void)\u00b6\n- Part of the Stable ABI.\nCreate a new sub-interpreter. This is essentially just a wrapper around\nPy_NewInterpreterFromConfig()\nwith a config that preserves the existing behavior. The result is an unisolated sub-interpreter that shares the main interpreter\u2019s GIL, allows fork/exec, allows daemon threads, and allows single-phase init modules.\n-\nvoid Py_EndInterpreter(PyThreadState *tstate)\u00b6\n- Part of the Stable ABI.\nDestroy the (sub-)interpreter represented by the given thread state. The given thread state must be attached. When the call returns, there will be no attached thread state. All thread states associated with this interpreter are destroyed.\nPy_FinalizeEx()\nwill destroy all sub-interpreters that haven\u2019t been explicitly destroyed at that point.\nA Per-Interpreter GIL\u00b6\nUsing Py_NewInterpreterFromConfig()\nyou can create\na sub-interpreter that is completely isolated from other interpreters,\nincluding having its own GIL. The most important benefit of this\nisolation is that such an interpreter can execute Python code without\nbeing blocked by other interpreters or blocking any others. Thus a\nsingle Python process can truly take advantage of multiple CPU cores\nwhen running Python code. 
The isolation also encourages a different\napproach to concurrency than that of just using threads.\n(See PEP 554 and PEP 684.)\nUsing an isolated interpreter requires vigilance in preserving that\nisolation. That especially means not sharing any objects or mutable\nstate without guarantees about thread-safety. Even objects that are\notherwise immutable (e.g. None\n, (1, 5)\n) can\u2019t normally be shared\nbecause of the refcount. One simple but less-efficient approach around\nthis is to use a global lock around all use of some state (or object).\nAlternately, effectively immutable objects (like integers or strings)\ncan be made safe in spite of their refcounts by making them immortal.\nIn fact, this has been done for the builtin singletons, small integers,\nand a number of other builtin objects.\nIf you preserve isolation then you will have access to proper multi-core computing without the complications that come with free-threading. Failure to preserve isolation will expose you to the full consequences of free-threading, including races and hard-to-debug crashes.\nAside from that, one of the main challenges of using multiple isolated interpreters is how to communicate between them safely (not break isolation) and efficiently. The runtime and stdlib do not provide any standard approach to this yet. A future stdlib module would help mitigate the effort of preserving isolation and expose effective tools for communicating (and sharing) data between interpreters.\nAdded in version 3.12.\nBugs and caveats\u00b6\nBecause sub-interpreters (and the main interpreter) are part of the same\nprocess, the insulation between them isn\u2019t perfect \u2014 for example, using\nlow-level file operations like os.close()\nthey can\n(accidentally or maliciously) affect each other\u2019s open files. 
Because of the\nway extensions are shared between (sub-)interpreters, some extensions may not\nwork properly; this is especially likely when using single-phase initialization\nor (static) global variables.\nIt is possible to insert objects created in one sub-interpreter into\na namespace of another (sub-)interpreter; this should be avoided if possible.\nSpecial care should be taken to avoid sharing user-defined functions, methods, instances or classes between sub-interpreters, since import operations executed by such objects may affect the wrong (sub-)interpreter\u2019s dictionary of loaded modules. It is equally important to avoid sharing objects from which the above are reachable.\nAlso note that combining this functionality with PyGILState_*\nAPIs\nis delicate, because these APIs assume a bijection between Python thread states\nand OS-level threads, an assumption broken by the presence of sub-interpreters.\nIt is highly recommended that you don\u2019t switch sub-interpreters between a pair\nof matching PyGILState_Ensure()\nand PyGILState_Release()\ncalls.\nFurthermore, extensions (such as ctypes\n) using these APIs to allow calling\nof Python code from non-Python created threads will probably be broken when using\nsub-interpreters.\nAsynchronous Notifications\u00b6\nA mechanism is provided to make asynchronous notifications to the main interpreter thread. These notifications take the form of a function pointer and a void pointer argument.\n-\nint Py_AddPendingCall(int (*func)(void*), void *arg)\u00b6\n- Part of the Stable ABI.\nSchedule a function to be called from the main interpreter thread. On success,\n0\nis returned and func is queued for being called in the main thread. On failure,-1\nis returned without setting any exception.When successfully queued, func will be eventually called from the main interpreter thread with the argument arg. 
It will be called asynchronously with respect to normally running Python code, but with both these conditions met:
on a bytecode boundary;
with the main thread holding an attached thread state (func can therefore use the full C API).
func must return 0 on success, or -1 on failure with an exception set. func won't be interrupted to perform another asynchronous notification recursively, but it can still be interrupted to switch threads if the thread state is detached.
This function doesn't need an attached thread state. However, to call this function in a subinterpreter, the caller must have an attached thread state. Otherwise, the function func can be scheduled to be called from the wrong interpreter.
Warning
This is a low-level function, only useful for very special cases. There is no guarantee that func will be called as quickly as possible. If the main thread is busy executing a system call, func won't be called before the system call returns. This function is generally not suitable for calling Python code from arbitrary C threads. Instead, use the PyGILState API.
Added in version 3.1.
Changed in version 3.9: If this function is called in a subinterpreter, the function func is now scheduled to be called from the subinterpreter, rather than being called from the main interpreter. Each subinterpreter now has its own list of scheduled calls.
Changed in version 3.12: This function now always schedules func to be run in the main interpreter.
-
int Py_MakePendingCalls(void)¶
- Part of the Stable ABI.
Execute all pending calls. This is usually executed automatically by the interpreter.
This function returns 0 on success, and returns -1 with an exception set on failure.
If this is not called in the main thread of the main interpreter, this function does nothing and returns 0.
The caller must hold an attached thread state.
Added in version 3.1.
Changed in version 3.12: This function only runs pending calls in the main interpreter.
Profiling and Tracing¶
The Python interpreter provides some low-level support for attaching profiling and execution tracing facilities. These are used for profiling, debugging, and coverage analysis tools.
This C interface allows the profiling or tracing code to avoid the overhead of calling through Python-level callable objects, making a direct C function call instead. The essential attributes of the facility have not changed; the interface allows trace functions to be installed per-thread, and the basic events reported to the trace function are the same as had been reported to the Python-level trace functions in previous versions.
-
typedef int (*Py_tracefunc)(PyObject *obj, PyFrameObject *frame, int what, PyObject *arg)¶
The type of the trace function registered using PyEval_SetProfile() and PyEval_SetTrace(). The first parameter is the object passed to the registration function as obj, frame is the frame object to which the event pertains, what is one of the constants PyTrace_CALL, PyTrace_EXCEPTION, PyTrace_LINE, PyTrace_RETURN, PyTrace_C_CALL, PyTrace_C_EXCEPTION, PyTrace_C_RETURN, or PyTrace_OPCODE, and arg depends on the value of what:

    Value of what          Meaning of arg
    PyTrace_CALL           Always Py_None.
    PyTrace_EXCEPTION      Exception information as returned by sys.exc_info().
    PyTrace_LINE           Always Py_None.
    PyTrace_RETURN         Value being returned to the caller, or NULL if caused by an exception.
    PyTrace_C_CALL         Function object being called.
    PyTrace_C_EXCEPTION    Function object being called.
    PyTrace_C_RETURN       Function object being called.
    PyTrace_OPCODE         Always Py_None.

-
int PyTrace_CALL¶
The value of the what parameter to a Py_tracefunc function when a new call to a function or method is being reported, or a new entry into a generator.
Note that the creation of the iterator for a generator function is not reported as there is no control transfer to the Python bytecode in the corresponding frame.\n-\nint PyTrace_EXCEPTION\u00b6\nThe value of the what parameter to a\nPy_tracefunc\nfunction when an exception has been raised. The callback function is called with this value for what when after any bytecode is processed after which the exception becomes set within the frame being executed. The effect of this is that as exception propagation causes the Python stack to unwind, the callback is called upon return to each frame as the exception propagates. Only trace functions receive these events; they are not needed by the profiler.\n-\nint PyTrace_LINE\u00b6\nThe value passed as the what parameter to a\nPy_tracefunc\nfunction (but not a profiling function) when a line-number event is being reported. It may be disabled for a frame by settingf_trace_lines\nto 0 on that frame.\n-\nint PyTrace_RETURN\u00b6\nThe value for the what parameter to\nPy_tracefunc\nfunctions when a call is about to return.\n-\nint PyTrace_C_CALL\u00b6\nThe value for the what parameter to\nPy_tracefunc\nfunctions when a C function is about to be called.\n-\nint PyTrace_C_EXCEPTION\u00b6\nThe value for the what parameter to\nPy_tracefunc\nfunctions when a C function has raised an exception.\n-\nint PyTrace_C_RETURN\u00b6\nThe value for the what parameter to\nPy_tracefunc\nfunctions when a C function has returned.\n-\nint PyTrace_OPCODE\u00b6\nThe value for the what parameter to\nPy_tracefunc\nfunctions (but not profiling functions) when a new opcode is about to be executed. This event is not emitted by default: it must be explicitly requested by settingf_trace_opcodes\nto 1 on the frame.\n-\nvoid PyEval_SetProfile(Py_tracefunc func, PyObject *obj)\u00b6\nSet the profiler function to func. The obj parameter is passed to the function as its first parameter, and may be any Python object, or\nNULL\n. 
If the profile function needs to maintain state, using a different value for obj for each thread provides a convenient and thread-safe place to store it. The profile function is called for all monitored events except PyTrace_LINE, PyTrace_OPCODE, and PyTrace_EXCEPTION.
See also the sys.setprofile() function.
The caller must have an attached thread state.
-
void PyEval_SetProfileAllThreads(Py_tracefunc func, PyObject *obj)¶
Like PyEval_SetProfile() but sets the profile function in all running threads belonging to the current interpreter instead of setting it only on the current thread.
The caller must have an attached thread state.
Like PyEval_SetProfile(), this function ignores any exceptions raised while setting the profile functions in all threads.
Added in version 3.12.
-
void PyEval_SetTrace(Py_tracefunc func, PyObject *obj)¶
Set the tracing function to func. This is similar to PyEval_SetProfile(), except the tracing function does receive line-number events and per-opcode events, but does not receive any event related to C function objects being called. Any trace function registered using PyEval_SetTrace() will not receive PyTrace_C_CALL, PyTrace_C_EXCEPTION or PyTrace_C_RETURN as a value for the what parameter.
See also the sys.settrace() function.
The caller must have an attached thread state.
-
void PyEval_SetTraceAllThreads(Py_tracefunc func, PyObject *obj)¶
Like PyEval_SetTrace() but sets the tracing function in all running threads belonging to the current interpreter instead of setting it only on the current thread.
The caller must have an attached thread state.
Like PyEval_SetTrace(), this function ignores any exceptions raised while setting the trace functions in all threads.
Added in version 3.12.
Reference tracing¶
Added in version 3.13.
-
typedef int (*PyRefTracer)(PyObject*, int event, void *data)¶
The type of the trace function registered using PyRefTracer_SetTracer().
The first parameter is a Python object that has just been created (when event is set to PyRefTracer_CREATE) or is about to be destroyed (when event is set to PyRefTracer_DESTROY). The data argument is the opaque pointer that was provided when PyRefTracer_SetTracer() was called.
Added in version 3.13.
-
int PyRefTracer_CREATE¶
The value for the event parameter to PyRefTracer functions when a Python object has been created.
-
int PyRefTracer_DESTROY¶
The value for the event parameter to PyRefTracer functions when a Python object has been destroyed.
-
int PyRefTracer_SetTracer(PyRefTracer tracer, void *data)¶
Register a reference tracer function. The function will be called when a new Python object has been created or when an object is going to be destroyed. If data is provided, it must be an opaque pointer that will be passed back when the tracer function is called. Return 0 on success. Set an exception and return -1 on error.
Note that tracer functions must not create Python objects, otherwise the call would be re-entrant. The tracer also must not clear any existing exception or set an exception. A thread state will be active every time the tracer function is called.
There must be an attached thread state when calling this function.
Added in version 3.13.
-
PyRefTracer PyRefTracer_GetTracer(void **data)¶
Get the registered reference tracer function and the value of the opaque data pointer that was registered when PyRefTracer_SetTracer() was called.
If no tracer was registered, this function will return NULL and will set the data pointer to NULL.
There must be an attached thread state when calling this function.
Added in version 3.13.
Advanced Debugger Support¶
These functions are only intended to be used by advanced debugging tools.
-
PyInterpreterState *PyInterpreterState_Head()¶
Return the interpreter state object at the head of the list of all such objects.
-
PyInterpreterState *PyInterpreterState_Main()¶
Return the main interpreter state object.
-
PyInterpreterState *PyInterpreterState_Next(PyInterpreterState *interp)¶
Return the next interpreter state object after interp from the list of all such objects.
-
PyThreadState *PyInterpreterState_ThreadHead(PyInterpreterState *interp)¶
Return the pointer to the first PyThreadState object in the list of threads associated with the interpreter interp.
-
PyThreadState *PyThreadState_Next(PyThreadState *tstate)¶
Return the next thread state object after tstate from the list of all such objects belonging to the same PyInterpreterState object.
Thread Local Storage Support¶
The Python interpreter provides low-level support for thread-local storage (TLS) which wraps the underlying native TLS implementation to support the Python-level thread-local storage API (threading.local). The CPython C-level APIs are similar to those offered by pthreads and Windows: use a thread key and functions to associate a void* value per thread.
A thread state does not need to be attached when calling these functions; they supply their own locking.
Note that Python.h does not include the declaration of the TLS APIs; you need to include pythread.h to use thread-local storage.
Note
None of these API functions handle memory management on behalf of the void* values. You need to allocate and deallocate them yourself.
If the void* values happen to be PyObject*, these functions don\u2019t do refcount operations on them either.\nThread Specific Storage (TSS) API\u00b6\nTSS API is introduced to supersede the use of the existing TLS API within the\nCPython interpreter. This API uses a new type Py_tss_t\ninstead of\nint to represent thread keys.\nAdded in version 3.7.\nSee also\n\u201cA New C-API for Thread-Local Storage in CPython\u201d (PEP 539)\n-\ntype Py_tss_t\u00b6\nThis data structure represents the state of a thread key, the definition of which may depend on the underlying TLS implementation, and it has an internal field representing the key\u2019s initialization state. There are no public members in this structure.\nWhen Py_LIMITED_API is not defined, static allocation of this type by\nPy_tss_NEEDS_INIT\nis allowed.\n-\nPy_tss_NEEDS_INIT\u00b6\nThis macro expands to the initializer for\nPy_tss_t\nvariables. Note that this macro won\u2019t be defined with Py_LIMITED_API.\nDynamic Allocation\u00b6\nDynamic allocation of the Py_tss_t\n, required in extension modules\nbuilt with Py_LIMITED_API, where static allocation of this type\nis not possible due to its implementation being opaque at build time.\n-\nPy_tss_t *PyThread_tss_alloc()\u00b6\n- Part of the Stable ABI since version 3.7.\nReturn a value which is the same state as a value initialized with\nPy_tss_NEEDS_INIT\n, orNULL\nin the case of dynamic allocation failure.\n-\nvoid PyThread_tss_free(Py_tss_t *key)\u00b6\n- Part of the Stable ABI since version 3.7.\nFree the given key allocated by\nPyThread_tss_alloc()\n, after first callingPyThread_tss_delete()\nto ensure any associated thread locals have been unassigned. This is a no-op if the key argument isNULL\n.Note\nA freed key becomes a dangling pointer. You should reset the key to\nNULL\n.\nMethods\u00b6\nThe parameter key of these functions must not be NULL\n. 
Moreover, the\nbehaviors of PyThread_tss_set()\nand PyThread_tss_get()\nare\nundefined if the given Py_tss_t\nhas not been initialized by\nPyThread_tss_create()\n.\n-\nint PyThread_tss_is_created(Py_tss_t *key)\u00b6\n- Part of the Stable ABI since version 3.7.\nReturn a non-zero value if the given\nPy_tss_t\nhas been initialized byPyThread_tss_create()\n.\n-\nint PyThread_tss_create(Py_tss_t *key)\u00b6\n- Part of the Stable ABI since version 3.7.\nReturn a zero value on successful initialization of a TSS key. The behavior is undefined if the value pointed to by the key argument is not initialized by\nPy_tss_NEEDS_INIT\n. This function can be called repeatedly on the same key \u2013 calling it on an already initialized key is a no-op and immediately returns success.\n-\nvoid PyThread_tss_delete(Py_tss_t *key)\u00b6\n- Part of the Stable ABI since version 3.7.\nDestroy a TSS key to forget the values associated with the key across all threads, and change the key\u2019s initialization state to uninitialized. A destroyed key is able to be initialized again by\nPyThread_tss_create()\n. This function can be called repeatedly on the same key \u2013 calling it on an already destroyed key is a no-op.\n-\nint PyThread_tss_set(Py_tss_t *key, void *value)\u00b6\n- Part of the Stable ABI since version 3.7.\nReturn a zero value to indicate successfully associating a void* value with a TSS key in the current thread. Each thread has a distinct mapping of the key to a void* value.\n-\nvoid *PyThread_tss_get(Py_tss_t *key)\u00b6\n- Part of the Stable ABI since version 3.7.\nReturn the void* value associated with a TSS key in the current thread. 
This returns\nNULL\nif no value is associated with the key in the current thread.\nThread Local Storage (TLS) API\u00b6\nDeprecated since version 3.7: This API is superseded by Thread Specific Storage (TSS) API.\nNote\nThis version of the API does not support platforms where the native TLS key\nis defined in a way that cannot be safely cast to int\n. On such platforms,\nPyThread_create_key()\nwill return immediately with a failure status,\nand the other TLS functions will all be no-ops on such platforms.\nDue to the compatibility problem noted above, this version of the API should not be used in new code.\n-\nint PyThread_create_key()\u00b6\n- Part of the Stable ABI.\n-\nvoid PyThread_delete_key(int key)\u00b6\n- Part of the Stable ABI.\n-\nint PyThread_set_key_value(int key, void *value)\u00b6\n- Part of the Stable ABI.\n-\nvoid *PyThread_get_key_value(int key)\u00b6\n- Part of the Stable ABI.\n-\nvoid PyThread_delete_key_value(int key)\u00b6\n- Part of the Stable ABI.\n-\nvoid PyThread_ReInitTLS()\u00b6\n- Part of the Stable ABI.\nSynchronization Primitives\u00b6\nThe C-API provides a basic mutual exclusion lock.\n-\ntype PyMutex\u00b6\nA mutual exclusion lock. The\nPyMutex\nshould be initialized to zero to represent the unlocked state. For example:PyMutex mutex = {0};\nInstances of\nPyMutex\nshould not be copied or moved. Both the contents and address of aPyMutex\nare meaningful, and it must remain at a fixed, writable location in memory.Note\nA\nPyMutex\ncurrently occupies one byte, but the size should be considered unstable. The size may change in future Python releases without a deprecation period.Added in version 3.13.\n-\nvoid PyMutex_Lock(PyMutex *m)\u00b6\nLock mutex m. If another thread has already locked it, the calling thread will block until the mutex is unlocked. While blocked, the thread will temporarily detach the thread state if one exists.\nAdded in version 3.13.\n-\nvoid PyMutex_Unlock(PyMutex *m)\u00b6\nUnlock mutex m. 
The mutex must be locked \u2014 otherwise, the function will issue a fatal error.\nAdded in version 3.13.\n-\nint PyMutex_IsLocked(PyMutex *m)\u00b6\nReturns non-zero if the mutex m is currently locked, zero otherwise.\nNote\nThis function is intended for use in assertions and debugging only and should not be used to make concurrency control decisions, as the lock state may change immediately after the check.\nAdded in version 3.14.\nPython Critical Section API\u00b6\nThe critical section API provides a deadlock avoidance layer on top of per-object locks for free-threaded CPython. They are intended to replace reliance on the global interpreter lock, and are no-ops in versions of Python with the global interpreter lock.\nCritical sections are intended to be used for custom types implemented\nin C-API extensions. They should generally not be used with built-in types like\nlist\nand dict\nbecause their public C-APIs\nalready use critical sections internally, with the notable\nexception of PyDict_Next()\n, which requires critical section\nto be acquired externally.\nCritical sections avoid deadlocks by implicitly suspending active critical\nsections, hence, they do not provide exclusive access such as provided by\ntraditional locks like PyMutex\n. When a critical section is started,\nthe per-object lock for the object is acquired. If the code executed inside the\ncritical section calls C-API functions then it can suspend the critical section thereby\nreleasing the per-object lock, so other threads can acquire the per-object lock\nfor the same object.\nVariants that accept PyMutex\npointers rather than Python objects are also\navailable. 
Use these variants to start a critical section in a situation where\nthere is no PyObject\n\u2013 for example, when working with a C type that\ndoes not extend or wrap PyObject\nbut still needs to call into the C\nAPI in a manner that might lead to deadlocks.\nThe functions and structs used by the macros are exposed for cases where C macros are not available. They should only be used as in the given macro expansions. Note that the sizes and contents of the structures may change in future Python versions.\nNote\nOperations that need to lock two objects at once must use\nPy_BEGIN_CRITICAL_SECTION2\n. You cannot use nested critical\nsections to lock more than one object at once, because the inner critical\nsection may suspend the outer critical sections. This API does not provide\na way to lock more than two objects at once.\nExample usage:\nstatic PyObject *\nset_field(MyObject *self, PyObject *value)\n{\nPy_BEGIN_CRITICAL_SECTION(self);\nPy_SETREF(self->field, Py_XNewRef(value));\nPy_END_CRITICAL_SECTION();\nPy_RETURN_NONE;\n}\nIn the above example, Py_SETREF\ncalls Py_DECREF\n, which\ncan call arbitrary code through an object\u2019s deallocation function. 
The critical\nsection API avoids potential deadlocks due to reentrancy and lock ordering\nby allowing the runtime to temporarily suspend the critical section if the\ncode triggered by the finalizer blocks and calls PyEval_SaveThread()\n.\n-\nPy_BEGIN_CRITICAL_SECTION(op)\u00b6\nAcquires the per-object lock for the object op and begins a critical section.\nIn the free-threaded build, this macro expands to:\n{ PyCriticalSection _py_cs; PyCriticalSection_Begin(&_py_cs, (PyObject*)(op))\nIn the default build, this macro expands to\n{\n.Added in version 3.13.\n-\nPy_BEGIN_CRITICAL_SECTION_MUTEX(m)\u00b6\nLocks the mutex m and begins a critical section.\nIn the free-threaded build, this macro expands to:\n{ PyCriticalSection _py_cs; PyCriticalSection_BeginMutex(&_py_cs, m)\nNote that unlike\nPy_BEGIN_CRITICAL_SECTION\n, there is no cast for the argument of the macro - it must be aPyMutex\npointer.On the default build, this macro expands to\n{\n.Added in version 3.14.\n-\nPy_END_CRITICAL_SECTION()\u00b6\nEnds the critical section and releases the per-object lock.\nIn the free-threaded build, this macro expands to:\nPyCriticalSection_End(&_py_cs); }\nIn the default build, this macro expands to\n}\n.Added in version 3.13.\n-\nPy_BEGIN_CRITICAL_SECTION2(a, b)\u00b6\nAcquires the per-objects locks for the objects a and b and begins a critical section. 
The locks are acquired in a consistent order (lowest address first) to avoid lock ordering deadlocks.\nIn the free-threaded build, this macro expands to:\n{ PyCriticalSection2 _py_cs2; PyCriticalSection2_Begin(&_py_cs2, (PyObject*)(a), (PyObject*)(b))\nIn the default build, this macro expands to\n{\n.Added in version 3.13.\n-\nPy_BEGIN_CRITICAL_SECTION2_MUTEX(m1, m2)\u00b6\nLocks the mutexes m1 and m2 and begins a critical section.\nIn the free-threaded build, this macro expands to:\n{ PyCriticalSection2 _py_cs2; PyCriticalSection2_BeginMutex(&_py_cs2, m1, m2)\nNote that unlike\nPy_BEGIN_CRITICAL_SECTION2\n, there is no cast for the arguments of the macro - they must bePyMutex\npointers.On the default build, this macro expands to\n{\n.Added in version 3.14.\n-\nPy_END_CRITICAL_SECTION2()\u00b6\nEnds the critical section and releases the per-object locks.\nIn the free-threaded build, this macro expands to:\nPyCriticalSection2_End(&_py_cs2); }\nIn the default build, this macro expands to\n}\n.Added in version 3.13.\nLegacy Locking APIs\u00b6\nThese APIs are obsolete since Python 3.13 with the introduction of\nPyMutex\n.\nChanged in version 3.15: These APIs are now a simple wrapper around PyMutex\n.\n-\ntype PyThread_type_lock\u00b6\nA pointer to a mutual exclusion lock.\n-\ntype PyLockStatus\u00b6\nThe result of acquiring a lock with a timeout.\n-\nenumerator PY_LOCK_FAILURE\u00b6\nFailed to acquire the lock.\n-\nenumerator PY_LOCK_ACQUIRED\u00b6\nThe lock was successfully acquired.\n-\nenumerator PY_LOCK_INTR\u00b6\nThe lock was interrupted by a signal.\n-\nPyThread_type_lock PyThread_allocate_lock(void)\u00b6\n- Part of the Stable ABI.\nAllocate a new lock.\nOn success, this function returns a lock; on failure, this function returns\n0\nwithout an exception set.The caller does not need to hold an attached thread state.\nChanged in version 3.15: This function now always uses\nPyMutex\n. 
In prior versions, this would use a lock provided by the operating system.\n-\nvoid PyThread_free_lock(PyThread_type_lock lock)\u00b6\n- Part of the Stable ABI.\nDestroy lock. The lock should not be held by any thread when calling this.\nThe caller does not need to hold an attached thread state.\n-\nPyLockStatus PyThread_acquire_lock_timed(PyThread_type_lock lock, long long microseconds, int intr_flag)\u00b6\n- Part of the Stable ABI.\nAcquire lock with a timeout.\nThis will wait for microseconds microseconds to acquire the lock. If the timeout expires, this function returns\nPY_LOCK_FAILURE\n. If microseconds is-1\n, this will wait indefinitely until the lock has been released.If intr_flag is\n1\n, acquiring the lock may be interrupted by a signal, in which case this function returnsPY_LOCK_INTR\n. Upon interruption, it\u2019s generally expected that the caller makes a call toPy_MakePendingCalls()\nto propagate an exception to Python code.If the lock is successfully acquired, this function returns\nPY_LOCK_ACQUIRED\n.The caller does not need to hold an attached thread state.\n-\nint PyThread_acquire_lock(PyThread_type_lock lock, int waitflag)\u00b6\n- Part of the Stable ABI.\nAcquire lock.\nIf waitflag is\n1\nand another thread currently holds the lock, this function will wait until the lock can be acquired and will always return1\n.If waitflag is\n0\nand another thread holds the lock, this function will not wait and instead return0\n. If the lock is not held by any other thread, then this function will acquire it and return1\n.Unlike\nPyThread_acquire_lock_timed()\n, acquiring the lock cannot be interrupted by a signal.The caller does not need to hold an attached thread state.\n-\nint PyThread_release_lock(PyThread_type_lock lock)\u00b6\n- Part of the Stable ABI.\nRelease lock. 
If lock is not held, then this function issues a fatal error.\nThe caller does not need to hold an attached thread state.\nOperating System Thread APIs\u00b6\n-\nPYTHREAD_INVALID_THREAD_ID\u00b6\nSentinel value for an invalid thread ID.\nThis is currently equivalent to\n(unsigned long)-1\n.\n-\nunsigned long PyThread_start_new_thread(void (*func)(void*), void *arg)\u00b6\n- Part of the Stable ABI.\nStart function func in a new thread with argument arg. The resulting thread is not intended to be joined.\nfunc must not be\nNULL\n, but arg may beNULL\n.On success, this function returns the identifier of the new thread; on failure, this returns\nPYTHREAD_INVALID_THREAD_ID\n.The caller does not need to hold an attached thread state.\n-\nunsigned long PyThread_get_thread_ident(void)\u00b6\n- Part of the Stable ABI.\nReturn the identifier of the current thread, which will never be zero.\nThis function cannot fail, and the caller does not need to hold an attached thread state.\nSee also\n-\nPyObject *PyThread_GetInfo(void)\u00b6\n- Part of the Stable ABI since version 3.3.\nGet general information about the current thread in the form of a struct sequence object. 
This information is accessible as\nsys.thread_info\nin Python.On success, this returns a new strong reference to the thread information; on failure, this returns\nNULL\nwith an exception set.The caller must hold an attached thread state.\n-\nPY_HAVE_THREAD_NATIVE_ID\u00b6\nThis macro is defined when the system supports native thread IDs.\n-\nunsigned long PyThread_get_thread_native_id(void)\u00b6\n- Part of the Stable ABI on platforms with native thread IDs.\nGet the native identifier of the current thread as it was assigned by the operating system\u2019s kernel, which will never be less than zero.\nThis function is only available when\nPY_HAVE_THREAD_NATIVE_ID\nis defined.This function cannot fail, and the caller does not need to hold an attached thread state.\nSee also\n-\nvoid PyThread_exit_thread(void)\u00b6\n- Part of the Stable ABI.\nTerminate the current thread. This function is generally considered unsafe and should be avoided. It is kept solely for backwards compatibility.\nThis function is only safe to call if all functions in the full call stack are written to safely allow it.\nWarning\nIf the current system uses POSIX threads (also known as \u201cpthreads\u201d), this calls pthread_exit(3), which attempts to unwind the stack and call C++ destructors on some libc implementations. However, if a\nnoexcept\nfunction is reached, it may terminate the process. Other systems, such as macOS, do unwinding.On Windows, this function calls\n_endthreadex()\n, which kills the thread without calling C++ destructors.In any case, there is a risk of corruption on the thread\u2019s stack.\nDeprecated since version 3.14.\n-\nvoid PyThread_init_thread(void)\u00b6\n- Part of the Stable ABI.\nInitialize\nPyThread*\nAPIs. 
Python executes this function automatically, so there\u2019s little need to call it from an extension module.\n-\nint PyThread_set_stacksize(size_t size)\u00b6\n- Part of the Stable ABI.\nSet the stack size of the current thread to size bytes.\nThis function returns\n0\non success,-1\nif size is invalid, or-2\nif the system does not support changing the stack size. This function does not set exceptions.The caller does not need to hold an attached thread state.\n-\nsize_t PyThread_get_stacksize(void)\u00b6\n- Part of the Stable ABI.\nReturn the stack size of the current thread in bytes, or\n0\nif the system\u2019s default stack size is in use.The caller does not need to hold an attached thread state.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 23972}
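Several of the operating-system thread APIs above surface directly in Python: the text states that the struct sequence returned by PyThread_GetInfo() is accessible as sys.thread_info, and (as an assumption here, not stated in the text) threading.get_ident(), threading.get_native_id(), and threading.stack_size() are the usual Python-level counterparts of PyThread_get_thread_ident(), PyThread_get_thread_native_id(), and PyThread_get_stacksize()/PyThread_set_stacksize(). A minimal sketch from the Python side:

```python
import sys
import threading

# sys.thread_info is the Python-level view of PyThread_GetInfo():
# a struct sequence describing the thread implementation.
print(sys.thread_info.name)  # e.g. 'pthread' on most Unix systems, 'nt' on Windows

# threading.get_ident() returns a nonzero identifier for the calling
# thread, matching the "never be zero" guarantee documented for
# PyThread_get_thread_ident().
assert threading.get_ident() != 0

# The native (kernel-assigned) thread ID is only available when the
# platform defines PY_HAVE_THREAD_NATIVE_ID; threading mirrors that
# with get_native_id(), which may be absent on some platforms.
if hasattr(threading, "get_native_id"):
    assert threading.get_native_id() >= 0

# threading.stack_size() reports the stack size used for new threads;
# 0 means the platform default is in use, as for PyThread_get_stacksize().
print(threading.stack_size())
```

The correspondence is convenient for checking, from pure Python, what an extension module would see through the C APIs.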
{"url": "https://docs.python.org/3/library/removed.html", "title": "Removed Modules", "content": "Removed Modules\u00b6\nThe modules described in this chapter have been removed from the Python standard library. They are documented here to help people find replacements.\naifc\n\u2014 Read and write AIFF and AIFC filesasynchat\n\u2014 Asynchronous socket command/response handlerasyncore\n\u2014 Asynchronous socket handleraudioop\n\u2014 Manipulate raw audio datacgi\n\u2014 Common Gateway Interface supportcgitb\n\u2014 Traceback manager for CGI scriptschunk\n\u2014 Read IFF chunked datacrypt\n\u2014 Function to check Unix passwordsdistutils\n\u2014 Building and installing Python modulesimghdr\n\u2014 Determine the type of an imageimp\n\u2014 Access the import internalsmailcap\n\u2014 Mailcap file handlingmsilib\n\u2014 Read and write Microsoft Installer filesnis\n\u2014 Interface to Sun\u2019s NIS (Yellow Pages)nntplib\n\u2014 NNTP protocol clientossaudiodev\n\u2014 Access to OSS-compatible audio devicespipes\n\u2014 Interface to shell pipelinessmtpd\n\u2014 SMTP Serversndhdr\n\u2014 Determine type of sound filespwd\n\u2014 The shadow password databasesunau\n\u2014 Read and write Sun AU filestelnetlib\n\u2014 Telnet clientuu\n\u2014 Encode and decode uuencode filesxdrlib\n\u2014 Encode and decode XDR data", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 267}
{"url": "https://docs.python.org/3/library/getopt.html", "title": " \u2014 C-style parser for command line options", "content": "getopt\n\u2014 C-style parser for command line options\u00b6\nSource code: Lib/getopt.py\nNote\nThis module is considered feature complete. A more declarative and\nextensible alternative to this API is provided in the optparse\nmodule. Further functional enhancements for command line parameter\nprocessing are provided either as third party modules on PyPI,\nor else as features in the argparse\nmodule.\nThis module helps scripts to parse the command line arguments in sys.argv\n.\nIt supports the same conventions as the Unix getopt()\nfunction (including\nthe special meanings of arguments of the form \u2018-\n\u2019 and \u2018--\n\u2018). Long\noptions similar to those supported by GNU software may be used as well via an\noptional third argument.\nUsers who are unfamiliar with the Unix getopt()\nfunction should consider\nusing the argparse\nmodule instead. Users who are familiar with the Unix\ngetopt()\nfunction, but would like to get equivalent behavior while\nwriting less code and getting better help and error messages should consider\nusing the optparse\nmodule. See Choosing an argument parsing library for\nadditional details.\nThis module provides two functions and an exception:\n- getopt.getopt(args, shortopts, longopts=[])\u00b6\nParses command line options and parameter list. args is the argument list to be parsed, without the leading reference to the running program. Typically, this means\nsys.argv[1:]\n. shortopts is the string of option letters that the script wants to recognize, with options that require an argument followed by a colon (':'\n) and options that accept an optional argument followed by two colons ('::'\n); i.e., the same format that Unixgetopt()\nuses.Note\nUnlike GNU\ngetopt()\n, after a non-option argument, all further arguments are considered also non-options. 
This is similar to the way non-GNU Unix systems work.longopts, if specified, must be a list of strings with the names of the long options which should be supported. The leading\n'--'\ncharacters should not be included in the option name. Long options which require an argument should be followed by an equal sign ('='\n). Long options which accept an optional argument should be followed by an equal sign and question mark ('=?'\n). To accept only long options, shortopts should be an empty string. Long options on the command line can be recognized so long as they provide a prefix of the option name that matches exactly one of the accepted options. For example, if longopts is['foo', 'frob']\n, the option--fo\nwill match as--foo\n, but--f\nwill not match uniquely, soGetoptError\nwill be raised.The return value consists of two elements: the first is a list of\n(option, value)\npairs; the second is the list of program arguments left after the option list was stripped (this is a trailing slice of args). Each option-and-value pair returned has the option as its first element, prefixed with a hyphen for short options (e.g.,'-x'\n) or two hyphens for long options (e.g.,'--long-option'\n), and the option argument as its second element, or an empty string if the option has no argument. The options occur in the list in the same order in which they were found, thus allowing multiple occurrences. Long and short options may be mixed.Changed in version 3.14: Optional arguments are supported.\n- getopt.gnu_getopt(args, shortopts, longopts=[])\u00b6\nThis function works like\ngetopt()\n, except that GNU style scanning mode is used by default. This means that option and non-option arguments may be intermixed. 
Thegetopt()\nfunction stops processing options as soon as a non-option argument is encountered.If the first character of the option string is\n'+'\n, or if the environment variablePOSIXLY_CORRECT\nis set, then option processing stops as soon as a non-option argument is encountered.If the first character of the option string is\n'-'\n, non-option arguments that are followed by options are added to the list of option-and-value pairs as a pair that hasNone\nas its first element and the list of non-option arguments as its second element. The second element of thegnu_getopt()\nresult is a list of program arguments after the last option.Changed in version 3.14: Support for returning intermixed options and non-option arguments in order.\n- exception getopt.GetoptError\u00b6\nThis is raised when an unrecognized option is found in the argument list or when an option requiring an argument is given none. The argument to the exception is a string indicating the cause of the error. For long options, an argument given to an option which does not require one will also cause this exception to be raised. The attributes\nmsg\nandopt\ngive the error message and related option; if there is no specific option to which the exception relates,opt\nis an empty string.\n- exception getopt.error\u00b6\nAlias for\nGetoptError\n; for backward compatibility.\nAn example using only Unix style options:\n>>> import getopt\n>>> args = '-a -b -cfoo -d bar a1 a2'.split()\n>>> args\n['-a', '-b', '-cfoo', '-d', 'bar', 'a1', 'a2']\n>>> optlist, args = getopt.getopt(args, 'abc:d:')\n>>> optlist\n[('-a', ''), ('-b', ''), ('-c', 'foo'), ('-d', 'bar')]\n>>> args\n['a1', 'a2']\nUsing long option names is equally easy:\n>>> s = '--condition=foo --testing --output-file abc.def -x a1 a2'\n>>> args = s.split()\n>>> args\n['--condition=foo', '--testing', '--output-file', 'abc.def', '-x', 'a1', 'a2']\n>>> optlist, args = getopt.getopt(args, 'x', [\n... 
'condition=', 'output-file=', 'testing'])\n>>> optlist\n[('--condition', 'foo'), ('--testing', ''), ('--output-file', 'abc.def'), ('-x', '')]\n>>> args\n['a1', 'a2']\nOptional arguments should be specified explicitly:\n>>> s = '-Con -C --color=off --color a1 a2'\n>>> args = s.split()\n>>> args\n['-Con', '-C', '--color=off', '--color', 'a1', 'a2']\n>>> optlist, args = getopt.getopt(args, 'C::', ['color=?'])\n>>> optlist\n[('-C', 'on'), ('-C', ''), ('--color', 'off'), ('--color', '')]\n>>> args\n['a1', 'a2']\nThe order of options and non-option arguments can be preserved:\n>>> s = 'a1 -x a2 a3 a4 --long a5 a6'\n>>> args = s.split()\n>>> args\n['a1', '-x', 'a2', 'a3', 'a4', '--long', 'a5', 'a6']\n>>> optlist, args = getopt.gnu_getopt(args, '-x:', ['long='])\n>>> optlist\n[(None, ['a1']), ('-x', 'a2'), (None, ['a3', 'a4']), ('--long', 'a5')]\n>>> args\n['a6']\nIn a script, typical usage is something like this:\nimport getopt, sys\ndef main():\ntry:\nopts, args = getopt.getopt(sys.argv[1:], \"ho:v\", [\"help\", \"output=\"])\nexcept getopt.GetoptError as err:\n# print help information and exit:\nprint(err) # will print something like \"option -a not recognized\"\nusage()\nsys.exit(2)\noutput = None\nverbose = False\nfor o, a in opts:\nif o == \"-v\":\nverbose = True\nelif o in (\"-h\", \"--help\"):\nusage()\nsys.exit()\nelif o in (\"-o\", \"--output\"):\noutput = a\nelse:\nassert False, \"unhandled option\"\nprocess(args, output=output, verbose=verbose)\nif __name__ == \"__main__\":\nmain()\nNote that an equivalent command line interface could be produced with less code\nand more informative help and error messages by using the optparse\nmodule:\nimport optparse\nif __name__ == '__main__':\nparser = optparse.OptionParser()\nparser.add_option('-o', '--output')\nparser.add_option('-v', dest='verbose', action='store_true')\nopts, args = parser.parse_args()\nprocess(args, output=opts.output, verbose=opts.verbose)\nA roughly equivalent command line interface for this case 
can also be\nproduced by using the argparse\nmodule:\nimport argparse\nif __name__ == '__main__':\nparser = argparse.ArgumentParser()\nparser.add_argument('-o', '--output')\nparser.add_argument('-v', dest='verbose', action='store_true')\nparser.add_argument('rest', nargs='*')\nargs = parser.parse_args()\nprocess(args.rest, output=args.output, verbose=args.verbose)\nSee Choosing an argument parsing library for details on how the argparse\nversion of this code differs in behaviour from the optparse\n(and\ngetopt\n) version.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1939}
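The long-option prefix matching described above (with longopts ['foo', 'frob'], the option --fo resolves to --foo, while the ambiguous --f raises GetoptError) can be exercised directly; this sketch assumes only the behaviour the text itself describes:

```python
import getopt

# '--fo' is an unambiguous prefix of 'foo', so it is expanded to the
# full option name in the result.
opts, rest = getopt.getopt(['--fo', 'a1'], '', ['foo', 'frob'])
print(opts)  # [('--foo', '')]
print(rest)  # ['a1']

# '--f' is a prefix of both 'foo' and 'frob', so GetoptError is
# raised; its .opt attribute names the offending option.
try:
    getopt.getopt(['--f'], '', ['foo', 'frob'])
except getopt.GetoptError as err:
    print(err.opt, '-', err.msg)
```

Note that shortopts is an empty string here, the documented way to accept only long options.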
{"url": "https://docs.python.org/3/library/superseded.html", "title": "Superseded Modules", "content": "Superseded Modules\u00b6\nThe modules described in this chapter have been superseded by other modules for most use cases, and are retained primarily to preserve backwards compatibility.\nModules may appear in this chapter because they only cover a limited subset of\na problem space, and a more generally applicable solution is available elsewhere\nin the standard library (for example, getopt\ncovers the very specific\ntask of \u201cmimic the C getopt()\nAPI in Python\u201d, rather than the broader\ncommand line option parsing and argument parsing capabilities offered by\noptparse\nand argparse\n).\nAlternatively, modules may appear in this chapter because they are deprecated outright, and awaiting removal in a future release, or they are soft deprecated and their use is actively discouraged in new projects. With the removal of various obsolete modules through PEP 594, there are currently no modules in this latter category.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 227}
{"url": "https://docs.python.org/3/library/cmdline.html", "title": "Modules command-line interface (CLI)", "content": "Modules command-line interface (CLI)\u00b6\nThe following modules have a command-line interface.\nencodings.rot_13\nthis\nSee also the Python command-line interface.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 69}
{"url": "https://docs.python.org/3/library/syslog.html", "title": " \u2014 Unix syslog library routines", "content": "syslog\n\u2014 Unix syslog library routines\u00b6\nThis module provides an interface to the Unix syslog\nlibrary routines.\nRefer to the Unix manual pages for a detailed description of the syslog\nfacility.\nAvailability: Unix, not WASI, not iOS.\nThis module wraps the system syslog\nfamily of routines. A pure Python\nlibrary that can speak to a syslog server is available in the\nlogging.handlers\nmodule as SysLogHandler\n.\nThe module defines the following functions:\n- syslog.syslog(message)\u00b6\n- syslog.syslog(priority, message)\nSend the string message to the system logger. A trailing newline is added if necessary. Each message is tagged with a priority composed of a facility and a level. The optional priority argument, which defaults to\nLOG_INFO\n, determines the message priority. If the facility is not encoded in priority using logical-or (LOG_INFO | LOG_USER\n), the value given in theopenlog()\ncall is used.If\nopenlog()\nhas not been called prior to the call tosyslog()\n,openlog()\nwill be called with no arguments.Raises an auditing event\nsyslog.syslog\nwith argumentspriority\n,message\n.Changed in version 3.2: In previous versions,\nopenlog()\nwould not be called automatically if it wasn\u2019t called prior to the call tosyslog()\n, deferring to the syslog implementation to callopenlog()\n.Changed in version 3.12: This function is restricted in subinterpreters. (Only code that runs in multiple interpreters is affected and the restriction is not relevant for most users.)\nopenlog()\nmust be called in the main interpreter beforesyslog()\nmay be used in a subinterpreter. 
Otherwise it will raiseRuntimeError\n.\n- syslog.openlog([ident[, logoption[, facility]]])\u00b6\nLogging options of subsequent\nsyslog()\ncalls can be set by callingopenlog()\n.syslog()\nwill callopenlog()\nwith no arguments if the log is not currently open.The optional ident keyword argument is a string which is prepended to every message, and defaults to\nsys.argv[0]\nwith leading path components stripped. The optional logoption keyword argument (default is 0) is a bit field \u2013 see below for possible values to combine. The optional facility keyword argument (default isLOG_USER\n) sets the default facility for messages which do not have a facility explicitly encoded.Raises an auditing event\nsyslog.openlog\nwith argumentsident\n,logoption\n,facility\n.Changed in version 3.2: In previous versions, keyword arguments were not allowed, and ident was required.\nChanged in version 3.12: This function is restricted in subinterpreters. (Only code that runs in multiple interpreters is affected and the restriction is not relevant for most users.) This may only be called in the main interpreter. It will raise\nRuntimeError\nif called in a subinterpreter.\n- syslog.closelog()\u00b6\nReset the syslog module values and call the system library\ncloselog()\n.This causes the module to behave as it does when initially imported. For example,\nopenlog()\nwill be called on the firstsyslog()\ncall (ifopenlog()\nhasn\u2019t already been called), and ident and otheropenlog()\nparameters are reset to defaults.Raises an auditing event\nsyslog.closelog\nwith no arguments.Changed in version 3.12: This function is restricted in subinterpreters. (Only code that runs in multiple interpreters is affected and the restriction is not relevant for most users.) This may only be called in the main interpreter. It will raise\nRuntimeError\nif called in a subinterpreter.\n- syslog.setlogmask(maskpri)\u00b6\nSet the priority mask to maskpri and return the previous mask value. 
Calls to\nsyslog()\nwith a priority level not set in maskpri are ignored. The default is to log all priorities. The functionLOG_MASK(pri)\ncalculates the mask for the individual priority pri. The functionLOG_UPTO(pri)\ncalculates the mask for all priorities up to and including pri.Raises an auditing event\nsyslog.setlogmask\nwith argumentmaskpri\n.\nThe module defines the following constants:\n- syslog.LOG_EMERG\u00b6\n- syslog.LOG_ALERT\u00b6\n- syslog.LOG_CRIT\u00b6\n- syslog.LOG_ERR\u00b6\n- syslog.LOG_WARNING\u00b6\n- syslog.LOG_NOTICE\u00b6\n- syslog.LOG_INFO\u00b6\n- syslog.LOG_DEBUG\u00b6\nPriority levels (high to low).\n- syslog.LOG_AUTH\u00b6\n- syslog.LOG_AUTHPRIV\u00b6\n- syslog.LOG_CRON\u00b6\n- syslog.LOG_DAEMON\u00b6\n- syslog.LOG_FTP\u00b6\n- syslog.LOG_INSTALL\u00b6\n- syslog.LOG_KERN\u00b6\n- syslog.LOG_LAUNCHD\u00b6\n- syslog.LOG_LPR\u00b6\n- syslog.LOG_MAIL\u00b6\n- syslog.LOG_NETINFO\u00b6\n- syslog.LOG_NEWS\u00b6\n- syslog.LOG_RAS\u00b6\n- syslog.LOG_REMOTEAUTH\u00b6\n- syslog.LOG_SYSLOG\u00b6\n- syslog.LOG_USER\u00b6\n- syslog.LOG_UUCP\u00b6\n- syslog.LOG_LOCAL0\u00b6\n- syslog.LOG_LOCAL1\u00b6\n- syslog.LOG_LOCAL2\u00b6\n- syslog.LOG_LOCAL3\u00b6\n- syslog.LOG_LOCAL4\u00b6\n- syslog.LOG_LOCAL5\u00b6\n- syslog.LOG_LOCAL6\u00b6\n- syslog.LOG_LOCAL7\u00b6\nFacilities, depending on availability in\n\nforLOG_AUTHPRIV\n,LOG_FTP\n,LOG_NETINFO\n,LOG_REMOTEAUTH\n,LOG_INSTALL\nandLOG_RAS\n.Changed in version 3.13: Added\nLOG_FTP\n,LOG_NETINFO\n,LOG_REMOTEAUTH\n,LOG_INSTALL\n,LOG_RAS\n, andLOG_LAUNCHD\n.\n- syslog.LOG_PID\u00b6\n- syslog.LOG_CONS\u00b6\n- syslog.LOG_NDELAY\u00b6\n- syslog.LOG_ODELAY\u00b6\n- syslog.LOG_NOWAIT\u00b6\n- syslog.LOG_PERROR\u00b6\nLog options, depending on availability in\n\nforLOG_ODELAY\n,LOG_NOWAIT\nandLOG_PERROR\n.\nExamples\u00b6\nSimple example\u00b6\nA simple set of examples:\nimport syslog\nsyslog.syslog('Processing started')\nif error:\nsyslog.syslog(syslog.LOG_ERR, 'Processing started')\nAn example of setting some 
log options, these would include the process ID in logged messages, and write the messages to the destination facility used for mail logging:\nsyslog.openlog(logoption=syslog.LOG_PID, facility=syslog.LOG_MAIL)\nsyslog.syslog('E-mail processing initiated...')", "code_snippets": ["\n\n", "\n", " ", "\n ", " ", "\n", " ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 1366}
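The setlogmask() function and its LOG_MASK()/LOG_UPTO() helpers are described above but not shown in the page's examples; a small Unix-only sketch of filtering out low-priority messages:

```python
import syslog

# LOG_UPTO(pri) builds a mask of all priorities up to and including
# pri; setlogmask() installs it and returns the previous mask.
previous = syslog.setlogmask(syslog.LOG_UPTO(syslog.LOG_WARNING))

syslog.syslog(syslog.LOG_ERR, 'kept: LOG_ERR is within the mask')
syslog.syslog(syslog.LOG_DEBUG, 'dropped: LOG_DEBUG is masked out')

# LOG_MASK(pri) covers a single priority; masks can be OR-ed together.
syslog.setlogmask(syslog.LOG_MASK(syslog.LOG_ERR) | syslog.LOG_MASK(syslog.LOG_INFO))
syslog.syslog(syslog.LOG_WARNING, 'dropped: only LOG_ERR and LOG_INFO pass')

# Restore the original mask so later calls are unaffected.
syslog.setlogmask(previous)
```

Because setlogmask() returns the old value, saving and restoring the mask around a noisy section is straightforward.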
{"url": "https://docs.python.org/3/library/resource.html", "title": " \u2014 Resource usage information", "content": "resource\n\u2014 Resource usage information\u00b6\nThis module provides basic mechanisms for measuring and controlling system resources utilized by a program.\nAvailability: Unix, not WASI.\nSymbolic constants are used to specify particular system resources and to request usage information about either the current process or its children.\nAn OSError\nis raised on syscall failure.\nResource Limits\u00b6\nResources usage can be limited using the setrlimit()\nfunction described\nbelow. Each resource is controlled by a pair of limits: a soft limit and a hard\nlimit. The soft limit is the current limit, and may be lowered or raised by a\nprocess over time. The soft limit can never exceed the hard limit. The hard\nlimit can be lowered to any value greater than the soft limit, but not raised.\n(Only processes with the effective UID of the super-user can raise a hard\nlimit.)\nThe specific resources that can be limited are system dependent. They are described in the getrlimit(2) man page. The resources listed below are supported when the underlying operating system supports them; resources which cannot be checked or controlled by the operating system are not defined in this module for those platforms.\n- resource.RLIM_INFINITY\u00b6\nConstant used to represent the limit for an unlimited resource.\n- resource.getrlimit(resource)\u00b6\nReturns a tuple\n(soft, hard)\nwith the current soft and hard limits of resource. RaisesValueError\nif an invalid resource is specified, orerror\nif the underlying system call fails unexpectedly.\n- resource.setrlimit(resource, limits)\u00b6\nSets new limits of consumption of resource. The limits argument must be a tuple\n(soft, hard)\nof two integers describing the new limits. 
A value of RLIM_INFINITY\ncan be used to request a limit that is unlimited. Raises\nValueError\nif an invalid resource is specified, if the new soft limit exceeds the hard limit, or if a process tries to raise its hard limit. Specifying a limit of RLIM_INFINITY\nwhen the hard or system limit for that resource is not unlimited will result in a ValueError\n. A process with the effective UID of super-user can request any valid limit value, including unlimited, but ValueError\nwill still be raised if the requested limit exceeds the system-imposed limit. setrlimit\nmay also raise error\nif the underlying system call fails. VxWorks only supports setting\nRLIMIT_NOFILE\n. Raises an auditing event\nresource.setrlimit\nwith arguments resource\n, limits\n.\n- resource.prlimit(pid, resource[, limits])\u00b6\nCombines\nsetrlimit()\nand getrlimit()\nin one function and supports getting and setting the resource limits of an arbitrary process. If pid is 0, then the call applies to the current process. resource and limits have the same meaning as in setrlimit()\n, except that limits is optional. When limits is not given the function returns the resource limit of the process pid. When limits is given the resource limit of the process is set and the former resource limit is returned.\nRaises\nProcessLookupError\nwhen pid can\u2019t be found and PermissionError\nwhen the user doesn\u2019t have CAP_SYS_RESOURCE\nfor the process. Raises an auditing event\nresource.prlimit\nwith arguments pid\n, resource\n, limits\n. Availability: Linux >= 2.6.36 with glibc >= 2.13.\nAdded in version 3.4.\nThese symbols define resources whose consumption can be controlled using the\nsetrlimit()\nand getrlimit()\nfunctions described above. The values of\nthese symbols are exactly the constants used by C programs.\nThe Unix man page for getrlimit(2) lists the available resources. Note that not all systems use the same symbol or same value to denote the same resource. 
This module does not attempt to mask platform differences \u2014 symbols not defined for a platform will not be available from this module on that platform.\n- resource.RLIMIT_CORE\u00b6\nThe maximum size (in bytes) of a core file that the current process can create. This may result in the creation of a partial core file if a larger core would be required to contain the entire process image.\n- resource.RLIMIT_CPU\u00b6\nThe maximum amount of processor time (in seconds) that a process can use. If this limit is exceeded, a\nSIGXCPU\nsignal is sent to the process. (See thesignal\nmodule documentation for information about how to catch this signal and do something useful, e.g. flush open files to disk.)\n- resource.RLIMIT_FSIZE\u00b6\nThe maximum size of a file which the process may create.\n- resource.RLIMIT_DATA\u00b6\nThe maximum size (in bytes) of the process\u2019s heap.\n- resource.RLIMIT_STACK\u00b6\nThe maximum size (in bytes) of the call stack for the current process. This only affects the stack of the main thread in a multi-threaded process.\n- resource.RLIMIT_RSS\u00b6\nThe maximum resident set size that should be made available to the process.\n- resource.RLIMIT_NPROC\u00b6\nThe maximum number of processes the current process may create.\n- resource.RLIMIT_NOFILE\u00b6\nThe maximum number of open file descriptors for the current process.\n- resource.RLIMIT_OFILE\u00b6\nThe BSD name for\nRLIMIT_NOFILE\n.\n- resource.RLIMIT_MEMLOCK\u00b6\nThe maximum address space which may be locked in memory.\n- resource.RLIMIT_VMEM\u00b6\nThe largest area of mapped memory which the process may occupy. 
Usually an alias of\nRLIMIT_AS\n.Availability: Solaris, FreeBSD, NetBSD.\n- resource.RLIMIT_AS\u00b6\nThe maximum area (in bytes) of address space which may be taken by the process.\n- resource.RLIMIT_MSGQUEUE\u00b6\nThe number of bytes that can be allocated for POSIX message queues.\nAvailability: Linux >= 2.6.8.\nAdded in version 3.4.\n- resource.RLIMIT_NICE\u00b6\nThe ceiling for the process\u2019s nice level (calculated as 20 - rlim_cur).\nAvailability: Linux >= 2.6.12.\nAdded in version 3.4.\n- resource.RLIMIT_RTPRIO\u00b6\nThe ceiling of the real-time priority.\nAvailability: Linux >= 2.6.12.\nAdded in version 3.4.\n- resource.RLIMIT_RTTIME\u00b6\nThe time limit (in microseconds) on CPU time that a process can spend under real-time scheduling without making a blocking syscall.\nAvailability: Linux >= 2.6.25.\nAdded in version 3.4.\n- resource.RLIMIT_SIGPENDING\u00b6\nThe number of signals which the process may queue.\nAvailability: Linux >= 2.6.8.\nAdded in version 3.4.\n- resource.RLIMIT_SBSIZE\u00b6\nThe maximum size (in bytes) of socket buffer usage for this user. This limits the amount of network memory, and hence the amount of mbufs, that this user may hold at any time.\nAvailability: FreeBSD, NetBSD.\nAdded in version 3.4.\n- resource.RLIMIT_SWAP\u00b6\nThe maximum size (in bytes) of the swap space that may be reserved or used by all of this user id\u2019s processes. This limit is enforced only if bit 1 of the vm.overcommit sysctl is set. 
Please see tuning(7) for a complete description of this sysctl.\nAvailability: FreeBSD >= 8.\nAdded in version 3.4.\n- resource.RLIMIT_NPTS\u00b6\nThe maximum number of pseudo-terminals created by this user id.\nAvailability: FreeBSD >= 8.\nAdded in version 3.4.\n- resource.RLIMIT_KQUEUES\u00b6\nThe maximum number of kqueues this user id is allowed to create.\nAvailability: FreeBSD >= 11.\nAdded in version 3.10.\nResource Usage\u00b6\nThese functions are used to retrieve resource usage information:\n- resource.getrusage(who)\u00b6\nThis function returns an object that describes the resources consumed by either the current process or its children, as specified by the who parameter. The who parameter should be specified using one of the\nRUSAGE_*\nconstants described below. A simple example:\nfrom resource import *\nimport time\n\n# a non CPU-bound task\ntime.sleep(3)\nprint(getrusage(RUSAGE_SELF))\n\n# a CPU-bound task\nfor i in range(10 ** 8):\n    _ = 1 + 1\nprint(getrusage(RUSAGE_SELF))\nThe fields of the return value each describe how a particular system resource has been used, e.g. amount of time spent running in user mode or number of times the process was swapped out of main memory. Some values are dependent on the clock tick interval, e.g. the amount of memory the process is using.\nFor backward compatibility, the return value is also accessible as a tuple of 16 elements.\nThe fields\nru_utime\nand ru_stime\nof the return value are floating-point values representing the amount of time spent executing in user mode and the amount of time spent executing in system mode, respectively. The remaining values are integers. Consult the getrusage(2) man page for detailed information about these values. 
A brief summary is presented here:\nIndex | Field | Resource\n0 | ru_utime | time in user mode (float seconds)\n1 | ru_stime | time in system mode (float seconds)\n2 | ru_maxrss | maximum resident set size\n3 | ru_ixrss | shared memory size\n4 | ru_idrss | unshared memory size\n5 | ru_isrss | unshared stack size\n6 | ru_minflt | page faults not requiring I/O\n7 | ru_majflt | page faults requiring I/O\n8 | ru_nswap | number of swap outs\n9 | ru_inblock | block input operations\n10 | ru_oublock | block output operations\n11 | ru_msgsnd | messages sent\n12 | ru_msgrcv | messages received\n13 | ru_nsignals | signals received\n14 | ru_nvcsw | voluntary context switches\n15 | ru_nivcsw | involuntary context switches\nThis function will raise a\nValueError\nif an invalid who parameter is specified. It may also raise an error\nexception in unusual circumstances.\n- resource.getpagesize()\u00b6\nReturns the number of bytes in a system page. (This need not be the same as the hardware page size.)\nThe following RUSAGE_*\nsymbols are passed to the getrusage()\nfunction to specify which process\u2019s information should be provided.\n- resource.RUSAGE_SELF\u00b6\nPass to\ngetrusage()\nto request resources consumed by the calling process, which is the sum of resources used by all threads in the process.\n- resource.RUSAGE_CHILDREN\u00b6\nPass to\ngetrusage()\nto request resources consumed by child processes of the calling process which have been terminated and waited for.\n- resource.RUSAGE_BOTH\u00b6\nPass to\ngetrusage()\nto request resources consumed by both the current process and child processes. May not be available on all systems.\n- resource.RUSAGE_THREAD\u00b6\nPass to\ngetrusage()\nto request resources consumed by the current thread. May not be available on all systems. Added in version 3.2.", "code_snippets": [" ", "\n", "\n\n", "\n", "\n", "\n\n", "\n", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n", "\n"], "language": "Python", "source": "python.org", "token_count": 2456}
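Tying the record above together, a small sketch of getrlimit()/setrlimit() alongside getrusage() (RLIMIT_NOFILE is chosen here only because it is widely available; Unix-only, like the module itself):

```python
import resource

# Soft/hard limits on open file descriptors for the current process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# A process may always lower its own soft limit, and raise it back,
# as long as it stays at or below the hard limit.
if soft != resource.RLIM_INFINITY and soft > 64:
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft - 1, hard))
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

# Usage of the calling process; fields mirror the C struct rusage.
usage = resource.getrusage(resource.RUSAGE_SELF)
print(f"user time: {usage.ru_utime:.3f}s  max RSS: {usage.ru_maxrss}")
```

Raising the hard limit itself would require super-user privileges and would raise ValueError otherwise, as described above.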
{"url": "https://docs.python.org/3/library/fcntl.html", "title": " \u2014 The ", "content": "fcntl\n\u2014 The fcntl\nand ioctl\nsystem calls\u00b6\nThis module performs file and I/O control on file descriptors. It is an\ninterface to the fcntl()\nand ioctl()\nUnix routines.\nSee the fcntl(2) and ioctl(2) Unix manual pages\nfor full details.\nAvailability: Unix, not WASI.\nAll functions in this module take a file descriptor fd as their first\nargument. This can be an integer file descriptor, such as returned by\nsys.stdin.fileno()\n, or an io.IOBase\nobject, such as sys.stdin\nitself, which provides a fileno()\nthat returns a genuine file\ndescriptor.\nChanged in version 3.3: Operations in this module used to raise an IOError\nwhere they now\nraise an OSError\n.\nChanged in version 3.8: The fcntl\nmodule now contains F_ADD_SEALS\n, F_GET_SEALS\n, and\nF_SEAL_*\nconstants for sealing of os.memfd_create()\nfile\ndescriptors.\nChanged in version 3.9: On macOS, the fcntl\nmodule exposes the F_GETPATH\nconstant,\nwhich obtains the path of a file from a file descriptor.\nOn Linux(>=3.15), the fcntl\nmodule exposes the F_OFD_GETLK\n,\nF_OFD_SETLK\nand F_OFD_SETLKW\nconstants, which are used when working\nwith open file description locks.\nChanged in version 3.10: On Linux >= 2.6.11, the fcntl\nmodule exposes the F_GETPIPE_SZ\nand\nF_SETPIPE_SZ\nconstants, which allow to check and modify a pipe\u2019s size\nrespectively.\nChanged in version 3.11: On FreeBSD, the fcntl\nmodule exposes the F_DUP2FD\nand\nF_DUP2FD_CLOEXEC\nconstants, which allow to duplicate a file descriptor,\nthe latter setting FD_CLOEXEC\nflag in addition.\nChanged in version 3.12: On Linux >= 4.5, the fcntl\nmodule exposes the FICLONE\nand\nFICLONERANGE\nconstants, which allow to share some data of one file with\nanother file by reflinking on some filesystems (e.g., btrfs, OCFS2, and\nXFS). 
This behavior is commonly referred to as \u201ccopy-on-write\u201d.\nChanged in version 3.13: On Linux >= 2.6.32, the fcntl\nmodule exposes the\nF_GETOWN_EX\n, F_SETOWN_EX\n, F_OWNER_TID\n, F_OWNER_PID\n, F_OWNER_PGRP\nconstants, which allow to direct I/O availability signals\nto a specific thread, process, or process group.\nOn Linux >= 4.13, the fcntl\nmodule exposes the\nF_GET_RW_HINT\n, F_SET_RW_HINT\n, F_GET_FILE_RW_HINT\n,\nF_SET_FILE_RW_HINT\n, and RWH_WRITE_LIFE_*\nconstants, which allow\nto inform the kernel about the relative expected lifetime of writes on\na given inode or via a particular open file description.\nOn Linux >= 5.1 and NetBSD, the fcntl\nmodule exposes the\nF_SEAL_FUTURE_WRITE\nconstant for use with F_ADD_SEALS\nand\nF_GET_SEALS\noperations.\nOn FreeBSD, the fcntl\nmodule exposes the F_READAHEAD\n, F_ISUNIONSTACK\n, and F_KINFO\nconstants.\nOn macOS and FreeBSD, the fcntl\nmodule exposes the F_RDAHEAD\nconstant.\nOn NetBSD and AIX, the fcntl\nmodule exposes the F_CLOSEM\nconstant.\nOn NetBSD, the fcntl\nmodule exposes the F_MAXFD\nconstant.\nOn macOS and NetBSD, the fcntl\nmodule exposes the F_GETNOSIGPIPE\nand F_SETNOSIGPIPE\nconstant.\nChanged in version 3.14: On Linux >= 6.1, the fcntl\nmodule exposes the F_DUPFD_QUERY\nto query a file descriptor pointing to the same file.\nThe module defines the following functions:\n- fcntl.fcntl(fd, cmd, arg=0, /)\u00b6\nPerform the operation cmd on file descriptor fd (file objects providing a\nfileno()\nmethod are accepted as well). The values used for cmd are operating system dependent, and are available as constants in thefcntl\nmodule, using the same names as used in the relevant C header files. The argument arg can either be an integer value, a bytes-like object, or a string. 
The type and size of arg must match the type and size of the argument of the operation as specified in the relevant C documentation. When arg is an integer, the function returns the integer return value of the C\nfcntl()\ncall. When the argument is a bytes-like object, it represents a binary structure, for example, created by\nstruct.pack()\n. A string value is encoded to binary using the UTF-8 encoding. The binary data is copied to a buffer whose address is passed to the C fcntl()\ncall. The return value after a successful call is the contents of the buffer, converted to a bytes\nobject. The length of the returned object will be the same as the length of the arg argument. This is limited to 1024 bytes. If the\nfcntl()\ncall fails, an OSError\nis raised.\nNote\nIf the type or the size of arg does not match the type or size of the argument of the operation (for example, if an integer is passed when a pointer is expected, or the information returned in the buffer by the operating system is larger than 1024 bytes), this is most likely to result in a segmentation violation or a more subtle data corruption.\nRaises an auditing event\nfcntl.fcntl\nwith arguments fd\n, cmd\n, arg\n. Changed in version 3.14: Added support for arbitrary bytes-like objects, not only\nbytes\n.\n- fcntl.ioctl(fd, request, arg=0, mutate_flag=True, /)\u00b6\nThis function is identical to the\nfcntl()\nfunction, except that the argument handling is even more complicated. The request parameter is limited to values that can fit in 32 or 64 bits, depending on the platform. Additional constants of interest for use as the request argument can be found in the\ntermios\nmodule, under the same names as used in the relevant C header files. The parameter arg can be an integer, a bytes-like object, or a string. 
The type and size of arg must match the type and size of the argument of the operation as specified in the relevant C documentation.\nIf arg does not support the read-write buffer interface or the mutate_flag is false, behavior is as for the\nfcntl()\nfunction. If arg supports the read-write buffer interface (like\nbytearray\n) and mutate_flag is true (the default), then the buffer is (in effect) passed to the underlying ioctl()\nsystem call, the latter\u2019s return code is passed back to the calling Python, and the buffer\u2019s new contents reflect the action of the ioctl()\n. This is a slight simplification, because if the supplied buffer is less than 1024 bytes long it is first copied into a static buffer 1024 bytes long which is then passed to ioctl()\nand copied back into the supplied buffer. If the\nioctl()\ncall fails, an OSError\nexception is raised.\nNote\nIf the type or size of arg does not match the type or size of the operation\u2019s argument (for example, if an integer is passed when a pointer is expected, or the information returned in the buffer by the operating system is larger than 1024 bytes, or the size of the mutable bytes-like object is too small), this is most likely to result in a segmentation violation or a more subtle data corruption.\nAn example:\n>>> import array, fcntl, struct, termios, os\n>>> os.getpgrp()\n13341\n>>> struct.unpack('h', fcntl.ioctl(0, termios.TIOCGPGRP, \" \"))[0]\n13341\n>>> buf = array.array('h', [0])\n>>> fcntl.ioctl(0, termios.TIOCGPGRP, buf, 1)\n0\n>>> buf\narray('h', [13341])\nRaises an auditing event\nfcntl.ioctl\nwith arguments fd\n, request\n, arg\n. Changed in version 3.14: The GIL is always released during a system call. System calls failing with EINTR are automatically retried.\n- fcntl.flock(fd, operation, /)\u00b6\nPerform the lock operation operation on file descriptor fd (file objects providing a\nfileno()\nmethod are accepted as well). See the Unix manual flock(2) for details. 
(On some systems, this function is emulated using fcntl()\n.) If the\nflock()\ncall fails, an OSError\nexception is raised. Raises an auditing event\nfcntl.flock\nwith arguments fd\n, operation\n.\n- fcntl.lockf(fd, cmd, len=0, start=0, whence=0, /)\u00b6\nThis is essentially a wrapper around the\nfcntl()\nlocking calls. fd is the file descriptor (file objects providing a fileno()\nmethod are accepted as well) of the file to lock or unlock, and cmd is one of the following values:\n- fcntl.LOCK_UN\u00b6\nRelease an existing lock.\n- fcntl.LOCK_SH\u00b6\nAcquire a shared lock.\n- fcntl.LOCK_EX\u00b6\nAcquire an exclusive lock.\n- fcntl.LOCK_NB\u00b6\nBitwise OR with any of the other three\nLOCK_*\nconstants to make the request non-blocking.\nIf\nLOCK_NB\nis used and the lock cannot be acquired, an OSError\nwill be raised and the exception will have an errno attribute set to EACCES\nor EAGAIN\n(depending on the operating system; for portability, check for both values). On at least some systems, LOCK_EX\ncan only be used if the file descriptor refers to a file opened for writing. len is the number of bytes to lock, start is the byte offset at which the lock starts, relative to whence, and whence is as with\nio.IOBase.seek()\n, specifically:\n0 \u2013 relative to the start of the file (os.SEEK_SET\n)\n1 \u2013 relative to the current buffer position (os.SEEK_CUR\n)\n2 \u2013 relative to the end of the file (os.SEEK_END\n)\nThe default for start is 0, which means to start at the beginning of the file. The default for len is 0 which means to lock to the end of the file. 
The default for whence is also 0.\nRaises an auditing event\nfcntl.lockf\nwith arguments fd\n, cmd\n, len\n, start\n, whence\n.\nExamples (all on an SVR4-compliant system):\nimport struct, fcntl, os\nf = open(...)\nrv = fcntl.fcntl(f, fcntl.F_SETFL, os.O_NDELAY)\nlockdata = struct.pack('hhllhh', fcntl.F_WRLCK, 0, 0, 0, 0, 0)\nrv = fcntl.fcntl(f, fcntl.F_SETLKW, lockdata)\nNote that in the first example the return value variable rv will hold an\ninteger value; in the second example it will hold a bytes\nobject. The\nstructure layout for the lockdata variable is system dependent \u2014 therefore\nusing the flock()\ncall may be better.", "code_snippets": ["\n", "\n", "\n", " ", " ", " ", "\n", "\n", " ", " ", " ", "\n", " ", " ", " ", "\n", "\n", "\n", "\n", "\n\n", " ", " ", "\n", " ", " ", " ", " ", "\n\n", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 2298}
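As the passage above notes, flock() avoids the system-dependent lockdata layout entirely; a minimal sketch using a throwaway file (the tempfile-based path is an illustrative detail, not from the original):

```python
import fcntl
import os
import tempfile

# Take an exclusive, non-blocking lock on a scratch file.
fd, path = tempfile.mkstemp()
try:
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    # ... critical section: no other process holds a flock() on this file ...
    fcntl.flock(fd, fcntl.LOCK_UN)
finally:
    os.close(fd)
    os.unlink(path)
```

If another process already held the lock, the LOCK_NB request would raise OSError with errno set to EACCES or EAGAIN, per the lockf() discussion above.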
{"url": "https://docs.python.org/3/reference/datamodel.html", "title": "Data model", "content": "3. Data model\u00b6\n3.1. Objects, values and types\u00b6\nObjects are Python\u2019s abstraction for data. All data in a Python program is represented by objects or by relations between objects. Even code is represented by objects.\nEvery object has an identity, a type and a value. An object\u2019s identity never\nchanges once it has been created; you may think of it as the object\u2019s address in\nmemory. The is\noperator compares the identity of two objects; the\nid()\nfunction returns an integer representing its identity.\nCPython implementation detail: For CPython, id(x)\nis the memory address where x\nis stored.\nAn object\u2019s type determines the operations that the object supports (e.g., \u201cdoes\nit have a length?\u201d) and also defines the possible values for objects of that\ntype. The type()\nfunction returns an object\u2019s type (which is an object\nitself). Like its identity, an object\u2019s type is also unchangeable.\n[1]\nThe value of some objects can change. Objects whose value can change are said to be mutable; objects whose value is unchangeable once they are created are called immutable. (The value of an immutable container object that contains a reference to a mutable object can change when the latter\u2019s value is changed; however the container is still considered immutable, because the collection of objects it contains cannot be changed. So, immutability is not strictly the same as having an unchangeable value, it is more subtle.) An object\u2019s mutability is determined by its type; for instance, numbers, strings and tuples are immutable, while dictionaries and lists are mutable.\nObjects are never explicitly destroyed; however, when they become unreachable they may be garbage-collected. 
An implementation is allowed to postpone garbage collection or omit it altogether \u2014 it is a matter of implementation quality how garbage collection is implemented, as long as no objects are collected that are still reachable.\nCPython implementation detail: CPython currently uses a reference-counting scheme with (optional) delayed\ndetection of cyclically linked garbage, which collects most objects as soon\nas they become unreachable, but is not guaranteed to collect garbage\ncontaining circular references. See the documentation of the gc\nmodule for information on controlling the collection of cyclic garbage.\nOther implementations act differently and CPython may change.\nDo not depend on immediate finalization of objects when they become\nunreachable (so you should always close files explicitly).\nNote that the use of the implementation\u2019s tracing or debugging facilities may\nkeep objects alive that would normally be collectable. Also note that catching\nan exception with a try\n\u2026except\nstatement may keep\nobjects alive.\nSome objects contain references to \u201cexternal\u201d resources such as open files or\nwindows. It is understood that these resources are freed when the object is\ngarbage-collected, but since garbage collection is not guaranteed to happen,\nsuch objects also provide an explicit way to release the external resource,\nusually a close()\nmethod. Programs are strongly recommended to explicitly\nclose such objects. The try\n\u2026finally\nstatement\nand the with\nstatement provide convenient ways to do this.\nSome objects contain references to other objects; these are called containers. Examples of containers are tuples, lists and dictionaries. The references are part of a container\u2019s value. 
In most cases, when we talk about the value of a container, we imply the values, not the identities of the contained objects; however, when we talk about the mutability of a container, only the identities of the immediately contained objects are implied. So, if an immutable container (like a tuple) contains a reference to a mutable object, its value changes if that mutable object is changed.\nTypes affect almost all aspects of object behavior. Even the importance of\nobject identity is affected in some sense: for immutable types, operations that\ncompute new values may actually return a reference to any existing object with\nthe same type and value, while for mutable objects this is not allowed.\nFor example, after a = 1; b = 1\n, a and b may or may not refer to\nthe same object with the value one, depending on the implementation.\nThis is because int\nis an immutable type, so the reference to 1\ncan be reused. This behaviour depends on the implementation used, so should\nnot be relied upon, but is something to be aware of when making use of object\nidentity tests.\nHowever, after c = []; d = []\n, c and d are guaranteed to refer to two\ndifferent, unique, newly created empty lists. (Note that e = f = []\nassigns\nthe same object to both e and f.)\n3.2. The standard type hierarchy\u00b6\nBelow is a list of the types that are built into Python. Extension modules (written in C, Java, or other languages, depending on the implementation) can define additional types. Future versions of Python may add types to the type hierarchy (e.g., rational numbers, efficiently stored arrays of integers, etc.), although such additions will often be provided via the standard library instead.\nSome of the type descriptions below contain a paragraph listing \u2018special attributes.\u2019 These are attributes that provide access to the implementation and are not intended for general use. Their definition may change in the future.\n3.2.1. None\u00b6\nThis type has a single value. 
There is a single object with this value. This\nobject is accessed through the built-in name None\n. It is used to signify the\nabsence of a value in many situations, e.g., it is returned from functions that\ndon\u2019t explicitly return anything. Its truth value is false.\n3.2.2. NotImplemented\u00b6\nThis type has a single value. There is a single object with this value. This\nobject is accessed through the built-in name NotImplemented\n. Numeric methods\nand rich comparison methods should return this value if they do not implement the\noperation for the operands provided. (The interpreter will then try the\nreflected operation, or some other fallback, depending on the operator.) It\nshould not be evaluated in a boolean context.\nSee Implementing the arithmetic operations for more details.\nChanged in version 3.9: Evaluating NotImplemented\nin a boolean context was deprecated.\nChanged in version 3.14: Evaluating NotImplemented\nin a boolean context now raises a TypeError\n.\nIt previously evaluated to True\nand emitted a DeprecationWarning\nsince Python 3.9.\n3.2.3. Ellipsis\u00b6\nThis type has a single value. There is a single object with this value. This\nobject is accessed through the literal ...\nor the built-in name\nEllipsis\n. Its truth value is true.\n3.2.4. numbers.Number\n\u00b6\nThese are created by numeric literals and returned as results by arithmetic operators and arithmetic built-in functions. Numeric objects are immutable; once created their value never changes. 
Python numbers are of course strongly related to mathematical numbers, but subject to the limitations of numerical representation in computers.\nThe string representations of the numeric classes, computed by\n__repr__()\nand __str__()\n, have the following\nproperties:\nThey are valid numeric literals which, when passed to their class constructor, produce an object having the value of the original numeric.\nThe representation is in base 10, when possible.\nLeading zeros, possibly excepting a single zero before a decimal point, are not shown.\nTrailing zeros, possibly excepting a single zero after a decimal point, are not shown.\nA sign is shown only when the number is negative.\nPython distinguishes between integers, floating-point numbers, and complex numbers:\n3.2.4.1. numbers.Integral\n\u00b6\nThese represent elements from the mathematical set of integers (positive and negative).\nNote\nThe rules for integer representation are intended to give the most meaningful interpretation of shift and mask operations involving negative integers.\nThere are two types of integers:\n- Integers (\nint\n) These represent numbers in an unlimited range, subject to available (virtual) memory only. For the purpose of shift and mask operations, a binary representation is assumed, and negative numbers are represented in a variant of 2\u2019s complement which gives the illusion of an infinite string of sign bits extending to the left.\n- Booleans (\nbool\n) These represent the truth values False and True. The two objects representing the values\nFalse\nandTrue\nare the only Boolean objects. The Boolean type is a subtype of the integer type, and Boolean values behave like the values 0 and 1, respectively, in almost all contexts, the exception being that when converted to a string, the strings\"False\"\nor\"True\"\nare returned, respectively.\n3.2.4.2. numbers.Real\n(float\n)\u00b6\nThese represent machine-level double precision floating-point numbers. 
You are at the mercy of the underlying machine architecture (and C or Java implementation) for the accepted range and handling of overflow. Python does not support single-precision floating-point numbers; the savings in processor and memory usage that are usually the reason for using these are dwarfed by the overhead of using objects in Python, so there is no reason to complicate the language with two kinds of floating-point numbers.\n3.2.4.3. numbers.Complex\n(complex\n)\u00b6\nThese represent complex numbers as a pair of machine-level double precision\nfloating-point numbers. The same caveats apply as for floating-point numbers.\nThe real and imaginary parts of a complex number z\ncan be retrieved through\nthe read-only attributes z.real\nand z.imag\n.\n3.2.5. Sequences\u00b6\nThese represent finite ordered sets indexed by non-negative numbers. The\nbuilt-in function len()\nreturns the number of items of a sequence. When\nthe length of a sequence is n, the index set contains the numbers 0, 1,\n\u2026, n-1. Item i of sequence a is selected by a[i]\n. Some sequences,\nincluding built-in sequences, interpret negative subscripts by adding the\nsequence length. For example, a[-2]\nequals a[n-2]\n, the second to last\nitem of sequence a with length n\n.\nThe resulting value must be a nonnegative integer less than the number of items\nin the sequence. If it is not, an IndexError\nis raised.\nSequences also support slicing: a[start:stop]\nselects all items with index k such\nthat start <=\nk <\nstop. When used as an expression, a slice is a\nsequence of the same type. 
The comment above about negative subscripts also applies\nto negative slice positions.\nNote that no error is raised if a slice position is less than zero or larger\nthan the length of the sequence.\nIf start is missing or None\n, slicing behaves as if start was zero.\nIf stop is missing or None\n, slicing behaves as if stop was equal to\nthe length of the sequence.\nSome sequences also support \u201cextended slicing\u201d with a third \u201cstep\u201d parameter:\na[i:j:k]\nselects all items of a with index x where x = i + n*k\n, n\n>=\n0\nand i <=\nx <\nj.\nSequences are distinguished according to their mutability:\n3.2.5.1. Immutable sequences\u00b6\nAn object of an immutable sequence type cannot change once it is created. (If the object contains references to other objects, these other objects may be mutable and may be changed; however, the collection of objects directly referenced by an immutable object cannot change.)\nThe following types are immutable sequences:\n- Strings\nA string (\nstr\n) is a sequence of values that represent characters, or more formally, Unicode code points. All the code points in the range0\nto0x10FFFF\ncan be represented in a string.Python doesn\u2019t have a dedicated character type. Instead, every code point in the string is represented as a string object with length\n1\n.The built-in function\nord()\nconverts a code point from its string form to an integer in the range0\nto0x10FFFF\n;chr()\nconverts an integer in the range0\nto0x10FFFF\nto the corresponding length1\nstring object.str.encode()\ncan be used to convert astr\ntobytes\nusing the given text encoding, andbytes.decode()\ncan be used to achieve the opposite.- Tuples\nThe items of a\ntuple\nare arbitrary Python objects. Tuples of two or more items are formed by comma-separated lists of expressions. 
A tuple of one item (a \u2018singleton\u2019) can be formed by affixing a comma to an expression (an expression by itself does not create a tuple, since parentheses must be usable for grouping of expressions). An empty tuple can be formed by an empty pair of parentheses.- Bytes\nA\nbytes\nobject is an immutable array. The items are 8-bit bytes, represented by integers in the range 0 <= x < 256. Bytes literals (likeb'abc'\n) and the built-inbytes()\nconstructor can be used to create bytes objects. Also, bytes objects can be decoded to strings via thedecode()\nmethod.\n3.2.5.2. Mutable sequences\u00b6\nMutable sequences can be changed after they are created. The subscription and\nslicing notations can be used as the target of assignment and del\n(delete) statements.\nNote\nThe collections\nand array\nmodule provide\nadditional examples of mutable sequence types.\nThere are currently two intrinsic mutable sequence types:\n- Lists\nThe items of a list are arbitrary Python objects. Lists are formed by placing a comma-separated list of expressions in square brackets. (Note that there are no special cases needed to form lists of length 0 or 1.)\n- Byte Arrays\nA bytearray object is a mutable array. They are created by the built-in\nbytearray()\nconstructor. Aside from being mutable (and hence unhashable), byte arrays otherwise provide the same interface and functionality as immutablebytes\nobjects.\n3.2.6. Set types\u00b6\nThese represent unordered, finite sets of unique, immutable objects. As such,\nthey cannot be indexed by any subscript. However, they can be iterated over, and\nthe built-in function len()\nreturns the number of items in a set. Common\nuses for sets are fast membership testing, removing duplicates from a sequence,\nand computing mathematical operations such as intersection, union, difference,\nand symmetric difference.\nFor set elements, the same immutability rules apply as for dictionary keys. 
Note that numeric types obey the normal rules for numeric comparison: if two numbers compare equal (e.g., 1 and 1.0), only one of them can be contained in a set.
There are currently two intrinsic set types:
- Sets
These represent a mutable set. They are created by the built-in set() constructor and can be modified afterwards by several methods, such as add().
- Frozen sets
These represent an immutable set. They are created by the built-in frozenset() constructor. As a frozenset is immutable and hashable, it can be used again as an element of another set, or as a dictionary key.
3.2.7. Mappings¶
These represent finite sets of objects indexed by arbitrary index sets. The subscript notation a[k] selects the item indexed by k from the mapping a; this can be used in expressions and as the target of assignments or del statements. The built-in function len() returns the number of items in a mapping.
There is currently a single intrinsic mapping type:
3.2.7.1. Dictionaries¶
These represent finite sets of objects indexed by nearly arbitrary values. The only types of values not acceptable as keys are values containing lists or dictionaries or other mutable types that are compared by value rather than by object identity, the reason being that the efficient implementation of dictionaries requires a key's hash value to remain constant. Numeric types used for keys obey the normal rules for numeric comparison: if two numbers compare equal (e.g., 1 and 1.0) then they can be used interchangeably to index the same dictionary entry.
Dictionaries preserve insertion order: keys are produced in the same order in which they were added to the dictionary.
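A short sketch of the two dictionary behaviors just described, numeric key interchangeability and insertion order:

```python
# 1 == 1.0 and hash(1) == hash(1.0), so both index the same entry.
d2 = {}
d2[1] = "int"
d2[1.0] = "float"           # replaces the value stored under key 1
assert len(d2) == 1
assert d2 == {1: "float"}

# Keys come back in the order they were first inserted.
d = {"a": 1, "b": 2}
d["c"] = 3
assert list(d) == ["a", "b", "c"]
```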
Replacing an existing key does not change the order; however, removing a key and re-inserting it will add it to the end instead of keeping its old place.
Dictionaries are mutable; they can be created by the {} notation (see section Dictionary displays).
The extension modules dbm.ndbm and dbm.gnu provide additional examples of mapping types, as does the collections module.
Changed in version 3.7: Dictionaries did not preserve insertion order in versions of Python before 3.6. In CPython 3.6, insertion order was preserved, but it was considered an implementation detail at that time rather than a language guarantee.
3.2.8. Callable types¶
These are the types to which the function call operation (see section Calls) can be applied:
3.2.8.1. User-defined functions¶
A user-defined function object is created by a function definition (see section Function definitions). It should be called with an argument list containing the same number of items as the function's formal parameter list.
3.2.8.1.1. Special read-only attributes¶
- function.__globals__: A reference to the dictionary that holds the function's global variables (the global namespace of the module in which the function was defined).
- function.__builtins__: A reference to the dictionary that holds the function's builtins. Added in version 3.10.
- function.__closure__: None or a tuple of cells that contain bindings for the function's free variables. A cell object has the attribute cell_contents, which can be used to get (and set) the value of the cell.
3.2.8.1.2. Special writable attributes¶
Most of these attributes check the type of the assigned value:
- function.__doc__: The function's documentation string, or None if unavailable.
- function.__name__: The function's name.
- function.__qualname__: The function's qualified name. Added in version 3.3.
- function.__module__: The name of the module the function was defined in, or None if unavailable.
- function.__defaults__: A tuple containing default values for parameters that have them, or None if no parameters have a default value.
- function.__code__: The code object representing the compiled function body.
- function.__dict__: The namespace supporting arbitrary function attributes.
- function.__annotations__: A dictionary containing annotations of parameters and the return value. Changed in version 3.14: Annotations are now lazily evaluated. See PEP 649.
- function.__annotate__: The annotate function for this function, or None if the function has no annotations. Added in version 3.14.
- function.__kwdefaults__: A dictionary containing defaults for keyword-only parameters, or None if there are none.
- function.__type_params__: A tuple containing the type parameters of a generic function. Added in version 3.12.
Function objects also support getting and setting arbitrary attributes, which can be used, for example, to attach metadata to functions. Regular attribute dot-notation is used to get and set such attributes.
CPython implementation detail: CPython's current implementation only supports function attributes on user-defined functions. Function attributes on built-in functions may be supported in the future.
Additional information about a function's definition can be retrieved from its code object (accessible via the __code__ attribute).
3.2.8.2. Instance methods¶
An instance method object combines a class, a class instance and any callable object (normally a user-defined function).
Special read-only attributes:
- method.__self__: Refers to the class instance object to which the method is bound.
- method.__func__: Refers to the original function object.
- method.__doc__: The method's documentation (same as __func__.__doc__).
- method.__name__: The name of the method (same as __func__.__name__).
- method.__module__: The name of the module the method was defined in, or None if unavailable.
Methods also support accessing (but not setting) the arbitrary function attributes on the underlying function object.
User-defined method objects may be created when getting an attribute of a class (perhaps via an instance of that class), if that attribute is a user-defined function object or a classmethod object.
When an instance method object is created by retrieving a user-defined function object from a class via one of its instances, its __self__ attribute is the instance, and the method object is said to be bound.
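The binding behavior described above can be observed directly (the class and method names here are invented for the sketch):

```python
class C:
    def f(self, n):
        return n + 1

x = C()
m = x.f                                  # retrieval via an instance creates a bound method
assert m.__self__ is x                   # the instance the method is bound to
assert m.__func__ is C.__dict__["f"]     # the original function object
assert x.f(1) == C.f(x, 1) == 2          # calling x.f(1) inserts x in front of the arguments
```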
The new method's __func__ attribute is the original function object.
When an instance method object is created by retrieving a classmethod object from a class or instance, its __self__ attribute is the class itself, and its __func__ attribute is the function object underlying the class method.
When an instance method object is called, the underlying function (__func__) is called, inserting the class instance (__self__) in front of the argument list. For instance, when C is a class which contains a definition for a function f(), and x is an instance of C, calling x.f(1) is equivalent to calling C.f(x, 1).
When an instance method object is derived from a classmethod object, the "class instance" stored in __self__ will actually be the class itself, so that calling either x.f(1) or C.f(1) is equivalent to calling f(C, 1) where f is the underlying function.
It is important to note that user-defined functions which are attributes of a class instance are not converted to bound methods; this only happens when the function is an attribute of the class.
3.2.8.3. Generator functions¶
A function or method which uses the yield statement (see section The yield statement) is called a generator function. Such a function, when called, always returns an iterator object which can be used to execute the body of the function: calling the iterator's iterator.__next__() method will cause the function to execute until it provides a value using the yield statement. When the function executes a return statement or falls off the end, a StopIteration exception is raised and the iterator will have reached the end of the set of values to be returned.
3.2.8.4. Coroutine functions¶
A function or method which is defined using async def is called a coroutine function. Such a function, when called, returns a coroutine object. It may contain await expressions, as well as async with and async for statements.
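A minimal sketch of both kinds of function just described, a generator function driven via its iterator and a coroutine function driven by an event loop (function names are invented for the example):

```python
import asyncio

def counter(n):
    """Generator function: calling it returns an iterator."""
    for i in range(n):
        yield i                 # each yield provides one value

it = counter(2)
assert next(it) == 0 and next(it) == 1
try:
    next(it)                    # falling off the end raises StopIteration
except StopIteration:
    pass

async def add(a, b):
    """Coroutine function: calling it returns a coroutine object."""
    await asyncio.sleep(0)
    return a + b

assert asyncio.run(add(1, 2)) == 3
```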
See also the Coroutine Objects section.
3.2.8.5. Asynchronous generator functions¶
A function or method which is defined using async def and which uses the yield statement is called an asynchronous generator function. Such a function, when called, returns an asynchronous iterator object which can be used in an async for statement to execute the body of the function.
Calling the asynchronous iterator's aiterator.__anext__ method will return an awaitable which when awaited will execute until it provides a value using the yield expression. When the function executes an empty return statement or falls off the end, a StopAsyncIteration exception is raised and the asynchronous iterator will have reached the end of the set of values to be yielded.
3.2.8.6. Built-in functions¶
A built-in function object is a wrapper around a C function. Examples of built-in functions are len() and math.sin() (math is a standard built-in module). The number and type of the arguments are determined by the C function. Special read-only attributes:
- __doc__ is the function's documentation string, or None if unavailable. See function.__doc__.
- __name__ is the function's name. See function.__name__.
- __self__ is set to None (but see the next item).
- __module__ is the name of the module the function was defined in, or None if unavailable. See function.__module__.
3.2.8.7. Built-in methods¶
This is really a different disguise of a built-in function, this time containing an object passed to the C function as an implicit extra argument. An example of a built-in method is alist.append(), assuming alist is a list object. In this case, the special read-only attribute __self__ is set to the object denoted by alist. (The attribute has the same semantics as it does with other instance methods.)
3.2.8.8. Classes¶
Classes are callable.
These objects normally act as factories for new instances of themselves, but variations are possible for class types that override __new__(). The arguments of the call are passed to __new__() and, in the typical case, to __init__() to initialize the new instance.
3.2.8.9. Class Instances¶
Instances of arbitrary classes can be made callable by defining a __call__() method in their class.
3.2.9. Modules¶
Modules are a basic organizational unit of Python code, and are created by the import system as invoked either by the import statement, or by calling functions such as importlib.import_module() and built-in __import__(). A module object has a namespace implemented by a dictionary object (this is the dictionary referenced by the __globals__ attribute of functions defined in the module). Attribute references are translated to lookups in this dictionary, e.g., m.x is equivalent to m.__dict__["x"]. A module object does not contain the code object used to initialize the module (since it isn't needed once the initialization is done).
Attribute assignment updates the module's namespace dictionary, e.g., m.x = 1 is equivalent to m.__dict__["x"] = 1.
3.2.9.2. Other writable attributes on module objects¶
As well as the import-related attributes listed above, module objects also have the following writable attributes:
- module.__doc__¶
The module's documentation string, or None if unavailable. See also: __doc__ attributes.
- module.__annotations__¶
A dictionary containing variable annotations collected during module body execution. For best practices on working with __annotations__, see annotationlib. Changed in version 3.14: Annotations are now lazily evaluated. See PEP 649.
- module.__annotate__¶
The annotate function for this module, or None if the module has no annotations. See also: __annotate__ attributes. Added in version 3.14.
3.2.9.3.
Module dictionaries¶
Module objects also have the following special read-only attribute:
- module.__dict__¶
The module's namespace as a dictionary object. Uniquely among the attributes listed here, __dict__ cannot be accessed as a global variable from within a module; it can only be accessed as an attribute on module objects.
CPython implementation detail: Because of the way CPython clears module dictionaries, the module dictionary will be cleared when the module falls out of scope even if the dictionary still has live references. To avoid this, copy the dictionary or keep the module around while using its dictionary directly.
3.2.10. Custom classes¶
Custom class types are typically created by class definitions (see section Class definitions). A class has a namespace implemented by a dictionary object. Class attribute references are translated to lookups in this dictionary, e.g., C.x is translated to C.__dict__["x"] (although there are a number of hooks which allow for other means of locating attributes). When the attribute name is not found there, the attribute search continues in the base classes. This search of the base classes uses the C3 method resolution order which behaves correctly even in the presence of 'diamond' inheritance structures where there are multiple inheritance paths leading back to a common ancestor. Additional details on the C3 MRO used by Python can be found at The Python 2.3 Method Resolution Order.
When a class attribute reference (for class C, say) would yield a class method object, it is transformed into an instance method object whose __self__ attribute is C.
When it would yield a staticmethod object, it is transformed into the object wrapped by the static method object.
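Both transformations just described can be observed directly (the class and method names are invented for the sketch):

```python
class C:
    @classmethod
    def cm(cls):
        return cls.__name__

    @staticmethod
    def sm():
        return "plain"

x = C()
# A classmethod attribute yields a method bound to the class itself:
assert C.cm.__self__ is C and x.cm.__self__ is C
assert C.cm() == x.cm() == "C"
# A staticmethod retrieval returns the wrapped object, untransformed:
assert C.sm() == x.sm() == "plain"
assert C.__dict__["sm"].__func__ is C.sm   # the wrapper holds the plain function
```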
See section Implementing Descriptors for another way in which attributes retrieved from a class may differ from those actually contained in its __dict__.
Class attribute assignments update the class's dictionary, never the dictionary of a base class.
A class object can be called (see above) to yield a class instance (see below).
3.2.10.1. Special attributes¶
- type.__name__: The class's name.
- type.__qualname__: The class's qualified name.
- type.__module__: The name of the module in which the class was defined.
- type.__dict__: A mapping proxy providing a read-only view of the class's namespace.
- type.__bases__: A tuple containing the class's bases.
- type.__base__: CPython implementation detail: The single base class in the inheritance chain that is responsible for the memory layout of instances. This attribute corresponds to tp_base.
- type.__doc__: The class's documentation string, or None if undefined.
- type.__annotations__: A dictionary containing variable annotations collected during class body execution. For best practices on working with __annotations__, see annotationlib. Warning: Accessing this attribute directly can yield incorrect results; prefer annotationlib.get_annotations(). This attribute does not exist on certain builtin classes; on user-defined classes without annotations, it is an empty dictionary. Changed in version 3.14: Annotations are now lazily evaluated. See PEP 649.
- type.__annotate__: The annotate function for this class, or None if the class has no annotations. Added in version 3.14.
- type.__type_params__: A tuple containing the type parameters of a generic class. Added in version 3.12.
- type.__static_attributes__: A tuple containing names of attributes of this class that are assigned through self.X from any function in its body. Added in version 3.13.
- type.__firstlineno__: The line number of the first line of the class definition, including decorators. Setting the __module__ attribute removes the __firstlineno__ item from the type's dictionary. Added in version 3.13.
- type.__mro__: The tuple of classes that are considered when looking for base classes during method resolution.
3.2.10.2. Special methods¶
In addition to the special attributes described above, all Python classes also have the following two methods available:
- type.mro()¶
This method can be overridden by a metaclass to customize the method resolution order for its instances. It is called at class instantiation, and its result is stored in __mro__.
- type.__subclasses__()¶
Each class keeps a list of weak references to its immediate subclasses. This method returns a list of all those references still alive. The list is in definition order.
Example:
>>> class A: pass
>>> class B(A): pass
>>> A.__subclasses__()
[<class '__main__.B'>]
3.2.11. Class instances¶
A class instance is created by calling a class object (see above). A class instance has a namespace implemented as a dictionary which is the first place in which attribute references are searched. When an attribute is not found there, and the instance's class has an attribute by that name, the search continues with the class attributes. If a class attribute is found that is a user-defined function object, it is transformed into an instance method object whose __self__ attribute is the instance. Static method and class method objects are also transformed; see above under "Classes". See section Implementing Descriptors for another way in which attributes of a class retrieved via its instances may differ from the objects actually stored in the class's __dict__. If no class attribute is found, and the object's class has a __getattr__() method, that is called to satisfy the lookup.
Attribute assignments and deletions update the instance's dictionary, never a class's dictionary. If the class has a __setattr__() or __delattr__() method, this is called instead of updating the instance dictionary directly.
Class instances can pretend to be numbers, sequences, or mappings if they have methods with certain special names. See section Special method names.
3.2.11.1. Special attributes¶
- object.__class__¶
The class to which a class instance belongs.
3.2.12. I/O objects (also known as file objects)¶
A file object represents an open file.
Various shortcuts are available to create file objects: the open() built-in function, and also os.popen(), os.fdopen(), and the makefile() method of socket objects (and perhaps by other functions or methods provided by extension modules).
The objects sys.stdin, sys.stdout and sys.stderr are initialized to file objects corresponding to the interpreter's standard input, output and error streams; they are all open in text mode and therefore follow the interface defined by the io.TextIOBase abstract class.
3.2.13. Internal types¶
A few types used internally by the interpreter are exposed to the user. Their definitions may change with future versions of the interpreter, but they are mentioned here for completeness.
3.2.13.1. Code objects¶
Code objects represent byte-compiled executable Python code, or bytecode. The difference between a code object and a function object is that the function object contains an explicit reference to the function's globals (the module in which it was defined), while a code object contains no context; also the default argument values are stored in the function object, not in the code object (because they represent values calculated at run-time). Unlike function objects, code objects are immutable and contain no references (directly or indirectly) to mutable objects.
3.2.13.1.1. Special read-only attributes¶
- codeobject.co_name: The function name.
- codeobject.co_qualname: The fully qualified function name. Added in version 3.11.
- codeobject.co_argcount: The total number of positional parameters (including positional-only parameters and parameters with default values) that the function has.
- codeobject.co_posonlyargcount: The number of positional-only parameters (including arguments with default values) that the function has.
- codeobject.co_kwonlyargcount: The number of keyword-only parameters (including arguments with default values) that the function has.
- codeobject.co_nlocals: The number of local variables used by the function (including parameters).
- codeobject.co_varnames: A tuple containing the names of the local variables in the function (starting with the parameter names).
- codeobject.co_cellvars: A tuple containing the names of local variables that are referenced by nested functions inside the function.
- codeobject.co_freevars: A tuple containing the names of the function's free (closure) variables. Note: references to global and builtin names are not included.
- codeobject.co_code: A string representing the sequence of bytecode instructions in the function.
- codeobject.co_consts: A tuple containing the literals used by the bytecode in the function.
- codeobject.co_names: A tuple containing the names used by the bytecode in the function.
- codeobject.co_filename: The name of the file from which the code was compiled.
- codeobject.co_firstlineno: The line number of the first line of the function.
- codeobject.co_lnotab: A string encoding the mapping from bytecode offsets to line numbers. For details, see the source code of the interpreter. Deprecated since version 3.12: This attribute of code objects is deprecated, and may be removed in Python 3.15.
- codeobject.co_stacksize: The required stack size of the code object.
- codeobject.co_flags: An integer encoding a number of flags for the interpreter.
The following flag bits are defined for co_flags: bit 0x04 is set if the function uses the *arguments syntax to accept an arbitrary number of positional arguments; bit 0x08 is set if the function uses the **keywords syntax to accept arbitrary keyword arguments; bit 0x20 is set if the function is a generator. See Code Objects Bit Flags for details on the semantics of each flag that might be present.
Future feature declarations (for example, from __future__ import division) also use bits in co_flags to indicate whether a code object was compiled with a particular feature enabled. See compiler_flag.
Other bits in co_flags are reserved for internal use.
If a code object represents a function and has a docstring, the CO_HAS_DOCSTRING bit is set in co_flags and the first item in co_consts is the docstring of the function.
3.2.13.1.2.
Methods on code objects¶
- codeobject.co_positions()¶
Returns an iterable over the source code positions of each bytecode instruction in the code object.
The iterator returns tuples containing (start_line, end_line, start_column, end_column). The i-th tuple corresponds to the position of the source code that compiled to the i-th code unit. Column information is 0-indexed utf-8 byte offsets on the given source line.
This positional information can be missing. A non-exhaustive list of cases where this may happen:
- Running the interpreter with -X no_debug_ranges.
- Loading a pyc file compiled while using -X no_debug_ranges.
- Position tuples corresponding to artificial instructions.
- Line and column numbers that can't be represented due to implementation specific limitations.
When this occurs, some or all of the tuple elements can be None.
Added in version 3.11.
Note
This feature requires storing column positions in code objects, which may result in a small increase of disk usage of compiled Python files or interpreter memory usage. To avoid storing the extra information and/or deactivate printing the extra traceback information, the -X no_debug_ranges command line flag or the PYTHONNODEBUGRANGES environment variable can be used.
- codeobject.co_lines()¶
Returns an iterator that yields information about successive ranges of bytecodes. Each item yielded is a (start, end, lineno) tuple:
- start (an int) represents the offset (inclusive) of the start of the bytecode range
- end (an int) represents the offset (exclusive) of the end of the bytecode range
- lineno is an int representing the line number of the bytecode range, or None if the bytecodes in the given range have no line number
The items yielded will have the following properties:
- The first range yielded will have a start of 0.
- The (start, end) ranges will be non-decreasing and consecutive.
That is, for any pair of tuples, the start of the second will be equal to the end of the first.
- No range will be backwards: end >= start for all triples.
- The last tuple yielded will have end equal to the size of the bytecode.
Zero-width ranges, where start == end, are allowed. Zero-width ranges are used for lines that are present in the source code, but have been eliminated by the bytecode compiler.
Added in version 3.10.
See also
- PEP 626 - Precise line numbers for debugging and other tools.
The PEP that introduced the co_lines() method.
- codeobject.replace(**kwargs)¶
Return a copy of the code object with new values for the specified fields. Code objects are also supported by the generic function copy.replace(). Added in version 3.8.
3.2.13.2. Frame objects¶
Frame objects represent execution frames. They may occur in traceback objects, and are also passed to registered trace functions.
3.2.13.2.1. Special read-only attributes¶
- frame.f_back: Points to the previous stack frame (towards the caller), or None if this is the bottom stack frame.
- frame.f_code: The code object being executed in this frame. Accessing this attribute raises an auditing event object.__getattr__ with arguments obj and "f_code".
- frame.f_locals: The mapping used by the frame to look up local variables. If the frame refers to an optimized scope, this may return a write-through proxy object. Changed in version 3.13: Return a proxy for optimized scopes.
- frame.f_globals: The dictionary used by the frame to look up global variables.
- frame.f_builtins: The dictionary used by the frame to look up built-in (intrinsic) names.
- frame.f_lasti: The "precise instruction" of the frame object (this is an index into the bytecode string of the code object).
- frame.f_generator: The generator or coroutine object that owns this frame, or None otherwise. Added in version 3.14.
3.2.13.2.2. Special writable attributes¶
- frame.f_trace: If not None, this is a function called for various events during code execution (this is used by debuggers).
- frame.f_trace_lines: Set this attribute to False to disable triggering a tracing event for each source line.
- frame.f_trace_opcodes: Set this attribute to True to allow per-opcode events to be requested.
- frame.f_lineno: The current line number of the frame; writing to this from within a trace function jumps to the given line (only for the bottom-most frame). A debugger can implement a Jump command (aka Set Next Statement) by writing to this attribute.
3.2.13.2.3. Frame object methods¶
Frame objects support one method:
- frame.clear()¶
This method clears all references to local variables held by the frame. Also, if the frame belonged to a generator, the generator is finalized. This helps break reference cycles involving frame objects (for example when catching an exception and storing its traceback for later use). RuntimeError is raised if the frame is currently executing or suspended.
Added in version 3.4.
Changed in version 3.13: Attempting to clear a suspended frame raises RuntimeError (as has always been the case for executing frames).
3.2.13.3. Traceback objects¶
Traceback objects represent the stack trace of an exception. A traceback object is implicitly created when an exception occurs, and may also be explicitly created by calling types.TracebackType.
Changed in version 3.7: Traceback objects can now be explicitly instantiated from Python code.
For implicitly created tracebacks, when the search for an exception handler unwinds the execution stack, at each unwound level a traceback object is inserted in front of the current traceback. When an exception handler is entered, the stack trace is made available to the program. (See section The try statement.) It is accessible as the third item of the tuple returned by sys.exc_info(), and as the __traceback__ attribute of the caught exception.
When the program contains no suitable handler, the stack trace is written (nicely formatted) to the standard error stream; if the interpreter is interactive, it is also made available to the user as sys.last_traceback.
For explicitly created tracebacks, it is up to the creator of the traceback to determine how the tb_next attributes should be linked to form a full stack trace.
Special read-only attributes:
- traceback.tb_frame: Points to the execution frame of the current level. Accessing this attribute raises an auditing event object.__getattr__ with arguments obj and "tb_frame".
- traceback.tb_lineno: Gives the line number where the exception occurred.
- traceback.tb_lasti: Indicates the "precise instruction".
The line number and last instruction in the traceback may differ from the line number of its frame object if the exception occurred in a try statement with no matching except clause or with a finally clause.
- traceback.tb_next¶
The special writable attribute tb_next is the next level in the stack trace (towards the frame where the exception occurred), or None if there is no next level.
Changed in version 3.7: This attribute is now writable.
3.2.13.4. Slice objects¶
Slice objects are used to represent slices for __getitem__() methods. They are also created by the built-in slice() function.
Special read-only attributes: start is the lower bound; stop is the upper bound; step is the step value; each is None if omitted. These attributes can have any type.
Slice objects support one method:
- slice.indices(self, length)¶
This method takes a single integer argument length and computes information about the slice that the slice object would describe if applied to a sequence of length items. It returns a tuple of three integers; respectively these are the start and stop indices and the step or stride length of the slice. Missing or out-of-bounds indices are handled in a manner consistent with regular slices.
3.2.13.5. Static method objects¶
Static method objects provide a way of defeating the transformation of function objects to method objects described above. A static method object is a wrapper around any other object, usually a user-defined method object. When a static method object is retrieved from a class or a class instance, the object actually returned is the wrapped object, which is not subject to any further transformation. Static method objects are also callable. Static method objects are created by the built-in staticmethod() constructor.
3.2.13.6.
Class method objects¶
A class method object, like a static method object, is a wrapper around another object that alters the way in which that object is retrieved from classes and class instances. The behaviour of class method objects upon such retrieval is described above, under "instance methods". Class method objects are created by the built-in classmethod() constructor.
3.3. Special method names¶
A class can implement certain operations that are invoked by special syntax (such as arithmetic operations or subscripting and slicing) by defining methods with special names. This is Python's approach to operator overloading, allowing classes to define their own behavior with respect to language operators. For instance, if a class defines a method named __getitem__(), and x is an instance of this class, then x[i] is roughly equivalent to type(x).__getitem__(x, i). Except where mentioned, attempts to execute an operation raise an exception when no appropriate method is defined (typically AttributeError or TypeError).
Setting a special method to None indicates that the corresponding operation is not available. For example, if a class sets __iter__() to None, the class is not iterable, so calling iter() on its instances will raise a TypeError (without falling back to __getitem__()). [2]
When implementing a class that emulates any built-in type, it is important that the emulation only be implemented to the degree that it makes sense for the object being modelled. For example, some sequences may work well with retrieval of individual elements, but extracting a slice may not make sense. (One example of this is the NodeList interface in the W3C's Document Object Model.)
3.3.1.
Basic customization¶
- object.__new__(cls[, ...])¶
Called to create a new instance of class cls. __new__() is a static method (special-cased so you need not declare it as such) that takes the class of which an instance was requested as its first argument. The remaining arguments are those passed to the object constructor expression (the call to the class). The return value of __new__() should be the new object instance (usually an instance of cls).
Typical implementations create a new instance of the class by invoking the superclass's __new__() method using super().__new__(cls[, ...]) with appropriate arguments and then modifying the newly created instance as necessary before returning it.
If __new__() is invoked during object construction and it returns an instance of cls, then the new instance's __init__() method will be invoked like __init__(self[, ...]), where self is the new instance and the remaining arguments are the same as were passed to the object constructor.
If __new__() does not return an instance of cls, then the new instance's __init__() method will not be invoked.
__new__() is intended mainly to allow subclasses of immutable types (like int, str, or tuple) to customize instance creation. It is also commonly overridden in custom metaclasses in order to customize class creation.
- object.__init__(self[, ...])¶
Called after the instance has been created (by __new__()), but before it is returned to the caller. The arguments are those passed to the class constructor expression.
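The interplay between __new__() and __init__() described above can be sketched with two invented classes: in the first, __new__() returns an instance of cls and __init__() runs; in the second, __new__() returns something else and __init__() is skipped.

```python
class Point:
    def __new__(cls, *args):
        # Delegate actual instance creation to the superclass.
        return super().__new__(cls)

    def __init__(self, x, y):
        # Runs because __new__ returned an instance of Point.
        self.x, self.y = x, y

p = Point(1, 2)
assert (p.x, p.y) == (1, 2)

class Weird:
    def __new__(cls):
        return 42                # not an instance of cls ...

    def __init__(self):
        raise AssertionError("never called")

w = Weird()                      # ... so __init__ is not invoked
assert w == 42
```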
If a base class has an __init__() method, the derived class's __init__() method, if any, must explicitly call it to ensure proper initialization of the base class part of the instance; for example: super().__init__([args...]).
Because __new__() and __init__() work together in constructing objects (__new__() to create it, and __init__() to customize it), no non-None value may be returned by __init__(); doing so will cause a TypeError to be raised at runtime.
- object.__del__(self)¶
Called when the instance is about to be destroyed. This is also called a finalizer or (improperly) a destructor. If a base class has a __del__() method, the derived class's __del__() method, if any, must explicitly call it to ensure proper deletion of the base class part of the instance.
It is possible (though not recommended!) for the __del__() method to postpone destruction of the instance by creating a new reference to it. This is called object resurrection. It is implementation-dependent whether __del__() is called a second time when a resurrected object is about to be destroyed; the current CPython implementation only calls it once.
It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits. weakref.finalize provides a straightforward way to register a cleanup function to be called when an object is garbage collected.
Note
del x doesn't directly call x.__del__(); the former decrements the reference count for x by one, and the latter is only called when x's reference count reaches zero.
CPython implementation detail: It is possible for a reference cycle to prevent the reference count of an object from going to zero. In this case, the cycle will be later detected and deleted by the cyclic garbage collector. A common cause of reference cycles is when an exception has been caught in a local variable.
The frame's locals then reference the exception, which references its own traceback, which references the locals of all frames caught in the traceback.
See also
Documentation for the gc module.
Warning
Due to the precarious circumstances under which __del__() methods are invoked, exceptions that occur during their execution are ignored, and a warning is printed to sys.stderr instead. In particular:
__del__() can be invoked when arbitrary code is being executed, including from any arbitrary thread. If __del__() needs to take a lock or invoke any other blocking resource, it may deadlock, as the resource may already be taken by the code that gets interrupted to execute __del__().
__del__() can be executed during interpreter shutdown. As a consequence, the global variables it needs to access (including other modules) may already have been deleted or set to None. Python guarantees that globals whose name begins with a single underscore are deleted from their module before other globals are deleted; if no other references to such globals exist, this may help in assuring that imported modules are still available at the time when the __del__() method is called.
- object.__repr__(self)¶
Called by the repr() built-in function to compute the "official" string representation of an object. If at all possible, this should look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment). If this is not possible, a string of the form <...some useful description...> should be returned. The return value must be a string object. If a class defines __repr__() but not __str__(), then __repr__() is also used when an "informal" string representation of instances of that class is required.
This is typically used for debugging, so it is important that the representation is information-rich and unambiguous.
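The __repr__()/__str__() conventions can be sketched with a small class; Money is a hypothetical example, not a stdlib type:

```python
class Money:
    """Hypothetical class showing __repr__ vs __str__ conventions."""

    def __init__(self, amount, currency):
        self.amount = amount
        self.currency = currency

    def __repr__(self):
        # "Official" form: unambiguous, looks like a constructor call
        return f'Money({self.amount!r}, {self.currency!r})'

    def __str__(self):
        # "Informal" form: concise and readable
        return f'{self.amount} {self.currency}'


m = Money(9.95, 'EUR')
# repr(m) -> "Money(9.95, 'EUR')"
# str(m)  -> '9.95 EUR'
```

Had Money defined only __repr__(), str(m) and print(m) would fall back to the repr form, per the rule above.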
A default implementation is provided by the object class itself.
- object.__str__(self)¶
Called by str(object), the default __format__() implementation, and the built-in function print(), to compute the "informal" or nicely printable string representation of an object. The return value must be a str object.
This method differs from object.__repr__() in that there is no expectation that __str__() return a valid Python expression: a more convenient or concise representation can be used.
The default implementation defined by the built-in type object calls object.__repr__().
- object.__bytes__(self)¶
Called by bytes to compute a byte-string representation of an object. This should return a bytes object. The object class itself does not provide this method.
- object.__format__(self, format_spec)¶
Called by the format() built-in function, and by extension, evaluation of formatted string literals and the str.format() method, to produce a "formatted" string representation of an object. The format_spec argument is a string that contains a description of the formatting options desired. The interpretation of the format_spec argument is up to the type implementing __format__(); however, most classes will either delegate formatting to one of the built-in types, or use a similar formatting option syntax.
See Format Specification Mini-Language for a description of the standard formatting syntax.
The return value must be a string object.
The default implementation by the object class should be given an empty format_spec string.
It delegates to __str__().
Changed in version 3.4: The __format__ method of object itself raises a TypeError if passed any non-empty string.
Changed in version 3.7: object.__format__(x, '') is now equivalent to str(x) rather than format(str(x), '').
- object.__lt__(self, other)¶
- object.__le__(self, other)¶
- object.__eq__(self, other)¶
- object.__ne__(self, other)¶
- object.__gt__(self, other)¶
- object.__ge__(self, other)¶
These are the so-called "rich comparison" methods. The correspondence between operator symbols and method names is as follows: x<y calls x.__lt__(y), x<=y calls x.__le__(y), x==y calls x.__eq__(y), x!=y calls x.__ne__(y), x>y calls x.__gt__(y), and x>=y calls x.__ge__(y).
A rich comparison method may return the singleton NotImplemented if it does not implement the operation for a given pair of arguments. By convention, False and True are returned for a successful comparison. However, these methods can return any value, so if the comparison operator is used in a Boolean context (e.g., in the condition of an if statement), Python will call bool() on the value to determine if the result is true or false.
By default, object implements __eq__() by using is, returning NotImplemented in the case of a false comparison: True if x is y else NotImplemented. For __ne__(), by default it delegates to __eq__() and inverts the result unless it is NotImplemented. There are no other implied relationships among the comparison operators or default implementations; for example, the truth of (x<y or x==y) does not imply x<=y.
- object.__hash__(self)¶
Called by built-in function hash() and for operations on members of hashed collections including set, frozenset, and dict. The __hash__() method should return an integer. The only required property is that objects which compare equal have the same hash value. A class that overrides __eq__() and does not define __hash__() will have its __hash__() implicitly set to None. If a class that overrides __eq__() needs to retain the hash implementation from a parent class, the interpreter must be told this explicitly by setting __hash__ = <ParentClass>.__hash__.
If a class that does not override __eq__() wishes to suppress hash support, it should include __hash__ = None in the class definition. A class which defines its own __hash__() that explicitly raises a TypeError would be incorrectly identified as hashable by an isinstance(obj, collections.abc.Hashable) call.
Note
By default, the __hash__() values of str and bytes objects are "salted" with an unpredictable random value.
Although they remain constant within an individual Python process, they are not predictable between repeated invocations of Python.
This is intended to provide protection against a denial-of-service caused by carefully chosen inputs that exploit the worst-case performance of a dict insertion, O(n²) complexity. See http://ocert.org/advisories/ocert-2011-003.html for details.
Changing hash values affects the iteration order of sets. Python has never made guarantees about this ordering (and it typically varies between 32-bit and 64-bit builds).
See also
PYTHONHASHSEED.
Changed in version 3.3: Hash randomization is enabled by default.
- object.__bool__(self)¶
Called to implement truth value testing and the built-in operation bool(); should return False or True. When this method is not defined, __len__() is called, if it is defined, and the object is considered true if its result is nonzero. If a class defines neither __len__() nor __bool__() (which is true of the object class itself), all its instances are considered true.
3.3.2. Customizing attribute access¶
The following methods can be defined to customize the meaning of attribute access (use of, assignment to, or deletion of x.name) for class instances.
- object.__getattr__(self, name)¶
Called when the default attribute access fails with an AttributeError (either __getattribute__() raises an AttributeError because name is not an instance attribute or an attribute in the class tree for self; or __get__() of a name property raises AttributeError). This method should either return the (computed) attribute value or raise an AttributeError exception. The object class itself does not provide this method.
Note that if the attribute is found through the normal mechanism, __getattr__() is not called. (This is an intentional asymmetry between __getattr__() and __setattr__().)
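A common use of __getattr__() is computing attributes on demand. The sketch below (names are illustrative) caches a lazily computed value in the instance dictionary, so the normal mechanism finds it on every later access and __getattr__() is only hit once:

```python
class Lazy:
    """Illustrative sketch: compute 'expensive' only on first access."""

    def __getattr__(self, name):
        # Only called when normal attribute lookup has already failed.
        if name == 'expensive':
            value = sum(range(1000))  # stand-in for real work
            # Store it; future lookups succeed normally, skipping __getattr__.
            self.expensive = value
            return value
        raise AttributeError(name)


obj = Lazy()
# First access of obj.expensive triggers __getattr__; later accesses
# read the instance dictionary directly.
```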
That asymmetry exists both for efficiency reasons and because otherwise __getattr__() would have no way to access other attributes of the instance. Note that, at least for instance variables, you can take total control by not inserting any values in the instance attribute dictionary (but instead inserting them in another object). See the __getattribute__() method below for a way to actually get total control over attribute access.
- object.__getattribute__(self, name)¶
Called unconditionally to implement attribute accesses for instances of the class. If the class also defines __getattr__(), the latter will not be called unless __getattribute__() either calls it explicitly or raises an AttributeError. This method should return the (computed) attribute value or raise an AttributeError exception. In order to avoid infinite recursion in this method, its implementation should always call the base class method with the same name to access any attributes it needs; for example, object.__getattribute__(self, name).
Note
This method may still be bypassed when looking up special methods as the result of implicit invocation via language syntax or built-in functions. See Special method lookup.
For certain sensitive attribute accesses, raises an auditing event object.__getattr__ with arguments obj and name.
- object.__setattr__(self, name, value)¶
Called when an attribute assignment is attempted. This is called instead of the normal mechanism (i.e. store the value in the instance dictionary). name is the attribute name, value is the value to be assigned to it.
If __setattr__() wants to assign to an instance attribute, it should call the base class method with the same name, for example, object.__setattr__(self, name, value).
For certain sensitive attribute assignments, raises an auditing event object.__setattr__ with arguments obj, name, value.
- object.__delattr__(self, name)¶
Like __setattr__() but for attribute deletion instead of assignment.
This should only be implemented if del obj.name is meaningful for the object.
For certain sensitive attribute deletions, raises an auditing event object.__delattr__ with arguments obj and name.
- object.__dir__(self)¶
Called when dir() is called on the object. An iterable must be returned. dir() converts the returned iterable to a list and sorts it.
3.3.2.1. Customizing module attribute access¶
Special names __getattr__ and __dir__ can also be used to customize access to module attributes. The __getattr__ function at the module level should accept one argument which is the name of an attribute and return the computed value or raise an AttributeError. If an attribute is not found on a module object through the normal lookup, i.e. object.__getattribute__(), then __getattr__ is searched in the module __dict__ before raising an AttributeError. If found, it is called with the attribute name and the result is returned.
The __dir__ function should accept no arguments, and return an iterable of strings that represents the names accessible on the module. If present, this function overrides the standard dir() search on a module.
- module.__class__¶
For a more fine-grained customization of the module behavior (setting attributes, properties, etc.), one can set the __class__ attribute of a module object to a subclass of types.ModuleType.
For example:

import sys
from types import ModuleType

class VerboseModule(ModuleType):
    def __repr__(self):
        return f'Verbose {self.__name__}'
    def __setattr__(self, attr, value):
        print(f'Setting {attr}...')
        super().__setattr__(attr, value)

sys.modules[__name__].__class__ = VerboseModule

Note
Defining module __getattr__ and setting module __class__ only affect lookups made using the attribute access syntax; directly accessing the module globals (whether by code within the module, or via a reference to the module's globals dictionary) is unaffected.
Changed in version 3.5: __class__ module attribute is now writable.
Added in version 3.7: __getattr__ and __dir__ module attributes.
See also
- PEP 562 - Module __getattr__ and __dir__
Describes the __getattr__ and __dir__ functions on modules.
3.3.2.2. Implementing Descriptors¶
The following methods only apply when an instance of the class containing the method (a so-called descriptor class) appears in an owner class (the descriptor must be in either the owner's class dictionary or in the class dictionary for one of its parents). In the examples below, "the attribute" refers to the attribute whose name is the key of the property in the owner class' __dict__. The object class itself does not implement any of these protocols.
- object.__get__(self, instance, owner=None)¶
Called to get the attribute of the owner class (class attribute access) or of an instance of that class (instance attribute access). The optional owner argument is the owner class, while instance is the instance that the attribute was accessed through, or None when the attribute is accessed through the owner.
This method should return the computed attribute value or raise an AttributeError exception.
PEP 252 specifies that __get__() is callable with one or two arguments.
Python's own built-in descriptors support this specification; however, it is likely that some third-party tools have descriptors that require both arguments. Python's own __getattribute__() implementation always passes in both arguments whether they are required or not.
- object.__set__(self, instance, value)¶
Called to set the attribute on an instance instance of the owner class to a new value, value.
Note, adding __set__() or __delete__() changes the kind of descriptor to a "data descriptor". See Invoking Descriptors for more details.
- object.__delete__(self, instance)¶
Called to delete the attribute on an instance instance of the owner class.
Instances of descriptors may also have the __objclass__ attribute present:
- object.__objclass__¶
The attribute __objclass__ is interpreted by the inspect module as specifying the class where this object was defined (setting this appropriately can assist in runtime introspection of dynamic class attributes). For callables, it may indicate that an instance of the given type (or a subclass) is expected or required as the first positional argument (for example, CPython sets this attribute for unbound methods that are implemented in C).
3.3.2.3. Invoking Descriptors¶
In general, a descriptor is an object attribute with "binding behavior", one whose attribute access has been overridden by methods in the descriptor protocol: __get__(), __set__(), and __delete__(). If any of those methods are defined for an object, it is said to be a descriptor.
The default behavior for attribute access is to get, set, or delete the attribute from an object's dictionary.
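The descriptor protocol described above can be sketched with a minimal data descriptor; Positive is a hypothetical validator, not a stdlib class:

```python
class Positive:
    """Minimal sketch of a data descriptor: validates on assignment."""

    def __set_name__(self, owner, name):
        # Remember a per-attribute storage key on the instance.
        self._name = '_' + name

    def __get__(self, instance, owner=None):
        if instance is None:
            return self  # accessed on the class itself
        return getattr(instance, self._name)

    def __set__(self, instance, value):
        if value <= 0:
            raise ValueError('must be positive')
        setattr(instance, self._name, value)


class Order:
    quantity = Positive()


o = Order()
o.quantity = 3   # routed through Positive.__set__
```

Because Positive defines __set__(), it is a data descriptor, so o.quantity always goes through __get__() even though the stored value lives in the instance dictionary (under the mangled key).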
For instance, a.x has a lookup chain starting with a.__dict__['x'], then type(a).__dict__['x'], and continuing through the base classes of type(a) excluding metaclasses.
However, if the looked-up value is an object defining one of the descriptor methods, then Python may override the default behavior and invoke the descriptor method instead. Where this occurs in the precedence chain depends on which descriptor methods were defined and how they were called.
The starting point for descriptor invocation is a binding, a.x. How the arguments are assembled depends on a:
- Direct Call
The simplest and least common call is when user code directly invokes a descriptor method: x.__get__(a).
- Instance Binding
If binding to an object instance, a.x is transformed into the call: type(a).__dict__['x'].__get__(a, type(a)).
- Class Binding
If binding to a class, A.x is transformed into the call: A.__dict__['x'].__get__(None, A).
- Super Binding
A dotted lookup such as super(A, a).x searches a.__class__.__mro__ for a base class B following A and then returns B.__dict__['x'].__get__(a, A). If not a descriptor, x is returned unchanged.
For instance bindings, the precedence of descriptor invocation depends on which descriptor methods are defined. A descriptor can define any combination of __get__(), __set__() and __delete__(). If it does not define __get__(), then accessing the attribute will return the descriptor object itself unless there is a value in the object's instance dictionary. If the descriptor defines __set__() and/or __delete__(), it is a data descriptor; if it defines neither, it is a non-data descriptor. Normally, data descriptors define both __get__() and __set__(), while non-data descriptors have just the __get__() method. Data descriptors with __get__() and __set__() (and/or __delete__()) defined always override a redefinition in an instance dictionary.
In contrast, non-data descriptors can be overridden by instances.
Python methods (including those decorated with @staticmethod and @classmethod) are implemented as non-data descriptors. Accordingly, instances can redefine and override methods. This allows individual instances to acquire behaviors that differ from other instances of the same class.
The property() function is implemented as a data descriptor. Accordingly, instances cannot override the behavior of a property.
3.3.2.4. __slots__¶
__slots__ allow us to explicitly declare data members (like properties) and deny the creation of __dict__ and __weakref__ (unless explicitly declared in __slots__ or available in a parent).
The space saved over using __dict__ can be significant. Attribute lookup speed can be significantly improved as well.
- object.__slots__¶
This class variable can be assigned a string, iterable, or sequence of strings with variable names used by instances. __slots__ reserves space for the declared variables and prevents the automatic creation of __dict__ and __weakref__ for each instance.
Notes on using __slots__:
When inheriting from a class without __slots__, the __dict__ and __weakref__ attributes of the instances will always be accessible.
Without a __dict__ variable, instances cannot be assigned new variables not listed in the __slots__ definition. Attempts to assign to an unlisted variable name raise AttributeError. If dynamic assignment of new variables is desired, then add '__dict__' to the sequence of strings in the __slots__ declaration.
Without a __weakref__ variable for each instance, classes defining __slots__ do not support weak references to their instances. If weak reference support is needed, then add '__weakref__' to the sequence of strings in the __slots__ declaration.
__slots__ are implemented at the class level by creating descriptors for each variable name.
As a result, class attributes cannot be used to set default values for instance variables defined by __slots__; otherwise, the class attribute would overwrite the descriptor assignment.
The action of a __slots__ declaration is not limited to the class where it is defined. __slots__ declared in parents are available in child classes. However, instances of a child subclass will get a __dict__ and __weakref__ unless the subclass also defines __slots__ (which should only contain names of any additional slots).
If a class defines a slot also defined in a base class, the instance variable defined by the base class slot is inaccessible (except by retrieving its descriptor directly from the base class). This renders the meaning of the program undefined. In the future, a check may be added to prevent this.
TypeError will be raised if nonempty __slots__ are defined for a class derived from a "variable-length" built-in type such as int, bytes, and tuple.
Any non-string iterable may be assigned to __slots__.
If a dictionary is used to assign __slots__, the dictionary keys will be used as the slot names. The values of the dictionary can be used to provide per-attribute docstrings that will be recognised by inspect.getdoc() and displayed in the output of help().
__class__ assignment works only if both classes have the same __slots__.
Multiple inheritance with multiple slotted parent classes can be used, but only one parent is allowed to have attributes created by slots (the other bases must have empty slot layouts); violations raise TypeError.
If an iterator is used for __slots__ then a descriptor is created for each of the iterator's values. However, the __slots__ attribute will be an empty iterator.
3.3.3. Customizing class creation¶
Whenever a class inherits from another class, __init_subclass__() is called on the parent class. This way, it is possible to write classes which change the behavior of subclasses.
This is closely related to class decorators, but where class decorators only affect the specific class they're applied to, __init_subclass__ solely applies to future subclasses of the class defining the method.
- classmethod object.__init_subclass__(cls)¶
This method is called whenever the containing class is subclassed. cls is then the new subclass. If defined as a normal instance method, this method is implicitly converted to a class method.
Keyword arguments which are given to a new class are passed to the parent class's __init_subclass__. For compatibility with other classes using __init_subclass__, one should take out the needed keyword arguments and pass the others over to the base class, as in:

class Philosopher:
    def __init_subclass__(cls, /, default_name, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.default_name = default_name

class AustralianPhilosopher(Philosopher, default_name="Bruce"):
    pass

The default implementation object.__init_subclass__ does nothing, but raises an error if it is called with any arguments.
Note
The metaclass hint metaclass is consumed by the rest of the type machinery, and is never passed to __init_subclass__ implementations. The actual metaclass (rather than the explicit hint) can be accessed as type(cls).
Added in version 3.6.
When a class is created, type.__new__() scans the class variables and makes callbacks to those with a __set_name__() hook.
- object.__set_name__(self, owner, name)¶
Automatically called at the time the owning class owner is created. The object has been assigned to name in that class:

class A:
    x = C()  # Automatically calls: x.__set_name__(A, 'x')

If the class variable is assigned after the class is created, __set_name__() will not be called automatically.
If needed, __set_name__() can be called directly:

class A:
    pass

c = C()
A.x = c                  # The hook is not called
c.__set_name__(A, 'x')   # Manually invoke the hook

See Creating the class object for more details.
Added in version 3.6.
3.3.3.1. Metaclasses¶
By default, classes are constructed using type(). The class body is executed in a new namespace and the class name is bound locally to the result of type(name, bases, namespace).
The class creation process can be customized by passing the metaclass keyword argument in the class definition line, or by inheriting from an existing class that included such an argument. In the following example, both MyClass and MySubclass are instances of Meta:

class Meta(type):
    pass

class MyClass(metaclass=Meta):
    pass

class MySubclass(MyClass):
    pass

Any other keyword arguments that are specified in the class definition are passed through to all metaclass operations described below.
When a class definition is executed, the following steps occur:
MRO entries are resolved;
the appropriate metaclass is determined;
the class namespace is prepared;
the class body is executed;
the class object is created.
3.3.3.2. Resolving MRO entries¶
- object.__mro_entries__(self, bases)¶
If a base that appears in a class definition is not an instance of type, then an __mro_entries__() method is searched on the base. If an __mro_entries__() method is found, the base is substituted with the result of a call to __mro_entries__() when creating the class. The method is called with the original bases tuple passed to the bases parameter, and must return a tuple of classes that will be used instead of the base.
The returned tuple may be empty: in these cases, the original base is ignored.
See also
types.resolve_bases()
Dynamically resolve bases that are not instances of type.
types.get_original_bases()
Retrieve a class's "original bases" prior to modifications by __mro_entries__().
- PEP 560
Core support for typing module and generic types.
3.3.3.3. Determining the appropriate metaclass¶
The appropriate metaclass for a class definition is determined as follows:
if no bases and no explicit metaclass are given, then type() is used;
if an explicit metaclass is given and it is not an instance of type(), then it is used directly as the metaclass;
if an instance of type() is given as the explicit metaclass, or bases are defined, then the most derived metaclass is used.
The most derived metaclass is selected from the explicitly specified metaclass (if any) and the metaclasses (i.e. type(cls)) of all specified base classes. The most derived metaclass is one which is a subtype of all of these candidate metaclasses. If none of the candidate metaclasses meets that criterion, then the class definition will fail with TypeError.
3.3.3.4. Preparing the class namespace¶
Once the appropriate metaclass has been identified, then the class namespace is prepared. If the metaclass has a __prepare__ attribute, it is called as namespace = metaclass.__prepare__(name, bases, **kwds) (where the additional keyword arguments, if any, come from the class definition). The __prepare__ method should be implemented as a classmethod. The namespace returned by __prepare__ is passed in to __new__, but when the final class object is created the namespace is copied into a new dict.
If the metaclass has no __prepare__ attribute, then the class namespace is initialised as an empty ordered mapping.
See also
- PEP 3115 - Metaclasses in Python 3000
Introduced the __prepare__ namespace hook
3.3.3.5.
Executing the class body¶
The class body is executed (approximately) as exec(body, globals(), namespace). The key difference from a normal call to exec() is that lexical scoping allows the class body (including any methods) to reference names from the current and outer scopes when the class definition occurs inside a function.
However, even when the class definition occurs inside the function, methods defined inside the class still cannot see names defined at the class scope. Class variables must be accessed through the first parameter of instance or class methods, or through the implicit lexically scoped __class__ reference described in the next section.
3.3.3.6. Creating the class object¶
Once the class namespace has been populated by executing the class body, the class object is created by calling metaclass(name, bases, namespace, **kwds) (the additional keywords passed here are the same as those passed to __prepare__).
This class object is the one that will be referenced by the zero-argument form of super(). __class__ is an implicit closure reference created by the compiler if any methods in a class body refer to either __class__ or super. This allows the zero-argument form of super() to correctly identify the class being defined based on lexical scoping, while the class or instance that was used to make the current call is identified based on the first argument passed to the method.
CPython implementation detail: In CPython 3.6 and later, the __class__ cell is passed to the metaclass as a __classcell__ entry in the class namespace.
If present, this must be propagated up to the type.__new__ call in order for the class to be initialised correctly. Failing to do so will result in a RuntimeError in Python 3.8.
When using the default metaclass type, or any metaclass that ultimately calls type.__new__, the following additional customization steps are invoked after creating the class object:
the type.__new__ method collects all of the attributes in the class namespace that define a __set_name__() method;
those __set_name__ methods are called with the class being defined and the assigned name of that particular attribute;
the __init_subclass__() hook is called on the immediate parent of the new class in its method resolution order.
After the class object is created, it is passed to the class decorators included in the class definition (if any) and the resulting object is bound in the local namespace as the defined class.
When a new class is created by type.__new__, the object provided as the namespace parameter is copied to a new ordered mapping and the original object is discarded. The new copy is wrapped in a read-only proxy, which becomes the __dict__ attribute of the class object.
See also
- PEP 3135 - New super
Describes the implicit __class__ closure reference
3.3.3.7. Uses for metaclasses¶
The potential uses for metaclasses are boundless. Some ideas that have been explored include enum, logging, interface checking, automatic delegation, automatic property creation, proxies, frameworks, and automatic resource locking/synchronization.
3.3.4.
Customizing instance and subclass checks¶
The following methods are used to override the default behavior of the isinstance() and issubclass() built-in functions.
In particular, the metaclass abc.ABCMeta implements these methods in order to allow the addition of Abstract Base Classes (ABCs) as "virtual base classes" to any class or type (including built-in types), including other ABCs.
- type.__instancecheck__(self, instance)¶
Return true if instance should be considered a (direct or indirect) instance of class. If defined, called to implement isinstance(instance, class).
- type.__subclasscheck__(self, subclass)¶
Return true if subclass should be considered a (direct or indirect) subclass of class. If defined, called to implement issubclass(subclass, class).
Note that these methods are looked up on the type (metaclass) of a class. They cannot be defined as class methods in the actual class. This is consistent with the lookup of special methods that are called on instances, only in this case the instance is itself a class.
See also
- PEP 3119 - Introducing Abstract Base Classes
Includes the specification for customizing isinstance() and issubclass() behavior through __instancecheck__() and __subclasscheck__(), with motivation for this functionality in the context of adding Abstract Base Classes (see the abc module) to the language.
3.3.5.
Emulating generic types¶
When using type annotations, it is often useful to parameterize a generic type using Python's square-brackets notation. For example, the annotation list[int] might be used to signify a list in which all the elements are of type int.
See also
- PEP 484 - Type Hints
Introducing Python's framework for type annotations
- Generic Alias Types
Documentation for objects representing parameterized generic classes
- Generics, user-defined generics and typing.Generic
Documentation on how to implement generic classes that can be parameterized at runtime and understood by static type-checkers.
A class can generally only be parameterized if it defines the special class method __class_getitem__().
- classmethod object.__class_getitem__(cls, key)¶
Return an object representing the specialization of a generic class by type arguments found in key.
When defined on a class, __class_getitem__() is automatically a class method. As such, there is no need for it to be decorated with @classmethod when it is defined.
3.3.5.1. The purpose of __class_getitem__¶
The purpose of __class_getitem__() is to allow runtime parameterization of standard-library generic classes in order to more easily apply type hints to these classes.
To implement custom generic classes that can be parameterized at runtime and understood by static type-checkers, users should either inherit from a standard library class that already implements __class_getitem__(), or inherit from typing.Generic, which has its own implementation of __class_getitem__().
Custom implementations of __class_getitem__() on classes defined outside of the standard library may not be understood by third-party type-checkers such as mypy. Using __class_getitem__() on any class for purposes other than type hinting is discouraged.
3.3.5.2.
__class_getitem__ versus __getitem__¶
Usually, the subscription of an object using square brackets will call the __getitem__() instance method defined on the object’s class. However, if the object being subscribed is itself a class, the class method __class_getitem__() may be called instead. __class_getitem__() should return a GenericAlias object if it is properly defined.
Presented with the expression obj[x], the Python interpreter follows something like the following process to decide whether __getitem__() or __class_getitem__() should be called:

from inspect import isclass

def subscribe(obj, x):
    """Return the result of the expression 'obj[x]'"""

    class_of_obj = type(obj)

    # If the class of obj defines __getitem__,
    # call class_of_obj.__getitem__(obj, x)
    if hasattr(class_of_obj, '__getitem__'):
        return class_of_obj.__getitem__(obj, x)

    # Else, if obj is a class and defines __class_getitem__,
    # call obj.__class_getitem__(x)
    elif isclass(obj) and hasattr(obj, '__class_getitem__'):
        return obj.__class_getitem__(x)

    # Else, raise an exception
    else:
        raise TypeError(
            f"'{class_of_obj.__name__}' object is not subscriptable"
        )

In Python, all classes are themselves instances of other classes. The class of a class is known as that class’s metaclass, and most classes have the type class as their metaclass.
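This dispatch can be checked directly with built-in types alone (a quick sketch for illustration, not part of the reference itself):

```python
d = {'a': 1}
# On an instance, obj[x] dispatches to type(obj).__getitem__(obj, x):
assert type(d).__getitem__(d, 'a') == d['a'] == 1

# On a class whose metaclass (here, type) defines no __getitem__,
# obj[x] dispatches to obj.__class_getitem__(x) instead:
assert list.__class_getitem__(int) == list[int]
assert type(list[int]).__name__ == 'GenericAlias'
```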
type does not define __getitem__(), meaning that expressions such as list[int], dict[str, float] and tuple[str, bytes] all result in __class_getitem__() being called:

>>> # list has class "type" as its metaclass, like most classes:
>>> type(list)
<class 'type'>
>>> type(dict) == type(list) == type(tuple) == type(str) == type(bytes)
True
>>> # "list[int]" calls "list.__class_getitem__(int)"
>>> list[int]
list[int]
>>> # list.__class_getitem__ returns a GenericAlias object:
>>> type(list[int])
<class 'types.GenericAlias'>

However, if a class has a custom metaclass that defines __getitem__(), subscribing the class may result in different behaviour. An example of this can be found in the enum module:

>>> from enum import Enum
>>> class Menu(Enum):
...     """A breakfast menu"""
...     SPAM = 'spam'
...     BACON = 'bacon'
...
>>> # Enum classes have a custom metaclass:
>>> type(Menu)
<class 'enum.EnumMeta'>
>>> # EnumMeta defines __getitem__,
>>> # so __class_getitem__ is not called,
>>> # and the result is not a GenericAlias object:
>>> Menu['SPAM']
<Menu.SPAM: 'spam'>
>>> type(Menu['SPAM'])
<enum 'Menu'>

See also
- PEP 560 - Core Support for typing module and generic types
Introducing __class_getitem__(), and outlining when a subscription results in __class_getitem__() being called instead of __getitem__()
3.3.6. Emulating callable objects¶
3.3.7. Emulating container types¶
The following methods can be defined to implement container objects. None of them are provided by the object class itself. Containers usually are sequences (such as lists or tuples) or mappings (like dictionaries), but can represent other containers as well. The first set of methods is used either to emulate a sequence or to emulate a mapping; the difference is that for a sequence, the allowable keys should be the integers k for which 0 <= k < N where N is the length of the sequence, or slice objects, which define a range of items.
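A minimal read-only sequence handling both integer and slice keys might look like this (the Squares class is illustrative, not from the reference; slice.indices() does the bounds arithmetic):

```python
class Squares:
    """A read-only sequence: Squares(n)[k] == k*k for 0 <= k < n."""

    def __init__(self, n):
        self._n = n

    def __len__(self):
        return self._n

    def __getitem__(self, key):
        if isinstance(key, slice):
            # slice.indices() clamps start/stop/step to the sequence length
            return [i * i for i in range(*key.indices(self._n))]
        if not isinstance(key, int):
            raise TypeError(
                f"indices must be integers or slices, not {type(key).__name__}"
            )
        if key < 0:
            key += self._n              # support negative indices
        if not 0 <= key < self._n:
            raise IndexError("Squares index out of range")
        return key * key

s = Squares(5)
assert len(s) == 5
assert s[2] == 4
assert s[-1] == 16
assert s[1:4] == [1, 4, 9]
assert list(s) == [0, 1, 4, 9, 16]      # iteration falls back to __getitem__
```

Note that raising IndexError for out-of-range integer keys is what lets the fallback sequence iteration protocol (used by list() above) detect the end of the sequence.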
It is also recommended that mappings provide the methods keys(), values(), items(), get(), clear(), setdefault(), pop(), popitem(), copy(), and update() behaving similar to those for Python’s standard dictionary objects. The collections.abc module provides a MutableMapping abstract base class to help create those methods from a base set of __getitem__(), __setitem__(), __delitem__(), and keys().
Mutable sequences should provide methods append(), clear(), count(), extend(), index(), insert(), pop(), remove(), and reverse(), like Python standard list objects.
Finally, sequence types should implement addition (meaning concatenation) and multiplication (meaning repetition) by defining the methods __add__(), __radd__(), __iadd__(), __mul__(), __rmul__() and __imul__() described below; they should not define other numerical operators.
It is recommended that both mappings and sequences implement the __contains__() method to allow efficient use of the in operator; for mappings, in should search the mapping’s keys; for sequences, it should search through the values. It is further recommended that both mappings and sequences implement the __iter__() method to allow efficient iteration through the container; for mappings, __iter__() should iterate through the object’s keys; for sequences, it should iterate through the values.
- object.__len__(self)¶
Called to implement the built-in function len(). Should return the length of the object, an integer >= 0. Also, an object that doesn’t define a __bool__() method and whose __len__() method returns zero is considered to be false in a Boolean context.
CPython implementation detail: In CPython, the length is required to be at most sys.maxsize. If the length is larger than sys.maxsize some features (such as len()) may raise OverflowError.
To prevent raising OverflowError by truth value testing, an object must define a __bool__() method.
- object.__length_hint__(self)¶
Called to implement operator.length_hint(). Should return an estimated length for the object (which may be greater or less than the actual length). The length must be an integer >= 0. The return value may also be NotImplemented, which is treated the same as if the __length_hint__ method didn’t exist at all. This method is purely an optimization and is never required for correctness.
Added in version 3.4.
Note
Slicing is done exclusively with the following three methods. A call like a[1:2] = b is translated to a[slice(1, 2, None)] = b and so forth. Missing slice items are always filled in with None.
- object.__getitem__(self, subscript)¶
Called to implement subscription, that is, self[subscript]. See Subscriptions and slicings for details on the syntax.
There are two types of built-in objects that support subscription via __getitem__():
sequences, where subscript (also called index) should be an integer or a slice object. See the sequence documentation for the expected behavior, including handling slice objects and negative indices.
mappings, where subscript is also called the key. See mapping documentation for the expected behavior.
If subscript is of an inappropriate type, __getitem__() should raise TypeError. If subscript has an inappropriate value, __getitem__() should raise a LookupError or one of its subclasses (IndexError for sequences; KeyError for mappings).
Note
The sequence iteration protocol (used, for example, in for loops) expects that an IndexError will be raised for illegal indexes to allow proper detection of the end of a sequence.
Note
When subscripting a class, the special class method __class_getitem__() may be called instead of __getitem__().
See __class_getitem__ versus __getitem__ for more details.
- object.__setitem__(self, key, value)¶
Called to implement assignment to self[key]. Same note as for __getitem__(). This should only be implemented for mappings if the objects support changes to the values for keys, or if new keys can be added, or for sequences if elements can be replaced. The same exceptions should be raised for improper key values as for the __getitem__() method.
- object.__delitem__(self, key)¶
Called to implement deletion of self[key]. Same note as for __getitem__(). This should only be implemented for mappings if the objects support removal of keys, or for sequences if elements can be removed from the sequence. The same exceptions should be raised for improper key values as for the __getitem__() method.
- object.__missing__(self, key)¶
Called by dict.__getitem__() to implement self[key] for dict subclasses when key is not in the dictionary.
- object.__iter__(self)¶
This method is called when an iterator is required for a container. This method should return a new iterator object that can iterate over all the objects in the container. For mappings, it should iterate over the keys of the container.
- object.__reversed__(self)¶
Called (if present) by the reversed() built-in to implement reverse iteration. It should return a new iterator object that iterates over all the objects in the container in reverse order.
If the __reversed__() method is not provided, the reversed() built-in will fall back to using the sequence protocol (__len__() and __getitem__()). Objects that support the sequence protocol should only provide __reversed__() if they can provide an implementation that is more efficient than the one provided by reversed().
The membership test operators (in and not in) are normally implemented as an iteration through a container.
However, container objects can supply the following special method with a more efficient implementation, which also does not require the object be iterable.
- object.__contains__(self, item)¶
Called to implement membership test operators. Should return true if item is in self, false otherwise. For mapping objects, this should consider the keys of the mapping rather than the values or the key-item pairs.
For objects that don’t define __contains__(), the membership test first tries iteration via __iter__(), then the old sequence iteration protocol via __getitem__(); see this section in the language reference.
3.3.8. Emulating numeric types¶
The following methods can be defined to emulate numeric objects. Methods corresponding to operations that are not supported by the particular kind of number implemented (e.g., bitwise operations for non-integral numbers) should be left undefined.
- object.__add__(self, other)¶
- object.__sub__(self, other)¶
- object.__mul__(self, other)¶
- object.__matmul__(self, other)¶
- object.__truediv__(self, other)¶
- object.__floordiv__(self, other)¶
- object.__mod__(self, other)¶
- object.__divmod__(self, other)¶
- object.__pow__(self, other[, modulo])¶
- object.__lshift__(self, other)¶
- object.__rshift__(self, other)¶
- object.__and__(self, other)¶
- object.__xor__(self, other)¶
- object.__or__(self, other)¶
These methods are called to implement the binary arithmetic operations (+, -, *, @, /, //, %, divmod(), pow(), **, <<, >>, &, ^, |). For instance, to evaluate the expression x + y, where x is an instance of a class that has an __add__() method, type(x).__add__(x, y) is called. The __divmod__() method should be the equivalent to using __floordiv__() and __mod__(); it should not be related to __truediv__().
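A minimal sketch of this dispatch, including the NotImplemented convention (the Meters class and its behavior are illustrative, not from the reference):

```python
class Meters:
    """Illustrative operand class demonstrating __add__/__radd__ dispatch."""

    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if isinstance(other, Meters):
            return Meters(self.value + other.value)
        # Signal "I can't handle this operand"; Python will then try
        # the other operand's reflected method before raising TypeError.
        return NotImplemented

    def __radd__(self, other):
        # Reached when the left operand's __add__ returned NotImplemented.
        if isinstance(other, (int, float)):
            return Meters(other + self.value)
        return NotImplemented

assert (Meters(2) + Meters(3)).value == 5
assert (1 + Meters(2)).value == 3   # int.__add__ -> NotImplemented -> Meters.__radd__

try:
    Meters(2) + 'x'                 # both sides return NotImplemented ...
    raised = False
except TypeError:                   # ... so Python raises TypeError
    raised = True
assert raised
```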
Note that __pow__() should be defined to accept an optional third argument if the three-argument version of the built-in pow() function is to be supported.
If one of those methods does not support the operation with the supplied arguments, it should return NotImplemented.
- object.__radd__(self, other)¶
- object.__rsub__(self, other)¶
- object.__rmul__(self, other)¶
- object.__rmatmul__(self, other)¶
- object.__rtruediv__(self, other)¶
- object.__rfloordiv__(self, other)¶
- object.__rmod__(self, other)¶
- object.__rdivmod__(self, other)¶
- object.__rpow__(self, other[, modulo])¶
- object.__rlshift__(self, other)¶
- object.__rrshift__(self, other)¶
- object.__rand__(self, other)¶
- object.__rxor__(self, other)¶
- object.__ror__(self, other)¶
These methods are called to implement the binary arithmetic operations (+, -, *, @, /, //, %, divmod(), pow(), **, <<, >>, &, ^, |) with reflected (swapped) operands. These functions are only called if the operands are of different types, when the left operand does not support the corresponding operation [3], or the right operand’s class is derived from the left operand’s class. [4] For instance, to evaluate the expression x - y, where y is an instance of a class that has an __rsub__() method, type(y).__rsub__(y, x) is called if type(x).__sub__(x, y) returns NotImplemented or type(y) is a subclass of type(x). [5]
Note that __rpow__() should be defined to accept an optional third argument if the three-argument version of the built-in pow() function is to be supported.
Changed in version 3.14: Three-argument pow() now tries calling __rpow__() if necessary.
Previously it was only called in two-argument pow() and the binary power operator.
Note
If the right operand’s type is a subclass of the left operand’s type and that subclass provides a different implementation of the reflected method for the operation, this method will be called before the left operand’s non-reflected method. This behavior allows subclasses to override their ancestors’ operations.
- object.__iadd__(self, other)¶
- object.__isub__(self, other)¶
- object.__imul__(self, other)¶
- object.__imatmul__(self, other)¶
- object.__itruediv__(self, other)¶
- object.__ifloordiv__(self, other)¶
- object.__imod__(self, other)¶
- object.__ipow__(self, other[, modulo])¶
- object.__ilshift__(self, other)¶
- object.__irshift__(self, other)¶
- object.__iand__(self, other)¶
- object.__ixor__(self, other)¶
- object.__ior__(self, other)¶
These methods are called to implement the augmented arithmetic assignments (+=, -=, *=, @=, /=, //=, %=, **=, <<=, >>=, &=, ^=, |=). These methods should attempt to do the operation in-place (modifying self) and return the result (which could be, but does not have to be, self). If a specific method is not defined, or if that method returns NotImplemented, the augmented assignment falls back to the normal methods. For instance, if x is an instance of a class with an __iadd__() method, x += y is equivalent to x = x.__iadd__(y). If __iadd__() does not exist, or if x.__iadd__(y) returns NotImplemented, x.__add__(y) and y.__radd__(x) are considered, as with the evaluation of x + y.
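The in-place path and the fallback can both be observed directly (the Tally class is illustrative, not from the reference):

```python
class Tally:
    """Defines __iadd__, so += mutates the object instead of rebinding."""

    def __init__(self, total=0):
        self.total = total

    def __iadd__(self, other):
        self.total += other
        return self          # augmented assignment rebinds the name to this result

t = Tally()
alias = t
t += 5
t += 2
assert t.total == 7
assert t is alias            # same object: the operation was in-place

# Contrast: int defines no __iadd__, so "n += 1" falls back to __add__
# and rebinds n to a brand-new object.
n = 10
m = n
n += 1
assert n == 11 and m == 10
```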
In certain situations, augmented assignment can result in unexpected errors (see Why does a_tuple[i] += [‘item’] raise an exception when the addition works?), but this behavior is in fact part of the data model.
- object.__neg__(self)¶
- object.__pos__(self)¶
- object.__abs__(self)¶
- object.__invert__(self)¶
Called to implement the unary arithmetic operations (-, +, abs() and ~).
- object.__complex__(self)¶
- object.__int__(self)¶
- object.__float__(self)¶
Called to implement the built-in functions complex(), int() and float(). Should return a value of the appropriate type.
- object.__index__(self)¶
Called to implement operator.index(), and whenever Python needs to losslessly convert the numeric object to an integer object (such as in slicing, or in the built-in bin(), hex() and oct() functions). Presence of this method indicates that the numeric object is an integer type. Must return an integer.
If __int__(), __float__() and __complex__() are not defined then corresponding built-in functions int(), float() and complex() fall back to __index__().
- object.__round__(self[, ndigits])¶
- object.__trunc__(self)¶
- object.__floor__(self)¶
- object.__ceil__(self)¶
Called to implement the built-in function round() and math functions trunc(), floor() and ceil(). Unless ndigits is passed to __round__() all these methods should return the value of the object truncated to an Integral (typically an int).
Changed in version 3.14: int() no longer delegates to the __trunc__() method.
3.3.9. With Statement Context Managers¶
A context manager is an object that defines the runtime context to be established when executing a with statement. The context manager handles the entry into, and the exit from, the desired runtime context for the execution of the block of code.
Context managers are normally invoked using the with statement (described in section The with statement), but can also be used by directly invoking their methods.
Typical uses of context managers include saving and restoring various kinds of global state, locking and unlocking resources, closing opened files, etc.
For more information on context managers, see Context Manager Types.
The object class itself does not provide the context manager methods.
- object.__enter__(self)¶
Enter the runtime context related to this object. The with statement will bind this method’s return value to the target(s) specified in the as clause of the statement, if any.
- object.__exit__(self, exc_type, exc_value, traceback)¶
Exit the runtime context related to this object. The parameters describe the exception that caused the context to be exited. If the context was exited without an exception, all three arguments will be None.
If an exception is supplied, and the method wishes to suppress the exception (i.e., prevent it from being propagated), it should return a true value. Otherwise, the exception will be processed normally upon exit from this method.
Note that __exit__() methods should not reraise the passed-in exception; this is the caller’s responsibility.
3.3.10. Customizing positional arguments in class pattern matching¶
When using a class name in a pattern, positional arguments in the pattern are not allowed by default, i.e. case MyClass(x, y) is typically invalid without special support in MyClass. To be able to use that kind of pattern, the class needs to define a __match_args__ attribute.
- object.__match_args__¶
This class variable can be assigned a tuple of strings. When this class is used in a class pattern with positional arguments, each positional argument will be converted into a keyword argument, using the corresponding value in __match_args__ as the keyword.
The absence of this attribute is equivalent to setting it to ().
For example, if MyClass.__match_args__ is ("left", "center", "right") that means that case MyClass(x, y) is equivalent to case MyClass(left=x, center=y). Note that the number of arguments in the pattern must be smaller than or equal to the number of elements in __match_args__; if it is larger, the pattern match attempt will raise a TypeError.
Added in version 3.10.
See also
- PEP 634 - Structural Pattern Matching
The specification for the Python match statement.
3.3.11. Emulating buffer types¶
The buffer protocol provides a way for Python objects to expose efficient access to a low-level memory array. This protocol is implemented by built-in types such as bytes and memoryview, and third-party libraries may define additional buffer types.
While buffer types are usually implemented in C, it is also possible to implement the protocol in Python.
- object.__buffer__(self, flags)¶
Called when a buffer is requested from self (for example, by the memoryview constructor). The flags argument is an integer representing the kind of buffer requested, affecting for example whether the returned buffer is read-only or writable. inspect.BufferFlags provides a convenient way to interpret the flags. The method must return a memoryview object.
- object.__release_buffer__(self, buffer)¶
Called when a buffer is no longer needed. The buffer argument is a memoryview object that was previously returned by __buffer__(). The method must release any resources associated with the buffer. This method should return None. Buffer objects that do not need to perform any cleanup are not required to implement this method.
Added in version 3.12.
See also
- PEP 688 - Making the buffer protocol accessible in Python
Introduces the Python __buffer__ and __release_buffer__ methods.
collections.abc.Buffer
ABC for buffer types.
3.3.12.
Annotations¶
Functions, classes, and modules may contain annotations, which are a way to associate information (usually type hints) with a symbol.
- object.__annotations__¶
This attribute contains the annotations for an object. It is lazily evaluated, so accessing the attribute may execute arbitrary code and raise exceptions. If evaluation is successful, the attribute is set to a dictionary mapping from variable names to annotations.
Changed in version 3.14: Annotations are now lazily evaluated.
- object.__annotate__(format)¶
An annotate function. Returns a new dictionary object mapping attribute/parameter names to their annotation values.
Takes a format parameter specifying the format in which annotations values should be provided. It must be a member of the annotationlib.Format enum, or an integer with a value corresponding to a member of the enum.
If an annotate function doesn’t support the requested format, it must raise NotImplementedError. Annotate functions must always support VALUE format; they must not raise NotImplementedError() when called with this format.
When called with VALUE format, an annotate function may raise NameError; it must not raise NameError when called requesting any other format.
If an object does not have any annotations, __annotate__ should preferably be set to None (it can’t be deleted), rather than set to a function that returns an empty dict.
Added in version 3.14.
See also
- PEP 649 — Deferred evaluation of annotation using descriptors
Introduces lazy evaluation of annotations and the __annotate__ function.
3.3.13. Special method lookup¶
For custom classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object’s type, not in the object’s instance dictionary. That behaviour is the reason why the following code raises an exception:

>>> class C:
...     pass
...
>>> c = C()
>>> c.__len__ = lambda: 5
>>> len(c)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: object of type 'C' has no len()

The rationale behind this behaviour lies with a number of special methods such as __hash__() and __repr__() that are implemented by all objects, including type objects. If the implicit lookup of these methods used the conventional lookup process, they would fail when invoked on the type object itself:

>>> 1 .__hash__() == hash(1)
True
>>> int.__hash__() == hash(int)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: descriptor '__hash__' of 'int' object needs an argument

Incorrectly attempting to invoke an unbound method of a class in this way is sometimes referred to as ‘metaclass confusion’, and is avoided by bypassing the instance when looking up special methods:

>>> type(1).__hash__(1) == hash(1)
True
>>> type(int).__hash__(int) == hash(int)
True

In addition to bypassing any instance attributes in the interest of correctness, implicit special method lookup generally also bypasses the __getattribute__() method even of the object’s metaclass:

>>> class Meta(type):
...     def __getattribute__(*args):
...         print("Metaclass getattribute invoked")
...         return type.__getattribute__(*args)
...
>>> class C(object, metaclass=Meta):
...     def __len__(self):
...         return 10
...     def __getattribute__(*args):
...         print("Class getattribute invoked")
...         return object.__getattribute__(*args)
...
>>> c = C()
>>> c.__len__()                 # Explicit lookup via instance
Class getattribute invoked
10
>>> type(c).__len__(c)          # Explicit lookup via type
Metaclass getattribute invoked
10
>>> len(c)                      # Implicit lookup
10

Bypassing the __getattribute__() machinery in this fashion provides significant scope for speed optimisations within the interpreter, at the cost of some flexibility in the handling of special methods (the special method must be set on the class object itself in order to be consistently invoked by the interpreter).
3.4. Coroutines¶
3.4.1. Awaitable Objects¶
An awaitable object generally implements an __await__() method. Coroutine objects returned from async def functions are awaitable.
Note
The generator iterator objects returned from generators decorated with types.coroutine() are also awaitable, but they do not implement __await__().
- object.__await__(self)¶
Must return an iterator. Should be used to implement awaitable objects. For instance, asyncio.Future implements this method to be compatible with the await expression. The object class itself is not awaitable and does not provide this method.
Added in version 3.5.
See also
PEP 492 for additional information about awaitable objects.
3.4.2. Coroutine Objects¶
Coroutine objects are awaitable objects. A coroutine’s execution can be controlled by calling __await__() and iterating over the result. When the coroutine has finished executing and returns, the iterator raises StopIteration, and the exception’s value attribute holds the return value. If the coroutine raises an exception, it is propagated by the iterator. Coroutines should not directly raise unhandled StopIteration exceptions.
Coroutines also have the methods listed below, which are analogous to those of generators (see Generator-iterator methods).
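The mechanics described above, where __await__() returns an iterator and StopIteration carries the return value, can be sketched without an event loop (the Ready class is illustrative, not from the reference):

```python
class Ready:
    """A trivial awaitable that completes immediately with a value."""

    def __init__(self, value):
        self.value = value

    def __await__(self):
        # A generator with no yields: iterating it raises StopIteration
        # whose .value attribute is the return value below.
        yield from ()
        return self.value

async def demo():
    return await Ready(42) + 1

coro = demo()
try:
    coro.send(None)              # start the coroutine; it finishes at once
except StopIteration as exc:
    result = exc.value           # the coroutine's return value
assert result == 43
```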
However, unlike generators, coroutines do not directly support iteration.
Changed in version 3.5.2: It is a RuntimeError to await on a coroutine more than once.
- coroutine.send(value)¶
Starts or resumes execution of the coroutine. If value is None, this is equivalent to advancing the iterator returned by __await__(). If value is not None, this method delegates to the send() method of the iterator that caused the coroutine to suspend. The result (return value, StopIteration, or other exception) is the same as when iterating over the __await__() return value, described above.
- coroutine.throw(value)¶
- coroutine.throw(type[, value[, traceback]])
Raises the specified exception in the coroutine. This method delegates to the throw() method of the iterator that caused the coroutine to suspend, if it has such a method. Otherwise, the exception is raised at the suspension point. The result (return value, StopIteration, or other exception) is the same as when iterating over the __await__() return value, described above. If the exception is not caught in the coroutine, it propagates back to the caller.
Changed in version 3.12: The second signature (type[, value[, traceback]]) is deprecated and may be removed in a future version of Python.
- coroutine.close()¶
Causes the coroutine to clean itself up and exit. If the coroutine is suspended, this method first delegates to the close() method of the iterator that caused the coroutine to suspend, if it has such a method. Then it raises GeneratorExit at the suspension point, causing the coroutine to immediately clean itself up. Finally, the coroutine is marked as having finished executing, even if it was never started.
Coroutine objects are automatically closed using the above process when they are about to be destroyed.
3.4.3.
Asynchronous Iterators¶
An asynchronous iterator can call asynchronous code in its __anext__ method.
Asynchronous iterators can be used in an async for statement.
The object class itself does not provide these methods.
- object.__aiter__(self)¶
Must return an asynchronous iterator object.
- object.__anext__(self)¶
Must return an awaitable resulting in a next value of the iterator. Should raise a StopAsyncIteration error when the iteration is over.
An example of an asynchronous iterable object:

class Reader:
    async def readline(self):
        ...

    def __aiter__(self):
        return self

    async def __anext__(self):
        val = await self.readline()
        if val == b'':
            raise StopAsyncIteration
        return val

Added in version 3.5.
Changed in version 3.7: Prior to Python 3.7, __aiter__() could return an awaitable that would resolve to an asynchronous iterator.
Starting with Python 3.7, __aiter__() must return an asynchronous iterator object. Returning anything else will result in a TypeError error.
3.4.4.
Asynchronous Context Managers¶
An asynchronous context manager is a context manager that is able to suspend execution in its __aenter__ and __aexit__ methods.
Asynchronous context managers can be used in an async with statement.
The object class itself does not provide these methods.
- object.__aenter__(self)¶
Semantically similar to __enter__(), the only difference being that it must return an awaitable.
- object.__aexit__(self, exc_type, exc_value, traceback)¶
Semantically similar to __exit__(), the only difference being that it must return an awaitable.
An example of an asynchronous context manager class:

class AsyncContextManager:
    async def __aenter__(self):
        await log('entering context')

    async def __aexit__(self, exc_type, exc, tb):
        await log('exiting context')

Added in version 3.5.
Footnotes
", "\n", " ", " ", "\n", " ", "\n", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", "\n", " ", "\n", "\n", "\n", " ", "\n", "\n", "\n ", " ", "\n ", "\n\n ", "\n ", " ", "\n\n ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", "\n", "\n ", " ", "\n ", " ", "\n\n ", " ", " ", " ", " ", "\n ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 27042}
{"url": "https://docs.python.org/3/reference/expressions.html", "title": "Expressions", "content": "6. Expressions\u00b6\nThis chapter explains the meaning of the elements of expressions in Python.\nSyntax Notes: In this and the following chapters, grammar notation will be used to describe syntax, not lexical analysis.\nWhen (one alternative of) a syntax rule has the form:\nname: othername\nand no semantics are given, the semantics of this form of name\nare the same\nas for othername\n.\n6.1. Arithmetic conversions\u00b6\nWhen a description of an arithmetic operator below uses the phrase \u201cthe numeric arguments are converted to a common real type\u201d, this means that the operator implementation for built-in numeric types works as described in the Numeric Types section of the standard library documentation.\nSome additional rules apply for certain operators and non-numeric operands\n(for example, a string as a left argument to the %\noperator).\nExtensions must define their own conversion behavior.\n6.2. Atoms\u00b6\nAtoms are the most basic elements of expressions. The simplest atoms are names or literals. Forms enclosed in parentheses, brackets or braces are also categorized syntactically as atoms.\nFormally, the syntax for atoms is:\natom: | 'True' | 'False' | 'None' | '...' |identifier\n|literal\n|enclosure\nenclosure: |parenth_form\n|list_display\n|dict_display\n|set_display\n|generator_expression\n|yield_atom\n6.2.1. Built-in constants\u00b6\nThe keywords True\n, False\n, and None\nname\nbuilt-in constants.\nThe token ...\nnames the Ellipsis\nconstant.\nEvaluation of these atoms yields the corresponding value.\nNote\nSeveral more built-in constants are available as global variables, but only the ones mentioned here are keywords. In particular, these names cannot be reassigned or used as attributes:\n>>> False = 123\nFile \"\", line 1\nFalse = 123\n^^^^^\nSyntaxError: cannot assign to False\n6.2.2. 
Identifiers (Names)¶
An identifier occurring as an atom is a name. See section Names (identifiers and keywords) for the lexical definition and section Naming and binding for documentation of naming and binding.
When the name is bound to an object, evaluation of the atom yields that object. When a name is not bound, an attempt to evaluate it raises a NameError exception.
6.2.2.1. Private name mangling¶
When an identifier that textually occurs in a class definition begins with two or more underscore characters and does not end in two or more underscores, it is considered a private name of that class.
See also
The class specifications.
More precisely, private names are transformed to a longer form before code is generated for them. If the transformed name is longer than 255 characters, implementation-defined truncation may happen.
The transformation is independent of the syntactical context in which the identifier is used, but only the following private identifiers are mangled:
- Any name used as the name of a variable that is assigned or read, or any name of an attribute being accessed. The __name__ attribute of nested functions, classes, and type aliases is, however, not mangled.
- The name of an imported module, e.g., __spam in import __spam. If the module is part of a package (i.e., its name contains a dot), the name is not mangled, e.g., the __foo in import __foo.bar is not mangled.
- The name of an imported member, e.g., __f in from spam import __f.
The transformation rule is defined as follows:
- The class name, with leading underscores removed and a single leading underscore inserted, is inserted in front of the identifier, e.g., the identifier __spam occurring in a class named Foo, _Foo or __Foo is transformed to _Foo__spam.
- If the class name consists only of underscores, the transformation is the identity, e.g., the identifier __spam occurring in a class named _ or __ is left as is.
6.2.3.
Literals¶
A literal is a textual representation of a value. Python supports numeric, string and bytes literals. Format strings and template strings are treated as string literals.
Numeric literals consist of a single NUMBER token, which names an integer, floating-point number, or an imaginary number. See the Numeric literals section of the Lexical analysis documentation for details.
String and bytes literals may consist of several tokens. See section String literal concatenation for details.
Note that negative and complex numbers, like -3 or 3+4.2j, are syntactically not literals, but unary or binary arithmetic operations involving the - or + operator.
Evaluation of a literal yields an object of the given type (int, float, complex, str, bytes, or Template) with the given value. The value may be approximated in the case of floating-point and imaginary literals.
The formal grammar for literals is:
literal: strings | NUMBER
6.2.3.1. Literals and object identity¶
All literals correspond to immutable data types, and hence the object's identity is less important than its value. Multiple evaluations of literals with the same value (either the same occurrence in the program text or a different occurrence) may obtain the same object or a different object with the same value.
CPython implementation detail
For example, in CPython, small integers with the same value evaluate to the same object:
>>> x = 7
>>> y = 7
>>> x is y
True
However, large integers evaluate to different objects:
>>> x = 123456789
>>> y = 123456789
>>> x is y
False
This behavior may change in future versions of CPython. In particular, the boundary between "small" and "large" integers has already changed in the past.
CPython will emit a SyntaxWarning when you compare literals using is:
>>> x = 7
>>> x is 7
<stdin>:1: SyntaxWarning: "is" with 'int' literal.
Did you mean "=="?
True
See "When can I rely on identity tests with the is operator?" for more information.
Template strings are immutable but may reference mutable objects as Interpolation values. For the purposes of this section, two t-strings have the "same value" if both their structure and the identity of the values match.
CPython implementation detail: Currently, each evaluation of a template string results in a different object.
6.2.3.2. String literal concatenation¶
Multiple adjacent string or bytes literals, possibly using different quoting conventions, are allowed, and their meaning is the same as their concatenation:
>>> "hello" 'world'
'helloworld'
This feature is defined at the syntactical level, so it only works with literals. To concatenate string expressions at run time, the '+' operator may be used:
>>> greeting = "Hello"
>>> space = " "
>>> name = "Blaise"
>>> print(greeting + space + name)  # not: print(greeting space name)
Hello Blaise
Literal concatenation can freely mix raw strings, triple-quoted strings, and formatted string literals. For example:
>>> "Hello" r', ' f"{name}!"
'Hello, Blaise!'
This feature can be used to reduce the number of backslashes needed, to split long strings conveniently across long lines, or even to add comments to parts of strings. For example:
re.compile("[A-Za-z_]"       # letter or underscore
           "[A-Za-z0-9_]*"   # letter, digit or underscore
          )
However, bytes literals may only be combined with other bytes literals, not with string literals of any kind. Likewise, template string literals may only be combined with other template string literals:
>>> t"Hello" t"{name}!"
Template(strings=('Hello', '!'), interpolations=(...))
Formally:
strings: (STRING | fstring)+ | tstring+
6.2.4.
Parenthesized forms¶
A parenthesized form is an optional expression list enclosed in parentheses:
parenth_form: "(" [starred_expression] ")"
A parenthesized expression list yields whatever that expression list yields: if the list contains at least one comma, it yields a tuple; otherwise, it yields the single expression that makes up the expression list.
An empty pair of parentheses yields an empty tuple object. Since tuples are immutable, the same rules as for literals apply (i.e., two occurrences of the empty tuple may or may not yield the same object).
Note that tuples are not formed by the parentheses, but rather by use of the comma. The exception is the empty tuple, for which parentheses are required; allowing unparenthesized "nothing" in expressions would cause ambiguities and allow common typos to pass uncaught.
6.2.5. Displays for lists, sets and dictionaries¶
For constructing a list, a set or a dictionary, Python provides special syntax called "displays", each of them in two flavors:
- either the container contents are listed explicitly, or
- they are computed via a set of looping and filtering instructions, called a comprehension.
Common syntax elements for comprehensions are:
comprehension: assignment_expression comp_for
comp_for:      ["async"] "for" target_list "in" or_test [comp_iter]
comp_iter:     comp_for | comp_if
comp_if:       "if" or_test [comp_iter]
The comprehension consists of a single expression followed by at least one for clause and zero or more for or if clauses. In this case, the elements of the new container are those that would be produced by considering each of the for or if clauses a block, nesting from left to right, and evaluating the expression to produce an element each time the innermost block is reached.
However, aside from the iterable expression in the leftmost for clause, the comprehension is executed in a separate implicitly nested scope.
This ensures that names assigned to in the target list don't "leak" into the enclosing scope.
The iterable expression in the leftmost for clause is evaluated directly in the enclosing scope and then passed as an argument to the implicitly nested scope. Subsequent for clauses and any filter condition in the leftmost for clause cannot be evaluated in the enclosing scope as they may depend on the values obtained from the leftmost iterable. For example: [x*y for x in range(10) for y in range(x, x+10)].
To ensure the comprehension always results in a container of the appropriate type, yield and yield from expressions are prohibited in the implicitly nested scope.
Since Python 3.6, in an async def function, an async for clause may be used to iterate over an asynchronous iterator.
A comprehension in an async def function may consist of either a for or async for clause following the leading expression, may contain additional for or async for clauses, and may also use await expressions. If a comprehension contains async for clauses, or if it contains await expressions or other asynchronous comprehensions anywhere except the iterable expression in the leftmost for clause, it is called an asynchronous comprehension. An asynchronous comprehension may suspend the execution of the coroutine function in which it appears.
See also PEP 530.
Added in version 3.6: Asynchronous comprehensions were introduced.
Changed in version 3.8: yield and yield from prohibited in the implicitly nested scope.
Changed in version 3.11: Asynchronous comprehensions are now allowed inside comprehensions in asynchronous functions. Outer comprehensions implicitly become asynchronous.
6.2.6.
List displays¶
A list display is a possibly empty series of expressions enclosed in square brackets:
list_display: "[" [flexible_expression_list | comprehension] "]"
A list display yields a new list object, the contents being specified by either a list of expressions or a comprehension. When a comma-separated list of expressions is supplied, its elements are evaluated from left to right and placed into the list object in that order. When a comprehension is supplied, the list is constructed from the elements resulting from the comprehension.
6.2.7. Set displays¶
A set display is denoted by curly braces and distinguishable from dictionary displays by the lack of colons separating keys and values:
set_display: "{" (flexible_expression_list | comprehension) "}"
A set display yields a new mutable set object, the contents being specified by either a sequence of expressions or a comprehension. When a comma-separated list of expressions is supplied, its elements are evaluated from left to right and added to the set object. When a comprehension is supplied, the set is constructed from the elements resulting from the comprehension.
An empty set cannot be constructed with {}; this literal constructs an empty dictionary.
6.2.8. Dictionary displays¶
A dictionary display is a possibly empty series of dict items (key/value pairs) enclosed in curly braces:
dict_display:       "{" [dict_item_list | dict_comprehension] "}"
dict_item_list:     dict_item ("," dict_item)* [","]
dict_item:          expression ":" expression | "**" or_expr
dict_comprehension: expression ":" expression comp_for
A dictionary display yields a new dictionary object.
If a comma-separated sequence of dict items is given, they are evaluated from left to right to define the entries of the dictionary: each key object is used as a key into the dictionary to store the corresponding value.
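A minimal sketch of this left-to-right evaluation (the `note` helper is purely illustrative, not part of the documented API):

```python
# Each dict item is evaluated left to right; within an item, the key
# expression is evaluated before the value expression (CPython 3.8+),
# and each key stores the corresponding value.
order = []

def note(x):
    # record the order in which the expressions are evaluated
    order.append(x)
    return x

d = {note('k1'): note('v1'), note('k2'): note('v2')}
print(d)      # {'k1': 'v1', 'k2': 'v2'}
print(order)  # ['k1', 'v1', 'k2', 'v2']
```
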
This means that you can specify the same key multiple times in the dict item list, and the final dictionary's value for that key will be the last one given.
A double asterisk ** denotes dictionary unpacking. Its operand must be a mapping. Each mapping item is added to the new dictionary. Later values replace values already set by earlier dict items and earlier dictionary unpackings.
Added in version 3.5: Unpacking into dictionary displays, originally proposed by PEP 448.
A dict comprehension, in contrast to list and set comprehensions, needs two expressions separated with a colon followed by the usual "for" and "if" clauses. When the comprehension is run, the resulting key and value elements are inserted in the new dictionary in the order they are produced.
Restrictions on the types of the key values are listed earlier in section The standard type hierarchy. (To summarize, the key type should be hashable, which excludes all mutable objects.) Clashes between duplicate keys are not detected; the last value (textually rightmost in the display) stored for a given key value prevails.
Changed in version 3.8: Prior to Python 3.8, in dict comprehensions, the evaluation order of key and value was not well-defined. In CPython, the value was evaluated before the key. Starting with 3.8, the key is evaluated before the value, as proposed by PEP 572.
6.2.9. Generator expressions¶
A generator expression is a compact generator notation in parentheses:
generator_expression: "(" expression comp_for ")"
A generator expression yields a new generator object. Its syntax is the same as for comprehensions, except that it is enclosed in parentheses instead of brackets or curly braces.
Variables used in the generator expression are evaluated lazily when the __next__() method is called for the generator object (in the same fashion as normal generators).
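A short sketch of this lazy evaluation, including the one eager part (the leftmost iterable):

```python
# the generator expression computes nothing until it is iterated
squares = (n * n for n in range(5))
print(next(squares))  # 0
print(next(squares))  # 1
print(list(squares))  # [4, 9, 16] -- remaining values, produced on demand

# the leftmost iterable, however, is evaluated immediately, so a bad
# iterable fails at definition time, not at first iteration
try:
    broken = (x for x in 1)  # iter(1) is called right here
except TypeError:
    print("error raised where the expression is defined")
```
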
However, the iterable expression in the leftmost for clause is immediately evaluated, and the iterator is immediately created for that iterable, so that an error produced while creating the iterator will be emitted at the point where the generator expression is defined, rather than at the point where the first value is retrieved. Subsequent for clauses and any filter condition in the leftmost for clause cannot be evaluated in the enclosing scope as they may depend on the values obtained from the leftmost iterable. For example: (x*y for x in range(10) for y in range(x, x+10)).
The parentheses can be omitted on calls with only one argument. See section Calls for details.
To avoid interfering with the expected operation of the generator expression itself, yield and yield from expressions are prohibited in the implicitly defined generator.
If a generator expression contains either async for clauses or await expressions, it is called an asynchronous generator expression. An asynchronous generator expression returns a new asynchronous generator object, which is an asynchronous iterator (see Asynchronous Iterators).
Added in version 3.6: Asynchronous generator expressions were introduced.
Changed in version 3.7: Prior to Python 3.7, asynchronous generator expressions could only appear in async def coroutines. Starting with 3.7, any function can use asynchronous generator expressions.
Changed in version 3.8: yield and yield from prohibited in the implicitly nested scope.
6.2.10. Yield expressions¶
yield_atom:       "(" yield_expression ")"
yield_from:       "yield" "from" expression
yield_expression: "yield" yield_list | yield_from
The yield expression is used when defining a generator function or an asynchronous generator function and thus can only be used in the body of a function definition.
Using a yield expression in a function's body causes that function to be a generator function, and using it in an async def function's body causes that coroutine function to be an asynchronous generator function. For example:
def gen():  # defines a generator function
    yield 123

async def agen():  # defines an asynchronous generator function
    yield 123
Due to their side effects on the containing scope, yield expressions are not permitted as part of the implicitly defined scopes used to implement comprehensions and generator expressions.
Changed in version 3.8: Yield expressions prohibited in the implicitly nested scopes used to implement comprehensions and generator expressions.
Generator functions are described below, while asynchronous generator functions are described separately in section Asynchronous generator functions.
When a generator function is called, it returns an iterator known as a generator. That generator then controls the execution of the generator function. The execution starts when one of the generator's methods is called. At that time, the execution proceeds to the first yield expression, where it is suspended again, returning the value of yield_list to the generator's caller, or None if yield_list is omitted. By suspended, we mean that all local state is retained, including the current bindings of local variables, the instruction pointer, the internal evaluation stack, and the state of any exception handling.
When the execution is resumed by calling one of the generator's methods, the function can proceed exactly as if the yield expression were just another external call. The value of the yield expression after resuming depends on the method which resumed the execution. If __next__() is used (typically via either a for loop or the next() builtin) then the result is None.
Otherwise, if send() is used, then the result will be the value passed in to that method.
All of this makes generator functions quite similar to coroutines; they yield multiple times, they have more than one entry point and their execution can be suspended. The only difference is that a generator function cannot control where the execution should continue after it yields; the control is always transferred to the generator's caller.
Yield expressions are allowed anywhere in a try construct. If the generator is not resumed before it is finalized (by reaching a zero reference count or by being garbage collected), the generator-iterator's close() method will be called, allowing any pending finally clauses to execute.
When yield from <expr> is used, the supplied expression must be an iterable. The values produced by iterating that iterable are passed directly to the caller of the current generator's methods. Any values passed in with send() and any exceptions passed in with throw() are passed to the underlying iterator if it has the appropriate methods. If this is not the case, then send() will raise AttributeError or TypeError, while throw() will just raise the passed in exception immediately.
When the underlying iterator is complete, the value attribute of the raised StopIteration instance becomes the value of the yield expression.
It can be either set explicitly when raising StopIteration, or automatically when the subiterator is a generator (by returning a value from the subgenerator).
Changed in version 3.3: Added yield from <expr> to delegate control flow to a subiterator.
The parentheses may be omitted when the yield expression is the sole expression on the right hand side of an assignment statement.
See also
- PEP 255 - Simple Generators: The proposal for adding generators and the yield statement to Python.
- PEP 342 - Coroutines via Enhanced Generators: The proposal to enhance the API and syntax of generators, making them usable as simple coroutines.
- PEP 380 - Syntax for Delegating to a Subgenerator: The proposal to introduce the yield from syntax, making delegation to subgenerators easy.
- PEP 525 - Asynchronous Generators: The proposal that expanded on PEP 492 by adding generator capabilities to coroutine functions.
6.2.10.1. Generator-iterator methods¶
This subsection describes the methods of a generator iterator. They can be used to control the execution of a generator function.
Note that calling any of the generator methods below when the generator is already executing raises a ValueError exception.
- generator.__next__()¶
Starts the execution of a generator function or resumes it at the last executed yield expression. When a generator function is resumed with a __next__() method, the current yield expression always evaluates to None. The execution then continues to the next yield expression, where the generator is suspended again, and the value of the yield_list is returned to __next__()'s caller. If the generator exits without yielding another value, a StopIteration exception is raised.
This method is normally called implicitly, e.g. by a for loop, or by the built-in next() function.
- generator.send(value)¶
Resumes the execution and "sends" a value into the generator function.
The value argument becomes the result of the current yield expression. The send() method returns the next value yielded by the generator, or raises StopIteration if the generator exits without yielding another value. When send() is called to start the generator, it must be called with None as the argument, because there is no yield expression that could receive the value.
- generator.throw(value)¶
- generator.throw(type[, value[, traceback]])
Raises an exception at the point where the generator was paused, and returns the next value yielded by the generator function. If the generator exits without yielding another value, a StopIteration exception is raised. If the generator function does not catch the passed-in exception, or raises a different exception, then that exception propagates to the caller.
In typical use, this is called with a single exception instance similar to the way the raise keyword is used.
For backwards compatibility, however, the second signature is supported, following a convention from older versions of Python. The type argument should be an exception class, and value should be an exception instance. If the value is not provided, the type constructor is called to get an instance. If traceback is provided, it is set on the exception, otherwise any existing __traceback__ attribute stored in value may be cleared.
Changed in version 3.12: The second signature (type[, value[, traceback]]) is deprecated and may be removed in a future version of Python.
- generator.close()¶
Raises a GeneratorExit exception at the point where the generator function was paused (equivalent to calling throw(GeneratorExit)). The exception is raised by the yield expression where the generator was paused. If the generator function catches the exception and returns a value, this value is returned from close(). If the generator function is already closed, or raises GeneratorExit (by not catching the exception), close() returns None.
If the generator yields a value, a RuntimeError is raised. If the generator raises any other exception, it is propagated to the caller. If the generator has already exited due to an exception or normal exit, close() returns None and has no other effect.
Changed in version 3.13: If a generator returns a value upon being closed, the value is returned by close().
6.2.10.2. Examples¶
Here is a simple example that demonstrates the behavior of generators and generator functions:
>>> def echo(value=None):
...     print("Execution starts when 'next()' is called for the first time.")
...     try:
...         while True:
...             try:
...                 value = (yield value)
...             except Exception as e:
...                 value = e
...     finally:
...         print("Don't forget to clean up when 'close()' is called.")
...
>>> generator = echo(1)
>>> print(next(generator))
Execution starts when 'next()' is called for the first time.
1
>>> print(next(generator))
None
>>> print(generator.send(2))
2
>>> generator.throw(TypeError, "spam")
TypeError('spam',)
>>> generator.close()
Don't forget to clean up when 'close()' is called.
For examples using yield from, see PEP 380: Syntax for Delegating to a Subgenerator in "What's New in Python."
6.2.10.3. Asynchronous generator functions¶
The presence of a yield expression in a function or method defined using async def further defines the function as an asynchronous generator function.
When an asynchronous generator function is called, it returns an asynchronous iterator known as an asynchronous generator object. That object then controls the execution of the generator function. An asynchronous generator object is typically used in an async for statement in a coroutine function analogously to how a generator object would be used in a for statement.
Calling one of the asynchronous generator's methods returns an awaitable object, and the execution starts when this object is awaited on.
At that time, the execution proceeds to the first yield expression, where it is suspended again, returning the value of yield_list to the awaiting coroutine. As with a generator, suspension means that all local state is retained, including the current bindings of local variables, the instruction pointer, the internal evaluation stack, and the state of any exception handling.
When the execution is resumed by awaiting on the next object returned by the asynchronous generator's methods, the function can proceed exactly as if the yield expression were just another external call. The value of the yield expression after resuming depends on the method which resumed the execution. If __anext__() is used then the result is None. Otherwise, if asend() is used, then the result will be the value passed in to that method.
If an asynchronous generator happens to exit early by break, the caller task being cancelled, or other exceptions, the generator's async cleanup code will run and possibly raise exceptions or access context variables in an unexpected context, perhaps after the lifetime of tasks it depends on, or during the event loop shutdown when the async-generator garbage collection hook is called.
To prevent this, the caller must explicitly close the async generator by calling the aclose() method to finalize the generator and ultimately detach it from the event loop.
In an asynchronous generator function, yield expressions are allowed anywhere in a try construct. However, if an asynchronous generator is not resumed before it is finalized (by reaching a zero reference count or by being garbage collected), then a yield expression within a try construct could result in a failure to execute pending finally clauses.
In this case, it is the responsibility of the event loop or scheduler running the asynchronous generator to call the asynchronous generator-iterator's aclose() method and run the resulting coroutine object, thus allowing any pending finally clauses to execute.
To take care of finalization upon event loop termination, an event loop should define a finalizer function which takes an asynchronous generator-iterator and presumably calls aclose() and executes the coroutine. This finalizer may be registered by calling sys.set_asyncgen_hooks(). When first iterated over, an asynchronous generator-iterator will store the registered finalizer to be called upon finalization. For a reference example of a finalizer method see the implementation of asyncio.Loop.shutdown_asyncgens in Lib/asyncio/base_events.py.
The expression yield from <expr> is a syntax error when used in an asynchronous generator function.
6.2.10.4. Asynchronous generator-iterator methods¶
This subsection describes the methods of an asynchronous generator iterator, which are used to control the execution of a generator function.
- async agen.__anext__()¶
Returns an awaitable which when run starts to execute the asynchronous generator or resumes it at the last executed yield expression. When an asynchronous generator function is resumed with an __anext__() method, the current yield expression always evaluates to None in the returned awaitable, which when run will continue to the next yield expression. The value of the yield_list of the yield expression is the value of the StopIteration exception raised by the completing coroutine.
If the asynchronous generator exits without yielding another value, the awaitable instead raises a StopAsyncIteration exception, signalling that the asynchronous iteration has completed.
This method is normally called implicitly by an async for loop.
- async agen.asend(value)¶
Returns an awaitable which when run resumes the execution of the asynchronous generator. As with the send() method for a generator, this "sends" a value into the asynchronous generator function, and the value argument becomes the result of the current yield expression. The awaitable returned by the asend() method will return the next value yielded by the generator as the value of the raised StopIteration, or raises StopAsyncIteration if the asynchronous generator exits without yielding another value. When asend() is called to start the asynchronous generator, it must be called with None as the argument, because there is no yield expression that could receive the value.
- async agen.athrow(value)¶
- async agen.athrow(type[, value[, traceback]])
Returns an awaitable that raises an exception of type type at the point where the asynchronous generator was paused, and returns the next value yielded by the generator function as the value of the raised StopIteration exception. If the asynchronous generator exits without yielding another value, a StopAsyncIteration exception is raised by the awaitable. If the generator function does not catch the passed-in exception, or raises a different exception, then when the awaitable is run that exception propagates to the caller of the awaitable.
Changed in version 3.12: The second signature (type[, value[, traceback]]) is deprecated and may be removed in a future version of Python.
- async agen.aclose()¶
Returns an awaitable that when run will throw a GeneratorExit into the asynchronous generator function at the point where it was paused.
If the asynchronous generator function then exits gracefully, is already closed, or raises GeneratorExit (by not catching the exception), then the returned awaitable will raise a StopIteration exception. Any further awaitables returned by subsequent calls to the asynchronous generator will raise a StopAsyncIteration exception. If the asynchronous generator yields a value, a RuntimeError is raised by the awaitable. If the asynchronous generator raises any other exception, it is propagated to the caller of the awaitable. If the asynchronous generator has already exited due to an exception or normal exit, then further calls to aclose() will return an awaitable that does nothing.
6.3. Primaries¶
Primaries represent the most tightly bound operations of the language. Their syntax is:
primary: atom | attributeref | subscription | call
6.3.1. Attribute references¶
An attribute reference is a primary followed by a period and a name:
attributeref: primary "." identifier
The primary must evaluate to an object of a type that supports attribute references, which most objects do. This object is then asked to produce the attribute whose name is the identifier. The type and value produced is determined by the object. Multiple evaluations of the same attribute reference may yield different objects.
This production can be customized by overriding the __getattribute__() method or the __getattr__() method. The __getattribute__() method is called first and either returns a value or raises AttributeError if the attribute is not available. If an AttributeError is raised and the object has a __getattr__() method, that method is called as a fallback.
6.3.2.
Subscriptions and slicings¶
The subscription syntax is usually used for selecting an element from a container, for example, to get a value from a dict:
>>> digits_by_name = {'one': 1, 'two': 2}
>>> digits_by_name['two']  # Subscripting a dictionary using the key 'two'
2
In the subscription syntax, the object being subscribed (a primary) is followed by a subscript in square brackets. In the simplest case, the subscript is a single expression.
Depending on the type of the object being subscribed, the subscript is sometimes called a key (for mappings), index (for sequences), or type argument (for generic types). Syntactically, these are all equivalent:
>>> colors = ['red', 'blue', 'green', 'black']
>>> colors[3]  # Subscripting a list using the index 3
'black'
>>> list[str]  # Parameterizing the list type using the type argument str
list[str]
At runtime, the interpreter will evaluate the primary and the subscript, and call the primary's __getitem__() or __class_getitem__() special method with the subscript as argument. For more details on which of these methods is called, see __class_getitem__ versus __getitem__.
To show how subscription works, we can define a custom object that implements __getitem__() and prints out the value of the subscript:
>>> class SubscriptionDemo:
...     def __getitem__(self, key):
...         print(f'subscripted with: {key!r}')
...
>>> demo = SubscriptionDemo()
>>> demo[1]
subscripted with: 1
>>> demo['a' * 3]
subscripted with: 'aaa'
See the __getitem__() documentation for how built-in types handle subscription.
Subscriptions may also be used as targets in assignment or deletion statements. In these cases, the interpreter will call the subscripted object's __setitem__() or __delitem__() special method, respectively, instead of __getitem__().
>>> colors = ['red', 'blue', 'green', 'black']
>>> colors[3] = 'white'  # Setting item at index 3
>>> colors
['red', 'blue', 'green', 'white']
>>> del colors[3]  # Deleting item at index 3
>>> colors
['red', 'blue', 'green']
All advanced forms of subscript documented in the following sections are also usable for assignment and deletion.
6.3.2.1. Slicings¶
A more advanced form of subscription, slicing, is commonly used to extract a portion of a sequence. In this form, the subscript is a slice: up to three expressions separated by colons.
Any of the expressions may be omitted, but a slice must contain at least one colon:

>>> number_names = ['zero', 'one', 'two', 'three', 'four', 'five']
>>> number_names[1:3]
['one', 'two']
>>> number_names[1:]
['one', 'two', 'three', 'four', 'five']
>>> number_names[:3]
['zero', 'one', 'two']
>>> number_names[:]
['zero', 'one', 'two', 'three', 'four', 'five']
>>> number_names[::2]
['zero', 'two', 'four']
>>> number_names[:-3]
['zero', 'one', 'two']
>>> del number_names[4:]
>>> number_names
['zero', 'one', 'two', 'three']

When a slice is evaluated, the interpreter constructs a slice object whose start, stop and step attributes, respectively, are the results of the expressions between the colons. Any missing expression evaluates to None. This slice object is then passed to the __getitem__() or __class_getitem__() special method, as above.

# continuing with the SubscriptionDemo instance defined above:
>>> demo[2:3]
subscripted with: slice(2, 3, None)
>>> demo[::'spam']
subscripted with: slice(None, None, 'spam')

6.3.2.2. Comma-separated subscripts¶
The subscript can also be given as two or more comma-separated expressions or slices:

# continuing with the SubscriptionDemo instance defined above:
>>> demo[1, 2, 3]
subscripted with: (1, 2, 3)
>>> demo[1:2, 3]
subscripted with: (slice(1, 2, None), 3)

This form is commonly used with numerical libraries for slicing multi-dimensional data. In this case, the interpreter constructs a tuple of the results of the expressions or slices, and passes this tuple to the __getitem__() or __class_getitem__() special method, as above.
The subscript may also be given as a single expression or slice followed by a comma, to specify a one-element tuple:

>>> demo['spam',]
subscripted with: ('spam',)

6.3.2.3. "Starred" subscriptions¶
Added in version 3.11: Expressions in tuple_slices may be starred.
See PEP 646.
The subscript can also contain a starred expression. In this case, the interpreter unpacks the result into a tuple, and passes this tuple to __getitem__() or __class_getitem__():

# continuing with the SubscriptionDemo instance defined above:
>>> demo[*range(10)]
subscripted with: (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)

Starred expressions may be combined with comma-separated expressions and slices:

>>> demo['a', 'b', *range(3), 'c']
subscripted with: ('a', 'b', 0, 1, 2, 'c')

6.3.2.4. Formal subscription grammar¶

subscription: primary '[' subscript ']'
subscript: single_subscript | tuple_subscript
single_subscript: proper_slice | assignment_expression
proper_slice: [expression] ":" [expression] [ ":" [expression] ]
tuple_subscript: ','.(single_subscript | starred_expression)+ [',']

Recall that the | operator denotes ordered choice. Specifically, in subscript, if both alternatives would match, the first (single_subscript) has priority.

6.3.3. Calls¶
A call calls a callable object (e.g., a function) with a possibly empty series of arguments:

call: primary "(" [argument_list [","] | comprehension] ")"
argument_list: positional_arguments ["," starred_and_keywords] ["," keywords_arguments]
             | starred_and_keywords ["," keywords_arguments]
             | keywords_arguments
positional_arguments: positional_item ("," positional_item)*
positional_item: assignment_expression | "*" expression
starred_and_keywords: ("*" expression | keyword_item) ("," "*" expression | "," keyword_item)*
keywords_arguments: (keyword_item | "**" expression) ("," keyword_item | "," "**" expression)*
keyword_item: identifier "=" expression

An optional trailing comma may be present after the positional and keyword arguments but does not affect the semantics.
The primary must evaluate to a callable object (user-defined functions, built-in functions, methods of built-in objects, class objects, methods of
class instances, and all objects having a __call__() method are callable). All argument expressions are evaluated before the call is attempted. Please refer to section Function definitions for the syntax of formal parameter lists.
If keyword arguments are present, they are first converted to positional arguments, as follows. First, a list of unfilled slots is created for the formal parameters. If there are N positional arguments, they are placed in the first N slots. Next, for each keyword argument, the identifier is used to determine the corresponding slot (if the identifier is the same as the first formal parameter name, the first slot is used, and so on). If the slot is already filled, a TypeError exception is raised. Otherwise, the argument is placed in the slot, filling it (even if the expression is None, it fills the slot). When all arguments have been processed, the slots that are still unfilled are filled with the corresponding default value from the function definition. (Default values are calculated, once, when the function is defined; thus, a mutable object such as a list or dictionary used as a default value will be shared by all calls that don't specify an argument value for the corresponding slot; this should usually be avoided.) If there are any unfilled slots for which no default value is specified, a TypeError exception is raised. Otherwise, the list of filled slots is used as the argument list for the call.
CPython implementation detail: An implementation may provide built-in functions whose positional parameters do not have names, even if they are 'named' for the purpose of documentation, and which therefore cannot be supplied by keyword.
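As a quick sketch of such positional-only parameters, the builtin divmod() (implemented in C) accepts its operands only positionally, even though its documentation names them:

```python
# divmod() is a builtin implemented in C; its parameters are
# positional-only, so they cannot be supplied by keyword.
print(divmod(10, 3))       # (3, 1)

try:
    divmod(x=10, y=3)      # keyword arguments are rejected
except TypeError as exc:
    print("rejected:", exc)
```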
In CPython, this is the case for functions implemented in C that use PyArg_ParseTuple() to parse their arguments.
If there are more positional arguments than there are formal parameter slots, a TypeError exception is raised, unless a formal parameter using the syntax *identifier is present; in this case, that formal parameter receives a tuple containing the excess positional arguments (or an empty tuple if there were no excess positional arguments).
If any keyword argument does not correspond to a formal parameter name, a TypeError exception is raised, unless a formal parameter using the syntax **identifier is present; in this case, that formal parameter receives a dictionary containing the excess keyword arguments (using the keywords as keys and the argument values as corresponding values), or a (new) empty dictionary if there were no excess keyword arguments.
If the syntax *expression appears in the function call, expression must evaluate to an iterable. Elements from these iterables are treated as if they were additional positional arguments. For the call f(x1, x2, *y, x3, x4), if y evaluates to a sequence y1, …, yM, this is equivalent to a call with M+4 positional arguments x1, x2, y1, …, yM, x3, x4.
A consequence of this is that although the *expression syntax may appear after explicit keyword arguments, it is processed before the keyword arguments (and any **expression arguments – see below). So:

>>> def f(a, b):
...     print(a, b)
...
>>> f(b=1, *(2,))
2 1
>>> f(a=1, *(2,))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: f() got multiple values for keyword argument 'a'
>>> f(1, *(2,))
1 2

It is unusual for both keyword arguments and the *expression syntax to be used in the same call, so in practice this confusion does not often arise.
If the syntax **expression appears in the function call, expression must evaluate to a mapping, the contents of which are treated as additional keyword arguments. If a parameter matching a key has already been given a value (by an explicit keyword argument, or from another unpacking), a TypeError exception is raised.
When **expression is used, each key in this mapping must be a string. Each value from the mapping is assigned to the first formal parameter eligible for keyword assignment whose name is equal to the key. A key need not be a Python identifier (e.g. "max-temp °F" is acceptable, although it will not match any formal parameter that could be declared). If there is no match to a formal parameter, the key-value pair is collected by the ** parameter, if there is one, or if there is not, a TypeError exception is raised.
Formal parameters using the syntax *identifier or **identifier cannot be used as positional argument slots or as keyword argument names.
Changed in version 3.5: Function calls accept any number of * and ** unpackings, positional arguments may follow iterable unpackings (*), and keyword arguments may follow dictionary unpackings (**). Originally proposed by PEP 448.
A call always returns some value, possibly None, unless it raises an exception. How this value is computed depends on the type of the callable object.
If it is—
- a user-defined function:
The code block for the function is executed, passing it the argument list.
The first thing the code block will do is bind the formal parameters to the arguments; this is described in section Function definitions. When the code block executes a return statement, this specifies the return value of the function call. If execution reaches the end of the code block without executing a return statement, the return value is None.
- a built-in function or method:
The result is up to the interpreter; see Built-in Functions for the descriptions of built-in functions and methods.
- a class object:
A new instance of that class is returned.
- a class instance method:
The corresponding user-defined function is called, with an argument list that is one longer than the argument list of the call: the instance becomes the first argument.
- a class instance:
The class must define a __call__() method; the effect is then the same as if that method was called.

6.4. Await expression¶
Suspend the execution of a coroutine on an awaitable object. Can only be used inside a coroutine function.

await_expr: "await" primary

Added in version 3.5.

6.5. The power operator¶
The power operator binds more tightly than unary operators on its left; it binds less tightly than unary operators on its right. The syntax is:

power: (await_expr | primary) ["**" u_expr]

Thus, in an unparenthesized sequence of power and unary operators, the operators are evaluated from right to left (this does not constrain the evaluation order for the operands): -1**2 results in -1.
The power operator has the same semantics as the built-in pow() function, when called with two arguments: it yields its left argument raised to the power of its right argument. Numeric arguments are first converted to a common type, and the result is of that type.
For int operands, the result has the same type as the operands unless the second argument is negative; in that case, all arguments are converted to float and a float result is delivered.
For example, 10**2 returns 100, but 10**-2 returns 0.01.
Raising 0.0 to a negative power results in a ZeroDivisionError. Raising a negative number to a fractional power results in a complex number. (In earlier versions it raised a ValueError.)
This operation can be customized using the special __pow__() and __rpow__() methods.

6.6. Unary arithmetic and bitwise operations¶
All unary arithmetic and bitwise operations have the same priority:

u_expr: power | "-" u_expr | "+" u_expr | "~" u_expr

The unary - (minus) operator yields the negation of its numeric argument; the operation can be overridden with the __neg__() special method.
The unary + (plus) operator yields its numeric argument unchanged; the operation can be overridden with the __pos__() special method.
The unary ~ (invert) operator yields the bitwise inversion of its integer argument. The bitwise inversion of x is defined as -(x+1). It only applies to integral numbers or to custom objects that override the __invert__() special method.
In all three cases, if the argument does not have the proper type, a TypeError exception is raised.

6.7. Binary arithmetic operations¶
The binary arithmetic operations have the conventional priority levels. Note that some of these operations also apply to certain non-numeric types. Apart from the power operator, there are only two levels, one for multiplicative operators and one for additive operators:

m_expr: u_expr | m_expr "*" u_expr | m_expr "@" m_expr | m_expr "//" u_expr | m_expr "/" u_expr | m_expr "%" u_expr
a_expr: m_expr | a_expr "+" m_expr | a_expr "-" m_expr

The * (multiplication) operator yields the product of its arguments. The arguments must either both be numbers, or one argument must be an integer and the other must be a sequence. In the former case, the numbers are converted to a common real type and then multiplied together.
In the latter case, sequence repetition is performed; a negative repetition factor yields an empty sequence.
This operation can be customized using the special __mul__() and __rmul__() methods.
Changed in version 3.14: If only one operand is a complex number, the other operand is converted to a floating-point number.
The @ (at) operator is intended to be used for matrix multiplication. No builtin Python types implement this operator.
This operation can be customized using the special __matmul__() and __rmatmul__() methods.
Added in version 3.5.
The / (division) and // (floor division) operators yield the quotient of their arguments. The numeric arguments are first converted to a common type. Division of integers yields a float, while floor division of integers results in an integer; the result is that of mathematical division with the 'floor' function applied to the result. Division by zero raises the ZeroDivisionError exception.
The division operation can be customized using the special __truediv__() and __rtruediv__() methods. The floor division operation can be customized using the special __floordiv__() and __rfloordiv__() methods.
The % (modulo) operator yields the remainder from the division of the first argument by the second. The numeric arguments are first converted to a common type. A zero right argument raises the ZeroDivisionError exception. The arguments may be floating-point numbers, e.g., 3.14%0.7 equals 0.34 (since 3.14 equals 4*0.7 + 0.34). The modulo operator always yields a result with the same sign as its second operand (or zero); the absolute value of the result is strictly smaller than the absolute value of the second operand [1].
The floor division and modulo operators are connected by the following identity: x == (x//y)*y + (x%y). Floor division and modulo are also connected with the built-in function divmod(): divmod(x, y) == (x//y, x%y)
[2].
In addition to performing the modulo operation on numbers, the % operator is also overloaded by string objects to perform old-style string formatting (also known as interpolation). The syntax for string formatting is described in the Python Library Reference, section printf-style String Formatting.
The modulo operation can be customized using the special __mod__() and __rmod__() methods.
The floor division operator, the modulo operator, and the divmod() function are not defined for complex numbers. Instead, convert to a floating-point number using the abs() function if appropriate.
The + (addition) operator yields the sum of its arguments. The arguments must either both be numbers or both be sequences of the same type. In the former case, the numbers are converted to a common real type and then added together. In the latter case, the sequences are concatenated.
This operation can be customized using the special __add__() and __radd__() methods.
Changed in version 3.14: If only one operand is a complex number, the other operand is converted to a floating-point number.
The - (subtraction) operator yields the difference of its arguments. The numeric arguments are first converted to a common real type.
This operation can be customized using the special __sub__() and __rsub__() methods.
Changed in version 3.14: If only one operand is a complex number, the other operand is converted to a floating-point number.

6.8. Shifting operations¶
The shifting operations have lower priority than the arithmetic operations:

shift_expr: a_expr | shift_expr ("<<" | ">>") a_expr

These operators accept integers as arguments.
They shift the first argument to the left or right by the number of bits given by the second argument.
The left shift operation can be customized using the special __lshift__() and __rlshift__() methods. The right shift operation can be customized using the special __rshift__() and __rrshift__() methods.
A right shift by n bits is defined as floor division by pow(2,n). A left shift by n bits is defined as multiplication with pow(2,n).

6.9. Binary bitwise operations¶
Each of the three bitwise operations has a different priority level:

and_expr: shift_expr | and_expr "&" shift_expr
xor_expr: and_expr | xor_expr "^" and_expr
or_expr: xor_expr | or_expr "|" xor_expr

The & operator yields the bitwise AND of its arguments, which must be integers, or one of them must be a custom object overriding the __and__() or __rand__() special methods.
The ^ operator yields the bitwise XOR (exclusive OR) of its arguments, which must be integers, or one of them must be a custom object overriding the __xor__() or __rxor__() special methods.
The | operator yields the bitwise (inclusive) OR of its arguments, which must be integers, or one of them must be a custom object overriding the __or__() or __ror__() special methods.

6.10. Comparisons¶
Unlike C, all comparison operations in Python have the same priority, which is lower than that of any arithmetic, shifting or bitwise operation. Also unlike C, expressions like a < b < c have the interpretation that is conventional in mathematics:

comparison: or_expr (comp_operator or_expr)*
comp_operator: "<" | ">" | "==" | ">=" | "<=" | "!=" | "is" ["not"] | ["not"] "in"

Comparisons yield boolean values: True or False. Custom rich comparison methods may return non-boolean values.
In this case Python will call bool() on such a value in boolean contexts.
Comparisons can be chained arbitrarily, e.g., x < y <= z is equivalent to x < y and y <= z, except that y is evaluated only once (but in both cases z is not evaluated at all when x < y is found to be false).
Formally, if a, b, c, …, y, z are expressions and op1, op2, …, opN are comparison operators, then a op1 b op2 c ... y opN z is equivalent to a op1 b and b op2 c and ... y opN z, except that each expression is evaluated at most once.
Note that a op1 b op2 c doesn't imply any kind of comparison between a and c, so that, e.g., x < y > z is perfectly legal (though perhaps not pretty).

6.10.1. Value comparisons¶
The operators <, >, ==, >=, <=, and != compare the values of two objects. The objects do not need to have the same type.
Chapter Objects, values and types states that objects have a value (in addition to type and identity). The value of an object is a rather abstract notion in Python: for example, there is no canonical access method for an object's value. Also, there is no requirement that the value of an object should be constructed in a particular way, e.g. comprised of all its data attributes. Comparison operators implement a particular notion of what the value of an object is. One can think of them as defining the value of an object indirectly, by means of their comparison implementation.
Because all types are (direct or indirect) subtypes of object, they inherit the default comparison behavior from object. Types can customize their comparison behavior by implementing rich comparison methods like __lt__(), described in Basic customization.
The default behavior for equality comparison (== and !=) is based on the identity of the objects. Hence, equality comparison of instances with the same identity results in equality, and equality comparison of instances with different identities results in inequality.
A motivation for this default behavior is the desire that all objects should be reflexive (i.e. x is y implies x == y).
A default order comparison (<, >, <=, and >=) is not provided; an attempt raises TypeError. A motivation for this default behavior is the lack of a similar invariant as for equality.
The behavior of the default equality comparison, that instances with different identities are always unequal, may be in contrast to what types will need that have a sensible definition of object value and value-based equality. Such types will need to customize their comparison behavior, and in fact, a number of built-in types have done that.
The following list describes the comparison behavior of the most important built-in types.
- Numbers of built-in numeric types (Numeric Types — int, float, complex) and of the standard library types fractions.Fraction and decimal.Decimal can be compared within and across their types, with the restriction that complex numbers do not support order comparison. Within the limits of the types involved, they compare mathematically (algorithmically) correct without loss of precision.
- The not-a-number values float('NaN') and decimal.Decimal('NaN') are special. Any ordered comparison of a number to a not-a-number value is false. A counter-intuitive implication is that not-a-number values are not equal to themselves. For example, if x = float('NaN'), 3 < x, x < 3 and x == x are all false, while x != x is true. This behavior is compliant with IEEE 754.
- None and NotImplemented are singletons. PEP 8 advises that comparisons for singletons should always be done with is or is not, never the equality operators.
- Binary sequences (instances of bytes or bytearray) can be compared within and across their types.
They compare lexicographically using the numeric values of their elements.
- Strings (instances of str) compare lexicographically using the numerical Unicode code points (the result of the built-in function ord()) of their characters. [3]
- Strings and binary sequences cannot be directly compared.
- Sequences (instances of tuple, list, or range) can be compared only within each of their types, with the restriction that ranges do not support order comparison. Equality comparison across these types results in inequality, and ordering comparison across these types raises TypeError.
Sequences compare lexicographically using comparison of corresponding elements. The built-in containers typically assume identical objects are equal to themselves. That lets them bypass equality tests for identical objects to improve performance and to maintain their internal invariants.
Lexicographical comparison between built-in collections works as follows:
- For two collections to compare equal, they must be of the same type, have the same length, and each pair of corresponding elements must compare equal (for example, [1,2] == (1,2) is false because the type is not the same).
- Collections that support order comparison are ordered the same as their first unequal elements (for example, [1,2,x] <= [1,2,y] has the same value as x <= y). If a corresponding element does not exist, the shorter collection is ordered first (for example, [1,2] < [1,2,3] is true).
- Mappings (instances of dict) compare equal if and only if they have equal (key, value) pairs. Equality comparison of the keys and values enforces reflexivity. Order comparisons (<, >, <=, and >=) raise TypeError.
- Sets (instances of set or frozenset) can be compared within and across their types. They define order comparison operators to mean subset and superset tests.
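The subset and superset meaning of the order operators can be sketched with two small sets:

```python
a = {1, 2}
b = {1, 2, 3}

print(a < b)    # True: a is a proper subset of b
print(b >= a)   # True: b is a superset of a
print(a <= a)   # True: every set is a subset of itself
print(a < a)    # False: a proper subset must be strictly smaller
```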
Those relations do not define total orderings (for example, the two sets {1,2} and {2,3} are not equal, nor subsets of one another, nor supersets of one another). Accordingly, sets are not appropriate arguments for functions which depend on total ordering (for example, min(), max(), and sorted() produce undefined results given a list of sets as inputs). Comparison of sets enforces reflexivity of its elements.
- Most other built-in types have no comparison methods implemented, so they inherit the default comparison behavior.
User-defined classes that customize their comparison behavior should follow some consistency rules, if possible:
- Equality comparison should be reflexive. In other words, identical objects should compare equal: x is y implies x == y.
- Comparison should be symmetric. In other words, the following expressions should have the same result: x == y and y == x; x != y and y != x; x < y and y > x; x <= y and y >= x.
- Comparison should be transitive. The following (non-exhaustive) examples illustrate that: x > y and y > z implies x > z; x < y and y <= z implies x < z.
- Inverse comparison should result in the boolean negation. In other words, the following expressions should have the same result: x == y and not x != y; x < y and not x >= y (for total ordering); x > y and not x <= y (for total ordering). The last two expressions apply to totally ordered collections (e.g. to sequences, but not to sets or mappings). See also the total_ordering() decorator.
- The hash() result should be consistent with equality. Objects that are equal should either have the same hash value, or be marked as unhashable.
Python does not enforce these consistency rules. In fact, the not-a-number values are an example for not following these rules.

6.10.2. Membership test operations¶
The operators in and not in test for membership. x in s evaluates to True if x is a member of s, and False otherwise. x not in s returns the negation of x in s.
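A few membership tests on built-in containers, as a quick sketch:

```python
s = [1, 2, 3]
print(2 in s)          # True
print(5 not in s)      # True

d = {'a': 1}
print('a' in d)        # True: for a dict, `in` tests keys, not values
print(1 in d)          # False

print('bc' in 'abcd')  # True: substring test for strings
print('' in 'abc')     # True: the empty string is a substring of any string
```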
All built-in sequence and set types support this, as do dictionaries, for which in tests whether the dictionary has a given key. For container types such as list, tuple, set, frozenset, dict, or collections.deque, the expression x in y is equivalent to any(x is e or x == e for e in y).
For the string and bytes types, x in y is True if and only if x is a substring of y. An equivalent test is y.find(x) != -1. Empty strings are always considered to be a substring of any other string, so "" in "abc" will return True.
For user-defined classes which define the __contains__() method, x in y returns True if y.__contains__(x) returns a true value, and False otherwise.
For user-defined classes which do not define __contains__() but do define __iter__(), x in y is True if some value z, for which the expression x is z or x == z is true, is produced while iterating over y. If an exception is raised during the iteration, it is as if in raised that exception.
Lastly, the old-style iteration protocol is tried: if a class defines __getitem__(), x in y is True if and only if there is a non-negative integer index i such that x is y[i] or x == y[i], and no lower integer index raises the IndexError exception. (If any other exception is raised, it is as if in raised that exception.)
The operator not in is defined to have the inverse truth value of in.

6.10.3. Identity comparisons¶
The operators is and is not test for an object's identity: x is y is true if and only if x and y are the same object. An object's identity is determined using the id() function. x is not y yields the inverse truth value. [4]

6.11.
Boolean operations¶

or_test: and_test | or_test "or" and_test
and_test: not_test | and_test "and" not_test
not_test: comparison | "not" not_test

In the context of Boolean operations, and also when expressions are used by control flow statements, the following values are interpreted as false: False, None, numeric zero of all types, and empty strings and containers (including strings, tuples, lists, dictionaries, sets and frozensets). All other values are interpreted as true. User-defined objects can customize their truth value by providing a __bool__() method.
The operator not yields True if its argument is false, False otherwise.
The expression x and y first evaluates x; if x is false, its value is returned; otherwise, y is evaluated and the resulting value is returned.
The expression x or y first evaluates x; if x is true, its value is returned; otherwise, y is evaluated and the resulting value is returned.
Note that neither and nor or restrict the value and type they return to False and True, but rather return the last evaluated argument. This is sometimes useful, e.g., if s is a string that should be replaced by a default value if it is empty, the expression s or 'foo' yields the desired value. Because not has to create a new value, it returns a boolean value regardless of the type of its argument (for example, not 'foo' produces False rather than '').

6.12.
Assignment expressions¶

assignment_expression: [identifier ":="] expression

An assignment expression (sometimes also called a "named expression" or "walrus") assigns an expression to an identifier, while also returning the value of the expression.
One common use case is when handling matched regular expressions:

if matching := pattern.search(data):
    do_something(matching)

Or, when processing a file stream in chunks:

while chunk := file.read(9000):
    process(chunk)

Assignment expressions must be surrounded by parentheses when used as expression statements and when used as sub-expressions in slicing, conditional, lambda, keyword-argument, and comprehension-if expressions and in assert, with, and assignment statements. In all other places where they can be used, parentheses are not required, including in if and while statements.
Added in version 3.8: See PEP 572 for more details about assignment expressions.

6.13. Conditional expressions¶

conditional_expression: or_test ["if" or_test "else" expression]
expression: conditional_expression | lambda_expr

A conditional expression (sometimes called a "ternary operator") is an alternative to the if-else statement. As it is an expression, it returns a value and can appear as a sub-expression.
The expression x if C else y first evaluates the condition, C rather than x. If C is true, x is evaluated and its value is returned; otherwise, y is evaluated and its value is returned.
See PEP 308 for more details about conditional expressions.

6.14. Lambdas¶

lambda_expr: "lambda" [parameter_list] ":" expression

Lambda expressions (sometimes called lambda forms) are used to create anonymous functions. The expression lambda parameters: expression yields a function object. The unnamed object behaves like a function object defined with:

def <lambda>(parameters):
    return expression

See section Function definitions for the syntax of parameter lists.
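The equivalence between a lambda expression and a def statement can be sketched as follows (the names are illustrative):

```python
# A lambda expression yields an anonymous function object...
add = lambda a, b: a + b

# ...that behaves like a function defined with def:
def add_def(a, b):
    return a + b

print(add(2, 3))                    # 5
print(add(2, 3) == add_def(2, 3))   # True
print(add.__name__)                 # <lambda>
```

The only observable difference here is the function's __name__: the lambda stays anonymous.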
Note that functions created with lambda expressions cannot contain statements or annotations.\n6.15. Expression lists\u00b6\nstarred_expression       ::= \"*\" or_expr | expression\nflexible_expression      ::= assignment_expression | starred_expression\nflexible_expression_list ::= flexible_expression (\",\" flexible_expression)* [\",\"]\nstarred_expression_list  ::= starred_expression (\",\" starred_expression)* [\",\"]\nexpression_list          ::= expression (\",\" expression)* [\",\"]\nyield_list               ::= expression_list | starred_expression \",\" [starred_expression_list]\nExcept when part of a list or set display, an expression list containing at least one comma yields a tuple. The length of the tuple is the number of expressions in the list. The expressions are evaluated from left to right.\nAn asterisk * denotes iterable unpacking. Its operand must be an iterable. The iterable is expanded into a sequence of items, which are included in the new tuple, list, or set, at the site of the unpacking.\nAdded in version 3.5: Iterable unpacking in expression lists, originally proposed by PEP 448.\nAdded in version 3.11: Any item in an expression list may be starred. See PEP 646.\nA trailing comma is required only to create a one-item tuple, such as 1,; it is optional in all other cases. A single expression without a trailing comma doesn\u2019t create a tuple, but rather yields the value of that expression. (To create an empty tuple, use an empty pair of parentheses: ().)\n6.16. Evaluation order\u00b6\nPython evaluates expressions from left to right. Notice that while evaluating an assignment, the right-hand side is evaluated before the left-hand side.\nIn the following lines, expressions will be evaluated in the arithmetic order of their suffixes:\nexpr1, expr2, expr3, expr4\n(expr1, expr2, expr3, expr4)\n{expr1: expr2, expr3: expr4}\nexpr1 + expr2 * (expr3 - expr4)\nexpr1(expr2, expr3, *expr4, **expr5)\nexpr3, expr4 = expr1, expr2
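The tuple-building and unpacking rules above can be sketched as follows (values are illustrative):

```python
# The comma builds the tuple; parentheses only group.
t = 1, 2, 3
print(t)       # (1, 2, 3)
print((1,))    # (1,)  one-item tuple: trailing comma required
print(())      # ()    empty tuple: empty parentheses

# Iterable unpacking expands an iterable at the site of use:
rest = [4, 5]
print((0, *t, *rest))  # (0, 1, 2, 3, 4, 5)
```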
6.17. Operator precedence\u00b6\nThe following table summarizes the operator precedence in Python, from highest precedence (most binding) to lowest precedence (least binding). Operators in the same box have the same precedence. Unless the syntax is explicitly given, operators are binary. Operators in the same box group left to right (except for exponentiation and conditional expressions, which group from right to left).\nNote that comparisons, membership tests, and identity tests all have the same precedence and have a left-to-right chaining feature as described in the Comparisons section.\n| Operator | Description |\n|---|---|\n| (expressions...), [expressions...], {key: value...}, {expressions...} | Binding or parenthesized expression, list display, dictionary display, set display |\n| x[index], x[index:index], x(arguments...), x.attribute | Subscription (including slicing), call, attribute reference |\n| await x | Await expression |\n| ** | Exponentiation [5] |\n| +x, -x, ~x | Positive, negative, bitwise NOT |\n| *, @, /, //, % | Multiplication, matrix multiplication, division, floor division, remainder [6] |\n| +, - | Addition and subtraction |\n| <<, >> | Shifts |\n| & | Bitwise AND |\n| ^ | Bitwise XOR |\n| \\| | Bitwise OR |\n| in, not in, is, is not, <, <=, >, >=, !=, == | Comparisons, including membership tests and identity tests |\n| not x | Boolean NOT |\n| and | Boolean AND |\n| or | Boolean OR |\n| if - else | Conditional expression |\n| lambda | Lambda expression |\n| := | Assignment expression |\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 16880}
{"url": "https://docs.python.org/3/", "title": "Python 3.14.3 documentation", "content": "Python 3.14.3 documentation\nWelcome! This is the official documentation for Python 3.14.3.\nDocumentation sections:\n|\nWhat's new in Python 3.14?\nOr all \"What's new\" documents since Python 2.0\nTutorial\nStart here: a tour of Python's syntax and features\nLibrary reference\nStandard library and builtins\nLanguage reference\nSyntax and language elements\nPython setup and usage\nHow to install, configure, and use Python\nPython HOWTOs\nIn-depth topic manuals\n|\nInstalling Python modules\nThird-party modules and PyPI.org\nDistributing Python modules\nPublishing modules for use by other people\nExtending and embedding\nFor C/C++ programmers\nPython's C API\nC API reference\nFAQs\nFrequently asked questions (with answers!)\nDeprecations\nDeprecated functionality\n|\nIndices, glossary, and search:\n|\nGlobal module index\nAll modules and libraries\nGeneral index\nAll functions, classes, and terms\nGlossary\nTerms explained\n|\nSearch page\nSearch this documentation\nComplete table of contents\nLists all sections and subsections\n|\nProject information:", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 255}
{"url": "https://docs.python.org/3/c-api/extension-modules.html", "title": "Defining extension modules", "content": "Defining extension modules\u00b6\nA C extension for CPython is a shared library (for example, a .so\nfile\non Linux, .pyd\nDLL on Windows), which is loadable into the Python process\n(for example, it is compiled with compatible compiler settings), and which\nexports an initialization function.\nTo be importable by default (that is, by\nimportlib.machinery.ExtensionFileLoader\n),\nthe shared library must be available on sys.path\n,\nand must be named after the module name plus an extension listed in\nimportlib.machinery.EXTENSION_SUFFIXES\n.\nNote\nBuilding, packaging and distributing extension modules is best done with third-party tools, and is out of scope of this document. One suitable tool is Setuptools, whose documentation can be found at https://setuptools.pypa.io/en/latest/setuptools.html.\nNormally, the initialization function returns a module definition initialized\nusing PyModuleDef_Init()\n.\nThis allows splitting the creation process into several phases:\nBefore any substantial code is executed, Python can determine which capabilities the module supports, and it can adjust the environment or refuse loading an incompatible extension.\nBy default, Python itself creates the module object \u2013 that is, it does the equivalent of\nobject.__new__()\nfor classes. It also sets initial attributes like__package__\nand__loader__\n.Afterwards, the module object is initialized using extension-specific code \u2013 the equivalent of\n__init__()\non classes.\nThis is called multi-phase initialization to distinguish it from the legacy (but still supported) single-phase initialization scheme, where the initialization function returns a fully constructed module. 
See the single-phase-initialization section below for details.\nChanged in version 3.5: Added support for multi-phase initialization (PEP 489).\nMultiple module instances\u00b6\nBy default, extension modules are not singletons. For example, if the sys.modules entry is removed and the module is re-imported, a new module object is created, and typically populated with fresh method and type objects. The old module is subject to normal garbage collection. This mirrors the behavior of pure-Python modules.\nAdditional module instances may be created in sub-interpreters or after Python runtime reinitialization (Py_Finalize() and Py_Initialize()). In these cases, sharing Python objects between module instances would likely cause crashes or undefined behavior.\nTo avoid such issues, each instance of an extension module should be isolated: changes to one instance should not implicitly affect the others, and all state owned by the module, including references to Python objects, should be specific to a particular module instance. See Isolating Extension Modules for more details and a practical guide.\nA simpler way to avoid these issues is raising an error on repeated initialization.\nAll modules are expected to support sub-interpreters, or otherwise explicitly signal a lack of support. This is usually achieved by isolation or blocking repeated initialization, as above. A module may also be limited to the main interpreter using the Py_mod_multiple_interpreters slot.\nInitialization function\u00b6\nThe initialization function defined by an extension module has the following signature:\nPyObject *PyInit_modulename(void)\nIts name should be PyInit_<name>, with <name> replaced by the name of the module. For modules with ASCII-only names, the function must be named exactly PyInit_<name>. When using Multi-phase initialization, non-ASCII module names are allowed.
In this case, the initialization function name is PyInitU_<name>, with <name> encoded using Python\u2019s punycode encoding with hyphens replaced by underscores. In Python:\ndef initfunc_name(name):\n    try:\n        suffix = b'_' + name.encode('ascii')\n    except UnicodeEncodeError:\n        suffix = b'U_' + name.encode('punycode').replace(b'-', b'_')\n    return b'PyInit' + suffix\nIt is recommended to define the initialization function using a helper macro:\nPyMODINIT_FUNC\u00b6\nDeclare an extension module initialization function. This macro:\n- specifies the PyObject* return type,\n- adds any special linkage declarations required by the platform, and\n- for C++, declares the function as extern \"C\".\nFor example, a module called spam would be defined like this:\nstatic struct PyModuleDef spam_module = {\n    .m_base = PyModuleDef_HEAD_INIT,\n    .m_name = \"spam\",\n    ...\n};\n\nPyMODINIT_FUNC\nPyInit_spam(void)\n{\n    return PyModuleDef_Init(&spam_module);\n}\nIt is possible to export multiple modules from a single shared library by defining multiple initialization functions. However, importing them requires using symbolic links or a custom importer, because by default only the function corresponding to the filename is found. See the Multiple modules in one library section in PEP 489 for details.\nThe initialization function is typically the only non-static item defined in the module\u2019s C source.\nMulti-phase initialization\u00b6\nNormally, the initialization function (PyInit_modulename) returns a PyModuleDef instance with non-NULL m_slots. Before it is returned, the PyModuleDef instance must be initialized using the following function:\nPyObject *PyModuleDef_Init(PyModuleDef *def)\u00b6\nPart of the Stable ABI since version 3.5.\nEnsure a module definition is a properly initialized Python object that correctly reports its type and a reference count. Return def cast to PyObject*, or NULL if an error occurred.\nCalling this function is required for Multi-phase initialization.
It should not be used in other contexts.\nNote that Python assumes that PyModuleDef structures are statically allocated. This function may return either a new reference or a borrowed one; this reference must not be released.\nAdded in version 3.5.\nLegacy single-phase initialization\u00b6\nAttention\nSingle-phase initialization is a legacy mechanism to initialize extension modules, with known drawbacks and design flaws. Extension module authors are encouraged to use multi-phase initialization instead.\nIn single-phase initialization, the initialization function (PyInit_modulename) should create, populate and return a module object. This is typically done using PyModule_Create() and functions like PyModule_AddObjectRef().\nSingle-phase initialization differs from the default in the following ways:\nSingle-phase modules are, or rather contain, \u201csingletons\u201d. When the module is first initialized, Python saves the contents of the module\u2019s __dict__ (that is, typically, the module\u2019s functions and types).\nFor subsequent imports, Python does not call the initialization function again. Instead, it creates a new module object with a new __dict__, and copies the saved contents to it. For example, given a single-phase module _testsinglephase [1] that defines a function sum and an exception class error:\n>>> import sys\n>>> import _testsinglephase as one\n>>> del sys.modules['_testsinglephase']\n>>> import _testsinglephase as two\n>>> one is two\nFalse\n>>> one.__dict__ is two.__dict__\nFalse\n>>> one.sum is two.sum\nTrue\n>>> one.error is two.error\nTrue\nThe exact behavior should be considered a CPython implementation detail.\nTo work around the fact that PyInit_modulename does not take a spec argument, some state of the import machinery is saved and applied to the first suitable module created during the PyInit_modulename call.
Specifically, when a sub-module is imported, this mechanism prepends the parent package name to the name of the module.A single-phase\nPyInit_modulename\nfunction should create \u201cits\u201d module object as soon as possible, before any other module objects can be created.Non-ASCII module names (\nPyInitU_modulename\n) are not supported.Single-phase modules support module lookup functions like\nPyState_FindModule()\n.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1915}
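The PyInit_/PyInitU_ naming rule described in this section can be exercised with the pure-Python helper quoted above; the module names here are illustrative:

```python
def initfunc_name(name):
    # Derive the exported init-symbol name for a module, per the rule above:
    # ASCII names get PyInit_<name>; others get a punycode-based PyInitU_ name
    # with hyphens replaced by underscores.
    try:
        suffix = b'_' + name.encode('ascii')
    except UnicodeEncodeError:
        suffix = b'U_' + name.encode('punycode').replace(b'-', b'_')
    return b'PyInit' + suffix

print(initfunc_name('spam'))   # b'PyInit_spam'
print(initfunc_name('héllo'))  # a punycode-based b'PyInitU_...' name
```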
{"url": "https://docs.python.org/3/about.html", "title": "About this documentation", "content": "About this documentation\u00b6\nPython\u2019s documentation is generated from reStructuredText sources using Sphinx, a documentation generator originally created for Python and now maintained as an independent project.\nDevelopment of the documentation and its toolchain is an entirely volunteer effort, just like Python itself. If you want to contribute, please take a look at the Dealing with Bugs page for information on how to do so. New volunteers are always welcome!\nMany thanks go to:\nFred L. Drake, Jr., the creator of the original Python documentation toolset and author of much of the content;\nthe Docutils project for creating reStructuredText and the Docutils suite;\nFredrik Lundh for his Alternative Python Reference project from which Sphinx got many good ideas.\nContributors to the Python documentation\u00b6\nMany people have contributed to the Python language, the Python standard library, and the Python documentation. See Misc/ACKS in the Python source distribution for a partial list of contributors.\nIt is only with the input and contributions of the Python community that Python has such wonderful documentation \u2013 Thank You!", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 282}
{"url": "https://docs.python.org/3/extending/newtypes.html", "title": "Defining Extension Types: Assorted Topics", "content": "3. Defining Extension Types: Assorted Topics\u00b6\nThis section aims to give a quick fly-by on the various type methods you can implement and what they do.\nHere is the definition of PyTypeObject\n, with some fields only used in\ndebug builds omitted:\ntypedef struct _typeobject {\nPyObject_VAR_HEAD\nconst char *tp_name; /* For printing, in format \".\" */\nPy_ssize_t tp_basicsize, tp_itemsize; /* For allocation */\n/* Methods to implement standard operations */\ndestructor tp_dealloc;\nPy_ssize_t tp_vectorcall_offset;\ngetattrfunc tp_getattr;\nsetattrfunc tp_setattr;\nPyAsyncMethods *tp_as_async; /* formerly known as tp_compare (Python 2)\nor tp_reserved (Python 3) */\nreprfunc tp_repr;\n/* Method suites for standard classes */\nPyNumberMethods *tp_as_number;\nPySequenceMethods *tp_as_sequence;\nPyMappingMethods *tp_as_mapping;\n/* More standard operations (here for binary compatibility) */\nhashfunc tp_hash;\nternaryfunc tp_call;\nreprfunc tp_str;\ngetattrofunc tp_getattro;\nsetattrofunc tp_setattro;\n/* Functions to access object as input/output buffer */\nPyBufferProcs *tp_as_buffer;\n/* Flags to define presence of optional/expanded features */\nunsigned long tp_flags;\nconst char *tp_doc; /* Documentation string */\n/* Assigned meaning in release 2.0 */\n/* call function for all accessible objects */\ntraverseproc tp_traverse;\n/* delete references to contained objects */\ninquiry tp_clear;\n/* Assigned meaning in release 2.1 */\n/* rich comparisons */\nrichcmpfunc tp_richcompare;\n/* weak reference enabler */\nPy_ssize_t tp_weaklistoffset;\n/* Iterators */\ngetiterfunc tp_iter;\niternextfunc tp_iternext;\n/* Attribute descriptor and subclassing stuff */\nPyMethodDef *tp_methods;\nPyMemberDef *tp_members;\nPyGetSetDef *tp_getset;\n// Strong reference on a heap type, borrowed reference on a static type\nPyTypeObject *tp_base;\nPyObject 
*tp_dict;\ndescrgetfunc tp_descr_get;\ndescrsetfunc tp_descr_set;\nPy_ssize_t tp_dictoffset;\ninitproc tp_init;\nallocfunc tp_alloc;\nnewfunc tp_new;\nfreefunc tp_free; /* Low-level free-memory routine */\ninquiry tp_is_gc; /* For PyObject_IS_GC */\nPyObject *tp_bases;\nPyObject *tp_mro; /* method resolution order */\nPyObject *tp_cache; /* no longer used */\nvoid *tp_subclasses; /* for static builtin types this is an index */\nPyObject *tp_weaklist; /* not used for static builtin types */\ndestructor tp_del;\n/* Type attribute cache version tag. Added in version 2.6.\n* If zero, the cache is invalid and must be initialized.\n*/\nunsigned int tp_version_tag;\ndestructor tp_finalize;\nvectorcallfunc tp_vectorcall;\n/* bitset of which type-watchers care about this type */\nunsigned char tp_watched;\n/* Number of tp_version_tag values used.\n* Set to _Py_ATTR_CACHE_UNUSED if the attribute cache is\n* disabled for this type (e.g. due to custom MRO entries).\n* Otherwise, limited to MAX_VERSIONS_PER_CLASS (defined elsewhere).\n*/\nuint16_t tp_versions_used;\n} PyTypeObject;\nNow that\u2019s a lot of methods. Don\u2019t worry too much though \u2013 if you have a type you want to define, the chances are very good that you will only implement a handful of these.\nAs you probably expect by now, we\u2019re going to go over this and give more information about the various handlers. We won\u2019t go in the order they are defined in the structure, because there is a lot of historical baggage that impacts the ordering of the fields. It\u2019s often easiest to find an example that includes the fields you need and then change the values to suit your new type.\nconst char *tp_name; /* For printing */\nThe name of the type \u2013 as mentioned in the previous chapter, this will appear in various places, almost entirely for diagnostic purposes. 
Try to choose something that will be helpful in such a situation!\nPy_ssize_t tp_basicsize, tp_itemsize; /* For allocation */\nThese fields tell the runtime how much memory to allocate when new objects of\nthis type are created. Python has some built-in support for variable length\nstructures (think: strings, tuples) which is where the tp_itemsize\nfield\ncomes in. This will be dealt with later.\nconst char *tp_doc;\nHere you can put a string (or its address) that you want returned when the\nPython script references obj.__doc__\nto retrieve the doc string.\nNow we come to the basic type methods \u2013 the ones most extension types will implement.\n3.1. Finalization and De-allocation\u00b6\ndestructor tp_dealloc;\nThis function is called when the reference count of the instance of your type is reduced to zero and the Python interpreter wants to reclaim it. If your type has memory to free or other clean-up to perform, you can put it here. The object itself needs to be freed here as well. Here is an example of this function:\nstatic void\nnewdatatype_dealloc(PyObject *op)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nfree(self->obj_UnderlyingDatatypePtr);\nPy_TYPE(self)->tp_free(self);\n}\nIf your type supports garbage collection, the destructor should call\nPyObject_GC_UnTrack()\nbefore clearing any member fields:\nstatic void\nnewdatatype_dealloc(PyObject *op)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nPyObject_GC_UnTrack(op);\nPy_CLEAR(self->other_obj);\n...\nPy_TYPE(self)->tp_free(self);\n}\nOne important requirement of the deallocator function is that it leaves any\npending exceptions alone. This is important since deallocators are frequently\ncalled as the interpreter unwinds the Python stack; when the stack is unwound\ndue to an exception (rather than normal returns), nothing is done to protect the\ndeallocators from seeing that an exception has already been set. 
Any actions\nwhich a deallocator performs which may cause additional Python code to be\nexecuted may detect that an exception has been set. This can lead to misleading\nerrors from the interpreter. The proper way to protect against this is to save\na pending exception before performing the unsafe action, and restoring it when\ndone. This can be done using the PyErr_Fetch()\nand\nPyErr_Restore()\nfunctions:\nstatic void\nmy_dealloc(PyObject *obj)\n{\nMyObject *self = (MyObject *) obj;\nPyObject *cbresult;\nif (self->my_callback != NULL) {\nPyObject *err_type, *err_value, *err_traceback;\n/* This saves the current exception state */\nPyErr_Fetch(&err_type, &err_value, &err_traceback);\ncbresult = PyObject_CallNoArgs(self->my_callback);\nif (cbresult == NULL) {\nPyErr_WriteUnraisable(self->my_callback);\n}\nelse {\nPy_DECREF(cbresult);\n}\n/* This restores the saved exception state */\nPyErr_Restore(err_type, err_value, err_traceback);\nPy_DECREF(self->my_callback);\n}\nPy_TYPE(self)->tp_free(self);\n}\nNote\nThere are limitations to what you can safely do in a deallocator function.\nFirst, if your type supports garbage collection (using tp_traverse\nand/or tp_clear\n), some of the object\u2019s members can have been\ncleared or finalized by the time tp_dealloc\nis called. Second, in\ntp_dealloc\n, your object is in an unstable state: its reference\ncount is equal to zero. Any call to a non-trivial object or API (as in the\nexample above) might end up calling tp_dealloc\nagain, causing a\ndouble free and a crash.\nStarting with Python 3.4, it is recommended not to put any complex\nfinalization code in tp_dealloc\n, and instead use the new\ntp_finalize\ntype method.\nSee also\nPEP 442 explains the new finalization scheme.\n3.2. Object Presentation\u00b6\nIn Python, there are two ways to generate a textual representation of an object:\nthe repr()\nfunction, and the str()\nfunction. (The print()\nfunction just calls str()\n.) 
These handlers are both optional.\nreprfunc tp_repr;\nreprfunc tp_str;\nThe tp_repr\nhandler should return a string object containing a\nrepresentation of the instance for which it is called. Here is a simple\nexample:\nstatic PyObject *\nnewdatatype_repr(PyObject *op)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nreturn PyUnicode_FromFormat(\"Repr-ified_newdatatype{{size:%d}}\",\nself->obj_UnderlyingDatatypePtr->size);\n}\nIf no tp_repr\nhandler is specified, the interpreter will supply a\nrepresentation that uses the type\u2019s tp_name\nand a uniquely identifying\nvalue for the object.\nThe tp_str\nhandler is to str()\nwhat the tp_repr\nhandler\ndescribed above is to repr()\n; that is, it is called when Python code calls\nstr()\non an instance of your object. Its implementation is very similar\nto the tp_repr\nfunction, but the resulting string is intended for human\nconsumption. If tp_str\nis not specified, the tp_repr\nhandler is\nused instead.\nHere is a simple example:\nstatic PyObject *\nnewdatatype_str(PyObject *op)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nreturn PyUnicode_FromFormat(\"Stringified_newdatatype{{size:%d}}\",\nself->obj_UnderlyingDatatypePtr->size);\n}\n3.3. Attribute Management\u00b6\nFor every object which can support attributes, the corresponding type must\nprovide the functions that control how the attributes are resolved. There needs\nto be a function which can retrieve attributes (if any are defined), and another\nto set attributes (if setting attributes is allowed). Removing an attribute is\na special case, for which the new value passed to the handler is NULL\n.\nPython supports two pairs of attribute handlers; a type that supports attributes only needs to implement the functions for one pair. The difference is that one pair takes the name of the attribute as a char*, while the other accepts a PyObject*. 
Each type can use whichever pair makes more sense for the implementation\u2019s convenience.\ngetattrfunc tp_getattr; /* char * version */\nsetattrfunc tp_setattr;\n/* ... */\ngetattrofunc tp_getattro; /* PyObject * version */\nsetattrofunc tp_setattro;\nIf accessing attributes of an object is always a simple operation (this will be explained shortly), there are generic implementations which can be used to provide the PyObject* version of the attribute management functions. The actual need for type-specific attribute handlers almost completely disappeared starting with Python 2.2, though there are many examples which have not been updated to use some of the new generic mechanism that is available.\n3.3.1. Generic Attribute Management\u00b6\nMost extension types only use simple attributes. So, what makes the attributes simple? There are only a couple of conditions that must be met:\nThe name of the attributes must be known when\nPyType_Ready()\nis called.No special processing is needed to record that an attribute was looked up or set, nor do actions need to be taken based on the value.\nNote that this list does not place any restrictions on the values of the attributes, when the values are computed, or how relevant data is stored.\nWhen PyType_Ready()\nis called, it uses three tables referenced by the\ntype object to create descriptors which are placed in the dictionary of the\ntype object. Each descriptor controls access to one attribute of the instance\nobject. Each of the tables is optional; if all three are NULL\n, instances of\nthe type will only have attributes that are inherited from their base type, and\nshould leave the tp_getattro\nand tp_setattro\nfields NULL\nas\nwell, allowing the base type to handle attributes.\nThe tables are declared as three fields of the type object:\nstruct PyMethodDef *tp_methods;\nstruct PyMemberDef *tp_members;\nstruct PyGetSetDef *tp_getset;\nIf tp_methods\nis not NULL\n, it must refer to an array of\nPyMethodDef\nstructures. 
Each entry in the table is an instance of this structure:\ntypedef struct PyMethodDef {\n    const char *ml_name;   /* method name */\n    PyCFunction ml_meth;   /* implementation function */\n    int ml_flags;          /* flags */\n    const char *ml_doc;    /* docstring */\n} PyMethodDef;\nOne entry should be defined for each method provided by the type; no entries are needed for methods inherited from a base type. One additional entry is needed at the end; it is a sentinel that marks the end of the array. The ml_name field of the sentinel must be NULL.\nThe second table is used to define attributes which map directly to data stored in the instance. A variety of primitive C types are supported, and access may be read-only or read-write. The structures in the table are defined as:\ntypedef struct PyMemberDef {\n    const char *name;\n    int type;\n    int offset;\n    int flags;\n    const char *doc;\n} PyMemberDef;\nFor each entry in the table, a descriptor will be constructed and added to the type which will be able to extract a value from the instance structure. The type field should contain a type code like Py_T_INT or Py_T_DOUBLE; the value will be used to determine how to convert Python values to and from C values. The flags field is used to store flags which control how the attribute can be accessed: you can set it to Py_READONLY to prevent Python code from setting it.\nAn interesting advantage of using the tp_members table to build descriptors that are used at runtime is that any attribute defined this way can have an associated doc string simply by providing the text in the table. An application can use the introspection API to retrieve the descriptor from the class object, and get the doc string using its __doc__ attribute.\nAs with the tp_methods table, a sentinel entry is required; here it is the name field that must be NULL.\n3.3.2.
Type-specific Attribute Management\u00b6\nFor simplicity, only the char* version will be demonstrated here; the type of the name parameter is the only difference between the char* and PyObject* flavors of the interface. This example effectively does the same thing as the generic example above, but does not use the generic support added in Python 2.2. It explains how the handler functions are called, so that if you do need to extend their functionality, you\u2019ll understand what needs to be done.\nThe tp_getattr\nhandler is called when the object requires an attribute\nlook-up. It is called in the same situations where the __getattr__()\nmethod of a class would be called.\nHere is an example:\nstatic PyObject *\nnewdatatype_getattr(PyObject *op, char *name)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nif (strcmp(name, \"data\") == 0) {\nreturn PyLong_FromLong(self->data);\n}\nPyErr_Format(PyExc_AttributeError,\n\"'%.100s' object has no attribute '%.400s'\",\nPy_TYPE(self)->tp_name, name);\nreturn NULL;\n}\nThe tp_setattr\nhandler is called when the __setattr__()\nor\n__delattr__()\nmethod of a class instance would be called. When an\nattribute should be deleted, the third parameter will be NULL\n. Here is an\nexample that simply raises an exception; if this were really all you wanted, the\ntp_setattr\nhandler should be set to NULL\n.\nstatic int\nnewdatatype_setattr(PyObject *op, char *name, PyObject *v)\n{\nPyErr_Format(PyExc_RuntimeError, \"Read-only attribute: %s\", name);\nreturn -1;\n}\n3.4. Object Comparison\u00b6\nrichcmpfunc tp_richcompare;\nThe tp_richcompare\nhandler is called when comparisons are needed. It is\nanalogous to the rich comparison methods, like\n__lt__()\n, and also called by PyObject_RichCompare()\nand\nPyObject_RichCompareBool()\n.\nThis function is called with two Python objects and the operator as arguments,\nwhere the operator is one of Py_EQ\n, Py_NE\n, Py_LE\n, Py_GE\n,\nPy_LT\nor Py_GT\n. 
It should compare the two objects with respect to the\nspecified operator and return Py_True\nor Py_False\nif the comparison is\nsuccessful, Py_NotImplemented\nto indicate that comparison is not\nimplemented and the other object\u2019s comparison method should be tried, or NULL\nif an exception was set.\nHere is a sample implementation, for a datatype that is considered equal if the size of an internal pointer is equal:\nstatic PyObject *\nnewdatatype_richcmp(PyObject *lhs, PyObject *rhs, int op)\n{\nnewdatatypeobject *obj1 = (newdatatypeobject *) lhs;\nnewdatatypeobject *obj2 = (newdatatypeobject *) rhs;\nPyObject *result;\nint c, size1, size2;\n/* code to make sure that both arguments are of type\nnewdatatype omitted */\nsize1 = obj1->obj_UnderlyingDatatypePtr->size;\nsize2 = obj2->obj_UnderlyingDatatypePtr->size;\nswitch (op) {\ncase Py_LT: c = size1 < size2; break;\ncase Py_LE: c = size1 <= size2; break;\ncase Py_EQ: c = size1 == size2; break;\ncase Py_NE: c = size1 != size2; break;\ncase Py_GT: c = size1 > size2; break;\ncase Py_GE: c = size1 >= size2; break;\n}\nresult = c ? Py_True : Py_False;\nreturn Py_NewRef(result);\n}\n3.5. Abstract Protocol Support\u00b6\nPython supports a variety of abstract \u2018protocols;\u2019 the specific interfaces provided to use these interfaces are documented in Abstract Objects Layer.\nA number of these abstract interfaces were defined early in the development of\nthe Python implementation. In particular, the number, mapping, and sequence\nprotocols have been part of Python since the beginning. Other protocols have\nbeen added over time. For protocols which depend on several handler routines\nfrom the type implementation, the older protocols have been defined as optional\nblocks of handlers referenced by the type object. For newer protocols there are\nadditional slots in the main type object, with a flag bit being set to indicate\nthat the slots are present and should be checked by the interpreter. 
(The flag\nbit does not indicate that the slot values are non-NULL\n. The flag may be set\nto indicate the presence of a slot, but a slot may still be unfilled.)\nPyNumberMethods *tp_as_number;\nPySequenceMethods *tp_as_sequence;\nPyMappingMethods *tp_as_mapping;\nIf you wish your object to be able to act like a number, a sequence, or a\nmapping object, then you place the address of a structure that implements the C\ntype PyNumberMethods\n, PySequenceMethods\n, or\nPyMappingMethods\n, respectively. It is up to you to fill in this\nstructure with appropriate values. You can find examples of the use of each of\nthese in the Objects\ndirectory of the Python source distribution.\nhashfunc tp_hash;\nThis function, if you choose to provide it, should return a hash number for an instance of your data type. Here is a simple example:\nstatic Py_hash_t\nnewdatatype_hash(PyObject *op)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nPy_hash_t result;\nresult = self->some_size + 32767 * self->some_number;\nif (result == -1) {\nresult = -2;\n}\nreturn result;\n}\nPy_hash_t\nis a signed integer type with a platform-varying width.\nReturning -1\nfrom tp_hash\nindicates an error,\nwhich is why you should be careful to avoid returning it when hash computation\nis successful, as seen above.\nternaryfunc tp_call;\nThis function is called when an instance of your data type is \u201ccalled\u201d, for\nexample, if obj1\nis an instance of your data type and the Python script\ncontains obj1('hello')\n, the tp_call\nhandler is invoked.\nThis function takes three arguments:\nself is the instance of the data type which is the subject of the call. If the call is\nobj1('hello')\n, then self is obj1\n.\nargs is a tuple containing the arguments to the call. You can use\nPyArg_ParseTuple()\nto extract the arguments.\nkwds is a dictionary of keyword arguments that were passed. If this is non-\nNULL\nand you support keyword arguments, use PyArg_ParseTupleAndKeywords()\nto extract the arguments. 
If you do not want to support keyword arguments and this is non-NULL\n, raise aTypeError\nwith a message saying that keyword arguments are not supported.\nHere is a toy tp_call\nimplementation:\nstatic PyObject *\nnewdatatype_call(PyObject *op, PyObject *args, PyObject *kwds)\n{\nnewdatatypeobject *self = (newdatatypeobject *) op;\nPyObject *result;\nconst char *arg1;\nconst char *arg2;\nconst char *arg3;\nif (!PyArg_ParseTuple(args, \"sss:call\", &arg1, &arg2, &arg3)) {\nreturn NULL;\n}\nresult = PyUnicode_FromFormat(\n\"Returning -- value: [%d] arg1: [%s] arg2: [%s] arg3: [%s]\\n\",\nself->obj_UnderlyingDatatypePtr->size,\narg1, arg2, arg3);\nreturn result;\n}\n/* Iterators */\ngetiterfunc tp_iter;\niternextfunc tp_iternext;\nThese functions provide support for the iterator protocol. Both handlers\ntake exactly one parameter, the instance for which they are being called,\nand return a new reference. In the case of an error, they should set an\nexception and return NULL\n. tp_iter\ncorresponds\nto the Python __iter__()\nmethod, while tp_iternext\ncorresponds to the Python __next__()\nmethod.\nAny iterable object must implement the tp_iter\nhandler, which must return an iterator object. Here the same guidelines\napply as for Python classes:\nFor collections (such as lists and tuples) which can support multiple independent iterators, a new iterator should be created and returned by each call to\ntp_iter\n.Objects which can only be iterated over once (usually due to side effects of iteration, such as file objects) can implement\ntp_iter\nby returning a new reference to themselves \u2013 and should also therefore implement thetp_iternext\nhandler.\nAny iterator object should implement both tp_iter\nand tp_iternext\n. An iterator\u2019s\ntp_iter\nhandler should return a new reference\nto the iterator. 
Its tp_iternext\nhandler should\nreturn a new reference to the next object in the iteration, if there is one.\nIf the iteration has reached the end, tp_iternext\nmay return NULL\nwithout setting an exception, or it may set\nStopIteration\nin addition to returning NULL\n; avoiding\nthe exception can yield slightly better performance. If an actual error\noccurs, tp_iternext\nshould always set an exception\nand return NULL\n.\n3.6. Weak Reference Support\u00b6\nOne of the goals of Python\u2019s weak reference implementation is to allow any type to participate in the weak reference mechanism without incurring the overhead on performance-critical objects (such as numbers).\nSee also\nDocumentation for the weakref\nmodule.\nFor an object to be weakly referenceable, the extension type must set the\nPy_TPFLAGS_MANAGED_WEAKREF\nbit of the tp_flags\nfield. The legacy tp_weaklistoffset\nfield should\nbe left as zero.\nConcretely, here is how the statically declared type object would look:\nstatic PyTypeObject TrivialType = {\nPyVarObject_HEAD_INIT(NULL, 0)\n/* ... other members omitted for brevity ... */\n.tp_flags = Py_TPFLAGS_MANAGED_WEAKREF | ...,\n};\nThe only further addition is that tp_dealloc\nneeds to clear any weak\nreferences (by calling PyObject_ClearWeakRefs()\n):\nstatic void\nTrivial_dealloc(PyObject *op)\n{\n/* Clear weakrefs first before calling any destructors */\nPyObject_ClearWeakRefs(op);\n/* ... remainder of destruction code omitted for brevity ... */\nPy_TYPE(op)->tp_free(op);\n}\n3.7. More Suggestions\u00b6\nIn order to learn how to implement any specific method for your new data type,\nget the CPython source code. Go to the Objects\ndirectory,\nthen search the C source files for tp_\nplus the function you want\n(for example, tp_richcompare\n). You will find examples of the function\nyou want to implement.\nWhen you need to verify that an object is a concrete instance of the type you\nare implementing, use the PyObject_TypeCheck()\nfunction. 
A sample of\nits use might be something like the following:\nif (!PyObject_TypeCheck(some_object, &MyType)) {\nPyErr_SetString(PyExc_TypeError, \"arg #1 not a mything\");\nreturn NULL;\n}\nSee also\n- Download CPython source releases.\n- The CPython project on GitHub, where the CPython source code is developed.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 5673}
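The tp_hash error-sentinel convention described above can be observed from Python itself: in CPython (an implementation detail), a hash value is never -1, because both built-in tp_hash implementations and the wrapper around a user-defined __hash__() substitute -2 for it. A quick Python-side check:

```python
# CPython reserves -1 as tp_hash's error code, so hash() never returns it.
assert hash(-1) == -2          # int's tp_hash substitutes -2 for -1

class ReturnsMinusOne:
    def __hash__(self):
        return -1

# A Python-level __hash__ that returns -1 is remapped the same way.
assert hash(ReturnsMinusOne()) == -2
```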
{"url": "https://docs.python.org/3/extending/newtypes_tutorial.html", "title": "Defining Extension Types: Tutorial", "content": "2. Defining Extension Types: Tutorial\u00b6\nPython allows the writer of a C extension module to define new types that\ncan be manipulated from Python code, much like the built-in str\nand list\ntypes. The code for all extension types follows a\npattern, but there are some details that you need to understand before you\ncan get started. This document is a gentle introduction to the topic.\n2.1. The Basics\u00b6\nThe CPython runtime sees all Python objects as variables of type\nPyObject*, which serves as a \u201cbase type\u201d for all Python objects.\nThe PyObject\nstructure itself only contains the object\u2019s\nreference count and a pointer to the object\u2019s \u201ctype object\u201d.\nThis is where the action is; the type object determines which (C) functions\nget called by the interpreter when, for instance, an attribute gets looked up\non an object, a method called, or it is multiplied by another object. These\nC functions are called \u201ctype methods\u201d.\nSo, if you want to define a new extension type, you need to create a new type object.\nThis sort of thing can only be explained by example, so here\u2019s a minimal, but\ncomplete, module that defines a new type named Custom\ninside a C\nextension module custom\n:\nNote\nWhat we\u2019re showing here is the traditional way of defining static\nextension types. It should be adequate for most uses. The C API also\nallows defining heap-allocated extension types using the\nPyType_FromSpec()\nfunction, which isn\u2019t covered in this tutorial.\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\ntypedef struct {\nPyObject_HEAD\n/* Type-specific fields go here. 
*/\n} CustomObject;\nstatic PyTypeObject CustomType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"custom.Custom\",\n.tp_doc = PyDoc_STR(\"Custom objects\"),\n.tp_basicsize = sizeof(CustomObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT,\n.tp_new = PyType_GenericNew,\n};\nstatic int\ncustom_module_exec(PyObject *m)\n{\nif (PyType_Ready(&CustomType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"Custom\", (PyObject *) &CustomType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot custom_module_slots[] = {\n{Py_mod_exec, custom_module_exec},\n// Just use this while using static types\n{Py_mod_multiple_interpreters, Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},\n{0, NULL}\n};\nstatic PyModuleDef custom_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"custom\",\n.m_doc = \"Example module that creates an extension type.\",\n.m_size = 0,\n.m_slots = custom_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_custom(void)\n{\nreturn PyModuleDef_Init(&custom_module);\n}\nNow that\u2019s quite a bit to take in at once, but hopefully bits will seem familiar from the previous chapter. This file defines three things:\nWhat a\nCustom\nobject contains: this is theCustomObject\nstruct, which is allocated once for eachCustom\ninstance.How the\nCustom\ntype behaves: this is theCustomType\nstruct, which defines a set of flags and function pointers that the interpreter inspects when specific operations are requested.How to define and execute the\ncustom\nmodule: this is thePyInit_custom\nfunction and the associatedcustom_module\nstruct for defining the module, and thecustom_module_exec\nfunction to set up a fresh module object.\nThe first bit is:\ntypedef struct {\nPyObject_HEAD\n} CustomObject;\nThis is what a Custom object will contain. 
PyObject_HEAD\nis mandatory\nat the start of each object struct and defines a field called ob_base\nof type PyObject\n, containing a pointer to a type object and a\nreference count (these can be accessed using the macros Py_TYPE\nand Py_REFCNT\nrespectively). The reason for the macro is to\nabstract away the layout and to enable additional fields in debug builds.\nNote\nThere is no semicolon above after the PyObject_HEAD\nmacro.\nBe wary of adding one by accident: some compilers will complain.\nOf course, objects generally store additional data besides the standard\nPyObject_HEAD\nboilerplate; for example, here is the definition for\nstandard Python floats:\ntypedef struct {\nPyObject_HEAD\ndouble ob_fval;\n} PyFloatObject;\nThe second bit is the definition of the type object.\nstatic PyTypeObject CustomType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"custom.Custom\",\n.tp_doc = PyDoc_STR(\"Custom objects\"),\n.tp_basicsize = sizeof(CustomObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT,\n.tp_new = PyType_GenericNew,\n};\nNote\nWe recommend using C99-style designated initializers as above, to\navoid listing all the PyTypeObject\nfields that you don\u2019t care\nabout and also to avoid caring about the fields\u2019 declaration order.\nThe actual definition of PyTypeObject\nin object.h\nhas\nmany more fields than the definition above. The\nremaining fields will be filled with zeros by the C compiler, and it\u2019s\ncommon practice to not specify them explicitly unless you need them.\nWe\u2019re going to pick it apart, one field at a time:\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\nThis line is mandatory boilerplate to initialize the ob_base\nfield mentioned above.\n.tp_name = \"custom.Custom\",\nThe name of our type. 
This will appear in the default textual representation of our objects and in some error messages, for example:\n>>> \"\" + custom.Custom()\nTraceback (most recent call last):\nFile \"\", line 1, in \nTypeError: can only concatenate str (not \"custom.Custom\") to str\nNote that the name is a dotted name that includes both the module name and the\nname of the type within the module. The module in this case is custom\nand\nthe type is Custom\n, so we set the type name to custom.Custom\n.\nUsing the real dotted import path is important to make your type compatible\nwith the pydoc\nand pickle\nmodules.\n.tp_basicsize = sizeof(CustomObject),\n.tp_itemsize = 0,\nThis is so that Python knows how much memory to allocate when creating\nnew Custom\ninstances. tp_itemsize\nis\nonly used for variable-sized objects and should otherwise be zero.\nNote\nIf you want your type to be subclassable from Python, and your type has the same\ntp_basicsize\nas its base type, you may have problems with multiple\ninheritance. A Python subclass of your type will have to list your type first\nin its __bases__\n, or else it will not be able to call your type\u2019s\n__new__()\nmethod without getting an error. You can avoid this problem by\nensuring that your type has a larger value for tp_basicsize\nthan its\nbase type does. Most of the time, this will be true anyway, because either your\nbase type will be object\n, or else you will be adding data members to\nyour base type, and therefore increasing its size.\nWe set the class flags to Py_TPFLAGS_DEFAULT\n.\n.tp_flags = Py_TPFLAGS_DEFAULT,\nAll types should include this constant in their flags. It enables all of the members defined until at least Python 3.3. If you need further members, you will need to OR the corresponding flags.\nWe provide a doc string for the type in tp_doc\n.\n.tp_doc = PyDoc_STR(\"Custom objects\"),\nTo enable object creation, we have to provide a tp_new\nhandler. 
This is the equivalent of the Python method __new__()\n, but\nhas to be specified explicitly. In this case, we can just use the default\nimplementation provided by the API function PyType_GenericNew()\n.\n.tp_new = PyType_GenericNew,\nEverything else in the file should be familiar, except for some code in\ncustom_module_exec()\n:\nif (PyType_Ready(&CustomType) < 0) {\nreturn -1;\n}\nThis initializes the Custom\ntype, filling in a number of members\nto the appropriate default values, including ob_type\nthat we initially\nset to NULL\n.\nif (PyModule_AddObjectRef(m, \"Custom\", (PyObject *) &CustomType) < 0) {\nreturn -1;\n}\nThis adds the type to the module dictionary. This allows us to create\nCustom\ninstances by calling the Custom\nclass:\n>>> import custom\n>>> mycustom = custom.Custom()\nThat\u2019s it! All that remains is to build it; put the above code in a file called\ncustom.c\n,\n[build-system]\nrequires = [\"setuptools\"]\nbuild-backend = \"setuptools.build_meta\"\n[project]\nname = \"custom\"\nversion = \"1\"\nin a file called pyproject.toml\n, and\nfrom setuptools import Extension, setup\nsetup(ext_modules=[Extension(\"custom\", [\"custom.c\"])])\nin a file called setup.py\n; then typing\n$ python -m pip install .\nin a shell should produce a file custom.so\nin a subdirectory\nand install it; now fire up Python \u2014 you should be able to import custom\nand play around with Custom\nobjects.\nThat wasn\u2019t so hard, was it?\nOf course, the current Custom type is pretty uninteresting. It has no data and doesn\u2019t do anything. It can\u2019t even be subclassed.\n2.2. Adding data and methods to the Basic example\u00b6\nLet\u2019s extend the basic example to add some data and methods. Let\u2019s also make\nthe type usable as a base class. 
We\u2019ll create a new module, custom2\nthat\nadds these capabilities:\n#define PY_SSIZE_T_CLEAN\n#include \n#include /* for offsetof() */\ntypedef struct {\nPyObject_HEAD\nPyObject *first; /* first name */\nPyObject *last; /* last name */\nint number;\n} CustomObject;\nstatic void\nCustom_dealloc(PyObject *op)\n{\nCustomObject *self = (CustomObject *) op;\nPy_XDECREF(self->first);\nPy_XDECREF(self->last);\nPy_TYPE(self)->tp_free(self);\n}\nstatic PyObject *\nCustom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)\n{\nCustomObject *self;\nself = (CustomObject *) type->tp_alloc(type, 0);\nif (self != NULL) {\nself->first = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->first == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->last = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->last == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->number = 0;\n}\nreturn (PyObject *) self;\n}\nstatic int\nCustom_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nCustomObject *self = (CustomObject *) op;\nstatic char *kwlist[] = {\"first\", \"last\", \"number\", NULL};\nPyObject *first = NULL, *last = NULL;\nif (!PyArg_ParseTupleAndKeywords(args, kwds, \"|OOi\", kwlist,\n&first, &last,\n&self->number))\nreturn -1;\nif (first) {\nPy_XSETREF(self->first, Py_NewRef(first));\n}\nif (last) {\nPy_XSETREF(self->last, Py_NewRef(last));\n}\nreturn 0;\n}\nstatic PyMemberDef Custom_members[] = {\n{\"first\", Py_T_OBJECT_EX, offsetof(CustomObject, first), 0,\n\"first name\"},\n{\"last\", Py_T_OBJECT_EX, offsetof(CustomObject, last), 0,\n\"last name\"},\n{\"number\", Py_T_INT, offsetof(CustomObject, number), 0,\n\"custom number\"},\n{NULL} /* Sentinel */\n};\nstatic PyObject *\nCustom_name(PyObject *op, PyObject *Py_UNUSED(dummy))\n{\nCustomObject *self = (CustomObject *) op;\nif (self->first == NULL) {\nPyErr_SetString(PyExc_AttributeError, \"first\");\nreturn NULL;\n}\nif (self->last == NULL) {\nPyErr_SetString(PyExc_AttributeError, \"last\");\nreturn NULL;\n}\nreturn 
PyUnicode_FromFormat(\"%S %S\", self->first, self->last);\n}\nstatic PyMethodDef Custom_methods[] = {\n{\"name\", Custom_name, METH_NOARGS,\n\"Return the name, combining the first and last name\"\n},\n{NULL} /* Sentinel */\n};\nstatic PyTypeObject CustomType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"custom2.Custom\",\n.tp_doc = PyDoc_STR(\"Custom objects\"),\n.tp_basicsize = sizeof(CustomObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,\n.tp_new = Custom_new,\n.tp_init = Custom_init,\n.tp_dealloc = Custom_dealloc,\n.tp_members = Custom_members,\n.tp_methods = Custom_methods,\n};\nstatic int\ncustom_module_exec(PyObject *m)\n{\nif (PyType_Ready(&CustomType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"Custom\", (PyObject *) &CustomType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot custom_module_slots[] = {\n{Py_mod_exec, custom_module_exec},\n{Py_mod_multiple_interpreters, Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},\n{0, NULL}\n};\nstatic PyModuleDef custom_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"custom2\",\n.m_doc = \"Example module that creates an extension type.\",\n.m_size = 0,\n.m_slots = custom_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_custom2(void)\n{\nreturn PyModuleDef_Init(&custom_module);\n}\nThis version of the module has a number of changes.\nThe Custom\ntype now has three data attributes in its C struct,\nfirst, last, and number. The first and last variables are Python\nstrings containing first and last names. The number attribute is a C integer.\nThe object structure is updated accordingly:\ntypedef struct {\nPyObject_HEAD\nPyObject *first; /* first name */\nPyObject *last; /* last name */\nint number;\n} CustomObject;\nBecause we now have data to manage, we have to be more careful about object allocation and deallocation. 
At a minimum, we need a deallocation method:\nstatic void\nCustom_dealloc(PyObject *op)\n{\nCustomObject *self = (CustomObject *) op;\nPy_XDECREF(self->first);\nPy_XDECREF(self->last);\nPy_TYPE(self)->tp_free(self);\n}\nwhich is assigned to the tp_dealloc\nmember:\n.tp_dealloc = Custom_dealloc,\nThis method first clears the reference counts of the two Python attributes.\nPy_XDECREF()\ncorrectly handles the case where its argument is\nNULL\n(which might happen here if tp_new\nfailed midway). It then\ncalls the tp_free\nmember of the object\u2019s type\n(computed by Py_TYPE(self)\n) to free the object\u2019s memory. Note that\nthe object\u2019s type might not be CustomType\n, because the object may\nbe an instance of a subclass.\nNote\nThe explicit cast to CustomObject *\nabove is needed because we defined\nCustom_dealloc\nto take a PyObject *\nargument, as the tp_dealloc\nfunction pointer expects to receive a PyObject *\nargument.\nBy assigning to the tp_dealloc\nslot of a type, we declare\nthat it can only be called with instances of our CustomObject\nclass, so the cast to (CustomObject *)\nis safe.\nThis is object-oriented polymorphism, in C!\nIn existing code, or in previous versions of this tutorial,\nyou might see similar functions take a pointer to the subtype\nobject structure (CustomObject*\n) directly, like this:\nCustom_dealloc(CustomObject *self)\n{\nPy_XDECREF(self->first);\nPy_XDECREF(self->last);\nPy_TYPE(self)->tp_free((PyObject *) self);\n}\n...\n.tp_dealloc = (destructor) Custom_dealloc,\nThis does the same thing on all architectures that CPython supports, but according to the C standard, it invokes undefined behavior.\nWe want to make sure that the first and last names are initialized to empty\nstrings, so we provide a tp_new\nimplementation:\nstatic PyObject *\nCustom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)\n{\nCustomObject *self;\nself = (CustomObject *) type->tp_alloc(type, 0);\nif (self != NULL) {\nself->first = 
PyUnicode_FromString(\"\");\nif (self->first == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->last = PyUnicode_FromString(\"\");\nif (self->last == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->number = 0;\n}\nreturn (PyObject *) self;\n}\nand install it in the tp_new\nmember:\n.tp_new = Custom_new,\nThe tp_new\nhandler is responsible for creating (as opposed to initializing)\nobjects of the type. It is exposed in Python as the __new__()\nmethod.\nIt is not required to define a tp_new\nmember, and indeed many extension\ntypes will simply reuse PyType_GenericNew()\nas done in the first\nversion of the Custom\ntype above. In this case, we use the tp_new\nhandler to initialize the first\nand last\nattributes to non-NULL\ndefault values.\ntp_new\nis passed the type being instantiated (not necessarily CustomType\n,\nif a subclass is instantiated) and any arguments passed when the type was\ncalled, and is expected to return the instance created. tp_new\nhandlers\nalways accept positional and keyword arguments, but they often ignore the\narguments, leaving the argument handling to initializer (a.k.a. tp_init\nin C or __init__\nin Python) methods.\nNote\ntp_new\nshouldn\u2019t call tp_init\nexplicitly, as the interpreter\nwill do it itself.\nThe tp_new\nimplementation calls the tp_alloc\nslot to allocate memory:\nself = (CustomObject *) type->tp_alloc(type, 0);\nSince memory allocation may fail, we must check the tp_alloc\nresult against NULL\nbefore proceeding.\nNote\nWe didn\u2019t fill the tp_alloc\nslot ourselves. Rather\nPyType_Ready()\nfills it for us by inheriting it from our base class,\nwhich is object\nby default. Most types use the default allocation\nstrategy.\nNote\nIf you are creating a co-operative tp_new\n(one\nthat calls a base type\u2019s tp_new\nor __new__()\n),\nyou must not try to determine what method to call using method resolution\norder at runtime. 
Always statically determine what type you are going to\ncall, and call its tp_new\ndirectly, or via\ntype->tp_base->tp_new\n. If you do not do this, Python subclasses of your\ntype that also inherit from other Python-defined classes may not work correctly.\n(Specifically, you may not be able to create instances of such subclasses\nwithout getting a TypeError\n.)\nWe also define an initialization function which accepts arguments to provide initial values for our instance:\nstatic int\nCustom_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nCustomObject *self = (CustomObject *) op;\nstatic char *kwlist[] = {\"first\", \"last\", \"number\", NULL};\nPyObject *first = NULL, *last = NULL, *tmp;\nif (!PyArg_ParseTupleAndKeywords(args, kwds, \"|OOi\", kwlist,\n&first, &last,\n&self->number))\nreturn -1;\nif (first) {\ntmp = self->first;\nPy_INCREF(first);\nself->first = first;\nPy_XDECREF(tmp);\n}\nif (last) {\ntmp = self->last;\nPy_INCREF(last);\nself->last = last;\nPy_XDECREF(tmp);\n}\nreturn 0;\n}\nby filling the tp_init\nslot.\n.tp_init = Custom_init,\nThe tp_init\nslot is exposed in Python as the\n__init__()\nmethod. It is used to initialize an object after it\u2019s\ncreated. Initializers always accept positional and keyword arguments,\nand they should return either 0\non success or -1\non error.\nUnlike the tp_new\nhandler, there is no guarantee that tp_init\nis called at all (for example, the pickle\nmodule by default\ndoesn\u2019t call __init__()\non unpickled instances). It can also be\ncalled multiple times. Anyone can call the __init__()\nmethod on\nour objects. For this reason, we have to be extra careful when assigning\nthe new attribute values. We might be tempted, for example to assign the\nfirst\nmember like this:\nif (first) {\nPy_XDECREF(self->first);\nPy_INCREF(first);\nself->first = first;\n}\nBut this would be risky. Our type doesn\u2019t restrict the type of the\nfirst\nmember, so it could be any kind of object. 
It could have a\ndestructor that causes code to be executed that tries to access the\nfirst\nmember; or that destructor could detach the\nthread state and let arbitrary code run in other\nthreads that accesses and modifies our object.\nTo be paranoid and protect ourselves against this possibility, we almost always reassign members before decrementing their reference counts. When don\u2019t we have to do this?\nwhen we absolutely know that the reference count is greater than 1;\nwhen we know that deallocation of the object [1] will neither detach the thread state nor cause any calls back into our type\u2019s code;\nwhen decrementing a reference count in a\ntp_dealloc\nhandler on a type which doesn\u2019t support cyclic garbage collection [2].\nWe want to expose our instance variables as attributes. There are a number of ways to do that. The simplest way is to define member definitions:\nstatic PyMemberDef Custom_members[] = {\n{\"first\", Py_T_OBJECT_EX, offsetof(CustomObject, first), 0,\n\"first name\"},\n{\"last\", Py_T_OBJECT_EX, offsetof(CustomObject, last), 0,\n\"last name\"},\n{\"number\", Py_T_INT, offsetof(CustomObject, number), 0,\n\"custom number\"},\n{NULL} /* Sentinel */\n};\nand put the definitions in the tp_members\nslot:\n.tp_members = Custom_members,\nEach member definition has a member name, type, offset, access flags and documentation string. See the Generic Attribute Management section below for details.\nA disadvantage of this approach is that it doesn\u2019t provide a way to restrict the\ntypes of objects that can be assigned to the Python attributes. We expect the\nfirst and last names to be strings, but any Python objects can be assigned.\nFurther, the attributes can be deleted, setting the C pointers to NULL\n. 
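From the Python side, such a deletion shows up as an AttributeError on the next access. A pure-Python stand-in (this Custom class is only an illustration, not the extension type itself) for the observable Py_T_OBJECT_EX behavior:

```python
# Pure-Python stand-in for custom2.Custom: once a Py_T_OBJECT_EX member
# has been deleted (the C pointer set to NULL), access raises AttributeError.
class Custom:
    def __init__(self, first="", last="", number=0):
        self.first = first
        self.last = last
        self.number = number

c = Custom("John", "Doe", 7)
del c.first                  # analogous to the C pointer becoming NULL
try:
    c.first                  # a deleted member raises on access
except AttributeError:
    deleted = True
assert deleted
```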
Even\nthough we can make sure the members are initialized to non-NULL\nvalues, the\nmembers can be set to NULL\nif the attributes are deleted.\nWe define a single method, Custom.name()\n, that outputs the objects name as the\nconcatenation of the first and last names.\nstatic PyObject *\nCustom_name(PyObject *op, PyObject *Py_UNUSED(dummy))\n{\nCustomObject *self = (CustomObject *) op;\nif (self->first == NULL) {\nPyErr_SetString(PyExc_AttributeError, \"first\");\nreturn NULL;\n}\nif (self->last == NULL) {\nPyErr_SetString(PyExc_AttributeError, \"last\");\nreturn NULL;\n}\nreturn PyUnicode_FromFormat(\"%S %S\", self->first, self->last);\n}\nThe method is implemented as a C function that takes a Custom\n(or\nCustom\nsubclass) instance as the first argument. Methods always take an\ninstance as the first argument. Methods often take positional and keyword\narguments as well, but in this case we don\u2019t take any and don\u2019t need to accept\na positional argument tuple or keyword argument dictionary. This method is\nequivalent to the Python method:\ndef name(self):\nreturn \"%s %s\" % (self.first, self.last)\nNote that we have to check for the possibility that our first\nand\nlast\nmembers are NULL\n. This is because they can be deleted, in which\ncase they are set to NULL\n. It would be better to prevent deletion of these\nattributes and to restrict the attribute values to be strings. We\u2019ll see how to\ndo that in the next section.\nNow that we\u2019ve defined the method, we need to create an array of method definitions:\nstatic PyMethodDef Custom_methods[] = {\n{\"name\", Custom_name, METH_NOARGS,\n\"Return the name, combining the first and last name\"\n},\n{NULL} /* Sentinel */\n};\n(note that we used the METH_NOARGS\nflag to indicate that the method\nis expecting no arguments other than self)\nand assign it to the tp_methods\nslot:\n.tp_methods = Custom_methods,\nFinally, we\u2019ll make our type usable as a base class for subclassing. 
We\u2019ve\nwritten our methods carefully so far so that they don\u2019t make any assumptions\nabout the type of the object being created or used, so all we need to do is\nto add the Py_TPFLAGS_BASETYPE\nto our class flag definition:\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,\nWe rename PyInit_custom()\nto PyInit_custom2()\n, update the\nmodule name in the PyModuleDef\nstruct, and update the full class\nname in the PyTypeObject\nstruct.\nFinally, we update our setup.py\nfile to include the new module,\nfrom setuptools import Extension, setup\nsetup(ext_modules=[\nExtension(\"custom\", [\"custom.c\"]),\nExtension(\"custom2\", [\"custom2.c\"]),\n])\nand then we re-install so that we can import custom2\n:\n$ python -m pip install .\n2.3. Providing finer control over data attributes\u00b6\nIn this section, we\u2019ll provide finer control over how the first\nand\nlast\nattributes are set in the Custom\nexample. In the previous\nversion of our module, the instance variables first\nand last\ncould be set to non-string values or even deleted. 
We want to make sure that\nthese attributes always contain strings.\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\n#include <stddef.h> /* for offsetof() */\ntypedef struct {\nPyObject_HEAD\nPyObject *first; /* first name */\nPyObject *last; /* last name */\nint number;\n} CustomObject;\nstatic void\nCustom_dealloc(PyObject *op)\n{\nCustomObject *self = (CustomObject *) op;\nPy_XDECREF(self->first);\nPy_XDECREF(self->last);\nPy_TYPE(self)->tp_free(self);\n}\nstatic PyObject *\nCustom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)\n{\nCustomObject *self;\nself = (CustomObject *) type->tp_alloc(type, 0);\nif (self != NULL) {\nself->first = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->first == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->last = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->last == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->number = 0;\n}\nreturn (PyObject *) self;\n}\nstatic int\nCustom_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nCustomObject *self = (CustomObject *) op;\nstatic char *kwlist[] = {\"first\", \"last\", \"number\", NULL};\nPyObject *first = NULL, *last = NULL;\nif (!PyArg_ParseTupleAndKeywords(args, kwds, \"|UUi\", kwlist,\n&first, &last,\n&self->number))\nreturn -1;\nif (first) {\nPy_SETREF(self->first, Py_NewRef(first));\n}\nif (last) {\nPy_SETREF(self->last, Py_NewRef(last));\n}\nreturn 0;\n}\nstatic PyMemberDef Custom_members[] = {\n{\"number\", Py_T_INT, offsetof(CustomObject, number), 0,\n\"custom number\"},\n{NULL} /* Sentinel */\n};\nstatic PyObject *\nCustom_getfirst(PyObject *op, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nreturn Py_NewRef(self->first);\n}\nstatic int\nCustom_setfirst(PyObject *op, PyObject *value, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nif (value == NULL) {\nPyErr_SetString(PyExc_TypeError, \"Cannot delete the first attribute\");\nreturn -1;\n}\nif (!PyUnicode_Check(value)) {\nPyErr_SetString(PyExc_TypeError,\n\"The first attribute value must be a 
string\");\nreturn -1;\n}\nPy_SETREF(self->first, Py_NewRef(value));\nreturn 0;\n}\nstatic PyObject *\nCustom_getlast(PyObject *op, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nreturn Py_NewRef(self->last);\n}\nstatic int\nCustom_setlast(PyObject *op, PyObject *value, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nif (value == NULL) {\nPyErr_SetString(PyExc_TypeError, \"Cannot delete the last attribute\");\nreturn -1;\n}\nif (!PyUnicode_Check(value)) {\nPyErr_SetString(PyExc_TypeError,\n\"The last attribute value must be a string\");\nreturn -1;\n}\nPy_SETREF(self->last, Py_NewRef(value));\nreturn 0;\n}\nstatic PyGetSetDef Custom_getsetters[] = {\n{\"first\", Custom_getfirst, Custom_setfirst,\n\"first name\", NULL},\n{\"last\", Custom_getlast, Custom_setlast,\n\"last name\", NULL},\n{NULL} /* Sentinel */\n};\nstatic PyObject *\nCustom_name(PyObject *op, PyObject *Py_UNUSED(dummy))\n{\nCustomObject *self = (CustomObject *) op;\nreturn PyUnicode_FromFormat(\"%S %S\", self->first, self->last);\n}\nstatic PyMethodDef Custom_methods[] = {\n{\"name\", Custom_name, METH_NOARGS,\n\"Return the name, combining the first and last name\"\n},\n{NULL} /* Sentinel */\n};\nstatic PyTypeObject CustomType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"custom3.Custom\",\n.tp_doc = PyDoc_STR(\"Custom objects\"),\n.tp_basicsize = sizeof(CustomObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,\n.tp_new = Custom_new,\n.tp_init = Custom_init,\n.tp_dealloc = Custom_dealloc,\n.tp_members = Custom_members,\n.tp_methods = Custom_methods,\n.tp_getset = Custom_getsetters,\n};\nstatic int\ncustom_module_exec(PyObject *m)\n{\nif (PyType_Ready(&CustomType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"Custom\", (PyObject *) &CustomType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot custom_module_slots[] = {\n{Py_mod_exec, custom_module_exec},\n{Py_mod_multiple_interpreters, 
Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},\n{0, NULL}\n};\nstatic PyModuleDef custom_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"custom3\",\n.m_doc = \"Example module that creates an extension type.\",\n.m_size = 0,\n.m_slots = custom_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_custom3(void)\n{\nreturn PyModuleDef_Init(&custom_module);\n}\nTo provide greater control over the first\nand last\nattributes,\nwe\u2019ll use custom getter and setter functions. Here are the functions for\ngetting and setting the first\nattribute:\nstatic PyObject *\nCustom_getfirst(PyObject *op, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nPy_INCREF(self->first);\nreturn self->first;\n}\nstatic int\nCustom_setfirst(PyObject *op, PyObject *value, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nPyObject *tmp;\nif (value == NULL) {\nPyErr_SetString(PyExc_TypeError, \"Cannot delete the first attribute\");\nreturn -1;\n}\nif (!PyUnicode_Check(value)) {\nPyErr_SetString(PyExc_TypeError,\n\"The first attribute value must be a string\");\nreturn -1;\n}\ntmp = self->first;\nPy_INCREF(value);\nself->first = value;\nPy_DECREF(tmp);\nreturn 0;\n}\nThe getter function is passed a Custom\nobject and a \u201cclosure\u201d, which is\na void pointer. In this case, the closure is ignored. (The closure supports an\nadvanced usage in which definition data is passed to the getter and setter. This\ncould, for example, be used to allow a single set of getter and setter functions\nthat decide the attribute to get or set based on data in the closure.)\nThe setter function is passed the Custom\nobject, the new value, and the\nclosure. The new value may be NULL\n, in which case the attribute is being\ndeleted. 
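The closure idea just described (one shared getter/setter pair that picks the attribute from per-property data) can be sketched at the Python level with property(). This is an illustrative analogue only, not the C API; the helper name make_name_property is invented for the sketch:

```python
# Python-level analogue of the "closure" idea: one shared getter/setter
# pair decides which attribute to access from per-property data (the
# attribute name plays the role of the void *closure argument).
def make_name_property(attr):
    public = attr.lstrip("_")

    def getter(self):
        return getattr(self, attr)

    def setter(self, value):
        # Mirrors Custom_setfirst: reject non-string values.
        if not isinstance(value, str):
            raise TypeError(f"The {public} attribute value must be a string")
        setattr(self, attr, value)

    def deleter(self):
        # Mirrors the C setter receiving NULL: deletion is refused.
        raise TypeError(f"Cannot delete the {public} attribute")

    return property(getter, setter, deleter, doc=f"{public} name")


class Custom:
    first = make_name_property("_first")
    last = make_name_property("_last")

    def __init__(self, first="", last=""):
        self._first = first
        self._last = last

    def name(self):
        return f"{self.first} {self.last}"
```

The single factory plays the role the closure plays in C: the C getset API passes the closure to one shared pair of functions, while Python closes over `attr` instead.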
In our setter, we raise an error if the attribute is deleted or if its\nnew value is not a string.\nWe create an array of PyGetSetDef\nstructures:\nstatic PyGetSetDef Custom_getsetters[] = {\n{\"first\", Custom_getfirst, Custom_setfirst,\n\"first name\", NULL},\n{\"last\", Custom_getlast, Custom_setlast,\n\"last name\", NULL},\n{NULL} /* Sentinel */\n};\nand register it in the tp_getset\nslot:\n.tp_getset = Custom_getsetters,\nThe last item in a PyGetSetDef\nstructure is the \u201cclosure\u201d mentioned\nabove. In this case, we aren\u2019t using a closure, so we just pass NULL\n.\nWe also remove the member definitions for these attributes:\nstatic PyMemberDef Custom_members[] = {\n{\"number\", Py_T_INT, offsetof(CustomObject, number), 0,\n\"custom number\"},\n{NULL} /* Sentinel */\n};\nWe also need to update the tp_init\nhandler to only\nallow strings [3] to be passed:\nstatic int\nCustom_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nCustomObject *self = (CustomObject *) op;\nstatic char *kwlist[] = {\"first\", \"last\", \"number\", NULL};\nPyObject *first = NULL, *last = NULL, *tmp;\nif (!PyArg_ParseTupleAndKeywords(args, kwds, \"|UUi\", kwlist,\n&first, &last,\n&self->number))\nreturn -1;\nif (first) {\ntmp = self->first;\nPy_INCREF(first);\nself->first = first;\nPy_DECREF(tmp);\n}\nif (last) {\ntmp = self->last;\nPy_INCREF(last);\nself->last = last;\nPy_DECREF(tmp);\n}\nreturn 0;\n}\nWith these changes, we can assure that the first\nand last\nmembers are\nnever NULL\nso we can remove checks for NULL\nvalues in almost all cases.\nThis means that most of the Py_XDECREF()\ncalls can be converted to\nPy_DECREF()\ncalls. 
The only place we can\u2019t change these calls is in\nthe tp_dealloc\nimplementation, where there is the possibility that the\ninitialization of these members failed in tp_new\n.\nWe also rename the module initialization function and module name in the\ninitialization function, as we did before, and we add an extra definition to the\nsetup.py\nfile.\n2.4. Supporting cyclic garbage collection\u00b6\nPython has a cyclic garbage collector (GC) that can identify unneeded objects even when their reference counts are not zero. This can happen when objects are involved in cycles. For example, consider:\n>>> l = []\n>>> l.append(l)\n>>> del l\nIn this example, we create a list that contains itself. When we delete it, it still has a reference from itself. Its reference count doesn\u2019t drop to zero. Fortunately, Python\u2019s cyclic garbage collector will eventually figure out that the list is garbage and free it.\nIn the second version of the Custom\nexample, we allowed any kind of\nobject to be stored in the first\nor last\nattributes [4].\nBesides, in the second and third versions, we allowed subclassing\nCustom\n, and subclasses may add arbitrary attributes. 
For any of\nthose two reasons, Custom\nobjects can participate in cycles:\n>>> import custom3\n>>> class Derived(custom3.Custom): pass\n...\n>>> n = Derived()\n>>> n.some_attribute = n\nTo allow a Custom\ninstance participating in a reference cycle to\nbe properly detected and collected by the cyclic GC, our Custom\ntype\nneeds to fill two additional slots and to enable a flag that enables these slots:\n#define PY_SSIZE_T_CLEAN\n#include \n#include /* for offsetof() */\ntypedef struct {\nPyObject_HEAD\nPyObject *first; /* first name */\nPyObject *last; /* last name */\nint number;\n} CustomObject;\nstatic int\nCustom_traverse(PyObject *op, visitproc visit, void *arg)\n{\nCustomObject *self = (CustomObject *) op;\nPy_VISIT(self->first);\nPy_VISIT(self->last);\nreturn 0;\n}\nstatic int\nCustom_clear(PyObject *op)\n{\nCustomObject *self = (CustomObject *) op;\nPy_CLEAR(self->first);\nPy_CLEAR(self->last);\nreturn 0;\n}\nstatic void\nCustom_dealloc(PyObject *op)\n{\nPyObject_GC_UnTrack(op);\n(void)Custom_clear(op);\nPy_TYPE(op)->tp_free(op);\n}\nstatic PyObject *\nCustom_new(PyTypeObject *type, PyObject *args, PyObject *kwds)\n{\nCustomObject *self;\nself = (CustomObject *) type->tp_alloc(type, 0);\nif (self != NULL) {\nself->first = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->first == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->last = Py_GetConstant(Py_CONSTANT_EMPTY_STR);\nif (self->last == NULL) {\nPy_DECREF(self);\nreturn NULL;\n}\nself->number = 0;\n}\nreturn (PyObject *) self;\n}\nstatic int\nCustom_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nCustomObject *self = (CustomObject *) op;\nstatic char *kwlist[] = {\"first\", \"last\", \"number\", NULL};\nPyObject *first = NULL, *last = NULL;\nif (!PyArg_ParseTupleAndKeywords(args, kwds, \"|UUi\", kwlist,\n&first, &last,\n&self->number))\nreturn -1;\nif (first) {\nPy_SETREF(self->first, Py_NewRef(first));\n}\nif (last) {\nPy_SETREF(self->last, Py_NewRef(last));\n}\nreturn 0;\n}\nstatic 
PyMemberDef Custom_members[] = {\n{\"number\", Py_T_INT, offsetof(CustomObject, number), 0,\n\"custom number\"},\n{NULL} /* Sentinel */\n};\nstatic PyObject *\nCustom_getfirst(PyObject *op, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nreturn Py_NewRef(self->first);\n}\nstatic int\nCustom_setfirst(PyObject *op, PyObject *value, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nif (value == NULL) {\nPyErr_SetString(PyExc_TypeError, \"Cannot delete the first attribute\");\nreturn -1;\n}\nif (!PyUnicode_Check(value)) {\nPyErr_SetString(PyExc_TypeError,\n\"The first attribute value must be a string\");\nreturn -1;\n}\nPy_XSETREF(self->first, Py_NewRef(value));\nreturn 0;\n}\nstatic PyObject *\nCustom_getlast(PyObject *op, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nreturn Py_NewRef(self->last);\n}\nstatic int\nCustom_setlast(PyObject *op, PyObject *value, void *closure)\n{\nCustomObject *self = (CustomObject *) op;\nif (value == NULL) {\nPyErr_SetString(PyExc_TypeError, \"Cannot delete the last attribute\");\nreturn -1;\n}\nif (!PyUnicode_Check(value)) {\nPyErr_SetString(PyExc_TypeError,\n\"The last attribute value must be a string\");\nreturn -1;\n}\nPy_XSETREF(self->last, Py_NewRef(value));\nreturn 0;\n}\nstatic PyGetSetDef Custom_getsetters[] = {\n{\"first\", Custom_getfirst, Custom_setfirst,\n\"first name\", NULL},\n{\"last\", Custom_getlast, Custom_setlast,\n\"last name\", NULL},\n{NULL} /* Sentinel */\n};\nstatic PyObject *\nCustom_name(PyObject *op, PyObject *Py_UNUSED(dummy))\n{\nCustomObject *self = (CustomObject *) op;\nreturn PyUnicode_FromFormat(\"%S %S\", self->first, self->last);\n}\nstatic PyMethodDef Custom_methods[] = {\n{\"name\", Custom_name, METH_NOARGS,\n\"Return the name, combining the first and last name\"\n},\n{NULL} /* Sentinel */\n};\nstatic PyTypeObject CustomType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"custom4.Custom\",\n.tp_doc = PyDoc_STR(\"Custom objects\"),\n.tp_basicsize 
= sizeof(CustomObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC,\n.tp_new = Custom_new,\n.tp_init = Custom_init,\n.tp_dealloc = Custom_dealloc,\n.tp_traverse = Custom_traverse,\n.tp_clear = Custom_clear,\n.tp_members = Custom_members,\n.tp_methods = Custom_methods,\n.tp_getset = Custom_getsetters,\n};\nstatic int\ncustom_module_exec(PyObject *m)\n{\nif (PyType_Ready(&CustomType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"Custom\", (PyObject *) &CustomType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot custom_module_slots[] = {\n{Py_mod_exec, custom_module_exec},\n{Py_mod_multiple_interpreters, Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},\n{0, NULL}\n};\nstatic PyModuleDef custom_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"custom4\",\n.m_doc = \"Example module that creates an extension type.\",\n.m_size = 0,\n.m_slots = custom_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_custom4(void)\n{\nreturn PyModuleDef_Init(&custom_module);\n}\nFirst, the traversal method lets the cyclic GC know about subobjects that could participate in cycles:\nstatic int\nCustom_traverse(PyObject *op, visitproc visit, void *arg)\n{\nCustomObject *self = (CustomObject *) op;\nint vret;\nif (self->first) {\nvret = visit(self->first, arg);\nif (vret != 0)\nreturn vret;\n}\nif (self->last) {\nvret = visit(self->last, arg);\nif (vret != 0)\nreturn vret;\n}\nreturn 0;\n}\nFor each subobject that can participate in cycles, we need to call the\nvisit()\nfunction, which is passed to the traversal method. The\nvisit()\nfunction takes as arguments the subobject and the extra argument\narg passed to the traversal method. It returns an integer value that must be\nreturned if it is non-zero.\nPython provides a Py_VISIT()\nmacro that automates calling visit\nfunctions. 
With Py_VISIT()\n, we can minimize the amount of boilerplate\nin Custom_traverse\n:\nstatic int\nCustom_traverse(PyObject *op, visitproc visit, void *arg)\n{\nCustomObject *self = (CustomObject *) op;\nPy_VISIT(self->first);\nPy_VISIT(self->last);\nreturn 0;\n}\nNote\nThe tp_traverse\nimplementation must name its\narguments exactly visit and arg in order to use Py_VISIT()\n.\nSecond, we need to provide a method for clearing any subobjects that can participate in cycles:\nstatic int\nCustom_clear(PyObject *op)\n{\nCustomObject *self = (CustomObject *) op;\nPy_CLEAR(self->first);\nPy_CLEAR(self->last);\nreturn 0;\n}\nNotice the use of the Py_CLEAR()\nmacro. It is the recommended and safe\nway to clear data attributes of arbitrary types while decrementing\ntheir reference counts. If you were to call Py_XDECREF()\ninstead\non the attribute before setting it to NULL\n, there is a possibility\nthat the attribute\u2019s destructor would call back into code that reads the\nattribute again (especially if there is a reference cycle).\nNote\nYou could emulate Py_CLEAR()\nby writing:\nPyObject *tmp;\ntmp = self->first;\nself->first = NULL;\nPy_XDECREF(tmp);\nNevertheless, it is much easier and less error-prone to always\nuse Py_CLEAR()\nwhen deleting an attribute. Don\u2019t\ntry to micro-optimize at the expense of robustness!\nThe deallocator Custom_dealloc\nmay call arbitrary code when clearing\nattributes. 
This means the circular GC can be triggered inside the function.\nSince the GC assumes the reference count is not zero, we need to untrack the object\nfrom the GC by calling PyObject_GC_UnTrack()\nbefore clearing members.\nHere is our reimplemented deallocator using PyObject_GC_UnTrack()\nand Custom_clear\n:\nstatic void\nCustom_dealloc(PyObject *op)\n{\nPyObject_GC_UnTrack(op);\n(void)Custom_clear(op);\nPy_TYPE(op)->tp_free(op);\n}\nFinally, we add the Py_TPFLAGS_HAVE_GC\nflag to the class flags:\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC,\nThat\u2019s pretty much it. If we had written custom tp_alloc\nor\ntp_free\nhandlers, we\u2019d need to modify them for cyclic\ngarbage collection. Most extensions will use the versions automatically provided.\n2.5. Subclassing other types\u00b6\nIt is possible to create new extension types that are derived from existing\ntypes. It is easiest to inherit from the built-in types, since an extension can\neasily use the PyTypeObject\nit needs. It can be difficult to share\nthese PyTypeObject\nstructures between extension modules.\nIn this example we will create a SubList\ntype that inherits from the\nbuilt-in list\ntype. 
The new type will be completely compatible with\nregular lists, but will have an additional increment()\nmethod that\nincreases an internal counter:\n>>> import sublist\n>>> s = sublist.SubList(range(3))\n>>> s.extend(s)\n>>> print(len(s))\n6\n>>> print(s.increment())\n1\n>>> print(s.increment())\n2\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\ntypedef struct {\nPyListObject list;\nint state;\n} SubListObject;\nstatic PyObject *\nSubList_increment(PyObject *op, PyObject *Py_UNUSED(dummy))\n{\nSubListObject *self = (SubListObject *) op;\nself->state++;\nreturn PyLong_FromLong(self->state);\n}\nstatic PyMethodDef SubList_methods[] = {\n{\"increment\", SubList_increment, METH_NOARGS,\nPyDoc_STR(\"increment state counter\")},\n{NULL},\n};\nstatic int\nSubList_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nSubListObject *self = (SubListObject *) op;\nif (PyList_Type.tp_init(op, args, kwds) < 0)\nreturn -1;\nself->state = 0;\nreturn 0;\n}\nstatic PyTypeObject SubListType = {\n.ob_base = PyVarObject_HEAD_INIT(NULL, 0)\n.tp_name = \"sublist.SubList\",\n.tp_doc = PyDoc_STR(\"SubList objects\"),\n.tp_basicsize = sizeof(SubListObject),\n.tp_itemsize = 0,\n.tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,\n.tp_init = SubList_init,\n.tp_methods = SubList_methods,\n};\nstatic int\nsublist_module_exec(PyObject *m)\n{\nSubListType.tp_base = &PyList_Type;\nif (PyType_Ready(&SubListType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"SubList\", (PyObject *) &SubListType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot sublist_module_slots[] = {\n{Py_mod_exec, sublist_module_exec},\n{Py_mod_multiple_interpreters, Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},\n{0, NULL}\n};\nstatic PyModuleDef sublist_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"sublist\",\n.m_doc = \"Example module that creates an extension type.\",\n.m_size = 0,\n.m_slots = sublist_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_sublist(void)\n{\nreturn
PyModuleDef_Init(&sublist_module);\n}\nAs you can see, the source code closely resembles the Custom\nexamples in\nprevious sections. We will break down the main differences between them.\ntypedef struct {\nPyListObject list;\nint state;\n} SubListObject;\nThe primary difference for derived type objects is that the base type\u2019s\nobject structure must be the first value. The base type will already include\nthe PyObject_HEAD()\nat the beginning of its structure.\nWhen a Python object is a SubList\ninstance, its PyObject *\npointer\ncan be safely cast to both PyListObject *\nand SubListObject *\n:\nstatic int\nSubList_init(PyObject *op, PyObject *args, PyObject *kwds)\n{\nSubListObject *self = (SubListObject *) op;\nif (PyList_Type.tp_init(op, args, kwds) < 0)\nreturn -1;\nself->state = 0;\nreturn 0;\n}\nWe see above how to call through to the __init__()\nmethod of the base\ntype.\nThis pattern is important when writing a type with custom\ntp_new\nand tp_dealloc\nmembers. The tp_new\nhandler should not actually\ncreate the memory for the object with its tp_alloc\n,\nbut let the base class handle it by calling its own tp_new\n.\nThe PyTypeObject\nstruct supports a tp_base\nspecifying the type\u2019s concrete base class. Due to cross-platform compiler\nissues, you can\u2019t fill that field directly with a reference to\nPyList_Type\n; it should be done in the Py_mod_exec\nfunction:\nstatic int\nsublist_module_exec(PyObject *m)\n{\nSubListType.tp_base = &PyList_Type;\nif (PyType_Ready(&SubListType) < 0) {\nreturn -1;\n}\nif (PyModule_AddObjectRef(m, \"SubList\", (PyObject *) &SubListType) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nBefore calling PyType_Ready()\n, the type structure must have the\ntp_base\nslot filled in. 
When we are deriving an\nexisting type, it is not necessary to fill out the tp_alloc\nslot with PyType_GenericNew()\n\u2013 the allocation function from the base\ntype will be inherited.\nAfter that, calling PyType_Ready()\nand adding the type object to the\nmodule is the same as with the basic Custom\nexamples.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 10635}
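The SubList behaviour implemented in C above can also be sketched in pure Python. This is a minimal Python-level analogue of the extension type, not the sublist module itself:

```python
# Pure-Python sketch of the SubList extension type: fully compatible
# with list, plus an increment() method backed by an internal counter
# (the C version keeps it in the `state` field of SubListObject).
class SubList(list):
    def __init__(self, *args):
        # Call through to the base type's __init__, just as SubList_init
        # calls PyList_Type.tp_init before setting its own state.
        super().__init__(*args)
        self.state = 0

    def increment(self):
        self.state += 1
        return self.state
```

The structure mirrors the C code: the base type is initialized first, then the subclass-specific state is set, matching the interactive session shown for the C module.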
{"url": "https://docs.python.org/3/c-api/mapping.html", "title": "Mapping Protocol", "content": "Mapping Protocol\u00b6\nSee also PyObject_GetItem()\n, PyObject_SetItem()\nand\nPyObject_DelItem()\n.\n-\nint PyMapping_Check(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif the object provides the mapping protocol or supports slicing, and 0\notherwise. Note that it returns 1\nfor Python classes with a __getitem__()\nmethod, since in general it is impossible to determine what type of keys the class supports. This function always succeeds.\n-\nPy_ssize_t PyMapping_Size(PyObject *o)\u00b6\n-\nPy_ssize_t PyMapping_Length(PyObject *o)\u00b6\n- Part of the Stable ABI.\nReturns the number of keys in object o on success, and\n-1\non failure. This is equivalent to the Python expression len(o)\n.\n-\nPyObject *PyMapping_GetItemString(PyObject *o, const char *key)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nThis is the same as\nPyObject_GetItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nint PyMapping_GetOptionalItem(PyObject *obj, PyObject *key, PyObject **result)\u00b6\n- Part of the Stable ABI since version 3.13.\nVariant of\nPyObject_GetItem()\nwhich doesn\u2019t raise KeyError\nif the key is not found.\nIf the key is found, return\n1\nand set *result to a new strong reference to the corresponding value. If the key is not found, return 0\nand set *result to NULL\n; the KeyError\nis silenced. 
If an error other than KeyError\nis raised, return -1\nand set *result to NULL\n.\nAdded in version 3.13.\n-\nint PyMapping_GetOptionalItemString(PyObject *obj, const char *key, PyObject **result)\u00b6\n- Part of the Stable ABI since version 3.13.\nThis is the same as\nPyMapping_GetOptionalItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\nAdded in version 3.13.\n-\nint PyMapping_SetItemString(PyObject *o, const char *key, PyObject *v)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyObject_SetItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nint PyMapping_DelItem(PyObject *o, PyObject *key)\u00b6\nThis is an alias of\nPyObject_DelItem()\n.\n-\nint PyMapping_DelItemString(PyObject *o, const char *key)\u00b6\nThis is the same as\nPyObject_DelItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nint PyMapping_HasKeyWithError(PyObject *o, PyObject *key)\u00b6\n- Part of the Stable ABI since version 3.13.\nReturn\n1\nif the mapping object has the key key and 0\notherwise. This is equivalent to the Python expression key in o\n. On failure, return -1\n.\nAdded in version 3.13.\n-\nint PyMapping_HasKeyStringWithError(PyObject *o, const char *key)\u00b6\n- Part of the Stable ABI since version 3.13.\nThis is the same as\nPyMapping_HasKeyWithError()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\nAdded in version 3.13.\n-\nint PyMapping_HasKey(PyObject *o, PyObject *key)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif the mapping object has the key key and 0\notherwise. This is equivalent to the Python expression key in o\n. This function always succeeds.\nNote\nExceptions which occur when this calls the\n__getitem__()\nmethod are silently ignored. 
For proper error handling, use PyMapping_HasKeyWithError()\n, PyMapping_GetOptionalItem()\nor PyObject_GetItem()\ninstead.\n-\nint PyMapping_HasKeyString(PyObject *o, const char *key)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyMapping_HasKey()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\nNote\nExceptions that occur when this calls the\n__getitem__()\nmethod or while creating the temporary str\nobject are silently ignored. For proper error handling, use PyMapping_HasKeyStringWithError()\n, PyMapping_GetOptionalItemString()\nor PyMapping_GetItemString()\ninstead.\n-\nPyObject *PyMapping_Keys(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nOn success, return a list of the keys in object o. On failure, return\nNULL\n.\nChanged in version 3.7: Previously, the function returned a list or a tuple.\n-\nPyObject *PyMapping_Values(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nOn success, return a list of the values in object o. On failure, return\nNULL\n.\nChanged in version 3.7: Previously, the function returned a list or a tuple.\n-\nPyObject *PyMapping_Items(PyObject *o)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nOn success, return a list of the items in object o, where each item is a tuple containing a key-value pair. On failure, return\nNULL\n.\nChanged in version 3.7: Previously, the function returned a list or a tuple.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1124}
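The mapping-protocol functions above correspond to ordinary Python operations. A sketch of the correspondence; get_optional_item is an invented helper that mirrors the contract PyMapping_GetOptionalItem documents, not a real API:

```python
# Python-level view of the mapping protocol functions documented above.
def get_optional_item(obj, key):
    # Mirrors PyMapping_GetOptionalItem: (1, value) when the key is
    # found, (0, None) when a KeyError would be raised (it is silenced);
    # any other exception propagates.
    try:
        return 1, obj[key]
    except KeyError:
        return 0, None

d = {"spam": 1}
assert len(d) == 1                          # PyMapping_Size / PyMapping_Length
assert d["spam"] == 1                       # PyObject_GetItem
assert ("spam" in d) and ("eggs" not in d)  # PyMapping_HasKeyWithError
assert get_optional_item(d, "spam") == (1, 1)
assert get_optional_item(d, "eggs") == (0, None)

# PyMapping_Check returns 1 for any class defining __getitem__, since
# the kind of keys it accepts cannot be determined in general.
class OnlyGetItem:
    def __getitem__(self, key):
        raise KeyError(key)

assert get_optional_item(OnlyGetItem(), "x") == (0, None)
```

The (return code, result) tuple stands in for the C convention of an int return plus a `PyObject **result` out-parameter.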
{"url": "https://docs.python.org/3/c-api/dict.html", "title": "Dictionary Objects", "content": "Dictionary Objects\u00b6\n-\nPyTypeObject PyDict_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python dictionary type. This is the same object as dict\nin the Python layer.\n-\nint PyDict_Check(PyObject *p)\u00b6\nReturn true if p is a dict object or an instance of a subtype of the dict type. This function always succeeds.\n-\nint PyDict_CheckExact(PyObject *p)\u00b6\nReturn true if p is a dict object, but not an instance of a subtype of the dict type. This function always succeeds.\n-\nPyObject *PyDict_New()\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new empty dictionary, or\nNULL\non failure.\n-\nPyObject *PyDictProxy_New(PyObject *mapping)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a\ntypes.MappingProxyType\nobject for a mapping which enforces read-only behavior. This is normally used to create a view to prevent modification of the dictionary for non-dynamic class types.\n-\nPyTypeObject PyDictProxy_Type\u00b6\n- Part of the Stable ABI.\nThe type object for mapping proxy objects created by\nPyDictProxy_New()\nand for the read-only\n__dict__\nattribute of many built-in types. A PyDictProxy_Type\ninstance provides a dynamic, read-only view of an underlying dictionary: changes to the underlying dictionary are reflected in the proxy, but the proxy itself does not support mutation operations. This corresponds to types.MappingProxyType\nin Python.\n-\nvoid PyDict_Clear(PyObject *p)\u00b6\n- Part of the Stable ABI.\nEmpty an existing dictionary of all key-value pairs.\n-\nint PyDict_Contains(PyObject *p, PyObject *key)\u00b6\n- Part of the Stable ABI.\nDetermine if dictionary p contains key. If an item in p matches key, return\n1\n, otherwise return 0\n. On error, return -1\n. 
This is equivalent to the Python expression key in p\n.\n-\nint PyDict_ContainsString(PyObject *p, const char *key)\u00b6\nThis is the same as\nPyDict_Contains()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\nAdded in version 3.13.\n-\nPyObject *PyDict_Copy(PyObject *p)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new dictionary that contains the same key-value pairs as p.\n-\nint PyDict_SetItem(PyObject *p, PyObject *key, PyObject *val)\u00b6\n- Part of the Stable ABI.\nInsert val into the dictionary p with a key of key. key must be hashable; if it isn\u2019t,\nTypeError\nwill be raised. Return 0\non success or -1\non failure. This function does not steal a reference to val.\n-\nint PyDict_SetItemString(PyObject *p, const char *key, PyObject *val)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyDict_SetItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nint PyDict_DelItem(PyObject *p, PyObject *key)\u00b6\n- Part of the Stable ABI.\nRemove the entry in dictionary p with key key. key must be hashable; if it isn\u2019t,\nTypeError\nis raised. If key is not in the dictionary, KeyError\nis raised. 
Return 0\non success or -1\non failure.\n-\nint PyDict_DelItemString(PyObject *p, const char *key)\u00b6\n- Part of the Stable ABI.\nThis is the same as\nPyDict_DelItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\n-\nint PyDict_GetItemRef(PyObject *p, PyObject *key, PyObject **result)\u00b6\n- Part of the Stable ABI since version 3.13.\nReturn a new strong reference to the object from dictionary p which has a key key:\nIf the key is present, set *result to a new strong reference to the value and return\n1\n.\nIf the key is missing, set *result to\nNULL\nand return 0\n.\nOn error, raise an exception and return\n-1\n.\nAdded in version 3.13.\nSee also the\nPyObject_GetItem()\nfunction.\n-\nPyObject *PyDict_GetItem(PyObject *p, PyObject *key)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nReturn a borrowed reference to the object from dictionary p which has a key key. Return\nNULL\nif the key key is missing without setting an exception.\nNote\nExceptions that occur while this calls\n__hash__()\nand\n__eq__()\nmethods are silently ignored. Prefer the PyDict_GetItemWithError()\nfunction instead.\nChanged in version 3.10: Calling this API without an attached thread state had been allowed for historical reasons. It is no longer allowed.\n-\nPyObject *PyDict_GetItemWithError(PyObject *p, PyObject *key)\u00b6\n- Return value: Borrowed reference. Part of the Stable ABI.\nVariant of\nPyDict_GetItem()\nthat does not suppress exceptions. Return\nNULL\nwith an exception set if an exception occurred. Return\nNULL\nwithout an exception set if the key wasn\u2019t present.\n-\nPyObject *PyDict_GetItemString(PyObject *p, const char *key)\u00b6\n- Return value: Borrowed reference. 
Part of the Stable ABI.\nThis is the same as\nPyDict_GetItem()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\nNote\nExceptions that occur while this calls\n__hash__()\nand\n__eq__()\nmethods or while creating the temporary str\nobject are silently ignored. Prefer using the PyDict_GetItemWithError()\nfunction with your own PyUnicode_FromString()\nkey instead.\n-\nint PyDict_GetItemStringRef(PyObject *p, const char *key, PyObject **result)\u00b6\n- Part of the Stable ABI since version 3.13.\nSimilar to\nPyDict_GetItemRef()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\nAdded in version 3.13.\n-\nPyObject *PyDict_SetDefault(PyObject *p, PyObject *key, PyObject *defaultobj)\u00b6\n- Return value: Borrowed reference.\nThis is the same as the Python-level\ndict.setdefault()\n. If present, it returns the value corresponding to key from the dictionary p. If the key is not in the dict, it is inserted with value defaultobj and defaultobj is returned. This function evaluates the hash function of key only once, instead of evaluating it independently for the lookup and the insertion.\nAdded in version 3.4.\n-\nint PyDict_SetDefaultRef(PyObject *p, PyObject *key, PyObject *default_value, PyObject **result)\u00b6\nInserts default_value into the dictionary p with a key of key if the key is not already present in the dictionary. If result is not\nNULL\n, then *result is set to a strong reference to either default_value, if the key was not present, or the existing value, if key was already present in the dictionary. Returns 1\nif the key was present and default_value was not inserted, or 0\nif the key was not present and default_value was inserted. 
On failure, returns -1\n, sets an exception, and sets *result\nto NULL\n.\nFor clarity: if you have a strong reference to default_value before calling this function, then after it returns, you hold a strong reference to both default_value and *result (if it\u2019s not\nNULL\n). These may refer to the same object: in that case you hold two separate references to it.\nAdded in version 3.13.\n-\nint PyDict_Pop(PyObject *p, PyObject *key, PyObject **result)\u00b6\nRemove key from dictionary p and optionally return the removed value. Do not raise\nKeyError\nif the key is missing.\nIf the key is present, set *result to a new reference to the removed value if result is not\nNULL\n, and return 1\n.\nIf the key is missing, set *result to\nNULL\nif result is not NULL\n, and return 0\n.\nOn error, raise an exception and return\n-1\n.\nSimilar to\ndict.pop()\n, but without the default value and not raising KeyError\nif the key is missing.\nAdded in version 3.13.\n-\nint PyDict_PopString(PyObject *p, const char *key, PyObject **result)\u00b6\nSimilar to\nPyDict_Pop()\n, but key is specified as a const char* UTF-8 encoded bytes string, rather than a PyObject*.\nAdded in version 3.13.\n-\nPyObject *PyDict_Items(PyObject *p)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a\nPyListObject\ncontaining all the items from the dictionary.\n-\nPyObject *PyDict_Keys(PyObject *p)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a\nPyListObject\ncontaining all the keys from the dictionary.\n-\nPyObject *PyDict_Values(PyObject *p)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a\nPyListObject\ncontaining all the values from the dictionary p.\n-\nPy_ssize_t PyDict_Size(PyObject *p)\u00b6\n- Part of the Stable ABI.\nReturn the number of items in the dictionary. 
This is equivalent to\nlen(p)\non a dictionary.\n-\nPy_ssize_t PyDict_GET_SIZE(PyObject *p)\u00b6\nSimilar to\nPyDict_Size()\n, but without error checking.\n-\nint PyDict_Next(PyObject *p, Py_ssize_t *ppos, PyObject **pkey, PyObject **pvalue)\u00b6\n- Part of the Stable ABI.\nIterate over all key-value pairs in the dictionary p. The\nPy_ssize_t\nreferred to by ppos must be initialized to 0\nprior to the first call to this function to start the iteration; the function returns true for each pair in the dictionary, and false once all pairs have been reported. The parameters pkey and pvalue should either point to PyObject* variables that will be filled in with each key and value, respectively, or may be NULL\n. Any references returned through them are borrowed. ppos should not be altered during iteration. Its value represents offsets within the internal dictionary structure, and since the structure is sparse, the offsets are not consecutive.\nFor example:\nPyObject *key, *value;\nPy_ssize_t pos = 0;\nwhile (PyDict_Next(self->dict, &pos, &key, &value)) {\n/* do something interesting with the values... */\n...\n}\nThe dictionary p should not be mutated during iteration. It is safe to modify the values of the keys as you iterate over the dictionary, but only so long as the set of keys does not change. For example:\nPyObject *key, *value;\nPy_ssize_t pos = 0;\nwhile (PyDict_Next(self->dict, &pos, &key, &value)) {\nlong i = PyLong_AsLong(value);\nif (i == -1 && PyErr_Occurred()) {\nreturn -1;\n}\nPyObject *o = PyLong_FromLong(i + 1);\nif (o == NULL)\nreturn -1;\nif (PyDict_SetItem(self->dict, key, o) < 0) {\nPy_DECREF(o);\nreturn -1;\n}\nPy_DECREF(o);\n}\nThe function is not thread-safe in the free-threaded build without external synchronization. You can use\nPy_BEGIN_CRITICAL_SECTION\nto lock the dictionary while iterating over it:\nPy_BEGIN_CRITICAL_SECTION(self->dict);\nwhile (PyDict_Next(self->dict, &pos, &key, &value)) {\n... 
}\nPy_END_CRITICAL_SECTION();\nNote\nOn the free-threaded build, this function can be used safely inside a critical section. However, the references returned for pkey and pvalue are borrowed and are only valid while the critical section is held. If you need to use these objects outside the critical section or when the critical section can be suspended, create a strong reference (for example, using Py_NewRef()).\n- int PyDict_Merge(PyObject *a, PyObject *b, int override)\u00b6\n- Part of the Stable ABI.\nIterate over mapping object b adding key-value pairs to dictionary a. b may be a dictionary, or any object supporting PyMapping_Keys() and PyObject_GetItem(). If override is true, existing pairs in a will be replaced if a matching key is found in b, otherwise pairs will only be added if there is not a matching key in a. Return 0 on success or -1 if an exception was raised.\n- int PyDict_Update(PyObject *a, PyObject *b)\u00b6\n- Part of the Stable ABI.\nThis is the same as PyDict_Merge(a, b, 1) in C, and is similar to a.update(b) in Python except that PyDict_Update() doesn\u2019t fall back to iterating over a sequence of key-value pairs if the second argument has no \u201ckeys\u201d attribute. Return 0 on success or -1 if an exception was raised.\n- int PyDict_MergeFromSeq2(PyObject *a, PyObject *seq2, int override)\u00b6\n- Part of the Stable ABI.\nUpdate or merge into dictionary a, from the key-value pairs in seq2. seq2 must be an iterable object producing iterable objects of length 2, viewed as key-value pairs. In case of duplicate keys, the last wins if override is true, else the first wins. Return 0 on success or -1 if an exception was raised. Equivalent Python (except for the return value):\ndef PyDict_MergeFromSeq2(a, seq2, override):\n    for key, value in seq2:\n        if override or key not in a:\n            a[key] = value\n- int PyDict_AddWatcher(PyDict_WatchCallback callback)\u00b6\nRegister callback as a dictionary watcher. 
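The documented Python equivalent of PyDict_MergeFromSeq2 can be exercised directly to see the override semantics (last wins vs. first wins for duplicate keys). A runnable sketch, with the function renamed merge_from_seq2 to make clear it is an illustration rather than the C function itself:

```python
def merge_from_seq2(a, seq2, override):
    # Same logic as the documented Python equivalent, minus the C return value.
    for key, value in seq2:
        if override or key not in a:
            a[key] = value

d1 = {"x": 1}
merge_from_seq2(d1, [("x", 2), ("y", 3)], override=True)
print(d1)  # {'x': 2, 'y': 3}  -- last wins

d2 = {"x": 1}
merge_from_seq2(d2, [("x", 2), ("y", 3)], override=False)
print(d2)  # {'x': 1, 'y': 3}  -- first (existing) wins
```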
Return a non-negative integer id which must be passed to future calls to PyDict_Watch(). In case of error (e.g. no more watcher IDs available), return -1 and set an exception.\nAdded in version 3.12.\n- int PyDict_ClearWatcher(int watcher_id)\u00b6\nClear watcher identified by watcher_id previously returned from PyDict_AddWatcher(). Return 0 on success, -1 on error (e.g. if the given watcher_id was never registered).\nAdded in version 3.12.\n- int PyDict_Watch(int watcher_id, PyObject *dict)\u00b6\nMark dictionary dict as watched. The callback granted watcher_id by PyDict_AddWatcher() will be called when dict is modified or deallocated. Return 0 on success or -1 on error.\nAdded in version 3.12.\n- int PyDict_Unwatch(int watcher_id, PyObject *dict)\u00b6\nMark dictionary dict as no longer watched. The callback granted watcher_id by PyDict_AddWatcher() will no longer be called when dict is modified or deallocated. The dict must previously have been watched by this watcher. Return 0 on success or -1 on error.\nAdded in version 3.12.\n- type PyDict_WatchEvent\u00b6\nEnumeration of possible dictionary watcher events: PyDict_EVENT_ADDED, PyDict_EVENT_MODIFIED, PyDict_EVENT_DELETED, PyDict_EVENT_CLONED, PyDict_EVENT_CLEARED, or PyDict_EVENT_DEALLOCATED.\nAdded in version 3.12.\n- typedef int (*PyDict_WatchCallback)(PyDict_WatchEvent event, PyObject *dict, PyObject *key, PyObject *new_value)\u00b6\nType of a dict watcher callback function.\nIf event is PyDict_EVENT_CLEARED or PyDict_EVENT_DEALLOCATED, both key and new_value will be NULL. If event is PyDict_EVENT_ADDED or PyDict_EVENT_MODIFIED, new_value will be the new value for key. If event is PyDict_EVENT_DELETED, key is being deleted from the dictionary and new_value will be NULL.\nPyDict_EVENT_CLONED occurs when dict was previously empty and another dict is merged into it. 
To maintain efficiency of this operation, per-key PyDict_EVENT_ADDED events are not issued in this case; instead a single PyDict_EVENT_CLONED is issued, and key will be the source dictionary.\nThe callback may inspect but must not modify dict; doing so could have unpredictable effects, including infinite recursion. Do not trigger Python code execution in the callback, as it could modify the dict as a side effect.\nIf event is PyDict_EVENT_DEALLOCATED, taking a new reference in the callback to the about-to-be-destroyed dictionary will resurrect it and prevent it from being freed at this time. When the resurrected object is destroyed later, any watcher callbacks active at that time will be called again.\nCallbacks occur before the notified modification to dict takes place, so the prior state of dict can be inspected.\nIf the callback sets an exception, it must return -1; this exception will be printed as an unraisable exception using PyErr_WriteUnraisable(). Otherwise it should return 0.\nThere may already be a pending exception set on entry to the callback. In this case, the callback should return 0 with the same exception still set. This means the callback may not call any other API that can set an exception unless it saves and clears the exception state first, and restores it before returning.\nAdded in version 3.12.\nDictionary View Objects\u00b6\n- int PyDictViewSet_Check(PyObject *op)\u00b6\nReturn true if op is a view of a set inside a dictionary. This is currently equivalent to PyDictKeys_Check(op) || PyDictItems_Check(op). This function always succeeds.\n- PyTypeObject PyDictKeys_Type\u00b6\n- Part of the Stable ABI.\nType object for a view of dictionary keys. In Python, this is the type of the object returned by dict.keys().\n- int PyDictKeys_Check(PyObject *op)\u00b6\nReturn true if op is an instance of a dictionary keys view. 
This function always succeeds.\n- PyTypeObject PyDictValues_Type\u00b6\n- Part of the Stable ABI.\nType object for a view of dictionary values. In Python, this is the type of the object returned by dict.values().\n- int PyDictValues_Check(PyObject *op)\u00b6\nReturn true if op is an instance of a dictionary values view. This function always succeeds.\n- PyTypeObject PyDictItems_Type\u00b6\n- Part of the Stable ABI.\nType object for a view of dictionary items. In Python, this is the type of the object returned by dict.items().\nOrdered Dictionaries\u00b6\nPython\u2019s C API provides an interface for collections.OrderedDict from C.\nSince Python 3.7, dictionaries are ordered by default, so there is usually little need for these functions; prefer PyDict* where possible.\n- PyTypeObject PyODict_Type\u00b6\nType object for ordered dictionaries. This is the same object as collections.OrderedDict in the Python layer.\n- int PyODict_Check(PyObject *od)\u00b6\nReturn true if od is an ordered dictionary object or an instance of a subtype of the OrderedDict type. This function always succeeds.\n- int PyODict_CheckExact(PyObject *od)\u00b6\nReturn true if od is an ordered dictionary object, but not an instance of a subtype of the OrderedDict type. This function always succeeds.\n- PyTypeObject PyODictKeys_Type\u00b6\nAnalogous to PyDictKeys_Type for ordered dictionaries.\n- PyTypeObject PyODictValues_Type\u00b6\nAnalogous to PyDictValues_Type for ordered dictionaries.\n- PyTypeObject PyODictItems_Type\u00b6\nAnalogous to PyDictItems_Type for ordered dictionaries.\n- PyObject *PyODict_New(void)\u00b6\nReturn a new empty ordered dictionary, or NULL on failure.\nThis is analogous to PyDict_New().\n- int PyODict_SetItem(PyObject *od, PyObject *key, PyObject *value)\u00b6\nInsert value into the ordered dictionary od with a key of key. 
Return 0 on success or -1 with an exception set on failure.\nThis is analogous to PyDict_SetItem().\n- int PyODict_DelItem(PyObject *od, PyObject *key)\u00b6\nRemove the entry in the ordered dictionary od with key key. Return 0 on success or -1 with an exception set on failure.\nThis is analogous to PyDict_DelItem().\nThese are soft deprecated aliases to PyDict APIs.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 4384}
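The PyODict_New / PyODict_SetItem / PyODict_DelItem trio corresponds to ordinary collections.OrderedDict operations at the Python level; a quick sketch of the equivalent calls:

```python
from collections import OrderedDict

od = OrderedDict()       # PyODict_New()
od["a"] = 1              # PyODict_SetItem(od, "a", 1)
od["b"] = 2
del od["a"]              # PyODict_DelItem(od, "a")
print(list(od.items()))  # [('b', 2)]
```

Since Python 3.7 a plain dict preserves insertion order as well, which is why the docs suggest preferring the PyDict* functions.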
{"url": "https://docs.python.org/3/c-api/none.html", "title": "The ", "content": "The None\nObject\u00b6\nNote that the PyTypeObject\nfor None\nis not directly exposed in the\nPython/C API. Since None\nis a singleton, testing for object identity (using\n==\nin C) is sufficient. There is no PyNone_Check()\nfunction for the\nsame reason.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 60}
{"url": "https://docs.python.org/3/library/uu.html", "title": " \u2014 Encode and decode uuencode files", "content": "uu\n\u2014 Encode and decode uuencode files\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the uu\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 83}
{"url": "https://docs.python.org/3/library/telnetlib.html", "title": " \u2014 Telnet client", "content": "telnetlib\n\u2014 Telnet client\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nPossible replacements are third-party libraries from PyPI: telnetlib3 or Exscript. These are not supported or maintained by the Python core team.\nThe last version of Python that provided the telnetlib\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 118}
{"url": "https://docs.python.org/3/library/sunau.html", "title": " \u2014 Read and write Sun AU files", "content": "sunau\n\u2014 Read and write Sun AU files\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nThe last version of Python that provided the sunau\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 83}
{"url": "https://docs.python.org/3/library/spwd.html", "title": " \u2014 The shadow password database", "content": "spwd\n\u2014 The shadow password database\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nA possible replacement is the third-party library python-pam. This library is not supported or maintained by the Python core team.\nThe last version of Python that provided the spwd\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 116}
{"url": "https://docs.python.org/3/library/sndhdr.html", "title": " \u2014 Determine type of sound file", "content": "sndhdr\n\u2014 Determine type of sound file\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nPossible replacements are third-party modules from PyPI: filetype, puremagic, or python-magic. These are not supported or maintained by the Python core team.\nThe last version of Python that provided the sndhdr\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 124}
{"url": "https://docs.python.org/3/library/smtpd.html", "title": " \u2014 SMTP Server", "content": "smtpd\n\u2014 SMTP Server\u00b6\nDeprecated since version 3.6, removed in version 3.12.\nThis module is no longer part of the Python standard library. It was removed in Python 3.12 after being deprecated in Python 3.6. The removal was decided in PEP 594.\nA possible replacement is the third-party aiosmtpd library. This library is not maintained or supported by the Python core team.\nThe last version of Python that provided the smtpd\nmodule was\nPython 3.11.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 111}
{"url": "https://docs.python.org/3/library/pipes.html", "title": " \u2014 Interface to shell pipelines", "content": "pipes\n\u2014 Interface to shell pipelines\u00b6\nDeprecated since version 3.11, removed in version 3.13.\nThis module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594.\nApplications should use the subprocess\nmodule instead.\nThe last version of Python that provided the pipes\nmodule was\nPython 3.12.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 97}
{"url": "https://docs.python.org/3/library/asyncio-future.html", "title": "Futures", "content": "Futures\u00b6\nSource code: Lib/asyncio/futures.py, Lib/asyncio/base_futures.py\nFuture objects are used to bridge low-level callback-based code with high-level async/await code.\nFuture Functions\u00b6\n- asyncio.isfuture(obj)\u00b6\nReturn\nTrue\nif obj is either of:an instance of\nasyncio.Future\n,an instance of\nasyncio.Task\n,a Future-like object with a\n_asyncio_future_blocking\nattribute.\nAdded in version 3.5.\n- asyncio.ensure_future(obj, *, loop=None)\u00b6\nReturn:\nobj argument as is, if obj is a\nFuture\n, aTask\n, or a Future-like object (isfuture()\nis used for the test.)a\nTask\nobject wrapping obj, if obj is a coroutine (iscoroutine()\nis used for the test); in this case the coroutine will be scheduled byensure_future()\n.a\nTask\nobject that would await on obj, if obj is an awaitable (inspect.isawaitable()\nis used for the test.)\nIf obj is neither of the above a\nTypeError\nis raised.Important\nSave a reference to the result of this function, to avoid a task disappearing mid-execution.\nSee also the\ncreate_task()\nfunction which is the preferred way for creating new tasks or useasyncio.TaskGroup\nwhich keeps reference to the task internally.Changed in version 3.5.1: The function accepts any awaitable object.\nDeprecated since version 3.10: Deprecation warning is emitted if obj is not a Future-like object and loop is not specified and there is no running event loop.\n- asyncio.wrap_future(future, *, loop=None)\u00b6\nWrap a\nconcurrent.futures.Future\nobject in aasyncio.Future\nobject.Deprecated since version 3.10: Deprecation warning is emitted if future is not a Future-like object and loop is not specified and there is no running event loop.\nFuture Object\u00b6\n- class asyncio.Future(*, loop=None)\u00b6\nA Future represents an eventual result of an asynchronous operation. Not thread-safe.\nFuture is an awaitable object. 
Coroutines can await on Future objects until they either have a result or an exception set, or until they are cancelled. A Future can be awaited multiple times and the result is same.\nTypically Futures are used to enable low-level callback-based code (e.g. in protocols implemented using asyncio transports) to interoperate with high-level async/await code.\nThe rule of thumb is to never expose Future objects in user-facing APIs, and the recommended way to create a Future object is to call\nloop.create_future()\n. This way alternative event loop implementations can inject their own optimized implementations of a Future object.Changed in version 3.7: Added support for the\ncontextvars\nmodule.Deprecated since version 3.10: Deprecation warning is emitted if loop is not specified and there is no running event loop.\n- result()\u00b6\nReturn the result of the Future.\nIf the Future is done and has a result set by the\nset_result()\nmethod, the result value is returned.If the Future is done and has an exception set by the\nset_exception()\nmethod, this method raises the exception.If the Future has been cancelled, this method raises a\nCancelledError\nexception.If the Future\u2019s result isn\u2019t yet available, this method raises an\nInvalidStateError\nexception.\n- set_result(result)\u00b6\nMark the Future as done and set its result.\nRaises an\nInvalidStateError\nerror if the Future is already done.\n- set_exception(exception)\u00b6\nMark the Future as done and set an exception.\nRaises an\nInvalidStateError\nerror if the Future is already done.\n- done()\u00b6\nReturn\nTrue\nif the Future is done.A Future is done if it was cancelled or if it has a result or an exception set with\nset_result()\norset_exception()\ncalls.\n- cancelled()\u00b6\nReturn\nTrue\nif the Future was cancelled.The method is usually used to check if a Future is not cancelled before setting a result or an exception for it:\nif not fut.cancelled(): fut.set_result(42)\n- add_done_callback(callback, 
*, context=None)\u00b6\nAdd a callback to be run when the Future is done.\nThe callback is called with the Future object as its only argument.\nIf the Future is already done when this method is called, the callback is scheduled with\nloop.call_soon()\n.An optional keyword-only context argument allows specifying a custom\ncontextvars.Context\nfor the callback to run in. The current context is used when no context is provided.functools.partial()\ncan be used to pass parameters to the callback, e.g.:# Call 'print(\"Future:\", fut)' when \"fut\" is done. fut.add_done_callback( functools.partial(print, \"Future:\"))\nChanged in version 3.7: The context keyword-only parameter was added. See PEP 567 for more details.\n- remove_done_callback(callback)\u00b6\nRemove callback from the callbacks list.\nReturns the number of callbacks removed, which is typically 1, unless a callback was added more than once.\n- cancel(msg=None)\u00b6\nCancel the Future and schedule callbacks.\nIf the Future is already done or cancelled, return\nFalse\n. 
Otherwise, change the Future\u2019s state to cancelled, schedule the callbacks, and returnTrue\n.Changed in version 3.9: Added the msg parameter.\n- exception()\u00b6\nReturn the exception that was set on this Future.\nThe exception (or\nNone\nif no exception was set) is returned only if the Future is done.If the Future has been cancelled, this method raises a\nCancelledError\nexception.If the Future isn\u2019t done yet, this method raises an\nInvalidStateError\nexception.\n- get_loop()\u00b6\nReturn the event loop the Future object is bound to.\nAdded in version 3.7.\nThis example creates a Future object, creates and schedules an asynchronous Task to set result for the Future, and waits until the Future has a result:\nasync def set_after(fut, delay, value):\n# Sleep for *delay* seconds.\nawait asyncio.sleep(delay)\n# Set *value* as a result of *fut* Future.\nfut.set_result(value)\nasync def main():\n# Get the current event loop.\nloop = asyncio.get_running_loop()\n# Create a new Future object.\nfut = loop.create_future()\n# Run \"set_after()\" coroutine in a parallel Task.\n# We are using the low-level \"loop.create_task()\" API here because\n# we already have a reference to the event loop at hand.\n# Otherwise we could have just used \"asyncio.create_task()\".\nloop.create_task(\nset_after(fut, 1, '... world'))\nprint('hello ...')\n# Wait until *fut* has a result (1 second) and print it.\nprint(await fut)\nasyncio.run(main())\nImportant\nThe Future object was designed to mimic\nconcurrent.futures.Future\n. Key differences include:\nunlike asyncio Futures,\nconcurrent.futures.Future\ninstances cannot be awaited.asyncio.Future.result()\nandasyncio.Future.exception()\ndo not accept the timeout argument.asyncio.Future.result()\nandasyncio.Future.exception()\nraise anInvalidStateError\nexception when the Future is not done.Callbacks registered with\nasyncio.Future.add_done_callback()\nare not called immediately. 
They are scheduled withloop.call_soon()\ninstead.asyncio Future is not compatible with the\nconcurrent.futures.wait()\nandconcurrent.futures.as_completed()\nfunctions.asyncio.Future.cancel()\naccepts an optionalmsg\nargument, butconcurrent.futures.Future.cancel()\ndoes not.", "code_snippets": [" ", " ", "\n ", "\n", "\n", "\n ", " ", "\n", " ", " ", " ", "\n ", "\n ", " ", "\n\n ", "\n ", "\n\n", " ", "\n ", "\n ", " ", " ", "\n\n ", "\n ", " ", " ", "\n\n ", "\n ", "\n ", "\n ", "\n ", "\n ", " ", " ", "\n\n ", "\n\n ", "\n ", " ", "\n\n", "\n"], "language": "Python", "source": "python.org", "token_count": 1714}
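The pieces above (loop.create_future(), set_result(), add_done_callback() with functools.partial) combine into a small self-contained run. This is a sketch of the documented pattern, not tied to any particular event loop implementation:

```python
import asyncio
import functools

results = []

def on_done(tag, fut):
    # Done callbacks receive the Future as their only call-time argument;
    # functools.partial binds the extra "tag" argument ahead of it.
    results.append((tag, fut.result()))

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()            # recommended Future constructor
    fut.add_done_callback(functools.partial(on_done, "done"))
    loop.call_soon(fut.set_result, 42)    # schedule the result for later
    return await fut                      # suspends until set_result runs

value = asyncio.run(main())
print(value)  # 42
```

Note that the callback is not invoked synchronously by set_result(); it is scheduled with loop.call_soon(), as the page above describes.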
{"url": "https://docs.python.org/3/whatsnew/3.3.html", "title": "What\u2019s New In Python 3.3", "content": "What\u2019s New In Python 3.3\u00b6\nThis article explains the new features in Python 3.3, compared to 3.2. Python 3.3 was released on September 29, 2012. For full details, see the changelog.\nSee also\nPEP 398 - Python 3.3 Release Schedule\nSummary \u2013 Release highlights\u00b6\nNew syntax features:\nNew\nyield from\nexpression for generator delegation.The\nu'unicode'\nsyntax is accepted again forstr\nobjects.\nNew library modules:\nfaulthandler\n(helps debugging low-level crashes)ipaddress\n(high-level objects representing IP addresses and masks)lzma\n(compress data using the XZ / LZMA algorithm)unittest.mock\n(replace parts of your system under test with mock objects)venv\n(Python virtual environments, as in the popularvirtualenv\npackage)\nNew built-in features:\nReworked I/O exception hierarchy.\nImplementation improvements:\nRewritten import machinery based on\nimportlib\n.More compact unicode strings.\nMore compact attribute dictionaries.\nSignificantly Improved Library Modules:\nC Accelerator for the decimal module.\nBetter unicode handling in the email module (provisional).\nSecurity improvements:\nHash randomization is switched on by default.\nPlease read on for a comprehensive list of user-facing changes.\nPEP 405: Virtual Environments\u00b6\nVirtual environments help create separate Python setups while sharing a\nsystem-wide base install, for ease of maintenance. Virtual environments\nhave their own set of private site packages (i.e. locally installed\nlibraries), and are optionally segregated from the system-wide site\npackages. Their concept and implementation are inspired by the popular\nvirtualenv\nthird-party package, but benefit from tighter integration\nwith the interpreter core.\nThis PEP adds the venv\nmodule for programmatic access, and the\npyvenv\nscript for command-line access and\nadministration. 
The Python interpreter checks for a pyvenv.cfg\n,\nfile whose existence signals the base of a virtual environment\u2019s directory\ntree.\nSee also\n- PEP 405 - Python Virtual Environments\nPEP written by Carl Meyer; implementation by Carl Meyer and Vinay Sajip\nPEP 420: Implicit Namespace Packages\u00b6\nNative support for package directories that don\u2019t require __init__.py\nmarker files and can automatically span multiple path segments (inspired by\nvarious third party approaches to namespace packages, as described in\nPEP 420)\nSee also\n- PEP 420 - Implicit Namespace Packages\nPEP written by Eric V. Smith; implementation by Eric V. Smith and Barry Warsaw\nPEP 3118: New memoryview implementation and buffer protocol documentation\u00b6\nThe implementation of PEP 3118 has been significantly improved.\nThe new memoryview implementation comprehensively fixes all ownership and lifetime issues of dynamically allocated fields in the Py_buffer struct that led to multiple crash reports. Additionally, several functions that crashed or returned incorrect results for non-contiguous or multi-dimensional input have been fixed.\nThe memoryview object now has a PEP-3118 compliant getbufferproc() that checks the consumer\u2019s request type. Many new features have been added, most of them work in full generality for non-contiguous arrays and arrays with suboffsets.\nThe documentation has been updated, clearly spelling out responsibilities for both exporters and consumers. Buffer request flags are grouped into basic and compound flags. 
The memory layout of non-contiguous and multi-dimensional NumPy-style arrays is explained.\nFeatures\u00b6\nAll native single character format specifiers in struct module syntax (optionally prefixed with \u2018@\u2019) are now supported.\nWith some restrictions, the cast() method allows changing of format and shape of C-contiguous arrays.\nMulti-dimensional list representations are supported for any array type.\nMulti-dimensional comparisons are supported for any array type.\nOne-dimensional memoryviews of hashable (read-only) types with formats B, b or c are now hashable. (Contributed by Antoine Pitrou in bpo-13411.)\nArbitrary slicing of any 1-D arrays type is supported. For example, it is now possible to reverse a memoryview in O(1) by using a negative step.\nAPI changes\u00b6\nThe maximum number of dimensions is officially limited to 64.\nThe representation of empty shape, strides and suboffsets is now an empty tuple instead of\nNone\n.Accessing a memoryview element with format \u2018B\u2019 (unsigned bytes) now returns an integer (in accordance with the struct module syntax). For returning a bytes object the view must be cast to \u2018c\u2019 first.\nmemoryview comparisons now use the logical structure of the operands and compare all array elements by value. All format strings in struct module syntax are supported. Views with unrecognised format strings are still permitted, but will always compare as unequal, regardless of view contents.\nFor further changes see Build and C API Changes and Porting C code.\n(Contributed by Stefan Krah in bpo-10181.)\nSee also\nPEP 3118 - Revising the Buffer Protocol\nPEP 393: Flexible String Representation\u00b6\nThe Unicode string type is changed to support multiple internal representations, depending on the character with the largest Unicode ordinal (1, 2, or 4 bytes) in the represented string. This allows a space-efficient representation in common cases, but gives access to full UCS-4 on all systems. 
For compatibility with existing APIs, several representations may exist in parallel; over time, this compatibility should be phased out.\nOn the Python side, there should be no downside to this change.\nOn the C API side, PEP 393 is fully backward compatible. The legacy API should remain available for at least five years. Applications using the legacy API will not fully benefit from the memory reduction, or - worse - may use a bit more memory, because Python may have to maintain two versions of each string (in the legacy format and in the new efficient storage).\nFunctionality\u00b6\nChanges introduced by PEP 393 are the following:\nPython now always supports the full range of Unicode code points, including non-BMP ones (i.e. from U+0000 to U+10FFFF). The distinction between narrow and wide builds no longer exists and Python now behaves like a wide build, even under Windows.\nWith the death of narrow builds, the problems specific to narrow builds have also been fixed, for example:\nlen() now always returns 1 for non-BMP characters, so len('\U0010FFFF') == 1;\nsurrogate pairs are not recombined in string literals, so '\uDBFF\uDFFF' != '\U0010FFFF';\nindexing or slicing non-BMP characters returns the expected value, so '\U0010FFFF'[0] now returns '\U0010FFFF' and not '\uDBFF';\nall other functions in the standard library now correctly handle non-BMP code points.\nThe value of sys.maxunicode is now always 1114111 (0x10FFFF in hexadecimal). 
The PyUnicode_GetMax() function still returns either 0xFFFF or 0x10FFFF for backward compatibility, and it should not be used with the new Unicode API (see bpo-13054).\nThe ./configure flag --with-wide-unicode has been removed.\nPerformance and resource usage\u00b6\nThe storage of Unicode strings now depends on the highest code point in the string:\npure ASCII and Latin1 strings (U+0000-U+00FF) use 1 byte per code point;\nBMP strings (U+0000-U+FFFF) use 2 bytes per code point;\nnon-BMP strings (U+10000-U+10FFFF) use 4 bytes per code point.\nThe net effect is that for most applications, memory usage of string storage should decrease significantly - especially compared to former wide unicode builds - as, in many cases, strings will be pure ASCII even in international contexts (because many strings store non-human language data, such as XML fragments, HTTP headers, JSON-encoded data, etc.). We also hope that it will, for the same reasons, increase CPU cache efficiency on non-trivial applications. The memory usage of Python 3.3 is two to three times smaller than Python 3.2, and a little bit better than Python 2.7, on a Django benchmark (see the PEP for details).\nSee also\n- PEP 393 - Flexible String Representation\nPEP written by Martin von L\u00f6wis; implementation by Torsten Becker and Martin von L\u00f6wis.\nPEP 397: Python Launcher for Windows\u00b6\nThe Python 3.3 Windows installer now includes a py launcher application that can be used to launch Python applications in a version independent fashion.\nThis launcher is invoked implicitly when double-clicking *.py files.\nIf only a single Python version is installed on the system, that version will be used to run the file. If multiple versions are installed, the most recent version is used by default, but this can be overridden by including a Unix-style \u201cshebang line\u201d in the Python script.\nThe launcher can also be used explicitly from the command line as the py application. 
Running py\nfollows the same version selection rules as\nimplicitly launching scripts, but a more specific version can be selected\nby passing appropriate arguments (such as -3\nto request Python 3 when\nPython 2 is also installed, or -2.6\nto specifically request an earlier\nPython version when a more recent version is installed).\nIn addition to the launcher, the Windows installer now includes an option to add the newly installed Python to the system PATH. (Contributed by Brian Curtin in bpo-3561.)\nSee also\n- PEP 397 - Python Launcher for Windows\nPEP written by Mark Hammond and Martin v. L\u00f6wis; implementation by Vinay Sajip.\nLauncher documentation: Python install manager\nInstaller PATH modification: Python install manager\nPEP 3151: Reworking the OS and IO exception hierarchy\u00b6\nThe hierarchy of exceptions raised by operating system errors is now both simplified and finer-grained.\nYou don\u2019t have to worry anymore about choosing the appropriate exception\ntype between OSError\n, IOError\n, EnvironmentError\n,\nWindowsError\n, mmap.error\n, socket.error\nor\nselect.error\n. All these exception types are now only one:\nOSError\n. The other names are kept as aliases for compatibility\nreasons.\nAlso, it is now easier to catch a specific error condition. Instead of\ninspecting the errno\nattribute (or args[0]\n) for a particular\nconstant from the errno\nmodule, you can catch the adequate\nOSError\nsubclass. The available subclasses are the following:\nAnd the ConnectionError\nitself has finer-grained subclasses:\nThanks to the new exceptions, common usages of the errno\ncan now be\navoided. 
For example, the following code written for Python 3.2:\nfrom errno import ENOENT, EACCES, EPERM\ntry:\nwith open(\"document.txt\") as f:\ncontent = f.read()\nexcept IOError as err:\nif err.errno == ENOENT:\nprint(\"document.txt file is missing\")\nelif err.errno in (EACCES, EPERM):\nprint(\"You are not allowed to read document.txt\")\nelse:\nraise\ncan now be written without the errno\nimport and without manual\ninspection of exception attributes:\ntry:\nwith open(\"document.txt\") as f:\ncontent = f.read()\nexcept FileNotFoundError:\nprint(\"document.txt file is missing\")\nexcept PermissionError:\nprint(\"You are not allowed to read document.txt\")\nSee also\n- PEP 3151 - Reworking the OS and IO Exception Hierarchy\nPEP written and implemented by Antoine Pitrou\nPEP 380: Syntax for Delegating to a Subgenerator\u00b6\nPEP 380 adds the yield from\nexpression, allowing a generator to\ndelegate\npart of its operations to another generator. This allows a section of code\ncontaining yield\nto be factored out and placed in another generator.\nAdditionally, the subgenerator is allowed to return with a value, and the\nvalue is made available to the delegating generator.\nWhile designed primarily for use in delegating to a subgenerator, the yield\nfrom\nexpression actually allows delegation to arbitrary subiterators.\nFor simple iterators, yield from iterable\nis essentially just a shortened\nform of for item in iterable: yield item\n:\n>>> def g(x):\n... yield from range(x, 0, -1)\n... yield from range(x)\n...\n>>> list(g(5))\n[5, 4, 3, 2, 1, 0, 1, 2, 3, 4]\nHowever, unlike an ordinary loop, yield from\nallows subgenerators to\nreceive sent and thrown values directly from the calling scope, and\nreturn a final value to the outer generator:\n>>> def accumulate():\n... tally = 0\n... while 1:\n... next = yield\n... if next is None:\n... return tally\n... tally += next\n...\n>>> def gather_tallies(tallies):\n... while 1:\n... tally = yield from accumulate()\n... 
...         tallies.append(tally)
...
>>> tallies = []
>>> acc = gather_tallies(tallies)
>>> next(acc)  # Ensure the accumulator is ready to accept values
>>> for i in range(4):
...     acc.send(i)
...
>>> acc.send(None)  # Finish the first tally
>>> for i in range(5):
...     acc.send(i)
...
>>> acc.send(None)  # Finish the second tally
>>> tallies
[6, 10]

The main principle driving this change is to allow even generators that are designed to be used with the send and throw methods to be split into multiple subgenerators as easily as a single large function can be split into multiple subfunctions.

See also
- PEP 380 - Syntax for Delegating to a Subgenerator
PEP written by Greg Ewing; implementation by Greg Ewing, integrated into 3.3 by Renaud Blanch, Ryan Kelly and Nick Coghlan; documentation by Zbigniew Jędrzejewski-Szmek and Nick Coghlan

PEP 409: Suppressing exception context

PEP 409 introduces new syntax that allows the display of the chained exception context to be disabled. This allows cleaner error messages in applications that convert between exception types:

>>> class D:
...     def __init__(self, extra):
...         self._extra_attributes = extra
...     def __getattr__(self, attr):
...         try:
...             return self._extra_attributes[attr]
...         except KeyError:
...             raise AttributeError(attr) from None
...
>>> D({}).x
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 8, in __getattr__
AttributeError: x

Without the from None suffix to suppress the cause, the original exception would be displayed by default:

>>> class C:
...     def __init__(self, extra):
...         self._extra_attributes = extra
...     def __getattr__(self, attr):
...         try:
...             return self._extra_attributes[attr]
...         except KeyError:
...             raise AttributeError(attr)
...
>>> C({}).x
Traceback (most recent call last):
  File "<stdin>", line 6, in __getattr__
KeyError: 'x'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 8, in __getattr__
AttributeError: x

No debugging capability is lost, as the original exception context remains available if needed (for example, if an intervening library has incorrectly suppressed valuable underlying details):

>>> try:
...     D({}).x
... except AttributeError as exc:
...     print(repr(exc.__context__))
...
KeyError('x',)

See also
- PEP 409 - Suppressing exception context
PEP written by Ethan Furman; implemented by Ethan Furman and Nick Coghlan.

PEP 414: Explicit Unicode literals

To ease the transition from Python 2 for Unicode aware Python applications that make heavy use of Unicode literals, Python 3.3 once again supports the "u" prefix for string literals. This prefix has no semantic significance in Python 3; it is provided solely to reduce the number of purely mechanical changes in migrating to Python 3, making it easier for developers to focus on the more significant semantic changes (such as the stricter default separation of binary and text data).

See also
- PEP 414 - Explicit Unicode literals
PEP written by Armin Ronacher.

PEP 3155: Qualified name for classes and functions

Functions and class objects have a new __qualname__ attribute representing the "path" from the module top-level to their definition. For global functions and classes, this is the same as __name__. For other functions and classes, it provides better information about where they were actually defined, and how they might be accessible from the global scope.

Example with (non-bound) methods:

>>> class C:
...     def meth(self):
...         pass
...
>>> C.meth.__name__
'meth'
>>> C.meth.__qualname__
'C.meth'

Example with nested classes:

>>> class C:
...     class D:
...         def meth(self):
...             pass
...
>>> C.D.__name__
'D'
>>> C.D.__qualname__
'C.D'
>>> C.D.meth.__name__
'meth'
>>> C.D.meth.__qualname__
'C.D.meth'

Example with nested functions:

>>> def outer():
...     def inner():
...         pass
...     return inner
...
>>> outer().__name__
'inner'
>>> outer().__qualname__
'outer.<locals>.inner'

The string representation of those objects is also changed to include the new, more precise information:

>>> str(C.D)
"<class '__main__.C.D'>"
>>> str(C.D.meth)
'<function C.D.meth at 0x...>'

See also
- PEP 3155 - Qualified name for classes and functions
PEP written and implemented by Antoine Pitrou.

PEP 412: Key-Sharing Dictionary

Dictionaries used for the storage of objects' attributes are now able to share part of their internal storage between each other (namely, the part which stores the keys and their respective hashes). This reduces the memory consumption of programs creating many instances of non-builtin types.

See also
- PEP 412 - Key-Sharing Dictionary
PEP written and implemented by Mark Shannon.

PEP 362: Function Signature Object

A new function inspect.signature() makes introspection of Python callables easy and straightforward. A broad range of callables is supported: Python functions, decorated or not, classes, and functools.partial() objects.
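As a brief illustration, here is a minimal sketch of the new introspection API (the greet function and its parameter names are hypothetical examples, not part of the standard library):

```python
import inspect

def greet(name, greeting="Hello", *, punctuation="!"):
    return f"{greeting}, {name}{punctuation}"

sig = inspect.signature(greet)
params = list(sig.parameters)       # parameter names, in declaration order
print(params)                       # ['name', 'greeting', 'punctuation']

# bind() validates a call against the signature without invoking greet
bound = sig.bind("world", greeting="Hi")
print(bound.arguments)
```

Because bind() raises TypeError on a mismatched call, it is handy for decorators that need to validate arguments before delegating.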
New classes inspect.Signature, inspect.Parameter and inspect.BoundArguments hold information about call signatures, such as annotations, default values, parameter kinds, and bound arguments, which considerably simplifies writing decorators and any code that validates or amends calling signatures or arguments.

See also
- PEP 362 - Function Signature Object
PEP written by Brett Cannon, Yury Selivanov, Larry Hastings, Jiwon Seo; implemented by Yury Selivanov.

PEP 421: Adding sys.implementation

A new attribute on the sys module exposes details specific to the implementation of the currently running interpreter. The initial set of attributes on sys.implementation are name, version, hexversion, and cache_tag.

The intention of sys.implementation is to consolidate into one namespace the implementation-specific data used by the standard library. This allows different Python implementations to share a single standard library code base much more easily. In its initial state, sys.implementation holds only a small portion of the implementation-specific data. Over time that ratio will shift in order to make the standard library more portable.

One example of improved standard library portability is cache_tag. As of Python 3.3, sys.implementation.cache_tag is used by importlib to support PEP 3147 compliance. Any Python implementation that uses importlib for its built-in import system may use cache_tag to control the caching behavior for modules.

SimpleNamespace

The implementation of sys.implementation also introduces a new type to Python: types.SimpleNamespace. In contrast to a mapping-based namespace, like dict, SimpleNamespace is attribute-based, like object. However, unlike object, SimpleNamespace instances are writable.
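For instance (a minimal sketch; the attribute names are arbitrary):

```python
from types import SimpleNamespace

ns = SimpleNamespace(x=1, y=2)
ns.z = 3      # add an attribute
ns.x = 10     # modify an attribute
del ns.y      # remove an attribute
print(ns)     # namespace(x=10, z=3)
```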
This means that you can add, remove, and modify the namespace through normal attribute access.

See also
- PEP 421 - Adding sys.implementation
PEP written and implemented by Eric Snow.

Using importlib as the Implementation of Import

bpo-2377 - Replace __import__ w/ importlib.__import__
bpo-13959 - Re-implement parts of imp in pure Python
bpo-14605 - Make import machinery explicit
bpo-14646 - Require loaders set __loader__ and __package__

The __import__() function is now powered by importlib.__import__(). This work leads to the completion of "phase 2" of PEP 302. There are multiple benefits to this change. First, it has allowed for more of the machinery powering import to be exposed instead of being implicit and hidden within the C code. It also provides a single implementation for all Python VMs supporting Python 3.3 to use, helping to end any VM-specific deviations in import semantics. And finally it eases the maintenance of import, allowing for future growth to occur.

For the common user, there should be no visible change in semantics. For those whose code currently manipulates import or calls import programmatically, the code changes that might possibly be required are covered in the Porting Python code section of this document.

New APIs

One of the large benefits of this work is the exposure of what goes into making the import statement work. That means the various importers that were once implicit are now fully exposed as part of the importlib package.

The abstract base classes defined in importlib.abc have been expanded to properly delineate between meta path finders and path entry finders by introducing importlib.abc.MetaPathFinder and importlib.abc.PathEntryFinder, respectively.
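These ABCs can be subclassed directly. Below is a minimal sketch of a meta path finder that declines every import (the NullFinder name is hypothetical; note the sketch uses the find_spec method name from later releases so it runs on current interpreters, whereas Python 3.3 itself used find_module):

```python
import importlib.abc
import sys

class NullFinder(importlib.abc.MetaPathFinder):
    """Hypothetical finder that declines every import request."""

    def find_spec(self, fullname, path, target=None):
        return None  # defer to the remaining finders on sys.meta_path

finder = NullFinder()
sys.meta_path.insert(0, finder)
try:
    import json  # still succeeds: our finder simply defers
finally:
    sys.meta_path.remove(finder)
```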
The old ABC of importlib.abc.Finder is now only provided for backwards-compatibility and does not enforce any method requirements.

In terms of finders, importlib.machinery.FileFinder exposes the mechanism used to search for source and bytecode files of a module. Previously this class was an implicit member of sys.path_hooks.

For loaders, the new abstract base class importlib.abc.FileLoader helps write a loader that uses the file system as the storage mechanism for a module's code. The loaders for source files (importlib.machinery.SourceFileLoader), sourceless bytecode files (importlib.machinery.SourcelessFileLoader), and extension modules (importlib.machinery.ExtensionFileLoader) are now available for direct use.

ImportError now has name and path attributes which are set when there is relevant data to provide. The message for failed imports will also provide the full name of the module now instead of just the tail end of the module's name.

The importlib.invalidate_caches() function will now call the method with the same name on all finders cached in sys.path_importer_cache to help clean up any stored state as necessary.

Visible Changes

For potential required changes to code, see the Porting Python code section.

Beyond the expanse of what importlib now exposes, there are other visible changes to import. The biggest is that sys.meta_path and sys.path_hooks now store all of the meta path finders and path entry hooks used by import. Previously the finders were implicit and hidden within the C code of import instead of being directly exposed. This means that one can now easily remove or change the order of the various finders to fit one's needs.

Another change is that all modules have a __loader__ attribute, storing the loader used to create the module.
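Both the __loader__ attribute and the new ImportError attributes can be seen from a short sketch (the missing module name is hypothetical; the exact loader repr varies by implementation and version):

```python
import importlib

# Every imported module carries a reference to the loader that created it.
mod = importlib.import_module("json")
print(mod.__loader__)

# ImportError now records the full name of the module that could not be found.
try:
    importlib.import_module("no_such_module_xyz")  # hypothetical missing name
except ImportError as err:
    missing = err.name

print(missing)  # no_such_module_xyz
```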
PEP 302 has been updated to make this attribute mandatory for loaders to implement, so in the future once 3rd-party loaders have been updated people will be able to rely on the existence of the attribute. Until such time, though, import is setting the module post-load.

Loaders are also now expected to set the __package__ attribute from PEP 366. Once again, import itself is already setting this on all loaders from importlib and import itself is setting the attribute post-load.

None is now inserted into sys.path_importer_cache when no finder can be found on sys.path_hooks. Since imp.NullImporter is not directly exposed on sys.path_hooks it could no longer be relied upon to always be available to use as a value representing no finder found.

All other changes relate to semantic changes which should be taken into consideration when updating code for Python 3.3, and thus should be read about in the Porting Python code section of this document.

(Implementation by Brett Cannon)

Other Language Changes

Some smaller changes made to the core Python language are:

- Added support for Unicode name aliases and named sequences. Both unicodedata.lookup() and '\N{...}' now resolve name aliases, and unicodedata.lookup() resolves named sequences too. (Contributed by Ezio Melotti in bpo-12753.)
- Unicode database updated to UCD version 6.1.0.
- Equality comparisons on range() objects now return a result reflecting the equality of the underlying sequences generated by those range objects. (bpo-13201)
- The count(), find(), rfind(), index() and rindex() methods of bytes and bytearray objects now accept an integer between 0 and 255 as their first argument. (Contributed by Petri Lehtinen in bpo-12170.)
- The rjust(), ljust(), and center() methods of bytes and bytearray now accept a bytearray for the fill argument. (Contributed by Petri Lehtinen in bpo-12380.)
- New methods have been added to list and bytearray: copy() and clear() (bpo-10516).
Consequently, MutableSequence now also defines a clear() method (bpo-11388).
- Raw bytes literals can now be written rb"..." as well as br"...". (Contributed by Antoine Pitrou in bpo-13748.)
- dict.setdefault() now does only one lookup for the given key, making it atomic when used with built-in types. (Contributed by Filip Gruszczyński in bpo-13521.)
- The error messages produced when a function call does not match the function signature have been significantly improved. (Contributed by Benjamin Peterson.)

A Finer-Grained Import Lock

Previous versions of CPython have always relied on a global import lock. This led to unexpected annoyances, such as deadlocks when importing a module would trigger code execution in a different thread as a side-effect. Clumsy workarounds were sometimes employed, such as the PyImport_ImportModuleNoBlock() C API function.

In Python 3.3, importing a module takes a per-module lock. This correctly serializes importation of a given module from multiple threads (preventing the exposure of incompletely initialized modules), while eliminating the aforementioned annoyances.

(Contributed by Antoine Pitrou in bpo-9260.)

Builtin functions and types

- open() gets a new opener parameter: the underlying file descriptor for the file object is then obtained by calling opener with (file, flags). It can be used, for example, to pass custom flags like os.O_CLOEXEC. The 'x' mode was added: open for exclusive creation, failing if the file already exists.
- print(): added the flush keyword argument. If the flush keyword argument is true, the stream is forcibly flushed.
- hash(): hash randomization is enabled by default, see object.__hash__() and PYTHONHASHSEED.
- The str type gets a new casefold() method: return a casefolded copy of the string; casefolded strings may be used for caseless matching.
For example, 'ß'.casefold() returns 'ss'.
- The sequence documentation has been substantially rewritten to better explain the binary/text sequence distinction and to provide specific documentation sections for the individual builtin sequence types (bpo-4966).

New Modules

faulthandler

This new debug module faulthandler contains functions to dump Python tracebacks explicitly, on a fault (a crash like a segmentation fault), after a timeout, or on a user signal. Call faulthandler.enable() to install fault handlers for the SIGSEGV, SIGFPE, SIGABRT, SIGBUS, and SIGILL signals. You can also enable them at startup by setting the PYTHONFAULTHANDLER environment variable or by using the -X faulthandler command line option.

Example of a segmentation fault on Linux:

$ python -q -X faulthandler
>>> import ctypes
>>> ctypes.string_at(0)
Fatal Python error: Segmentation fault

Current thread 0x00007fb899f39700:
  File "/home/python/cpython/Lib/ctypes/__init__.py", line 486 in string_at
  File "<stdin>", line 1 in <module>
Segmentation fault

ipaddress

The new ipaddress module provides tools for creating and manipulating objects representing IPv4 and IPv6 addresses, networks and interfaces (i.e. an IP address associated with a specific IP subnet). (Contributed by Google and Peter Moody in PEP 3144.)

lzma

The newly added lzma module provides data compression and decompression using the LZMA algorithm, including support for the .xz and .lzma file formats. (Contributed by Nadeem Vawda and Per Øyvind Karlsen in bpo-6715.)

Improved Modules

abc

Improved support for abstract base classes containing descriptors composed with abstract methods. The recommended approach to declaring abstract descriptors is now to provide __isabstractmethod__ as a dynamically updated property.
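A minimal sketch of the recommended composition, stacking property over abstractmethod (the Shape and Square names are hypothetical):

```python
from abc import ABCMeta, abstractmethod

class Shape(metaclass=ABCMeta):
    @property
    @abstractmethod
    def area(self):
        """Concrete subclasses must provide an area property."""

class Square(Shape):
    def __init__(self, side):
        self.side = side

    @property
    def area(self):
        return self.side * self.side

print(Square(3).area)  # 9
```

Because property itself now honors __isabstractmethod__, Shape cannot be instantiated until a subclass overrides area.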
The built-in descriptors have been updated accordingly.
- abc.abstractproperty has been deprecated; use property with abc.abstractmethod() instead.
- abc.abstractclassmethod has been deprecated; use classmethod with abc.abstractmethod() instead.
- abc.abstractstaticmethod has been deprecated; use staticmethod with abc.abstractmethod() instead.

(Contributed by Darren Dale in bpo-11610.)

abc.ABCMeta.register() now returns the registered subclass, which means it can now be used as a class decorator (bpo-10868).

array

The array module supports the long long type using q and Q type codes. (Contributed by Oren Tirosh and Hirokazu Yamamoto in bpo-1172711.)

base64

ASCII-only Unicode strings are now accepted by the decoding functions of the base64 modern interface. For example, base64.b64decode('YWJj') returns b'abc'. (Contributed by Catalin Iacob in bpo-13641.)

binascii

In addition to the binary objects they normally accept, the a2b_ functions now all also accept ASCII-only strings as input. (Contributed by Antoine Pitrou in bpo-13637.)

bz2

The bz2 module has been rewritten from scratch. In the process, several new features have been added:
- New bz2.open() function: open a bzip2-compressed file in binary or text mode.
- bz2.BZ2File can now read from and write to arbitrary file-like objects, by means of its constructor's fileobj argument. (Contributed by Nadeem Vawda in bpo-5863.)
- bz2.BZ2File and bz2.decompress() can now decompress multi-stream inputs (such as those produced by the pbzip2 tool). bz2.BZ2File can now also be used to create this type of file, using the 'a' (append) mode. (Contributed by Nir Aides in bpo-1625.)
- bz2.BZ2File now implements all of the io.BufferedIOBase API, except for the detach() and truncate() methods.

codecs

The mbcs codec has been rewritten to correctly handle the replace and ignore error handlers on all Windows versions.
The mbcs codec now supports all error handlers, instead of only replace to encode and ignore to decode.

A new Windows-only codec has been added: cp65001 (bpo-13216). It is the Windows code page 65001 (Windows UTF-8, CP_UTF8). For example, it is used by sys.stdout if the console output code page is set to cp65001 (e.g., using the chcp 65001 command).

Multibyte CJK decoders now resynchronize faster. They only ignore the first byte of an invalid byte sequence. For example, b'\xff\n'.decode('gb2312', 'replace') now returns a \n after the replacement character.

Incremental CJK codec encoders are no longer reset at each call to their encode() methods. For example:

>>> import codecs
>>> encoder = codecs.getincrementalencoder('hz')('strict')
>>> b''.join(encoder.encode(x) for x in '\u52ff\u65bd\u65bc\u4eba\u3002 Bye.')
b'~{NpJ)l6HK!#~} Bye.'

This example gives b'~{Np~}~{J)~}~{l6~}~{HK~}~{!#~} Bye.' with older Python versions.

The unicode_internal codec has been deprecated.

collections

Addition of a new ChainMap class to allow treating a number of mappings as a single unit. (Written by Raymond Hettinger for bpo-11089, made public in bpo-11297.)

The abstract base classes have been moved to a new collections.abc module, to better differentiate between the abstract and the concrete collections classes. Aliases for ABCs are still present in the collections module to preserve existing imports. (bpo-11085)

The Counter class now supports the unary + and - operators, as well as the in-place operators +=, -=, |=, and &=. (Contributed by Raymond Hettinger in bpo-13121.)

contextlib

ExitStack now provides a solid foundation for programmatic manipulation of context managers and similar cleanup functionality.
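A small sketch of the core idea, entering a variable number of context managers at once (in-memory streams stand in for real resources):

```python
from contextlib import ExitStack
import io

streams = [io.StringIO("alpha"), io.StringIO("beta")]

with ExitStack() as stack:
    # Enter an arbitrary number of context managers; every one entered so
    # far is closed on exit, even if a later enter_context() call fails.
    opened = [stack.enter_context(s) for s in streams]
    contents = [s.read() for s in opened]

print(contents)           # ['alpha', 'beta']
print(streams[0].closed)  # True
```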
Unlike the previous contextlib.nested API (which was deprecated and removed), the new API is designed to work correctly regardless of whether context managers acquire their resources in their __init__ method (for example, file objects) or in their __enter__ method (for example, synchronisation objects from the threading module).

crypt

Addition of salt and modular crypt format (hashing method) and the mksalt() function to the crypt module.

curses

- If the curses module is linked to the ncursesw library, use Unicode functions when Unicode strings or characters are passed (e.g. waddwstr()), and bytes functions otherwise (e.g. waddstr()).
- Use the locale encoding instead of utf-8 to encode Unicode strings.
- curses.window has a new curses.window.encoding attribute.
- The curses.window class has a new get_wch() method to get a wide character.
- The curses module has a new unget_wch() function to push a wide character so the next get_wch() will return it.

(Contributed by Iñigo Serna in bpo-6755.)

datetime

- Equality comparisons between naive and aware datetime instances now return False instead of raising TypeError (bpo-15006).
- New datetime.datetime.timestamp() method: Return the POSIX timestamp corresponding to the datetime instance.
- The datetime.datetime.strftime() method supports formatting years older than 1000.
- The datetime.datetime.astimezone() method can now be called without arguments to convert a datetime instance to the system timezone.

decimal

- bpo-7652 - integrate fast native decimal arithmetic. C-module and libmpdec written by Stefan Krah.

The new C version of the decimal module integrates the high speed libmpdec library for arbitrary precision correctly rounded decimal floating-point arithmetic. libmpdec conforms to IBM's General Decimal Arithmetic Specification.

Performance gains range from 10x for database applications to 100x for numerically intensive applications.
These numbers are expected gains for standard precisions used in decimal floating-point arithmetic. Since the precision is user configurable, the exact figures may vary. For example, in integer bignum arithmetic the differences can be significantly higher.

The following table is meant as an illustration. Benchmarks are available at https://www.bytereef.org/mpdecimal/quickstart.html.

  benchmark    decimal.py    _decimal    speedup
  pi           42.02s        0.345s      120x
  telco        172.19s       5.68s       30x
  psycopg      3.57s         0.29s       12x

Features

- The FloatOperation signal optionally enables stricter semantics for mixing floats and Decimals.
- If Python is compiled without threads, the C version automatically disables the expensive thread local context machinery. In this case, the variable HAVE_THREADS is set to False.

API changes

- The C module has the following context limits, depending on the machine architecture:
- In the context templates (DefaultContext, BasicContext and ExtendedContext) the magnitude of Emax and Emin has changed to 999999.
- The Decimal constructor in decimal.py does not observe the context limits and converts values with arbitrary exponents or precision exactly. Since the C version has internal limits, the following scheme is used: If possible, values are converted exactly, otherwise InvalidOperation is raised and the result is NaN. In the latter case it is always possible to use create_decimal() in order to obtain a rounded or inexact value.
- The power function in decimal.py is always correctly rounded. In the C version, it is defined in terms of the correctly rounded exp() and ln() functions, but the final result is only "almost always correctly rounded".
- In the C version, the context dictionary containing the signals is a MutableMapping. For speed reasons, flags and traps always refer to the same MutableMapping that the context was initialized with.
If a new signal dictionary is assigned, flags and traps are updated with the new values, but they do not reference the RHS dictionary.
- Pickling a Context produces a different output in order to have a common interchange format for the Python and C versions.
- The order of arguments in the Context constructor has been changed to match the order displayed by repr().
- The watchexp parameter in the quantize() method is deprecated.

email

Policy Framework

The email package now has a policy framework. A Policy is an object with several methods and properties that control how the email package behaves. The primary policy for Python 3.3 is the Compat32 policy, which provides backward compatibility with the email package in Python 3.2. A policy can be specified when an email message is parsed by a parser, or when a Message object is created, or when an email is serialized using a generator. Unless overridden, a policy passed to a parser is inherited by all the Message objects and sub-objects created by the parser. By default a generator will use the policy of the Message object it is serializing. The default policy is compat32.

The minimum set of controls implemented by all policy objects are:

- max_line_length: The maximum length, excluding the linesep character(s), individual lines may have when a Message is serialized. Defaults to 78.
- linesep: The character used to separate individual lines when a Message is serialized. Defaults to \n.
- cte_type: The type of Content Transfer Encodings that may be used when a Message is serialized: 7bit or 8bit.
- raise_on_defect: Causes a parser to raise errors when defects are encountered, instead of adding them to the Message object's defects list.

A new policy instance, with new settings, is created using the clone() method of policy objects. clone takes any of the above controls as keyword arguments. Any control not specified in the call retains its default value. Thus you can create a policy that uses \r\n linesep characters like this:

mypolicy = compat32.clone(linesep='\r\n')

Policies can be used to make the generation of messages in the format needed by your application simpler.
Instead of having to remember to specify linesep='\r\n' in all the places you call a generator, you can specify it once, when you set the policy used by the parser or the Message, whichever your program uses to create Message objects. On the other hand, if you need to generate messages in multiple forms, you can still specify the parameters in the appropriate generator call. Or you can have custom policy instances for your different cases, and pass those in when you create the generator.

Provisional Policy with New Header API

While the policy framework is worthwhile all by itself, the main motivation for introducing it is to allow the creation of new policies that implement new features for the email package in a way that maintains backward compatibility for those who do not use the new policies. Because the new policies introduce a new API, we are releasing them in Python 3.3 as a provisional policy. Backwards incompatible changes (up to and including removal of the code) may occur if deemed necessary by the core developers.

The new policies are instances of EmailPolicy, and add the following additional controls:

- refold_source: Controls whether or not headers parsed by a parser are refolded by the generator.
- header_factory: A callable that takes a name and a value and produces a custom header object.

The header_factory is the key to the new features provided by the new policies. When one of the new policies is used, any header retrieved from a Message object is an object produced by the header_factory, and any time you set a header on a Message it becomes an object produced by header_factory. All such header objects have a name attribute equal to the header name. Address and Date headers have additional attributes that give you access to the parsed data of the header.
This means you can now do things like this:

>>> m = Message(policy=SMTP)
>>> m['To'] = 'Éric <foo@example.com>'
>>> m['to']
'Éric <foo@example.com>'
>>> m['to'].addresses
(Address(display_name='Éric', username='foo', domain='example.com'),)
>>> m['to'].addresses[0].username
'foo'
>>> m['to'].addresses[0].display_name
'Éric'
>>> m['Date'] = email.utils.localtime()
>>> m['Date'].datetime
datetime.datetime(2012, 5, 25, 21, 39, 24, 465484, tzinfo=datetime.timezone(datetime.timedelta(-1, 72000), 'EDT'))
>>> m['Date']
'Fri, 25 May 2012 21:44:27 -0400'
>>> print(m)
To: =?utf-8?q?=C3=89ric?= <foo@example.com>
Date: Fri, 25 May 2012 21:44:27 -0400

You will note that the unicode display name is automatically encoded as utf-8 when the message is serialized, but that when the header is accessed directly, you get the unicode version. This eliminates any need to deal with the email.header decode_header() or make_header() functions.

You can also create addresses from parts:

>>> m['cc'] = [Group('pals', [Address('Bob', 'bob', 'example.com'),
...                           Address('Sally', 'sally', 'example.com')]),
...            Address('Bonzo', addr_spec='bonz@laugh.com')]
>>> print(m)
To: =?utf-8?q?=C3=89ric?= <foo@example.com>
Date: Fri, 25 May 2012 21:44:27 -0400
cc: pals: Bob <bob@example.com>, Sally <sally@example.com>;, Bonzo <bonz@laugh.com>

Decoding to unicode is done automatically:

>>> m2 = message_from_string(str(m))
>>> m2['to']
'Éric <foo@example.com>'

When you parse a message, you can use the addresses and groups attributes of the header objects to access the groups and individual addresses:

>>> m2['cc'].addresses
(Address(display_name='Bob', username='bob', domain='example.com'), Address(display_name='Sally', username='sally', domain='example.com'), Address(display_name='Bonzo', username='bonz', domain='laugh.com'))
>>> m2['cc'].groups
(Group(display_name='pals', addresses=(Address(display_name='Bob', username='bob', domain='example.com'), Address(display_name='Sally', username='sally', domain='example.com'))), Group(display_name=None, addresses=(Address(display_name='Bonzo', username='bonz', domain='laugh.com'),)))

In summary, if you use one of the new policies, header manipulation works the way it ought to: your application works with unicode strings, and the email package transparently encodes and decodes the unicode to and from the RFC standard Content Transfer Encodings.

Other API Changes

New BytesHeaderParser, added to the parser module to complement HeaderParser and complete the Bytes API.

New utility functions:
- format_datetime(): given a datetime, produce a string formatted for use in an email header.
- parsedate_to_datetime(): given a date string from an email header, convert it into an aware datetime, or a naive datetime if the offset is -0000.
- localtime(): With no argument, returns the current local time as an aware datetime using the local timezone.
Given an aware datetime, it converts it into an aware datetime using the local timezone.

ftplib

- ftplib.FTP now accepts a source_address keyword argument to specify the (host, port) to use as the source address in the bind call when creating the outgoing socket. (Contributed by Giampaolo Rodolà in bpo-8594.)
- The FTP_TLS class now provides a new ccc() function to revert the control channel back to plaintext. This can be useful to take advantage of firewalls that know how to handle NAT with non-secure FTP without opening fixed ports. (Contributed by Giampaolo Rodolà in bpo-12139.)
- Added the ftplib.FTP.mlsd() method, which provides a parsable directory listing format and deprecates ftplib.FTP.nlst() and ftplib.FTP.dir(). (Contributed by Giampaolo Rodolà in bpo-11072.)

functools

The functools.lru_cache() decorator now accepts a typed keyword argument (defaulting to False) to ensure that it caches values of different types that compare equal in separate cache slots. (Contributed by Raymond Hettinger in bpo-13227.)

gc

It is now possible to register callbacks invoked by the garbage collector before and after collection using the new callbacks list.

hmac

A new compare_digest() function has been added to prevent side channel attacks on digests through timing analysis. (Contributed by Nick Coghlan and Christian Heimes in bpo-15061.)

http

- http.server.BaseHTTPRequestHandler now buffers the headers and writes them all at once when end_headers() is called. A new method flush_headers() can be used to directly manage when the accumulated headers are sent. (Contributed by Andrew Schaaf in bpo-3709.)
- http.server now produces valid HTML 4.01 strict output. (Contributed by Ezio Melotti in bpo-13295.)
- http.client.HTTPResponse now has a readinto() method, which means it can be used as an io.RawIOBase class.
(Contributed by John Kuhn in\nbpo-13464.)\nhtml\u00b6\nhtml.parser.HTMLParser\nis now able to parse broken markup without\nraising errors, therefore the strict argument of the constructor and the\nHTMLParseError\nexception are now deprecated.\nThe ability to parse broken markup is the result of a number of bug fixes that\nare also available on the latest bug fix releases of Python 2.7/3.2.\n(Contributed by Ezio Melotti in bpo-15114, and bpo-14538,\nbpo-13993, bpo-13960, bpo-13358, bpo-1745761,\nbpo-755670, bpo-13357, bpo-12629, bpo-1200313,\nbpo-670664, bpo-13273, bpo-12888, bpo-7311.)\nA new html5\ndictionary that maps HTML5 named character\nreferences to the equivalent Unicode character(s) (e.g. html5['gt;'] ==\n'>'\n) has been added to the html.entities\nmodule. The dictionary is\nnow also used by HTMLParser\n. (Contributed by Ezio\nMelotti in bpo-11113 and bpo-15156.)\nimaplib\u00b6\nThe IMAP4_SSL\nconstructor now accepts an SSLContext\nparameter to control parameters of the secure channel.\n(Contributed by Sijin Joseph in bpo-8808.)\ninspect\u00b6\nA new getclosurevars()\nfunction has been added. This function\nreports the current binding of all names referenced from the function body and\nwhere those names were resolved, making it easier to verify correct internal\nstate when testing code that relies on stateful closures.\n(Contributed by Meador Inge and Nick Coghlan in bpo-13062.)\nA new getgeneratorlocals()\nfunction has been added. This\nfunction reports the current binding of local variables in the generator\u2019s\nstack frame, making it easier to verify correct internal state when testing\ngenerators.\n(Contributed by Meador Inge in bpo-15153.)\nio\u00b6\nThe open()\nfunction has a new 'x'\nmode that can be used to\nexclusively create a new file, and raise a FileExistsError\nif the file\nalready exists. 
It is based on the C11 'x' mode to fopen().
(Contributed by David Townshend in bpo-12760.)
The constructor of the TextIOWrapper class has a new write_through optional argument. If write_through is True, calls to write() are guaranteed not to be buffered: any data written on the TextIOWrapper object is immediately handed off to its underlying binary buffer.
itertools¶
accumulate() now takes an optional func argument for providing a user-supplied binary function.
logging¶
The basicConfig() function now supports an optional handlers argument taking an iterable of handlers to be added to the root logger.
A class-level attribute append_nul has been added to SysLogHandler to allow control of the appending of the NUL (\000) byte to syslog records, since for some daemons it is required while for others it is passed through to the log.
math¶
The math module has a new function, log2(), which returns the base-2 logarithm of x.
(Written by Mark Dickinson in bpo-11888.)
mmap¶
The read() method is now more compatible with other file-like objects: if the argument is omitted or specified as None, it returns the bytes from the current file position to the end of the mapping.
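The new func argument to itertools.accumulate() described above turns the default running sum into any binary reduction (the variable names here are just for illustration):

```python
import operator
from itertools import accumulate

data = [1, 2, 3, 4]
running_sums = list(accumulate(data))                    # default: addition
running_products = list(accumulate(data, operator.mul))  # user-supplied binary function
```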
(Contributed\nby Petri Lehtinen in bpo-12021.)\nmultiprocessing\u00b6\nThe new multiprocessing.connection.wait()\nfunction allows polling\nmultiple objects (such as connections, sockets and pipes) with a timeout.\n(Contributed by Richard Oudkerk in bpo-12328.)\nmultiprocessing.connection.Connection\nobjects can now be transferred\nover multiprocessing connections.\n(Contributed by Richard Oudkerk in bpo-4892.)\nmultiprocessing.Process\nnow accepts a daemon\nkeyword argument\nto override the default behavior of inheriting the daemon\nflag from\nthe parent process (bpo-6064).\nNew attribute multiprocessing.Process.sentinel\nallows a\nprogram to wait on multiple Process\nobjects at one\ntime using the appropriate OS primitives (for example, select\non\nposix systems).\nNew methods multiprocessing.pool.Pool.starmap()\nand\nstarmap_async()\nprovide\nitertools.starmap()\nequivalents to the existing\nmultiprocessing.pool.Pool.map()\nand\nmap_async()\nfunctions. (Contributed by Hynek\nSchlawack in bpo-12708.)\nnntplib\u00b6\nThe nntplib.NNTP\nclass now supports the context management protocol to\nunconditionally consume socket.error\nexceptions and to close the NNTP\nconnection when done:\n>>> from nntplib import NNTP\n>>> with NNTP('news.gmane.org') as n:\n... n.group('gmane.comp.python.committers')\n...\n('211 1755 1 1755 gmane.comp.python.committers', 1755, 1, 1755, 'gmane.comp.python.committers')\n>>>\n(Contributed by Giampaolo Rodol\u00e0 in bpo-9795.)\nos\u00b6\nThe\nos\nmodule has a newpipe2()\nfunction that makes it possible to create a pipe withO_CLOEXEC\norO_NONBLOCK\nflags set atomically. This is especially useful to avoid race conditions in multi-threaded programs.The\nos\nmodule has a newsendfile()\nfunction which provides an efficient \u201czero-copy\u201d way for copying data from one file (or socket) descriptor to another. 
The phrase \u201czero-copy\u201d refers to the fact that all of the copying of data between the two descriptors is done entirely by the kernel, with no copying of data into userspace buffers.sendfile()\ncan be used to efficiently copy data from a file on disk to a network socket, e.g. for downloading a file.(Patch submitted by Ross Lagerwall and Giampaolo Rodol\u00e0 in bpo-10882.)\nTo avoid race conditions like symlink attacks and issues with temporary files and directories, it is more reliable (and also faster) to manipulate file descriptors instead of file names. Python 3.3 enhances existing functions and introduces new functions to work on file descriptors (bpo-4761, bpo-10755 and bpo-14626).\nThe\nos\nmodule has a newfwalk()\nfunction similar towalk()\nexcept that it also yields file descriptors referring to the directories visited. This is especially useful to avoid symlink races.The following functions get new optional dir_fd (paths relative to directory descriptors) and/or follow_symlinks (not following symlinks):\naccess()\n,chflags()\n,chmod()\n,chown()\n,link()\n,lstat()\n,mkdir()\n,mkfifo()\n,mknod()\n,open()\n,readlink()\n,remove()\n,rename()\n,replace()\n,rmdir()\n,stat()\n,symlink()\n,unlink()\n,utime()\n. Platform support for using these parameters can be checked via the setsos.supports_dir_fd\nandos.supports_follow_symlinks\n.The following functions now support a file descriptor for their path argument:\nchdir()\n,chmod()\n,chown()\n,execve()\n,listdir()\n,pathconf()\n,exists()\n,stat()\n,statvfs()\n,utime()\n. Platform support for this can be checked via theos.supports_fd\nset.\naccess()\naccepts aneffective_ids\nkeyword argument to turn on using the effective uid/gid rather than the real uid/gid in the access check. Platform support for this can be checked via thesupports_effective_ids\nset.The\nos\nmodule has two new functions:getpriority()\nandsetpriority()\n. 
They can be used to get or set process niceness/priority in a fashion similar toos.nice()\nbut extended to all processes instead of just the current one.(Patch submitted by Giampaolo Rodol\u00e0 in bpo-10784.)\nThe new\nos.replace()\nfunction allows cross-platform renaming of a file with overwriting the destination. Withos.rename()\n, an existing destination file is overwritten under POSIX, but raises an error under Windows. (Contributed by Antoine Pitrou in bpo-8828.)The stat family of functions (\nstat()\n,fstat()\n, andlstat()\n) now support reading a file\u2019s timestamps with nanosecond precision. Symmetrically,utime()\ncan now write file timestamps with nanosecond precision. (Contributed by Larry Hastings in bpo-14127.)The new\nos.get_terminal_size()\nfunction queries the size of the terminal attached to a file descriptor. See alsoshutil.get_terminal_size()\n. (Contributed by Zbigniew J\u0119drzejewski-Szmek in bpo-13609.)\nNew functions to support Linux extended attributes (bpo-12720):\ngetxattr()\n,listxattr()\n,removexattr()\n,setxattr()\n.New interface to the scheduler. These functions control how a process is allocated CPU time by the operating system. 
New functions:\nsched_get_priority_max()\n,sched_get_priority_min()\n,sched_getaffinity()\n,sched_getparam()\n,sched_getscheduler()\n,sched_rr_get_interval()\n,sched_setaffinity()\n,sched_setparam()\n,sched_setscheduler()\n,sched_yield()\n,New functions to control the file system:\nposix_fadvise()\n: Announces an intention to access data in a specific pattern thus allowing the kernel to make optimizations.posix_fallocate()\n: Ensures that enough disk space is allocated for a file.sync()\n: Force write of everything to disk.\nAdditional new posix functions:\nlockf()\n: Apply, test or remove a POSIX lock on an open file descriptor.pread()\n: Read from a file descriptor at an offset, the file offset remains unchanged.pwrite()\n: Write to a file descriptor from an offset, leaving the file offset unchanged.readv()\n: Read from a file descriptor into a number of writable buffers.truncate()\n: Truncate the file corresponding to path, so that it is at most length bytes in size.waitid()\n: Wait for the completion of one or more child processes.writev()\n: Write the contents of buffers to a file descriptor, where buffers is an arbitrary sequence of buffers.getgrouplist()\n(bpo-9344): Return list of group ids that specified user belongs to.\ntimes()\nanduname()\n: Return type changed from a tuple to a tuple-like object with named attributes.Some platforms now support additional constants for the\nlseek()\nfunction, such asos.SEEK_HOLE\nandos.SEEK_DATA\n.New constants\nRTLD_LAZY\n,RTLD_NOW\n,RTLD_GLOBAL\n,RTLD_LOCAL\n,RTLD_NODELETE\n,RTLD_NOLOAD\n, andRTLD_DEEPBIND\nare available on platforms that support them. These are for use with thesys.setdlopenflags()\nfunction, and supersede the similar constants defined inctypes\nandDLFCN\n. 
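The new pread() and pwrite() functions listed above read and write at an explicit offset without moving the shared file position, which is useful when several threads or processes use one descriptor. A minimal POSIX-only sketch (the temporary-file setup is incidental to the example):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"hello world")
    # pwrite() writes at offset 6 without touching the file position.
    os.pwrite(fd, b"there", 6)
    # pread() likewise reads at an absolute offset.
    chunk = os.pread(fd, 5, 6)
finally:
    os.close(fd)
    os.unlink(path)
```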
(Contributed by Victor Stinner in bpo-13226.)
os.symlink() now accepts (and ignores) the target_is_directory keyword argument on non-Windows platforms, to ease cross-platform support.
pdb¶
Tab-completion is now available not only for command names, but also for their arguments. For example, for the break command, function and file names are completed.
(Contributed by Georg Brandl in bpo-14210.)
pickle¶
pickle.Pickler objects now have an optional dispatch_table attribute allowing per-pickler reduction functions to be set.
(Contributed by Richard Oudkerk in bpo-14166.)
pydoc¶
The Tk GUI and the serve() function have been removed from the pydoc module: pydoc -g and serve() had been deprecated in Python 3.2.
re¶
str regular expressions now support \u and \U escapes.
(Contributed by Serhiy Storchaka in bpo-3665.)
sched¶
run() now accepts a blocking parameter which, when set to false, makes the method execute the scheduled events due to expire soonest (if any) and then return immediately. This is useful in case you want to use the scheduler in non-blocking applications. (Contributed by Giampaolo Rodolà in bpo-13449.)
The scheduler class can now be safely used in multi-threaded environments. (Contributed by Josiah Carlson and Giampaolo Rodolà in bpo-8684.)
The timefunc and delayfunc parameters of the scheduler class constructor are now optional and default to time.time() and time.sleep() respectively. (Contributed by Chris Clark in bpo-13245.)
The enter() and enterabs() argument parameter is now optional. (Contributed by Chris Clark in bpo-13245.)
enter() and enterabs() now accept a kwargs parameter.
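The sched changes described above (default timefunc/delayfunc and the argument/kwargs parameters of enter()) can be combined as follows; the record() callback is made up for the example:

```python
import sched

events = []
s = sched.scheduler()  # timefunc/delayfunc now default to time.time/time.sleep

def record(name, repeat=1):
    events.append(name * repeat)

s.enter(0.01, 1, record, argument=('a',))
s.enter(0.02, 1, record, argument=('b',), kwargs={'repeat': 2})
s.run()  # blocks until both events have fired
```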
(Contributed by Chris Clark in bpo-13245.)\nselect\u00b6\nSolaris and derivative platforms have a new class select.devpoll\nfor high performance asynchronous sockets via /dev/poll\n.\n(Contributed by Jes\u00fas Cea Avi\u00f3n in bpo-6397.)\nshlex\u00b6\nThe previously undocumented helper function quote\nfrom the\npipes\nmodules has been moved to the shlex\nmodule and\ndocumented. quote()\nproperly escapes all characters in a string\nthat might be otherwise given special meaning by the shell.\nshutil\u00b6\nNew functions:\ndisk_usage()\n: provides total, used and free disk space statistics. (Contributed by Giampaolo Rodol\u00e0 in bpo-12442.)chown()\n: allows one to change user and/or group of the given path also specifying the user/group names and not only their numeric ids. (Contributed by Sandro Tosi in bpo-12191.)shutil.get_terminal_size()\n: returns the size of the terminal window to which the interpreter is attached. (Contributed by Zbigniew J\u0119drzejewski-Szmek in bpo-13609.)\ncopy2()\nandcopystat()\nnow preserve file timestamps with nanosecond precision on platforms that support it. They also preserve file \u201cextended attributes\u201d on Linux. (Contributed by Larry Hastings in bpo-14127 and bpo-15238.)Several functions now take an optional\nsymlinks\nargument: when that parameter is true, symlinks aren\u2019t dereferenced and the operation instead acts on the symlink itself (or creates one, if relevant). (Contributed by Hynek Schlawack in bpo-12715.)When copying files to a different file system,\nmove()\nnow handles symlinks the way the posixmv\ncommand does, recreating the symlink rather than copying the target file contents. (Contributed by Jonathan Niehof in bpo-9993.)move()\nnow also returns thedst\nargument as its result.rmtree()\nis now resistant to symlink attacks on platforms which support the newdir_fd\nparameter inos.open()\nandos.unlink()\n. 
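The newly documented shlex.quote() described above escapes anything the shell might treat specially; the filename here is a deliberately hostile example:

```python
import shlex

filename = "somefile; rm -rf ~"
# quote() wraps the string so the metacharacters reach the program literally.
command = "ls -l {}".format(shlex.quote(filename))
```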
(Contributed by Martin von Löwis and Hynek Schlawack in bpo-4489.)
signal¶
The signal module has new functions:
pthread_sigmask(): fetch and/or change the signal mask of the calling thread (Contributed by Jean-Paul Calderone in bpo-8407);
pthread_kill(): send a signal to a thread;
sigpending(): examine pending signals;
sigwait(): wait for a signal;
sigwaitinfo(): wait for a signal, returning detailed information about it;
sigtimedwait(): like sigwaitinfo() but with a timeout.
The signal handler now writes the signal number as a single byte instead of a nul byte into the wakeup file descriptor, so it is possible to wait for more than one signal and know which signals were raised.
signal.signal() and signal.siginterrupt() now raise an OSError instead of a RuntimeError: OSError has an errno attribute.
smtpd¶
The smtpd module now supports RFC 5321 (extended SMTP) and RFC 1870 (size extension). Per the standard, these extensions are enabled if and only if the client initiates the session with an EHLO command.
(Initial EHLO support by Alberto Trevino. Size extension by Juhana Jauhiainen. Substantial additional work on the patch contributed by Michele Orrù and Dan Boswell. bpo-8739)
smtplib¶
The SMTP, SMTP_SSL, and LMTP classes now accept a source_address keyword argument to specify the (host, port) to use as the source address in the bind call when creating the outgoing socket. (Contributed by Paulo Scardine in bpo-11281.)
SMTP now supports the context management protocol, allowing an SMTP instance to be used in a with statement. (Contributed by Giampaolo Rodolà in bpo-11289.)
The SMTP_SSL constructor and the starttls() method now accept an SSLContext parameter to control parameters of the secure channel.
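The new signal functions described above compose naturally: block a signal, let it become pending, then consume it synchronously without installing a handler. A POSIX-only sketch:

```python
import os
import signal

# Block SIGUSR1 in this thread, then send it to ourselves: it stays pending.
old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
os.kill(os.getpid(), signal.SIGUSR1)

pending = signal.sigpending()                 # the blocked signal shows up here
received = signal.sigwait({signal.SIGUSR1})   # consumes it; returns the signal number

signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)  # restore the original mask
```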
(Contributed by Kasun Herath in bpo-8809.)\nsocket\u00b6\nThe\nsocket\nclass now exposes additional methods to process ancillary data when supported by the underlying platform:(Contributed by David Watson in bpo-6560, based on an earlier patch by Heiko Wundram)\nThe\nsocket\nclass now supports the PF_CAN protocol family (https://en.wikipedia.org/wiki/Socketcan), on Linux (https://lwn.net/Articles/253425).(Contributed by Matthias Fuchs, updated by Tiago Gon\u00e7alves in bpo-10141.)\nThe\nsocket\nclass now supports the PF_RDS protocol family (https://en.wikipedia.org/wiki/Reliable_Datagram_Sockets and https://oss.oracle.com/projects/rds).The\nsocket\nclass now supports thePF_SYSTEM\nprotocol family on OS X. (Contributed by Michael Goderbauer in bpo-13777.)New function\nsethostname()\nallows the hostname to be set on Unix systems if the calling process has sufficient privileges. (Contributed by Ross Lagerwall in bpo-10866.)\nsocketserver\u00b6\nBaseServer\nnow has an overridable method\nservice_actions()\nthat is called by the\nserve_forever()\nmethod in the service loop.\nForkingMixIn\nnow uses this to clean up zombie\nchild processes. (Contributed by Justin Warkentin in bpo-11109.)\nsqlite3\u00b6\nNew sqlite3.Connection\nmethod\nset_trace_callback()\ncan be used to capture a trace of\nall sql commands processed by sqlite. (Contributed by Torsten Landschoff\nin bpo-11688.)\nssl\u00b6\nThe\nssl\nmodule has two new random generation functions:RAND_bytes()\n: generate cryptographically strong pseudo-random bytes.RAND_pseudo_bytes()\n: generate pseudo-random bytes.\n(Contributed by Victor Stinner in bpo-12049.)\nThe\nssl\nmodule now exposes a finer-grained exception hierarchy in order to make it easier to inspect the various kinds of errors. (Contributed by Antoine Pitrou in bpo-11183.)load_cert_chain()\nnow accepts a password argument to be used if the private key is encrypted. 
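The sqlite3 trace callback described above receives each SQL statement as a string, so a plain list.append makes a serviceable statement log:

```python
import sqlite3

statements = []
conn = sqlite3.connect(":memory:")
conn.set_trace_callback(statements.append)  # invoked once per executed statement

conn.execute("CREATE TABLE t (x)")
conn.execute("INSERT INTO t VALUES (1)")
conn.close()
```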
(Contributed by Adam Simpkins in bpo-12803.)Diffie-Hellman key exchange, both regular and Elliptic Curve-based, is now supported through the\nload_dh_params()\nandset_ecdh_curve()\nmethods. (Contributed by Antoine Pitrou in bpo-13626 and bpo-13627.)SSL sockets have a new\nget_channel_binding()\nmethod allowing the implementation of certain authentication mechanisms such as SCRAM-SHA-1-PLUS. (Contributed by Jacek Konieczny in bpo-12551.)You can query the SSL compression algorithm used by an SSL socket, thanks to its new\ncompression()\nmethod. The new attributeOP_NO_COMPRESSION\ncan be used to disable compression. (Contributed by Antoine Pitrou in bpo-13634.)Support has been added for the Next Protocol Negotiation extension using the\nssl.SSLContext.set_npn_protocols()\nmethod. (Contributed by Colin Marc in bpo-14204.)SSL errors can now be introspected more easily thanks to\nlibrary\nandreason\nattributes. (Contributed by Antoine Pitrou in bpo-14837.)The\nget_server_certificate()\nfunction now supports IPv6. (Contributed by Charles-Fran\u00e7ois Natali in bpo-11811.)New attribute\nOP_CIPHER_SERVER_PREFERENCE\nallows setting SSLv3 server sockets to use the server\u2019s cipher ordering preference rather than the client\u2019s (bpo-13635).\nstat\u00b6\nThe undocumented tarfile.filemode function has been moved to\nstat.filemode()\n. It can be used to convert a file\u2019s mode to a string of\nthe form \u2018-rwxrwxrwx\u2019.\n(Contributed by Giampaolo Rodol\u00e0 in bpo-14807.)\nstruct\u00b6\nThe struct\nmodule now supports ssize_t\nand size_t\nvia the\nnew codes n\nand N\n, respectively. (Contributed by Antoine Pitrou\nin bpo-3163.)\nsubprocess\u00b6\nCommand strings can now be bytes objects on posix platforms. (Contributed by Victor Stinner in bpo-8513.)\nA new constant DEVNULL\nallows suppressing output in a\nplatform-independent fashion. 
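The stat.filemode() function and the new struct codes mentioned above are both one-liners in practice (0o100644 is an ordinary regular-file mode used for illustration):

```python
import stat
import struct

# filemode() renders a raw st_mode as the familiar ls-style string.
mode_string = stat.filemode(0o100644)

# The new 'n' code packs a native ssize_t; 'N' is its unsigned counterpart.
packed = struct.pack('n', -1)
(value,) = struct.unpack('n', packed)
```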
(Contributed by Ross Lagerwall in bpo-5870.)
sys¶
The sys module has a new thread_info named tuple holding information about the thread implementation (bpo-11223).
tarfile¶
tarfile now supports lzma encoding via the lzma module.
(Contributed by Lars Gustäbel in bpo-5689.)
tempfile¶
tempfile.SpooledTemporaryFile's truncate() method now accepts a size parameter. (Contributed by Ryan Kelly in bpo-9957.)
textwrap¶
The textwrap module has a new indent() function that makes it straightforward to add a common prefix to selected lines in a block of text (bpo-13857).
threading¶
threading.Condition, threading.Semaphore, threading.BoundedSemaphore, threading.Event, and threading.Timer, all of which used to be factory functions returning a class instance, are now classes and may be subclassed. (Contributed by Éric Araujo in bpo-10968.)
The threading.Thread constructor now accepts a daemon keyword argument to override the default behavior of inheriting the daemon flag value from the parent thread (bpo-6064).
The formerly private function _thread.get_ident is now available as the public function threading.get_ident(). This eliminates several cases of direct access to the _thread module in the stdlib. Third party code that used _thread.get_ident should likewise be changed to use the new public interface.
time¶
PEP 418 added new functions to the time module:
get_clock_info(): Get information on a clock.
monotonic(): Monotonic clock (cannot go backward), not affected by system clock updates.
perf_counter(): Performance counter with the highest available resolution, to measure a short duration.
process_time(): Sum of the system and user CPU time of the current process.
Other new functions:
clock_getres(), clock_gettime() and clock_settime() functions with CLOCK_xxx constants.
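The textwrap.indent() function described above prefixes only non-blank lines by default (a predicate argument can change that); the report string is a made-up example:

```python
import textwrap

report = "ok\n\nerror: details\n"
# The default predicate skips whitespace-only lines, so the blank
# line in the middle is left untouched.
quoted = textwrap.indent(report, "> ")
```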
(Contributed by Victor Stinner in bpo-10278.)\nTo improve cross platform consistency, sleep()\nnow raises a\nValueError\nwhen passed a negative sleep value. Previously this was an\nerror on posix, but produced an infinite sleep on Windows.\ntypes\u00b6\nAdd a new types.MappingProxyType\nclass: Read-only proxy of a mapping.\n(bpo-14386)\nThe new functions types.new_class()\nand types.prepare_class()\nprovide support\nfor PEP 3115 compliant dynamic type creation. (bpo-14588)\nunittest\u00b6\nassertRaises()\n, assertRaisesRegex()\n, assertWarns()\n, and\nassertWarnsRegex()\nnow accept a keyword argument msg when used as\ncontext managers. (Contributed by Ezio Melotti and Winston Ewert in\nbpo-10775.)\nunittest.TestCase.run()\nnow returns the TestResult\nobject.\nurllib\u00b6\nThe Request\nclass, now accepts a method argument\nused by get_method()\nto determine what HTTP method\nshould be used. For example, this will send a 'HEAD'\nrequest:\n>>> urlopen(Request('https://www.python.org', method='HEAD'))\nwebbrowser\u00b6\nThe webbrowser\nmodule supports more \u201cbrowsers\u201d: Google Chrome (named\nchrome, chromium, chrome-browser or\nchromium-browser depending on the version and operating system),\nand the generic launchers xdg-open, from the FreeDesktop.org\nproject, and gvfs-open, which is the default URI handler for GNOME\n3. (The former contributed by Arnaud Calmettes in bpo-13620, the latter\nby Matthias Klose in bpo-14493.)\nxml.etree.ElementTree\u00b6\nThe xml.etree.ElementTree\nmodule now imports its C accelerator by\ndefault; there is no longer a need to explicitly import\nxml.etree.cElementTree\n(this module stays for backwards compatibility,\nbut is now deprecated). 
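The types.MappingProxyType class mentioned above is a live, read-only view of a mapping rather than a copy:

```python
import types

config = {'debug': False}
view = types.MappingProxyType(config)  # read-only view, not a snapshot

# Later changes to the underlying dict are visible through the proxy...
config['debug'] = True

# ...but writing through the proxy is rejected.
blocked = False
try:
    view['debug'] = False
except TypeError:
    blocked = True
```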
In addition, the iter family of methods of Element has been optimized (rewritten in C).
The module's documentation has also been greatly improved with added examples and a more detailed reference.
zlib¶
New attribute zlib.Decompress.eof makes it possible to distinguish between a properly formed compressed stream and an incomplete or truncated one. (Contributed by Nadeem Vawda in bpo-12646.)
New attribute zlib.ZLIB_RUNTIME_VERSION reports the version string of the underlying zlib library that is loaded at runtime. (Contributed by Torsten Landschoff in bpo-12306.)
Optimizations¶
Major performance enhancements have been added:
Thanks to PEP 393, some operations on Unicode strings have been optimized:
the memory footprint is divided by 2 to 4 depending on the text;
encoding an ASCII string to UTF-8 doesn't need to encode characters anymore, since the UTF-8 representation is shared with the ASCII representation;
the UTF-8 encoder has been optimized;
repeating a single ASCII letter and getting a substring of an ASCII string is 4 times faster.
UTF-8 is now 2x to 4x faster.
UTF-16 encoding is now up to 10x faster.\n(Contributed by Serhiy Storchaka, bpo-14624, bpo-14738 and bpo-15026.)\nBuild and C API Changes\u00b6\nChanges to Python\u2019s build process and to the C API include:\nNew PEP 3118 related function:\nPEP 393 added new Unicode types, macros and functions:\nHigh-level API:\nLow-level API:\nPyASCIIObject\nandPyCompactUnicodeObject\nstructuresPyUnicode_DATA\n,PyUnicode_1BYTE_DATA\n,PyUnicode_2BYTE_DATA\n,PyUnicode_4BYTE_DATA\nPyUnicode_KIND\nwithPyUnicode_Kind\nenum:PyUnicode_WCHAR_KIND\n,PyUnicode_1BYTE_KIND\n,PyUnicode_2BYTE_KIND\n,PyUnicode_4BYTE_KIND\nPyArg_ParseTuple\nnow accepts abytearray\nfor thec\nformat (bpo-12380).\nDeprecated\u00b6\nUnsupported Operating Systems\u00b6\nOS/2 and VMS are no longer supported due to the lack of a maintainer.\nWindows 2000 and Windows platforms which set COMSPEC\nto command.com\nare no longer supported due to maintenance burden.\nOSF support, which was deprecated in 3.2, has been completely removed.\nDeprecated Python modules, functions and methods\u00b6\nPassing a non-empty string to\nobject.__format__()\nis deprecated, and will produce aTypeError\nin Python 3.4 (bpo-9856).The\nunicode_internal\ncodec has been deprecated because of the PEP 393, use UTF-8, UTF-16 (utf-16-le\norutf-16-be\n), or UTF-32 (utf-32-le\norutf-32-be\n)ftplib.FTP.nlst()\nandftplib.FTP.dir()\n: useftplib.FTP.mlsd()\nplatform.popen()\n: use thesubprocess\nmodule. Check especially the Replacing Older Functions with the subprocess Module section (bpo-11377).bpo-13374: The Windows bytes API has been deprecated in the\nos\nmodule. Use Unicode filenames, instead of bytes filenames, to not depend on the ANSI code page anymore and to support any filename.bpo-13988: The\nxml.etree.cElementTree\nmodule is deprecated. 
The accelerator is used automatically whenever available.The behaviour of\ntime.clock()\ndepends on the platform: use the newtime.perf_counter()\nortime.process_time()\nfunction instead, depending on your requirements, to have a well defined behaviour.The\nos.stat_float_times()\nfunction is deprecated.abc\nmodule:abc.abstractproperty\nhas been deprecated, useproperty\nwithabc.abstractmethod()\ninstead.abc.abstractclassmethod\nhas been deprecated, useclassmethod\nwithabc.abstractmethod()\ninstead.abc.abstractstaticmethod\nhas been deprecated, usestaticmethod\nwithabc.abstractmethod()\ninstead.\nimportlib\npackage:importlib.abc.SourceLoader.path_mtime()\nis now deprecated in favour ofimportlib.abc.SourceLoader.path_stats()\nas bytecode files now store both the modification time and size of the source file the bytecode file was compiled from.\nDeprecated functions and types of the C API\u00b6\nThe Py_UNICODE\nhas been deprecated by PEP 393 and will be\nremoved in Python 4. All functions using this type are deprecated:\nUnicode functions and methods using Py_UNICODE\nand\nPy_UNICODE* types:\nPyUnicode_FromUnicode\n: usePyUnicode_FromWideChar()\norPyUnicode_FromKindAndData()\nPyUnicode_AS_UNICODE\n,PyUnicode_AsUnicode()\n,PyUnicode_AsUnicodeAndSize()\n: usePyUnicode_AsWideCharString()\nPyUnicode_AS_DATA\n: usePyUnicode_DATA\nwithPyUnicode_READ\nandPyUnicode_WRITE\nPyUnicode_GET_SIZE\n,PyUnicode_GetSize()\n: usePyUnicode_GET_LENGTH\norPyUnicode_GetLength()\nPyUnicode_GET_DATA_SIZE\n: usePyUnicode_GET_LENGTH(str) * PyUnicode_KIND(str)\n(only work on ready strings)PyUnicode_AsUnicodeCopy()\n: usePyUnicode_AsUCS4Copy()\norPyUnicode_AsWideCharString()\nPyUnicode_GetMax()\nFunctions and macros manipulating Py_UNICODE* strings:\nPy_UNICODE_strlen()\n: usePyUnicode_GetLength()\norPyUnicode_GET_LENGTH\nPy_UNICODE_strcat()\n: usePyUnicode_CopyCharacters()\norPyUnicode_FromFormat()\nPy_UNICODE_strcpy()\n,Py_UNICODE_strncpy()\n,Py_UNICODE_COPY()\n: 
usePyUnicode_CopyCharacters()\norPyUnicode_Substring()\nPy_UNICODE_strcmp()\n: usePyUnicode_Compare()\nPy_UNICODE_strncmp()\n: usePyUnicode_Tailmatch()\nPy_UNICODE_strchr()\n,Py_UNICODE_strrchr()\n: usePyUnicode_FindChar()\nPy_UNICODE_FILL()\n: usePyUnicode_Fill()\nPy_UNICODE_MATCH\nEncoders:\nPyUnicode_Encode()\n: usePyUnicode_AsEncodedObject()\nPyUnicode_EncodeUTF7()\nPyUnicode_EncodeUTF8()\n: usePyUnicode_AsUTF8()\norPyUnicode_AsUTF8String()\nPyUnicode_EncodeUTF32()\nPyUnicode_EncodeUTF16()\nPyUnicode_EncodeUnicodeEscape()\nusePyUnicode_AsUnicodeEscapeString()\nPyUnicode_EncodeRawUnicodeEscape()\nusePyUnicode_AsRawUnicodeEscapeString()\nPyUnicode_EncodeLatin1()\n: usePyUnicode_AsLatin1String()\nPyUnicode_EncodeASCII()\n: usePyUnicode_AsASCIIString()\nPyUnicode_EncodeCharmap()\nPyUnicode_TranslateCharmap()\nPyUnicode_EncodeMBCS()\n: usePyUnicode_AsMBCSString()\norPyUnicode_EncodeCodePage()\n(withCP_ACP\ncode_page)PyUnicode_EncodeDecimal()\n,PyUnicode_TransformDecimalToASCII()\nDeprecated features\u00b6\nThe array\nmodule\u2019s 'u'\nformat code is now deprecated and will be\nremoved in Python 4 together with the rest of the (Py_UNICODE\n) API.\nPorting to Python 3.3\u00b6\nThis section lists previously described changes and other bugfixes that may require changes to your code.\nPorting Python code\u00b6\nHash randomization is enabled by default. Set the\nPYTHONHASHSEED\nenvironment variable to0\nto disable hash randomization. See also theobject.__hash__()\nmethod.bpo-12326: On Linux, sys.platform doesn\u2019t contain the major version anymore. It is now always \u2018linux\u2019, instead of \u2018linux2\u2019 or \u2018linux3\u2019 depending on the Linux version used to build Python. 
Replace sys.platform == 'linux2' with sys.platform.startswith('linux'), or directly sys.platform == 'linux' if you don't need to support older Python versions.\nbpo-13847, bpo-14180: time and datetime: OverflowError is now raised instead of ValueError if a timestamp is out of range. OSError is now raised if the C functions gmtime() or localtime() fail.\nThe default finders used by import now utilize a cache of what is contained within a specific directory. If you create a Python source file or sourceless bytecode file, make sure to call importlib.invalidate_caches() to clear out the cache for the finders to notice the new file.\nImportError now uses the full name of the module that was attempted to be imported. Doctests that check an ImportError's message will need to be updated to use the full name of the module instead of just the tail of the name.\nThe index argument to __import__() now defaults to 0 instead of -1 and no longer supports negative values. It was an oversight when PEP 328 was implemented that the default value remained -1. If you need to continue to perform a relative import followed by an absolute import, then perform the relative import using an index of 1, followed by another import using an index of 0. It is preferred, though, that you use importlib.import_module() rather than call __import__() directly.\n__import__() no longer allows one to use an index value other than 0 for top-level modules. E.g. __import__('sys', level=1) is now an error.\nBecause sys.meta_path and sys.path_hooks now have finders on them by default, you will most likely want to use list.insert() instead of list.append() to add to those lists.\nBecause None is now inserted into sys.path_importer_cache, if you are clearing out entries in the dictionary of paths that do not have a finder, you will need to remove keys paired with values of None and imp.NullImporter to be backwards-compatible.
This will lead to extra overhead on older versions of Python that re-insert None into sys.path_importer_cache where it represents the use of implicit finders, but semantically it should not change anything.\nimportlib.abc.Finder no longer specifies a find_module() abstract method that must be implemented. If you were relying on subclasses to implement that method, make sure to check for the method's existence first. You will probably want to check for find_loader() first, though, in the case of working with path entry finders.\npkgutil has been converted to use importlib internally. This eliminates many edge cases where the old behaviour of the PEP 302 import emulation failed to match the behaviour of the real import system. The import emulation itself is still present, but is now deprecated. The pkgutil.iter_importers() and pkgutil.walk_packages() functions special case the standard import hooks so they are still supported even though they do not provide the non-standard iter_modules() method.\nA longstanding RFC-compliance bug (bpo-1079) in the parsing done by email.header.decode_header() has been fixed. Code that uses the standard idiom to convert encoded headers into unicode (str(make_header(decode_header(h)))) will see no change, but code that looks at the individual tuples returned by decode_header will see that whitespace that precedes or follows ASCII sections is now included in the ASCII section. Code that builds headers using make_header should also continue to work without change, since make_header continues to add whitespace between ASCII and non-ASCII sections if it is not already present in the input strings.\nemail.utils.formataddr() now does the correct content transfer encoding when passed non-ASCII display names.
Any code that depended on the previous buggy behavior that preserved the non-ASCII unicode in the formatted output string will need to be changed (bpo-1690608).\npoplib.POP3.quit() may now raise protocol errors like all other poplib methods. Code that assumes quit does not raise poplib.error_proto errors may need to be changed if errors on quit are encountered by a particular application (bpo-11291).\nThe strict argument to email.parser.Parser, deprecated since Python 2.4, has finally been removed.\nThe deprecated method unittest.TestCase.assertSameElements has been removed.\nThe deprecated variable time.accept2dyear has been removed.\nThe deprecated Context._clamp attribute has been removed from the decimal module. It was previously replaced by the public attribute clamp. (See bpo-8540.)\nThe undocumented internal helper class SSLFakeFile has been removed from smtplib, since its functionality has long been provided directly by socket.socket.makefile().\nPassing a negative value to time.sleep() on Windows now raises an error instead of sleeping forever. It has always raised an error on POSIX.\nThe ast.__version__ constant has been removed. If you need to make decisions affected by the AST version, use sys.version_info to make the decision.\nCode that used to work around the fact that the threading module used factory functions by subclassing the private classes will need to change to subclass the now-public classes.\nThe undocumented debugging machinery in the threading module has been removed, simplifying the code.
This should have no effect on production code, but is mentioned here in case any application debug frameworks were interacting with it (bpo-13550).\nPorting C code\u00b6\nIn the course of changes to the buffer API the undocumented smalltable member of the Py_buffer structure has been removed and the layout of the PyMemoryViewObject has changed. All extensions relying on the relevant parts in memoryobject.h or object.h must be rebuilt.\nDue to PEP 393, the Py_UNICODE type and all functions using this type are deprecated (but will stay available for at least five years). If you were using low-level Unicode APIs to construct and access unicode objects and you want to benefit from the memory footprint reduction provided by PEP 393, you have to convert your code to the new Unicode API. However, if you have only been using high-level functions such as PyUnicode_Concat(), PyUnicode_Join() or PyUnicode_FromFormat(), your code will automatically take advantage of the new unicode representations.\nPyImport_GetMagicNumber() now returns -1 upon failure.\nAs a negative value for the level argument to __import__() is no longer valid, the same now holds for PyImport_ImportModuleLevel(). This also means that the value of level used by PyImport_ImportModuleEx() is now 0 instead of -1.\nBuilding C extensions\u00b6\nThe range of possible file names for C extensions has been narrowed. Very rarely used spellings have been suppressed: under POSIX, files named xxxmodule.so, xxxmodule.abi3.so and xxxmodule.cpython-*.so are no longer recognized as implementing the xxx module. If you had been generating such files, you have to switch to the other spellings (i.e., remove the module string from the file names). (Implemented in bpo-14040.)\nCommand Line Switch Changes\u00b6\nThe -Q command-line flag and related artifacts have been removed.
Code checking sys.flags.division_warning will need updating. (bpo-10998, contributed by \u00c9ric Araujo.)\nWhen python is started with -S, import site will no longer add site-specific paths to the module search paths. In previous versions, it did. (bpo-11591, contributed by Carl Meyer with editions by \u00c9ric Araujo.)", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 19663}
{"url": "https://docs.python.org/3/search.html", "title": "Search", "content": "Please activate JavaScript to enable the search functionality.\nSearching for multiple words only shows matches that contain all words.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 33}
{"url": "https://docs.python.org/3/c-api/utilities.html", "title": "Utilities", "content": "Utilities\u00b6\nThe functions in this chapter perform various utility tasks, ranging from helping C code be more portable across platforms, using Python modules from C, and parsing function arguments and constructing Python values from C values.\n- Operating System Utilities\n- System Functions\n- Process Control\n- Importing Modules\n- Data marshalling support\n- Parsing arguments and building values\n- String conversion and formatting\n- Character classification and conversion\n- PyHash API\n- Reflection\n- Codec registry and support functions\n- PyTime C API\n- Support for Perf Maps", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 143}
{"url": "https://docs.python.org/3/c-api/abstract.html", "title": "Abstract Objects Layer", "content": "Abstract Objects Layer\u00b6\nThe functions in this chapter interact with Python objects regardless of their type, or with wide classes of object types (e.g. all numerical types, or all sequence types). When used on object types for which they do not apply, they will raise a Python exception.\nIt is not possible to use these functions on objects that are not properly\ninitialized, such as a list object that has been created by PyList_New()\n,\nbut whose items have not been set to some non-NULL\nvalue yet.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 125}
{"url": "https://docs.python.org/3/extending/building.html", "title": "Building C and C++ Extensions", "content": "4. Building C and C++ Extensions\u00b6\nA C extension for CPython is a shared library (for example, a .so\nfile on\nLinux, .pyd\non Windows), which exports an initialization function.\nSee Defining extension modules for details.\n4.1. Building C and C++ Extensions with setuptools\u00b6\nBuilding, packaging and distributing extension modules is best done with third-party tools, and is out of scope of this document. One suitable tool is Setuptools, whose documentation can be found at https://setuptools.pypa.io/en/latest/setuptools.html.\nThe distutils\nmodule, which was included in the standard library\nuntil Python 3.12, is now maintained as part of Setuptools.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 162}
{"url": "https://docs.python.org/3/extending/extending.html", "title": "Extending Python with C or C++", "content": "1. Extending Python with C or C++\u00b6\nIt is quite easy to add new built-in modules to Python, if you know how to program in C. Such extension modules can do two things that can\u2019t be done directly in Python: they can implement new built-in object types, and they can call C library functions and system calls.\nTo support extensions, the Python API (Application Programmers Interface)\ndefines a set of functions, macros and variables that provide access to most\naspects of the Python run-time system. The Python API is incorporated in a C\nsource file by including the header \"Python.h\"\n.\nThe compilation of an extension module depends on its intended use as well as on your system setup; details are given in later chapters.\nNote\nThe C extension interface is specific to CPython, and extension modules do\nnot work on other Python implementations. In many cases, it is possible to\navoid writing C extensions and preserve portability to other implementations.\nFor example, if your use case is calling C library functions or system calls,\nyou should consider using the ctypes\nmodule or the cffi library rather than writing\ncustom C code.\nThese modules let you write Python code to interface with C code and are more\nportable between implementations of Python than writing and compiling a C\nextension module.\n1.1. A Simple Example\u00b6\nLet\u2019s create an extension module called spam\n(the favorite food of Monty\nPython fans\u2026) and let\u2019s say we want to create a Python interface to the C\nlibrary function system()\n[1]. This function takes a null-terminated\ncharacter string as argument and returns an integer. We want this function to\nbe callable from Python as follows:\n>>> import spam\n>>> status = spam.system(\"ls -l\")\nBegin by creating a file spammodule.c\n. 
(Historically, if a module is called spam, the C file containing its implementation is called spammodule.c; if the module name is very long, like spammify, the module name can be just spammify.c.)\nThe first two lines of our file can be:\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\nwhich pulls in the Python API (you can add a comment describing the purpose of the module and a copyright notice if you like).\nNote\nSince Python may define some pre-processor definitions which affect the standard headers on some systems, you must include Python.h before any standard headers are included.\n#define PY_SSIZE_T_CLEAN was used to indicate that Py_ssize_t should be used in some APIs instead of int. It is not necessary since Python 3.13, but we keep it here for backward compatibility. See Strings and buffers for a description of this macro.\nAll user-visible symbols defined by Python.h have a prefix of Py or PY, except those defined in standard header files.\nTip\nFor backward compatibility, Python.h includes several standard header files. C extensions should include the standard headers that they use, and should not rely on these implicit includes.\nIf using the limited C API version 3.13 or newer, the implicit includes are:\n<assert.h>\n<intrin.h> (on Windows)\n<inttypes.h>\n<limits.h>\n<math.h>\n<stdarg.h>\n<wchar.h>\n<sys/types.h> (if present)\nIf Py_LIMITED_API is not defined, or is set to version 3.12 or older, the headers below are also included:\n<ctype.h>\n<unistd.h> (on POSIX)\nIf Py_LIMITED_API is not defined, or is set to version 3.10 or older, the headers below are also included:\n<errno.h>\n<stdio.h>\n<stdlib.h>\n<string.h>\nThe next thing we add to our module file is the C function that will be called when the Python expression spam.system(string) is evaluated (we'll see shortly how it ends up being called):\nstatic PyObject *\nspam_system(PyObject *self, PyObject *args)\n{\nconst char *command;\nint sts;\nif (!PyArg_ParseTuple(args, \"s\", &command))\nreturn NULL;\nsts = system(command);\nreturn PyLong_FromLong(sts);\n}\nThere is a straightforward translation
from the argument list in Python (for\nexample, the single expression \"ls -l\"\n) to the arguments passed to the C\nfunction. The C function always has two arguments, conventionally named self\nand args.\nThe self argument points to the module object for module-level functions; for a method it would point to the object instance.\nThe args argument will be a pointer to a Python tuple object containing the\narguments. Each item of the tuple corresponds to an argument in the call\u2019s\nargument list. The arguments are Python objects \u2014 in order to do anything\nwith them in our C function we have to convert them to C values. The function\nPyArg_ParseTuple()\nin the Python API checks the argument types and\nconverts them to C values. It uses a template string to determine the required\ntypes of the arguments as well as the types of the C variables into which to\nstore the converted values. More about this later.\nPyArg_ParseTuple()\nreturns true (nonzero) if all arguments have the right\ntype and its components have been stored in the variables whose addresses are\npassed. It returns false (zero) if an invalid argument list was passed. In the\nlatter case it also raises an appropriate exception so the calling function can\nreturn NULL\nimmediately (as we saw in the example).\n1.2. Intermezzo: Errors and Exceptions\u00b6\nAn important convention throughout the Python interpreter is the following: when\na function fails, it should set an exception condition and return an error value\n(usually -1\nor a NULL\npointer). Exception information is stored in\nthree members of the interpreter\u2019s thread state. These are NULL\nif\nthere is no exception. Otherwise they are the C equivalents of the members\nof the Python tuple returned by sys.exc_info()\n. These are the\nexception type, exception instance, and a traceback object. 
It is important\nto know about them to understand how errors are passed around.\nThe Python API defines a number of functions to set various types of exceptions.\nThe most common one is PyErr_SetString()\n. Its arguments are an exception\nobject and a C string. The exception object is usually a predefined object like\nPyExc_ZeroDivisionError\n. The C string indicates the cause of the error\nand is converted to a Python string object and stored as the \u201cassociated value\u201d\nof the exception.\nAnother useful function is PyErr_SetFromErrno()\n, which only takes an\nexception argument and constructs the associated value by inspection of the\nglobal variable errno\n. The most general function is\nPyErr_SetObject()\n, which takes two object arguments, the exception and\nits associated value. You don\u2019t need to Py_INCREF()\nthe objects passed\nto any of these functions.\nYou can test non-destructively whether an exception has been set with\nPyErr_Occurred()\n. This returns the current exception object, or NULL\nif no exception has occurred. You normally don\u2019t need to call\nPyErr_Occurred()\nto see whether an error occurred in a function call,\nsince you should be able to tell from the return value.\nWhen a function f that calls another function g detects that the latter\nfails, f should itself return an error value (usually NULL\nor -1\n). It\nshould not call one of the PyErr_*\nfunctions \u2014 one has already\nbeen called by g. f\u2019s caller is then supposed to also return an error\nindication to its caller, again without calling PyErr_*\n, and so on\n\u2014 the most detailed cause of the error was already reported by the function\nthat first detected it. 
Once the error reaches the Python interpreter\u2019s main\nloop, this aborts the currently executing Python code and tries to find an\nexception handler specified by the Python programmer.\n(There are situations where a module can actually give a more detailed error\nmessage by calling another PyErr_*\nfunction, and in such cases it is\nfine to do so. As a general rule, however, this is not necessary, and can cause\ninformation about the cause of the error to be lost: most operations can fail\nfor a variety of reasons.)\nTo ignore an exception set by a function call that failed, the exception\ncondition must be cleared explicitly by calling PyErr_Clear()\n. The only\ntime C code should call PyErr_Clear()\nis if it doesn\u2019t want to pass the\nerror on to the interpreter but wants to handle it completely by itself\n(possibly by trying something else, or pretending nothing went wrong).\nEvery failing malloc()\ncall must be turned into an exception \u2014 the\ndirect caller of malloc()\n(or realloc()\n) must call\nPyErr_NoMemory()\nand return a failure indicator itself. All the\nobject-creating functions (for example, PyLong_FromLong()\n) already do\nthis, so this note is only relevant to those who call malloc()\ndirectly.\nAlso note that, with the important exception of PyArg_ParseTuple()\nand\nfriends, functions that return an integer status usually return a positive value\nor zero for success and -1\nfor failure, like Unix system calls.\nFinally, be careful to clean up garbage (by making Py_XDECREF()\nor\nPy_DECREF()\ncalls for objects you have already created) when you return\nan error indicator!\nThe choice of which exception to raise is entirely yours. There are predeclared\nC objects corresponding to all built-in Python exceptions, such as\nPyExc_ZeroDivisionError\n, which you can use directly. 
Of course, you\nshould choose exceptions wisely \u2014 don\u2019t use PyExc_TypeError\nto mean\nthat a file couldn\u2019t be opened (that should probably be PyExc_OSError\n).\nIf something\u2019s wrong with the argument list, the PyArg_ParseTuple()\nfunction usually raises PyExc_TypeError\n. If you have an argument whose\nvalue must be in a particular range or must satisfy other conditions,\nPyExc_ValueError\nis appropriate.\nYou can also define a new exception that is unique to your module. The simplest way to do this is to declare a static global object variable at the beginning of the file:\nstatic PyObject *SpamError = NULL;\nand initialize it by calling PyErr_NewException()\nin the module\u2019s\nPy_mod_exec\nfunction (spam_module_exec()\n):\nSpamError = PyErr_NewException(\"spam.error\", NULL, NULL);\nSince SpamError\nis a global variable, it will be overwritten every time\nthe module is reinitialized, when the Py_mod_exec\nfunction is called.\nFor now, let\u2019s avoid the issue: we will block repeated initialization by raising an\nImportError\n:\nstatic PyObject *SpamError = NULL;\nstatic int\nspam_module_exec(PyObject *m)\n{\nif (SpamError != NULL) {\nPyErr_SetString(PyExc_ImportError,\n\"cannot initialize spam module more than once\");\nreturn -1;\n}\nSpamError = PyErr_NewException(\"spam.error\", NULL, NULL);\nif (PyModule_AddObjectRef(m, \"SpamError\", SpamError) < 0) {\nreturn -1;\n}\nreturn 0;\n}\nstatic PyModuleDef_Slot spam_module_slots[] = {\n{Py_mod_exec, spam_module_exec},\n{0, NULL}\n};\nstatic struct PyModuleDef spam_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"spam\",\n.m_size = 0, // non-negative\n.m_slots = spam_module_slots,\n};\nPyMODINIT_FUNC\nPyInit_spam(void)\n{\nreturn PyModuleDef_Init(&spam_module);\n}\nNote that the Python name for the exception object is spam.error\n. 
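The spam.error class created by PyErr_NewException("spam.error", NULL, NULL) behaves like an ordinary exception class; roughly, the pure-Python analogue below (illustrative only, not how the C API is implemented):

```python
# Pure-Python analogue of PyErr_NewException("spam.error", NULL, NULL):
# the text before the last dot becomes __module__, the rest the class
# name, and the default base class is Exception.
SpamError = type("error", (Exception,), {"__module__": "spam"})

# It can be raised and caught like any other exception class:
try:
    raise SpamError("System command failed")
except SpamError as exc:
    message = str(exc)
```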
The\nPyErr_NewException()\nfunction may create a class with the base class\nbeing Exception\n(unless another class is passed in instead of NULL\n),\ndescribed in Built-in Exceptions.\nNote also that the SpamError\nvariable retains a reference to the newly\ncreated exception class; this is intentional! Since the exception could be\nremoved from the module by external code, an owned reference to the class is\nneeded to ensure that it will not be discarded, causing SpamError\nto\nbecome a dangling pointer. Should it become a dangling pointer, C code which\nraises the exception could cause a core dump or other unintended side effects.\nFor now, the Py_DECREF()\ncall to remove this reference is missing.\nEven when the Python interpreter shuts down, the global SpamError\nvariable will not be garbage-collected. It will \u201cleak\u201d.\nWe did, however, ensure that this will happen at most once per process.\nWe discuss the use of PyMODINIT_FUNC\nas a function return type later in this\nsample.\nThe spam.error\nexception can be raised in your extension module using a\ncall to PyErr_SetString()\nas shown below:\nstatic PyObject *\nspam_system(PyObject *self, PyObject *args)\n{\nconst char *command;\nint sts;\nif (!PyArg_ParseTuple(args, \"s\", &command))\nreturn NULL;\nsts = system(command);\nif (sts < 0) {\nPyErr_SetString(SpamError, \"System command failed\");\nreturn NULL;\n}\nreturn PyLong_FromLong(sts);\n}\n1.3. Back to the Example\u00b6\nGoing back to our example function, you should now be able to understand this statement:\nif (!PyArg_ParseTuple(args, \"s\", &command))\nreturn NULL;\nIt returns NULL\n(the error indicator for functions returning object pointers)\nif an error is detected in the argument list, relying on the exception set by\nPyArg_ParseTuple()\n. Otherwise the string value of the argument has been\ncopied to the local variable command\n. 
This is a pointer assignment and\nyou are not supposed to modify the string to which it points (so in Standard C,\nthe variable command\nshould properly be declared as const char\n*command\n).\nThe next statement is a call to the Unix function system()\n, passing it\nthe string we just got from PyArg_ParseTuple()\n:\nsts = system(command);\nOur spam.system()\nfunction must return the value of sts\nas a\nPython object. This is done using the function PyLong_FromLong()\n.\nreturn PyLong_FromLong(sts);\nIn this case, it will return an integer object. (Yes, even integers are objects on the heap in Python!)\nIf you have a C function that returns no useful argument (a function returning\nvoid), the corresponding Python function must return None\n. You\nneed this idiom to do so (which is implemented by the Py_RETURN_NONE\nmacro):\nPy_INCREF(Py_None);\nreturn Py_None;\nPy_None\nis the C name for the special Python object None\n. It is a\ngenuine Python object rather than a NULL\npointer, which means \u201cerror\u201d in most\ncontexts, as we have seen.\n1.4. The Module\u2019s Method Table and Initialization Function\u00b6\nI promised to show how spam_system()\nis called from Python programs.\nFirst, we need to list its name and address in a \u201cmethod table\u201d:\nstatic PyMethodDef spam_methods[] = {\n...\n{\"system\", spam_system, METH_VARARGS,\n\"Execute a shell command.\"},\n...\n{NULL, NULL, 0, NULL} /* Sentinel */\n};\nNote the third entry (METH_VARARGS\n). This is a flag telling the interpreter\nthe calling convention to be used for the C function. 
It should normally always\nbe METH_VARARGS\nor METH_VARARGS | METH_KEYWORDS\n; a value of 0\nmeans\nthat an obsolete variant of PyArg_ParseTuple()\nis used.\nWhen using only METH_VARARGS\n, the function should expect the Python-level\nparameters to be passed in as a tuple acceptable for parsing via\nPyArg_ParseTuple()\n; more information on this function is provided below.\nThe METH_KEYWORDS\nbit may be set in the third field if keyword\narguments should be passed to the function. In this case, the C function should\naccept a third PyObject *\nparameter which will be a dictionary of keywords.\nUse PyArg_ParseTupleAndKeywords()\nto parse the arguments to such a\nfunction.\nThe method table must be referenced in the module definition structure:\nstatic struct PyModuleDef spam_module = {\n...\n.m_methods = spam_methods,\n...\n};\nThis structure, in turn, must be passed to the interpreter in the module\u2019s\ninitialization function. The initialization function must be named\nPyInit_name()\n, where name is the name of the module, and should be the\nonly non-static\nitem defined in the module file:\nPyMODINIT_FUNC\nPyInit_spam(void)\n{\nreturn PyModuleDef_Init(&spam_module);\n}\nNote that PyMODINIT_FUNC\ndeclares the function as PyObject *\nreturn type,\ndeclares any special linkage declarations required by the platform, and for C++\ndeclares the function as extern \"C\"\n.\nPyInit_spam()\nis called when each interpreter imports its module\nspam\nfor the first time. 
(See below for comments about embedding Python.)\nA pointer to the module definition must be returned via PyModuleDef_Init(), so that the import machinery can create the module and store it in sys.modules.\nWhen embedding Python, the PyInit_spam() function is not called automatically unless there's an entry in the PyImport_Inittab table.\nTo add the module to the initialization table, use PyImport_AppendInittab(), optionally followed by an import of the module:\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\nint\nmain(int argc, char *argv[])\n{\nPyStatus status;\nPyConfig config;\nPyConfig_InitPythonConfig(&config);\n/* Add a built-in module, before Py_Initialize */\nif (PyImport_AppendInittab(\"spam\", PyInit_spam) == -1) {\nfprintf(stderr, \"Error: could not extend in-built modules table\\n\");\nexit(1);\n}\n/* Pass argv[0] to the Python interpreter */\nstatus = PyConfig_SetBytesString(&config, &config.program_name, argv[0]);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\n/* Initialize the Python interpreter. Required.\nIf this step fails, it will be a fatal error. */\nstatus = Py_InitializeFromConfig(&config);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nPyConfig_Clear(&config);\n/* Optionally import the module; alternatively,\nimport can be deferred until the embedded script\nimports it. */\nPyObject *pmodule = PyImport_ImportModule(\"spam\");\nif (!pmodule) {\nPyErr_Print();\nfprintf(stderr, \"Error: could not import module 'spam'\\n\");\n}\n// ...
use Python C API here ...\nreturn 0;\nexception:\nPyConfig_Clear(&config);\nPy_ExitStatusException(status);\n}\nNote\nIf you declare a global variable or a local static one, the module may\nexperience unintended side-effects on re-initialisation, for example when\nremoving entries from sys.modules\nor importing compiled modules into\nmultiple interpreters within a process\n(or following a fork()\nwithout an intervening exec()\n).\nIf module state is not yet fully isolated,\nauthors should consider marking the module as having no support for subinterpreters\n(via Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED\n).\nA more substantial example module is included in the Python source distribution\nas Modules/xxlimited.c\n. This file may be used as a template or simply\nread as an example.\n1.5. Compilation and Linkage\u00b6\nThere are two more things to do before you can use your new extension: compiling and linking it with the Python system. If you use dynamic loading, the details may depend on the style of dynamic loading your system uses; see the chapters about building extension modules (chapter Building C and C++ Extensions) and additional information that pertains only to building on Windows (chapter Building C and C++ Extensions on Windows) for more information about this.\nIf you can\u2019t use dynamic loading, or if you want to make your module a permanent\npart of the Python interpreter, you will have to change the configuration setup\nand rebuild the interpreter. Luckily, this is very simple on Unix: just place\nyour file (spammodule.c\nfor example) in the Modules/\ndirectory\nof an unpacked source distribution, add a line to the file\nModules/Setup.local\ndescribing your file:\nspam spammodule.o\nand rebuild the interpreter by running make in the toplevel\ndirectory. You can also run make in the Modules/\nsubdirectory, but then you must first rebuild Makefile\nthere by running\n\u2018make Makefile\u2019. 
(This is necessary each time you change the\nSetup\nfile.)\nIf your module requires additional libraries to link with, these can be listed on the line in the configuration file as well, for instance:\nspam spammodule.o -lX11\n1.6. Calling Python Functions from C\u00b6\nSo far we have concentrated on making C functions callable from Python. The reverse is also useful: calling Python functions from C. This is especially the case for libraries that support so-called \u201ccallback\u201d functions. If a C interface makes use of callbacks, the equivalent Python often needs to provide a callback mechanism to the Python programmer; the implementation will require calling the Python callback functions from a C callback. Other uses are also imaginable.\nFortunately, the Python interpreter is easily called recursively, and there is a\nstandard interface to call a Python function. (I won\u2019t dwell on how to call the\nPython parser with a particular string as input \u2014 if you\u2019re interested, have a\nlook at the implementation of the -c\ncommand line option in\nModules/main.c\nfrom the Python source code.)\nCalling a Python function is easy. First, the Python program must somehow pass\nyou the Python function object. You should provide a function (or some other\ninterface) to do this. When this function is called, save a pointer to the\nPython function object (be careful to Py_INCREF()\nit!) in a global\nvariable \u2014 or wherever you see fit. 
For example, the following function might\nbe part of a module definition:\nstatic PyObject *my_callback = NULL;\nstatic PyObject *\nmy_set_callback(PyObject *dummy, PyObject *args)\n{\nPyObject *result = NULL;\nPyObject *temp;\nif (PyArg_ParseTuple(args, \"O:set_callback\", &temp)) {\nif (!PyCallable_Check(temp)) {\nPyErr_SetString(PyExc_TypeError, \"parameter must be callable\");\nreturn NULL;\n}\nPy_XINCREF(temp); /* Add a reference to new callback */\nPy_XDECREF(my_callback); /* Dispose of previous callback */\nmy_callback = temp; /* Remember new callback */\n/* Boilerplate to return \"None\" */\nPy_INCREF(Py_None);\nresult = Py_None;\n}\nreturn result;\n}\nThis function must be registered with the interpreter using the\nMETH_VARARGS\nflag; this is described in section The Module\u2019s Method Table and Initialization Function. The\nPyArg_ParseTuple()\nfunction and its arguments are documented in section\nExtracting Parameters in Extension Functions.\nThe macros Py_XINCREF()\nand Py_XDECREF()\nincrement/decrement the\nreference count of an object and are safe in the presence of NULL\npointers\n(but note that temp will not be NULL\nin this context). More info on them\nin section Reference Counts.\nLater, when it is time to call the function, you call the C function\nPyObject_CallObject()\n. This function has two arguments, both pointers to\narbitrary Python objects: the Python function, and the argument list. The\nargument list must always be a tuple object, whose length is the number of\narguments. To call the Python function with no arguments, pass in NULL\n, or\nan empty tuple; to call it with one argument, pass a singleton tuple.\nPy_BuildValue()\nreturns a tuple when its format string consists of zero\nor more format codes between parentheses. 
For example:\nint arg;\nPyObject *arglist;\nPyObject *result;\n...\narg = 123;\n...\n/* Time to call the callback */\narglist = Py_BuildValue(\"(i)\", arg);\nresult = PyObject_CallObject(my_callback, arglist);\nPy_DECREF(arglist);\nPyObject_CallObject()\nreturns a Python object pointer: this is the return\nvalue of the Python function. PyObject_CallObject()\nis\n\u201creference-count-neutral\u201d with respect to its arguments. In the example a new\ntuple was created to serve as the argument list, which is\nPy_DECREF()\n-ed immediately after the PyObject_CallObject()\ncall.\nThe return value of PyObject_CallObject()\nis \u201cnew\u201d: either it is a brand\nnew object, or it is an existing object whose reference count has been\nincremented. So, unless you want to save it in a global variable, you should\nsomehow Py_DECREF()\nthe result, even (especially!) if you are not\ninterested in its value.\nBefore you do this, however, it is important to check that the return value\nisn\u2019t NULL\n. If it is, the Python function terminated by raising an exception.\nIf the C code that called PyObject_CallObject()\nis called from Python, it\nshould now return an error indication to its Python caller, so the interpreter\ncan print a stack trace, or the calling Python code can handle the exception.\nIf this is not possible or desirable, the exception should be cleared by calling\nPyErr_Clear()\n. For example:\nif (result == NULL)\nreturn NULL; /* Pass error back */\n...use result...\nPy_DECREF(result);\nDepending on the desired interface to the Python callback function, you may also\nhave to provide an argument list to PyObject_CallObject()\n. In some cases\nthe argument list is also provided by the Python program, through the same\ninterface that specified the callback function. It can then be saved and used\nin the same manner as the function object. In other cases, you may have to\nconstruct a new tuple to pass as the argument list. 
The simplest way to do this is to call Py_BuildValue(). For example, if you want to pass an integral event code, you might use the following code:

PyObject *arglist;
...
arglist = Py_BuildValue("(l)", eventcode);
result = PyObject_CallObject(my_callback, arglist);
Py_DECREF(arglist);
if (result == NULL)
    return NULL; /* Pass error back */
/* Here maybe use the result */
Py_DECREF(result);

Note the placement of Py_DECREF(arglist) immediately after the call, before the error check! Also note that strictly speaking this code is not complete: Py_BuildValue() may run out of memory, and this should be checked.

You may also call a function with keyword arguments by using PyObject_Call(), which supports arguments and keyword arguments. As in the above example, we use Py_BuildValue() to construct the dictionary. Note that the positional-argument parameter of PyObject_Call() must be a tuple, not NULL; pass an empty tuple when there are no positional arguments.

PyObject *dict, *empty;
...
dict = Py_BuildValue("{s:i}", "name", val);
empty = PyTuple_New(0);   /* PyObject_Call() requires a real (possibly empty) tuple */
result = PyObject_Call(my_callback, empty, dict);
Py_DECREF(empty);
Py_DECREF(dict);
if (result == NULL)
    return NULL; /* Pass error back */
/* Here maybe use the result */
Py_DECREF(result);

1.7. Extracting Parameters in Extension Functions

The PyArg_ParseTuple() function is declared as follows:

int PyArg_ParseTuple(PyObject *arg, const char *format, ...);

The arg argument must be a tuple object containing an argument list passed from Python to a C function. The format argument must be a format string, whose syntax is explained in Parsing arguments and building values in the Python/C API Reference Manual. The remaining arguments must be addresses of variables whose type is determined by the format string.

Note that while PyArg_ParseTuple() checks that the Python arguments have the required types, it cannot check the validity of the addresses of C variables passed to the call: if you make mistakes there, your code will probably crash or at least overwrite random bits in memory.
So be careful! Note that any Python object references which are provided to the caller are borrowed references; do not decrement their reference count!

Some example calls:

#define PY_SSIZE_T_CLEAN
#include <Python.h>

int ok;
int i, j;
long k, l;
const char *s;
Py_ssize_t size;

ok = PyArg_ParseTuple(args, "");      /* No arguments */
                                      /* Python call: f() */

ok = PyArg_ParseTuple(args, "s", &s); /* A string */
                                      /* Possible Python call: f('whoops!') */

ok = PyArg_ParseTuple(args, "lls", &k, &l, &s); /* Two longs and a string */
                                      /* Possible Python call: f(1, 2, 'three') */

ok = PyArg_ParseTuple(args, "(ii)s#", &i, &j, &s, &size);
    /* A pair of ints and a string, whose size is also returned */
    /* Possible Python call: f((1, 2), 'three') */

{
    const char *file;
    const char *mode = "r";
    int bufsize = 0;
    ok = PyArg_ParseTuple(args, "s|si", &file, &mode, &bufsize);
    /* A string, and optionally another string and an integer */
    /* Possible Python calls:
       f('spam')
       f('spam', 'w')
       f('spam', 'wb', 100000) */
}

{
    int left, top, right, bottom, h, v;
    ok = PyArg_ParseTuple(args, "((ii)(ii))(ii)",
                          &left, &top, &right, &bottom, &h, &v);
    /* A rectangle and a point */
    /* Possible Python call:
       f(((0, 0), (400, 300)), (10, 10)) */
}

{
    Py_complex c;
    ok = PyArg_ParseTuple(args, "D:myfunction", &c);
    /* a complex, also providing a function name for errors */
    /* Possible Python call: myfunction(1+2j) */
}

1.8. Keyword Parameters for Extension Functions

The PyArg_ParseTupleAndKeywords() function is declared as follows:

int PyArg_ParseTupleAndKeywords(PyObject *arg, PyObject *kwdict,
                                const char *format, char * const *kwlist, ...);

The arg and format parameters are identical to those of the PyArg_ParseTuple() function. The kwdict parameter is the dictionary of keywords received as the third parameter from the Python runtime.
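Calls like the PyArg_ParseTuple() examples above can also be exercised without writing a module, again via ctypes.pythonapi — a sketch, CPython only; PyArg_ParseTuple is variadic, so the output variables are passed with ctypes.byref:

```python
import ctypes

parse = ctypes.pythonapi.PyArg_ParseTuple  # returns a C int: 1 on success, 0 on failure
i = ctypes.c_int()
j = ctypes.c_int()

# Equivalent of: ok = PyArg_ParseTuple(args, "ii", &i, &j) with args = (1, 2)
ok = parse(ctypes.py_object((1, 2)), b"ii", ctypes.byref(i), ctypes.byref(j))
assert ok == 1 and (i.value, j.value) == (1, 2)

# On a type mismatch the C function returns 0 and sets an exception, which
# ctypes.pythonapi turns back into a raised TypeError:
try:
    parse(ctypes.py_object(("three",)), b"i", ctypes.byref(i))
    mismatched = False
except TypeError:
    mismatched = True
assert mismatched
```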
The kwlist parameter is a NULL-terminated list of strings which identify the parameters; the names are matched with the type information from format from left to right. On success, PyArg_ParseTupleAndKeywords() returns true, otherwise it returns false and raises an appropriate exception.

Note: Nested tuples cannot be parsed when using keyword arguments! Keyword parameters passed in which are not present in the kwlist will cause TypeError to be raised.

Here is an example module which uses keywords, based on an example by Geoff Philbrick (philbrick@hks.com):

#define PY_SSIZE_T_CLEAN
#include <Python.h>

static PyObject *
keywdarg_parrot(PyObject *self, PyObject *args, PyObject *keywds)
{
    int voltage;
    const char *state = "a stiff";
    const char *action = "voom";
    const char *type = "Norwegian Blue";

    static char *kwlist[] = {"voltage", "state", "action", "type", NULL};

    if (!PyArg_ParseTupleAndKeywords(args, keywds, "i|sss", kwlist,
                                     &voltage, &state, &action, &type))
        return NULL;

    printf("-- This parrot wouldn't %s if you put %i Volts through it.\n",
           action, voltage);
    printf("-- Lovely plumage, the %s -- It's %s!\n", type, state);

    Py_RETURN_NONE;
}

static PyMethodDef keywdarg_methods[] = {
    /* The cast of the function is necessary since PyCFunction values
     * only take two PyObject* parameters, and keywdarg_parrot() takes
     * three.
     */
    {"parrot", (PyCFunction)(void(*)(void))keywdarg_parrot, METH_VARARGS | METH_KEYWORDS,
     "Print a lovely skit to standard output."},
    {NULL, NULL, 0, NULL}   /* sentinel */
};

static struct PyModuleDef keywdarg_module = {
    .m_base = PyModuleDef_HEAD_INIT,
    .m_name = "keywdarg",
    .m_size = 0,
    .m_methods = keywdarg_methods,
};

PyMODINIT_FUNC
PyInit_keywdarg(void)
{
    return PyModuleDef_Init(&keywdarg_module);
}

1.9. Building Arbitrary Values

This function is the counterpart to PyArg_ParseTuple().
It is declared as follows:

PyObject *Py_BuildValue(const char *format, ...);

It recognizes a set of format units similar to the ones recognized by PyArg_ParseTuple(), but the arguments (which are input to the function, not output) must not be pointers, just values. It returns a new Python object, suitable for returning from a C function called from Python.

One difference with PyArg_ParseTuple(): while the latter requires its first argument to be a tuple (since Python argument lists are always represented as tuples internally), Py_BuildValue() does not always build a tuple. It builds a tuple only if its format string contains two or more format units. If the format string is empty, it returns None; if it contains exactly one format unit, it returns whatever object is described by that format unit. To force it to return a tuple of size 0 or one, parenthesize the format string.

Examples (to the left the call, to the right the resulting Python value):

Py_BuildValue("")                          None
Py_BuildValue("i", 123)                    123
Py_BuildValue("iii", 123, 456, 789)        (123, 456, 789)
Py_BuildValue("s", "hello")                'hello'
Py_BuildValue("y", "hello")                b'hello'
Py_BuildValue("ss", "hello", "world")      ('hello', 'world')
Py_BuildValue("s#", "hello", 4)            'hell'
Py_BuildValue("y#", "hello", 4)            b'hell'
Py_BuildValue("()")                        ()
Py_BuildValue("(i)", 123)                  (123,)
Py_BuildValue("(ii)", 123, 456)            (123, 456)
Py_BuildValue("(i,i)", 123, 456)           (123, 456)
Py_BuildValue("[i,i]", 123, 456)           [123, 456]
Py_BuildValue("{s:i,s:i}",
              "abc", 123, "def", 456)      {'abc': 123, 'def': 456}
Py_BuildValue("((ii)(ii)) (ii)",
              1, 2, 3, 4, 5, 6)            (((1, 2), (3, 4)), (5, 6))

1.10. Reference Counts

In languages like C or C++, the programmer is responsible for dynamic allocation and deallocation of memory on the heap. In C, this is done using the functions malloc() and free().
In C++, the operators new and delete are used with essentially the same meaning, and we’ll restrict the following discussion to the C case.

Every block of memory allocated with malloc() should eventually be returned to the pool of available memory by exactly one call to free(). It is important to call free() at the right time. If a block’s address is forgotten but free() is not called for it, the memory it occupies cannot be reused until the program terminates. This is called a memory leak. On the other hand, if a program calls free() for a block and then continues to use the block, it creates a conflict with reuse of the block through another malloc() call. This is called using freed memory. It has the same bad consequences as referencing uninitialized data — core dumps, wrong results, mysterious crashes.

Common causes of memory leaks are unusual paths through the code. For instance, a function may allocate a block of memory, do some calculation, and then free the block again. Now a change in the requirements for the function may add a test to the calculation that detects an error condition and can return prematurely from the function. It’s easy to forget to free the allocated memory block when taking this premature exit, especially when it is added later to the code. Such leaks, once introduced, often go undetected for a long time: the error exit is taken only in a small fraction of all calls, and most modern machines have plenty of virtual memory, so the leak only becomes apparent in a long-running process that uses the leaking function frequently. Therefore, it’s important to prevent leaks from happening by having a coding convention or strategy that minimizes this kind of error.

Since Python makes heavy use of malloc() and free(), it needs a strategy to avoid memory leaks as well as the use of freed memory. The chosen method is called reference counting.
The principle is simple: every object contains a counter, which is incremented when a reference to the object is stored somewhere, and which is decremented when a reference to it is deleted. When the counter reaches zero, the last reference to the object has been deleted and the object is freed.

An alternative strategy is called automatic garbage collection. (Sometimes, reference counting is also referred to as a garbage collection strategy, hence my use of “automatic” to distinguish the two.) The big advantage of automatic garbage collection is that the user doesn’t need to call free() explicitly. (Another claimed advantage is an improvement in speed or memory usage — this is no hard fact however.) The disadvantage is that for C, there is no truly portable automatic garbage collector, while reference counting can be implemented portably (as long as the functions malloc() and free() are available — which the C Standard guarantees). Maybe some day a sufficiently portable automatic garbage collector will be available for C. Until then, we’ll have to live with reference counts.

While Python uses the traditional reference counting implementation, it also offers a cycle detector that works to detect reference cycles. This allows applications to not worry about creating direct or indirect circular references; these are the weakness of garbage collection implemented using only reference counting. Reference cycles consist of objects which contain (possibly indirect) references to themselves, so that each object in the cycle has a reference count which is non-zero.
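This effect can be observed directly from Python: a reference cycle keeps its members' reference counts non-zero, so reference counting alone never frees them, and only an explicit (or automatic) run of the cycle detector reclaims the memory. A small illustration, assuming CPython's immediate refcount-based reclamation:

```python
import gc
import weakref

class Node:
    pass

a = Node()
b = Node()
a.partner = b           # a -> b
b.partner = a           # b -> a: a reference cycle
probe = weakref.ref(a)  # observe a's lifetime without owning a reference

del a, b                # both refcounts stay non-zero: the cycle keeps itself alive
assert probe() is not None   # reference counting alone cannot reclaim the cycle
gc.collect()                 # the cycle detector finds the unreachable cycle...
assert probe() is None       # ...and reclaims it
```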
Typical reference counting implementations are not able to reclaim the memory belonging to any objects in a reference cycle, or referenced from the objects in the cycle, even though there are no further references to the cycle itself. The cycle detector is able to detect garbage cycles and can reclaim them. The gc module exposes a way to run the detector (the collect() function), as well as configuration interfaces and the ability to disable the detector at runtime.

1.10.1. Reference Counting in Python

There are two macros, Py_INCREF(x) and Py_DECREF(x), which handle the incrementing and decrementing of the reference count. Py_DECREF() also frees the object when the count reaches zero. For flexibility, it doesn’t call free() directly — rather, it makes a call through a function pointer in the object’s type object. For this purpose (and others), every object also contains a pointer to its type object.

The big question now remains: when to use Py_INCREF(x) and Py_DECREF(x)? Let’s first introduce some terms. Nobody “owns” an object; however, you can own a reference to an object. An object’s reference count is now defined as the number of owned references to it. The owner of a reference is responsible for calling Py_DECREF() when the reference is no longer needed. Ownership of a reference can be transferred. There are three ways to dispose of an owned reference: pass it on, store it, or call Py_DECREF(). Forgetting to dispose of an owned reference creates a memory leak.

It is also possible to borrow [2] a reference to an object. The borrower of a reference should not call Py_DECREF().
The borrower must not hold on to the object longer than the owner from which it was borrowed. Using a borrowed reference after the owner has disposed of it risks using freed memory and should be avoided completely [3].

The advantage of borrowing over owning a reference is that you don’t need to take care of disposing of the reference on all possible paths through the code — in other words, with a borrowed reference you don’t run the risk of leaking when a premature exit is taken. The disadvantage of borrowing over owning is that there are some subtle situations where in seemingly correct code a borrowed reference can be used after the owner from which it was borrowed has in fact disposed of it.

A borrowed reference can be changed into an owned reference by calling Py_INCREF(). This does not affect the status of the owner from which the reference was borrowed — it creates a new owned reference, and gives full owner responsibilities (the new owner must dispose of the reference properly, as well as the previous owner).

1.10.2. Ownership Rules

Whenever an object reference is passed into or out of a function, it is part of the function’s interface specification whether ownership is transferred with the reference or not.

Most functions that return a reference to an object pass on ownership with the reference. In particular, all functions whose purpose is to create a new object, such as PyLong_FromLong() and Py_BuildValue(), pass ownership to the receiver. Even if the object is not actually new, you still receive ownership of a new reference to that object. For instance, PyLong_FromLong() maintains a cache of popular values and can return a reference to a cached item.

Many functions that extract objects from other objects also transfer ownership with the reference, for instance PyObject_GetAttrString().
The picture is less clear here, however, since a few common routines are exceptions: PyTuple_GetItem(), PyList_GetItem(), PyDict_GetItem(), and PyDict_GetItemString() all return references that you borrow from the tuple, list or dictionary.

The function PyImport_AddModule() also returns a borrowed reference, even though it may actually create the object it returns: this is possible because an owned reference to the object is stored in sys.modules.

When you pass an object reference into another function, in general, the function borrows the reference from you — if it needs to store it, it will use Py_INCREF() to become an independent owner. There are exactly two important exceptions to this rule: PyTuple_SetItem() and PyList_SetItem(). These functions take over ownership of the item passed to them — even if they fail! (Note that PyDict_SetItem() and friends don’t take over ownership — they are “normal.”)

When a C function is called from Python, it borrows references to its arguments from the caller. The caller owns a reference to the object, so the borrowed reference’s lifetime is guaranteed until the function returns. Only when such a borrowed reference must be stored or passed on does it need to be turned into an owned reference by calling Py_INCREF().

The object reference returned from a C function that is called from Python must be an owned reference — ownership is transferred from the function to its caller.

1.10.3. Thin Ice

There are a few situations where seemingly harmless use of a borrowed reference can lead to problems. These all have to do with implicit invocations of the interpreter, which can cause the owner of a reference to dispose of it.

The first and most important case to know about is using Py_DECREF() on an unrelated object while borrowing a reference to a list item.
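The hazard relies on the fact that, in CPython, dropping the last reference to a list item runs its __del__ synchronously, inside the very call that replaced it. That much can be demonstrated from pure Python (a sketch; Tattler and log are illustrative names):

```python
log = []

class Tattler:
    def __del__(self):
        log.append("deleted")

lst = [Tattler()]
assert log == []            # the list still owns the only reference
lst[0] = 0                  # replacing the item drops the last reference...
assert log == ["deleted"]   # ...and __del__ has already run, synchronously (CPython)
```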
For instance:

void
bug(PyObject *list)
{
    PyObject *item = PyList_GetItem(list, 0);

    PyList_SetItem(list, 1, PyLong_FromLong(0L));
    PyObject_Print(item, stdout, 0); /* BUG! */
}

This function first borrows a reference to list[0], then replaces list[1] with the value 0, and finally prints the borrowed reference. Looks harmless, right? But it’s not!

Let’s follow the control flow into PyList_SetItem(). The list owns references to all its items, so when item 1 is replaced, it has to dispose of the original item 1. Now let’s suppose the original item 1 was an instance of a user-defined class, and let’s further suppose that the class defined a __del__() method. If this class instance has a reference count of 1, disposing of it will call its __del__() method.

Internally, PyList_SetItem() calls Py_DECREF() on the replaced item, which invokes the replaced item’s corresponding tp_dealloc function. During deallocation, tp_dealloc calls tp_finalize, which is mapped to the __del__() method for class instances (see PEP 442). This entire sequence happens synchronously within the PyList_SetItem() call.

Since it is written in Python, the __del__() method can execute arbitrary Python code. Could it perhaps do something to invalidate the reference to item in bug()? You bet! Assuming that the list passed into bug() is accessible to the __del__() method, it could execute a statement to the effect of del list[0], and assuming this was the last reference to that object, it would free the memory associated with it, thereby invalidating item.

The solution, once you know the source of the problem, is easy: temporarily increment the reference count. The correct version of the function reads:

void
no_bug(PyObject *list)
{
    PyObject *item = PyList_GetItem(list, 0);

    Py_INCREF(item);
    PyList_SetItem(list, 1, PyLong_FromLong(0L));
    PyObject_Print(item, stdout, 0);
    Py_DECREF(item);
}

This is a true story.
An older version of Python contained variants of this bug, and someone spent a considerable amount of time in a C debugger to figure out why his __del__() methods would fail…

The second case of problems with a borrowed reference is a variant involving threads. Normally, multiple threads in the Python interpreter can’t get in each other’s way, because there is a global lock protecting Python’s entire object space. However, it is possible to temporarily release this lock using the macro Py_BEGIN_ALLOW_THREADS, and to re-acquire it using Py_END_ALLOW_THREADS. This is common around blocking I/O calls, to let other threads use the processor while waiting for the I/O to complete. Obviously, the following function has the same problem as the previous one:

void
bug(PyObject *list)
{
    PyObject *item = PyList_GetItem(list, 0);
    Py_BEGIN_ALLOW_THREADS
    ...some blocking I/O call...
    Py_END_ALLOW_THREADS
    PyObject_Print(item, stdout, 0); /* BUG! */
}

1.10.4. NULL Pointers

In general, functions that take object references as arguments do not expect you to pass them NULL pointers, and will dump core (or cause later core dumps) if you do so. Functions that return object references generally return NULL only to indicate that an exception occurred.
The reason for not testing for NULL arguments is that functions often pass the objects they receive on to other functions — if each function were to test for NULL, there would be a lot of redundant tests and the code would run more slowly.

It is better to test for NULL only at the “source”: when a pointer that may be NULL is received, for example, from malloc() or from a function that may raise an exception.

The macros Py_INCREF() and Py_DECREF() do not check for NULL pointers — however, their variants Py_XINCREF() and Py_XDECREF() do.

The macros for checking for a particular object type (Pytype_Check()) don’t check for NULL pointers — again, there is much code that calls several of these in a row to test an object against various different expected types, and this would generate redundant tests. There are no variants with NULL checking.

The C function calling mechanism guarantees that the argument list passed to C functions (args in the examples) is never NULL — in fact it guarantees that it is always a tuple [4].

It is a severe error to ever let a NULL pointer “escape” to the Python user.

1.11. Writing Extensions in C++

It is possible to write extension modules in C++. Some restrictions apply. If the main program (the Python interpreter) is compiled and linked by the C compiler, global or static objects with constructors cannot be used. This is not a problem if the main program is linked by the C++ compiler. Functions that will be called by the Python interpreter (in particular, module initialization functions) have to be declared using extern "C". It is unnecessary to enclose the Python header files in extern "C" {...} — they use this form already if the symbol __cplusplus is defined (all recent C++ compilers define this symbol).

1.12.
Providing a C API for an Extension Module

Many extension modules just provide new functions and types to be used from Python, but sometimes the code in an extension module can be useful for other extension modules. For example, an extension module could implement a type “collection” which works like lists without order. Just like the standard Python list type has a C API which permits extension modules to create and manipulate lists, this new collection type should have a set of C functions for direct manipulation from other extension modules.

At first sight this seems easy: just write the functions (without declaring them static, of course), provide an appropriate header file, and document the C API. And in fact this would work if all extension modules were always linked statically with the Python interpreter. When modules are used as shared libraries, however, the symbols defined in one module may not be visible to another module. The details of visibility depend on the operating system; some systems use one global namespace for the Python interpreter and all extension modules (Windows, for example), whereas others require an explicit list of imported symbols at module link time (AIX is one example), or offer a choice of different strategies (most Unices). And even if symbols are globally visible, the module whose functions one wishes to call might not have been loaded yet!

Portability therefore requires not to make any assumptions about symbol visibility. This means that all symbols in extension modules should be declared static, except for the module’s initialization function, in order to avoid name clashes with other extension modules (as discussed in section The Module’s Method Table and Initialization Function).
And it means that symbols that should be accessible from other extension modules must be exported in a different way.

Python provides a special mechanism to pass C-level information (pointers) from one extension module to another one: Capsules. A Capsule is a Python data type which stores a pointer (void*). Capsules can only be created and accessed via their C API, but they can be passed around like any other Python object. In particular, they can be assigned to a name in an extension module’s namespace. Other extension modules can then import this module, retrieve the value of this name, and then retrieve the pointer from the Capsule.

There are many ways in which Capsules can be used to export the C API of an extension module. Each function could get its own Capsule, or all C API pointers could be stored in an array whose address is published in a Capsule. And the various tasks of storing and retrieving the pointers can be distributed in different ways between the module providing the code and the client modules.

Whichever method you choose, it’s important to name your Capsules properly. The function PyCapsule_New() takes a name parameter (const char*); you’re permitted to pass in a NULL name, but we strongly encourage you to specify a name. Properly named Capsules provide a degree of runtime type-safety; there is no feasible way to tell one unnamed Capsule from another.

In particular, Capsules used to expose C APIs should be given a name following this convention:

modulename.attributename

The convenience function PyCapsule_Import() makes it easy to load a C API provided via a Capsule, but only if the Capsule’s name matches this convention. This behavior gives C API users a high degree of certainty that the Capsule they load contains the correct C API.

The following example demonstrates an approach that puts most of the burden on the writer of the exporting module, which is appropriate for commonly used library modules.
It stores all C API pointers (just one in the example!) in an array of void pointers which becomes the value of a Capsule. The header file corresponding to the module provides a macro that takes care of importing the module and retrieving its C API pointers; client modules only have to call this macro before accessing the C API.

The exporting module is a modification of the spam module from section A Simple Example. The function spam.system() does not call the C library function system() directly, but a function PySpam_System(), which would of course do something more complicated in reality (such as adding “spam” to every command). This function PySpam_System() is also exported to other extension modules.

The function PySpam_System() is a plain C function, declared static like everything else:

static int
PySpam_System(const char *command)
{
    return system(command);
}

The function spam_system() is modified in a trivial way:

static PyObject *
spam_system(PyObject *self, PyObject *args)
{
    const char *command;
    int sts;

    if (!PyArg_ParseTuple(args, "s", &command))
        return NULL;
    sts = PySpam_System(command);
    return PyLong_FromLong(sts);
}

In the beginning of the module, right after the line

#include <Python.h>

two more lines must be added:

#define SPAM_MODULE
#include "spammodule.h"

The #define is used to tell the header file that it is being included in the exporting module, not a client module.
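The same export mechanism is used inside the standard library: the datetime module publishes its C API in a Capsule whose name follows the modulename.attributename convention, which can be verified from Python — a sketch, assuming CPython and ctypes:

```python
import ctypes
import datetime

capsule = datetime.datetime_CAPI              # the stdlib datetime C API, as a Capsule
assert type(capsule).__name__ == "PyCapsule"

# const char *PyCapsule_GetName(PyObject *capsule)
get_name = ctypes.pythonapi.PyCapsule_GetName
get_name.restype = ctypes.c_char_p
get_name.argtypes = [ctypes.py_object]
assert get_name(capsule) == b"datetime.datetime_CAPI"  # modulename.attributename
```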
Finally, the module’s mod_exec function must take care of initializing the C API pointer array:

static int
spam_module_exec(PyObject *m)
{
    static void *PySpam_API[PySpam_API_pointers];
    PyObject *c_api_object;

    /* Initialize the C API pointer array */
    PySpam_API[PySpam_System_NUM] = (void *)PySpam_System;

    /* Create a Capsule containing the API pointer array's address */
    c_api_object = PyCapsule_New((void *)PySpam_API, "spam._C_API", NULL);

    if (PyModule_Add(m, "_C_API", c_api_object) < 0) {
        return -1;
    }

    return 0;
}

Note that PySpam_API is declared static; otherwise the pointer array would disappear when PyInit_spam() terminates!

The bulk of the work is in the header file spammodule.h, which looks like this:

#ifndef Py_SPAMMODULE_H
#define Py_SPAMMODULE_H
#ifdef __cplusplus
extern "C" {
#endif

/* Header file for spammodule */

/* C API functions */
#define PySpam_System_NUM 0
#define PySpam_System_RETURN int
#define PySpam_System_PROTO (const char *command)

/* Total number of C API pointers */
#define PySpam_API_pointers 1

#ifdef SPAM_MODULE
/* This section is used when compiling spammodule.c */

static PySpam_System_RETURN PySpam_System PySpam_System_PROTO;

#else
/* This section is used in modules that use spammodule's API */

static void **PySpam_API;

#define PySpam_System \
 (*(PySpam_System_RETURN (*)PySpam_System_PROTO) PySpam_API[PySpam_System_NUM])

/* Return -1 on error, 0 on success.
 * PyCapsule_Import will set an exception if there's an error.
 */
static int
import_spam(void)
{
    PySpam_API = (void **)PyCapsule_Import("spam._C_API", 0);
    return (PySpam_API != NULL) ?
 0 : -1;
}
#endif

#ifdef __cplusplus
}
#endif

#endif /* !defined(Py_SPAMMODULE_H) */

All that a client module must do in order to have access to the function PySpam_System() is to call the function (or rather macro) import_spam() in its mod_exec function:

static int
client_module_exec(PyObject *m)
{
    if (import_spam() < 0) {
        return -1;
    }
    /* additional initialization can happen here */
    return 0;
}

The main disadvantage of this approach is that the file spammodule.h is rather complicated. However, the basic structure is the same for each function that is exported, so it has to be learned only once.

Finally it should be mentioned that Capsules offer additional functionality, which is especially useful for memory allocation and deallocation of the pointer stored in a Capsule. The details are described in the Python/C API Reference Manual in the section Capsules and in the implementation of Capsules (files Include/pycapsule.h and Objects/pycapsule.c in the Python source code distribution).

Footnotes
Distributing Python Modules

Note: Information and guidance on distributing Python modules and packages has been moved to the Python Packaging User Guide, and the tutorial on packaging Python projects.
Unicode Objects and Codecs

Unicode Objects

Since the implementation of PEP 393 in Python 3.3, Unicode objects internally use a variety of representations, in order to allow handling the complete range of Unicode characters while staying memory efficient. There are special cases for strings where all code points are below 128, 256, or 65536; otherwise, code points must be below 1114112 (which is the full Unicode range).

UTF-8 representation is created on demand and cached in the Unicode object.

Note: The Py_UNICODE representation has been removed since Python 3.12 with deprecated APIs. See PEP 623 for more information.

Unicode Type

These are the basic Unicode object types used for the Unicode implementation in Python:

PyTypeObject PyUnicode_Type
    Part of the Stable ABI. This instance of PyTypeObject represents the Python Unicode type. It is exposed to Python code as str.

PyTypeObject PyUnicodeIter_Type
    Part of the Stable ABI. This instance of PyTypeObject represents the Python Unicode iterator type. It is used to iterate over Unicode string objects.

type Py_UCS4
type Py_UCS2
type Py_UCS1
    Part of the Stable ABI. These types are typedefs for unsigned integer types wide enough to contain characters of 32 bits, 16 bits and 8 bits, respectively. When dealing with single Unicode characters, use Py_UCS4. Added in version 3.3.

type PyASCIIObject
type PyCompactUnicodeObject
type PyUnicodeObject
    These subtypes of PyObject represent a Python Unicode object. In almost all cases, they shouldn’t be used directly, since all API functions that deal with Unicode objects take and return PyObject pointers. Added in version 3.3.

The structure of a particular object can be determined using the following macros.
The macros cannot fail; their behavior is undefined if their argument is not a Python Unicode object.\n-\nPyUnicode_IS_COMPACT(o)\u00b6\nTrue if o uses the\nPyCompactUnicodeObject\nstructure.Added in version 3.3.\n-\nPyUnicode_IS_COMPACT_ASCII(o)\u00b6\nTrue if o uses the\nPyASCIIObject\nstructure.Added in version 3.3.\nThe following APIs are C macros and static inline functions for fast checks and access to internal read-only data of Unicode objects:\n-\nint PyUnicode_Check(PyObject *obj)\u00b6\nReturn true if the object obj is a Unicode object or an instance of a Unicode subtype. This function always succeeds.\n-\nint PyUnicode_CheckExact(PyObject *obj)\u00b6\nReturn true if the object obj is a Unicode object, but not an instance of a subtype. This function always succeeds.\n-\nPy_ssize_t PyUnicode_GET_LENGTH(PyObject *unicode)\u00b6\nReturn the length of the Unicode string, in code points. unicode has to be a Unicode object in the \u201ccanonical\u201d representation (not checked).\nAdded in version 3.3.\n-\nPy_UCS1 *PyUnicode_1BYTE_DATA(PyObject *unicode)\u00b6\n-\nPy_UCS2 *PyUnicode_2BYTE_DATA(PyObject *unicode)\u00b6\n-\nPy_UCS4 *PyUnicode_4BYTE_DATA(PyObject *unicode)\u00b6\nReturn a pointer to the canonical representation cast to UCS1, UCS2 or UCS4 integer types for direct character access. No checks are performed if the canonical representation has the correct character size; use\nPyUnicode_KIND()\nto select the right function.Added in version 3.3.\n-\nPyUnicode_1BYTE_KIND\u00b6\n-\nPyUnicode_2BYTE_KIND\u00b6\n-\nPyUnicode_4BYTE_KIND\u00b6\nReturn values of the\nPyUnicode_KIND()\nmacro.Added in version 3.3.\nChanged in version 3.12:\nPyUnicode_WCHAR_KIND\nhas been removed.\n-\nint PyUnicode_KIND(PyObject *unicode)\u00b6\nReturn one of the PyUnicode kind constants (see above) that indicate how many bytes per character this Unicode object uses to store its data. 
unicode has to be a Unicode object in the \u201ccanonical\u201d representation (not checked).\nAdded in version 3.3.\n-\nvoid *PyUnicode_DATA(PyObject *unicode)\u00b6\nReturn a void pointer to the raw Unicode buffer. unicode has to be a Unicode object in the \u201ccanonical\u201d representation (not checked).\nAdded in version 3.3.\n-\nvoid PyUnicode_WRITE(int kind, void *data, Py_ssize_t index, Py_UCS4 value)\u00b6\nWrite the code point value to the given zero-based index in a string.\nThe kind value and data pointer must have been obtained from a string using\nPyUnicode_KIND()\nandPyUnicode_DATA()\nrespectively. You must hold a reference to that string while callingPyUnicode_WRITE()\n. All requirements ofPyUnicode_WriteChar()\nalso apply.The function performs no checks for any of its requirements, and is intended for usage in loops.\nAdded in version 3.3.\n-\nPy_UCS4 PyUnicode_READ(int kind, void *data, Py_ssize_t index)\u00b6\nRead a code point from a canonical representation data (as obtained with\nPyUnicode_DATA()\n). No checks or ready calls are performed.Added in version 3.3.\n-\nPy_UCS4 PyUnicode_READ_CHAR(PyObject *unicode, Py_ssize_t index)\u00b6\nRead a character from a Unicode object unicode, which must be in the \u201ccanonical\u201d representation. This is less efficient than\nPyUnicode_READ()\nif you do multiple consecutive reads.Added in version 3.3.\n-\nPy_UCS4 PyUnicode_MAX_CHAR_VALUE(PyObject *unicode)\u00b6\nReturn the maximum code point that is suitable for creating another string based on unicode, which must be in the \u201ccanonical\u201d representation. This is always an approximation but more efficient than iterating over the string.\nAdded in version 3.3.\n-\nint PyUnicode_IsIdentifier(PyObject *unicode)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif the string is a valid identifier according to the language definition, section Names (identifiers and keywords). 
Return0\notherwise.Changed in version 3.9: The function does not call\nPy_FatalError()\nanymore if the string is not ready.\n-\nunsigned int PyUnicode_IS_ASCII(PyObject *unicode)\u00b6\nReturn true if the string only contains ASCII characters. Equivalent to\nstr.isascii()\n.Added in version 3.2.\nUnicode Character Properties\u00b6\nUnicode provides many different character properties. The most often needed ones are available through these macros which are mapped to C functions depending on the Python configuration.\n-\nint Py_UNICODE_ISSPACE(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is a whitespace character.\n-\nint Py_UNICODE_ISUPPER(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is an uppercase character.\n-\nint Py_UNICODE_ISLINEBREAK(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is a linebreak character.\n-\nint Py_UNICODE_ISALPHA(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is an alphabetic character.\n-\nint Py_UNICODE_ISALNUM(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is an alphanumeric character.\n-\nint Py_UNICODE_ISPRINTABLE(Py_UCS4 ch)\u00b6\nReturn\n1\nor0\ndepending on whether ch is a printable character, in the sense ofstr.isprintable()\n.\nThese APIs can be used for fast direct character conversions:\n-\nint Py_UNICODE_TODECIMAL(Py_UCS4 ch)\u00b6\nReturn the character ch converted to a decimal positive integer. Return\n-1\nif this is not possible. This function does not raise exceptions.\n-\nint Py_UNICODE_TODIGIT(Py_UCS4 ch)\u00b6\nReturn the character ch converted to a single digit integer. Return\n-1\nif this is not possible. This function does not raise exceptions.\n-\ndouble Py_UNICODE_TONUMERIC(Py_UCS4 ch)\u00b6\nReturn the character ch converted to a double. Return\n-1.0\nif this is not possible. 
This function does not raise exceptions.\nThese APIs can be used to work with surrogates:\n-\nint Py_UNICODE_IS_HIGH_SURROGATE(Py_UCS4 ch)\u00b6\nCheck if ch is a high surrogate (\n0xD800 <= ch <= 0xDBFF\n).\n-\nint Py_UNICODE_IS_LOW_SURROGATE(Py_UCS4 ch)\u00b6\nCheck if ch is a low surrogate (\n0xDC00 <= ch <= 0xDFFF\n).\n-\nPy_UCS4 Py_UNICODE_HIGH_SURROGATE(Py_UCS4 ch)\u00b6\nReturn the high UTF-16 surrogate (\n0xD800\nto0xDBFF\n) for a Unicode code point in the range[0x10000; 0x10FFFF]\n.\n-\nPy_UCS4 Py_UNICODE_LOW_SURROGATE(Py_UCS4 ch)\u00b6\nReturn the low UTF-16 surrogate (\n0xDC00\nto0xDFFF\n) for a Unicode code point in the range[0x10000; 0x10FFFF]\n.\n-\nPy_UCS4 Py_UNICODE_JOIN_SURROGATES(Py_UCS4 high, Py_UCS4 low)\u00b6\nJoin two surrogate code points and return a single\nPy_UCS4\nvalue. high and low are respectively the leading and trailing surrogates in a surrogate pair. high must be in the range[0xD800; 0xDBFF]\nand low must be in the range[0xDC00; 0xDFFF]\n.\nCreating and accessing Unicode strings\u00b6\nTo create Unicode objects and access their basic sequence properties, use these APIs:\n-\nPyObject *PyUnicode_New(Py_ssize_t size, Py_UCS4 maxchar)\u00b6\n- Return value: New reference.\nCreate a new Unicode object. maxchar should be the true maximum code point to be placed in the string. As an approximation, it can be rounded up to the nearest value in the sequence 127, 255, 65535, 1114111.\nOn error, set an exception and return\nNULL\n.After creation, the string can be filled by\nPyUnicode_WriteChar()\n,PyUnicode_CopyCharacters()\n,PyUnicode_Fill()\n,PyUnicode_WRITE()\nor similar. Since strings are supposed to be immutable, take care to not \u201cuse\u201d the result while it is being modified. 
In particular, before it\u2019s filled with its final contents, a string:must not be hashed,\nmust not be\nconverted to UTF-8\n, or another non-\u201ccanonical\u201d representation,must not have its reference count changed,\nmust not be shared with code that might do one of the above.\nThis list is not exhaustive. Avoiding these uses is your responsibility; Python does not always check these requirements.\nTo avoid accidentally exposing a partially-written string object, prefer using the\nPyUnicodeWriter\nAPI, or one of thePyUnicode_From*\nfunctions below.Added in version 3.3.\n-\nPyObject *PyUnicode_FromKindAndData(int kind, const void *buffer, Py_ssize_t size)\u00b6\n- Return value: New reference.\nCreate a new Unicode object with the given kind (possible values are\nPyUnicode_1BYTE_KIND\netc., as returned byPyUnicode_KIND()\n). The buffer must point to an array of size units of 1, 2 or 4 bytes per character, as given by the kind.If necessary, the input buffer is copied and transformed into the canonical representation. For example, if the buffer is a UCS4 string (\nPyUnicode_4BYTE_KIND\n) and it consists only of codepoints in the UCS1 range, it will be transformed into UCS1 (PyUnicode_1BYTE_KIND\n).Added in version 3.3.\n-\nPyObject *PyUnicode_FromStringAndSize(const char *str, Py_ssize_t size)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode object from the char buffer str. The bytes will be interpreted as being UTF-8 encoded. The buffer is copied into the new object. The return value might be a shared object, i.e. modification of the data is not allowed.\nThis function raises\nSystemError\nwhen:size < 0,\nstr is\nNULL\nand size > 0\nChanged in version 3.12: str ==\nNULL\nwith size > 0 is not allowed anymore.\n-\nPyObject *PyUnicode_FromString(const char *str)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nCreate a Unicode object from a UTF-8 encoded null-terminated char buffer str.\n-\nPyObject *PyUnicode_FromFormat(const char *format, ...)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nTake a C\nprintf()\n-style format string and a variable number of arguments, calculate the size of the resulting Python Unicode string and return a string with the values formatted into it. The variable arguments must be C types and must correspond exactly to the format characters in the format ASCII-encoded string.A conversion specifier contains two or more characters and has the following components, which must occur in this order:\nThe\n'%'\ncharacter, which marks the start of the specifier.Conversion flags (optional), which affect the result of some conversion types.\nMinimum field width (optional). If specified as an\n'*'\n(asterisk), the actual width is given in the next argument, which must be of type int, and the object to convert comes after the minimum field width and optional precision.Precision (optional), given as a\n'.'\n(dot) followed by the precision. 
If specified as'*'\n(an asterisk), the actual precision is given in the next argument, which must be of type int, and the value to convert comes after the precision.Length modifier (optional).\nConversion type.\nThe conversion flag characters are:\nFlag\nMeaning\n0\nThe conversion will be zero padded for numeric values.\n-\nThe converted value is left adjusted (overrides the\n0\nflag if both are given).The length modifiers for following integer conversions (\nd\n,i\n,o\n,u\n,x\n, orX\n) specify the type of the argument (int by default):Modifier\nTypes\nl\nlong or unsigned long\nll\nlong long or unsigned long long\nj\nintmax_t\noruintmax_t\nz\nsize_t\norssize_t\nt\nptrdiff_t\nThe length modifier\nl\nfor following conversionss\norV\nspecify that the type of the argument is const wchar_t*.The conversion specifiers are:\nConversion Specifier\nType\nComment\n%\nn/a\nThe literal\n%\ncharacter.d\n,i\nSpecified by the length modifier\nThe decimal representation of a signed C integer.\nu\nSpecified by the length modifier\nThe decimal representation of an unsigned C integer.\no\nSpecified by the length modifier\nThe octal representation of an unsigned C integer.\nx\nSpecified by the length modifier\nThe hexadecimal representation of an unsigned C integer (lowercase).\nX\nSpecified by the length modifier\nThe hexadecimal representation of an unsigned C integer (uppercase).\nc\nint\nA single character.\ns\nconst char* or const wchar_t*\nA null-terminated C character array.\np\nconst void*\nThe hex representation of a C pointer. 
Mostly equivalent to\nprintf(\"%p\")\nexcept that it is guaranteed to start with the literal0x\nregardless of what the platform\u2019sprintf\nyields.A\nThe result of calling\nascii()\n.U\nA Unicode object.\nV\nPyObject*, const char* or const wchar_t*\nA Unicode object (which may be\nNULL\n) and a null-terminated C character array as a second parameter (which will be used, if the first parameter isNULL\n).S\nThe result of calling\nPyObject_Str()\n.R\nThe result of calling\nPyObject_Repr()\n.T\nGet the fully qualified name of an object type; call\nPyType_GetFullyQualifiedName()\n.#T\nSimilar to\nT\nformat, but use a colon (:\n) as separator between the module name and the qualified name.N\nGet the fully qualified name of a type; call\nPyType_GetFullyQualifiedName()\n.#N\nSimilar to\nN\nformat, but use a colon (:\n) as separator between the module name and the qualified name.Note\nThe width formatter unit is number of characters rather than bytes. The precision formatter unit is number of bytes or\nwchar_t\nitems (if the length modifierl\nis used) for\"%s\"\nand\"%V\"\n(if thePyObject*\nargument isNULL\n), and a number of characters for\"%A\"\n,\"%U\"\n,\"%S\"\n,\"%R\"\nand\"%V\"\n(if thePyObject*\nargument is notNULL\n).Note\nUnlike to C\nprintf()\nthe0\nflag has effect even when a precision is given for integer conversions (d\n,i\n,u\n,o\n,x\n, orX\n).Changed in version 3.2: Support for\n\"%lld\"\nand\"%llu\"\nadded.Changed in version 3.3: Support for\n\"%li\"\n,\"%lli\"\nand\"%zi\"\nadded.Changed in version 3.4: Support width and precision formatter for\n\"%s\"\n,\"%A\"\n,\"%U\"\n,\"%V\"\n,\"%S\"\n,\"%R\"\nadded.Changed in version 3.12: Support for conversion specifiers\no\nandX\n. Support for length modifiersj\nandt\n. Length modifiers are now applied to all integer conversions. Length modifierl\nis now applied to conversion specifierss\nandV\n. Support for variable width and precision*\n. 
Support for flag-\n.An unrecognized format character now sets a\nSystemError\n. In previous versions it caused all the rest of the format string to be copied as-is to the result string, and any extra arguments discarded.Changed in version 3.13: Support for\n%T\n,%#T\n,%N\nand%#N\nformats added.\n-\nPyObject *PyUnicode_FromFormatV(const char *format, va_list vargs)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nIdentical to\nPyUnicode_FromFormat()\nexcept that it takes exactly two arguments.\n-\nPyObject *PyUnicode_FromObject(PyObject *obj)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCopy an instance of a Unicode subtype to a new true Unicode object if necessary. If obj is already a true Unicode object (not a subtype), return a new strong reference to the object.\nObjects other than Unicode or its subtypes will cause a\nTypeError\n.\n-\nPyObject *PyUnicode_FromOrdinal(int ordinal)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode Object from the given Unicode code point ordinal.\nThe ordinal must be in\nrange(0x110000)\n. AValueError\nis raised in the case it is not.\n-\nPyObject *PyUnicode_FromEncodedObject(PyObject *obj, const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nDecode an encoded object obj to a Unicode object.\nbytes\n,bytearray\nand other bytes-like objects are decoded according to the given encoding and using the error handling defined by errors. Both can beNULL\nto have the interface use the default values (see Built-in Codecs for details).All other objects, including Unicode objects, cause a\nTypeError\nto be set.The API returns\nNULL\nif there was an error. The caller is responsible for decref\u2019ing the returned objects.\n-\nvoid PyUnicode_Append(PyObject **p_left, PyObject *right)\u00b6\n- Part of the Stable ABI.\nAppend the string right to the end of p_left. 
p_left must point to a strong reference to a Unicode object;\nPyUnicode_Append()\nreleases (\u201csteals\u201d) this reference.On error, set *p_left to\nNULL\nand set an exception.On success, set *p_left to a new strong reference to the result.\n-\nvoid PyUnicode_AppendAndDel(PyObject **p_left, PyObject *right)\u00b6\n- Part of the Stable ABI.\nThe function is similar to\nPyUnicode_Append()\n, with the only difference being that it decrements the reference count of right by one.\n-\nPyObject *PyUnicode_BuildEncodingMap(PyObject *string)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a mapping suitable for decoding a custom single-byte encoding. Given a Unicode string string of up to 256 characters representing an encoding table, returns either a compact internal mapping object or a dictionary mapping character ordinals to byte values. Raises a\nTypeError\nand returns\nNULL\non invalid input.Added in version 3.2.\n-\nconst char *PyUnicode_GetDefaultEncoding(void)\u00b6\n- Part of the Stable ABI.\nReturn the name of the default string encoding,\n\"utf-8\"\n. Seesys.getdefaultencoding()\n.The returned string does not need to be freed, and is valid until interpreter shutdown.\n-\nPy_ssize_t PyUnicode_GetLength(PyObject *unicode)\u00b6\n- Part of the Stable ABI since version 3.7.\nReturn the length of the Unicode object, in code points.\nOn error, set an exception and return\n-1\n.Added in version 3.3.\n-\nPy_ssize_t PyUnicode_CopyCharacters(PyObject *to, Py_ssize_t to_start, PyObject *from, Py_ssize_t from_start, Py_ssize_t how_many)\u00b6\nCopy characters from one Unicode object into another. This function performs character conversion when necessary and falls back to\nmemcpy()\nif possible. Returns-1\nand sets an exception on error, otherwise returns the number of copied characters.The string must not have been \u201cused\u201d yet. 
See\nPyUnicode_New()\nfor details.Added in version 3.3.\n-\nint PyUnicode_Resize(PyObject **unicode, Py_ssize_t length)\u00b6\n- Part of the Stable ABI.\nResize a Unicode object *unicode to the new length in code points.\nTry to resize the string in place (which is usually faster than allocating a new string and copying characters), or create a new string.\n*unicode is modified to point to the new (resized) object and\n0\nis returned on success. Otherwise,-1\nis returned and an exception is set, and *unicode is left untouched.The function doesn\u2019t check string content; the result may not be a string in canonical representation.\n-\nPy_ssize_t PyUnicode_Fill(PyObject *unicode, Py_ssize_t start, Py_ssize_t length, Py_UCS4 fill_char)\u00b6\nFill a string with a character: write fill_char into\nunicode[start:start+length]\n.Fail if fill_char is bigger than the string maximum character, or if the string has more than 1 reference.\nThe string must not have been \u201cused\u201d yet. See\nPyUnicode_New()\nfor details.Return the number of written characters, or return\n-1\nand raise an exception on error.Added in version 3.3.\n-\nint PyUnicode_WriteChar(PyObject *unicode, Py_ssize_t index, Py_UCS4 character)\u00b6\n- Part of the Stable ABI since version 3.7.\nWrite a character to the string unicode at the zero-based index. Return\n0\non success,-1\non error with an exception set.This function checks that unicode is a Unicode object, that the index is not out of bounds, and that the object\u2019s reference count is one. See\nPyUnicode_WRITE()\nfor a version that skips these checks, making them your responsibility.The string must not have been \u201cused\u201d yet. See\nPyUnicode_New()\nfor details.Added in version 3.3.\n-\nPy_UCS4 PyUnicode_ReadChar(PyObject *unicode, Py_ssize_t index)\u00b6\n- Part of the Stable ABI since version 3.7.\nRead a character from a string. 
This function checks that unicode is a Unicode object and the index is not out of bounds, in contrast to\nPyUnicode_READ_CHAR()\n, which performs no error checking.Return character on success,\n-1\non error with an exception set.Added in version 3.3.\n-\nPyObject *PyUnicode_Substring(PyObject *unicode, Py_ssize_t start, Py_ssize_t end)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nReturn a substring of unicode, from character index start (included) to character index end (excluded). Negative indices are not supported. On error, set an exception and return\nNULL\n.Added in version 3.3.\n-\nPy_UCS4 *PyUnicode_AsUCS4(PyObject *unicode, Py_UCS4 *buffer, Py_ssize_t buflen, int copy_null)\u00b6\n- Part of the Stable ABI since version 3.7.\nCopy the string unicode into a UCS4 buffer, including a null character, if copy_null is set. Returns\nNULL\nand sets an exception on error (in particular, aSystemError\nif buflen is smaller than the length of unicode). buffer is returned on success.Added in version 3.3.\n-\nPy_UCS4 *PyUnicode_AsUCS4Copy(PyObject *unicode)\u00b6\n- Part of the Stable ABI since version 3.7.\nCopy the string unicode into a new UCS4 buffer that is allocated using\nPyMem_Malloc()\n. If this fails,NULL\nis returned with aMemoryError\nset. The returned buffer always has an extra null code point appended.Added in version 3.3.\nLocale Encoding\u00b6\nThe current locale encoding can be used to decode text from the operating system.\n-\nPyObject *PyUnicode_DecodeLocaleAndSize(const char *str, Py_ssize_t length, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nDecode a string from UTF-8 on Android and VxWorks, or from the current locale encoding on other platforms. The supported error handlers are\n\"strict\"\nand\"surrogateescape\"\n(PEP 383). The decoder uses\"strict\"\nerror handler if errors isNULL\n. 
str must end with a null character but cannot contain embedded null characters.Use\nPyUnicode_DecodeFSDefaultAndSize()\nto decode a string from the filesystem encoding and error handler.This function ignores the Python UTF-8 Mode.\nSee also\nThe\nPy_DecodeLocale()\nfunction.Added in version 3.3.\nChanged in version 3.7: The function now also uses the current locale encoding for the\nsurrogateescape\nerror handler, except on Android. Previously,Py_DecodeLocale()\nwas used for thesurrogateescape\n, and the current locale encoding was used forstrict\n.\n-\nPyObject *PyUnicode_DecodeLocale(const char *str, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nSimilar to\nPyUnicode_DecodeLocaleAndSize()\n, but compute the string length usingstrlen()\n.Added in version 3.3.\n-\nPyObject *PyUnicode_EncodeLocale(PyObject *unicode, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nEncode a Unicode object to UTF-8 on Android and VxWorks, or to the current locale encoding on other platforms. The supported error handlers are\n\"strict\"\nand\"surrogateescape\"\n(PEP 383). The encoder uses\"strict\"\nerror handler if errors isNULL\n. Return abytes\nobject. unicode cannot contain embedded null characters.Use\nPyUnicode_EncodeFSDefault()\nto encode a string to the filesystem encoding and error handler.This function ignores the Python UTF-8 Mode.\nSee also\nThe\nPy_EncodeLocale()\nfunction.Added in version 3.3.\nChanged in version 3.7: The function now also uses the current locale encoding for the\nsurrogateescape\nerror handler, except on Android. 
Previously,Py_EncodeLocale()\nwas used for thesurrogateescape\n, and the current locale encoding was used forstrict\n.\nFile System Encoding\u00b6\nFunctions encoding to and decoding from the filesystem encoding and error handler (PEP 383 and PEP 529).\nTo encode file names to bytes\nduring argument parsing, the \"O&\"\nconverter should be used, passing PyUnicode_FSConverter()\nas the\nconversion function:\n-\nint PyUnicode_FSConverter(PyObject *obj, void *result)\u00b6\n- Part of the Stable ABI.\nPyArg_Parse* converter: encode\nstr\nobjects \u2013 obtained directly or through theos.PathLike\ninterface \u2013 tobytes\nusingPyUnicode_EncodeFSDefault()\n;bytes\nobjects are output as-is. result must be an address of a C variable of type PyObject* (or PyBytesObject*). On success, set the variable to a new strong reference to a bytes object which must be released when it is no longer used and return a non-zero value (Py_CLEANUP_SUPPORTED\n). Embedded null bytes are not allowed in the result. On failure, return0\nwith an exception set.If obj is\nNULL\n, the function releases a strong reference stored in the variable referred by result and returns1\n.Added in version 3.1.\nChanged in version 3.6: Accepts a path-like object.\nTo decode file names to str\nduring argument parsing, the \"O&\"\nconverter should be used, passing PyUnicode_FSDecoder()\nas the\nconversion function:\n-\nint PyUnicode_FSDecoder(PyObject *obj, void *result)\u00b6\n- Part of the Stable ABI.\nPyArg_Parse* converter: decode\nbytes\nobjects \u2013 obtained either directly or indirectly through theos.PathLike\ninterface \u2013 tostr\nusingPyUnicode_DecodeFSDefaultAndSize()\n;str\nobjects are output as-is. result must be an address of a C variable of type PyObject* (or PyUnicodeObject*). On success, set the variable to a new strong reference to a Unicode object which must be released when it is no longer used and return a non-zero value (Py_CLEANUP_SUPPORTED\n). 
Embedded null characters are not allowed in the result. On failure, return0\nwith an exception set.If obj is\nNULL\n, release the strong reference to the object referred to by result and return1\n.Added in version 3.2.\nChanged in version 3.6: Accepts a path-like object.\n-\nPyObject *PyUnicode_DecodeFSDefaultAndSize(const char *str, Py_ssize_t size)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nDecode a string from the filesystem encoding and error handler.\nIf you need to decode a string from the current locale encoding, use\nPyUnicode_DecodeLocaleAndSize()\n.See also\nThe\nPy_DecodeLocale()\nfunction.Changed in version 3.6: The filesystem error handler is now used.\n-\nPyObject *PyUnicode_DecodeFSDefault(const char *str)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nDecode a null-terminated string from the filesystem encoding and error handler.\nIf the string length is known, use\nPyUnicode_DecodeFSDefaultAndSize()\n.Changed in version 3.6: The filesystem error handler is now used.\n-\nPyObject *PyUnicode_EncodeFSDefault(PyObject *unicode)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEncode a Unicode object to the filesystem encoding and error handler, and return\nbytes\n. Note that the resultingbytes\nobject can contain null bytes.If you need to encode a string to the current locale encoding, use\nPyUnicode_EncodeLocale()\n.See also\nThe\nPy_EncodeLocale()\nfunction.Added in version 3.2.\nChanged in version 3.6: The filesystem error handler is now used.\nwchar_t Support\u00b6\nwchar_t\nsupport for platforms which support it:\n-\nPyObject *PyUnicode_FromWideChar(const wchar_t *wstr, Py_ssize_t size)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode object from the\nwchar_t\nbuffer wstr of the given size. Passing-1\nas the size indicates that the function must itself compute the length, usingwcslen()\n. 
ReturnNULL\non failure.\n-\nPy_ssize_t PyUnicode_AsWideChar(PyObject *unicode, wchar_t *wstr, Py_ssize_t size)\u00b6\n- Part of the Stable ABI.\nCopy the Unicode object contents into the\nwchar_t\nbuffer wstr. At most sizewchar_t\ncharacters are copied (excluding a possibly trailing null termination character). Return the number ofwchar_t\ncharacters copied or-1\nin case of an error.When wstr is\nNULL\n, instead return the size that would be required to store all of unicode including a terminating null.Note that the resulting wchar_t* string may or may not be null-terminated. It is the responsibility of the caller to make sure that the wchar_t* string is null-terminated in case this is required by the application. Also, note that the wchar_t* string might contain null characters, which would cause the string to be truncated when used with most C functions.\n-\nwchar_t *PyUnicode_AsWideCharString(PyObject *unicode, Py_ssize_t *size)\u00b6\n- Part of the Stable ABI since version 3.7.\nConvert the Unicode object to a wide character string. The output string always ends with a null character. If size is not\nNULL\n, write the number of wide characters (excluding the trailing null termination character) into *size. Note that the resultingwchar_t\nstring might contain null characters, which would cause the string to be truncated when used with most C functions. If size isNULL\nand the wchar_t* string contains null characters, aValueError\nis raised.Returns a buffer allocated by\nPyMem_New\n(usePyMem_Free()\nto free it) on success. On error, returnsNULL\nand *size is undefined. Raises aMemoryError\nif memory allocation fails.Added in version 3.2.\nChanged in version 3.7: Raises a\nValueError\nif size isNULL\nand the wchar_t* string contains null characters.\nBuilt-in Codecs\u00b6\nPython provides a set of built-in codecs which are written in C for speed. 
All of these codecs are directly usable via the following functions.\nMany of the following APIs take two arguments encoding and errors, and they\nhave the same semantics as the ones of the built-in str()\nstring object\nconstructor.\nSetting encoding to NULL\ncauses the default encoding to be used\nwhich is UTF-8. The file system calls should use\nPyUnicode_FSConverter()\nfor encoding file names. This uses the\nfilesystem encoding and error handler internally.\nError handling is set by errors which may also be set to NULL\nmeaning to use\nthe default handling defined for the codec. Default error handling for all\nbuilt-in codecs is \u201cstrict\u201d (ValueError\nis raised).\nThe codecs all use a similar interface. Only deviations from the following generic ones are documented for simplicity.\nGeneric Codecs\u00b6\nThe following macro is provided:\n-\nPy_UNICODE_REPLACEMENT_CHARACTER\u00b6\nThe Unicode code point\nU+FFFD\n(replacement character).This Unicode character is used as the replacement character during decoding if the errors argument is set to \u201creplace\u201d.\nThese are the generic codec APIs:\n-\nPyObject *PyUnicode_Decode(const char *str, Py_ssize_t size, const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode object by decoding size bytes of the encoded string str. encoding and errors have the same meaning as the parameters of the same name in the\nstr()\nbuilt-in function. The codec to be used is looked up using the Python codec registry. ReturnNULL\nif an exception was raised by the codec.\n-\nPyObject *PyUnicode_AsEncodedString(PyObject *unicode, const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEncode a Unicode object and return the result as Python bytes object. encoding and errors have the same meaning as the parameters of the same name in the Unicode\nencode()\nmethod. 
The codec to be used is looked up using the Python codec registry. ReturnNULL\nif an exception was raised by the codec.\nUTF-8 Codecs\u00b6\nThese are the UTF-8 codec APIs:\n-\nPyObject *PyUnicode_DecodeUTF8(const char *str, Py_ssize_t size, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a Unicode object by decoding size bytes of the UTF-8 encoded string str. Return\nNULL\nif an exception was raised by the codec.\n-\nPyObject *PyUnicode_DecodeUTF8Stateful(const char *str, Py_ssize_t size, const char *errors, Py_ssize_t *consumed)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nIf consumed is\nNULL\n, behave likePyUnicode_DecodeUTF8()\n. If consumed is notNULL\n, trailing incomplete UTF-8 byte sequences will not be treated as an error. Those bytes will not be decoded and the number of bytes that have been decoded will be stored in consumed.\n-\nPyObject *PyUnicode_AsUTF8String(PyObject *unicode)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nEncode a Unicode object using UTF-8 and return the result as Python bytes object. Error handling is \u201cstrict\u201d. Return\nNULL\nif an exception was raised by the codec.The function fails if the string contains surrogate code points (\nU+D800\n-U+DFFF\n).\n-\nconst char *PyUnicode_AsUTF8AndSize(PyObject *unicode, Py_ssize_t *size)\u00b6\n- Part of the Stable ABI since version 3.10.\nReturn a pointer to the UTF-8 encoding of the Unicode object, and store the size of the encoded representation (in bytes) in size. The size argument can be\nNULL\n; in this case no size will be stored. 
The returned buffer always has an extra null byte appended (not included in size), regardless of whether there are any other null code points. On error, set an exception, set size to -1 (if it's not NULL) and return NULL. The function fails if the string contains surrogate code points (U+D800-U+DFFF). This caches the UTF-8 representation of the string in the Unicode object, and subsequent calls will return a pointer to the same buffer. The caller is not responsible for deallocating the buffer. The buffer is deallocated and pointers to it become invalid when the Unicode object is garbage collected.
  Added in version 3.3.
  Changed in version 3.7: The return type is now const char * rather than char *.
  Changed in version 3.10: This function is a part of the limited API.

- const char *PyUnicode_AsUTF8(PyObject *unicode)
  As PyUnicode_AsUTF8AndSize(), but does not store the size.
  Warning: This function does not have any special behavior for null characters embedded within unicode. As a result, strings containing null characters will remain in the returned string, which some C functions might interpret as the end of the string, leading to truncation. If truncation is an issue, it is recommended to use PyUnicode_AsUTF8AndSize() instead.
  Added in version 3.3.
  Changed in version 3.7: The return type is now const char * rather than char *.

UTF-32 Codecs

These are the UTF-32 codec APIs:

- PyObject *PyUnicode_DecodeUTF32(const char *str, Py_ssize_t size, const char *errors, int *byteorder)
  Return value: New reference. Part of the Stable ABI.
  Decode size bytes from a UTF-32 encoded buffer string and return the corresponding Unicode object. errors (if non-NULL) defines the error handling.
It defaults to “strict”.
  If byteorder is non-NULL, the decoder starts decoding using the given byte order:

    *byteorder == -1: little endian
    *byteorder == 0:  native order
    *byteorder == 1:  big endian

  If *byteorder is zero, and the first four bytes of the input data are a byte order mark (BOM), the decoder switches to this byte order and the BOM is not copied into the resulting Unicode string. If *byteorder is -1 or 1, any byte order mark is copied to the output.
  After completion, *byteorder is set to the current byte order at the end of input data.
  If byteorder is NULL, the codec starts in native order mode.
  Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_DecodeUTF32Stateful(const char *str, Py_ssize_t size, const char *errors, int *byteorder, Py_ssize_t *consumed)
  Return value: New reference. Part of the Stable ABI.
  If consumed is NULL, behave like PyUnicode_DecodeUTF32(). If consumed is not NULL, PyUnicode_DecodeUTF32Stateful() will not treat trailing incomplete UTF-32 byte sequences (such as a number of bytes not divisible by four) as an error. Those bytes will not be decoded, and the number of bytes that have been decoded will be stored in consumed.

- PyObject *PyUnicode_AsUTF32String(PyObject *unicode)
  Return value: New reference. Part of the Stable ABI.
  Return a Python byte string using the UTF-32 encoding in native byte order. The string always starts with a BOM mark. Error handling is “strict”. Return NULL if an exception was raised by the codec.

UTF-16 Codecs

These are the UTF-16 codec APIs:

- PyObject *PyUnicode_DecodeUTF16(const char *str, Py_ssize_t size, const char *errors, int *byteorder)
  Return value: New reference. Part of the Stable ABI.
  Decode size bytes from a UTF-16 encoded buffer string and return the corresponding Unicode object. errors (if non-NULL) defines the error handling.
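The BOM handling described for the UTF-32 codecs above (the UTF-16 codecs below behave the same way, with a two-byte BOM) mirrors the Python-level codecs; a short pure-Python illustration:

```python
# Encoding with the byte-order-generic codec prepends a BOM in native order.
data = "A".encode("utf-32")
assert len(data) == 8                  # 4-byte BOM + one 4-byte code unit
assert data[:4] in (b"\xff\xfe\x00\x00", b"\x00\x00\xfe\xff")

# Decoding in "native order mode" detects the BOM and strips it.
assert data.decode("utf-32") == "A"

# With an explicit byte order, no BOM is written or consumed.
assert "A".encode("utf-32-le") == b"A\x00\x00\x00"
```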
It defaults to “strict”.
  If byteorder is non-NULL, the decoder starts decoding using the given byte order:

    *byteorder == -1: little endian
    *byteorder == 0:  native order
    *byteorder == 1:  big endian

  If *byteorder is zero, and the first two bytes of the input data are a byte order mark (BOM), the decoder switches to this byte order and the BOM is not copied into the resulting Unicode string. If *byteorder is -1 or 1, any byte order mark is copied to the output (where it will result in either a \ufeff or a \ufffe character).
  After completion, *byteorder is set to the current byte order at the end of input data.
  If byteorder is NULL, the codec starts in native order mode.
  Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_DecodeUTF16Stateful(const char *str, Py_ssize_t size, const char *errors, int *byteorder, Py_ssize_t *consumed)
  Return value: New reference. Part of the Stable ABI.
  If consumed is NULL, behave like PyUnicode_DecodeUTF16(). If consumed is not NULL, PyUnicode_DecodeUTF16Stateful() will not treat trailing incomplete UTF-16 byte sequences (such as an odd number of bytes or a split surrogate pair) as an error. Those bytes will not be decoded, and the number of bytes that have been decoded will be stored in consumed.

- PyObject *PyUnicode_AsUTF16String(PyObject *unicode)
  Return value: New reference. Part of the Stable ABI.
  Return a Python byte string using the UTF-16 encoding in native byte order. The string always starts with a BOM mark. Error handling is “strict”. Return NULL if an exception was raised by the codec.

UTF-7 Codecs

These are the UTF-7 codec APIs:

- PyObject *PyUnicode_DecodeUTF7(const char *str, Py_ssize_t size, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Create a Unicode object by decoding size bytes of the UTF-7 encoded string str.
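The consumed-based contract of the Stateful decoder variants corresponds to Python's incremental decoders, which likewise buffer a trailing incomplete sequence instead of raising; a pure-Python illustration of the semantics (not of the C API):

```python
import codecs

dec = codecs.getincrementaldecoder("utf-16")()
assert dec.decode(b"\xff\xfe") == ""   # BOM selects little endian; no output
assert dec.decode(b"A") == ""          # odd trailing byte is buffered, not an error
assert dec.decode(b"\x00") == "A"      # second byte completes the code unit
```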
Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_DecodeUTF7Stateful(const char *str, Py_ssize_t size, const char *errors, Py_ssize_t *consumed)
  Return value: New reference. Part of the Stable ABI.
  If consumed is NULL, behave like PyUnicode_DecodeUTF7(). If consumed is not NULL, trailing incomplete UTF-7 base-64 sections will not be treated as an error. Those bytes will not be decoded, and the number of bytes that have been decoded will be stored in consumed.

Unicode-Escape Codecs

These are the “Unicode Escape” codec APIs:

- PyObject *PyUnicode_DecodeUnicodeEscape(const char *str, Py_ssize_t size, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Create a Unicode object by decoding size bytes of the Unicode-Escape encoded string str. Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_AsUnicodeEscapeString(PyObject *unicode)
  Return value: New reference. Part of the Stable ABI.
  Encode a Unicode object using Unicode-Escape and return the result as a bytes object. Error handling is “strict”. Return NULL if an exception was raised by the codec.

Raw-Unicode-Escape Codecs

These are the “Raw Unicode Escape” codec APIs:

- PyObject *PyUnicode_DecodeRawUnicodeEscape(const char *str, Py_ssize_t size, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Create a Unicode object by decoding size bytes of the Raw-Unicode-Escape encoded string str. Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_AsRawUnicodeEscapeString(PyObject *unicode)
  Return value: New reference. Part of the Stable ABI.
  Encode a Unicode object using Raw-Unicode-Escape and return the result as a bytes object. Error handling is “strict”.
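The C functions wrap the same codecs reachable from Python as "unicode_escape" and "raw_unicode_escape"; the difference between the two is easiest to see at the Python level:

```python
# Unicode-Escape produces backslash escapes for non-ASCII characters.
assert "caf\u00e9".encode("unicode_escape") == b"caf\\xe9"

# Raw-Unicode-Escape writes code points below 256 as raw Latin-1 bytes
# and only escapes higher code points.
assert "caf\u00e9".encode("raw_unicode_escape") == b"caf\xe9"
assert "\u2603".encode("raw_unicode_escape") == b"\\u2603"
```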
Return NULL if an exception was raised by the codec.

Latin-1 Codecs

These are the Latin-1 codec APIs: Latin-1 corresponds to the first 256 Unicode ordinals, and only these are accepted by the codecs during encoding.

- PyObject *PyUnicode_DecodeLatin1(const char *str, Py_ssize_t size, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Create a Unicode object by decoding size bytes of the Latin-1 encoded string str. Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_AsLatin1String(PyObject *unicode)
  Return value: New reference. Part of the Stable ABI.
  Encode a Unicode object using Latin-1 and return the result as a Python bytes object. Error handling is “strict”. Return NULL if an exception was raised by the codec.

ASCII Codecs

These are the ASCII codec APIs. Only 7-bit ASCII data is accepted. All other codes generate errors.

- PyObject *PyUnicode_DecodeASCII(const char *str, Py_ssize_t size, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Create a Unicode object by decoding size bytes of the ASCII encoded string str. Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_AsASCIIString(PyObject *unicode)
  Return value: New reference. Part of the Stable ABI.
  Encode a Unicode object using ASCII and return the result as a Python bytes object. Error handling is “strict”. Return NULL if an exception was raised by the codec.

Character Map Codecs

This codec is special in that it can be used to implement many different codecs (and this is in fact what was done to obtain most of the standard codecs included in the encodings package). The codec uses mappings to encode and decode characters.
The mapping objects provided must support the __getitem__() mapping interface; dictionaries and sequences work well.

These are the mapping codec APIs:

- PyObject *PyUnicode_DecodeCharmap(const char *str, Py_ssize_t length, PyObject *mapping, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Create a Unicode object by decoding length bytes of the encoded string str using the given mapping object. Return NULL if an exception was raised by the codec.
  If mapping is NULL, Latin-1 decoding will be applied. Otherwise mapping must map byte ordinals (integers in the range from 0 to 255) to Unicode strings, integers (which are then interpreted as Unicode ordinals), or None. Unmapped data bytes (ones which cause a LookupError), as well as ones which get mapped to None, 0xFFFE or '\ufffe', are treated as undefined mappings and cause an error.

- PyObject *PyUnicode_AsCharmapString(PyObject *unicode, PyObject *mapping)
  Return value: New reference. Part of the Stable ABI.
  Encode a Unicode object using the given mapping object and return the result as a bytes object. Error handling is “strict”. Return NULL if an exception was raised by the codec.
  The mapping object must map Unicode ordinal integers to bytes objects, integers in the range from 0 to 255, or None. Unmapped character ordinals (ones which cause a LookupError), as well as ones mapped to None, are treated as “undefined mapping” and cause an error.

The following codec API is special in that it maps Unicode to Unicode.

- PyObject *PyUnicode_Translate(PyObject *unicode, PyObject *table, const char *errors)
  Return value: New reference. Part of the Stable ABI.
  Translate a string by applying a character mapping table to it and return the resulting Unicode object.
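Both mechanisms are also reachable from Python (codecs.charmap_decode() and str.translate()), which makes the mapping rules easy to demonstrate in a pure-Python sketch:

```python
import codecs

# Decoding with a mapping: here a string table, where byte ordinal n maps
# to table[n]; byte 0x00 -> "x", byte 0x02 -> "z".
assert codecs.charmap_decode(b"\x00\x02", "strict", "xyz") == ("xz", 2)

# str.translate() follows the PyUnicode_Translate() rules: ordinal -> ordinal,
# str, or None (deletion); unmapped ordinals are copied as-is.
table = {ord("a"): None, ord("b"): ord("B")}
assert "abc".translate(table) == "Bc"
```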
Return NULL if an exception was raised by the codec.
  The mapping table must map Unicode ordinal integers to Unicode ordinal integers or None (causing deletion of the character).
  Mapping tables need only provide the __getitem__() interface; dictionaries and sequences work well. Unmapped character ordinals (ones which cause a LookupError) are left untouched and are copied as-is.
  errors has the usual meaning for codecs. It may be NULL, which indicates to use the default error handling.

MBCS codecs for Windows

These are the MBCS codec APIs. They are currently only available on Windows and use the Win32 MBCS converters to implement the conversions. Note that MBCS (or DBCS) is a class of encodings, not just one. The target encoding is defined by the user settings on the machine running the codec.

- PyObject *PyUnicode_DecodeMBCS(const char *str, Py_ssize_t size, const char *errors)
  Return value: New reference. Part of the Stable ABI on Windows since version 3.7.
  Create a Unicode object by decoding size bytes of the MBCS encoded string str. Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_DecodeMBCSStateful(const char *str, Py_ssize_t size, const char *errors, Py_ssize_t *consumed)
  Return value: New reference. Part of the Stable ABI on Windows since version 3.7.
  If consumed is NULL, behave like PyUnicode_DecodeMBCS(). If consumed is not NULL, PyUnicode_DecodeMBCSStateful() will not decode a trailing lead byte, and the number of bytes that have been decoded will be stored in consumed.

- PyObject *PyUnicode_DecodeCodePageStateful(int code_page, const char *str, Py_ssize_t size, const char *errors, Py_ssize_t *consumed)
  Return value: New reference. Part of the Stable ABI on Windows since version 3.7.
  Similar to PyUnicode_DecodeMBCSStateful(), except it uses the code page specified by code_page.

- PyObject *PyUnicode_AsMBCSString(PyObject *unicode)
  Return value: New reference.
Part of the Stable ABI on Windows since version 3.7.
  Encode a Unicode object using MBCS and return the result as a Python bytes object. Error handling is “strict”. Return NULL if an exception was raised by the codec.

- PyObject *PyUnicode_EncodeCodePage(int code_page, PyObject *unicode, const char *errors)
  Return value: New reference. Part of the Stable ABI on Windows since version 3.7.
  Encode the Unicode object using the specified code page and return a Python bytes object. Return NULL if an exception was raised by the codec. Use the CP_ACP code page to get the MBCS encoder.
  Added in version 3.3.

Methods and Slot Functions

The following APIs are capable of handling Unicode objects and strings on input (we refer to them as strings in the descriptions) and return Unicode objects or integers as appropriate. They all return NULL or -1 if an exception occurs.

- PyObject *PyUnicode_Concat(PyObject *left, PyObject *right)
  Return value: New reference. Part of the Stable ABI.
  Concatenate two strings, giving a new Unicode string.

- PyObject *PyUnicode_Split(PyObject *unicode, PyObject *sep, Py_ssize_t maxsplit)
  Return value: New reference. Part of the Stable ABI.
  Split a string, giving a list of Unicode strings. If sep is NULL, splitting will be done at all whitespace substrings. Otherwise, splits occur at the given separator. At most maxsplit splits will be done. If maxsplit is negative, no limit is set. Separators are not included in the resulting list.
  On error, return NULL with an exception set.
  Equivalent to str.split().

- PyObject *PyUnicode_RSplit(PyObject *unicode, PyObject *sep, Py_ssize_t maxsplit)
  Return value: New reference.
Part of the Stable ABI.
  Similar to PyUnicode_Split(), but splitting will be done beginning at the end of the string.
  On error, return NULL with an exception set.
  Equivalent to str.rsplit().

- PyObject *PyUnicode_Splitlines(PyObject *unicode, int keepends)
  Return value: New reference. Part of the Stable ABI.
  Split a Unicode string at line breaks, returning a list of Unicode strings. CRLF is considered to be one line break. If keepends is 0, the line break characters are not included in the resulting strings.

- PyObject *PyUnicode_Partition(PyObject *unicode, PyObject *sep)
  Return value: New reference. Part of the Stable ABI.
  Split a Unicode string at the first occurrence of sep, and return a 3-tuple containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return a 3-tuple containing the string itself, followed by two empty strings.
  sep must not be empty.
  On error, return NULL with an exception set.
  Equivalent to str.partition().

- PyObject *PyUnicode_RPartition(PyObject *unicode, PyObject *sep)
  Return value: New reference. Part of the Stable ABI.
  Similar to PyUnicode_Partition(), but split a Unicode string at the last occurrence of sep. If the separator is not found, return a 3-tuple containing two empty strings, followed by the string itself.
  sep must not be empty.
  On error, return NULL with an exception set.
  Equivalent to str.rpartition().

- PyObject *PyUnicode_Join(PyObject *separator, PyObject *seq)
  Return value: New reference.
Part of the Stable ABI.
  Join a sequence of strings using the given separator and return the resulting Unicode string.

- Py_ssize_t PyUnicode_Tailmatch(PyObject *unicode, PyObject *substr, Py_ssize_t start, Py_ssize_t end, int direction)
  Part of the Stable ABI.
  Return 1 if substr matches unicode[start:end] at the given tail end (direction == -1 means to do a prefix match, direction == 1 a suffix match), 0 otherwise. Return -1 if an error occurred.

- Py_ssize_t PyUnicode_Find(PyObject *unicode, PyObject *substr, Py_ssize_t start, Py_ssize_t end, int direction)
  Part of the Stable ABI.
  Return the first position of substr in unicode[start:end] using the given direction (direction == 1 means to do a forward search, direction == -1 a backward search). The return value is the index of the first match; a value of -1 indicates that no match was found, and -2 indicates that an error occurred and an exception has been set.

- Py_ssize_t PyUnicode_FindChar(PyObject *unicode, Py_UCS4 ch, Py_ssize_t start, Py_ssize_t end, int direction)
  Part of the Stable ABI since version 3.7.
  Return the first position of the character ch in unicode[start:end] using the given direction (direction == 1 means to do a forward search, direction == -1 a backward search). The return value is the index of the first match; a value of -1 indicates that no match was found, and -2 indicates that an error occurred and an exception has been set.
  Added in version 3.3.
  Changed in version 3.7: start and end are now adjusted to behave like unicode[start:end].

- Py_ssize_t PyUnicode_Count(PyObject *unicode, PyObject *substr, Py_ssize_t start, Py_ssize_t end)
  Part of the Stable ABI.
  Return the number of non-overlapping occurrences of substr in unicode[start:end]. Return -1 if an error occurred.

- PyObject *PyUnicode_Replace(PyObject *unicode, PyObject *substr, PyObject *replstr, Py_ssize_t maxcount)
  Return value: New reference.
Part of the Stable ABI.
  Replace at most maxcount occurrences of substr in unicode with replstr and return the resulting Unicode object. maxcount == -1 means replace all occurrences.

- int PyUnicode_Compare(PyObject *left, PyObject *right)
  Part of the Stable ABI.
  Compare two strings and return -1, 0, 1 for less than, equal, and greater than, respectively.
  This function returns -1 upon failure, so one should call PyErr_Occurred() to check for errors.
  See also the PyUnicode_Equal() function.

- int PyUnicode_Equal(PyObject *a, PyObject *b)
  Part of the Stable ABI since version 3.14.
  Test if two strings are equal:
  Return 1 if a is equal to b.
  Return 0 if a is not equal to b.
  Set a TypeError exception and return -1 if a or b is not a str object.
  The function always succeeds if a and b are str objects.
  The function works for str subclasses, but does not honor a custom __eq__() method.
  See also the PyUnicode_Compare() function.
  Added in version 3.14.

- int PyUnicode_EqualToUTF8AndSize(PyObject *unicode, const char *string, Py_ssize_t size)
  Part of the Stable ABI since version 3.13.
  Compare a Unicode object with a char buffer which is interpreted as being UTF-8 or ASCII encoded and return true (1) if they are equal, or false (0) otherwise. If the Unicode object contains surrogate code points (U+D800-U+DFFF) or the C string is not valid UTF-8, false (0) is returned.
  This function does not raise exceptions.
  Added in version 3.13.

- int PyUnicode_EqualToUTF8(PyObject *unicode, const char *string)
  Part of the Stable ABI since version 3.13.
  Similar to PyUnicode_EqualToUTF8AndSize(), but compute the string length using strlen().
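The comparison helpers above follow ordinary code-point semantics; in Python terms (a pure-Python illustration, not the C calls):

```python
# PyUnicode_Compare orders strings by Unicode code point, so "Z" (U+005A)
# sorts before "a" (U+0061).
assert "Z" < "a"

# PyUnicode_EqualToUTF8AndSize compares against a UTF-8 buffer, which is
# equivalent to encoding the string first and comparing the bytes.
assert "caf\u00e9".encode("utf-8") == b"caf\xc3\xa9"
```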
If the Unicode object contains null characters, false (0) is returned.
  Added in version 3.13.

- int PyUnicode_CompareWithASCIIString(PyObject *unicode, const char *string)
  Part of the Stable ABI.
  Compare a Unicode object, unicode, with string and return -1, 0, 1 for less than, equal, and greater than, respectively. It is best to pass only ASCII-encoded strings, but the function interprets the input string as ISO-8859-1 if it contains non-ASCII characters.
  This function does not raise exceptions.

- PyObject *PyUnicode_RichCompare(PyObject *left, PyObject *right, int op)
  Return value: New reference. Part of the Stable ABI.
  Rich compare two Unicode strings and return one of the following:
  NULL in case an exception was raised;
  Py_NotImplemented in case the type combination is unknown.
  Possible values for op are Py_GT, Py_GE, Py_EQ, Py_NE, Py_LT, and Py_LE.

- PyObject *PyUnicode_Format(PyObject *format, PyObject *args)
  Return value: New reference. Part of the Stable ABI.
  Return a new string object from format and args; this is analogous to format % args.

- int PyUnicode_Contains(PyObject *unicode, PyObject *substr)
  Part of the Stable ABI.
  Check whether substr is contained in unicode and return true or false accordingly.
  substr has to coerce to a one element Unicode string. -1 is returned if there was an error.

- void PyUnicode_InternInPlace(PyObject **p_unicode)
  Part of the Stable ABI.
  Intern the argument *p_unicode in place. The argument must be the address of a pointer variable pointing to a Python Unicode string object.
If there is an existing interned string that is the same as *p_unicode, it sets *p_unicode to it (releasing the reference to the old string object and creating a new strong reference to the interned string object); otherwise, it leaves *p_unicode alone and interns it.
  (Clarification: even though there is a lot of talk about references, think of this function as reference-neutral. You must own the object you pass in; after the call you no longer own the passed-in reference, but you newly own the result.)
  This function never raises an exception. On error, it leaves its argument unchanged without interning it.
  Instances of subclasses of str may not be interned, that is, PyUnicode_CheckExact(*p_unicode) must be true. If it is not, then, as with any other error, the argument is left unchanged.
  Note that interned strings are not “immortal”. You must keep a reference to the result to benefit from interning.

- PyObject *PyUnicode_InternFromString(const char *str)
  Return value: New reference. Part of the Stable ABI.
  A combination of PyUnicode_FromString() and PyUnicode_InternInPlace(), meant for statically allocated strings.
  Return a new (“owned”) reference to either a new Unicode string object that has been interned, or an earlier interned string object with the same value.
  Python may keep a reference to the result, or make it immortal, preventing it from being garbage-collected promptly. For interning an unbounded number of different strings, such as ones coming from user input, prefer calling PyUnicode_FromString() and PyUnicode_InternInPlace() directly.

- unsigned int PyUnicode_CHECK_INTERNED(PyObject *str)
  Return a non-zero value if str is interned, zero if not. The str argument must be a string; this is not checked. This function always succeeds.
  CPython implementation detail: A non-zero return value may carry additional information about how the string is interned.
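The same interning machinery is exposed to Python code as sys.intern(), which makes the "same value, same object" guarantee easy to observe:

```python
import sys

a = "".join(["in", "tern"])     # a fresh, non-interned string object
b = sys.intern(a)               # returns the canonical interned object
c = sys.intern("intern")        # same value -> the very same object
assert b is c
assert b == "intern"
```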
The meaning of such non-zero values, as well as each specific string's intern-related details, may change between CPython versions.

PyUnicodeWriter

The PyUnicodeWriter API can be used to create a Python str object.

Added in version 3.14.

- type PyUnicodeWriter
  A Unicode writer instance.
  The instance must be destroyed by PyUnicodeWriter_Finish() on success, or PyUnicodeWriter_Discard() on error.

- PyUnicodeWriter *PyUnicodeWriter_Create(Py_ssize_t length)
  Create a Unicode writer instance.
  length must be greater than or equal to 0.
  If length is greater than 0, preallocate an internal buffer of length characters.
  Set an exception and return NULL on error.

- PyObject *PyUnicodeWriter_Finish(PyUnicodeWriter *writer)
  Return the final Python str object and destroy the writer instance.
  Set an exception and return NULL on error.
  The writer instance is invalid after this call.

- void PyUnicodeWriter_Discard(PyUnicodeWriter *writer)
  Discard the internal Unicode buffer and destroy the writer instance.
  If writer is NULL, no operation is performed.
  The writer instance is invalid after this call.

- int PyUnicodeWriter_WriteChar(PyUnicodeWriter *writer, Py_UCS4 ch)
  Write the single Unicode character ch into writer.
  On success, return 0. On error, set an exception, leave the writer unchanged, and return -1.

- int PyUnicodeWriter_WriteUTF8(PyUnicodeWriter *writer, const char *str, Py_ssize_t size)
  Decode the string str from UTF-8 in strict mode and write the output into writer.
  size is the string length in bytes. If size is equal to -1, call strlen(str) to get the string length.
  On success, return 0.
On error, set an exception, leave the writer unchanged, and return -1.
  See also PyUnicodeWriter_DecodeUTF8Stateful().

- int PyUnicodeWriter_WriteASCII(PyUnicodeWriter *writer, const char *str, Py_ssize_t size)
  Write the ASCII string str into writer.
  size is the string length in bytes. If size is equal to -1, call strlen(str) to get the string length.
  str must only contain ASCII characters. The behavior is undefined if str contains non-ASCII characters.
  On success, return 0. On error, set an exception, leave the writer unchanged, and return -1.
  Added in version 3.14.

- int PyUnicodeWriter_WriteWideChar(PyUnicodeWriter *writer, const wchar_t *str, Py_ssize_t size)
  Write the wide string str into writer.
  size is a number of wide characters. If size is equal to -1, call wcslen(str) to get the string length.
  On success, return 0. On error, set an exception, leave the writer unchanged, and return -1.

- int PyUnicodeWriter_WriteUCS4(PyUnicodeWriter *writer, Py_UCS4 *str, Py_ssize_t size)
  Write the UCS4 string str into writer.
  size is a number of UCS4 characters.
  On success, return 0. On error, set an exception, leave the writer unchanged, and return -1.

- int PyUnicodeWriter_WriteStr(PyUnicodeWriter *writer, PyObject *obj)
  Call PyObject_Str() on obj and write the output into writer.
  On success, return 0. On error, set an exception, leave the writer unchanged, and return -1.

- int PyUnicodeWriter_WriteRepr(PyUnicodeWriter *writer, PyObject *obj)
  Call PyObject_Repr() on obj and write the output into writer.
  On success, return 0. On error, set an exception, leave the writer unchanged, and return -1.

- int PyUnicodeWriter_WriteSubstring(PyUnicodeWriter *writer, PyObject *str, Py_ssize_t start, Py_ssize_t end)
  Write the substring str[start:end] into writer.
  str must be a Python str object. start must be greater than or equal to 0, and less than or equal to end.
end must be less than or equal to the length of str.
  On success, return 0. On error, set an exception, leave the writer unchanged, and return -1.

- int PyUnicodeWriter_Format(PyUnicodeWriter *writer, const char *format, ...)
  Similar to PyUnicode_FromFormat(), but write the output directly into writer.
  On success, return 0. On error, set an exception, leave the writer unchanged, and return -1.

- int PyUnicodeWriter_DecodeUTF8Stateful(PyUnicodeWriter *writer, const char *string, Py_ssize_t length, const char *errors, Py_ssize_t *consumed)
  Decode the string string from UTF-8 with the errors error handler and write the output into writer.
  length is the string length in bytes. If length is equal to -1, call strlen(string) to get the string length.
  errors is an error handler name, such as "replace". If errors is NULL, use the strict error handler.
  If consumed is not NULL, set *consumed to the number of decoded bytes on success. If consumed is NULL, treat trailing incomplete UTF-8 byte sequences as an error.
  On success, return 0. On error, set an exception, leave the writer unchanged, and return -1.
  See also PyUnicodeWriter_WriteUTF8().

Deprecated API

The following API is deprecated.

- type Py_UNICODE
  This is a typedef of wchar_t, which is a 16-bit type or 32-bit type depending on the platform. Please use wchar_t directly instead.
  Changed in version 3.3: In previous versions, this was a 16-bit type or a 32-bit type depending on whether you selected a “narrow” or “wide” Unicode version of Python at build time.
  Deprecated since version 3.13, will be removed in version 3.15.

- int PyUnicode_READY(PyObject *unicode)
  Do nothing and return 0. This API is kept only for backward compatibility, but there are no plans to remove it.
  Added in version 3.3.
  Deprecated since version 3.10: This API does nothing since Python 3.12.
Previously, this needed to be called for each string created using the old API (PyUnicode_FromUnicode() or similar).

- unsigned int PyUnicode_IS_READY(PyObject *unicode)
  Do nothing and return 1. This API is kept only for backward compatibility, but there are no plans to remove it.
  Added in version 3.3.
  Deprecated since version 3.14: This API does nothing since Python 3.12. Previously, this could be called to check if PyUnicode_READY() is necessary.
Concrete Objects Layer

The functions in this chapter are specific to certain Python object types. Passing them an object of the wrong type is not a good idea; if you receive an object from a Python program and you are not sure that it has the right type, you must perform a type check first; for example, to check that an object is a dictionary, use PyDict_Check(). The chapter is structured like the “family tree” of Python object types.

Warning: While the functions described in this chapter carefully check the type of the objects which are passed in, many of them do not check for NULL being passed instead of a valid object. Allowing NULL to be passed in can cause memory access violations and immediate termination of the interpreter.

Fundamental Objects

This section describes Python type objects and the singleton object None.

Numeric Objects

Sequence Objects

Generic operations on sequence objects were discussed in the previous chapter; this section deals with the specific kinds of sequence objects that are intrinsic to the Python language.

Container Objects

Function Objects

Other Objects

- File Objects
- Module Objects
- Module definitions
- Creating extension modules dynamically
- Support functions
- Iterator Objects
- Descriptor Objects
- Slice Objects
- MemoryView objects
- Pickle buffer objects
- Weak Reference Objects
- Capsules
- Frame Objects
- Generator Objects
- Coroutine Objects
- Context Variables Objects
- Objects for Type Hinting
Set Objects

This section details the public API for set and frozenset objects. Any functionality not listed below is best accessed using either the abstract object protocol (including PyObject_CallMethod(), PyObject_RichCompareBool(), PyObject_Hash(), PyObject_Repr(), PyObject_IsTrue(), PyObject_Print(), and PyObject_GetIter()) or the abstract number protocol (including PyNumber_And(), PyNumber_Subtract(), PyNumber_Or(), PyNumber_Xor(), PyNumber_InPlaceAnd(), PyNumber_InPlaceSubtract(), PyNumber_InPlaceOr(), and PyNumber_InPlaceXor()).

- type PySetObject
  This subtype of PyObject is used to hold the internal data for both set and frozenset objects. It is like a PyDictObject in that it is a fixed size for small sets (much like tuple storage) and will point to a separate, variable sized block of memory for medium and large sized sets (much like list storage). None of the fields of this structure should be considered public and all are subject to change. All access should be done through the documented API rather than by manipulating the values in the structure.

- PyTypeObject PySet_Type
  Part of the Stable ABI.
  This is an instance of PyTypeObject representing the Python set type.

- PyTypeObject PyFrozenSet_Type
  Part of the Stable ABI.
  This is an instance of PyTypeObject representing the Python frozenset type.

The following type check macros work on pointers to any Python object. Likewise, the constructor functions work with any iterable Python object.

- int PySet_Check(PyObject *p)
  Return true if p is a set object or an instance of a subtype. This function always succeeds.

- int PyFrozenSet_Check(PyObject *p)
  Return true if p is a frozenset object or an instance of a subtype.
This function always succeeds.\n-\nint PyAnySet_Check(PyObject *p)\u00b6\nReturn true if p is a\nset\nobject, afrozenset\nobject, or an instance of a subtype. This function always succeeds.\n-\nint PySet_CheckExact(PyObject *p)\u00b6\nReturn true if p is a\nset\nobject but not an instance of a subtype. This function always succeeds.Added in version 3.10.\n-\nint PyAnySet_CheckExact(PyObject *p)\u00b6\nReturn true if p is a\nset\nobject or afrozenset\nobject but not an instance of a subtype. This function always succeeds.\n-\nint PyFrozenSet_CheckExact(PyObject *p)\u00b6\nReturn true if p is a\nfrozenset\nobject but not an instance of a subtype. This function always succeeds.\n-\nPyObject *PySet_New(PyObject *iterable)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nset\ncontaining objects returned by the iterable. The iterable may beNULL\nto create a new empty set. Return the new set on success orNULL\non failure. RaiseTypeError\nif iterable is not actually iterable. The constructor is also useful for copying a set (c=set(s)\n).\n-\nPyObject *PyFrozenSet_New(PyObject *iterable)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new\nfrozenset\ncontaining objects returned by the iterable. The iterable may beNULL\nto create a new empty frozenset. Return the new set on success orNULL\non failure. RaiseTypeError\nif iterable is not actually iterable.\nThe following functions and macros are available for instances of set\nor frozenset\nor instances of their subtypes.\n-\nPy_ssize_t PySet_Size(PyObject *anyset)\u00b6\n- Part of the Stable ABI.\nReturn the length of a\nset\norfrozenset\nobject. Equivalent tolen(anyset)\n. 
Raises aSystemError\nif anyset is not aset\n,frozenset\n, or an instance of a subtype.\n-\nPy_ssize_t PySet_GET_SIZE(PyObject *anyset)\u00b6\nMacro form of\nPySet_Size()\nwithout error checking.\n-\nint PySet_Contains(PyObject *anyset, PyObject *key)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif found,0\nif not found, and-1\nif an error is encountered. Unlike the Python__contains__()\nmethod, this function does not automatically convert unhashable sets into temporary frozensets. Raise aTypeError\nif the key is unhashable. RaiseSystemError\nif anyset is not aset\n,frozenset\n, or an instance of a subtype.\n-\nint PySet_Add(PyObject *set, PyObject *key)\u00b6\n- Part of the Stable ABI.\nAdd key to a\nset\ninstance. Also works withfrozenset\ninstances (likePyTuple_SetItem()\nit can be used to fill in the values of brand new frozensets before they are exposed to other code). Return0\non success or-1\non failure. Raise aTypeError\nif the key is unhashable. Raise aMemoryError\nif there is no room to grow. Raise aSystemError\nif set is not an instance ofset\nor its subtype.\nThe following functions are available for instances of set\nor its\nsubtypes but not for instances of frozenset\nor its subtypes.\n-\nint PySet_Discard(PyObject *set, PyObject *key)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nif found and removed,0\nif not found (no action taken), and-1\nif an error is encountered. Does not raiseKeyError\nfor missing keys. Raise aTypeError\nif the key is unhashable. Unlike the Pythondiscard()\nmethod, this function does not automatically convert unhashable sets into temporary frozensets. RaiseSystemError\nif set is not an instance ofset\nor its subtype.\n-\nPyObject *PySet_Pop(PyObject *set)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn a new reference to an arbitrary object in the set, and removes the object from the set. Return\nNULL\non failure. RaiseKeyError\nif the set is empty. 
Raise aSystemError\nif set is not an instance ofset\nor its subtype.\n-\nint PySet_Clear(PyObject *set)\u00b6\n- Part of the Stable ABI.\nEmpty an existing set of all elements. Return\n0\non success. Return-1\nand raiseSystemError\nif set is not an instance ofset\nor its subtype.\nDeprecated API\u00b6\n-\nPySet_MINSIZE\u00b6\nA soft deprecated constant representing the size of an internal preallocated table inside\nPySetObject\ninstances.This is documented solely for completeness, as there are no guarantees that a given version of CPython uses preallocated tables with a fixed size. In code that does not deal with unstable set internals,\nPySet_MINSIZE\ncan be replaced with a small constant like8\n.If looking for the size of a set, use\nPySet_Size()\ninstead.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1479}
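The C signatures above cannot be exercised directly here, so as a Python-level sketch (not the C API itself): the documented behavior of `PySet_New()`, `PySet_Size()`, `PySet_Contains()`, `PySet_Add()`, `PySet_Discard()`, `PySet_Pop()`, `PySet_Clear()` and `PyFrozenSet_New()` mirrors the built-in `set` and `frozenset` types:

```python
s = set("abc")        # PySet_New(iterable): build a set from any iterable
assert len(s) == 3    # PySet_Size(anyset) is equivalent to len(anyset)
assert "a" in s       # PySet_Contains() returning 1

s.add("d")            # PySet_Add(): returns 0 on success
s.discard("zzz")      # PySet_Discard(): 0 if not found, no KeyError raised

item = s.pop()        # PySet_Pop(): removes and returns an arbitrary element
assert item not in s

s.clear()             # PySet_Clear(): empty the set in place
try:
    s.pop()           # like PySet_Pop() on an empty set returning NULL,
except KeyError:      # this raises KeyError
    print("pop from an empty set raises KeyError")

f = frozenset("abc")  # PyFrozenSet_New(iterable)
assert "a" in f and len(f) == 3
```

Note the one asymmetry the C docs call out: `PySet_Add()` also works on brand-new frozensets before they are exposed to other code, which has no Python-level equivalent.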
{"url": "https://docs.python.org/3/whatsnew/index.html", "title": "What\u2019s New in Python", "content": "What\u2019s New in Python\u00b6\nThe \u201cWhat\u2019s New in Python\u201d series of essays takes tours through the most important changes between major Python versions. They are a \u201cmust read\u201d for anyone wishing to stay up-to-date after a new release.\n- What\u2019s new in Python 3.14\n- What\u2019s New In Python 3.13\n- What\u2019s New In Python 3.12\n- What\u2019s New In Python 3.11\n- Summary \u2013 Release highlights\n- New Features\n- New Features Related to Type Hints\n- Other Language Changes\n- Other CPython Implementation Changes\n- New Modules\n- Improved Modules\n- Optimizations\n- Faster CPython\n- CPython bytecode changes\n- Deprecated\n- Pending Removal in Python 3.12\n- Removed\n- Porting to Python 3.11\n- Build Changes\n- C API Changes\n- Notable changes in 3.11.4\n- Notable changes in 3.11.5\n- What\u2019s New In Python 3.10\n- Summary \u2013 Release highlights\n- New Features\n- New Features Related to Type Hints\n- Other Language Changes\n- New Modules\n- Improved Modules\n- Optimizations\n- Deprecated\n- Removed\n- Porting to Python 3.10\n- CPython bytecode changes\n- Build Changes\n- C API Changes\n- Notable security feature in 3.10.7\n- Notable security feature in 3.10.8\n- Notable changes in 3.10.12\n- What\u2019s New In Python 3.9\n- Summary \u2013 Release highlights\n- You should check for DeprecationWarning in your code\n- New Features\n- Other Language Changes\n- New Modules\n- Improved Modules\n- Optimizations\n- Deprecated\n- Removed\n- Porting to Python 3.9\n- Build Changes\n- C API Changes\n- Notable changes in Python 3.9.1\n- Notable changes in Python 3.9.2\n- Notable changes in Python 3.9.3\n- Notable changes in Python 3.9.5\n- Notable security feature in 3.9.14\n- Notable changes in 3.9.17\n- What\u2019s New In Python 3.8\n- Summary \u2013 Release highlights\n- New Features\n- Other Language Changes\n- New Modules\n- 
Improved Modules\n- Optimizations\n- Build and C API Changes\n- Deprecated\n- API and Feature Removals\n- Porting to Python 3.8\n- Notable changes in Python 3.8.1\n- Notable changes in Python 3.8.2\n- Notable changes in Python 3.8.3\n- Notable changes in Python 3.8.8\n- Notable changes in Python 3.8.9\n- Notable changes in Python 3.8.10\n- Notable changes in Python 3.8.12\n- Notable security feature in 3.8.14\n- Notable changes in 3.8.17\n- What\u2019s New In Python 3.7\n- Summary \u2013 Release Highlights\n- New Features\n- Other Language Changes\n- New Modules\n- Improved Modules\n- C API Changes\n- Build Changes\n- Optimizations\n- Other CPython Implementation Changes\n- Deprecated Python Behavior\n- Deprecated Python modules, functions and methods\n- Deprecated functions and types of the C API\n- Platform Support Removals\n- API and Feature Removals\n- Module Removals\n- Windows-only Changes\n- Porting to Python 3.7\n- Notable changes in Python 3.7.1\n- Notable changes in Python 3.7.2\n- Notable changes in Python 3.7.6\n- Notable changes in Python 3.7.10\n- Notable changes in Python 3.7.11\n- Notable security feature in 3.7.14\n- What\u2019s New In Python 3.6\n- Summary \u2013 Release highlights\n- New Features\n- Other Language Changes\n- New Modules\n- Improved Modules\n- Optimizations\n- Build and C API Changes\n- Other Improvements\n- Deprecated\n- Removed\n- Porting to Python 3.6\n- Notable changes in Python 3.6.2\n- Notable changes in Python 3.6.4\n- Notable changes in Python 3.6.5\n- Notable changes in Python 3.6.7\n- Notable changes in Python 3.6.10\n- Notable changes in Python 3.6.13\n- Notable changes in Python 3.6.14\n- What\u2019s New In Python 3.5\n- What\u2019s New In Python 3.4\n- What\u2019s New In Python 3.3\n- Summary \u2013 Release highlights\n- PEP 405: Virtual Environments\n- PEP 420: Implicit Namespace Packages\n- PEP 3118: New memoryview implementation and buffer protocol documentation\n- PEP 393: 
Flexible String Representation\n- PEP 397: Python Launcher for Windows\n- PEP 3151: Reworking the OS and IO exception hierarchy\n- PEP 380: Syntax for Delegating to a Subgenerator\n- PEP 409: Suppressing exception context\n- PEP 414: Explicit Unicode literals\n- PEP 3155: Qualified name for classes and functions\n- PEP 412: Key-Sharing Dictionary\n- PEP 362: Function Signature Object\n- PEP 421: Adding sys.implementation\n- Using importlib as the Implementation of Import\n- Other Language Changes\n- A Finer-Grained Import Lock\n- Builtin functions and types\n- New Modules\n- Improved Modules\n- Optimizations\n- Build and C API Changes\n- Deprecated\n- Porting to Python 3.3\n- What\u2019s New In Python 3.2\n- PEP 384: Defining a Stable ABI\n- PEP 389: Argparse Command Line Parsing Module\n- PEP 391: Dictionary Based Configuration for Logging\n- PEP 3148: The\nconcurrent.futures\nmodule - PEP 3147: PYC Repository Directories\n- PEP 3149: ABI Version Tagged .so Files\n- PEP 3333: Python Web Server Gateway Interface v1.0.1\n- Other Language Changes\n- New, Improved, and Deprecated Modules\n- Multi-threading\n- Optimizations\n- Unicode\n- Codecs\n- Documentation\n- IDLE\n- Code Repository\n- Build and C API Changes\n- Porting to Python 3.2\n- What\u2019s New In Python 3.1\n- What\u2019s New In Python 3.0\n- What\u2019s New in Python 2.7\n- The Future for Python 2.x\n- Changes to the Handling of Deprecation Warnings\n- Python 3.1 Features\n- PEP 372: Adding an Ordered Dictionary to collections\n- PEP 378: Format Specifier for Thousands Separator\n- PEP 389: The argparse Module for Parsing Command Lines\n- PEP 391: Dictionary-Based Configuration For Logging\n- PEP 3106: Dictionary Views\n- PEP 3137: The memoryview Object\n- Other Language Changes\n- New and Improved Modules\n- Build and C API Changes\n- Other Changes and Fixes\n- Porting to Python 2.7\n- New Features Added to Python 2.7 Maintenance Releases\n- Acknowledgements\n- What\u2019s New in Python 2.6\n- Python 
3.0\n- Changes to the Development Process\n- PEP 343: The \u2018with\u2019 statement\n- PEP 366: Explicit Relative Imports From a Main Module\n- PEP 370: Per-user\nsite-packages\nDirectory - PEP 371: The\nmultiprocessing\nPackage - PEP 3101: Advanced String Formatting\n- PEP 3105:\nprint\nAs a Function - PEP 3110: Exception-Handling Changes\n- PEP 3112: Byte Literals\n- PEP 3116: New I/O Library\n- PEP 3118: Revised Buffer Protocol\n- PEP 3119: Abstract Base Classes\n- PEP 3127: Integer Literal Support and Syntax\n- PEP 3129: Class Decorators\n- PEP 3141: A Type Hierarchy for Numbers\n- Other Language Changes\n- New and Improved Modules\n- Deprecations and Removals\n- Build and C API Changes\n- Porting to Python 2.6\n- Acknowledgements\n- What\u2019s New in Python 2.5\n- PEP 308: Conditional Expressions\n- PEP 309: Partial Function Application\n- PEP 314: Metadata for Python Software Packages v1.1\n- PEP 328: Absolute and Relative Imports\n- PEP 338: Executing Modules as Scripts\n- PEP 341: Unified try/except/finally\n- PEP 342: New Generator Features\n- PEP 343: The \u2018with\u2019 statement\n- PEP 352: Exceptions as New-Style Classes\n- PEP 353: Using ssize_t as the index type\n- PEP 357: The \u2018__index__\u2019 method\n- Other Language Changes\n- New, Improved, and Removed Modules\n- Build and C API Changes\n- Porting to Python 2.5\n- Acknowledgements\n- What\u2019s New in Python 2.4\n- PEP 218: Built-In Set Objects\n- PEP 237: Unifying Long Integers and Integers\n- PEP 289: Generator Expressions\n- PEP 292: Simpler String Substitutions\n- PEP 318: Decorators for Functions and Methods\n- PEP 322: Reverse Iteration\n- PEP 324: New subprocess Module\n- PEP 327: Decimal Data Type\n- PEP 328: Multi-line Imports\n- PEP 331: Locale-Independent Float/String Conversions\n- Other Language Changes\n- New, Improved, and Deprecated Modules\n- Build and C API Changes\n- Porting to Python 2.4\n- Acknowledgements\n- What\u2019s New in Python 2.3\n- PEP 218: A Standard Set 
Datatype\n- PEP 255: Simple Generators\n- PEP 263: Source Code Encodings\n- PEP 273: Importing Modules from ZIP Archives\n- PEP 277: Unicode file name support for Windows NT\n- PEP 278: Universal Newline Support\n- PEP 279: enumerate()\n- PEP 282: The logging Package\n- PEP 285: A Boolean Type\n- PEP 293: Codec Error Handling Callbacks\n- PEP 301: Package Index and Metadata for Distutils\n- PEP 302: New Import Hooks\n- PEP 305: Comma-separated Files\n- PEP 307: Pickle Enhancements\n- Extended Slices\n- Other Language Changes\n- New, Improved, and Deprecated Modules\n- Pymalloc: A Specialized Object Allocator\n- Build and C API Changes\n- Other Changes and Fixes\n- Porting to Python 2.3\n- Acknowledgements\n- What\u2019s New in Python 2.2\n- Introduction\n- PEPs 252 and 253: Type and Class Changes\n- PEP 234: Iterators\n- PEP 255: Simple Generators\n- PEP 237: Unifying Long Integers and Integers\n- PEP 238: Changing the Division Operator\n- Unicode Changes\n- PEP 227: Nested Scopes\n- New and Improved Modules\n- Interpreter Changes and Fixes\n- Other Changes and Fixes\n- Acknowledgements\n- What\u2019s New in Python 2.1\n- Introduction\n- PEP 227: Nested Scopes\n- PEP 236: __future__ Directives\n- PEP 207: Rich Comparisons\n- PEP 230: Warning Framework\n- PEP 229: New Build System\n- PEP 205: Weak References\n- PEP 232: Function Attributes\n- PEP 235: Importing Modules on Case-Insensitive Platforms\n- PEP 217: Interactive Display Hook\n- PEP 208: New Coercion Model\n- PEP 241: Metadata in Python Packages\n- New and Improved Modules\n- Other Changes and Fixes\n- Acknowledgements\n- What\u2019s New in Python 2.0\n- Introduction\n- What About Python 1.6?\n- New Development Process\n- Unicode\n- List Comprehensions\n- Augmented Assignment\n- String Methods\n- Garbage Collection of Cycles\n- Other Core Changes\n- Porting to 2.0\n- Extending/Embedding Changes\n- Distutils: Making Modules Easy to Install\n- XML Modules\n- Module changes\n- New modules\n- IDLE 
Improvements\n- Deleted and Deprecated Modules\n- Acknowledgements\nThe \u201cChangelog\u201d is an HTML version of the file built from the contents of the Misc/NEWS.d directory tree, which contains all nontrivial changes to Python for the current version.\n- Changelog\n- Python next\n- Python 3.14.3 final\n- Python 3.14.2 final\n- Python 3.14.1 final\n- Python 3.14.0 final\n- Python 3.14.0 release candidate 3\n- Python 3.14.0 release candidate 2\n- Python 3.14.0 release candidate 1\n- Python 3.14.0 beta 4\n- Python 3.14.0 beta 3\n- Python 3.14.0 beta 2\n- Python 3.14.0 beta 1\n- Python 3.14.0 alpha 7\n- Python 3.14.0 alpha 6\n- Python 3.14.0 alpha 5\n- Python 3.14.0 alpha 4\n- Python 3.14.0 alpha 3\n- Python 3.14.0 alpha 2\n- Python 3.14.0 alpha 1\n- Python 3.13.0 beta 1\n- Python 3.13.0 alpha 6\n- Python 3.13.0 alpha 5\n- Python 3.13.0 alpha 4\n- Python 3.13.0 alpha 3\n- Python 3.13.0 alpha 2\n- Python 3.13.0 alpha 1\n- Python 3.12.0 beta 1\n- Python 3.12.0 alpha 7\n- Python 3.12.0 alpha 6\n- Python 3.12.0 alpha 5\n- Python 3.12.0 alpha 4\n- Python 3.12.0 alpha 3\n- Python 3.12.0 alpha 2\n- Python 3.12.0 alpha 1\n- Python 3.11.0 beta 1\n- Python 3.11.0 alpha 7\n- Python 3.11.0 alpha 6\n- Python 3.11.0 alpha 5\n- Python 3.11.0 alpha 4\n- Python 3.11.0 alpha 3\n- Python 3.11.0 alpha 2\n- Python 3.11.0 alpha 1\n- Python 3.10.0 beta 1\n- Python 3.10.0 alpha 7\n- Python 3.10.0 alpha 6\n- Python 3.10.0 alpha 5\n- Python 3.10.0 alpha 4\n- Python 3.10.0 alpha 3\n- Python 3.10.0 alpha 2\n- Python 3.10.0 alpha 1\n- Python 3.9.0 beta 1\n- Python 3.9.0 alpha 6\n- Python 3.9.0 alpha 5\n- Python 3.9.0 alpha 4\n- Python 3.9.0 alpha 3\n- Python 3.9.0 alpha 2\n- Python 3.9.0 alpha 1\n- Python 3.8.0 beta 1\n- Python 3.8.0 alpha 4\n- Python 3.8.0 alpha 3\n- Python 3.8.0 alpha 2\n- Python 3.8.0 alpha 1\n- Python 3.7.0 final\n- Python 3.7.0 release candidate 1\n- Python 3.7.0 beta 5\n- Python 3.7.0 beta 4\n- Python 3.7.0 beta 3\n- Python 3.7.0 beta 2\n- Python 3.7.0 beta 1\n- Python 
3.7.0 alpha 4\n- Python 3.7.0 alpha 3\n- Python 3.7.0 alpha 2\n- Python 3.7.0 alpha 1\n- Python 3.6.6 final\n- Python 3.6.6 release candidate 1\n- Python 3.6.5 final\n- Python 3.6.5 release candidate 1\n- Python 3.6.4 final\n- Python 3.6.4 release candidate 1\n- Python 3.6.3 final\n- Python 3.6.3 release candidate 1\n- Python 3.6.2 final\n- Python 3.6.2 release candidate 2\n- Python 3.6.2 release candidate 1\n- Python 3.6.1 final\n- Python 3.6.1 release candidate 1\n- Python 3.6.0 final\n- Python 3.6.0 release candidate 2\n- Python 3.6.0 release candidate 1\n- Python 3.6.0 beta 4\n- Python 3.6.0 beta 3\n- Python 3.6.0 beta 2\n- Python 3.6.0 beta 1\n- Python 3.6.0 alpha 4\n- Python 3.6.0 alpha 3\n- Python 3.6.0 alpha 2\n- Python 3.6.0 alpha 1\n- Python 3.5.5 final\n- Python 3.5.5 release candidate 1\n- Python 3.5.4 final\n- Python 3.5.4 release candidate 1\n- Python 3.5.3 final\n- Python 3.5.3 release candidate 1\n- Python 3.5.2 final\n- Python 3.5.2 release candidate 1\n- Python 3.5.1 final\n- Python 3.5.1 release candidate 1\n- Python 3.5.0 final\n- Python 3.5.0 release candidate 4\n- Python 3.5.0 release candidate 3\n- Python 3.5.0 release candidate 2\n- Python 3.5.0 release candidate 1\n- Python 3.5.0 beta 4\n- Python 3.5.0 beta 3\n- Python 3.5.0 beta 2\n- Python 3.5.0 beta 1\n- Python 3.5.0 alpha 4\n- Python 3.5.0 alpha 3\n- Python 3.5.0 alpha 2\n- Python 3.5.0 alpha 1", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3149}
{"url": "https://docs.python.org/3/reference/introduction.html", "title": "Introduction", "content": "1. Introduction\u00b6\nThis reference manual describes the Python programming language. It is not intended as a tutorial.\nWhile I am trying to be as precise as possible, I chose to use English rather than formal specifications for everything except syntax and lexical analysis. This should make the document more understandable to the average reader, but will leave room for ambiguities. Consequently, if you were coming from Mars and tried to re-implement Python from this document alone, you might have to guess things and in fact you would probably end up implementing quite a different language. On the other hand, if you are using Python and wonder what the precise rules about a particular area of the language are, you should definitely be able to find them here. If you would like to see a more formal definition of the language, maybe you could volunteer your time \u2014 or invent a cloning machine :-).\nIt is dangerous to add too many implementation details to a language reference document \u2014 the implementation may change, and other implementations of the same language may work differently. On the other hand, CPython is the one Python implementation in widespread use (although alternate implementations continue to gain support), and its particular quirks are sometimes worth being mentioned, especially where the implementation imposes additional limitations. Therefore, you\u2019ll find short \u201cimplementation notes\u201d sprinkled throughout the text.\nEvery Python implementation comes with a number of built-in and standard modules. These are documented in The Python Standard Library. A few built-in modules are mentioned when they interact in a significant way with the language definition.\n1.1. 
Alternate Implementations\u00b6\nThough there is one Python implementation which is by far the most popular, there are some alternate implementations which are of particular interest to different audiences.\nKnown implementations include:\n- CPython\nThis is the original and most-maintained implementation of Python, written in C. New language features generally appear here first.\n- Jython\nPython implemented in Java. This implementation can be used as a scripting language for Java applications, or can be used to create applications using the Java class libraries. It is also often used to create tests for Java libraries. More information can be found at the Jython website.\n- Python for .NET\nThis implementation actually uses the CPython implementation, but is a managed .NET application and makes .NET libraries available. It was created by Brian Lloyd. For more information, see the Python for .NET home page.\n- IronPython\nAn alternate Python for .NET. Unlike Python.NET, this is a complete Python implementation that generates IL, and compiles Python code directly to .NET assemblies. It was created by Jim Hugunin, the original creator of Jython. For more information, see the IronPython website.\n- PyPy\nAn implementation of Python written completely in Python. It supports several advanced features not found in other implementations like stackless support and a Just in Time compiler. One of the goals of the project is to encourage experimentation with the language itself by making it easier to modify the interpreter (since it is written in Python). Additional information is available on the PyPy project\u2019s home page.\nEach of these implementations varies in some way from the language as documented in this manual, or introduces specific information beyond what\u2019s covered in the standard Python documentation. 
Please refer to the implementation-specific documentation to determine what else you need to know about the specific implementation you\u2019re using.\n1.2. Notation\u00b6\nThe descriptions of lexical analysis and syntax use a grammar notation that is a mixture of EBNF and PEG. For example:\nname: letter (letter | digit | \"_\")*\nletter: \"a\"...\"z\" | \"A\"...\"Z\"\ndigit: \"0\"...\"9\"\nIn this example, the first line says that a name\nis a letter\nfollowed by a sequence of zero or more letter\ns, digit\ns, and underscores.\nA letter\nin turn is any of the single characters 'a'\nthrough\n'z'\nand 'A' through 'Z'; a digit\nis a single character from 0\nto 9\n.\nEach rule begins with a name (which identifies the rule that\u2019s being defined)\nfollowed by a colon, :\n.\nThe definition to the right of the colon uses the following syntax elements:\nname\n: A name refers to another rule. Where possible, it is a link to the rule\u2019s definition.\nTOKEN\n: An uppercase name refers to a token. For the purposes of grammar definitions, tokens are the same as rules.\n\"text\"\n, 'text'\n: Text in single or double quotes must match literally (without the quotes). The type of quote is chosen according to the meaning of text:\n'if'\n: A name in single quotes denotes a keyword.\n\"case\"\n: A name in double quotes denotes a soft-keyword.\n'@'\n: A non-letter symbol in single quotes denotes an OP\ntoken, that is, a delimiter or operator.\ne1 e2\n: Items separated only by whitespace denote a sequence. Here, e1\nmust be followed by e2\n.\ne1 | e2\n: A vertical bar is used to separate alternatives. It denotes PEG\u2019s \u201cordered choice\u201d: if e1\nmatches, e2\nis not considered. In traditional PEG grammars, this is written as a slash, /\n, rather than a vertical bar. 
See PEP 617 for more background and details.e*\n: A star means zero or more repetitions of the preceding item.e+\n: Likewise, a plus means one or more repetitions.[e]\n: A phrase enclosed in square brackets means zero or one occurrences. In other words, the enclosed phrase is optional.e?\n: A question mark has exactly the same meaning as square brackets: the preceding item is optional.(e)\n: Parentheses are used for grouping.\nThe following notation is only used in lexical definitions.\n\"a\"...\"z\"\n: Two literal characters separated by three dots mean a choice of any single character in the given (inclusive) range of ASCII characters.<...>\n: A phrase between angular brackets gives an informal description of the matched symbol (for example,\n), or an abbreviation that is defined in nearby text (for example,\n).\nSome definitions also use lookaheads, which indicate that an element must (or must not) match at a given position, but without consuming any input:\n&e\n: a positive lookahead (that is,e\nis required to match)!e\n: a negative lookahead (that is,e\nis required not to match)\nThe unary operators (*\n, +\n, ?\n) bind as tightly as possible;\nthe vertical bar (|\n) binds most loosely.\nWhite space is only meaningful to separate tokens.\nRules are normally contained on a single line, but rules that are too long may be wrapped:\nliteral: stringliteral | bytesliteral | integer | floatnumber | imagnumber\nAlternatively, rules may be formatted with the first line ending at the colon, and each alternative beginning with a vertical bar on a new line. For example:\nliteral: | stringliteral | bytesliteral | integer | floatnumber | imagnumber\nThis does not mean that there is an empty first alternative.\n1.2.1. 
Lexical and Syntactic definitions\u00b6\nThere is some difference between lexical and syntactic analysis: the lexical analyzer operates on the individual characters of the input source, while the parser (syntactic analyzer) operates on the stream of tokens generated by the lexical analysis. However, in some cases the exact boundary between the two phases is a CPython implementation detail.\nThe practical difference between the two is that in lexical definitions,\nall whitespace is significant.\nThe lexical analyzer discards all whitespace that is not\nconverted to tokens like token.INDENT\nor NEWLINE\n.\nSyntactic definitions then use these tokens, rather than source characters.\nThis documentation uses the same BNF grammar for both styles of definitions. All uses of BNF in the next chapter (Lexical analysis) are lexical definitions; uses in subsequent chapters are syntactic definitions.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1954}
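As an illustration of the notation (not part of the reference itself), the example name rule above can be approximated with a regular expression. Note that this matches only the ASCII-only example grammar, not real Python identifiers, which are more permissive:

```python
import re

# name: letter (letter | digit | "_")*
# letter: "a"..."z" | "A"..."Z"
# digit: "0"..."9"
NAME = re.compile(r"[A-Za-z][A-Za-z0-9_]*")

assert NAME.fullmatch("spam_1") is not None  # letter, then letters/digits/"_"
assert NAME.fullmatch("1spam") is None       # must start with a letter
assert NAME.fullmatch("_x") is None          # the example rule has no leading "_"
```

The `*` in the rule corresponds directly to the regex `*`: zero or more repetitions of the preceding item.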
{"url": "https://docs.python.org/3/whatsnew/2.0.html", "title": "What\u2019s New in Python 2.0", "content": "What\u2019s New in Python 2.0\u00b6\n- Author:\nA.M. Kuchling and Moshe Zadka\nIntroduction\u00b6\nA new release of Python, version 2.0, was released on October 16, 2000. This article covers the exciting new features in 2.0, highlights some other useful changes, and points out a few incompatible changes that may require rewriting code.\nPython\u2019s development never completely stops between releases, and a steady flow of bug fixes and improvements is always being submitted. A host of minor fixes, a few optimizations, additional docstrings, and better error messages went into 2.0; to list them all would be impossible, but they\u2019re certainly significant. Consult the publicly available CVS logs if you want to see the full list. This progress is due to the fact that the five developers working for PythonLabs are now getting paid to spend their days fixing bugs, and also due to the improved communication resulting from moving to SourceForge.\nWhat About Python 1.6?\u00b6\nPython 1.6 can be thought of as the Contractual Obligations Python release. After the core development team left CNRI in May 2000, CNRI requested that a 1.6 release be created, containing all the work on Python that had been performed at CNRI. Python 1.6 therefore represents the state of the CVS tree as of May 2000, with the most significant new feature being Unicode support. Development continued after May, of course, so the 1.6 tree received a few fixes to ensure that it\u2019s forward-compatible with Python 2.0. 1.6 is therefore part of Python\u2019s evolution, and not a side branch.\nSo, should you take much interest in Python 1.6? Probably not. The 1.6final and 2.0beta1 releases were made on the same day (September 5, 2000), the plan being to finalize Python 2.0 within a month or so. 
If you have applications to maintain, there seems little point in breaking things by moving to 1.6, fixing them, and then having another round of breakage within a month by moving to 2.0; you\u2019re better off just going straight to 2.0. Most of the really interesting features described in this document are only in 2.0, because a lot of work was done between May and September.\nNew Development Process\u00b6\nThe most important change in Python 2.0 may not be to the code at all, but to how Python is developed: in May 2000 the Python developers began using the tools made available by SourceForge for storing source code, tracking bug reports, and managing the queue of patch submissions. To report bugs or submit patches for Python 2.0, use the bug tracking and patch manager tools available from Python\u2019s project page, located at https://sourceforge.net/projects/python/.\nThe most important of the services now hosted at SourceForge is the Python CVS tree, the version-controlled repository containing the source code for Python. Previously, there were roughly 7 or so people who had write access to the CVS tree, and all patches had to be inspected and checked in by one of the people on this short list. Obviously, this wasn\u2019t very scalable. By moving the CVS tree to SourceForge, it became possible to grant write access to more people; as of September 2000 there were 27 people able to check in changes, a fourfold increase. This makes possible large-scale changes that wouldn\u2019t be attempted if they\u2019d have to be filtered through the small group of core developers. For example, one day Peter Schneider-Kamp took it into his head to drop K&R C compatibility and convert the C source for Python to ANSI C. After getting approval on the python-dev mailing list, he launched into a flurry of checkins that lasted about a week, other developers joined in to help, and the job was done. 
If there were only 5 people with write access, probably that task would have been viewed as \u201cnice, but not worth the time and effort needed\u201d and it would never have gotten done.\nThe shift to using SourceForge\u2019s services has resulted in a remarkable increase in the speed of development. Patches now get submitted, commented on, revised by people other than the original submitter, and bounced back and forth between people until the patch is deemed worth checking in. Bugs are tracked in one central location and can be assigned to a specific person for fixing, and we can count the number of open bugs to measure progress. This didn\u2019t come without a cost: developers now have more e-mail to deal with, more mailing lists to follow, and special tools had to be written for the new environment. For example, SourceForge sends default patch and bug notification e-mail messages that are completely unhelpful, so Ka-Ping Yee wrote an HTML screen-scraper that sends more useful messages.\nThe ease of adding code caused a few initial growing pains, such as code being checked in before it was ready or without getting clear agreement from the developer group. The approval process that has emerged is somewhat similar to that used by the Apache group. Developers can vote +1, +0, -0, or -1 on a patch; +1 and -1 denote acceptance or rejection, while +0 and -0 mean the developer is mostly indifferent to the change, though with a slight positive or negative slant. The most significant change from the Apache model is that the voting is essentially advisory, letting Guido van Rossum, who has Benevolent Dictator For Life status, know what the general opinion is. He can still ignore the result of a vote, and approve or reject a change even if the community disagrees with him.\nProducing an actual patch is the last step in adding a new feature, and is usually easy compared to the earlier task of coming up with a good design. 
Discussions of new features can often explode into lengthy mailing list threads, making the discussion hard to follow, and no one can read every posting to python-dev. Therefore, a relatively formal process has been set up to write Python Enhancement Proposals (PEPs), modelled on the internet RFC process. PEPs are draft documents that describe a proposed new feature, and are continually revised until the community reaches a consensus, either accepting or rejecting the proposal. Quoting from the introduction to PEP 1, \u201cPEP Purpose and Guidelines\u201d:\nPEP stands for Python Enhancement Proposal. A PEP is a design document providing information to the Python community, or describing a new feature for Python. The PEP should provide a concise technical specification of the feature and a rationale for the feature.\nWe intend PEPs to be the primary mechanisms for proposing new features, for collecting community input on an issue, and for documenting the design decisions that have gone into Python. The PEP author is responsible for building consensus within the community and documenting dissenting opinions.\nRead the rest of PEP 1 for the details of the PEP editorial process, style, and format. PEPs are kept in the Python CVS tree on SourceForge, though they\u2019re not part of the Python 2.0 distribution, and are also available in HTML form from https://peps.python.org/. As of September 2000, there are 25 PEPs, ranging from PEP 201, \u201cLockstep Iteration\u201d, to PEP 225, \u201cElementwise/Objectwise Operators\u201d.\nUnicode\u00b6\nThe largest new feature in Python 2.0 is a new fundamental data type: Unicode strings. 
Unicode uses 16-bit numbers to represent characters instead of the 8-bit number used by ASCII, meaning that 65,536 distinct characters can be supported.\nThe final interface for Unicode support was arrived at through countless often-stormy discussions on the python-dev mailing list, and mostly implemented by Marc-Andr\u00e9 Lemburg, based on a Unicode string type implementation by Fredrik Lundh. A detailed explanation of the interface was written up as PEP 100, \u201cPython Unicode Integration\u201d. This article will simply cover the most significant points about the Unicode interfaces.\nIn Python source code, Unicode strings are written as u\"string\"\n. Arbitrary\nUnicode characters can be written using a new escape sequence, \\uHHHH\n, where\nHHHH is a 4-digit hexadecimal number from 0000 to FFFF. The existing\n\\xHH\nescape sequence can also be used, and octal escapes can be used for\ncharacters up to U+01FF, which is represented by \\777\n.\nUnicode strings, just like regular strings, are an immutable sequence type.\nThey can be indexed and sliced, but not modified in place. Unicode strings have\nan encode( [encoding] )\nmethod that returns an 8-bit string in the desired\nencoding. Encodings are named by strings, such as 'ascii'\n, 'utf-8'\n,\n'iso-8859-1'\n, or whatever. 
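In modern Python 3 every string is a Unicode string, but the encode() method survives essentially as described here: it returns the string's bytes in the named encoding. A minimal sketch of the idea in today's Python (this shows Python 3 behaviour, not the 2.0 API itself):

```python
# Python 3 sketch: str is always Unicode; encode() returns a bytes
# object in the requested encoding, named by the same kind of strings.
s = '\u0041\u00e9'                 # "Ae" with an acute accent, via \uHHHH escapes

assert s.encode('utf-8') == b'A\xc3\xa9'
assert s.encode('iso-8859-1') == b'A\xe9'
# An error handler can substitute for unencodable characters.
assert s.encode('ascii', 'replace') == b'A?'
```

Note that in Python 3 the error handler for *encoding* to ASCII substitutes '?'; the U+FFFD replacement character mentioned below applies when decoding.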
A codec API is defined for implementing and registering new encodings that are then available throughout a Python program. If an encoding isn't specified, the default encoding is usually 7-bit ASCII, though it can be changed for your Python installation by calling the sys.setdefaultencoding(encoding) function in a customized version of site.py.
Combining 8-bit and Unicode strings always coerces to Unicode, using the default ASCII encoding; the result of 'a' + u'bc' is u'abc'.
New built-in functions have been added, and existing built-ins modified to support Unicode:

- unichr(ch) returns a Unicode string 1 character long, containing the character ch.
- ord(u), where u is a 1-character regular or Unicode string, returns the number of the character as an integer.
- unicode(string [, encoding] [, errors]) creates a Unicode string from an 8-bit string. encoding is a string naming the encoding to use. The errors parameter specifies the treatment of characters that are invalid for the current encoding; passing 'strict' as the value causes an exception to be raised on any encoding error, while 'ignore' causes errors to be silently ignored and 'replace' uses U+FFFD, the official replacement character, in case of any problems.
- The exec statement, and various built-ins such as eval(), getattr(), and setattr() will also accept Unicode strings as well as regular strings. (It's possible that the process of fixing this missed some built-ins; if you find a built-in function that accepts strings but doesn't accept Unicode strings at all, please report it as a bug.)

A new module, unicodedata, provides an interface to Unicode character properties. For example, unicodedata.category(u'A') returns the 2-character string 'Lu', the 'L' denoting it's a letter, and 'u' meaning that it's uppercase.
unicodedata.bidirectional(u'\u0660') returns 'AN', meaning that U+0660 is an Arabic number.
The codecs module contains functions to look up existing encodings and register new ones. Unless you want to implement a new encoding, you'll most often use the codecs.lookup(encoding) function, which returns a 4-element tuple: (encode_func, decode_func, stream_reader, stream_writer).

- encode_func is a function that takes a Unicode string, and returns a 2-tuple (string, length). string is an 8-bit string containing a portion (perhaps all) of the Unicode string converted into the given encoding, and length tells you how much of the Unicode string was converted.
- decode_func is the opposite of encode_func, taking an 8-bit string and returning a 2-tuple (ustring, length), consisting of the resulting Unicode string ustring and the integer length telling how much of the 8-bit string was consumed.
- stream_reader is a class that supports decoding input from a stream. stream_reader(file_obj) returns an object that supports the read(), readline(), and readlines() methods. These methods will all translate from the given encoding and return Unicode strings.
- stream_writer, similarly, is a class that supports encoding output to a stream. stream_writer(file_obj) returns an object that supports the write() and writelines() methods.
These methods expect Unicode strings, translating them to the given encoding on output.
For example, the following code writes a Unicode string into a file, encoding it as UTF-8:

import codecs

unistr = u'\u0660\u2000ab ...'

(UTF8_encode, UTF8_decode,
 UTF8_streamreader, UTF8_streamwriter) = codecs.lookup('UTF-8')

output = UTF8_streamwriter( open('/tmp/output', 'wb') )
output.write(unistr)
output.close()

The following code would then read UTF-8 input from the file:

input = UTF8_streamreader( open('/tmp/output', 'rb') )
print repr(input.read())
input.close()

Unicode-aware regular expressions are available through the re module, which has a new underlying implementation called SRE written by Fredrik Lundh of Secret Labs AB.
A -U command line option was added which causes the Python compiler to interpret all string literals as Unicode string literals. This is intended to be used in testing and future-proofing your Python code, since some future version of Python may drop support for 8-bit strings and provide only Unicode strings.
List Comprehensions¶
Lists are a workhorse data type in Python, and many programs manipulate a list at some point. Two common operations on lists are to loop over them, and either pick out the elements that meet a certain criterion, or apply some function to each element. For example, given a list of strings, you might want to pull out all the strings containing a given substring, or strip off trailing whitespace from each line.
The existing map() and filter() functions can be used for this purpose, but they require a function as one of their arguments. This is fine if there's an existing built-in function that can be passed directly, but if there isn't, you have to create a little function to do the required work, and Python's scoping rules make the result ugly if the little function needs additional information.
Take the first example in the previous paragraph, finding all the strings in the list containing a given substring. You could write the following to do it:

# Given the list L, make a list of all strings
# containing the substring S.
sublist = filter( lambda s, substring=S:
                      string.find(s, substring) != -1,
                  L)

Because of Python's scoping rules, a default argument is used so that the anonymous function created by the lambda expression knows what substring is being searched for. List comprehensions make this cleaner:

sublist = [ s for s in L if string.find(s, S) != -1 ]

List comprehensions have the form:

[ expression for expr1 in sequence1
             for expr2 in sequence2 ...
             for exprN in sequenceN
             if condition ]

The for…in clauses contain the sequences to be iterated over. The sequences do not have to be the same length, because they are not iterated over in parallel, but from left to right; this is explained more clearly in the following paragraphs. The elements of the generated list will be the successive values of expression. The final if clause is optional; if present, expression is only evaluated and added to the result if condition is true.
To make the semantics very clear, a list comprehension is equivalent to the following Python code:

for expr1 in sequence1:
    for expr2 in sequence2:
        ...
        for exprN in sequenceN:
            if (condition):
                # Append the value of
                # the expression to the
                # resulting list.

This means that when there are multiple for…in clauses, the length of the resulting list will be equal to the product of the lengths of all the sequences. If you have two lists of length 3, the output list is 9 elements long:

seq1 = 'abc'
seq2 = (1,2,3)
>>> [ (x,y) for x in seq1 for y in seq2]
[('a', 1), ('a', 2), ('a', 3), ('b', 1), ('b', 2), ('b', 3), ('c', 1),
 ('c', 2), ('c', 3)]

To avoid introducing an ambiguity into Python's grammar, if expression is creating a tuple, it must be surrounded with parentheses.
The first list comprehension below is a syntax error, while the second one is correct:

# Syntax error
[ x,y for x in seq1 for y in seq2]

# Correct
[ (x,y) for x in seq1 for y in seq2]

The idea of list comprehensions originally comes from the functional programming language Haskell (https://www.haskell.org). Greg Ewing argued most effectively for adding them to Python and wrote the initial list comprehension patch, which was then discussed for a seemingly endless time on the python-dev mailing list and kept up-to-date by Skip Montanaro.
Augmented Assignment¶
Augmented assignment operators, another long-requested feature, have been added to Python 2.0. Augmented assignment operators include +=, -=, *=, and so forth. For example, the statement a += 2 increments the value of the variable a by 2, equivalent to the slightly lengthier a = a + 2.
The full list of supported assignment operators is +=, -=, *=, /=, %=, **=, &=, |=, ^=, >>=, and <<=. Python classes can override the augmented assignment operators by defining methods named __iadd__(), __isub__(), etc. For example, the following Number class stores a number and supports using += to create a new instance with an incremented value.

class Number:
    def __init__(self, value):
        self.value = value
    def __iadd__(self, increment):
        return Number( self.value + increment)

n = Number(5)
n += 3
print n.value

The __iadd__() special method is called with the value of the increment, and should return a new instance with an appropriately modified value; this return value is bound as the new value of the variable on the left-hand side.
Augmented assignment operators were first introduced in the C programming language, and most C-derived languages, such as awk, C++, Java, Perl, and PHP also support them.
The augmented assignment patch was implemented by Thomas Wouters.\nString Methods\u00b6\nUntil now string-manipulation functionality was in the string\nmodule,\nwhich was usually a front-end for the strop\nmodule written in C. The\naddition of Unicode posed a difficulty for the strop\nmodule, because the\nfunctions would all need to be rewritten in order to accept either 8-bit or\nUnicode strings. For functions such as string.replace()\n, which takes 3\nstring arguments, that means eight possible permutations, and correspondingly\ncomplicated code.\nInstead, Python 2.0 pushes the problem onto the string type, making string manipulation functionality available through methods on both 8-bit strings and Unicode strings.\n>>> 'andrew'.capitalize()\n'Andrew'\n>>> 'hostname'.replace('os', 'linux')\n'hlinuxtname'\n>>> 'moshe'.find('sh')\n2\nOne thing that hasn\u2019t changed, a noteworthy April Fools\u2019 joke notwithstanding, is that Python strings are immutable. Thus, the string methods return new strings, and do not modify the string on which they operate.\nThe old string\nmodule is still around for backwards compatibility, but it\nmostly acts as a front-end to the new string methods.\nTwo methods which have no parallel in pre-2.0 versions, although they did exist\nin JPython for quite some time, are startswith()\nand endswith()\n.\ns.startswith(t)\nis equivalent to s[:len(t)] == t\n, while\ns.endswith(t)\nis equivalent to s[-len(t):] == t\n.\nOne other method which deserves special mention is join()\n. The\njoin()\nmethod of a string receives one parameter, a sequence of strings,\nand is equivalent to the string.join()\nfunction from the old string\nmodule, with the arguments reversed. In other words, s.join(seq)\nis\nequivalent to the old string.join(seq, s)\n.\nGarbage Collection of Cycles\u00b6\nThe C implementation of Python uses reference counting to implement garbage collection. 
Every Python object maintains a count of the number of references pointing to itself, and adjusts the count as references are created or destroyed. Once the reference count reaches zero, the object is no longer accessible, since you need to have a reference to an object to access it, and if the count is zero, no references exist any longer.
Reference counting has some pleasant properties: it's easy to understand and implement, and the resulting implementation is portable, fairly fast, and reacts well with other libraries that implement their own memory handling schemes. The major problem with reference counting is that it sometimes doesn't realise that objects are no longer accessible, resulting in a memory leak. This happens when there are cycles of references.
Consider the simplest possible cycle, a class instance which has a reference to itself:

instance = SomeClass()
instance.myself = instance

After the above two lines of code have been executed, the reference count of instance is 2; one reference is from the variable named 'instance', and the other is from the myself attribute of the instance.
If the next line of code is del instance, what happens? The reference count of instance is decreased by 1, so it has a reference count of 1; the reference in the myself attribute still exists. Yet the instance is no longer accessible through Python code, and it could be deleted. Several objects can participate in a cycle if they have references to each other, causing all of the objects to be leaked.
Python 2.0 fixes this problem by periodically executing a cycle detection algorithm which looks for inaccessible cycles and deletes the objects involved. A new gc module provides functions to perform a garbage collection, obtain debugging statistics, and tune the collector's parameters.
Running the cycle detection algorithm takes some time, and therefore will result in some additional overhead.
It is hoped that after we\u2019ve gotten experience\nwith the cycle collection from using 2.0, Python 2.1 will be able to minimize\nthe overhead with careful tuning. It\u2019s not yet obvious how much performance is\nlost, because benchmarking this is tricky and depends crucially on how often the\nprogram creates and destroys objects. The detection of cycles can be disabled\nwhen Python is compiled, if you can\u2019t afford even a tiny speed penalty or\nsuspect that the cycle collection is buggy, by specifying the\n--without-cycle-gc\nswitch when running the configure\nscript.\nSeveral people tackled this problem and contributed to a solution. An early implementation of the cycle detection approach was written by Toby Kelsey. The current algorithm was suggested by Eric Tiedemann during a visit to CNRI, and Guido van Rossum and Neil Schemenauer wrote two different implementations, which were later integrated by Neil. Lots of other people offered suggestions along the way; the March 2000 archives of the python-dev mailing list contain most of the relevant discussion, especially in the threads titled \u201cReference cycle collection for Python\u201d and \u201cFinalization again\u201d.\nOther Core Changes\u00b6\nVarious minor changes have been made to Python\u2019s syntax and built-in functions. None of the changes are very far-reaching, but they\u2019re handy conveniences.\nMinor Language Changes\u00b6\nA new syntax makes it more convenient to call a given function with a tuple of\narguments and/or a dictionary of keyword arguments. In Python 1.5 and earlier,\nyou\u2019d use the apply()\nbuilt-in function: apply(f, args, kw)\ncalls the\nfunction f()\nwith the argument tuple args and the keyword arguments in\nthe dictionary kw. apply()\nis the same in 2.0, but thanks to a patch\nfrom Greg Ewing, f(*args, **kw)\nis a shorter and clearer way to achieve the\nsame effect. 
This syntax is symmetrical with the syntax for defining functions:

def f(*args, **kw):
    # args is a tuple of positional args,
    # kw is a dictionary of keyword args
    ...

The print statement can now have its output directed to a file-like object by following the print with >> file, similar to the redirection operator in Unix shells. Previously you'd either have to use the write() method of the file-like object, which lacks the convenience and simplicity of print, or you could assign a new value to sys.stdout and then restore the old value. For sending output to standard error, it's much easier to write this:

print >> sys.stderr, "Warning: action field not supplied"

Modules can now be renamed on importing them, using the syntax import module as name or from module import name as othername. The patch was submitted by Thomas Wouters.
A new format style is available when using the % operator; '%r' will insert the repr() of its argument. This was also added from symmetry considerations, this time for symmetry with the existing '%s' format style, which inserts the str() of its argument. For example, '%r %s' % ('abc', 'abc') returns a string containing 'abc' abc.
Previously there was no way to implement a class that overrode Python's built-in in operator and implemented a custom version. obj in seq returns true if obj is present in the sequence seq; Python computes this by simply trying every index of the sequence until either obj is found or an IndexError is encountered. Moshe Zadka contributed a patch which adds a __contains__() magic method for providing a custom implementation for in. Additionally, new built-in objects written in C can define what in means for them via a new slot in the sequence protocol.
Earlier versions of Python used a recursive algorithm for deleting objects.
Deeply nested data structures could cause the interpreter to fill up the C stack and crash; Christian Tismer rewrote the deletion logic to fix this problem. On a related note, comparing recursive objects recursed infinitely and crashed; Jeremy Hylton rewrote the code to no longer crash, producing a useful result instead. For example, after this code:

a = []
b = []
a.append(a)
b.append(b)

The comparison a==b returns true, because the two recursive data structures are isomorphic. See the thread "trashcan and PR#7" in the April 2000 archives of the python-dev mailing list for the discussion leading up to this implementation, and some useful relevant links. Note that comparisons can now also raise exceptions. In earlier versions of Python, a comparison operation such as cmp(a,b) would always produce an answer, even if a user-defined __cmp__() method encountered an error, since the resulting exception would simply be silently swallowed.
Work has been done on porting Python to 64-bit Windows on the Itanium processor, mostly by Trent Mick of ActiveState. (Confusingly, sys.platform is still 'win32' on Win64 because it seems that for ease of porting, MS Visual C++ treats code as 32 bit on Itanium.) PythonWin also supports Windows CE; see the Python CE page at https://pythonce.sourceforge.net/ for more information.
Another new platform is Darwin/MacOS X; initial support for it is in Python 2.0. Dynamic loading works, if you specify "configure --with-dyld --with-suffix=.x". Consult the README in the Python source distribution for more instructions.
An attempt has been made to alleviate one of Python's warts, the often-confusing NameError exception when code refers to a local variable before the variable has been assigned a value.
For example, the following code raises an exception on the print statement in both 1.5.2 and 2.0; in 1.5.2 a NameError exception is raised, while 2.0 raises a new UnboundLocalError exception. UnboundLocalError is a subclass of NameError, so any existing code that expects NameError to be raised should still work.

def f():
    print "i=",i
    i = i + 1

f()

Two new exceptions, TabError and IndentationError, have been introduced. They're both subclasses of SyntaxError, and are raised when Python code is found to be improperly indented.
Changes to Built-in Functions¶
A new built-in, zip(seq1, seq2, ...), has been added. zip() returns a list of tuples where each tuple contains the i-th element from each of the argument sequences. The difference between zip() and map(None, seq1, seq2) is that map() pads the sequences with None if the sequences aren't all of the same length, while zip() truncates the returned list to the length of the shortest argument sequence.
The int() and long() functions now accept an optional "base" parameter when the first argument is a string. int('123', 10) returns 123, while int('123', 16) returns 291. int(123, 16) raises a TypeError exception with the message "can't convert non-string with explicit base".
A new variable holding more detailed version information has been added to the sys module. sys.version_info is a tuple (major, minor, micro, level, serial). For example, in a hypothetical 2.0.1beta1, sys.version_info would be (2, 0, 1, 'beta', 1). level is a string such as "alpha", "beta", or "final" for a final release.
Dictionaries have an odd new method, setdefault(key, default), which behaves similarly to the existing get() method. However, if the key is missing, setdefault() both returns the value of default as get() would do, and also inserts it into the dictionary as the value for key.
Thus, the following lines of code:

if dict.has_key( key ): return dict[key]
else:
    dict[key] = []
    return dict[key]

can be reduced to a single return dict.setdefault(key, []) statement.
The interpreter sets a maximum recursion depth in order to catch runaway recursion before filling the C stack and causing a core dump or GPF. Previously this limit was fixed when you compiled Python, but in 2.0 the maximum recursion depth can be read and modified using sys.getrecursionlimit() and sys.setrecursionlimit(). The default value is 1000, and a rough maximum value for a given platform can be found by running a new script, Misc/find_recursionlimit.py.
Porting to 2.0¶
New Python releases try hard to be compatible with previous releases, and the record has been pretty good. However, some changes are considered useful enough, usually because they fix initial design decisions that turned out to be actively mistaken, that breaking backward compatibility can't always be avoided. This section lists the changes in Python 2.0 that may cause old Python code to break.
The change which will probably break the most code is tightening up the arguments accepted by some methods. Some methods would take multiple arguments and treat them as a tuple, particularly various list methods such as append() and insert(). In earlier versions of Python, if L is a list, L.append( 1,2 ) appends the tuple (1,2) to the list. In Python 2.0 this causes a TypeError exception to be raised, with the message: 'append requires exactly 1 argument; 2 given'.
The fix is to simply add an\nextra set of parentheses to pass both values as a tuple: L.append( (1,2) )\n.\nThe earlier versions of these methods were more forgiving because they used an\nold function in Python\u2019s C interface to parse their arguments; 2.0 modernizes\nthem to use PyArg_ParseTuple()\n, the current argument parsing function,\nwhich provides more helpful error messages and treats multi-argument calls as\nerrors. If you absolutely must use 2.0 but can\u2019t fix your code, you can edit\nObjects/listobject.c\nand define the preprocessor symbol\nNO_STRICT_LIST_APPEND\nto preserve the old behaviour; this isn\u2019t recommended.\nSome of the functions in the socket\nmodule are still forgiving in this\nway. For example, socket.connect( ('hostname', 25) )\nis the correct\nform, passing a tuple representing an IP address, but socket.connect('hostname', 25)\nalso works. socket.connect_ex\nand socket.bind\nare similarly easy-going. 2.0alpha1 tightened these functions up, but because\nthe documentation actually used the erroneous multiple argument form, many\npeople wrote code which would break with the stricter checking. GvR backed out\nthe changes in the face of public reaction, so for the socket\nmodule, the\ndocumentation was fixed and the multiple argument form is simply marked as\ndeprecated; it will be tightened up again in a future Python version.\nThe \\x\nescape in string literals now takes exactly 2 hex digits. Previously\nit would consume all the hex digits following the \u2018x\u2019 and take the lowest 8 bits\nof the result, so \\x123456\nwas equivalent to \\x56\n.\nThe AttributeError\nand NameError\nexceptions have a more friendly\nerror message, whose text will be something like 'Spam' instance has no\nattribute 'eggs'\nor name 'eggs' is not defined\n. 
Previously the error\nmessage was just the missing attribute name eggs\n, and code written to take\nadvantage of this fact will break in 2.0.\nSome work has been done to make integers and long integers a bit more\ninterchangeable. In 1.5.2, large-file support was added for Solaris, to allow\nreading files larger than 2 GiB; this made the tell()\nmethod of file\nobjects return a long integer instead of a regular integer. Some code would\nsubtract two file offsets and attempt to use the result to multiply a sequence\nor slice a string, but this raised a TypeError\n. In 2.0, long integers\ncan be used to multiply or slice a sequence, and it\u2019ll behave as you\u2019d\nintuitively expect it to; 3L * 'abc'\nproduces \u2018abcabcabc\u2019, and\n(0,1,2,3)[2L:4L]\nproduces (2,3). Long integers can also be used in various\ncontexts where previously only integers were accepted, such as in the\nseek()\nmethod of file objects, and in the formats supported by the %\noperator (%d\n, %i\n, %x\n, etc.). For example, \"%d\" % 2L**64\nwill\nproduce the string 18446744073709551616\n.\nThe subtlest long integer change of all is that the str()\nof a long\ninteger no longer has a trailing \u2018L\u2019 character, though repr()\nstill\nincludes it. The \u2018L\u2019 annoyed many people who wanted to print long integers that\nlooked just like regular integers, since they had to go out of their way to chop\noff the character. This is no longer a problem in 2.0, but code which does\nstr(longval)[:-1]\nand assumes the \u2018L\u2019 is there, will now lose the final\ndigit.\nTaking the repr()\nof a float now uses a different formatting precision\nthan str()\n. repr()\nuses %.17g\nformat string for C\u2019s\nsprintf()\n, while str()\nuses %.12g\nas before. The effect is that\nrepr()\nmay occasionally show more decimal places than str()\n, for\ncertain numbers. 
For example, the number 8.1 can\u2019t be represented exactly in\nbinary, so repr(8.1)\nis '8.0999999999999996'\n, while str(8.1) is\n'8.1'\n.\nThe -X\ncommand-line option, which turned all standard exceptions into\nstrings instead of classes, has been removed; the standard exceptions will now\nalways be classes. The exceptions\nmodule containing the standard\nexceptions was translated from Python to a built-in C module, written by Barry\nWarsaw and Fredrik Lundh.\nExtending/Embedding Changes\u00b6\nSome of the changes are under the covers, and will only be apparent to people writing C extension modules or embedding a Python interpreter in a larger application. If you aren\u2019t dealing with Python\u2019s C API, you can safely skip this section.\nThe version number of the Python C API was incremented, so C extensions compiled for 1.5.2 must be recompiled in order to work with 2.0. On Windows, it\u2019s not possible for Python 2.0 to import a third party extension built for Python 1.5.x due to how Windows DLLs work, so Python will raise an exception and the import will fail.\nUsers of Jim Fulton\u2019s ExtensionClass module will be pleased to find out that\nhooks have been added so that ExtensionClasses are now supported by\nisinstance()\nand issubclass()\n. This means you no longer have to\nremember to write code such as if type(obj) == myExtensionClass\n, but can use\nthe more natural if isinstance(obj, myExtensionClass)\n.\nThe Python/importdl.c\nfile, which was a mass of #ifdefs to support\ndynamic loading on many different platforms, was cleaned up and reorganised by\nGreg Stein. importdl.c\nis now quite small, and platform-specific code\nhas been moved into a bunch of Python/dynload_*.c\nfiles. 
Another\ncleanup: there were also a number of my*.h\nfiles in the Include/\ndirectory that held various portability hacks; they\u2019ve been merged into a single\nfile, Include/pyport.h\n.\nVladimir Marangozov\u2019s long-awaited malloc restructuring was completed, to make\nit easy to have the Python interpreter use a custom allocator instead of C\u2019s\nstandard malloc()\n. For documentation, read the comments in\nInclude/pymem.h\nand Include/objimpl.h\n. For the lengthy\ndiscussions during which the interface was hammered out, see the web archives of\nthe \u2018patches\u2019 and \u2018python-dev\u2019 lists at python.org.\nRecent versions of the GUSI development environment for MacOS support POSIX\nthreads. Therefore, Python\u2019s POSIX threading support now works on the\nMacintosh. Threading support using the user-space GNU pth\nlibrary was also\ncontributed.\nThreading support on Windows was enhanced, too. Windows supports thread locks that use kernel objects only in case of contention; in the common case when there\u2019s no contention, they use simpler functions which are an order of magnitude faster. A threaded version of Python 1.5.2 on NT is twice as slow as an unthreaded version; with the 2.0 changes, the difference is only 10%. These improvements were contributed by Yakov Markovitch.\nPython 2.0\u2019s source now uses only ANSI C prototypes, so compiling Python now requires an ANSI C compiler, and can no longer be done using a compiler that only supports K&R C.\nPreviously the Python virtual machine used 16-bit numbers in its bytecode,\nlimiting the size of source files. In particular, this affected the maximum\nsize of literal lists and dictionaries in Python source; occasionally people who\nare generating Python code would run into this limit. 
A patch by Charles G. Waldman raises the limit from 2**16 to 2**32.
Three new convenience functions intended for adding constants to a module's dictionary at module initialization time were added: PyModule_AddObject(), PyModule_AddIntConstant(), and PyModule_AddStringConstant(). Each of these functions takes a module object, a null-terminated C string containing the name to be added, and a third argument for the value to be assigned to the name. This third argument is, respectively, a Python object, a C long, or a C string.
A wrapper API was added for Unix-style signal handlers. PyOS_getsig() gets a signal handler and PyOS_setsig() will set a new handler.
Distutils: Making Modules Easy to Install¶
Before Python 2.0, installing modules was a tedious affair – there was no way to figure out automatically where Python is installed, or what compiler options to use for extension modules. Software authors had to go through an arduous ritual of editing Makefiles and configuration files, which only really work on Unix and leave Windows and MacOS unsupported. Python users faced wildly differing installation instructions which varied between different extension packages, which made administering a Python installation something of a chore.
The SIG for distribution utilities, shepherded by Greg Ward, has created the Distutils, a system to make package installation much easier. They form the distutils package, a new part of Python's standard library. In the best case, installing a Python module from source will require the same steps: first you simply unpack the tarball or zip archive, and then run "python setup.py install". The platform will be automatically detected, the compiler will be recognized, C extension modules will be compiled, and the distribution installed into the proper directory.
Optional command-line arguments provide more control over the installation process, and the distutils package offers many places to override defaults – separating the build from the install, building or installing in non-default directories, and more.
In order to use the Distutils, you need to write a setup.py script. For the simple case, when the software contains only .py files, a minimal setup.py can be just a few lines long:

    from distutils.core import setup
    setup(name = "foo", version = "1.0",
          py_modules = ["module1", "module2"])

The setup.py file isn’t much more complicated if the software consists of a few packages:

    from distutils.core import setup
    setup(name = "foo", version = "1.0",
          packages = ["package", "package.subpackage"])

A C extension can be the most complicated case; here’s an example taken from the PyXML package:

    from distutils.core import setup, Extension

    expat_extension = Extension('xml.parsers.pyexpat',
        define_macros = [('XML_NS', None)],
        include_dirs = ['extensions/expat/xmltok',
                        'extensions/expat/xmlparse'],
        sources = ['extensions/pyexpat.c',
                   'extensions/expat/xmltok/xmltok.c',
                   'extensions/expat/xmltok/xmlrole.c']
        )
    setup(name = "PyXML", version = "0.5.4",
          ext_modules = [expat_extension])

The Distutils can also take care of creating source and binary distributions. The “sdist” command, run by “python setup.py sdist”, builds a source distribution such as foo-1.0.tar.gz. Adding new commands isn’t difficult; “bdist_rpm” and “bdist_wininst” commands have already been contributed to create an RPM distribution and a Windows installer for the software, respectively.
Commands to create other distribution formats such as Debian packages and Solaris .pkg files are in various stages of development.
All this is documented in a new manual, Distributing Python Modules, that joins the basic set of Python documentation.
XML Modules¶
Python 1.5.2 included a simple XML parser in the form of the xmllib module, contributed by Sjoerd Mullender. Since 1.5.2’s release, two different interfaces for processing XML have become common: SAX2 (version 2 of the Simple API for XML) provides an event-driven interface with some similarities to xmllib, and the DOM (Document Object Model) provides a tree-based interface, transforming an XML document into a tree of nodes that can be traversed and modified. Python 2.0 includes a SAX2 interface and a stripped-down DOM interface as part of the xml package. Here we will give a brief overview of these new interfaces; consult the Python documentation or the source code for complete details. The Python XML SIG is also working on improved documentation.
SAX2 Support¶
SAX defines an event-driven interface for parsing XML. To use SAX, you must write a SAX handler class. Handler classes inherit from various classes provided by SAX, and override various methods that will then be called by the XML parser. For example, the startElement() and endElement() methods are called for every start and end tag encountered by the parser, the characters() method is called for every chunk of character data, and so forth.
The advantage of the event-driven approach is that the whole document doesn’t have to be resident in memory at any one time, which matters if you are processing really huge documents.
However, writing the SAX handler class can get very complicated if you’re trying to modify the document structure in some elaborate way.
For example, this little program defines a handler that prints a message for every start and end tag, and then parses the file hamlet.xml using it:

    from xml import sax

    class SimpleHandler(sax.ContentHandler):
        def startElement(self, name, attrs):
            print 'Start of element:', name, attrs.keys()
        def endElement(self, name):
            print 'End of element:', name

    # Create a parser object
    parser = sax.make_parser()

    # Tell it what handler to use
    handler = SimpleHandler()
    parser.setContentHandler(handler)

    # Parse a file!
    parser.parse('hamlet.xml')

For more information, consult the Python documentation, or the XML HOWTO at https://pyxml.sourceforge.net/topics/howto/xml-howto.html.
DOM Support¶
The Document Object Model is a tree-based representation of an XML document. A top-level Document instance is the root of the tree, and has a single child which is the top-level Element instance. This Element has child nodes representing character data and any sub-elements, which may have further children of their own, and so forth. Using the DOM you can traverse the resulting tree any way you like, access element and attribute values, insert and delete nodes, and convert the tree back into XML.
The DOM is useful for modifying XML documents, because you can create a DOM tree, modify it by adding new nodes or rearranging subtrees, and then produce a new XML document as output. You can also construct a DOM tree manually and convert it to XML, which can be a more flexible way of producing XML output than simply writing the markup directly to a file.
The DOM implementation included with Python lives in the xml.dom.minidom module. It’s a lightweight implementation of the Level 1 DOM with support for XML namespaces.
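The manual-construction route just described can be sketched in a few lines. This is a minimal illustration in modern Python 3 syntax (rather than the 2.0-era print-statement syntax used elsewhere in this article); the PLAY/PERSONA element names are made up to echo the Hamlet examples below:

```python
from xml.dom import minidom

# Build a tiny document by hand and serialize it back to XML.
doc = minidom.Document()
play = doc.createElement('PLAY')
doc.appendChild(play)

persona = doc.createElement('PERSONA')
persona.appendChild(doc.createTextNode('HAMLET, prince of Denmark.'))
play.appendChild(persona)

# toxml() returns the XML declaration followed by the serialized tree.
print(doc.toxml())
```

Because the tree is assembled node by node, the same code path works whether the nodes come from a parse or are created from scratch.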
The parse() and parseString() convenience functions are provided for generating a DOM tree:

    from xml.dom import minidom
    doc = minidom.parse('hamlet.xml')

doc is a Document instance. Document, like all the other DOM classes such as Element and Text, is a subclass of the Node base class. All the nodes in a DOM tree therefore support certain common methods, such as toxml(), which returns a string containing the XML representation of the node and its children. Each class also has special methods of its own; for example, Element and Document instances have a method to find all child elements with a given tag name.
Continuing from the previous 2-line example:

    perslist = doc.getElementsByTagName('PERSONA')
    print perslist[0].toxml()
    print perslist[1].toxml()

For the Hamlet XML file, the above few lines output:

    <PERSONA>CLAUDIUS, king of Denmark. </PERSONA>
    <PERSONA>HAMLET, son to the late, and nephew to the present king.</PERSONA>

The root element of the document is available as doc.documentElement, and its children can be easily modified by deleting, adding, or removing nodes:

    root = doc.documentElement

    # Remove the first child
    root.removeChild(root.childNodes[0])

    # Move the new first child to the end
    root.appendChild(root.childNodes[0])

    # Insert the new first child (originally,
    # the third child) before the 20th child.
    root.insertBefore(root.childNodes[0], root.childNodes[20])

Again, I will refer you to the Python documentation for a complete listing of the different Node classes and their various methods.
Relationship to PyXML¶
The XML Special Interest Group has been working on XML-related Python code for a while. Its code distribution, called PyXML, is available from the SIG’s web pages at https://www.python.org/community/sigs/current/xml-sig. The PyXML distribution also used the package name xml.
If you’ve written programs that used PyXML, you’re probably wondering about its compatibility with the 2.0 xml package.
The answer is that Python 2.0’s xml package isn’t compatible with PyXML, but can be made compatible by installing a recent version of PyXML. Many applications can get by with the XML support that is included with Python 2.0, but more complicated applications will require the full PyXML package to be installed. When installed, PyXML versions 0.6.0 or greater will replace the xml package shipped with Python, and will be a strict superset of the standard package, adding a bunch of additional features. Some of the additional features in PyXML include:
4DOM, a full DOM implementation from FourThought, Inc.
The xmlproc validating parser, written by Lars Marius Garshol.
The sgmlop parser accelerator module, written by Fredrik Lundh.
Module changes¶
Lots of improvements and bugfixes were made to Python’s extensive standard library; some of the affected modules include readline, ConfigParser, cgi, calendar, posix, xmllib, aifc, chunk, wave, random, shelve, and nntplib. Consult the CVS logs for the exact patch-by-patch details.
Brian Gallew contributed OpenSSL support for the socket module. OpenSSL is an implementation of the Secure Sockets Layer, which encrypts the data being sent over a socket. When compiling Python, you can edit Modules/Setup to include SSL support, which adds an additional function to the socket module: socket.ssl(socket, keyfile, certfile), which takes a socket object and returns an SSL socket.
The httplib and urllib modules were also changed to support https:// URLs, though no one has implemented FTP or SMTP over SSL.
The httplib module has been rewritten by Greg Stein to support HTTP/1.1. Backward compatibility with the 1.5 version of httplib is provided, though using HTTP/1.1 features such as pipelining will require rewriting code to use a different set of interfaces.
The Tkinter module now supports Tcl/Tk versions 8.1, 8.2, and 8.3, and support for the older 7.x versions has been dropped. The Tkinter module now supports displaying Unicode strings in Tk widgets. Also, Fredrik Lundh contributed an optimization which makes operations like create_line and create_polygon much faster, especially when using lots of coordinates.
The curses module has been greatly extended, starting from Oliver Andrich’s enhanced version, to provide many additional functions from ncurses and SYSV curses, such as colour, alternative character set support, pads, and mouse support. This means the module is no longer compatible with operating systems that only have BSD curses, but there don’t seem to be any currently maintained OSes that fall into this category.
As mentioned in the earlier discussion of 2.0’s Unicode support, the underlying implementation of the regular expressions provided by the re module has been changed. SRE, a new regular expression engine written by Fredrik Lundh and partially funded by Hewlett Packard, supports matching against both 8-bit strings and Unicode strings.
New modules¶
A number of new modules were added. We’ll simply list them with brief descriptions; consult the 2.0 documentation for the details of a particular module.
atexit: For registering functions to be called before the Python interpreter exits. Code that currently sets sys.exitfunc directly should be changed to use the atexit module instead, importing atexit and calling atexit.register() with the function to be called on exit.
(Contributed by Skip Montanaro.)
codecs, encodings, unicodedata: Added as part of the new Unicode support.
filecmp: Supersedes the old cmp, cmpcache and dircmp modules, which have now become deprecated. (Contributed by Gordon MacMillan and Moshe Zadka.)
gettext: This module provides internationalization (I18N) and localization (L10N) support for Python programs by providing an interface to the GNU gettext message catalog library. (Integrated by Barry Warsaw, from separate contributions by Martin von Löwis, Peter Funk, and James Henstridge.)
linuxaudiodev: Support for the /dev/audio device on Linux, a twin to the existing sunaudiodev module. (Contributed by Peter Bosch, with fixes by Jeremy Hylton.)
mmap: An interface to memory-mapped files on both Windows and Unix. A file’s contents can be mapped directly into memory, at which point it behaves like a mutable string, so its contents can be read and modified. They can even be passed to functions that expect ordinary strings, such as the re module. (Contributed by Sam Rushing, with some extensions by A.M. Kuchling.)
pyexpat: An interface to the Expat XML parser. (Contributed by Paul Prescod.)
robotparser: Parse a robots.txt file, which is used for writing web spiders that politely avoid certain areas of a web site. The parser accepts the contents of a robots.txt file, builds a set of rules from it, and can then answer questions about the fetchability of a given URL. (Contributed by Skip Montanaro.)
tabnanny: A module/script to check Python source code for ambiguous indentation. (Contributed by Tim Peters.)
UserString: A base class useful for deriving objects that behave like strings.
webbrowser: A module that provides a platform-independent way to launch a web browser on a specific URL. For each platform, various browsers are tried in a specific order. The user can alter which browser is launched by setting the BROWSER environment variable. (Originally inspired by Eric S.
Raymond’s patch to urllib which added similar functionality, but the final module comes from code originally implemented by Fred Drake as Tools/idle/BrowserControl.py, and adapted for the standard library by Fred.)
_winreg: An interface to the Windows registry. _winreg is an adaptation of functions that have been part of PythonWin since 1995, but has now been added to the core distribution, and enhanced to support Unicode. _winreg was written by Bill Tutt and Mark Hammond.
zipfile: A module for reading and writing ZIP-format archives. These are archives produced by PKZIP on DOS/Windows or zip on Unix, not to be confused with gzip-format files (which are supported by the gzip module). (Contributed by James C. Ahlstrom.)
imputil: A module that provides a simpler way for writing customized import hooks, in comparison to the existing ihooks module. (Implemented by Greg Stein, with much discussion on python-dev along the way.)
IDLE Improvements¶
IDLE is the official Python cross-platform IDE, written using Tkinter. Python 2.0 includes IDLE 0.6, which adds a number of new features and improvements. A partial list:
UI improvements and optimizations, especially in the area of syntax highlighting and auto-indentation.
The class browser now shows more information, such as the top-level functions in a module.
Tab width is now a user-settable option.
When opening an existing Python file, IDLE automatically detects the indentation conventions and adapts.
There is now support for calling browsers on various platforms, used to open the Python documentation in a browser.
IDLE now has a command line, which is largely similar to the vanilla Python interpreter.
Call tips were added in many places.
IDLE can now be installed as a package.
In the editor window, there is now a line/column bar at the bottom.
Three new keystroke commands: Check module (Alt-F5), Import module (F5) and Run script (Ctrl-F5).
Deleted and Deprecated Modules¶
A few modules have been dropped because they’re obsolete, or because there are now better ways to do the same thing. The stdwin module is gone; it was for a platform-independent windowing toolkit that’s no longer developed.
A number of modules have been moved to the lib-old subdirectory: cmp, cmpcache, dircmp, dump, find, grep, packmail, poly, util, whatsound, zmod.
If you have code which relies on a module that’s been moved to lib-old, you can simply add that directory to sys.path to get it back, but you’re encouraged to update any code that uses these modules.
Acknowledgements¶
The authors would like to thank the following people for offering suggestions on various drafts of this article: David Bolen, Mark Hammond, Gregg Hauser, Jeremy Hylton, Fredrik Lundh, Detlef Lannert, Aahz Maruch, Skip Montanaro, Vladimir Marangozov, Tobias Polzin, Guido van Rossum, Neil Schemenauer, and Russ Schmidt.
", " ", "\n ", " ", "\n ", " ", " ", " ", "\n ", "\n ", " ", "\n ", "\n", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n", " ", "\n\n", "\n ", " ", " ", "\n ", " ", " ", " ", "\n\n ", " ", "\n ", " ", " ", "\n\n", "\n", " ", " ", "\n\n", "\n", " ", " ", "\n", " ", " ", "\n\n", "\n", " ", " ", "\n", " ", "\n", " ", " ", "\n", " ", " ", " ", " ", "\n", " ", "\n", " ", "\n", " ", " ", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", "\n\n", "\n", " ", " ", "\n\n", "\n", " ", " ", "\n\n", "\n", "\n", " ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 13399}
5. The import system¶
Python code in one module gains access to the code in another module by the process of importing it. The import statement is the most common way of invoking the import machinery, but it is not the only way. Functions such as importlib.import_module() and built-in __import__() can also be used to invoke the import machinery.
The import statement combines two operations; it searches for the named module, then it binds the results of that search to a name in the local scope. The search operation of the import statement is defined as a call to the __import__() function, with the appropriate arguments. The return value of __import__() is used to perform the name binding operation of the import statement. See the import statement for the exact details of that name binding operation.
A direct call to __import__() performs only the module search and, if found, the module creation operation. While certain side-effects may occur, such as the importing of parent packages, and the updating of various caches (including sys.modules), only the import statement performs a name binding operation.
When an import statement is executed, the standard builtin __import__() function is called. Other mechanisms for invoking the import system (such as importlib.import_module()) may choose to bypass __import__() and use their own solutions to implement import semantics.
When a module is first imported, Python searches for the module and if found, it creates a module object [1], initializing it. If the named module cannot be found, a ModuleNotFoundError is raised. Python implements various strategies to search for the named module when the import machinery is invoked.
These strategies can be modified and extended by using various hooks described in the sections below.
Changed in version 3.3: The import system has been updated to fully implement the second phase of PEP 302. There is no longer any implicit import machinery – the full import system is exposed through sys.meta_path. In addition, native namespace package support has been implemented (see PEP 420).
5.1. importlib¶
The importlib module provides a rich API for interacting with the import system. For example, importlib.import_module() provides a recommended, simpler API than built-in __import__() for invoking the import machinery. Refer to the importlib library documentation for additional detail.
5.2. Packages¶
Python has only one type of module object, and all modules are of this type, regardless of whether the module is implemented in Python, C, or something else. To help organize modules and provide a naming hierarchy, Python has a concept of packages.
You can think of packages as the directories on a file system and modules as files within directories, but don’t take this analogy too literally since packages and modules need not originate from the file system. For the purposes of this documentation, we’ll use this convenient analogy of directories and files. Like file system directories, packages are organized hierarchically, and packages may themselves contain subpackages, as well as regular modules.
It’s important to keep in mind that all packages are modules, but not all modules are packages. Or put another way, packages are just a special kind of module. Specifically, any module that contains a __path__ attribute is considered a package.
All modules have a name. Subpackage names are separated from their parent package name by a dot, akin to Python’s standard attribute access syntax.
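The difference in return value between built-in __import__() and importlib.import_module() for a dotted name is easy to see interactively. This sketch imports the same stdlib submodule both ways (the email package is used purely as a convenient example):

```python
import importlib
import sys

# __import__ (what the import statement itself calls) returns the
# *top-level* package; importlib.import_module returns the named submodule.
top = __import__("email.mime.text")
leaf = importlib.import_module("email.mime.text")
print(top.__name__)    # email
print(leaf.__name__)   # email.mime.text

# The intermediate packages were imported and cached along the way.
print("email" in sys.modules and "email.mime" in sys.modules)  # True
```

This is why `import email.mime.text` can bind the name `email` in the importing scope: the statement performs the binding from __import__()'s top-level return value.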
Thus you might have a package called email, which in turn has a subpackage called email.mime and a module within that subpackage called email.mime.text.
5.2.1. Regular packages¶
Python defines two types of packages, regular packages and namespace packages. Regular packages are traditional packages as they existed in Python 3.2 and earlier. A regular package is typically implemented as a directory containing an __init__.py file. When a regular package is imported, this __init__.py file is implicitly executed, and the objects it defines are bound to names in the package’s namespace. The __init__.py file can contain the same Python code that any other module can contain, and Python will add some additional attributes to the module when it is imported.
For example, the following file system layout defines a top level parent package with three subpackages:

    parent/
        __init__.py
        one/
            __init__.py
        two/
            __init__.py
        three/
            __init__.py

Importing parent.one will implicitly execute parent/__init__.py and parent/one/__init__.py. Subsequent imports of parent.two or parent.three will execute parent/two/__init__.py and parent/three/__init__.py respectively.
5.2.2. Namespace packages¶
A namespace package is a composite of various portions, where each portion contributes a subpackage to the parent package. Portions may reside in different locations on the file system. Portions may also be found in zip files, on the network, or anywhere else that Python searches during import. Namespace packages may or may not correspond directly to objects on the file system; they may be virtual modules that have no concrete representation.
Namespace packages do not use an ordinary list for their __path__ attribute.
They instead use a custom iterable type which will automatically perform a new search for package portions on the next import attempt within that package if the path of their parent package (or sys.path for a top level package) changes.
With namespace packages, there is no parent/__init__.py file. In fact, there may be multiple parent directories found during import search, where each one is provided by a different portion. Thus parent/one may not be physically located next to parent/two. In this case, Python will create a namespace package for the top-level parent package whenever it or one of its subpackages is imported.
See also PEP 420 for the namespace package specification.
5.3. Searching¶
To begin the search, Python needs the fully qualified name of the module (or package, but for the purposes of this discussion, the difference is immaterial) being imported. This name may come from various arguments to the import statement, or from the parameters to the importlib.import_module() or __import__() functions.
This name will be used in various phases of the import search, and it may be the dotted path to a submodule, e.g. foo.bar.baz. In this case, Python first tries to import foo, then foo.bar, and finally foo.bar.baz. If any of the intermediate imports fail, a ModuleNotFoundError is raised.
5.3.1. The module cache¶
The first place checked during import search is sys.modules. This mapping serves as a cache of all modules that have been previously imported, including the intermediate paths. So if foo.bar.baz was previously imported, sys.modules will contain entries for foo, foo.bar, and foo.bar.baz. Each key will have as its value the corresponding module object.
During import, the module name is looked up in sys.modules and if present, the associated value is the module satisfying the import, and the process completes. However, if the value is None, then a ModuleNotFoundError is raised.
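These cache rules can be demonstrated directly. The sketch below uses json purely as a convenient stdlib module, and restores the cache entry when it is done:

```python
import sys
import json

# A previously imported module is served straight from sys.modules.
first = json
saved = sys.modules["json"]

del sys.modules["json"]      # invalidate the cache entry
import json                  # triggers a fresh search and load
print(json is first)         # False: a brand-new module object

sys.modules["json"] = None   # a None entry makes the next import fail
try:
    import json
except ModuleNotFoundError:
    print("None in sys.modules -> ModuleNotFoundError")

sys.modules["json"] = saved  # restore the original cache entry
```

Note that other modules holding a reference to the original json module are unaffected by the cache manipulation; only subsequent imports see the change.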
If the module name is missing, Python will continue searching for the module.
sys.modules is writable. Deleting a key may not destroy the associated module (as other modules may hold references to it), but it will invalidate the cache entry for the named module, causing Python to search anew for the named module upon its next import. The key can also be assigned to None, forcing the next import of the module to result in a ModuleNotFoundError.
Beware though, as if you keep a reference to the module object, invalidate its cache entry in sys.modules, and then re-import the named module, the two module objects will not be the same. By contrast, importlib.reload() will reuse the same module object, and simply reinitialise the module contents by rerunning the module’s code.
5.3.2. Finders and loaders¶
If the named module is not found in sys.modules, then Python’s import protocol is invoked to find and load the module. This protocol consists of two conceptual objects, finders and loaders. A finder’s job is to determine whether it can find the named module using whatever strategy it knows about. Objects that implement both of these interfaces are referred to as importers – they return themselves when they find that they can load the requested module.
Python includes a number of default finders and importers. The first one knows how to locate built-in modules, and the second knows how to locate frozen modules. A third default finder searches an import path for modules. The import path is a list of locations that may name file system paths or zip files. It can also be extended to search for any locatable resource, such as those identified by URLs.
The import machinery is extensible, so new finders can be added to extend the range and scope of module searching.
Finders do not actually load modules.
If they can find the named module, they return a module spec, an encapsulation of the module’s import-related information, which the import machinery then uses when loading the module.
The following sections describe the protocol for finders and loaders in more detail, including how you can create and register new ones to extend the import machinery.
Changed in version 3.4: In previous versions of Python, finders returned loaders directly, whereas now they return module specs which contain loaders. Loaders are still used during import but have fewer responsibilities.
5.3.3. Import hooks¶
The import machinery is designed to be extensible; the primary mechanism for this are the import hooks. There are two types of import hooks: meta hooks and import path hooks.
Meta hooks are called at the start of import processing, before any other import processing has occurred, other than sys.modules cache look up. This allows meta hooks to override sys.path processing, frozen modules, or even built-in modules. Meta hooks are registered by adding new finder objects to sys.meta_path, as described below.
Import path hooks are called as part of sys.path (or package.__path__) processing, at the point where their associated path item is encountered. Import path hooks are registered by adding new callables to sys.path_hooks as described below.
5.3.4. The meta path¶
When the named module is not found in sys.modules, Python next searches sys.meta_path, which contains a list of meta path finder objects. These finders are queried in order to see if they know how to handle the named module. Meta path finders must implement a method called find_spec() which takes three arguments: a name, an import path, and (optionally) a target module. The meta path finder can use any strategy it wants to determine whether it can handle the named module or not.
If the meta path finder knows how to handle the named module, it returns a spec object.
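A minimal sketch of this protocol: a meta path finder that handles exactly one virtual module name and declines everything else. The module name hello_virtual and the greeting are invented for illustration; the find_spec() signature and ModuleSpec construction follow the machinery described here:

```python
import importlib.abc
import importlib.machinery
import sys


class HelloLoader(importlib.abc.Loader):
    def create_module(self, spec):
        return None              # None: let the machinery create the module

    def exec_module(self, module):
        # Populate the module's namespace during "execution".
        module.greet = lambda: "hello from a meta path finder"


class HelloFinder(importlib.abc.MetaPathFinder):
    def find_spec(self, name, path, target=None):
        if name != "hello_virtual":
            return None          # decline: the next finder on sys.meta_path is tried
        return importlib.machinery.ModuleSpec(name, HelloLoader())


sys.meta_path.insert(0, HelloFinder())
import hello_virtual             # resolved by HelloFinder, no file involved
print(hello_virtual.greet())
```

Inserting at position 0 means this finder is consulted before the built-in, frozen, and path-based finders; returning None for every other name keeps normal imports untouched.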
If it cannot handle the named module, it returns None. If sys.meta_path processing reaches the end of its list without returning a spec, then a ModuleNotFoundError is raised. Any other exceptions raised are simply propagated up, aborting the import process.
The find_spec() method of meta path finders is called with two or three arguments. The first is the fully qualified name of the module being imported, for example foo.bar.baz. The second argument is the path entries to use for the module search. For top-level modules, the second argument is None, but for submodules or subpackages, the second argument is the value of the parent package’s __path__ attribute. If the appropriate __path__ attribute cannot be accessed, a ModuleNotFoundError is raised. The third argument is an existing module object that will be the target of loading later. The import system passes in a target module only during reload.
The meta path may be traversed multiple times for a single import request. For example, assuming none of the modules involved has already been cached, importing foo.bar.baz will first perform a top level import, calling mpf.find_spec("foo", None, None) on each meta path finder (mpf). After foo has been imported, foo.bar will be imported by traversing the meta path a second time, calling mpf.find_spec("foo.bar", foo.__path__, None). Once foo.bar has been imported, the final traversal will call mpf.find_spec("foo.bar.baz", foo.bar.__path__, None).
Some meta path finders only support top level imports. These importers will always return None when anything other than None is passed as the second argument.
Python’s default sys.meta_path has three meta path finders, one that knows how to import built-in modules, one that knows how to import frozen modules, and one that knows how to import modules from an import path (i.e.
the path based finder).
Changed in version 3.4: The find_spec() method of meta path finders replaced find_module(), which is now deprecated. While it will continue to work without change, the import machinery will try it only if the finder does not implement find_spec().
Changed in version 3.10: Use of find_module() by the import system now raises ImportWarning.
Changed in version 3.12: find_module() has been removed. Use find_spec() instead.
5.4. Loading¶
If and when a module spec is found, the import machinery will use it (and the loader it contains) when loading the module. Here is an approximation of what happens during the loading portion of import:

    module = None
    if spec.loader is not None and hasattr(spec.loader, 'create_module'):
        # It is assumed 'exec_module' will also be defined on the loader.
        module = spec.loader.create_module(spec)
    if module is None:
        module = ModuleType(spec.name)
    # The import-related module attributes get set here:
    _init_module_attrs(spec, module)

    if spec.loader is None:
        # unsupported
        raise ImportError
    if spec.origin is None and spec.submodule_search_locations is not None:
        # namespace package
        sys.modules[spec.name] = module
    elif not hasattr(spec.loader, 'exec_module'):
        module = spec.loader.load_module(spec.name)
    else:
        sys.modules[spec.name] = module
        try:
            spec.loader.exec_module(module)
        except BaseException:
            try:
                del sys.modules[spec.name]
            except KeyError:
                pass
            raise
    return sys.modules[spec.name]

Note the following details:
If there is an existing module object with the given name in sys.modules, import will have already returned it.
The module will exist in sys.modules before the loader executes the module code.
This is crucial because the module code may (directly or indirectly) import itself; adding it to sys.modules beforehand prevents unbounded recursion in the worst case and multiple loading in the best.
- If loading fails, the failing module – and only the failing module – gets removed from sys.modules. Any module already in the sys.modules cache, and any module that was successfully loaded as a side-effect, must remain in the cache. This contrasts with reloading, where even the failing module is left in sys.modules.
- After the module is created but before execution, the import machinery sets the import-related module attributes ("_init_module_attrs" in the pseudo-code example above), as summarized in a later section.

Module execution is the key moment of loading in which the module's namespace gets populated. Execution is entirely delegated to the loader, which gets to decide what gets populated and how.

The module created during loading and passed to exec_module() may not be the one returned at the end of import [2].

Changed in version 3.4: The import system has taken over the boilerplate responsibilities of loaders. These were previously performed by the importlib.abc.Loader.load_module() method.

5.4.1. Loaders¶

Module loaders provide the critical function of loading: module execution. The import machinery calls the importlib.abc.Loader.exec_module() method with a single argument, the module object to execute.
Any value returned from exec_module() is ignored.

Loaders must satisfy the following requirements:

- If the module is a Python module (as opposed to a built-in module or a dynamically loaded extension), the loader should execute the module's code in the module's global namespace (module.__dict__).
- If the loader cannot execute the module, it should raise an ImportError, although any other exception raised during exec_module() will be propagated.

In many cases, the finder and loader can be the same object; in such cases the find_spec() method would just return a spec with the loader set to self.

Module loaders may opt in to creating the module object during loading by implementing a create_module() method. It takes one argument, the module spec, and returns the new module object to use during loading. create_module() does not need to set any attributes on the module object. If the method returns None, the import machinery will create the new module itself.

Added in version 3.4: The create_module() method of loaders.

Changed in version 3.4: The load_module() method was replaced by exec_module() and the import machinery assumed all the boilerplate responsibilities of loading.

For compatibility with existing loaders, the import machinery will use the load_module() method of loaders if it exists and the loader does not also implement exec_module(). However, load_module() has been deprecated and loaders should implement exec_module() instead.

The load_module() method must implement all the boilerplate loading functionality described above in addition to executing the module. All the same constraints apply, with some additional clarification:

- If there is an existing module object with the given name in sys.modules, the loader must use that existing module. (Otherwise, importlib.reload() will not work correctly.)
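A loader satisfying the requirements above can be sketched in a few lines. The DictLoader class and the module name "fake_config" below are illustrative assumptions; the loader "executes" the module by filling its namespace from a plain dict instead of running source code:

```python
import sys
import importlib.util
from importlib.machinery import ModuleSpec

class DictLoader:
    """Illustrative loader: populates the module from a dict."""
    def __init__(self, namespace):
        self.namespace = namespace

    def create_module(self, spec):
        # Returning None asks the import machinery to create the module.
        return None

    def exec_module(self, module):
        # Execution: populate the module's global namespace.
        module.__dict__.update(self.namespace)

spec = ModuleSpec("fake_config", DictLoader({"DEBUG": True}))
module = importlib.util.module_from_spec(spec)
sys.modules[spec.name] = module          # cache before execution
spec.loader.exec_module(module)

import fake_config
assert fake_config.DEBUG is True
```

Note the order: the module goes into sys.modules before exec_module() runs, matching the loading pseudo-code above.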
If the named module does not exist in sys.modules, the loader must create a new module object and add it to sys.modules.
- The module must exist in sys.modules before the loader executes the module code, to prevent unbounded recursion or multiple loading.
- If loading fails, the loader must remove any modules it has inserted into sys.modules, but it must remove only the failing module(s), and only if the loader itself has loaded the module(s) explicitly.

Changed in version 3.5: A DeprecationWarning is raised when exec_module() is defined but create_module() is not.

Changed in version 3.6: An ImportError is raised when exec_module() is defined but create_module() is not.

Changed in version 3.10: Use of load_module() will raise ImportWarning.

5.4.2. Submodules¶

When a submodule is loaded using any mechanism (e.g. importlib APIs, the import or import-from statements, or built-in __import__()) a binding is placed in the parent module's namespace to the submodule object. For example, if package spam has a submodule foo, after importing spam.foo, spam will have an attribute foo which is bound to the submodule. Let's say you have the following directory structure:

spam/
    __init__.py
    foo.py

and spam/__init__.py has the following line in it:

from .foo import Foo

then executing the following puts name bindings for foo and Foo in the spam module:

>>> import spam
>>> spam.foo
<module 'spam.foo' from ...>
>>> spam.Foo
<class 'spam.foo.Foo'>

Given Python's familiar name binding rules this might seem surprising, but it's actually a fundamental feature of the import system. The invariant holding is that if you have sys.modules['spam'] and sys.modules['spam.foo'] (as you would after the above import), the latter must appear as the foo attribute of the former.

5.4.3. Module specs¶

The import machinery uses a variety of information about each module during import, especially before loading. Most of the information is common to all modules.
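The spam/foo example and its invariant can be reproduced end to end; the sketch below builds the package layout in a temporary directory (the directory path is throwaway):

```python
import os
import sys
import tempfile
import importlib

# Recreate the spam/ layout from the example on disk.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "spam")
os.mkdir(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .foo import Foo\n")
with open(os.path.join(pkg, "foo.py"), "w") as f:
    f.write("class Foo:\n    pass\n")

sys.path.insert(0, tmp)
importlib.invalidate_caches()
import spam

# The submodule is bound as an attribute of the parent package,
# and the attribute is the very object cached in sys.modules.
assert spam.foo is sys.modules["spam.foo"]
# __init__.py's own "from .foo import Foo" made Foo visible too.
assert spam.Foo is spam.foo.Foo
```

The final assertions state exactly the invariant described above: sys.modules['spam.foo'] must appear as the foo attribute of sys.modules['spam'].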
The purpose of a module's spec is to encapsulate this import-related information on a per-module basis.

Using a spec during import allows state to be transferred between import system components, e.g. between the finder that creates the module spec and the loader that executes it. Most importantly, it allows the import machinery to perform the boilerplate operations of loading, whereas without a module spec the loader had that responsibility.

The module's spec is exposed as module.__spec__. Setting __spec__ appropriately applies equally to modules initialized during interpreter startup. The one exception is __main__, where __spec__ is set to None in some cases.

See ModuleSpec for details on the contents of the module spec.

Added in version 3.4.

5.4.4. __path__ attributes on modules¶

The __path__ attribute should be a (possibly empty) sequence of strings enumerating the locations where the package's submodules will be found. By definition, if a module has a __path__ attribute, it is a package.

A package's __path__ attribute is used during imports of its subpackages. Within the import machinery, it functions much the same as sys.path, i.e. providing a list of locations to search for modules during import. However, __path__ is typically much more constrained than sys.path.

The same rules used for sys.path also apply to a package's __path__. sys.path_hooks (described below) are consulted when traversing a package's __path__.

A package's __init__.py file may set or alter the package's __path__ attribute, and this was typically the way namespace packages were implemented prior to PEP 420. With the adoption of PEP 420, namespace packages no longer need to supply __init__.py files containing only __path__ manipulation code; the import machinery automatically sets __path__ correctly for the namespace package.

5.4.5. Module reprs¶

By default, all modules have a usable repr; however, depending on the attributes set above, and in the module's spec, you can more explicitly control the repr of module objects.

If the module has a spec (__spec__), the import machinery will try to generate a repr from it. If that fails or there is no spec, the import system will craft a default repr using whatever information is available on the module. It will try to use the module.__name__, module.__file__, and module.__loader__ as input into the repr, with defaults for whatever information is missing.

Here are the exact rules used:

- If the module has a __spec__ attribute, the information in the spec is used to generate the repr. The "name", "loader", "origin", and "has_location" attributes are consulted.
- If the module has a __file__ attribute, this is used as part of the module's repr.
- If the module has no __file__ but does have a __loader__ that is not None, then the loader's repr is used as part of the module's repr.
- Otherwise, just use the module's __name__ in the repr.

Changed in version 3.12: Use of module_repr(), having been deprecated since Python 3.4, was removed in Python 3.12 and is no longer called during the resolution of a module's repr.

5.4.6. Cached bytecode invalidation¶

Before Python loads cached bytecode from a .pyc file, it checks whether the cache is up-to-date with the source .py file. By default, Python does this by storing the source's last-modified timestamp and size in the cache file when writing it. At runtime, the import system then validates the cache file by checking the stored metadata in the cache file against the source's metadata.

Python also supports "hash-based" cache files, which store a hash of the source file's contents rather than its metadata. There are two variants of hash-based .pyc files: checked and unchecked.
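The fallback repr rules above can be seen with a bare module object created by hand (the module name "demo" and the path are illustrative):

```python
import types

# A bare module has no spec, no __file__, and a None loader,
# so the repr falls back to just __name__.
m = types.ModuleType("demo")
assert repr(m) == "<module 'demo'>"

# Once a __file__ attribute exists, it becomes part of the repr.
m.__file__ = "/tmp/demo.py"
assert "from '/tmp/demo.py'" in repr(m)
```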
For checked hash-based .pyc files, Python validates the cache file by hashing the source file and comparing the resulting hash with the hash in the cache file. If a checked hash-based cache file is found to be invalid, Python regenerates it and writes a new checked hash-based cache file. For unchecked hash-based .pyc files, Python simply assumes the cache file is valid if it exists. Hash-based .pyc file validation behavior may be overridden with the --check-hash-based-pycs flag.

Changed in version 3.7: Added hash-based .pyc files. Previously, Python only supported timestamp-based invalidation of bytecode caches.

5.5. The Path Based Finder¶

As mentioned previously, Python comes with several default meta path finders. One of these, called the path based finder (PathFinder), searches an import path, which contains a list of path entries. Each path entry names a location to search for modules.

The path based finder itself doesn't know how to import anything. Instead, it traverses the individual path entries, associating each of them with a path entry finder that knows how to handle that particular kind of path.

The default set of path entry finders implement all the semantics for finding modules on the file system, handling special file types such as Python source code (.py files), Python byte code (.pyc files) and shared libraries (e.g. .so files). When supported by the zipimport module in the standard library, the default path entry finders also handle loading all of these file types (other than shared libraries) from zipfiles.

Path entries need not be limited to file system locations. They can refer to URLs, database queries, or any other location that can be specified as a string.

The path based finder provides additional hooks and protocols so that you can extend and customize the types of searchable path entries.
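The timestamp-based and checked hash-based variants described above can both be produced explicitly with py_compile; a small sketch (the source file and cache paths are throwaway), which also peeks at the PEP 552 header flag that distinguishes the two:

```python
import os
import tempfile
import py_compile

# A throwaway source file.
src = os.path.join(tempfile.mkdtemp(), "mod.py")
with open(src, "w") as f:
    f.write("VALUE = 1\n")

# Default invalidation: source timestamp + size stored in the header.
timestamped = py_compile.compile(src, cfile=src + ".ts.pyc")

# Checked hash-based: a source hash is stored and re-checked on import.
checked = py_compile.compile(
    src,
    cfile=src + ".hash.pyc",
    invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,
)

def flags(pyc_path):
    # PEP 552 header: 4-byte magic, then a 4-byte flags word;
    # bit 0 set means the file is hash-based.
    with open(pyc_path, "rb") as fh:
        header = fh.read(8)
    return int.from_bytes(header[4:8], "little")

assert flags(timestamped) & 1 == 0   # timestamp-based
assert flags(checked) & 1 == 1       # hash-based
```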
For example, if you wanted to support path entries as network URLs, you could write a hook that implements HTTP semantics to find modules on the web. This hook (a callable) would return a path entry finder supporting the protocol described below, which would then be used to get a loader for the module from the web.

A word of warning: this section and the previous both use the term finder, distinguishing between them by using the terms meta path finder and path entry finder. These two types of finders are very similar, support similar protocols, and function in similar ways during the import process, but it's important to keep in mind that they are subtly different. In particular, meta path finders operate at the beginning of the import process, keyed off the sys.meta_path traversal. By contrast, path entry finders are in a sense an implementation detail of the path based finder, and in fact, if the path based finder were to be removed from sys.meta_path, none of the path entry finder semantics would be invoked.

5.5.1. Path entry finders¶

The path based finder is responsible for finding and loading Python modules and packages whose location is specified with a string path entry. Most path entries name locations in the file system, but they need not be limited to this.

As a meta path finder, the path based finder implements the find_spec() protocol previously described; however, it exposes additional hooks that can be used to customize how modules are found and loaded from the import path.

Three variables are used by the path based finder: sys.path, sys.path_hooks and sys.path_importer_cache. The __path__ attributes on package objects are also used. These provide additional ways that the import machinery can be customized.

sys.path contains a list of strings providing search locations for modules and packages.
It is initialized from the PYTHONPATH environment variable and various other installation- and implementation-specific defaults. Entries in sys.path can name directories on the file system, zip files, and potentially other "locations" (see the site module) that should be searched for modules, such as URLs, or database queries. Only strings should be present on sys.path; all other data types are ignored.

The path based finder is a meta path finder, so the import machinery begins the import path search by calling the path based finder's find_spec() method as described previously. When the path argument to find_spec() is given, it will be a list of string paths to traverse, typically a package's __path__ attribute for an import within that package. If the path argument is None, this indicates a top level import and sys.path is used.

The path based finder iterates over every entry in the search path, and for each of these, looks for an appropriate path entry finder (PathEntryFinder) for the path entry. Because this can be an expensive operation (e.g. there may be stat() call overheads for this search), the path based finder maintains a cache mapping path entries to path entry finders. This cache is maintained in sys.path_importer_cache (despite the name, this cache actually stores finder objects rather than being limited to importer objects). In this way, the expensive search for a particular path entry location's path entry finder need only be done once. User code is free to remove cache entries from sys.path_importer_cache, forcing the path based finder to perform the path entry search again.

If the path entry is not present in the cache, the path based finder iterates over every callable in sys.path_hooks. Each of the path entry hooks in this list is called with a single argument, the path entry to be searched.
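The sys.path_importer_cache mapping described above can be inspected directly in any running interpreter; in a typical CPython installation (an assumption: the standard library is importable from the file system) it holds FileFinder instances keyed by path entry:

```python
import sys
import importlib.machinery

import json  # noqa: F401  (any file-system import populates the cache)

# The cache maps path entry strings to path entry finders (or None
# for entries no finder could handle).
assert any(
    isinstance(finder, importlib.machinery.FileFinder)
    for finder in sys.path_importer_cache.values()
)

# User code may evict an entry to force a fresh path-entry search:
# sys.path_importer_cache.pop(some_entry, None)
```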
This callable may either return a path entry finder that can handle the path entry, or it may raise ImportError. An ImportError is used by the path based finder to signal that the hook cannot find a path entry finder for that path entry. The exception is ignored and import path iteration continues. The hook should expect either a string or bytes object; the encoding of bytes objects is up to the hook (e.g. it may be a file system encoding, UTF-8, or something else), and if the hook cannot decode the argument, it should raise ImportError.

If sys.path_hooks iteration ends with no path entry finder being returned, then the path based finder's find_spec() method will store None in sys.path_importer_cache (to indicate that there is no finder for this path entry) and return None, indicating that this meta path finder could not find the module.

If a path entry finder is returned by one of the path entry hook callables on sys.path_hooks, then the following protocol is used to ask the finder for a module spec, which is then used when loading the module.

The current working directory – denoted by an empty string – is handled slightly differently from other entries on sys.path. First, if the current working directory cannot be determined or is found not to exist, no value is stored in sys.path_importer_cache. Second, the value for the current working directory is looked up fresh for each module lookup. Third, the path used for sys.path_importer_cache and returned by importlib.machinery.PathFinder.find_spec() will be the actual current working directory and not the empty string.

5.5.2. Path entry finder protocol¶

In order to support imports of modules and initialized packages and also to contribute portions to namespace packages, path entry finders must implement the find_spec() method.

find_spec() takes two arguments: the fully qualified name of the module being imported, and the (optional) target module. find_spec() returns a fully populated spec for the module. This spec will always have "loader" set (with one exception).

To indicate to the import machinery that the spec represents a namespace portion, the path entry finder sets submodule_search_locations to a list containing the portion.

Changed in version 3.4: find_spec() replaced find_loader() and find_module(), both of which are now deprecated, but will be used if find_spec() is not defined.

Older path entry finders may implement one of these two deprecated methods instead of find_spec(). The methods are still respected for the sake of backward compatibility. However, if find_spec() is implemented on the path entry finder, the legacy methods are ignored.

find_loader() takes one argument, the fully qualified name of the module being imported. find_loader() returns a 2-tuple where the first item is the loader and the second item is a namespace portion.

For backwards compatibility with other implementations of the import protocol, many path entry finders also support the same, traditional find_module() method that meta path finders support. However path entry finder find_module() methods are never called with a path argument (they are expected to record the appropriate path information from the initial call to the path hook).

The find_module() method on path entry finders is deprecated, as it does not allow the path entry finder to contribute portions to namespace packages.
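A path entry hook on sys.path_hooks, as described above, is just a callable that either returns a path entry finder or raises ImportError. A minimal sketch that records the entries it is offered and declines all of them (the "fake://entry" path entry and module name are hypothetical):

```python
import sys
import importlib

seen = []

def declining_hook(path_entry):
    """Path entry hook that records entries and handles none of them."""
    seen.append(path_entry)
    raise ImportError  # tells the path based finder to try the next hook

sys.path_hooks.insert(0, declining_hook)
sys.path.insert(0, "fake://entry")   # hypothetical non-filesystem entry
importlib.invalidate_caches()

try:
    import module_that_does_not_exist_xyz  # noqa: F401  (forces a full search)
except ModuleNotFoundError:
    pass
finally:
    sys.path_hooks.remove(declining_hook)
    sys.path.remove("fake://entry")
    sys.path_importer_cache.pop("fake://entry", None)  # drop the cached None

# The hook was consulted for the uncached entry during the failed import.
assert "fake://entry" in seen
```

Because every hook raised ImportError for "fake://entry", the path based finder cached None for that entry, which the cleanup above removes.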
If both find_loader() and find_module() exist on a path entry finder, the import system will always call find_loader() in preference to find_module().

Changed in version 3.10: Calls to find_module() and find_loader() by the import system will raise ImportWarning.

Changed in version 3.12: find_module() and find_loader() have been removed.

5.6. Replacing the standard import system¶

The most reliable mechanism for replacing the entire import system is to delete the default contents of sys.meta_path, replacing them entirely with a custom meta path hook.

If it is acceptable to only alter the behaviour of import statements without affecting other APIs that access the import system, then replacing the builtin __import__() function may be sufficient.

To selectively prevent the import of some modules from a hook early on the meta path (rather than disabling the standard import system entirely), it is sufficient to raise ModuleNotFoundError directly from find_spec() instead of returning None. The latter indicates that the meta path search should continue, while raising an exception terminates it immediately.

5.7. Package Relative Imports¶

Relative imports use leading dots. A single leading dot indicates a relative import, starting with the current package. Two or more leading dots indicate a relative import to the parent(s) of the current package, one level per dot after the first. For example, given the following package layout:

package/
    __init__.py
    subpackage1/
        __init__.py
        moduleX.py
        moduleY.py
    subpackage2/
        __init__.py
        moduleZ.py
    moduleA.py

In either subpackage1/moduleX.py or subpackage1/__init__.py, the following are valid relative imports:

from .moduleY import spam
from .moduleY import spam as ham
from . import moduleY
from ..subpackage1 import moduleY
from ..subpackage2.moduleZ import eggs
from ..moduleA import foo

Absolute imports may use either the import <> or from <> import <> syntax, but relative imports may only use the second form; the reason for this is that:

import XXX.YYY.ZZZ

should expose XXX.YYY.ZZZ as a usable expression, but .moduleY is not a valid expression.

5.8. Special considerations for __main__¶

The __main__ module is a special case relative to Python's import system. As noted elsewhere, the __main__ module is directly initialized at interpreter startup, much like sys and builtins. However, unlike those two, it doesn't strictly qualify as a built-in module. This is because the manner in which __main__ is initialized depends on the flags and other options with which the interpreter is invoked.

5.8.1. __main__.__spec__¶

Depending on how __main__ is initialized, __main__.__spec__ gets set appropriately or to None.

When Python is started with the -m option, __spec__ is set to the module spec of the corresponding module or package. __spec__ is also populated when the __main__ module is loaded as part of executing a directory, zipfile or other sys.path entry.

In the remaining cases __main__.__spec__ is set to None, as the code used to populate __main__ does not correspond directly with an importable module:

- interactive prompt
- -c option
- running from stdin
- running directly from a source or bytecode file

Note that __main__.__spec__ is always None in the last case, even if the file could technically be imported directly as a module instead. Use the -m switch if valid module metadata is desired in __main__.

Note also that even when __main__ corresponds with an importable module and __main__.__spec__ is set accordingly, they're still considered distinct modules.
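Raising ModuleNotFoundError from find_spec(), as section 5.6 describes for selectively blocking imports, can be sketched with a small meta path finder (the ImportBlocker class and the blocked module name are illustrative):

```python
import sys

class ImportBlocker:
    """Meta path finder that vetoes one module instead of returning None."""
    def __init__(self, blocked):
        self.blocked = blocked

    def find_spec(self, fullname, path=None, target=None):
        if fullname == self.blocked:
            # Raising here terminates the sys.meta_path search immediately.
            raise ModuleNotFoundError(f"{fullname!r} is blocked")
        return None  # let the remaining finders handle everything else

blocker = ImportBlocker("some_blocked_module")  # hypothetical module name
sys.meta_path.insert(0, blocker)
try:
    import some_blocked_module  # noqa: F401
    message = ""
except ModuleNotFoundError as exc:
    message = str(exc)
finally:
    sys.meta_path.remove(blocker)

assert "blocked" in message
```

Returning None would have let the search continue to the default finders; raising short-circuits it, which is the distinction the section draws.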
This is due to the fact that blocks guarded by if __name__ == "__main__": checks only execute when the module is used to populate the __main__ namespace, and not during normal import.

5.9. References¶

The import machinery has evolved considerably since Python's early days. The original specification for packages is still available to read, although some details have changed since the writing of that document.

The original specification for sys.meta_path was PEP 302, with subsequent extension in PEP 420.

PEP 420 introduced namespace packages for Python 3.3. PEP 420 also introduced the find_loader() protocol as an alternative to find_module().

PEP 366 describes the addition of the __package__ attribute for explicit relative imports in main modules.

PEP 328 introduced absolute and explicit relative imports and initially proposed __name__ for semantics PEP 366 would eventually specify for __package__.

PEP 338 defines executing modules as scripts.

PEP 451 adds the encapsulation of per-module import state in spec objects. It also off-loads most of the boilerplate responsibilities of loaders back onto the import machinery.
These changes allow the deprecation of several APIs in the import system and also the addition of new methods to finders and loaders.

Footnotes
Source: https://docs.python.org/3/reference/executionmodel.html ("Execution model")

4. Execution model¶

4.1. Structure of a program¶

A Python program is constructed from code blocks. A block is a piece of Python program text that is executed as a unit. The following are blocks: a module, a function body, and a class definition. Each command typed interactively is a block. A script file (a file given as standard input to the interpreter or specified as a command line argument to the interpreter) is a code block. A script command (a command specified on the interpreter command line with the -c option) is a code block. A module run as a top level script (as module __main__) from the command line using a -m argument is also a code block. The string argument passed to the built-in functions eval() and exec() is a code block.

A code block is executed in an execution frame. A frame contains some administrative information (used for debugging) and determines where and how execution continues after the code block's execution has completed.

4.2. Naming and binding¶

4.2.1. Binding of names¶

Names refer to objects. Names are introduced by name binding operations.

The following constructs bind names:

- formal parameters to functions,
- class definitions,
- function definitions,
- assignment expressions,
- targets that are identifiers if occurring in an assignment,
- import statements,
- type statements.

The import statement of the form from ... import * binds all names defined in the imported module, except those beginning with an underscore. This form may only be used at the module level.

A target occurring in a del statement is also considered bound for this purpose (though the actual semantics are to unbind the name).

Each assignment or import statement occurs within a block defined by a class or function definition or at the module level (the top-level code block).

If a name is bound in a block, it is a local variable of that block, unless declared as nonlocal or global. If a name is bound at the module level, it is a global variable. (The variables of the module code block are local and global.) If a variable is used in a code block but not defined there, it is a free variable.

Each occurrence of a name in the program text refers to the binding of that name established by the following name resolution rules.

4.2.2. Resolution of names¶

A scope defines the visibility of a name within a block. If a local variable is defined in a block, its scope includes that block. If the definition occurs in a function block, the scope extends to any blocks contained within the defining one, unless a contained block introduces a different binding for the name.

When a name is used in a code block, it is resolved using the nearest enclosing scope. The set of all such scopes visible to a code block is called the block's environment.

When a name is not found at all, a NameError exception is raised. If the current scope is a function scope, and the name refers to a local variable that has not yet been bound to a value at the point where the name is used, an UnboundLocalError exception is raised. UnboundLocalError is a subclass of NameError.

If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block. This can lead to errors when a name is used within a block before it is bound.
This rule is subtle. Python lacks declarations and allows name binding operations to occur anywhere within a code block. The local variables of a code block can be determined by scanning the entire text of the block for name binding operations. See the FAQ entry on UnboundLocalError for examples.

If the global statement occurs within a block, all uses of the names specified in the statement refer to the bindings of those names in the top-level namespace. Names are resolved in the top-level namespace by searching the global namespace, i.e. the namespace of the module containing the code block, and the builtins namespace, the namespace of the module builtins. The global namespace is searched first. If the names are not found there, the builtins namespace is searched next. If the names are also not found in the builtins namespace, new variables are created in the global namespace. The global statement must precede all uses of the listed names.

The global statement has the same scope as a name binding operation in the same block. If the nearest enclosing scope for a free variable contains a global statement, the free variable is treated as a global.

The nonlocal statement causes corresponding names to refer to previously bound variables in the nearest enclosing function scope. SyntaxError is raised at compile time if the given name does not exist in any enclosing function scope. Type parameters cannot be rebound with the nonlocal statement.

The namespace for a module is automatically created the first time a module is imported.
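The rule that a binding anywhere in a block makes the name local throughout the block, and the resulting UnboundLocalError, can be demonstrated directly:

```python
x = "global value"

def f():
    try:
        return x  # looks up x before it is bound; x is local to f
    except UnboundLocalError as exc:
        return type(exc).__name__
    x = "local value"  # this later binding makes x local throughout f

# The assignment below the return is never executed, but its mere
# presence makes x a local variable for the whole function body.
assert f() == "UnboundLocalError"
assert issubclass(UnboundLocalError, NameError)
```

Without the trailing assignment, the same lookup would simply find the global x.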
The main module for a script is always called __main__.

Class definition blocks and arguments to exec() and eval() are special in the context of name resolution. A class definition is an executable statement that may use and define names. These references follow the normal rules for name resolution with an exception that unbound local variables are looked up in the global namespace. The namespace of the class definition becomes the attribute dictionary of the class. The scope of names defined in a class block is limited to the class block; it does not extend to the code blocks of methods. This includes comprehensions and generator expressions, but it does not include annotation scopes, which have access to their enclosing class scopes.

This means that the following will fail:

class A:
    a = 42
    b = list(a + i for i in range(10))

However, the following will succeed:

class A:
    type Alias = Nested
    class Nested: pass

print(A.Alias.__value__)

4.2.3. Annotation scopes¶

Annotations, type parameter lists and type statements introduce annotation scopes, which behave mostly like function scopes, but with some exceptions discussed below.

Annotation scopes are used in the following contexts:

- Type parameter lists for generic type aliases.
- Type parameter lists for generic functions. A generic function's annotations are executed within the annotation scope, but its defaults and decorators are not.
- Type parameter lists for generic classes. A generic class's base classes and keyword arguments are executed within the annotation scope, but its decorators are not.
- The bounds, constraints, and default values for type parameters (lazily evaluated).
- The value of type aliases (lazily evaluated).

Annotation scopes differ from function scopes in the following ways:

- Annotation scopes have access to their enclosing class namespace.
If an annotation scope is immediately within a class scope, or within another annotation scope that is immediately within a class scope, the code in the annotation scope can use names defined in the class scope as if it were executed directly within the class body. This contrasts with regular functions defined within classes, which cannot access names defined in the class scope.
- Expressions in annotation scopes cannot contain yield, yield from, await, or := expressions. (These expressions are allowed in other scopes contained within the annotation scope.)
- Names defined in annotation scopes cannot be rebound with nonlocal statements in inner scopes. This includes only type parameters, as no other syntactic elements that can appear within annotation scopes can introduce new names.
- While annotation scopes have an internal name, that name is not reflected in the qualified name of objects defined within the scope. Instead, the __qualname__ of such objects is as if the object were defined in the enclosing scope.

Added in version 3.12: Annotation scopes were introduced in Python 3.12 as part of PEP 695.

Changed in version 3.13: Annotation scopes are also used for type parameter defaults, as introduced by PEP 696.

4.2.4. Lazy evaluation¶

Most annotation scopes are lazily evaluated. This includes annotations, the values of type aliases created through the type statement, and the bounds, constraints, and default values of type variables created through the type parameter syntax. This means that they are not evaluated when the type alias or type variable is created, or when the object carrying annotations is created.
Instead, they are only evaluated when necessary, for example when the __value__ attribute on a type alias is accessed.

Example:

>>> type Alias = 1/0
>>> Alias.__value__
Traceback (most recent call last):
  ...
ZeroDivisionError: division by zero
>>> def func[T: 1/0](): pass
>>> T = func.__type_params__[0]
>>> T.__bound__
Traceback (most recent call last):
  ...
ZeroDivisionError: division by zero

Here the exception is raised only when the __value__ attribute of the type alias or the __bound__ attribute of the type variable is accessed.

This behavior is primarily useful for references to types that have not yet been defined when the type alias or type variable is created. For example, lazy evaluation enables creation of mutually recursive type aliases:

from typing import Literal

type SimpleExpr = int | Parenthesized
type Parenthesized = tuple[Literal["("], Expr, Literal[")"]]
type Expr = SimpleExpr | tuple[SimpleExpr, Literal["+", "-"], Expr]

Lazily evaluated values are evaluated in annotation scope, which means that names that appear inside the lazily evaluated value are looked up as if they were used in the immediately enclosing scope.

Added in version 3.12.

4.2.5. Builtins and restricted execution

CPython implementation detail: Users should not touch __builtins__; it is strictly an implementation detail. Users wanting to override values in the builtins namespace should import the builtins module and modify its attributes appropriately.

The builtins namespace associated with the execution of a code block is actually found by looking up the name __builtins__ in its global namespace; this should be a dictionary or a module (in the latter case the module's dictionary is used). By default, when in the __main__ module, __builtins__ is the built-in module builtins; when in any other module, __builtins__ is an alias for the dictionary of the builtins module itself.

4.2.6.
Interaction with dynamic features

Name resolution of free variables occurs at runtime, not at compile time. This means that the following code will print 42:

i = 10

def f():
    print(i)

i = 42
f()

The eval() and exec() functions do not have access to the full environment for resolving names. Names may be resolved in the local and global namespaces of the caller. Free variables are not resolved in the nearest enclosing namespace, but in the global namespace. [1] The exec() and eval() functions have optional arguments to override the global and local namespace. If only one namespace is specified, it is used for both.

4.3. Exceptions

Exceptions are a means of breaking out of the normal flow of control of a code block in order to handle errors or other exceptional conditions. An exception is raised at the point where the error is detected; it may be handled by the surrounding code block or by any code block that directly or indirectly invoked the code block where the error occurred.

The Python interpreter raises an exception when it detects a run-time error (such as division by zero). A Python program can also explicitly raise an exception with the raise statement. Exception handlers are specified with the try … except statement. The finally clause of such a statement can be used to specify cleanup code which does not handle the exception, but is executed whether an exception occurred or not in the preceding code.

Python uses the "termination" model of error handling: an exception handler can find out what happened and continue execution at an outer level, but it cannot repair the cause of the error and retry the failing operation (except by re-entering the offending piece of code from the top).

When an exception is not handled at all, the interpreter terminates execution of the program, or returns to its interactive main loop.
In either case, it prints a stack traceback, except when the exception is SystemExit.

Exceptions are identified by class instances. The except clause is selected depending on the class of the instance: it must reference the class of the instance or a non-virtual base class thereof. The instance can be received by the handler and can carry additional information about the exceptional condition.

Note: Exception messages are not part of the Python API. Their contents may change from one version of Python to the next without warning and should not be relied on by code which will run under multiple versions of the interpreter.

See also the description of the try statement in section The try statement and the raise statement in section The raise statement.

4.4. Runtime Components

4.4.1. General Computing Model

Python's execution model does not operate in a vacuum. It runs on a host machine and through that host's runtime environment, including its operating system (OS), if there is one. When a program runs, the conceptual layers of how it runs on the host look something like this:

host machine
  process (global resources)
    thread (runs machine code)

Each process represents a program running on the host. Think of each process itself as the data part of its program. Think of the process' threads as the execution part of the program. This distinction will be important to understand the conceptual Python runtime.

The process, as the data part, is the execution context in which the program runs. It mostly consists of the set of resources assigned to the program by the host, including memory, signals, file handles, sockets, and environment variables.

Processes are isolated and independent from one another. (The same is true for hosts.)
The host manages the process' access to its assigned resources, in addition to coordinating between processes.

Each thread represents the actual execution of the program's machine code, running relative to the resources assigned to the program's process. It's strictly up to the host how and when that execution takes place.

From the point of view of Python, a program always starts with exactly one thread. However, the program may grow to run in multiple simultaneous threads. Not all hosts support multiple threads per process, but most do. Unlike processes, threads in a process are not isolated and independent from one another. Specifically, all threads in a process share all of the process' resources.

The fundamental point of threads is that each one runs independently, at the same time as the others. That may be only conceptually at the same time ("concurrently") or physically ("in parallel"). Either way, the threads effectively run at a non-synchronized rate.

Note: That non-synchronized rate means none of the process' memory is guaranteed to stay consistent for the code running in any given thread. Thus multi-threaded programs must take care to coordinate access to intentionally shared resources. Likewise, they must be diligent about not accessing any other resources in multiple threads; otherwise two threads running at the same time might accidentally interfere with each other's use of some shared data. All this is true for both Python programs and the Python runtime.

The cost of this broad, unstructured requirement is the tradeoff for the kind of raw concurrency that threads provide. The alternative to the required discipline generally means dealing with non-deterministic bugs and data corruption.

4.4.2.
Python Runtime Model

The same conceptual layers apply to each Python program, with some extra data layers specific to Python:

host machine
  process (global resources)
    Python global runtime (state)
      Python interpreter (state)
        thread (runs Python bytecode and "C-API")
          Python thread state

At the conceptual level: when a Python program starts, it looks exactly like that diagram, with one of each. The runtime may grow to include multiple interpreters, and each interpreter may grow to include multiple thread states.

Note: A Python implementation won't necessarily implement the runtime layers distinctly or even concretely. The only exception is places where distinct layers are directly specified or exposed to users, like through the threading module.

Note: The initial interpreter is typically called the "main" interpreter. Some Python implementations, like CPython, assign special roles to the main interpreter. Likewise, the host thread where the runtime was initialized is known as the "main" thread. It may be different from the process' initial thread, though they are often the same. In some cases "main thread" may be even more specific and refer to the initial thread state. A Python runtime might assign specific responsibilities to the main thread, such as handling signals.

As a whole, the Python runtime consists of the global runtime state, interpreters, and thread states. The runtime ensures all that state stays consistent over its lifetime, particularly when used with multiple host threads.

The global runtime, at the conceptual level, is just a set of interpreters. While those interpreters are otherwise isolated and independent from one another, they may share some data or other resources. The runtime is responsible for managing these global resources safely. The actual nature and management of these resources is implementation-specific.
Ultimately, the external utility of the global runtime is limited to managing interpreters.

In contrast, an "interpreter" is conceptually what we would normally think of as the (full-featured) "Python runtime". When machine code executing in a host thread interacts with the Python runtime, it calls into Python in the context of a specific interpreter.

Note: The term "interpreter" here is not the same as the "bytecode interpreter", which is what regularly runs in threads, executing compiled Python code. In an ideal world, "Python runtime" would refer to what we currently call "interpreter". However, it's been called "interpreter" at least since it was introduced in 1997 (CPython: a027efa5b).

Each interpreter completely encapsulates all of the non-process-global, non-thread-specific state needed for the Python runtime to work. Notably, the interpreter's state persists between uses. It includes fundamental data like sys.modules. The runtime ensures that multiple threads using the same interpreter will safely share it between them.

A Python implementation may support using multiple interpreters at the same time in the same process. They are independent and isolated from one another. For example, each interpreter has its own sys.modules.

For thread-specific runtime state, each interpreter has a set of thread states, which it manages, in the same way the global runtime contains a set of interpreters. It can have thread states for as many host threads as it needs. It may even have multiple thread states for the same host thread, though that isn't as common.

Each thread state, conceptually, has all the thread-specific runtime data an interpreter needs to operate in one host thread. The thread state includes the current raised exception and the thread's Python call stack.
It may include other thread-specific resources.

Note: The term "Python thread" can sometimes refer to a thread state, but normally it means a thread created using the threading module.

Each thread state, over its lifetime, is always tied to exactly one interpreter and exactly one host thread. It will only ever be used in that thread and with that interpreter.

Multiple thread states may be tied to the same host thread, whether for different interpreters or even the same interpreter. However, for any given host thread, only one of the thread states tied to it can be used by the thread at a time.

Thread states are isolated and independent from one another and don't share any data, except for possibly sharing an interpreter and objects or other resources belonging to that interpreter.

Once a program is running, new Python threads can be created using the threading module (on platforms and Python implementations that support threads). Additional processes can be created using the os, subprocess, and multiprocessing modules. Interpreters can be created and used with the interpreters module. Coroutines (async) can be run using asyncio in each interpreter, typically only in a single thread (often the main thread).

Footnotes
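The multi-threading behavior described above can be sketched at the Python level: several threads share one process resource, and a lock coordinates the intentionally shared access so the threads' non-synchronized rates cannot corrupt it. This is a minimal illustrative sketch, not an example from the reference itself:

```python
import threading

counter = 0
lock = threading.Lock()

def work():
    """Increment the shared counter; the lock coordinates shared access."""
    global counter
    for _ in range(10_000):
        with lock:          # without this, updates from the 4 threads could be lost
            counter += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                # wait for every thread to finish

print(counter)  # 40000
```

Removing the `with lock:` line makes the result non-deterministic, which is exactly the "non-deterministic bugs and data corruption" tradeoff the text describes.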
ossaudiodev — Access to OSS-compatible audio devices

Deprecated since version 3.11, removed in version 3.13.

This module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594. The last version of Python that provided the ossaudiodev module was Python 3.12.
nntplib — NNTP protocol client

Deprecated since version 3.11, removed in version 3.13.

This module is no longer part of the Python standard library. It was removed in Python 3.13 after being deprecated in Python 3.11. The removal was decided in PEP 594. The last version of Python that provided the nntplib module was Python 3.12.
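Code that must run on Python versions both before and after these PEP 594 removals can guard the import; a small hedged sketch (the fallback strategy is up to the application):

```python
# Tolerate the PEP 594 removal of nntplib (gone from the stdlib in 3.13+).
try:
    import nntplib  # available in the standard library through Python 3.12
except ImportError:
    nntplib = None  # removed; fall back or disable the NNTP feature

have_nntp = nntplib is not None
```

The same pattern applies to the other modules removed by PEP 594, such as ossaudiodev.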
Exceptions

Source code: Lib/asyncio/exceptions.py

- exception asyncio.TimeoutError
  A deprecated alias of TimeoutError, raised when the operation has exceeded the given deadline.
  Changed in version 3.11: This class was made an alias of TimeoutError.

- exception asyncio.CancelledError
  The operation has been cancelled.
  This exception can be caught to perform custom operations when asyncio Tasks are cancelled. In almost all situations the exception must be re-raised.
  Changed in version 3.8: CancelledError is now a subclass of BaseException rather than Exception.

- exception asyncio.InvalidStateError
  Invalid internal state of Task or Future.
  Can be raised in situations like setting a result value for a Future object that already has a result value set.

- exception asyncio.SendfileNotAvailableError
  The "sendfile" syscall is not available for the given socket or file type.
  A subclass of RuntimeError.

- exception asyncio.IncompleteReadError
  The requested read operation did not complete fully.
  Raised by the asyncio stream APIs.
  This exception is a subclass of EOFError.

- exception asyncio.LimitOverrunError
  Reached the buffer size limit while looking for a separator.
  Raised by the asyncio stream APIs.
  - consumed
    The total number of bytes to be consumed.
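The catch-and-re-raise pattern for CancelledError can be sketched as follows (the coroutine names and the log list are illustrative, not from the documentation):

```python
import asyncio

async def worker(log):
    try:
        await asyncio.sleep(10)      # stand-in for real work
    except asyncio.CancelledError:
        log.append("cleaned up")     # custom cleanup on cancellation
        raise                        # in almost all situations, re-raise

async def main():
    log = []
    task = asyncio.create_task(worker(log))
    await asyncio.sleep(0)           # let the worker start and block
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass                         # the task is now cancelled
    return log

result = asyncio.run(main())
print(result)  # ['cleaned up']
```

Swallowing the exception instead of re-raising would leave the task looking like it completed normally, which is why re-raising is required in almost all situations.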
MemoryView objects

A memoryview object exposes the C level buffer interface as a Python object which can then be passed around like any other object.

- PyTypeObject PyMemoryView_Type
  Part of the Stable ABI.
  This instance of PyTypeObject represents the Python memoryview type. This is the same object as memoryview in the Python layer.

- PyObject *PyMemoryView_FromObject(PyObject *obj)
  Return value: New reference. Part of the Stable ABI.
  Create a memoryview object from an object that provides the buffer interface. If obj supports writable buffer exports, the memoryview object will be read/write; otherwise it may be either read-only or read/write at the discretion of the exporter.

- PyBUF_READ
  Part of the Stable ABI since version 3.11.
  Flag to request a readonly buffer.

- PyBUF_WRITE
  Part of the Stable ABI since version 3.11.
  Flag to request a writable buffer.

- PyObject *PyMemoryView_FromMemory(char *mem, Py_ssize_t size, int flags)
  Return value: New reference. Part of the Stable ABI since version 3.7.
  Create a memoryview object using mem as the underlying buffer. flags can be one of PyBUF_READ or PyBUF_WRITE.
  Added in version 3.3.

- PyObject *PyMemoryView_FromBuffer(const Py_buffer *view)
  Return value: New reference. Part of the Stable ABI since version 3.11.
  Create a memoryview object wrapping the given buffer structure view. For simple byte buffers, PyMemoryView_FromMemory() is the preferred function.

- PyObject *PyMemoryView_GetContiguous(PyObject *obj, int buffertype, char order)
  Return value: New reference. Part of the Stable ABI.
  Create a memoryview object to a contiguous chunk of memory (in either 'C' or 'F'ortran order) from an object that defines the buffer interface. If the memory is contiguous, the memoryview object points to the original memory. Otherwise, a copy is made and the memoryview points to a new bytes object.
  buffertype can be one of PyBUF_READ or PyBUF_WRITE.

- int PyMemoryView_Check(PyObject *obj)
  Return true if the object obj is a memoryview object. It is not currently allowed to create subclasses of memoryview. This function always succeeds.

- Py_buffer *PyMemoryView_GET_BUFFER(PyObject *mview)
  Return a pointer to the memoryview's private copy of the exporter's buffer. mview must be a memoryview instance; this macro doesn't check its type, so you must do it yourself or you will risk crashes.

- PyObject *PyMemoryView_GET_BASE(PyObject *mview)
  Return either a pointer to the exporting object that the memoryview is based on, or NULL if the memoryview has been created by one of the functions PyMemoryView_FromMemory() or PyMemoryView_FromBuffer(). mview must be a memoryview instance.
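The read/write distinction that PyBUF_READ and PyBUF_WRITE express at the C level can be observed from Python with the memoryview type itself; a small sketch:

```python
# A writable exporter (bytearray) yields a read/write memoryview,
# mirroring PyMemoryView_FromObject's behavior for writable buffer exports.
data = bytearray(b"hello")
view = memoryview(data)
assert not view.readonly
view[0] = ord("H")               # writes through to the underlying buffer
assert bytes(data) == b"Hello"

# An immutable exporter (bytes) yields a read-only view.
ro = memoryview(b"hello")
assert ro.readonly
```

The view aliases the exporter's memory rather than copying it, which is the same zero-copy property the C-level functions provide.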
email.utils: Miscellaneous utilities

Source code: Lib/email/utils.py

There are a couple of useful utilities provided in the email.utils module:

- email.utils.localtime(dt=None)
  Return local time as an aware datetime object. If called without arguments, return current time. Otherwise the dt argument should be a datetime instance, and it is converted to the local time zone according to the system time zone database. If dt is naive (that is, dt.tzinfo is None), it is assumed to be in local time.
  Added in version 3.3.
  Deprecated since version 3.12, removed in version 3.14: The isdst parameter.

- email.utils.make_msgid(idstring=None, domain=None)
  Returns a string suitable for an RFC 2822-compliant Message-ID header. Optional idstring, if given, is a string used to strengthen the uniqueness of the message id. Optional domain, if given, provides the portion of the msgid after the '@'. The default is the local hostname. It is not normally necessary to override this default, but it may be useful in certain cases, such as constructing a distributed system that uses a consistent domain name across multiple hosts.
  Changed in version 3.2: Added the domain keyword.

The remaining functions are part of the legacy (Compat32) email API. There is no need to directly use these with the new API, since the parsing and formatting they provide is done automatically by the header parsing machinery of the new API.

- email.utils.quote(str)
  Return a new string with backslashes in str replaced by two backslashes, and double quotes replaced by backslash-double quote.

- email.utils.unquote(str)
  Return a new string which is an unquoted version of str. If str begins and ends with double quotes, they are stripped off. Likewise if str begins and ends with angle brackets, they are stripped off.

- email.utils.parseaddr(address, *, strict=True)
  Parse address -- which should be the value of some address-containing field such as To or Cc -- into its constituent realname and email address parts. Returns a tuple of that information, unless the parse fails, in which case a 2-tuple of ('', '') is returned.
  If strict is true, use a strict parser which rejects malformed inputs.
  Changed in version 3.13: Add strict optional parameter and reject malformed inputs by default.

- email.utils.formataddr(pair, charset='utf-8')
  The inverse of parseaddr(), this takes a 2-tuple of the form (realname, email_address) and returns the string value suitable for a To or Cc header. If the first element of pair is false, then the second element is returned unmodified.
  Optional charset is the character set that will be used in the RFC 2047 encoding of the realname if the realname contains non-ASCII characters. Can be an instance of str or a Charset. Defaults to utf-8.
  Changed in version 3.3: Added the charset option.

- email.utils.getaddresses(fieldvalues, *, strict=True)
  This method returns a list of 2-tuples of the form returned by parseaddr(). fieldvalues is a sequence of header field values as might be returned by Message.get_all.
  If strict is true, use a strict parser which rejects malformed inputs.
  Here's a simple example that gets all the recipients of a message:

  from email.utils import getaddresses

  tos = msg.get_all('to', [])
  ccs = msg.get_all('cc', [])
  resent_tos = msg.get_all('resent-to', [])
  resent_ccs = msg.get_all('resent-cc', [])
  all_recipients = getaddresses(tos + ccs + resent_tos + resent_ccs)

  Changed in version 3.13: Add strict optional parameter and reject malformed inputs by default.

- email.utils.parsedate(date)
  Attempts to parse a date according to the rules in RFC 2822; however, some mailers don't follow that format as specified, so parsedate() tries to guess correctly in such cases. date is a string containing an RFC 2822 date, such as "Mon, 20 Nov 1995 19:12:08 -0500". If it succeeds in parsing the date, parsedate() returns a 9-tuple that can be passed directly to time.mktime(); otherwise None will be returned. Note that indexes 6, 7, and 8 of the result tuple are not usable.

- email.utils.parsedate_tz(date)
  Performs the same function as parsedate(), but returns either None or a 10-tuple; the first 9 elements make up a tuple that can be passed directly to time.mktime(), and the tenth is the offset of the date's timezone from UTC (which is the official term for Greenwich Mean Time) [1]. If the input string has no timezone, the last element of the tuple returned is 0, which represents UTC. Note that indexes 6, 7, and 8 of the result tuple are not usable.

- email.utils.parsedate_to_datetime(date)
  The inverse of format_datetime(). Performs the same function as parsedate(), but on success returns a datetime; otherwise ValueError is raised if date contains an invalid value such as an hour greater than 23 or a timezone offset not between -24 and 24 hours. If the input date has a timezone of -0000, the datetime will be a naive datetime, and if the date is conforming to the RFCs it will represent a time in UTC but with no indication of the actual source timezone of the message the date comes from. If the input date has any other valid timezone offset, the datetime will be an aware datetime with the corresponding timezone tzinfo.
  Added in version 3.3.

- email.utils.mktime_tz(tuple)
  Turn a 10-tuple as returned by parsedate_tz() into a UTC timestamp (seconds since the Epoch). If the timezone item in the tuple is None, assume local time.

- email.utils.formatdate(timeval=None, localtime=False, usegmt=False)
  Returns a date string as per RFC 2822, e.g.:
  Fri, 09 Nov 2001 01:08:47 -0000
  Optional timeval, if given, is a floating-point time value as accepted by time.gmtime() and time.localtime(); otherwise the current time is used.
  Optional localtime is a flag that, when True, interprets timeval and returns a date relative to the local timezone instead of UTC, properly taking daylight savings time into account. The default is False, meaning UTC is used.
  Optional usegmt is a flag that, when True, outputs a date string with the timezone as an ascii string GMT, rather than a numeric -0000. This is needed for some protocols (such as HTTP). This only applies when localtime is False. The default is False.

- email.utils.format_datetime(dt, usegmt=False)
  Like formatdate, but the input is a datetime instance. If it is a naive datetime, it is assumed to be "UTC with no information about the source timezone", and the conventional -0000 is used for the timezone. If it is an aware datetime, then the numeric timezone offset is used. If it is an aware timezone with offset zero, then usegmt may be set to True, in which case the string GMT is used instead of the numeric timezone offset. This provides a way to generate standards-conformant HTTP date headers.
  Added in version 3.3.

- email.utils.encode_rfc2231(s, charset=None, language=None)
  Encode the string s according to RFC 2231. Optional charset and language, if given, are the character set name and language name to use. If neither is given, s is returned as-is. If charset is given but language is not, the string is encoded using the empty string for language.

- email.utils.collapse_rfc2231_value(value, errors='replace', fallback_charset='us-ascii')
  When a header parameter is encoded in RFC 2231 format, Message.get_param may return a 3-tuple containing the character set, language, and value. collapse_rfc2231_value() turns this into a unicode string. Optional errors is passed to the errors argument of str's encode() method; it defaults to 'replace'. Optional fallback_charset specifies the character set to use if the one in the RFC 2231 header is not known by Python; it defaults to 'us-ascii'.
  For convenience, if the value passed to collapse_rfc2231_value() is not a tuple, it should be a string and it is returned unquoted.

- email.utils.decode_params(params)
  Decode parameters list according to RFC 2231. params is a sequence of 2-tuples containing elements of the form (content-type, string-value).

Footnotes
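A few of the utilities above in action (the addresses are made-up examples):

```python
from email.utils import formataddr, getaddresses, parseaddr, parsedate_tz

# parseaddr() splits a header value into (realname, email_address) ...
name, addr = parseaddr("Jane Doe <jane@example.com>")
assert (name, addr) == ("Jane Doe", "jane@example.com")

# ... and formataddr() is its inverse.
assert formataddr((name, addr)) == "Jane Doe <jane@example.com>"

# getaddresses() flattens a sequence of header field values into such pairs.
pairs = getaddresses(["a@example.com, Bob <b@example.com>"])
assert pairs == [("", "a@example.com"), ("Bob", "b@example.com")]

# parsedate_tz() returns a 10-tuple whose last item is the UTC offset
# in seconds (-0500 becomes -18000).
parsed = parsedate_tz("Mon, 20 Nov 1995 19:12:08 -0500")
assert parsed[:3] == (1995, 11, 20) and parsed[9] == -18000
```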
Support for Perf Maps

On supported platforms (as of this writing, only Linux), the runtime can take advantage of perf map files to make Python functions visible to an external profiling tool (such as perf). A running process may create a file in the /tmp directory, which contains entries that can map a section of executable code to a name. This interface is described in the documentation of the Linux Perf tool.

In Python, these helper APIs can be used by libraries and features that rely on generating machine code on the fly. Note that holding an attached thread state is not required for these APIs.

- int PyUnstable_PerfMapState_Init(void)
  This is Unstable API. It may change without warning in minor releases.
  Open the /tmp/perf-$pid.map file, unless it's already opened, and create a lock to ensure thread-safe writes to the file (provided the writes are done through PyUnstable_WritePerfMapEntry()). Normally, there's no need to call this explicitly; just use PyUnstable_WritePerfMapEntry() and it will initialize the state on first call.
  Returns 0 on success, -1 on failure to create/open the perf map file, or -2 on failure to create a lock. Check errno for more information about the cause of a failure.

- int PyUnstable_WritePerfMapEntry(const void *code_addr, unsigned int code_size, const char *entry_name)
  This is Unstable API. It may change without warning in minor releases.
  Write one single entry to the /tmp/perf-$pid.map file. This function is thread safe. Here is what an example entry looks like:

  # address       size  name
  7f3529fcf759    b     py::bar:/run/t.py

  Will call PyUnstable_PerfMapState_Init() before writing the entry, if the perf map file is not already opened. Returns 0 on success, or the same error codes as PyUnstable_PerfMapState_Init() on failure.

- void PyUnstable_PerfMapState_Fini(void)
  This is Unstable API. It may change without warning in minor releases.
  Close the perf map file opened by PyUnstable_PerfMapState_Init(). This is called by the runtime itself during interpreter shut-down. In general, there shouldn't be a reason to explicitly call this, except to handle specific scenarios such as forking.
PyTime C API

Added in version 3.13.

The clock C API provides access to system clocks. It is similar to the Python time module. For C API related to the datetime module, see DateTime Objects.

Types

- type PyTime_t
  A timestamp or duration in nanoseconds, represented as a signed 64-bit integer.
  The reference point for timestamps depends on the clock used. For example, PyTime_Time() returns timestamps relative to the UNIX epoch.
  The supported range is around [-292.3 years; +292.3 years]. Using the Unix epoch (January 1st, 1970) as reference, the supported date range is around [1677-09-21; 2262-04-11]. The exact limits are exposed as constants.

Clock Functions

The following functions take a pointer to a PyTime_t that they set to the value of a particular clock. Details of each clock are given in the documentation of the corresponding Python function.

The functions return 0 on success, or -1 (with an exception set) on failure. On integer overflow, they set the PyExc_OverflowError exception and set *result to the value clamped to the [PyTime_MIN; PyTime_MAX] range. (On current systems, integer overflows are likely caused by misconfigured system time.)

As with any other C API (unless otherwise specified), the functions must be called with an attached thread state.

- int PyTime_Monotonic(PyTime_t *result)
  Read the monotonic clock. See time.monotonic() for important details on this clock.

- int PyTime_PerfCounter(PyTime_t *result)
  Read the performance counter. See time.perf_counter() for important details on this clock.

- int PyTime_Time(PyTime_t *result)
  Read the "wall clock" time. See time.time() for important details on this clock.

Raw Clock Functions

Similar to clock functions, but don't set an exception on error and don't require the caller to have an attached thread state.

On success, the functions return 0. On failure, they set *result to 0 and return -1, without setting an exception. To get the cause of the error, attach a thread state, and call the regular (non-Raw) function. Note that the regular function may succeed after the Raw one failed.

- int PyTime_MonotonicRaw(PyTime_t *result)
  Similar to PyTime_Monotonic(), but don't set an exception on error and don't require an attached thread state.

- int PyTime_PerfCounterRaw(PyTime_t *result)
  Similar to PyTime_PerfCounter(), but don't set an exception on error and don't require an attached thread state.

- int PyTime_TimeRaw(PyTime_t *result)
  Similar to PyTime_Time(), but don't set an exception on error and don't require an attached thread state.
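At the Python level, the clocks these functions read are exposed by the time module; a quick sketch of the corresponding calls (time.time_ns() also returns integer nanoseconds, much like a PyTime_t value):

```python
import time

# Monotonic clock: never goes backwards, suitable for measuring intervals.
a = time.monotonic()
b = time.monotonic()
assert b >= a

# Performance counter: highest-resolution interval clock.
start = time.perf_counter()
elapsed = time.perf_counter() - start
assert elapsed >= 0.0

# "Wall clock" time as integer nanoseconds since the Unix epoch.
ns = time.time_ns()
assert isinstance(ns, int)
```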
{"url": "https://docs.python.org/3/extending/windows.html", "title": "Building C and C++ Extensions on Windows", "content": "5. Building C and C++ Extensions on Windows\u00b6\nThis chapter briefly explains how to create a Windows extension module for Python using Microsoft Visual C++, and follows with more detailed background information on how it works. The explanatory material is useful for both the Windows programmer learning to build Python extensions and the Unix programmer interested in producing software which can be successfully built on both Unix and Windows.\nModule authors are encouraged to use the setuptools approach for building extension modules, instead of the one described in this section. You will still need the C compiler that was used to build Python; typically Microsoft Visual C++.\nNote\nThis chapter mentions a number of filenames that include an encoded Python\nversion number. These filenames are represented with the version number shown\nas XY\n; in practice, 'X'\nwill be the major version number and 'Y'\nwill be the minor version number of the Python release you\u2019re working with. For\nexample, if you are using Python 2.2.1, XY\nwill actually be 22\n.\n5.1. A Cookbook Approach\u00b6\nThere are two approaches to building extension modules on Windows, just as there\nare on Unix: use the setuptools\npackage to control the build process, or\ndo things manually. The setuptools approach works well for most extensions;\ndocumentation on using setuptools\nto build and package extension modules\nis available in Building C and C++ Extensions with setuptools. If you find you really need to do\nthings manually, it may be instructive to study the project file for the\nwinsound standard library module.\n5.2. Differences Between Unix and Windows\u00b6\nUnix and Windows use completely different paradigms for run-time loading of code. 
Before you try to build a module that can be dynamically loaded, be aware of how your system works.\nIn Unix, a shared object (.so\n) file contains code to be used by the\nprogram, and also the names of functions and data that it expects to find in the\nprogram. When the file is joined to the program, all references to those\nfunctions and data in the file\u2019s code are changed to point to the actual\nlocations in the program where the functions and data are placed in memory.\nThis is basically a link operation.\nIn Windows, a dynamic-link library (.dll\n) file has no dangling\nreferences. Instead, an access to functions or data goes through a lookup\ntable. So the DLL code does not have to be fixed up at runtime to refer to the\nprogram\u2019s memory; instead, the code already uses the DLL\u2019s lookup table, and the\nlookup table is modified at runtime to point to the functions and data.\nIn Unix, there is only one type of library file (.a\n) which contains code\nfrom several object files (.o\n). During the link step to create a shared\nobject file (.so\n), the linker may find that it doesn\u2019t know where an\nidentifier is defined. The linker will look for it in the object files in the\nlibraries; if it finds it, it will include all the code from that object file.\nIn Windows, there are two types of library, a static library and an import\nlibrary (both called .lib\n). A static library is like a Unix .a\nfile; it contains code to be included as necessary. An import library is\nbasically used only to reassure the linker that a certain identifier is legal,\nand will be present in the program when the DLL is loaded. So the linker uses\nthe information from the import library to build the lookup table for using\nidentifiers that are not included in the DLL. 
When an application or a DLL is\nlinked, an import library may be generated, which will need to be used for all\nfuture DLLs that depend on the symbols in the application or DLL.\nSuppose you are building two dynamic-load modules, B and C, which should share\nanother block of code A. On Unix, you would not pass A.a\nto the\nlinker for B.so\nand C.so\n; that would cause it to be included\ntwice, so that B and C would each have their own copy. In Windows, building\nA.dll\nwill also build A.lib\n. You do pass A.lib\nto the\nlinker for B and C. A.lib\ndoes not contain code; it just contains\ninformation which will be used at runtime to access A\u2019s code.\nIn Windows, using an import library is sort of like using import spam\n; it\ngives you access to spam\u2019s names, but does not create a separate copy. On Unix,\nlinking with a library is more like from spam import *\n; it does create a\nseparate copy.\n-\nPy_NO_LINK_LIB\u00b6\nTurn off the implicit,\n#pragma\n-based linkage with the Python library, performed inside CPython header files.Added in version 3.14.\n5.3. Using DLLs in Practice\u00b6\nWindows Python is built in Microsoft Visual C++; using other compilers may or may not work. The rest of this section is MSVC++ specific.\nWhen creating DLLs in Windows, you can use the CPython library in two ways:\nBy default, inclusion of\nPC/pyconfig.h\ndirectly or viaPython.h\ntriggers an implicit, configure-aware link with the library. 
The header file chooses pythonXY_d.lib\nfor Debug, pythonXY.lib\nfor Release, and pythonX.lib\nfor Release with the Limited API enabled.\nTo build two DLLs, spam and ni (which uses C functions found in spam), you could use these commands:\ncl /LD /I/python/include spam.c\ncl /LD /I/python/include ni.c spam.lib\nThe first command created three files: spam.obj\n, spam.dll\nand spam.lib\n. spam.dll\ndoes not contain any Python functions (such as PyArg_ParseTuple()\n), but it does know how to find the Python code thanks to the implicitly linked pythonXY.lib\n. The second command created ni.dll\n(and .obj\nand .lib\n), which knows how to find the necessary functions from spam, and also from the Python executable.\nManually by defining the Py_NO_LINK_LIB\nmacro before including Python.h\n. You must pass pythonXY.lib\nto the linker.\nTo build two DLLs, spam and ni (which uses C functions found in spam), you could use these commands:\ncl /LD /DPy_NO_LINK_LIB /I/python/include spam.c ../libs/pythonXY.lib\ncl /LD /DPy_NO_LINK_LIB /I/python/include ni.c spam.lib ../libs/pythonXY.lib\nThe first command created three files: spam.obj\n, spam.dll\nand spam.lib\n. spam.dll\ndoes not contain any Python functions (such as PyArg_ParseTuple()\n), but it does know how to find the Python code thanks to pythonXY.lib\n. The second command created ni.dll\n(and .obj\nand .lib\n), which knows how to find the necessary functions from spam, and also from the Python executable.\nNot every identifier is exported to the lookup table. If you want any other\nmodules (including Python) to be able to see your identifiers, you have to say\n_declspec(dllexport)\n, as in void _declspec(dllexport) initspam(void)\nor\nPyObject _declspec(dllexport) *NiGetSpamData(void)\n.\nDeveloper Studio will throw in a lot of import libraries that you do not really\nneed, adding about 100K to your executable. To get rid of them, use the Project\nSettings dialog, Link tab, to specify ignore default libraries. 
Add the\ncorrect msvcrtxx.lib\nto the list of libraries.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1705}
{"url": "https://docs.python.org/3/c-api/capsule.html", "title": "Capsules", "content": "Capsules\u00b6\nRefer to Providing a C API for an Extension Module for more information on using these objects.\nAdded in version 3.1.\n-\ntype PyCapsule\u00b6\nThis subtype of\nPyObject\nrepresents an opaque value, useful for C extension modules which need to pass an opaque value (as a void* pointer) through Python code to other C code. It is often used to make a C function pointer defined in one module available to other modules, so the regular import mechanism can be used to access C APIs defined in dynamically loaded modules.\n-\nPyTypeObject PyCapsule_Type\u00b6\n- Part of the Stable ABI.\nThe type object corresponding to capsule objects. This is the same object as\ntypes.CapsuleType\nin the Python layer.\n-\ntype PyCapsule_Destructor\u00b6\n- Part of the Stable ABI.\nThe type of a destructor callback for a capsule. Defined as:\ntypedef void (*PyCapsule_Destructor)(PyObject *);\nSee\nPyCapsule_New()\nfor the semantics of PyCapsule_Destructor callbacks.\n-\nint PyCapsule_CheckExact(PyObject *p)\u00b6\nReturn true if its argument is a\nPyCapsule\n. This function always succeeds.\n-\nPyObject *PyCapsule_New(void *pointer, const char *name, PyCapsule_Destructor destructor)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nCreate a\nPyCapsule\nencapsulating the pointer. The pointer argument may not beNULL\n.On failure, set an exception and return\nNULL\n.The name string may either be\nNULL\nor a pointer to a valid C string. If non-NULL\n, this string must outlive the capsule. (Though it is permitted to free it inside the destructor.)If the destructor argument is not\nNULL\n, it will be called with the capsule as its argument when it is destroyed.If this capsule will be stored as an attribute of a module, the name should be specified as\nmodulename.attributename\n. 
This will enable other modules to import the capsule usingPyCapsule_Import()\n.\n-\nvoid *PyCapsule_GetPointer(PyObject *capsule, const char *name)\u00b6\n- Part of the Stable ABI.\nRetrieve the pointer stored in the capsule. On failure, set an exception and return\nNULL\n.The name parameter must compare exactly to the name stored in the capsule. If the name stored in the capsule is\nNULL\n, the name passed in must also beNULL\n. Python uses the C functionstrcmp()\nto compare capsule names.\n-\nPyCapsule_Destructor PyCapsule_GetDestructor(PyObject *capsule)\u00b6\n- Part of the Stable ABI.\nReturn the current destructor stored in the capsule. On failure, set an exception and return\nNULL\n.It is legal for a capsule to have a\nNULL\ndestructor. This makes aNULL\nreturn code somewhat ambiguous; usePyCapsule_IsValid()\norPyErr_Occurred()\nto disambiguate.\n-\nvoid *PyCapsule_GetContext(PyObject *capsule)\u00b6\n- Part of the Stable ABI.\nReturn the current context stored in the capsule. On failure, set an exception and return\nNULL\n.It is legal for a capsule to have a\nNULL\ncontext. This makes aNULL\nreturn code somewhat ambiguous; usePyCapsule_IsValid()\norPyErr_Occurred()\nto disambiguate.\n-\nconst char *PyCapsule_GetName(PyObject *capsule)\u00b6\n- Part of the Stable ABI.\nReturn the current name stored in the capsule. On failure, set an exception and return\nNULL\n.It is legal for a capsule to have a\nNULL\nname. This makes aNULL\nreturn code somewhat ambiguous; usePyCapsule_IsValid()\norPyErr_Occurred()\nto disambiguate.\n-\nvoid *PyCapsule_Import(const char *name, int no_block)\u00b6\n- Part of the Stable ABI.\nImport a pointer to a C object from a capsule attribute in a module. The name parameter should specify the full name to the attribute, as in\nmodule.attribute\n. The name stored in the capsule must match this string exactly.This function splits name on the\n.\ncharacter, and imports the first element. 
It then processes further elements using attribute lookups.Return the capsule\u2019s internal pointer on success. On failure, set an exception and return\nNULL\n.Note\nIf name points to an attribute of some submodule or subpackage, this submodule or subpackage must be previously imported using other means (for example, by using\nPyImport_ImportModule()\n) for the attribute lookups to succeed.Changed in version 3.3: no_block has no effect anymore.\n-\nint PyCapsule_IsValid(PyObject *capsule, const char *name)\u00b6\n- Part of the Stable ABI.\nDetermines whether or not capsule is a valid capsule. A valid capsule is non-\nNULL\n, passesPyCapsule_CheckExact()\n, has a non-NULL\npointer stored in it, and its internal name matches the name parameter. (SeePyCapsule_GetPointer()\nfor information on how capsule names are compared.)In other words, if\nPyCapsule_IsValid()\nreturns a true value, calls to any of the accessors (any function starting withPyCapsule_Get\n) are guaranteed to succeed.Return a nonzero value if the object is valid and matches the name passed in. Return\n0\notherwise. This function will not fail.\n-\nint PyCapsule_SetContext(PyObject *capsule, void *context)\u00b6\n- Part of the Stable ABI.\nSet the context pointer inside capsule to context.\nReturn\n0\non success. Return nonzero and set an exception on failure.\n-\nint PyCapsule_SetDestructor(PyObject *capsule, PyCapsule_Destructor destructor)\u00b6\n- Part of the Stable ABI.\nSet the destructor inside capsule to destructor.\nReturn\n0\non success. Return nonzero and set an exception on failure.\n-\nint PyCapsule_SetName(PyObject *capsule, const char *name)\u00b6\n- Part of the Stable ABI.\nSet the name inside capsule to name. If non-\nNULL\n, the name must outlive the capsule. If the previous name stored in the capsule was notNULL\n, no attempt is made to free it.Return\n0\non success. 
Return nonzero and set an exception on failure.\n-\nint PyCapsule_SetPointer(PyObject *capsule, void *pointer)\u00b6\n- Part of the Stable ABI.\nSet the void pointer inside capsule to pointer. The pointer may not be\nNULL\n.Return\n0\non success. Return nonzero and set an exception on failure.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1426}
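On CPython, the same capsule functions can be exercised from Python through ctypes.pythonapi, which makes the name-matching rule visible; the capsule name example.attr and the pointer value below are purely illustrative:

```python
import ctypes

# Keep a reference to the name: PyCapsule_New stores the char* as-is,
# so the bytes object must outlive the capsule.
NAME = b"example.attr"

new = ctypes.pythonapi.PyCapsule_New
new.restype = ctypes.py_object
new.argtypes = [ctypes.c_void_p, ctypes.c_char_p, ctypes.c_void_p]

get = ctypes.pythonapi.PyCapsule_GetPointer
get.restype = ctypes.c_void_p
get.argtypes = [ctypes.py_object, ctypes.c_char_p]

cap = new(ctypes.c_void_p(0x1234), NAME, None)  # pointer may not be NULL
assert type(cap).__name__ == "PyCapsule"

# The name passed to PyCapsule_GetPointer must strcmp-equal the stored name.
assert get(cap, NAME) == 0x1234
```

Passing any other name here would set an exception and return NULL, which ctypes.pythonapi surfaces as a raised Python exception.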
{"url": "https://docs.python.org/3/c-api/codec.html", "title": "Codec registry and support functions", "content": "Codec registry and support functions\u00b6\n-\nint PyCodec_Register(PyObject *search_function)\u00b6\n- Part of the Stable ABI.\nRegister a new codec search function.\nAs a side effect, this tries to load the\nencodings\npackage, if not yet done, to make sure that it is always first in the list of search functions.\n-\nint PyCodec_Unregister(PyObject *search_function)\u00b6\n- Part of the Stable ABI since version 3.10.\nUnregister a codec search function and clear the registry\u2019s cache. If the search function is not registered, do nothing. Return 0 on success. Raise an exception and return -1 on error.\nAdded in version 3.10.\n-\nint PyCodec_KnownEncoding(const char *encoding)\u00b6\n- Part of the Stable ABI.\nReturn\n1\nor0\ndepending on whether there is a registered codec for the given encoding. This function always succeeds.\n-\nPyObject *PyCodec_Encode(PyObject *object, const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGeneric codec based encoding API.\nobject is passed through the encoder function found for the given encoding using the error handling method defined by errors. errors may be\nNULL\nto use the default method defined for the codec. Raises aLookupError\nif no encoder can be found.\n-\nPyObject *PyCodec_Decode(PyObject *object, const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGeneric codec based decoding API.\nobject is passed through the decoder function found for the given encoding using the error handling method defined by errors. errors may be\nNULL\nto use the default method defined for the codec. 
Raises a LookupError\nif no decoder can be found.\nCodec lookup API\u00b6\nIn the following functions, the encoding string is converted to all\nlower-case characters before being looked up, which makes encodings looked up through this\nmechanism effectively case-insensitive. If no codec is found, a KeyError\nis set\nand NULL\nreturned.\n-\nPyObject *PyCodec_Encoder(const char *encoding)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet an encoder function for the given encoding.\n-\nPyObject *PyCodec_Decoder(const char *encoding)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet a decoder function for the given encoding.\n-\nPyObject *PyCodec_IncrementalEncoder(const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet an\nIncrementalEncoder\nobject for the given encoding.\n-\nPyObject *PyCodec_IncrementalDecoder(const char *encoding, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet an\nIncrementalDecoder\nobject for the given encoding.\n-\nPyObject *PyCodec_StreamReader(const char *encoding, PyObject *stream, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet a\nStreamReader\nfactory function for the given encoding.\n-\nPyObject *PyCodec_StreamWriter(const char *encoding, PyObject *stream, const char *errors)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nGet a\nStreamWriter\nfactory function for the given encoding.\nRegistry API for Unicode encoding error handlers\u00b6\n-\nint PyCodec_RegisterError(const char *name, PyObject *error)\u00b6\n- Part of the Stable ABI.\nRegister the error handling callback function error under the given name. 
This callback function will be called by a codec when it encounters unencodable characters/undecodable bytes and name is specified as the error parameter in the call to the encode/decode function.\nThe callback gets a single argument, an instance of\nUnicodeEncodeError\n,UnicodeDecodeError\norUnicodeTranslateError\nthat holds information about the problematic sequence of characters or bytes and their offset in the original string (see Unicode Exception Objects for functions to extract this information). The callback must either raise the given exception, or return a two-item tuple containing the replacement for the problematic sequence, and an integer giving the offset in the original string at which encoding/decoding should be resumed.Return\n0\non success,-1\non error.\n-\nPyObject *PyCodec_LookupError(const char *name)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nLookup the error handling callback function registered under name. As a special case\nNULL\ncan be passed, in which case the error handling callback for \u201cstrict\u201d will be returned.\n-\nPyObject *PyCodec_StrictErrors(PyObject *exc)\u00b6\n- Return value: Always NULL. Part of the Stable ABI.\nRaise exc as an exception.\n-\nPyObject *PyCodec_IgnoreErrors(PyObject *exc)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nIgnore the unicode error, skipping the faulty input.\n-\nPyObject *PyCodec_ReplaceErrors(PyObject *exc)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReplace the unicode encode error with\n?\norU+FFFD\n.\n-\nPyObject *PyCodec_XMLCharRefReplaceErrors(PyObject *exc)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReplace the unicode encode error with XML character references.\n-\nPyObject *PyCodec_BackslashReplaceErrors(PyObject *exc)\u00b6\n- Return value: New reference. 
Part of the Stable ABI.\nReplace the unicode encode error with backslash escapes (\n\\x\n,\\u\nand\\U\n).\n-\nPyObject *PyCodec_NameReplaceErrors(PyObject *exc)\u00b6\n- Return value: New reference. Part of the Stable ABI since version 3.7.\nReplace the unicode encode error with\n\\N{...}\nescapes.Added in version 3.5.\nCodec utility variables\u00b6\n-\nconst char *Py_hexdigits\u00b6\nA string constant containing the lowercase hexadecimal digits:\n\"0123456789abcdef\"\n.Added in version 3.3.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1364}
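The error-handler registry described above has a direct Python-level mirror in codecs.register_error(); a short sketch of a custom handler following the (replacement, resume offset) contract, where the handler name qmark is made up for the example:

```python
import codecs

def qmark_handler(exc):
    # Per the contract: either raise the given exception, or return the
    # replacement plus the offset at which encoding should resume.
    if isinstance(exc, UnicodeEncodeError):
        return ("?" * (exc.end - exc.start), exc.end)
    raise exc

codecs.register_error("qmark", qmark_handler)

# Each unencodable character is replaced by a single '?'.
encoded = "naïve café".encode("ascii", errors="qmark")

# Codec lookup is effectively case-insensitive, as noted above.
assert codecs.lookup("ASCII").name == "ascii"
```

The same handler would then also be reachable from C via PyCodec_LookupError("qmark").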
{"url": "https://docs.python.org/3/c-api/bool.html", "title": "Boolean Objects", "content": "Boolean Objects\u00b6\nBooleans in Python are implemented as a subclass of integers. There are only\ntwo booleans, Py_False\nand Py_True\n. As such, the normal\ncreation and deletion functions don\u2019t apply to booleans. The following macros\nare available, however.\n-\nPyTypeObject PyBool_Type\u00b6\n- Part of the Stable ABI.\nThis instance of\nPyTypeObject\nrepresents the Python boolean type; it is the same object asbool\nin the Python layer.\n-\nint PyBool_Check(PyObject *o)\u00b6\nReturn true if o is of type\nPyBool_Type\n. This function always succeeds.\n-\nPyObject *PyBool_FromLong(long v)\u00b6\n- Return value: New reference. Part of the Stable ABI.\nReturn\nPy_True\norPy_False\n, depending on the truth value of v.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 171}
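Because bool is an int subclass with exactly two singleton instances, the behaviour behind these macros is easy to check from Python; on CPython, PyBool_FromLong itself is reachable via ctypes.pythonapi:

```python
import ctypes

assert issubclass(bool, int)   # booleans are integers
assert True + True == 2        # so integer arithmetic applies to them

from_long = ctypes.pythonapi.PyBool_FromLong
from_long.restype = ctypes.py_object
from_long.argtypes = [ctypes.c_long]

# Any nonzero value yields the Py_True singleton, zero yields Py_False.
assert from_long(7) is True
assert from_long(0) is False
```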
{"url": "https://docs.python.org/3/c-api/curses.html", "title": "Curses C API", "content": "Curses C API\u00b6\ncurses\nexposes a small C interface for extension modules.\nConsumers must include the header file py_curses.h\n(which is not\nincluded by default by Python.h\n) and import_curses()\nmust\nbe invoked, usually as part of the module initialisation function, to populate\nPyCurses_API\n.\nWarning\nNeither the C API nor the pure Python curses\nmodule are compatible\nwith subinterpreters.\n-\nimport_curses()\u00b6\nImport the curses C API. The macro does not need a semi-colon to be called.\nOn success, populate the\nPyCurses_API\npointer.On failure, set\nPyCurses_API\nto NULL and set an exception. The caller must check if an error occurred viaPyErr_Occurred()\n:import_curses(); // semi-colon is optional but recommended if (PyErr_Occurred()) { /* cleanup */ }\n-\nvoid **PyCurses_API\u00b6\nDynamically allocated object containing the curses C API. This variable is only available once\nimport_curses\nsucceeds.PyCurses_API[0]\ncorresponds toPyCursesWindow_Type\n.PyCurses_API[1]\n,PyCurses_API[2]\n, andPyCurses_API[3]\nare pointers to predicate functions of typeint (*)(void)\n.When called, these predicates return whether\ncurses.setupterm()\n,curses.initscr()\n, andcurses.start_color()\nhave been called respectively.See also the convenience macros\nPyCursesSetupTermCalled\n,PyCursesInitialised\n, andPyCursesInitialisedColor\n.Note\nThe number of entries in this structure is subject to changes. Consider using\nPyCurses_API_pointers\nto check if new fields are available or not.\n-\nPyCurses_API_pointers\u00b6\nThe number of accessible fields (\n4\n) inPyCurses_API\n. 
This number is incremented whenever new fields are added.\n-\nPyTypeObject PyCursesWindow_Type\u00b6\nThe heap type corresponding to\ncurses.window\n.\n-\nint PyCursesWindow_Check(PyObject *op)\u00b6\nReturn true if op is a\ncurses.window\ninstance, false otherwise.\nThe following macros are convenience macros expanding into C statements.\nIn particular, they can only be used as macro;\nor macro\n, but not\nmacro()\nor macro();\n.\n-\nPyCursesSetupTermCalled\u00b6\nMacro checking if\ncurses.setupterm()\nhas been called.The macro expansion is roughly equivalent to:\n{ typedef int (*predicate_t)(void); predicate_t was_setupterm_called = (predicate_t)PyCurses_API[1]; if (!was_setupterm_called()) { return NULL; } }\n-\nPyCursesInitialised\u00b6\nMacro checking if\ncurses.initscr()\nhas been called.The macro expansion is roughly equivalent to:\n{ typedef int (*predicate_t)(void); predicate_t was_initscr_called = (predicate_t)PyCurses_API[2]; if (!was_initscr_called()) { return NULL; } }\n-\nPyCursesInitialisedColor\u00b6\nMacro checking if\ncurses.start_color()\nhas been called.The macro expansion is roughly equivalent to:\n{ typedef int (*predicate_t)(void); predicate_t was_start_color_called = (predicate_t)PyCurses_API[3]; if (!was_start_color_called()) { return NULL; } }\nInternal data\u00b6\nThe following objects are exposed by the C API but should be considered internal-only.\n-\nPyCurses_CAPSULE_NAME\u00b6\nName of the curses capsule to pass to\nPyCapsule_Import()\n.Internal usage only. Use\nimport_curses\ninstead.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 748}
{"url": "https://docs.python.org/3/c-api/typehints.html", "title": "Objects for Type Hinting", "content": "Objects for Type Hinting\u00b6\nVarious built-in types for type hinting are provided. Currently,\ntwo types exist \u2013 GenericAlias and\nUnion. Only GenericAlias\nis exposed to C.\n-\nPyObject *Py_GenericAlias(PyObject *origin, PyObject *args)\u00b6\n- Part of the Stable ABI since version 3.9.\nCreate a GenericAlias object. Equivalent to calling the Python class\ntypes.GenericAlias\n. The origin and args arguments set theGenericAlias\n\u2018s__origin__\nand__args__\nattributes respectively. origin should be a PyTypeObject*, and args can be a PyTupleObject* or anyPyObject*\n. If args passed is not a tuple, a 1-tuple is automatically constructed and__args__\nis set to(args,)\n. Minimal checking is done for the arguments, so the function will succeed even if origin is not a type. TheGenericAlias\n\u2018s__parameters__\nattribute is constructed lazily from__args__\n. On failure, an exception is raised andNULL\nis returned.Here\u2019s an example of how to make an extension type generic:\n... static PyMethodDef my_obj_methods[] = { // Other methods. ... {\"__class_getitem__\", Py_GenericAlias, METH_O|METH_CLASS, \"See PEP 585\"} ... }\nSee also\nThe data model method\n__class_getitem__()\n.Added in version 3.9.\n-\nPyTypeObject Py_GenericAliasType\u00b6\n- Part of the Stable ABI since version 3.9.\nThe C type of the object returned by\nPy_GenericAlias()\n. Equivalent totypes.GenericAlias\nin Python.Added in version 3.9.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 342}
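The C method-table example above has a pure-Python equivalent: types.GenericAlias can back __class_getitem__() directly. The Bag class here is illustrative:

```python
import types

class Bag:
    # Same effect as registering Py_GenericAlias as __class_getitem__
    # in a C extension's method table.
    __class_getitem__ = classmethod(types.GenericAlias)

alias = Bag[int]
assert alias.__origin__ is Bag
assert alias.__args__ == (int,)

# A non-tuple argument is wrapped into a 1-tuple automatically.
assert Bag[str].__args__ == (str,)
```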
{"url": "https://docs.python.org/3/whatsnew/2.1.html", "title": "What\u2019s New in Python 2.1", "content": "What\u2019s New in Python 2.1\u00b6\n- Author:\nA.M. Kuchling\nIntroduction\u00b6\nThis article explains the new features in Python 2.1. While there aren\u2019t as many changes in 2.1 as there were in Python 2.0, there are still some pleasant surprises in store. 2.1 is the first release to be steered through the use of Python Enhancement Proposals, or PEPs, so most of the sizable changes have accompanying PEPs that provide more complete documentation and a design rationale for the change. This article doesn\u2019t attempt to document the new features completely, but simply provides an overview of the new features for Python programmers. Refer to the Python 2.1 documentation, or to the specific PEP, for more details about any new feature that particularly interests you.\nOne recent goal of the Python development team has been to accelerate the pace of new releases, with a new release coming every 6 to 9 months. 2.1 is the first release to come out at this faster pace, with the first alpha appearing in January, 3 months after the final version of 2.0 was released.\nThe final release of Python 2.1 was made on April 17, 2001.\nPEP 227: Nested Scopes\u00b6\nThe largest change in Python 2.1 is to Python\u2019s scoping rules. In Python 2.0, at any given time there are at most three namespaces used to look up variable names: local, module-level, and the built-in namespace. This often surprised people because it didn\u2019t match their intuitive expectations. For example, a nested recursive function definition doesn\u2019t work:\ndef f():\n...\ndef g(value):\n...\nreturn g(value-1) + 1\n...\nThe function g()\nwill always raise a NameError\nexception, because\nthe binding of the name g\nisn\u2019t in either its local namespace or in the\nmodule-level namespace. 
This isn\u2019t much of a problem in practice (how often do\nyou recursively define interior functions like this?), but this also made using\nthe lambda\nexpression clumsier, and this was a problem in practice.\nIn code which uses lambda\nyou can often find local variables being\ncopied by passing them as the default values of arguments.\ndef find(self, name):\n\"Return list of any entries equal to 'name'\"\nL = filter(lambda x, name=name: x == name,\nself.list_attribute)\nreturn L\nThe readability of Python code written in a strongly functional style suffers greatly as a result.\nThe most significant change to Python 2.1 is that static scoping has been added\nto the language to fix this problem. As a first effect, the name=name\ndefault argument is now unnecessary in the above example. Put simply, when a\ngiven variable name is not assigned a value within a function (by an assignment,\nor the def\n, class\n, or import\nstatements),\nreferences to the variable will be looked up in the local namespace of the\nenclosing scope. A more detailed explanation of the rules, and a dissection of\nthe implementation, can be found in the PEP.\nThis change may cause some compatibility problems for code where the same variable name is used both at the module level and as a local variable within a function that contains further function definitions. This seems rather unlikely though, since such code would have been pretty confusing to read in the first place.\nOne side effect of the change is that the from module import *\nand\nexec\nstatements have been made illegal inside a function scope under\ncertain conditions. The Python reference manual has said all along that from\nmodule import *\nis only legal at the top level of a module, but the CPython\ninterpreter has never enforced this before. As part of the implementation of\nnested scopes, the compiler which turns Python source into bytecodes has to\ngenerate different code to access variables in a containing scope. 
from\nmodule import *\nand exec\nmake it impossible for the compiler to\nfigure this out, because they add names to the local namespace that are\nunknowable at compile time. Therefore, if a function contains function\ndefinitions or lambda\nexpressions with free variables, the compiler\nwill flag this by raising a SyntaxError\nexception.\nTo make the preceding explanation a bit clearer, here\u2019s an example:\nx = 1\ndef f():\n# The next line is a syntax error\nexec 'x=2'\ndef g():\nreturn x\nLine 4 containing the exec\nstatement is a syntax error, since\nexec\nwould define a new local variable named x\nwhose value should\nbe accessed by g()\n.\nThis shouldn\u2019t be much of a limitation, since exec\nis rarely used in\nmost Python code (and when it is used, it\u2019s often a sign of a poor design\nanyway).\nCompatibility concerns have led to nested scopes being introduced gradually; in Python 2.1, they aren\u2019t enabled by default, but can be turned on within a module by using a future statement as described in PEP 236. (See the following section for further discussion of PEP 236.) In Python 2.2, nested scopes will become the default and there will be no way to turn them off, but users will have had all of 2.1\u2019s lifetime to fix any breakage resulting from their introduction.\nSee also\n- PEP 227 - Statically Nested Scopes\nWritten and implemented by Jeremy Hylton.\nPEP 236: __future__ Directives\u00b6\nThe reaction to nested scopes was widespread concern about the dangers of breaking code with the 2.1 release, and it was strong enough to make the Pythoneers take a more conservative approach. This approach consists of introducing a convention for enabling optional functionality in release N that will become compulsory in release N+1.\nThe syntax uses a from...import\nstatement using the reserved module name\n__future__\n. 
Nested scopes can be enabled by the following statement:\nfrom __future__ import nested_scopes\nWhile it looks like a normal import\nstatement, it\u2019s not; there are\nstrict rules on where such a future statement can be put. They can only be at\nthe top of a module, and must precede any Python code or regular\nimport\nstatements. This is because such statements can affect how\nthe Python bytecode compiler parses code and generates bytecode, so they must\nprecede any statement that will result in bytecodes being produced.\nSee also\n- PEP 236 - Back to the\n__future__\nWritten by Tim Peters, and primarily implemented by Jeremy Hylton.\nPEP 207: Rich Comparisons\u00b6\nIn earlier versions, Python\u2019s support for implementing comparisons on user-defined\nclasses and extension types was quite simple. Classes could implement a\n__cmp__()\nmethod that was given two instances of a class, and could only\nreturn 0 if they were equal or +1 or -1 if they weren\u2019t; the method couldn\u2019t\nraise an exception or return anything other than a Boolean value. Users of\nNumeric Python often found this model too weak and restrictive, because in the\nnumber-crunching programs that numeric Python is used for, it would be more\nuseful to be able to perform elementwise comparisons of two matrices, returning\na matrix containing the results of a given comparison for each element. If the\ntwo matrices are of different sizes, then the compare has to be able to raise an\nexception to signal the error.\nIn Python 2.1, rich comparisons were added in order to support this need.\nPython classes can now individually overload each of the <\n, <=\n, >\n,\n>=\n, ==\n, and !=\noperations. The new magic method names are:\nOperation | Method name\n< | __lt__()\n<= | __le__()\n> | __gt__()\n>= | __ge__()\n== | __eq__()\n!= | __ne__()\n(The magic methods are named after the corresponding Fortran operators .LT.\n, .LE.\n, &c. 
Numeric programmers are almost certainly quite familiar with these names and will find them easy to remember.)

Each of these magic methods is of the form method(self, other), where self will be the object on the left-hand side of the operator, while other will be the object on the right-hand side. For example, the expression A < B will cause A.__lt__(B) to be called.

Each of these magic methods can return anything at all: a Boolean, a matrix, a list, or any other Python object. Alternatively they can raise an exception if the comparison is impossible, inconsistent, or otherwise meaningless.

The built-in cmp(A,B) function can use the rich comparison machinery, and now accepts an optional argument specifying which comparison operation to use; this is given as one of the strings "<", "<=", ">", ">=", "==", or "!=". If called without the optional third argument, cmp() will only return -1, 0, or +1 as in previous versions of Python; otherwise it will call the appropriate method and can return any Python object.

There are also corresponding changes of interest to C programmers; there's a new slot tp_richcmp in type objects and an API for performing a given rich comparison. I won't cover the C API here, but will refer you to PEP 207, or to 2.1's C API documentation, for the full list of related functions.

See also

- PEP 207 - Rich Comparisons
  Written by Guido van Rossum, heavily based on earlier work by David Ascher, and implemented by Guido van Rossum.

PEP 230: Warning Framework

Over its 10 years of existence, Python has accumulated a certain number of obsolete modules and features along the way. It's difficult to know when a feature is safe to remove, since there's no way of knowing how much code uses it; perhaps no programs depend on the feature, or perhaps many do. To enable removing old features in a more structured way, a warning framework was added.
When the Python developers want to get rid of a feature, it will first trigger a warning in the next version of Python. The following Python version can then drop the feature, and users will have had a full release cycle to remove uses of the old feature.

Python 2.1 adds the warning framework to be used in this scheme. It adds a warnings module that provides functions to issue warnings, and to filter out warnings that you don't want to be displayed. Third-party modules can also use this framework to deprecate old features that they no longer wish to support.

For example, in Python 2.1 the regex module is deprecated, so importing it causes a warning to be printed:

>>> import regex
__main__:1: DeprecationWarning: the regex module is deprecated; please use the re module
>>>

Warnings can be issued by calling the warnings.warn() function:

warnings.warn("feature X no longer supported")

The first parameter is the warning message; an additional optional parameter can be used to specify a particular warning category.

Filters can be added to disable certain warnings; a regular expression pattern can be applied to the message or to the module name in order to suppress a warning. For example, you may have a program that uses the regex module and not want to spare the time to convert it to use the re module right now. The warning can be suppressed by calling

import warnings
warnings.filterwarnings(action='ignore',
                        message='.*regex module is deprecated',
                        category=DeprecationWarning,
                        module='__main__')

This adds a filter that will apply only to warnings of the class DeprecationWarning triggered in the __main__ module, applies a regular expression to match only the message about the regex module being deprecated, and will cause such warnings to be ignored.
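The issue-and-filter cycle described above can be sketched with the modern warnings API; this example is not from the original article, and the function name and message text are invented for the illustration:

```python
import warnings

def old_feature():
    # Hypothetical deprecated function that emits a warning when called.
    warnings.warn("feature X no longer supported", DeprecationWarning)
    return 42

# Record every warning so we can inspect what was emitted.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    old_feature()
assert len(caught) == 1
assert issubclass(caught[0].category, DeprecationWarning)

# Suppress only warnings whose message matches a regex, as filterwarnings does.
with warnings.catch_warnings(record=True) as caught:
    warnings.filterwarnings("ignore", message=".*no longer supported")
    old_feature()
assert caught == []  # the warning was filtered out
```

catch_warnings restores the previous filter state on exit, so the "ignore" filter added inside the with-block does not leak into the rest of the program.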
Warnings can also be printed only once, printed every time the offending code is executed, or turned into exceptions that will cause the program to stop (unless the exceptions are caught in the usual way, of course).

Functions were also added to Python's C API for issuing warnings; refer to PEP 230 or to Python's API documentation for the details.

See also

- PEP 5 - Guidelines for Language Evolution
  Written by Paul Prescod, to specify procedures to be followed when removing old features from Python. The policy described in this PEP hasn't been officially adopted, but the eventual policy probably won't be too different from Prescod's proposal.
- PEP 230 - Warning Framework
  Written and implemented by Guido van Rossum.

PEP 229: New Build System

When compiling Python, the user had to go in and edit the Modules/Setup file in order to enable various additional modules; the default set is relatively small and limited to modules that compile on most Unix platforms. This means that on Unix platforms with many more features, most notably Linux, Python installations often don't contain all the useful modules they could.

Python 2.0 added the Distutils, a set of modules for distributing and installing extensions. In Python 2.1, the Distutils are used to compile much of the standard library of extension modules, autodetecting which ones are supported on the current machine. It's hoped that this will make Python installations easier and more featureful.

Instead of having to edit the Modules/Setup file in order to enable modules, a setup.py script in the top directory of the Python source distribution is run at build time, and attempts to discover which modules can be enabled by examining the modules and header files on the system. If a module is configured in Modules/Setup, the setup.py script won't attempt to compile that module and will defer to the Modules/Setup file's contents.
This provides a way to specify any strange command-line flags or libraries that are required for a specific platform.

In another far-reaching change to the build mechanism, Neil Schemenauer restructured things so Python now uses a single non-recursive makefile, instead of makefiles in the top directory and in each of the Python/, Parser/, Objects/, and Modules/ subdirectories. This makes building Python faster and also makes hacking the Makefiles clearer and simpler.

See also

- PEP 229 - Using Distutils to Build Python
  Written and implemented by A.M. Kuchling.

PEP 205: Weak References

Weak references, available through the weakref module, are a minor but useful new data type in the Python programmer's toolbox.

Storing a reference to an object (say, in a dictionary or a list) has the side effect of keeping that object alive forever. There are a few specific cases where this behaviour is undesirable, object caches being the most common one, and another being circular references in data structures such as trees.

For example, consider a memoizing function that caches the results of another function f(x) by storing the function's argument and its result in a dictionary:

_cache = {}
def memoize(x):
    if _cache.has_key(x):
        return _cache[x]

    retval = f(x)

    # Cache the returned object
    _cache[x] = retval

    return retval

This version works for simple things such as integers, but it has a side effect; the _cache dictionary holds a reference to the return values, so they'll never be deallocated until the Python process exits and cleans up. This isn't very noticeable for integers, but if f() returns an object, or a data structure that takes up a lot of memory, this can be a problem.

Weak references provide a way to implement a cache that won't keep objects alive beyond their time.
If an object is only accessible through weak references, the object will be deallocated and the weak references will now indicate that the object it referred to no longer exists. A weak reference to an object obj is created by calling wr = weakref.ref(obj). The object being referred to is returned by calling the weak reference as if it were a function: wr(). It will return the referenced object, or None if the object no longer exists.

This makes it possible to write a memoize() function whose cache doesn't keep objects alive, by storing weak references in the cache.

_cache = {}
def memoize(x):
    if _cache.has_key(x):
        obj = _cache[x]()
        # If weak reference object still exists,
        # return it
        if obj is not None: return obj

    retval = f(x)

    # Cache a weak reference
    _cache[x] = weakref.ref(retval)

    return retval

The weakref module also allows creating proxy objects, which behave like weak references (an object referenced only by proxy objects is deallocated), but instead of requiring an explicit call to retrieve the object, the proxy transparently forwards all operations to the object as long as the object still exists. If the object is deallocated, attempting to use a proxy will cause a weakref.ReferenceError exception to be raised.

proxy = weakref.proxy(obj)
proxy.attr   # Equivalent to obj.attr
proxy.meth() # Equivalent to obj.meth()
del obj
proxy.attr   # raises weakref.ReferenceError

See also

- PEP 205 - Weak References
  Written and implemented by Fred L. Drake, Jr.

PEP 232: Function Attributes

In Python 2.1, functions can now have arbitrary information attached to them. People were often using docstrings to hold information about functions and methods, because the __doc__ attribute was the only way of attaching any information to a function.
For example, in the Zope web application server, functions are marked as safe for public access by having a docstring, and in John Aycock's SPARK parsing framework, docstrings hold parts of the BNF grammar to be parsed. This overloading is unfortunate, since docstrings are really intended to hold a function's documentation; for example, it means you can't properly document functions intended for private use in Zope.

Arbitrary attributes can now be set and retrieved on functions using the regular Python syntax:

def f(): pass

f.publish = 1
f.secure = 1
f.grammar = "A ::= B (C D)*"

The dictionary containing attributes can be accessed as the function's __dict__. Unlike the __dict__ attribute of class instances, in functions you can actually assign a new dictionary to __dict__, though the new value is restricted to a regular Python dictionary; you can't be tricky and set it to a UserDict instance, or any other random object that behaves like a mapping.

See also

- PEP 232 - Function Attributes
  Written and implemented by Barry Warsaw.

PEP 235: Importing Modules on Case-Insensitive Platforms

Some operating systems have case-insensitive filesystems, MacOS and Windows being the primary examples; on these systems, it's impossible to distinguish the filenames FILE.PY and file.py, even though they do store the file's name in its original case (they're case-preserving, too).

In Python 2.1, the import statement will work to simulate case-sensitivity on case-insensitive platforms.
Python will now search for the first case-sensitive match by default, raising an ImportError if no such file is found, so import file will not import a module named FILE.PY. Case-insensitive matching can be requested by setting the PYTHONCASEOK environment variable before starting the Python interpreter.

PEP 217: Interactive Display Hook

When using the Python interpreter interactively, the output of commands is displayed using the built-in repr() function. In Python 2.1, the variable sys.displayhook can be set to a callable object which will be called instead of repr(). For example, you can set it to a special pretty-printing function:

>>> # Create a recursive data structure
... L = [1,2,3]
>>> L.append(L)
>>> L # Show Python's default output
[1, 2, 3, [...]]
>>> # Use pprint.pprint() as the display function
... import sys, pprint
>>> sys.displayhook = pprint.pprint
>>> L
[1, 2, 3, <Recursion on list with id=...>]
>>>

See also

- PEP 217 - Display Hook for Interactive Use
  Written and implemented by Moshe Zadka.

PEP 208: New Coercion Model

How numeric coercion is done at the C level was significantly modified.
This will only affect the authors of C extensions to Python, allowing them more flexibility in writing extension types that support numeric operations.

Extension types can now set the type flag Py_TPFLAGS_CHECKTYPES in their PyTypeObject structure to indicate that they support the new coercion model. In such extension types, the numeric slot functions can no longer assume that they'll be passed two arguments of the same type; instead they may be passed two arguments of differing types, and can then perform their own internal coercion. If the slot function is passed a type it can't handle, it can indicate the failure by returning a reference to the Py_NotImplemented singleton value. The numeric functions of the other type will then be tried, and perhaps they can handle the operation; if the other type also returns Py_NotImplemented, then a TypeError will be raised. Numeric methods written in Python can also return Py_NotImplemented, causing the interpreter to act as if the method did not exist (perhaps raising a TypeError, perhaps trying another object's numeric methods).

See also

- PEP 208 - Reworking the Coercion Model
  Written and implemented by Neil Schemenauer, heavily based upon earlier work by Marc-André Lemburg. Read this to understand the fine points of how numeric operations will now be processed at the C level.

PEP 241: Metadata in Python Packages

A common complaint from Python users is that there's no single catalog of all the Python modules in existence. T.
Middleton's Vaults of Parnassus at www.vex.net/parnassus/ (retired in February 2009, available in the Internet Archive Wayback Machine) was the largest catalog of Python modules, but registering software at the Vaults is optional, and many people did not bother.

As a first small step toward fixing the problem, Python software packaged using the Distutils sdist command will include a file named PKG-INFO containing information about the package such as its name, version, and author (metadata, in cataloguing terminology). PEP 241 contains the full list of fields that can be present in the PKG-INFO file. As people began to package their software using Python 2.1, more and more packages will include metadata, making it possible to build automated cataloguing systems and experiment with them. With the resulting experience, perhaps it'll be possible to design a really good catalog and then build support for it into Python 2.2. For example, the Distutils sdist and bdist_* commands could support an upload option that would automatically upload your package to a catalog server.

You can start creating packages containing PKG-INFO even if you're not using Python 2.1, since a new release of the Distutils will be made for users of earlier Python versions. Version 1.0.2 of the Distutils includes the changes described in PEP 241, as well as various bugfixes and enhancements. It will be available from the Distutils SIG at https://www.python.org/community/sigs/current/distutils-sig/.

New and Improved Modules

Ka-Ping Yee contributed two new modules: inspect.py, a module for getting information about live Python code, and pydoc.py, a module for interactively converting docstrings to HTML or text. As a bonus, Tools/scripts/pydoc, which is now automatically installed, uses pydoc.py to display documentation given a Python module, package, or class name.
For example, pydoc xml.dom displays the following:

Python Library Documentation: package xml.dom in xml

NAME
    xml.dom - W3C Document Object Model implementation for Python.

FILE
    /usr/local/lib/python2.1/xml/dom/__init__.pyc

DESCRIPTION
    The Python mapping of the Document Object Model is documented in the
    Python Library Reference in the section on the xml.dom package.

    This package contains the following modules: ...

pydoc also includes a Tk-based interactive help browser. pydoc quickly becomes addictive; try it out!

Two different modules for unit testing were added to the standard library. The doctest module, contributed by Tim Peters, provides a testing framework based on running embedded examples in docstrings and comparing the results against the expected output. PyUnit, contributed by Steve Purcell, is a unit testing framework inspired by JUnit, which was in turn an adaptation of Kent Beck's Smalltalk testing framework. See https://pyunit.sourceforge.net/ for more information about PyUnit.

The difflib module contains a class, SequenceMatcher, which compares two sequences and computes the changes required to transform one sequence into the other. For example, this module can be used to write a tool similar to the Unix diff program, and in fact the sample program Tools/scripts/ndiff.py demonstrates how to write such a script.

curses.panel, a wrapper for the panel library, part of ncurses and of SYSV curses, was contributed by Thomas Gellekum. The panel library provides windows with the additional feature of depth. Windows can be moved higher or lower in the depth ordering, and the panel library figures out where panels overlap and which sections are visible.

The PyXML package has gone through a few releases since Python 2.0, and Python 2.1 includes an updated version of the xml package.
Some of the noteworthy changes include support for Expat 1.2 and later versions, the ability for Expat parsers to handle files in any encoding supported by Python, and various bugfixes for SAX, DOM, and the minidom module.

Ping also contributed another hook for handling uncaught exceptions. sys.excepthook can be set to a callable object. When an exception isn't caught by any try...except blocks, the exception will be passed to sys.excepthook, which can then do whatever it likes. At the Ninth Python Conference, Ping demonstrated an application for this hook: printing an extended traceback that not only lists the stack frames, but also lists the function arguments and the local variables for each frame.

Various functions in the time module, such as asctime() and localtime(), require a floating-point argument containing the time in seconds since the epoch. The most common use of these functions is to work with the current time, so the floating-point argument has been made optional; when a value isn't provided, the current time will be used. For example, log file entries usually need a string containing the current time; in Python 2.1, time.asctime() can be used, instead of the lengthier time.asctime(time.localtime(time.time())) that was previously required. This change was proposed and implemented by Thomas Wouters.

The ftplib module now defaults to retrieving files in passive mode, because passive mode is more likely to work from behind a firewall. This request came from the Debian bug tracking system, since other Debian packages use ftplib to retrieve files and then don't work from behind a firewall.
It\u2019s deemed unlikely that this will cause problems for anyone, because Netscape defaults to passive mode and few people complain, but if passive mode is unsuitable for your application or network setup, callset_pasv(0)\non FTP objects to disable passive mode.Support for raw socket access has been added to the\nsocket\nmodule, contributed by Grant Edwards.The\npstats\nmodule now contains a simple interactive statistics browser for displaying timing profiles for Python programs, invoked when the module is run as a script. Contributed by Eric S. Raymond.A new implementation-dependent function,\nsys._getframe([depth])\n, has been added to return a given frame object from the current call stack.sys._getframe()\nreturns the frame at the top of the call stack; if the optional integer argument depth is supplied, the function returns the frame that is depth calls below the top of the stack. For example,sys._getframe(1)\nreturns the caller\u2019s frame object.This function is only present in CPython, not in Jython or the .NET implementation. Use it for debugging, and resist the temptation to put it into production code.\nOther Changes and Fixes\u00b6\nThere were relatively few smaller changes made in Python 2.1 due to the shorter release cycle. A search through the CVS change logs turns up 117 patches applied, and 136 bugs fixed; both figures are likely to be underestimates. Some of the more notable changes are:\nA specialized object allocator is now optionally available, that should be faster than the system\nmalloc()\nand have less memory overhead. The allocator uses C\u2019smalloc()\nfunction to get large pools of memory, and then fulfills smaller memory requests from these pools. 
It can be enabled by providing the --with-pymalloc option to the configure script; see Objects/obmalloc.c for the implementation details.

Authors of C extension modules should test their code with the object allocator enabled, because some incorrect code may break, causing core dumps at runtime. There are a bunch of memory allocation functions in Python's C API that have previously been just aliases for the C library's malloc() and free(), meaning that if you accidentally called mismatched functions, the error wouldn't be noticeable. When the object allocator is enabled, these functions aren't aliases of malloc() and free() any more, and calling the wrong function to free memory will get you a core dump. For example, if memory was allocated using PyMem_New, it has to be freed using PyMem_Del(), not free(). A few modules included with Python fell afoul of this and had to be fixed; doubtless there are more third-party modules that will have the same problem. The object allocator was contributed by Vladimir Marangozov.

The speed of line-oriented file I/O has been improved because people often complain about its lack of speed, and because it's often been used as a naïve benchmark. The readline() method of file objects has therefore been rewritten to be much faster. The exact amount of the speedup will vary from platform to platform depending on how slow the C library's getc() was, but is around 66%, and potentially much faster on some particular operating systems. Tim Peters did much of the benchmarking and coding for this change, motivated by a discussion in comp.lang.python.

A new module and method for file objects was also added, contributed by Jeff Epler.
The new method, xreadlines(), is similar to the existing xrange() built-in. xreadlines() returns an opaque sequence object that only supports being iterated over, reading a line on every iteration but not reading the entire file into memory as the existing readlines() method does. You'd use it like this:

for line in sys.stdin.xreadlines():
    # ... do something for each line ...
    ...

For a fuller discussion of the line I/O changes, see the python-dev summary for January 1-15, 2001 at https://mail.python.org/pipermail/python-dev/2001-January/.

A new method, popitem(), was added to dictionaries to enable destructively iterating through the contents of a dictionary; this can be faster for large dictionaries because there's no need to construct a list containing all the keys or values. D.popitem() removes a random (key, value) pair from the dictionary D and returns it as a 2-tuple. This was implemented mostly by Tim Peters and Guido van Rossum, after a suggestion and preliminary patch by Moshe Zadka.

Modules can now control which names are imported when from module import * is used, by defining an __all__ attribute containing a list of names that will be imported. One common complaint is that if the module imports other modules such as sys or string, from module import * will add them to the importing module's namespace. To fix this, simply list the public names in __all__:

# List public names
__all__ = ['Database', 'open']

A stricter version of this patch was first suggested and implemented by Ben Wolfson, but after some python-dev discussion, a weaker final version was checked in.

Applying repr() to strings previously used octal escapes for non-printable characters; for example, a newline was '\012'. This was a vestigial trace of Python's C ancestry, but today octal is of very little practical use.
Ka-Ping Yee suggested using hex escapes instead of octal ones, and using the \n, \t, \r escapes for the appropriate characters, and implemented this new formatting.

Syntax errors detected at compile-time can now raise exceptions containing the filename and line number of the error, a pleasant side effect of the compiler reorganization done by Jeremy Hylton.

C extensions which import other modules have been changed to use PyImport_ImportModule(), which means that they will use any import hooks that have been installed. This is also encouraged for third-party extensions that need to import some other module from C code.

The size of the Unicode character database was shrunk by another 340K thanks to Fredrik Lundh.

Some new ports were contributed: MacOS X (by Steven Majewski), Cygwin (by Jason Tishler), RISCOS (by Dietmar Schwertberger), and Unixware 7 (by Billy G. Allie).

And there's the usual list of minor bugfixes, minor memory leaks, docstring edits, and other tweaks, too lengthy to be worth itemizing; see the CVS logs for the full details if you want them.

Acknowledgements

The author would like to thank the following people for offering suggestions on various drafts of this article: Graeme Cross, David Goodger, Jay Graves, Michael Hudson, Marc-André Lemburg, Fredrik Lundh, Neil Schemenauer, Thomas Wouters.
" ", " ", "\n", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", "\n\n", "\n ", " ", " ", " ", " ", " ", " ", " ", " ", "\n\n", "\n ", "\n\n", "\n ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "\n\n ", " ", " ", " ", " ", " ", "\n ", "\n", " ", " ", " ", "\n ", "\n ", "\n", "\n", " ", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 8079}
{"url": "https://docs.python.org/3/whatsnew/2.2.html", "title": "What\u2019s New in Python 2.2", "content": "What\u2019s New in Python 2.2\u00b6\n- Author:\nA.M. Kuchling\nIntroduction\u00b6\nThis article explains the new features in Python 2.2.2, released on October 14, 2002. Python 2.2.2 is a bugfix release of Python 2.2, originally released on December 21, 2001.\nPython 2.2 can be thought of as the \u201ccleanup release\u201d. There are some features such as generators and iterators that are completely new, but most of the changes, significant and far-reaching though they may be, are aimed at cleaning up irregularities and dark corners of the language design.\nThis article doesn\u2019t attempt to provide a complete specification of the new features, but instead provides a convenient overview. For full details, you should refer to the documentation for Python 2.2, such as the Python Library Reference and the Python Reference Manual. If you want to understand the complete implementation and design rationale for a change, refer to the PEP for a particular new feature.\nPEPs 252 and 253: Type and Class Changes\u00b6\nThe largest and most far-reaching changes in Python 2.2 are to Python\u2019s model of objects and classes. The changes should be backward compatible, so it\u2019s likely that your code will continue to run unchanged, but the changes provide some amazing new capabilities. Before beginning this, the longest and most complicated section of this article, I\u2019ll provide an overview of the changes and offer some comments.\nA long time ago I wrote a web page listing flaws in Python\u2019s design. One of the\nmost significant flaws was that it\u2019s impossible to subclass Python types\nimplemented in C. In particular, it\u2019s not possible to subclass built-in types,\nso you can\u2019t just subclass, say, lists in order to add a single useful method to\nthem. 
The UserList module provides a class that supports all of the methods of lists and that can be subclassed further, but there's lots of C code that expects a regular Python list and won't accept a UserList instance.

Python 2.2 fixes this, and in the process adds some exciting new capabilities. A brief summary:

- You can subclass built-in types such as lists and even integers, and your subclasses should work in every place that requires the original type.
- It's now possible to define static and class methods, in addition to the instance methods available in previous versions of Python.
- It's also possible to automatically call methods on accessing or setting an instance attribute by using a new mechanism called properties. Many uses of __getattr__() can be rewritten to use properties instead, making the resulting code simpler and faster. As a small side benefit, attributes can now have docstrings, too.
- The list of legal attributes for an instance can be limited to a particular set using slots, making it possible to safeguard against typos and perhaps make more optimizations possible in future versions of Python.

Some users have voiced concern about all these changes. Sure, they say, the new features are neat and lend themselves to all sorts of tricks that weren't possible in previous versions of Python, but they also make the language more complicated. Some people have said that they've always recommended Python for its simplicity, and feel that its simplicity is being lost.

Personally, I think there's no need to worry. Many of the new features are quite esoteric, and you can write a lot of Python code without ever needing to be aware of them. Writing a simple class is no more difficult than it ever was, so you don't need to bother learning or teaching them unless they're actually needed.
Some very complicated tasks that were previously only possible from C will now be possible in pure Python, and to my mind that's all for the better.

I'm not going to attempt to cover every single corner case and small change that were required to make the new features work. Instead this section will paint only the broad strokes. See section Related Links for further sources of information about Python 2.2's new object model.

Old and New Classes

First, you should know that Python 2.2 really has two kinds of classes: classic or old-style classes, and new-style classes. The old-style class model is exactly the same as the class model in earlier versions of Python. All the new features described in this section apply only to new-style classes. This divergence isn't intended to last forever; eventually old-style classes will be dropped, possibly in Python 3.0.

So how do you define a new-style class? You do it by subclassing an existing new-style class. Most of Python's built-in types, such as integers, lists, dictionaries, and even files, are new-style classes now. A new-style class named object, the base class for all built-in types, has also been added, so if no built-in type is suitable, you can just subclass object:

class C(object):
    def __init__ (self):
        ...
    ...

This means that class statements that don't have any base classes are always classic classes in Python 2.2. (Actually you can also change this by setting a module-level variable named __metaclass__; see PEP 253 for the details. But it's easier to just subclass object.)

The type objects for the built-in types are available as built-ins, named using a clever trick. Python has always had built-in functions named int(), float(), and str().
In 2.2, they aren't functions any more, but type objects that behave as factories when called.

>>> int
<type 'int'>
>>> int('123')
123

To make the set of types complete, new type objects such as dict() and file() have been added. Here's a more interesting example, adding a lock() method to file objects:

class LockableFile(file):
    def lock(self, operation, length=0, start=0, whence=0):
        import fcntl
        return fcntl.lockf(self.fileno(), operation,
                           length, start, whence)

The now-obsolete posixfile module contained a class that emulated all of a file object's methods and also added a lock() method, but this class couldn't be passed to internal functions that expected a built-in file, something which is possible with our new LockableFile.

Descriptors¶

In previous versions of Python, there was no consistent way to discover what attributes and methods were supported by an object. There were some informal conventions, such as defining __members__ and __methods__ attributes that were lists of names, but often the author of an extension type or a class wouldn't bother to define them. You could fall back on inspecting the __dict__ of an object, but when class inheritance or an arbitrary __getattr__() hook were in use this could still be inaccurate.

The one big idea underlying the new class model is that an API for describing the attributes of an object using descriptors has been formalized. Descriptors specify the value of an attribute, stating whether it's a method or a field.
With the descriptor API, static methods and class methods become possible, as well as more exotic constructs.

Attribute descriptors are objects that live inside class objects, and have a few attributes of their own:

- __name__ is the attribute's name.
- __doc__ is the attribute's docstring.
- __get__(object) is a method that retrieves the attribute value from object.
- __set__(object, value) sets the attribute on object to value.
- __delete__(object, value) deletes the value attribute of object.

For example, when you write obj.x, the steps that Python actually performs are:

descriptor = obj.__class__.x
descriptor.__get__(obj)

For methods, descriptor.__get__ returns a temporary object that's callable, and wraps up the instance and the method to be called on it. This is also why static methods and class methods are now possible; they have descriptors that wrap up just the method, or the method and the class. As a brief explanation of these new kinds of methods, static methods aren't passed the instance, and therefore resemble regular functions. Class methods are passed the class of the object, but not the object itself. Static and class methods are defined like this:

class C(object):
    def f(arg1, arg2):
        ...
    f = staticmethod(f)

    def g(cls, arg1, arg2):
        ...
    g = classmethod(g)

The staticmethod() function takes the function f(), and returns it wrapped up in a descriptor so it can be stored in the class object. You might expect there to be special syntax for creating such methods (def static f, defstatic f(), or something like that) but no such syntax has been defined yet; that's been left for future versions of Python.

More new features, such as slots and properties, are also implemented as new kinds of descriptors, and it's not difficult to write a descriptor class that does something novel.
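As a minimal sketch of the protocol just described, here is a hand-written data descriptor in modern Python 3 syntax (where __get__ also receives the owner class); the Constant class and the attribute name are hypothetical:

```python
# A minimal data descriptor: the attribute machinery calls __get__ and
# __set__ on our behalf.  Storing the value on the descriptor itself is a
# simplification -- it would be shared by every instance of the owner class.
class Constant:
    def __init__(self, value):
        self.value = value

    def __get__(self, obj, objtype=None):
        # Called for both C.x and instance.x lookups
        return self.value

    def __set__(self, obj, value):
        raise AttributeError("read-only attribute")

class C:
    x = Constant(42)

obj = C()
print(obj.x)   # resolved roughly as C.__dict__['x'].__get__(obj, C)
```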
For example, it would be possible to write a descriptor class that made it possible to write Eiffel-style preconditions and postconditions for a method. A class that used this feature might be defined like this:

from eiffel import eiffelmethod

class C(object):
    def f(self, arg1, arg2):
        # The actual function
        ...
    def pre_f(self):
        # Check preconditions
        ...
    def post_f(self):
        # Check postconditions
        ...

    f = eiffelmethod(f, pre_f, post_f)

Note that a person using the new eiffelmethod() doesn't have to understand anything about descriptors. This is why I think the new features don't increase the basic complexity of the language. There will be a few wizards who need to know about it in order to write eiffelmethod() or the ZODB or whatever, but most users will just write code on top of the resulting libraries and ignore the implementation details.

Multiple Inheritance: The Diamond Rule¶

Multiple inheritance has also been made more useful through changing the rules under which names are resolved. Consider this set of classes (diagram taken from PEP 253 by Guido van Rossum):

      class A:
        ^ ^  def save(self): ...
       /   \
      /     \
     /       \
    /         \
class B     class C:
    ^         ^  def save(self): ...
     \       /
      \     /
       \   /
        \ /
      class D

The lookup rule for classic classes is simple but not very smart; the base classes are searched depth-first, going from left to right. A reference to D.save() will search the classes D, B, and then A, where save() would be found and returned. C.save() would never be found at all. This is bad, because if C's save() method is saving some internal state specific to C, not calling it will result in that state never getting saved.

New-style classes follow a different algorithm that's a bit more complicated to explain, but does the right thing in this situation.
(Note that Python 2.3 changes this algorithm to one that produces the same results in most cases, but produces more useful results for really complicated inheritance graphs.)

1. List all the base classes, following the classic lookup rule and include a class multiple times if it's visited repeatedly. In the above example, the list of visited classes is [D, B, A, C, A].
2. Scan the list for duplicated classes. If any are found, remove all but one occurrence, leaving the last one in the list. In the above example, the list becomes [D, B, C, A] after dropping duplicates.

Following this rule, referring to D.save() will return C.save(), which is the behaviour we're after. This lookup rule is the same as the one followed by Common Lisp. A new built-in function, super(), provides a way to get at a class's superclasses without having to reimplement Python's algorithm. The most commonly used form will be super(class, obj), which returns a bound superclass object (not the actual class object). This form will be used in methods to call a method in the superclass; for example, D's save() method would look like this:

class D(B, C):
    def save(self):
        # Call superclass .save()
        super(D, self).save()
        # Save D's private information here
        ...

super() can also return unbound superclass objects when called as super(class) or super(class1, class2), but this probably won't often be useful.

Attribute Access¶

A fair number of sophisticated Python classes define hooks for attribute access using __getattr__(); most commonly this is done for convenience, to make code more readable by automatically mapping an attribute access such as obj.parent into a method call such as obj.get_parent. Python 2.2 adds some new ways of controlling attribute access.

First, __getattr__(attr_name) is still supported by new-style classes, and nothing about it has changed.
As before, it will be called when an attempt is made to access obj.foo and no attribute named foo is found in the instance's dictionary.

New-style classes also support a new method, __getattribute__(attr_name). The difference between the two methods is that __getattribute__() is always called whenever any attribute is accessed, while the old __getattr__() is only called if foo isn't found in the instance's dictionary.

However, Python 2.2's support for properties will often be a simpler way to trap attribute references. Writing a __getattr__() method is complicated because to avoid recursion you can't use regular attribute accesses inside them, and instead have to mess around with the contents of __dict__. __getattr__() methods also end up being called by Python when it checks for other methods such as __repr__() or __coerce__(), and so have to be written with this in mind. Finally, calling a function on every attribute access results in a sizable performance loss.

property is a new built-in type that packages up three functions that get, set, or delete an attribute, and a docstring. For example, if you want to define a size attribute that's computed, but also settable, you could write:

class C(object):
    def get_size(self):
        result = ... computation ...
        return result
    def set_size(self, size):
        ... compute something based on the size
        and set internal state appropriately ...

    # Define a property.  The 'delete this attribute'
    # method is defined as None, so the attribute
    # can't be deleted.
    size = property(get_size, set_size,
                    None,
                    "Storage size of this instance")

That is certainly clearer and easier to write than a pair of __getattr__()/__setattr__() methods that check for the size attribute and handle it specially while retrieving all other attributes from the instance's __dict__.
Accesses to size are also the only ones which have to perform the work of calling a function, so references to other attributes run at their usual speed.

Finally, it's possible to constrain the list of attributes that can be referenced on an object using the new __slots__ class attribute. Python objects are usually very dynamic; at any time it's possible to define a new attribute on an instance by just doing obj.new_attr=1. A new-style class can define a class attribute named __slots__ to limit the legal attributes to a particular set of names. An example will make this clear:

>>> class C(object):
...     __slots__ = ('template', 'name')
...
>>> obj = C()
>>> print obj.template
None
>>> obj.template = 'Test'
>>> print obj.template
Test
>>> obj.newattr = None
Traceback (most recent call last):
  File "", line 1, in ?
AttributeError: 'C' object has no attribute 'newattr'

Note how you get an AttributeError on the attempt to assign to an attribute not listed in __slots__.

PEP 234: Iterators¶

Another significant addition to 2.2 is an iteration interface at both the C and Python levels. Objects can define how they can be looped over by callers.

In Python versions up to 2.1, the usual way to make for item in obj work is to define a __getitem__() method that looks something like this:

def __getitem__(self, index):
    return 

__getitem__() is more properly used to define an indexing operation on an object so that you can write obj[5] to retrieve the sixth element. It's a bit misleading when you're using this only to support for loops.

Consider some file-like object that wants to be looped over; the index parameter is essentially meaningless, as the class probably assumes that a series of __getitem__() calls will be made with index incrementing by one each time.
In other words, the presence of the __getitem__()\nmethod\ndoesn\u2019t mean that using file[5]\nto randomly access the sixth element will\nwork, though it really should.\nIn Python 2.2, iteration can be implemented separately, and __getitem__()\nmethods can be limited to classes that really do support random access. The\nbasic idea of iterators is simple. A new built-in function, iter(obj)\nor iter(C, sentinel)\n, is used to get an iterator. iter(obj)\nreturns\nan iterator for the object obj, while iter(C, sentinel)\nreturns an\niterator that will invoke the callable object C until it returns sentinel to\nsignal that the iterator is done.\nPython classes can define an __iter__()\nmethod, which should create and\nreturn a new iterator for the object; if the object is its own iterator, this\nmethod can just return self\n. In particular, iterators will usually be their\nown iterators. Extension types implemented in C can implement a tp_iter\nfunction in order to return an iterator, and extension types that want to behave\nas iterators can define a tp_iternext\nfunction.\nSo, after all this, what do iterators actually do? They have one required\nmethod, next()\n, which takes no arguments and returns the next value. When\nthere are no more values to be returned, calling next()\nshould raise the\nStopIteration\nexception.\n>>> L = [1,2,3]\n>>> i = iter(L)\n>>> print i\n\n>>> i.next()\n1\n>>> i.next()\n2\n>>> i.next()\n3\n>>> i.next()\nTraceback (most recent call last):\nFile \"\", line 1, in ?\nStopIteration\n>>>\nIn 2.2, Python\u2019s for\nstatement no longer expects a sequence; it\nexpects something for which iter()\nwill return an iterator. For backward\ncompatibility and convenience, an iterator is automatically constructed for\nsequences that don\u2019t implement __iter__()\nor a tp_iter\nslot, so\nfor i in [1,2,3]\nwill still work. Wherever the Python interpreter loops\nover a sequence, it\u2019s been changed to use the iterator protocol. 
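A minimal object implementing that protocol can be sketched as follows, in modern Python 3 syntax, where the 2.2-era next() method was later renamed __next__() and is invoked through the next() built-in; the Countdown class is hypothetical:

```python
class Countdown:
    """An object that is its own iterator, per the protocol above."""
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self              # the object is its own iterator

    def __next__(self):          # spelled next() in Python 2.2
        if self.current <= 0:
            raise StopIteration  # signals exhaustion to the for loop
        self.current -= 1
        return self.current + 1

print(list(Countdown(3)))   # [3, 2, 1]
```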
This means you\ncan do things like this:\n>>> L = [1,2,3]\n>>> i = iter(L)\n>>> a,b,c = i\n>>> a,b,c\n(1, 2, 3)\nIterator support has been added to some of Python\u2019s basic types. Calling\niter()\non a dictionary will return an iterator which loops over its keys:\n>>> m = {'Jan': 1, 'Feb': 2, 'Mar': 3, 'Apr': 4, 'May': 5, 'Jun': 6,\n... 'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12}\n>>> for key in m: print key, m[key]\n...\nMar 3\nFeb 2\nAug 8\nSep 9\nMay 5\nJun 6\nJul 7\nJan 1\nApr 4\nNov 11\nDec 12\nOct 10\nThat\u2019s just the default behaviour. If you want to iterate over keys, values, or\nkey/value pairs, you can explicitly call the iterkeys()\n,\nitervalues()\n, or iteritems()\nmethods to get an appropriate iterator.\nIn a minor related change, the in\noperator now works on dictionaries,\nso key in dict\nis now equivalent to dict.has_key(key)\n.\nFiles also provide an iterator, which calls the readline()\nmethod until\nthere are no more lines in the file. This means you can now read each line of a\nfile using code like this:\nfor line in file:\n# do something for each line\n...\nNote that you can only go forward in an iterator; there\u2019s no way to get the\nprevious element, reset the iterator, or make a copy of it. An iterator object\ncould provide such additional capabilities, but the iterator protocol only\nrequires a next()\nmethod.\nSee also\n- PEP 234 - Iterators\nWritten by Ka-Ping Yee and GvR; implemented by the Python Labs crew, mostly by GvR and Tim Peters.\nPEP 255: Simple Generators\u00b6\nGenerators are another new feature, one that interacts with the introduction of iterators.\nYou\u2019re doubtless familiar with how function calls work in Python or C. When you\ncall a function, it gets a private namespace where its local variables are\ncreated. When the function reaches a return\nstatement, the local\nvariables are destroyed and the resulting value is returned to the caller. 
A later call to the same function will get a fresh new set of local variables.

But, what if the local variables weren't thrown away on exiting a function? What if you could later resume the function where it left off? This is what generators provide; they can be thought of as resumable functions.

Here's the simplest example of a generator function:

def generate_ints(N):
    for i in range(N):
        yield i

A new keyword, yield, was introduced for generators. Any function containing a yield statement is a generator function; this is detected by Python's bytecode compiler which compiles the function specially as a result. Because a new keyword was introduced, generators must be explicitly enabled in a module by including a from __future__ import generators statement near the top of the module's source code. In Python 2.3 this statement will become unnecessary.

When you call a generator function, it doesn't return a single value; instead it returns a generator object that supports the iterator protocol. On executing the yield statement, the generator outputs the value of i, similar to a return statement. The big difference between yield and a return statement is that on reaching a yield the generator's state of execution is suspended and local variables are preserved. On the next call to the generator's next() method, the function will resume executing immediately after the yield statement.
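The suspend-and-resume behaviour can be seen concretely; this sketch uses modern Python 3, where gen.next() is spelled next(gen):

```python
def generate_ints(N):
    for i in range(N):
        yield i        # suspend here; locals i and N are preserved

gen = generate_ints(3)
a = next(gen)   # runs the body up to the first yield
b = next(gen)   # resumes just after the yield
c = next(gen)
print(a, b, c)  # 0 1 2
```

A fourth call would fall off the end of the loop and raise StopIteration, exactly as the iterator protocol requires.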
(For complicated reasons, the yield\nstatement isn\u2019t\nallowed inside the try\nblock of a\ntry\n\u2026finally\nstatement; read PEP 255 for a full\nexplanation of the interaction between yield\nand exceptions.)\nHere\u2019s a sample usage of the generate_ints()\ngenerator:\n>>> gen = generate_ints(3)\n>>> gen\n\n>>> gen.next()\n0\n>>> gen.next()\n1\n>>> gen.next()\n2\n>>> gen.next()\nTraceback (most recent call last):\nFile \"\", line 1, in ?\nFile \"\", line 2, in generate_ints\nStopIteration\nYou could equally write for i in generate_ints(5)\n, or a,b,c =\ngenerate_ints(3)\n.\nInside a generator function, the return\nstatement can only be used\nwithout a value, and signals the end of the procession of values; afterwards the\ngenerator cannot return any further values. return\nwith a value, such\nas return 5\n, is a syntax error inside a generator function. The end of the\ngenerator\u2019s results can also be indicated by raising StopIteration\nmanually, or by just letting the flow of execution fall off the bottom of the\nfunction.\nYou could achieve the effect of generators manually by writing your own class\nand storing all the local variables of the generator as instance variables. For\nexample, returning a list of integers could be done by setting self.count\nto\n0, and having the next()\nmethod increment self.count\nand return it.\nHowever, for a moderately complicated generator, writing a corresponding class\nwould be much messier. Lib/test/test_generators.py\ncontains a number of\nmore interesting examples. 
The simplest one implements an in-order traversal of a tree using generators recursively.

# A recursive generator that generates Tree leaves in in-order.
def inorder(t):
    if t:
        for x in inorder(t.left):
            yield x
        yield t.label
        for x in inorder(t.right):
            yield x

Two other examples in Lib/test/test_generators.py produce solutions for the N-Queens problem (placing N queens on an NxN chess board so that no queen threatens another) and the Knight's Tour (a route that takes a knight to every square of an NxN chessboard without visiting any square twice).

The idea of generators comes from other programming languages, especially Icon (https://www2.cs.arizona.edu/icon/), where the idea of generators is central. In Icon, every expression and function call behaves like a generator. One example from "An Overview of the Icon Programming Language" at https://www2.cs.arizona.edu/icon/docs/ipd266.htm gives an idea of what this looks like:

sentence := "Store it in the neighboring harbor"
if (i := find("or", sentence)) > 5 then write(i)

In Icon the find() function returns the indexes at which the substring "or" is found: 3, 23, 33. In the if statement, i is first assigned a value of 3, but 3 is less than 5, so the comparison fails, and Icon retries it with the second value of 23. 23 is greater than 5, so the comparison now succeeds, and the code prints the value 23 to the screen.

Python doesn't go nearly as far as Icon in adopting generators as a central concept. Generators are considered a new part of the core Python language, but learning or using them isn't compulsory; if they don't solve any problems that you have, feel free to ignore them.
One novel feature of Python\u2019s interface as compared to Icon\u2019s is that a generator\u2019s state is represented as a concrete object (the iterator) that can be passed around to other functions or stored in a data structure.\nSee also\n- PEP 255 - Simple Generators\nWritten by Neil Schemenauer, Tim Peters, Magnus Lie Hetland. Implemented mostly by Neil Schemenauer and Tim Peters, with other fixes from the Python Labs crew.\nPEP 237: Unifying Long Integers and Integers\u00b6\nIn recent versions, the distinction between regular integers, which are 32-bit\nvalues on most machines, and long integers, which can be of arbitrary size, was\nbecoming an annoyance. For example, on platforms that support files larger than\n2**32\nbytes, the tell()\nmethod of file objects has to return a long\ninteger. However, there were various bits of Python that expected plain integers\nand would raise an error if a long integer was provided instead. For example,\nin Python 1.5, only regular integers could be used as a slice index, and\n'abc'[1L:]\nwould raise a TypeError\nexception with the message \u2018slice\nindex must be int\u2019.\nPython 2.2 will shift values from short to long integers as required. The \u2018L\u2019\nsuffix is no longer needed to indicate a long integer literal, as now the\ncompiler will choose the appropriate type. (Using the \u2018L\u2019 suffix will be\ndiscouraged in future 2.x versions of Python, triggering a warning in Python\n2.4, and probably dropped in Python 3.0.) Many operations that used to raise an\nOverflowError\nwill now return a long integer as their result. For\nexample:\n>>> 1234567890123\n1234567890123L\n>>> 2 ** 64\n18446744073709551616L\nIn most cases, integers and long integers will now be treated identically. You\ncan still distinguish them with the type()\nbuilt-in function, but that\u2019s\nrarely needed.\nSee also\n- PEP 237 - Unifying Long Integers and Integers\nWritten by Moshe Zadka and Guido van Rossum. 
Implemented mostly by Guido van Rossum.\nPEP 238: Changing the Division Operator\u00b6\nThe most controversial change in Python 2.2 heralds the start of an effort to\nfix an old design flaw that\u2019s been in Python from the beginning. Currently\nPython\u2019s division operator, /\n, behaves like C\u2019s division operator when\npresented with two integer arguments: it returns an integer result that\u2019s\ntruncated down when there would be a fractional part. For example, 3/2\nis\n1, not 1.5, and (-1)/2\nis -1, not -0.5. This means that the results of\ndivision can vary unexpectedly depending on the type of the two operands and\nbecause Python is dynamically typed, it can be difficult to determine the\npossible types of the operands.\n(The controversy is over whether this is really a design flaw, and whether it\u2019s worth breaking existing code to fix this. It\u2019s caused endless discussions on python-dev, and in July 2001 erupted into a storm of acidly sarcastic postings on comp.lang.python. I won\u2019t argue for either side here and will stick to describing what\u2019s implemented in 2.2. Read PEP 238 for a summary of arguments and counter-arguments.)\nBecause this change might break code, it\u2019s being introduced very gradually. Python 2.2 begins the transition, but the switch won\u2019t be complete until Python 3.0.\nFirst, I\u2019ll borrow some terminology from PEP 238. \u201cTrue division\u201d is the\ndivision that most non-programmers are familiar with: 3/2 is 1.5, 1/4 is 0.25,\nand so forth. \u201cFloor division\u201d is what Python\u2019s /\noperator currently does\nwhen given integer operands; the result is the floor of the value returned by\ntrue division. 
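True division and floor division can be shown side by side; this sketch uses modern Python 3, where the PEP 238 transition is complete and these semantics are the default:

```python
# / is true division, // is floor division, for any numeric operands.
assert 3 / 2 == 1.5        # true division: the result non-programmers expect
assert 3 // 2 == 1         # floor division: result rounded toward -infinity
assert (-1) // 2 == -1     # floored, not truncated toward zero
assert 1.0 // 2.0 == 0.0   # // works on floats too, returning a float
```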
\u201cClassic division\u201d is the current mixed behaviour of /\n; it\nreturns the result of floor division when the operands are integers, and returns\nthe result of true division when one of the operands is a floating-point number.\nHere are the changes 2.2 introduces:\nA new operator,\n//\n, is the floor division operator. (Yes, we know it looks like C++\u2019s comment symbol.)//\nalways performs floor division no matter what the types of its operands are, so1 // 2\nis 0 and1.0 // 2.0\nis also 0.0.//\nis always available in Python 2.2; you don\u2019t need to enable it using a__future__\nstatement.By including a\nfrom __future__ import division\nin a module, the/\noperator will be changed to return the result of true division, so1/2\nis 0.5. Without the__future__\nstatement,/\nstill means classic division. The default meaning of/\nwill not change until Python 3.0.Classes can define methods called\n__truediv__()\nand__floordiv__()\nto overload the two division operators. At the C level, there are also slots in thePyNumberMethods\nstructure so extension types can define the two operators.Python 2.2 supports some command-line arguments for testing whether code will work with the changed division semantics. Running python with\n-Q warn\nwill cause a warning to be issued whenever division is applied to two integers. You can use this to find code that\u2019s affected by the change and fix it. By default, Python 2.2 will simply perform classic division without a warning; the warning will be turned on by default in Python 2.3.\nSee also\n- PEP 238 - Changing the Division Operator\nWritten by Moshe Zadka and Guido van Rossum. Implemented by Guido van Rossum..\nUnicode Changes\u00b6\nPython\u2019s Unicode support has been enhanced a bit in 2.2. Unicode strings are\nusually stored as UCS-2, as 16-bit unsigned integers. 
Python 2.2 can also be\ncompiled to use UCS-4, 32-bit unsigned integers, as its internal encoding by\nsupplying --enable-unicode=ucs4\nto the configure script. (It\u2019s also\npossible to specify --disable-unicode\nto completely disable Unicode\nsupport.)\nWhen built to use UCS-4 (a \u201cwide Python\u201d), the interpreter can natively handle\nUnicode characters from U+000000 to U+110000, so the range of legal values for\nthe unichr()\nfunction is expanded accordingly. Using an interpreter\ncompiled to use UCS-2 (a \u201cnarrow Python\u201d), values greater than 65535 will still\ncause unichr()\nto raise a ValueError\nexception. This is all\ndescribed in PEP 261, \u201cSupport for \u2018wide\u2019 Unicode characters\u201d; consult it for\nfurther details.\nAnother change is simpler to explain. Since their introduction, Unicode strings\nhave supported an encode()\nmethod to convert the string to a selected\nencoding such as UTF-8 or Latin-1. A symmetric decode([*encoding*])\nmethod has been added to 8-bit strings (though not to Unicode strings) in 2.2.\ndecode()\nassumes that the string is in the specified encoding and decodes\nit, returning whatever is returned by the codec.\nUsing this new feature, codecs have been added for tasks not directly related to\nUnicode. For example, codecs have been added for uu-encoding, MIME\u2019s base64\nencoding, and compression with the zlib\nmodule:\n>>> s = \"\"\"Here is a lengthy piece of redundant, overly verbose,\n... and repetitive text.\n... 
\"\"\"\n>>> data = s.encode('zlib')\n>>> data\n'x\\x9c\\r\\xc9\\xc1\\r\\x80 \\x10\\x04\\xc0?Ul...'\n>>> data.decode('zlib')\n'Here is a lengthy piece of redundant, overly verbose,\\nand repetitive text.\\n'\n>>> print s.encode('uu')\nbegin 666 \nM2&5R92!I=F5R8F]S92P*86YD(')E<&5T:71I=F4@=&5X=\"X*\nend\n>>> \"sheesh\".encode('rot-13')\n'furrfu'\nTo convert a class instance to Unicode, a __unicode__()\nmethod can be\ndefined by a class, analogous to __str__()\n.\nencode()\n, decode()\n, and __unicode__()\nwere implemented by\nMarc-Andr\u00e9 Lemburg. The changes to support using UCS-4 internally were\nimplemented by Fredrik Lundh and Martin von L\u00f6wis.\nSee also\n- PEP 261 - Support for \u2018wide\u2019 Unicode characters\nWritten by Paul Prescod.\nPEP 227: Nested Scopes\u00b6\nIn Python 2.1, statically nested scopes were added as an optional feature, to be\nenabled by a from __future__ import nested_scopes\ndirective. In 2.2 nested\nscopes no longer need to be specially enabled, and are now always present. The\nrest of this section is a copy of the description of nested scopes from my\n\u201cWhat\u2019s New in Python 2.1\u201d document; if you read it when 2.1 came out, you can\nskip the rest of this section.\nThe largest change introduced in Python 2.1, and made complete in 2.2, is to Python\u2019s scoping rules. In Python 2.0, at any given time there are at most three namespaces used to look up variable names: local, module-level, and the built-in namespace. This often surprised people because it didn\u2019t match their intuitive expectations. For example, a nested recursive function definition doesn\u2019t work:\ndef f():\n...\ndef g(value):\n...\nreturn g(value-1) + 1\n...\nThe function g()\nwill always raise a NameError\nexception, because\nthe binding of the name g\nisn\u2019t in either its local namespace or in the\nmodule-level namespace. 
This isn't much of a problem in practice (how often do you recursively define interior functions like this?), but this also made using the lambda expression clumsier, and this was a problem in practice. In code which uses lambda you can often find local variables being copied by passing them as the default values of arguments.

def find(self, name):
    "Return list of any entries equal to 'name'"
    L = filter(lambda x, name=name: x == name,
               self.list_attribute)
    return L

The readability of Python code written in a strongly functional style suffers greatly as a result.

The most significant change to Python 2.2 is that static scoping has been added to the language to fix this problem. As a first effect, the name=name default argument is now unnecessary in the above example. Put simply, when a given variable name is not assigned a value within a function (by an assignment, or the def, class, or import statements), references to the variable will be looked up in the local namespace of the enclosing scope. A more detailed explanation of the rules, and a dissection of the implementation, can be found in the PEP.

This change may cause some compatibility problems for code where the same variable name is used both at the module level and as a local variable within a function that contains further function definitions. This seems rather unlikely though, since such code would have been pretty confusing to read in the first place.

One side effect of the change is that the from module import * and exec statements have been made illegal inside a function scope under certain conditions. The Python reference manual has said all along that from module import * is only legal at the top level of a module, but the CPython interpreter has never enforced this before. As part of the implementation of nested scopes, the compiler which turns Python source into bytecodes has to generate different code to access variables in a containing scope.
from module import * and exec make it impossible for the compiler to figure this out, because they add names to the local namespace that are unknowable at compile time. Therefore, if a function contains function definitions or lambda expressions with free variables, the compiler will flag this by raising a SyntaxError exception.

To make the preceding explanation a bit clearer, here's an example:

x = 1
def f():
    # The next line is a syntax error
    exec 'x=2'
    def g():
        return x

Line 4 containing the exec statement is a syntax error, since exec would define a new local variable named x whose value should be accessed by g().

This shouldn't be much of a limitation, since exec is rarely used in most Python code (and when it is used, it's often a sign of a poor design anyway).

See also
- PEP 227 - Statically Nested Scopes
Written and implemented by Jeremy Hylton.

New and Improved Modules¶

- The xmlrpclib module was contributed to the standard library by Fredrik Lundh, providing support for writing XML-RPC clients. XML-RPC is a simple remote procedure call protocol built on top of HTTP and XML. For example, the following snippet retrieves a list of RSS channels from the O'Reilly Network, and then lists the recent headlines for one channel:

import xmlrpclib
s = xmlrpclib.Server(
    'http://www.oreillynet.com/meerkat/xml-rpc/server.php')
channels = s.meerkat.getChannels()
# channels is a list of dictionaries, like this:
# [{'id': 4, 'title': 'Freshmeat Daily News'}
#  {'id': 190, 'title': '32Bits Online'},
#  {'id': 4549, 'title': '3DGamers'}, ... ]

# Get the items for one channel
items = s.meerkat.getItems( {'channel': 4} )
# 'items' is another list of dictionaries, like this:
# [{'link': 'http://freshmeat.net/releases/52719/',
#   'description': 'A utility which converts HTML to XSL FO.',
#   'title': 'html2fo 0.3 (Default)'}, ... ]

The SimpleXMLRPCServer module makes it easy to create straightforward XML-RPC servers.
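The server side can be sketched without any network traffic by using the dispatcher that underlies the server class; this uses the modern Python 3 names (xmlrpc.server and xmlrpc.client are the descendants of SimpleXMLRPCServer and xmlrpclib), and _marshaled_dispatch is an internal hook used here only for demonstration:

```python
from xmlrpc.server import SimpleXMLRPCDispatcher
import xmlrpc.client

# The dispatcher behind SimpleXMLRPCServer; a real server would wrap it in
# SimpleXMLRPCServer((host, port)) and call serve_forever().
dispatcher = SimpleXMLRPCDispatcher(allow_none=True, encoding='utf-8')

def add(a, b):
    return a + b

dispatcher.register_function(add, 'add')

# Marshal a call the way an XML-RPC client would, then dispatch it.
request = xmlrpc.client.dumps((2, 3), methodname='add')
response = dispatcher._marshaled_dispatch(request)
(result,), _ = xmlrpc.client.loads(response)
print(result)
```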
See http://xmlrpc.scripting.com/ for more information about XML-RPC.
The new hmac module implements the HMAC algorithm described by RFC 2104. (Contributed by Gerhard Häring.)
Several functions that originally returned lengthy tuples now return pseudo-sequences that still behave like tuples but also have mnemonic attributes such as st_mtime or tm_year. The enhanced functions include stat(), fstat(), statvfs(), and fstatvfs() in the os module, and localtime(), gmtime(), and strptime() in the time module. For example, to obtain a file’s size using the old tuples, you’d end up writing something like file_size = os.stat(filename)[stat.ST_SIZE], but now this can be written more clearly as file_size = os.stat(filename).st_size. The original patch for this feature was contributed by Nick Mathewson.
The Python profiler has been extensively reworked and various errors in its output have been corrected. (Contributed by Fred L. Drake, Jr. and Tim Peters.)
The socket module can be compiled to support IPv6; specify the --enable-ipv6 option to Python’s configure script. (Contributed by Jun-ichiro “itojun” Hagino.)
Two new format characters were added to the struct module for 64-bit integers on platforms that support the C long long type. q is for a signed 64-bit integer, and Q is for an unsigned one. The value is returned in Python’s long integer type. (Contributed by Tim Peters.)
In the interpreter’s interactive mode, there’s a new built-in function help() that uses the pydoc module introduced in Python 2.1 to provide interactive help. help(object) displays any available help text about object. help() with no argument puts you in an online help utility, where you can enter the names of functions, classes, or modules to read their help text.
(Contributed by Guido van Rossum, using Ka-Ping Yee’s pydoc module.)
Various bugfixes and performance improvements have been made to the SRE engine underlying the re module. For example, the re.sub() and re.split() functions have been rewritten in C. Another contributed patch speeds up certain Unicode character ranges by a factor of two, and a new finditer() method was added that returns an iterator over all the non-overlapping matches in a given string. (SRE is maintained by Fredrik Lundh. The BIGCHARSET patch was contributed by Martin von Löwis.)
The smtplib module now supports RFC 2487, “Secure SMTP over TLS”, so it’s now possible to encrypt the SMTP traffic between a Python program and the mail transport agent being handed a message. smtplib also supports SMTP authentication. (Contributed by Gerhard Häring.)
The imaplib module, maintained by Piers Lauder, has support for several new extensions: the NAMESPACE extension defined in RFC 2342, SORT, GETACL and SETACL. (Contributed by Anthony Baxter and Michel Pelletier.)
The rfc822 module’s parsing of email addresses is now compliant with RFC 2822, an update to RFC 822. (The module’s name is not going to be changed to rfc2822.) A new package, email, has also been added for parsing and generating e-mail messages. (Contributed by Barry Warsaw, and arising out of his work on Mailman.)
The difflib module now contains a new Differ class for producing human-readable lists of changes (a “delta”) between two sequences of lines of text. There are also two generator functions, ndiff() and restore(), which respectively return a delta from two sequences, or one of the original sequences from a delta. (Grunt work contributed by David Goodger, from ndiff.py code by Tim Peters who then did the generatorization.)
New constants ascii_letters, ascii_lowercase, and ascii_uppercase were added to the string module.
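The difflib round trip described above, ndiff() producing a delta and restore() recovering either input from it, can be sketched as:

```python
import difflib

a = ["one\n", "two\n", "three\n"]
b = ["one\n", "too\n", "three\n"]

# ndiff() yields a human-readable delta, line by line...
delta = list(difflib.ndiff(a, b))

# ...and restore() recovers sequence 1 or sequence 2 from that delta.
recovered_a = list(difflib.restore(delta, 1))
recovered_b = list(difflib.restore(delta, 2))
```

Lines in the delta are prefixed with "- ", "+ ", "  ", or "? "; restore() keeps only the lines belonging to the requested sequence.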
There were several modules in the standard library that used string.letters to mean the range A-Za-z, but that assumption is incorrect when locales are in use, because string.letters varies depending on the set of legal characters defined by the current locale. The buggy modules have all been fixed to use ascii_letters instead. (Reported by an unknown person; fixed by Fred L. Drake, Jr.)
The mimetypes module now makes it easier to use alternative MIME-type databases by the addition of a MimeTypes class, which takes a list of filenames to be parsed. (Contributed by Fred L. Drake, Jr.)
A Timer class was added to the threading module that allows scheduling an activity to happen at some future time. (Contributed by Itamar Shtull-Trauring.)
Interpreter Changes and Fixes¶
Some of the changes only affect people who deal with the Python interpreter at the C level because they’re writing Python extension modules, embedding the interpreter, or just hacking on the interpreter itself. If you only write Python code, none of the changes described here will affect you very much.
Profiling and tracing functions can now be implemented in C, which can operate at much higher speeds than Python-based functions and should reduce the overhead of profiling and tracing. This will be of interest to authors of development environments for Python. Two new C functions were added to Python’s API, PyEval_SetProfile() and PyEval_SetTrace(). The existing sys.setprofile() and sys.settrace() functions still exist, and have simply been changed to use the new C-level interface. (Contributed by Fred L. Drake, Jr.)
Another low-level API, primarily of interest to implementers of Python debuggers and development tools, was added. PyInterpreterState_Head() and PyInterpreterState_Next() let a caller walk through all the existing interpreter objects; PyInterpreterState_ThreadHead() and PyThreadState_Next() allow looping over all the thread states for a given interpreter.
(Contributed by David Beazley.)
The C-level interface to the garbage collector has been changed to make it easier to write extension types that support garbage collection and to debug misuses of the functions. Various functions have slightly different semantics, so a bunch of functions had to be renamed. Extensions that use the old API will still compile but will not participate in garbage collection, so updating them for 2.2 should be considered fairly high priority.
To upgrade an extension module to the new API, perform the following steps:
- Rename Py_TPFLAGS_GC to Py_TPFLAGS_HAVE_GC.
- Use PyObject_GC_New() or PyObject_GC_NewVar() to allocate objects, and PyObject_GC_Del() to deallocate them.
- Rename PyObject_GC_Init() to PyObject_GC_Track() and PyObject_GC_Fini() to PyObject_GC_UnTrack().
- Remove PyGC_HEAD_SIZE from object size calculations.
- Remove calls to PyObject_AS_GC() and PyObject_FROM_GC().
A new et format sequence was added to PyArg_ParseTuple(); et takes both a parameter and an encoding name, and converts the parameter to the given encoding if the parameter turns out to be a Unicode string, or leaves it alone if it’s an 8-bit string, assuming it to already be in the desired encoding. This differs from the es format character, which assumes that 8-bit strings are in Python’s default ASCII encoding and converts them to the specified new encoding. (Contributed by M.-A. Lemburg, and used for the MBCS support on Windows described in the following section.)
A different argument parsing function, PyArg_UnpackTuple(), has been added that’s simpler and presumably faster.
Instead of specifying a format string, the caller simply gives the minimum and maximum number of arguments expected, and a set of pointers to PyObject* variables that will be filled in with argument values.
Two new flags, METH_NOARGS and METH_O, are available in method definition tables to simplify implementation of methods with no arguments or a single untyped argument. Calling such methods is more efficient than calling a corresponding method that uses METH_VARARGS. Also, the old METH_OLDARGS style of writing C methods is now officially deprecated.
Two new wrapper functions, PyOS_snprintf() and PyOS_vsnprintf(), were added to provide cross-platform implementations for the relatively new snprintf() and vsnprintf() C lib APIs. In contrast to the standard sprintf() and vsprintf() functions, the Python versions check the bounds of the buffer used to protect against buffer overruns. (Contributed by M.-A. Lemburg.)
The _PyTuple_Resize() function has lost an unused parameter, so now it takes 2 parameters instead of 3. The third argument was never used, and can simply be discarded when porting code from earlier versions to Python 2.2.
Other Changes and Fixes¶
As usual there were a bunch of other improvements and bugfixes scattered throughout the source tree. A search through the CVS change logs finds there were 527 patches applied and 683 bugs fixed between Python 2.1 and 2.2; 2.2.1 applied 139 patches and fixed 143 bugs; 2.2.2 applied 106 patches and fixed 82 bugs. These figures are likely to be underestimates.
Some of the more notable changes are:
The code for the MacOS port for Python, maintained by Jack Jansen, is now kept in the main Python CVS tree, and many changes have been made to support MacOS X.
The most significant change is the ability to build Python as a framework, enabled by supplying the --enable-framework option to the configure script when compiling Python.
According to Jack Jansen, “This installs a self-contained Python installation plus the OS X framework ‘glue’ into /Library/Frameworks/Python.framework (or another location of choice). For now there is little immediate added benefit to this (actually, there is the disadvantage that you have to change your PATH to be able to find Python), but it is the basis for creating a full-blown Python application, porting the MacPython IDE, possibly using Python as a standard OSA scripting language and much more.”
Most of the MacPython toolbox modules, which interface to MacOS APIs such as windowing, QuickTime, scripting, etc. have been ported to OS X, but they’ve been left commented out in setup.py. People who want to experiment with these modules can uncomment them manually.
Keyword arguments passed to built-in functions that don’t take them now cause a TypeError exception to be raised, with the message “function takes no keyword arguments”.
Weak references, added in Python 2.1 as an extension module, are now part of the core because they’re used in the implementation of new-style classes. The ReferenceError exception has therefore moved from the weakref module to become a built-in exception.
A new script, Tools/scripts/cleanfuture.py by Tim Peters, automatically removes obsolete __future__ statements from Python source code.
An additional flags argument has been added to the built-in function compile(), so the behaviour of __future__ statements can now be correctly observed in simulated shells, such as those presented by IDLE and other development environments. This is described in PEP 264. (Contributed by Michael Hudson.)
The new license introduced with Python 1.6 wasn’t GPL-compatible. This is fixed by some minor textual changes to the 2.2 license, so it’s now legal to embed Python inside a GPLed program again.
Note that Python itself is not GPLed, but instead is under a license that’s essentially equivalent to the BSD license, same as it always was. The license changes were also applied to the Python 2.0.1 and 2.1.1 releases.
When presented with a Unicode filename on Windows, Python will now convert it to an MBCS encoded string, as used by the Microsoft file APIs. As MBCS is explicitly used by the file APIs, Python’s choice of ASCII as the default encoding turns out to be an annoyance. On Unix, the locale’s character set is used if locale.nl_langinfo(CODESET) is available. (Windows support was contributed by Mark Hammond with assistance from Marc-André Lemburg. Unix support was added by Martin von Löwis.)
Large file support is now enabled on Windows. (Contributed by Tim Peters.)
The Tools/scripts/ftpmirror.py script now parses a .netrc file, if you have one. (Contributed by Mike Romberg.)
Some features of the object returned by the xrange() function are now deprecated, and trigger warnings when they’re accessed; they’ll disappear in Python 2.3. xrange objects tried to pretend they were full sequence types by supporting slicing, sequence multiplication, and the in operator, but these features were rarely used and therefore buggy. The tolist() method and the start, stop, and step attributes are also being deprecated. At the C level, the fourth argument to the PyRange_New() function, repeat, has also been deprecated.
There were a bunch of patches to the dictionary implementation, mostly to fix potential core dumps if a dictionary contains objects that sneakily changed their hash value, or mutated the dictionary they were contained in.
For a while python-dev fell into a gentle rhythm of Michael Hudson finding a case that dumped core, Tim Peters fixing the bug, Michael finding another case, and round and round it went.
On Windows, Python can now be compiled with Borland C thanks to a number of patches contributed by Stephen Hansen, though the result isn’t fully functional yet. (But this is progress…)
Another Windows enhancement: Wise Solutions generously offered PythonLabs use of their InstallerMaster 8.1 system. Earlier PythonLabs Windows installers used Wise 5.0a, which was beginning to show its age. (Packaged up by Tim Peters.)
Files ending in .pyw can now be imported on Windows. .pyw is a Windows-only thing, used to indicate that a script needs to be run using PYTHONW.EXE instead of PYTHON.EXE in order to prevent a DOS console from popping up to display the output. This patch makes it possible to import such scripts, in case they’re also usable as modules. (Implemented by David Bolen.)
On platforms where Python uses the C dlopen() function to load extension modules, it’s now possible to set the flags used by dlopen() using the sys.getdlopenflags() and sys.setdlopenflags() functions. (Contributed by Bram Stolk.)
The pow() built-in function no longer supports 3 arguments when floating-point numbers are supplied. pow(x, y, z) returns (x**y) % z, but this is never useful for floating-point numbers, and the final result varies unpredictably depending on the platform. A call such as pow(2.0, 8.0, 7.0) will now raise a TypeError exception.
Acknowledgements¶
The author would like to thank the following people for offering suggestions, corrections and assistance with various drafts of this article: Fred Bremmer, Keith Briggs, Andrew Dalke, Fred L.
Drake, Jr., Carel Fellinger, David Goodger, Mark Hammond, Stephen Hansen, Michael Hudson, Jack Jansen, Marc-André Lemburg, Martin von Löwis, Fredrik Lundh, Michael McLay, Nick Mathewson, Paul Moore, Gustavo Niemeyer, Don O’Donnell, Joonas Paalasma, Tim Peters, Jens Quade, Tom Reinhardt, Neil Schemenauer, Guido van Rossum, Greg Ward, Edward Welbourne.", "code_snippets": [], "language": "Python", "source": "python.org"}
{"url": "https://docs.python.org/3/reference/toplevel_components.html", "title": "Top-level components", "content": "9. Top-level components\u00b6\nThe Python interpreter can get its input from a number of sources: from a script passed to it as standard input or as program argument, typed in interactively, from a module source file, etc. This chapter gives the syntax used in these cases.\n9.1. Complete Python programs\u00b6\nWhile a language specification need not prescribe how the language interpreter\nis invoked, it is useful to have a notion of a complete Python program. A\ncomplete Python program is executed in a minimally initialized environment: all\nbuilt-in and standard modules are available, but none have been initialized,\nexcept for sys\n(various system services), builtins\n(built-in\nfunctions, exceptions and None\n) and __main__\n. The latter is used to\nprovide the local and global namespace for execution of the complete program.\nThe syntax for a complete Python program is that for file input, described in the next section.\nThe interpreter may also be invoked in interactive mode; in this case, it does\nnot read and execute a complete program but reads and executes one statement\n(possibly compound) at a time. The initial environment is identical to that of\na complete program; each statement is executed in the namespace of\n__main__\n.\nA complete program can be passed to the interpreter\nin three forms: with the -c\nstring command line option, as a file\npassed as the first command line argument, or as standard input. If the file\nor standard input is a tty device, the interpreter enters interactive mode;\notherwise, it executes the file as a complete program.\n9.2. 
File input¶
All input read from non-interactive files has the same form:

file_input: (NEWLINE | statement)* ENDMARKER

This syntax is used in the following situations:
- when parsing a complete Python program (from a file or from a string);
- when parsing a module;
- when parsing a string passed to the exec() function.
9.3. Interactive input¶
Input in interactive mode is parsed using the following grammar:

interactive_input: [stmt_list] NEWLINE | compound_stmt NEWLINE | ENDMARKER

Note that a (top-level) compound statement must be followed by a blank line in interactive mode; this is needed to help the parser detect the end of the input.
9.4. Expression input¶
eval() is used for expression input. It ignores leading whitespace. The string argument to eval() must have the following form:

eval_input: expression_list NEWLINE* ENDMARKER", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 598}
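The grammars above map onto the modes of the built-in compile(): file_input corresponds to mode "exec" and eval_input to mode "eval". A small sketch of both, including the leading-whitespace and trailing-NEWLINE rules for expression input:

```python
# file_input: a sequence of statements, compiled in "exec" mode.
ns = {}
exec(compile("x = 1\ny = x + 1\n", "<module>", "exec"), ns)

# eval_input: a single expression; eval() ignores leading whitespace
# and the grammar allows trailing NEWLINEs.
total = eval("   x + y", ns)
two = eval("1 + 1\n\n")
```

Mode "single" similarly corresponds to the interactive_input grammar used by the REPL.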
{"url": "https://docs.python.org/3/library/email.mime.html", "title": "email.mime: Creating email and MIME objects from scratch", "content": "email.mime: Creating email and MIME objects from scratch¶
Source code: Lib/email/mime/
This module is part of the legacy (Compat32) email API. Its functionality is partially replaced by the contentmanager in the new API, but in certain applications these classes may still be useful, even in non-legacy code.
Ordinarily, you get a message object structure by passing a file or some text to a parser, which parses the text and returns the root message object. However you can also build a complete message structure from scratch, or even individual Message objects by hand. In fact, you can also take an existing structure and add new Message objects, move them around, etc. This makes a very convenient interface for slicing-and-dicing MIME messages.
You can create a new object structure by creating Message instances, adding attachments and all the appropriate headers manually. For MIME messages though, the email package provides some convenient subclasses to make things easier.
Here are the classes:
- class email.mime.base.MIMEBase(_maintype, _subtype, *, policy=compat32, **_params)¶
Module: email.mime.base
This is the base class for all the MIME-specific subclasses of Message. Ordinarily you won’t create instances specifically of MIMEBase, although you could. MIMEBase is provided primarily as a convenient base class for more specific MIME-aware subclasses. _maintype is the Content-Type major type (e.g. text or image), and _subtype is the Content-Type minor type (e.g. plain or gif).
_params is a parameter key/value dictionary and is passed directly to Message.add_header. If policy is specified (it defaults to the compat32 policy), it will be passed to Message.
The MIMEBase class always adds a Content-Type header (based on _maintype, _subtype, and _params), and a MIME-Version header (always set to 1.0).
Changed in version 3.6: Added policy keyword-only parameter.
- class email.mime.nonmultipart.MIMENonMultipart¶
Module: email.mime.nonmultipart
A subclass of MIMEBase, this is an intermediate base class for MIME messages that are not multipart. The primary purpose of this class is to prevent the use of the attach() method, which only makes sense for multipart messages. If attach() is called, a MultipartConversionError exception is raised.
- class email.mime.multipart.MIMEMultipart(_subtype='mixed', boundary=None, _subparts=None, *, policy=compat32, **_params)¶
Module: email.mime.multipart
A subclass of MIMEBase, this is an intermediate base class for MIME messages that are multipart. Optional _subtype defaults to mixed, but can be used to specify the subtype of the message. A Content-Type header of multipart/_subtype will be added to the message object. A MIME-Version header will also be added.
Optional boundary is the multipart boundary string. When None (the default), the boundary is calculated when needed (for example, when the message is serialized).
_subparts is a sequence of initial subparts for the payload. It must be possible to convert this sequence to a list.
You can always attach new subparts to the message by using the Message.attach method.
Optional policy argument defaults to compat32.
Additional parameters for the Content-Type header are taken from the keyword arguments, or passed into the _params argument, which is a keyword dictionary.
Changed in version 3.6: Added policy keyword-only parameter.
- class email.mime.application.MIMEApplication(_data, _subtype='octet-stream', _encoder=email.encoders.encode_base64, *, policy=compat32, **_params)¶
Module: email.mime.application
A subclass of MIMENonMultipart, the MIMEApplication class is used to represent MIME message objects of major type application. _data contains the bytes for the raw application data. Optional _subtype specifies the MIME subtype and defaults to octet-stream.
Optional _encoder is a callable (i.e. function) which will perform the actual encoding of the data for transport. This callable takes one argument, which is the MIMEApplication instance. It should use get_payload() and set_payload() to change the payload to encoded form. It should also add any Content-Transfer-Encoding or other headers to the message object as necessary. The default encoding is base64. See the email.encoders module for a list of the built-in encoders.
Optional policy argument defaults to compat32.
_params are passed straight through to the base class constructor.
Changed in version 3.6: Added policy keyword-only parameter.
- class email.mime.audio.MIMEAudio(_audiodata, _subtype=None, _encoder=email.encoders.encode_base64, *, policy=compat32, **_params)¶
Module: email.mime.audio
A subclass of MIMENonMultipart, the MIMEAudio class is used to create MIME message objects of major type audio. _audiodata contains the bytes for the raw audio data. If this data can be decoded as au, wav, aiff, or aifc, then the subtype will be automatically included in the Content-Type header.
Otherwise you can explicitly specify the audio subtype via the _subtype argument. If the minor type could not be guessed and _subtype was not given, then TypeError is raised.
Optional _encoder is a callable (i.e. function) which will perform the actual encoding of the audio data for transport. This callable takes one argument, which is the MIMEAudio instance. It should use get_payload() and set_payload() to change the payload to encoded form. It should also add any Content-Transfer-Encoding or other headers to the message object as necessary. The default encoding is base64. See the email.encoders module for a list of the built-in encoders.
Optional policy argument defaults to compat32.
_params are passed straight through to the base class constructor.
Changed in version 3.6: Added policy keyword-only parameter.
- class email.mime.image.MIMEImage(_imagedata, _subtype=None, _encoder=email.encoders.encode_base64, *, policy=compat32, **_params)¶
Module: email.mime.image
A subclass of MIMENonMultipart, the MIMEImage class is used to create MIME message objects of major type image. _imagedata contains the bytes for the raw image data. If this data type can be detected (jpeg, png, gif, tiff, rgb, pbm, pgm, ppm, rast, xbm, bmp, webp, and exr attempted), then the subtype will be automatically included in the Content-Type header. Otherwise you can explicitly specify the image subtype via the _subtype argument. If the minor type could not be guessed and _subtype was not given, then TypeError is raised.
Optional _encoder is a callable (i.e. function) which will perform the actual encoding of the image data for transport. This callable takes one argument, which is the MIMEImage instance. It should use get_payload() and set_payload() to change the payload to encoded form. It should also add any Content-Transfer-Encoding or other headers to the message object as necessary. The default encoding is base64.
See the email.encoders module for a list of the built-in encoders.
Optional policy argument defaults to compat32.
_params are passed straight through to the MIMEBase constructor.
Changed in version 3.6: Added policy keyword-only parameter.
- class email.mime.message.MIMEMessage(_msg, _subtype='rfc822', *, policy=compat32)¶
Module: email.mime.message
A subclass of MIMENonMultipart, the MIMEMessage class is used to create MIME objects of main type message. _msg is used as the payload, and must be an instance of class Message (or a subclass thereof), otherwise a TypeError is raised.
Optional _subtype sets the subtype of the message; it defaults to rfc822.
Optional policy argument defaults to compat32.
Changed in version 3.6: Added policy keyword-only parameter.
- class email.mime.text.MIMEText(_text, _subtype='plain', _charset=None, *, policy=compat32)¶
Module: email.mime.text
A subclass of MIMENonMultipart, the MIMEText class is used to create MIME objects of major type text. _text is the string for the payload. _subtype is the minor type and defaults to plain. _charset is the character set of the text and is passed as an argument to the MIMENonMultipart constructor; it defaults to us-ascii if the string contains only ascii code points, and utf-8 otherwise. The _charset parameter accepts either a string or a Charset instance.
Unless the _charset argument is explicitly set to None, the MIMEText object created will have both a Content-Type header with a charset parameter, and a Content-Transfer-Encoding header. This means that a subsequent set_payload call will not result in an encoded payload, even if a charset is passed in the set_payload command.
You can “reset” this behavior by deleting the Content-Transfer-Encoding header, after which a set_payload call will automatically encode the new payload (and add a new Content-Transfer-Encoding header).
Optional policy argument defaults to compat32.
Changed in version 3.5: _charset also accepts Charset instances.
Changed in version 3.6: Added policy keyword-only parameter.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 2208}
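Putting the classes above together, a multipart message can be built from scratch; the subject and body strings here are illustrative:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# MIMEMultipart defaults to the multipart/mixed subtype and adds the
# Content-Type and MIME-Version headers automatically.
msg = MIMEMultipart()
msg["Subject"] = "Report"
msg.attach(MIMEText("Plain-text body", "plain"))   # text/plain part
msg.attach(MIMEText("<p>HTML body</p>", "html"))   # text/html part

part_types = [part.get_content_type() for part in msg.get_payload()]
```

Serializing with msg.as_string() would then emit the boundary-delimited MIME document.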
{"url": "https://docs.python.org/3/whatsnew/2.3.html", "title": "What\u2019s New in Python 2.3", "content": "What\u2019s New in Python 2.3\u00b6\n- Author:\nA.M. Kuchling\nThis article explains the new features in Python 2.3. Python 2.3 was released on July 29, 2003.\nThe main themes for Python 2.3 are polishing some of the features added in 2.2,\nadding various small but useful enhancements to the core language, and expanding\nthe standard library. The new object model introduced in the previous version\nhas benefited from 18 months of bugfixes and from optimization efforts that have\nimproved the performance of new-style classes. A few new built-in functions\nhave been added such as sum()\nand enumerate()\n. The in\noperator can now be used for substring searches (e.g. \"ab\" in \"abc\"\nreturns\nTrue\n).\nSome of the many new library features include Boolean, set, heap, and date/time data types, the ability to import modules from ZIP-format archives, metadata support for the long-awaited Python catalog, an updated version of IDLE, and modules for logging messages, wrapping text, parsing CSV files, processing command-line options, using BerkeleyDB databases\u2026 the list of new and enhanced modules is lengthy.\nThis article doesn\u2019t attempt to provide a complete specification of the new features, but instead provides a convenient overview. For full details, you should refer to the documentation for Python 2.3, such as the Python Library Reference and the Python Reference Manual. If you want to understand the complete implementation and design rationale, refer to the PEP for a particular new feature.\nPEP 218: A Standard Set Datatype\u00b6\nThe new sets\nmodule contains an implementation of a set datatype. The\nSet\nclass is for mutable sets, sets that can have members added and\nremoved. 
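The sets module described here was itself later superseded by the built-in set and frozenset types (added in Python 2.4; the module was removed in Python 3). The same mutable/immutable split looks like this today:

```python
# `set` plays the role of sets.Set; `frozenset` that of ImmutableSet.
s = {1, 2, 3}
s.add(5)
s.remove(3)

# frozenset is hashable, so it can serve as a dictionary key,
# just as ImmutableSet could.
table = {frozenset([1, 2]): "pair"}
label = table[frozenset([2, 1])]   # order is irrelevant for equality
```

The operator notations shown below (&, |, ^) carried over to the built-in types unchanged.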
The ImmutableSet\nclass is for sets that can\u2019t be modified,\nand instances of ImmutableSet\ncan therefore be used as dictionary keys.\nSets are built on top of dictionaries, so the elements within a set must be\nhashable.\nHere\u2019s a simple example:\n>>> import sets\n>>> S = sets.Set([1,2,3])\n>>> S\nSet([1, 2, 3])\n>>> 1 in S\nTrue\n>>> 0 in S\nFalse\n>>> S.add(5)\n>>> S.remove(3)\n>>> S\nSet([1, 2, 5])\n>>>\nThe union and intersection of sets can be computed with the union()\nand\nintersection()\nmethods; an alternative notation uses the bitwise operators\n&\nand |\n. Mutable sets also have in-place versions of these methods,\nunion_update()\nand intersection_update()\n.\n>>> S1 = sets.Set([1,2,3])\n>>> S2 = sets.Set([4,5,6])\n>>> S1.union(S2)\nSet([1, 2, 3, 4, 5, 6])\n>>> S1 | S2 # Alternative notation\nSet([1, 2, 3, 4, 5, 6])\n>>> S1.intersection(S2)\nSet([])\n>>> S1 & S2 # Alternative notation\nSet([])\n>>> S1.union_update(S2)\n>>> S1\nSet([1, 2, 3, 4, 5, 6])\n>>>\nIt\u2019s also possible to take the symmetric difference of two sets. This is the\nset of all elements in the union that aren\u2019t in the intersection. Another way\nof putting it is that the symmetric difference contains all elements that are in\nexactly one set. Again, there\u2019s an alternative notation (^\n), and an\nin-place version with the ungainly name symmetric_difference_update()\n.\n>>> S1 = sets.Set([1,2,3,4])\n>>> S2 = sets.Set([3,4,5,6])\n>>> S1.symmetric_difference(S2)\nSet([1, 2, 5, 6])\n>>> S1 ^ S2\nSet([1, 2, 5, 6])\n>>>\nThere are also issubset()\nand issuperset()\nmethods for checking\nwhether one set is a subset or superset of another:\n>>> S1 = sets.Set([1,2,3])\n>>> S2 = sets.Set([2,3])\n>>> S2.issubset(S1)\nTrue\n>>> S1.issubset(S2)\nFalse\n>>> S1.issuperset(S2)\nTrue\n>>>\nSee also\n- PEP 218 - Adding a Built-In Set Object Type\nPEP written by Greg V. Wilson. Implemented by Greg V. 
Wilson, Alex Martelli, and GvR.\nPEP 255: Simple Generators\u00b6\nIn Python 2.2, generators were added as an optional feature, to be enabled by a\nfrom __future__ import generators\ndirective. In 2.3 generators no longer\nneed to be specially enabled, and are now always present; this means that\nyield\nis now always a keyword. The rest of this section is a copy of\nthe description of generators from the \u201cWhat\u2019s New in Python 2.2\u201d document; if\nyou read it back when Python 2.2 came out, you can skip the rest of this\nsection.\nYou\u2019re doubtless familiar with how function calls work in Python or C. When you\ncall a function, it gets a private namespace where its local variables are\ncreated. When the function reaches a return\nstatement, the local\nvariables are destroyed and the resulting value is returned to the caller. A\nlater call to the same function will get a fresh new set of local variables.\nBut, what if the local variables weren\u2019t thrown away on exiting a function?\nWhat if you could later resume the function where it left off? This is what\ngenerators provide; they can be thought of as resumable functions.\nHere\u2019s the simplest example of a generator function:\ndef generate_ints(N):\nfor i in range(N):\nyield i\nA new keyword, yield\n, was introduced for generators. Any function\ncontaining a yield\nstatement is a generator function; this is\ndetected by Python\u2019s bytecode compiler which compiles the function specially as\na result.\nWhen you call a generator function, it doesn\u2019t return a single value; instead it\nreturns a generator object that supports the iterator protocol. On executing\nthe yield\nstatement, the generator outputs the value of i\n,\nsimilar to a return\nstatement. The big difference between\nyield\nand a return\nstatement is that on reaching a\nyield\nthe generator\u2019s state of execution is suspended and local\nvariables are preserved. 
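This suspend-and-resume behaviour can be checked directly in modern Python, where the 2.x `.next()` method became the `next()` built-in; a minimal runnable sketch:

```python
def generate_ints(N):
    # Each yield suspends the function; its local variables are preserved.
    for i in range(N):
        yield i

gen = generate_ints(3)
print(next(gen))  # 0
print(next(gen))  # 1 -- execution resumed right after the yield
print(next(gen))  # 2
try:
    next(gen)     # the function body falls off the end...
except StopIteration:
    print("exhausted")  # ...so StopIteration is raised
```

Iterating the generator with a `for` loop or `list()` consumes it the same way, stopping when `StopIteration` is raised internally.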
On the next call to the generator's .next() method, the function will resume executing immediately after the yield statement. (For complicated reasons, the yield statement isn't allowed inside the try block of a try…finally statement; read PEP 255 for a full explanation of the interaction between yield and exceptions.)

Here's a sample usage of the generate_ints() generator:

>>> gen = generate_ints(3)
>>> gen
<generator object at 0x...>
>>> gen.next()
0
>>> gen.next()
1
>>> gen.next()
2
>>> gen.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 2, in generate_ints
StopIteration

You could equally write for i in generate_ints(5), or a, b, c = generate_ints(3).

Inside a generator function, the return statement can only be used without a value, and signals the end of the procession of values; afterwards the generator cannot return any further values. return with a value, such as return 5, is a syntax error inside a generator function. The end of the generator's results can also be indicated by raising StopIteration manually, or by just letting the flow of execution fall off the bottom of the function.

You could achieve the effect of generators manually by writing your own class and storing all the local variables of the generator as instance variables. For example, returning a list of integers could be done by setting self.count to 0, and having the next() method increment self.count and return it. However, for a moderately complicated generator, writing a corresponding class would be much messier. Lib/test/test_generators.py contains a number of more interesting examples.
The simplest one implements an in-order traversal of a tree using generators recursively.

# A recursive generator that generates Tree leaves in in-order.
def inorder(t):
    if t:
        for x in inorder(t.left):
            yield x
        yield t.label
        for x in inorder(t.right):
            yield x

Two other examples in Lib/test/test_generators.py produce solutions for the N-Queens problem (placing N queens on an N×N chess board so that no queen threatens another) and the Knight's Tour (a route that takes a knight to every square of an N×N chessboard without visiting any square twice).

The idea of generators comes from other programming languages, especially Icon (https://www2.cs.arizona.edu/icon/), where the idea of generators is central. In Icon, every expression and function call behaves like a generator. One example from "An Overview of the Icon Programming Language" at https://www2.cs.arizona.edu/icon/docs/ipd266.htm gives an idea of what this looks like:

sentence := "Store it in the neighboring harbor"
if (i := find("or", sentence)) > 5 then write(i)

In Icon the find() function returns the indexes at which the substring "or" is found: 3, 23, 33. In the if statement, i is first assigned a value of 3, but 3 is less than 5, so the comparison fails, and Icon retries it with the second value of 23. 23 is greater than 5, so the comparison now succeeds, and the code prints the value 23 to the screen.

Python doesn't go nearly as far as Icon in adopting generators as a central concept. Generators are considered part of the core Python language, but learning or using them isn't compulsory; if they don't solve any problems that you have, feel free to ignore them.
One novel feature of Python\u2019s interface as compared to Icon\u2019s is that a generator\u2019s state is represented as a concrete object (the iterator) that can be passed around to other functions or stored in a data structure.\nSee also\n- PEP 255 - Simple Generators\nWritten by Neil Schemenauer, Tim Peters, Magnus Lie Hetland. Implemented mostly by Neil Schemenauer and Tim Peters, with other fixes from the Python Labs crew.\nPEP 263: Source Code Encodings\u00b6\nPython source files can now be declared as being in different character set encodings. Encodings are declared by including a specially formatted comment in the first or second line of the source file. For example, a UTF-8 file can be declared with:\n#!/usr/bin/env python\n# -*- coding: UTF-8 -*-\nWithout such an encoding declaration, the default encoding used is 7-bit ASCII.\nExecuting or importing modules that contain string literals with 8-bit\ncharacters and have no encoding declaration will result in a\nDeprecationWarning\nbeing signalled by Python 2.3; in 2.4 this will be a\nsyntax error.\nThe encoding declaration only affects Unicode string literals, which will be converted to Unicode using the specified encoding. Note that Python identifiers are still restricted to ASCII characters, so you can\u2019t have variable names that use characters outside of the usual alphanumerics.\nSee also\n- PEP 263 - Defining Python Source Code Encodings\nWritten by Marc-Andr\u00e9 Lemburg and Martin von L\u00f6wis; implemented by Suzuki Hisao and Martin von L\u00f6wis.\nPEP 273: Importing Modules from ZIP Archives\u00b6\nThe new zipimport\nmodule adds support for importing modules from a\nZIP-format archive. 
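A self-contained sketch of the same mechanism in modern Python, where archives on sys.path are still searched like directories (the archive location and the module name zipdemo are invented for this demonstration):

```python
import os
import sys
import tempfile
import zipfile

# Build a throwaway archive containing a single module, "zipdemo".
archive = os.path.join(tempfile.mkdtemp(), "example.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("zipdemo.py", "GREETING = 'hello from the zip'\n")

# An archive filename on sys.path is treated as an import location.
sys.path.insert(0, archive)
import zipdemo

print(zipdemo.GREETING)   # hello from the zip
print(zipdemo.__file__)   # a path pointing inside the archive
```

The `__file__` attribute of the imported module names a path inside the archive, just as the 2.3 transcript shows.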
You don\u2019t need to import the module explicitly; it will be\nautomatically imported if a ZIP archive\u2019s filename is added to sys.path\n.\nFor example:\namk@nyman:~/src/python$ unzip -l /tmp/example.zip\nArchive: /tmp/example.zip\nLength Date Time Name\n-------- ---- ---- ----\n8467 11-26-02 22:30 jwzthreading.py\n-------- -------\n8467 1 file\namk@nyman:~/src/python$ ./python\nPython 2.3 (#1, Aug 1 2003, 19:54:32)\n>>> import sys\n>>> sys.path.insert(0, '/tmp/example.zip') # Add .zip file to front of path\n>>> import jwzthreading\n>>> jwzthreading.__file__\n'/tmp/example.zip/jwzthreading.py'\n>>>\nAn entry in sys.path\ncan now be the filename of a ZIP archive. The ZIP\narchive can contain any kind of files, but only files named *.py\n,\n*.pyc\n, or *.pyo\ncan be imported. If an archive only contains\n*.py\nfiles, Python will not attempt to modify the archive by adding the\ncorresponding *.pyc\nfile, meaning that if a ZIP archive doesn\u2019t contain\n*.pyc\nfiles, importing may be rather slow.\nA path within the archive can also be specified to only import from a\nsubdirectory; for example, the path /tmp/example.zip/lib/\nwould only\nimport from the lib/\nsubdirectory within the archive.\nSee also\n- PEP 273 - Import Modules from Zip Archives\nWritten by James C. Ahlstrom, who also provided an implementation. Python 2.3 follows the specification in PEP 273, but uses an implementation written by Just van Rossum that uses the import hooks described in PEP 302. See section PEP 302: New Import Hooks for a description of the new import hooks.\nPEP 277: Unicode file name support for Windows NT\u00b6\nOn Windows NT, 2000, and XP, the system stores file names as Unicode strings. 
Traditionally, Python has represented file names as byte strings, which is inadequate because it renders some file names inaccessible.\nPython now allows using arbitrary Unicode strings (within the limitations of the\nfile system) for all functions that expect file names, most notably the\nopen()\nbuilt-in function. If a Unicode string is passed to\nos.listdir()\n, Python now returns a list of Unicode strings. A new\nfunction, os.getcwdu()\n, returns the current directory as a Unicode string.\nByte strings still work as file names, and on Windows Python will transparently\nconvert them to Unicode using the mbcs\nencoding.\nOther systems also allow Unicode strings as file names but convert them to byte\nstrings before passing them to the system, which can cause a UnicodeError\nto be raised. Applications can test whether arbitrary Unicode strings are\nsupported as file names by checking os.path.supports_unicode_filenames\n,\na Boolean value.\nUnder MacOS, os.listdir()\nmay now return Unicode filenames.\nSee also\n- PEP 277 - Unicode file name support for Windows NT\nWritten by Neil Hodgson; implemented by Neil Hodgson, Martin von L\u00f6wis, and Mark Hammond.\nPEP 278: Universal Newline Support\u00b6\nThe three major operating systems used today are Microsoft Windows, Apple\u2019s Macintosh OS, and the various Unix derivatives. A minor irritation of cross-platform work is that these three platforms all use different characters to mark the ends of lines in text files. Unix uses the linefeed (ASCII character 10), MacOS uses the carriage return (ASCII character 13), and Windows uses a two-character sequence of a carriage return plus a newline.\nPython\u2019s file objects can now support end of line conventions other than the\none followed by the platform on which Python is running. Opening a file with\nthe mode 'U'\nor 'rU'\nwill open a file for reading in universal\nnewlines mode. 
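In modern Python 3 this behaviour became the default for text-mode files (the 'U' mode flag is gone, replaced by the `newline` parameter of `open()`); a quick sketch using a temporary file:

```python
import tempfile

# Write the three historical line-ending conventions into one file.
with tempfile.NamedTemporaryFile(mode="wb", suffix=".txt", delete=False) as f:
    f.write(b"unix\nmac\rwindows\r\n")
    path = f.name

# Text mode with the default newline=None translates all three to '\n'.
with open(path) as f:
    lines = f.readlines()

print(lines)   # ['unix\n', 'mac\n', 'windows\n']
```

Passing `newline=''` instead disables the translation, which is what the csv module requires.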
All three line ending conventions will be translated to a\n'\\n'\nin the strings returned by the various file methods such as\nread()\nand readline()\n.\nUniversal newline support is also used when importing modules and when executing\na file with the execfile()\nfunction. This means that Python modules can\nbe shared between all three operating systems without needing to convert the\nline-endings.\nThis feature can be disabled when compiling Python by specifying the\n--without-universal-newlines\nswitch when running Python\u2019s\nconfigure script.\nSee also\n- PEP 278 - Universal Newline Support\nWritten and implemented by Jack Jansen.\nPEP 279: enumerate()\u00b6\nA new built-in function, enumerate()\n, will make certain loops a bit\nclearer. enumerate(thing)\n, where thing is either an iterator or a\nsequence, returns an iterator that will return (0, thing[0])\n, (1,\nthing[1])\n, (2, thing[2])\n, and so forth.\nA common idiom to change every element of a list looks like this:\nfor i in range(len(L)):\nitem = L[i]\n# ... compute some result based on item ...\nL[i] = result\nThis can be rewritten using enumerate()\nas:\nfor i, item in enumerate(L):\n# ... compute some result based on item ...\nL[i] = result\nSee also\n- PEP 279 - The enumerate() built-in function\nWritten and implemented by Raymond D. Hettinger.\nPEP 282: The logging Package\u00b6\nA standard package for writing logs, logging\n, has been added to Python\n2.3. It provides a powerful and flexible mechanism for generating logging\noutput which can then be filtered and processed in various ways. A\nconfiguration file written in a standard format can be used to control the\nlogging behavior of a program. Python includes handlers that will write log\nrecords to standard error or to a file or socket, send them to the system log,\nor even e-mail them to a particular address; of course, it\u2019s also possible to\nwrite your own handler classes.\nThe Logger\nclass is the primary class. 
Most application code will deal\nwith one or more Logger\nobjects, each one used by a particular\nsubsystem of the application. Each Logger\nis identified by a name, and\nnames are organized into a hierarchy using .\nas the component separator.\nFor example, you might have Logger\ninstances named server\n,\nserver.auth\nand server.network\n. The latter two instances are below\nserver\nin the hierarchy. This means that if you turn up the verbosity for\nserver\nor direct server\nmessages to a different handler, the changes\nwill also apply to records logged to server.auth\nand server.network\n.\nThere\u2019s also a root Logger\nthat\u2019s the parent of all other loggers.\nFor simple uses, the logging\npackage contains some convenience functions\nthat always use the root log:\nimport logging\nlogging.debug('Debugging information')\nlogging.info('Informational message')\nlogging.warning('Warning:config file %s not found', 'server.conf')\nlogging.error('Error occurred')\nlogging.critical('Critical error -- shutting down')\nThis produces the following output:\nWARNING:root:Warning:config file server.conf not found\nERROR:root:Error occurred\nCRITICAL:root:Critical error -- shutting down\nIn the default configuration, informational and debugging messages are\nsuppressed and the output is sent to standard error. You can enable the display\nof informational and debugging messages by calling the setLevel()\nmethod\non the root logger.\nNotice the warning()\ncall\u2019s use of string formatting operators; all of the\nfunctions for logging messages take the arguments (msg, arg1, arg2, ...)\nand\nlog the string resulting from msg % (arg1, arg2, ...)\n.\nThere\u2019s also an exception()\nfunction that records the most recent\ntraceback. 
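The name hierarchy and upward propagation described above can be sketched with the modern logging API; the logger names and the small collecting handler here are invented for the example:

```python
import logging

records = []

class ListHandler(logging.Handler):
    # Collects LogRecord instances so the example can inspect them.
    def emit(self, record):
        records.append(record)

server_log = logging.getLogger("server")
server_log.setLevel(logging.DEBUG)
server_log.addHandler(ListHandler())

# A record logged to a child logger propagates up to the "server" handler.
logging.getLogger("server.auth").warning("bad password for %s", "alice")

print(records[0].name)          # server.auth
print(records[0].getMessage())  # bad password for alice
```

Setting `propagate = False` on the child logger would stop the record before it reached the parent's handler.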
Any of the other functions will also record the traceback if you\nspecify a true value for the keyword argument exc_info.\ndef f():\ntry: 1/0\nexcept: logging.exception('Problem recorded')\nf()\nThis produces the following output:\nERROR:root:Problem recorded\nTraceback (most recent call last):\nFile \"t.py\", line 6, in f\n1/0\nZeroDivisionError: integer division or modulo by zero\nSlightly more advanced programs will use a logger other than the root logger.\nThe getLogger(name)\nfunction is used to get a particular log, creating\nit if it doesn\u2019t exist yet. getLogger(None)\nreturns the root logger.\nlog = logging.getLogger('server')\n...\nlog.info('Listening on port %i', port)\n...\nlog.critical('Disk full')\n...\nLog records are usually propagated up the hierarchy, so a message logged to\nserver.auth\nis also seen by server\nand root\n, but a Logger\ncan prevent this by setting its propagate\nattribute to False\n.\nThere are more classes provided by the logging\npackage that can be\ncustomized. When a Logger\ninstance is told to log a message, it\ncreates a LogRecord\ninstance that is sent to any number of different\nHandler\ninstances. Loggers and handlers can also have an attached list\nof filters, and each filter can cause the LogRecord\nto be ignored or\ncan modify the record before passing it along. When they\u2019re finally output,\nLogRecord\ninstances are converted to text by a Formatter\nclass. All of these classes can be replaced by your own specially written\nclasses.\nWith all of these features the logging\npackage should provide enough\nflexibility for even the most complicated applications. This is only an\nincomplete overview of its features, so please see the package\u2019s reference\ndocumentation for all of the details. Reading PEP 282 will also be helpful.\nSee also\n- PEP 282 - A Logging System\nWritten by Vinay Sajip and Trent Mick; implemented by Vinay Sajip.\nPEP 285: A Boolean Type\u00b6\nA Boolean type was added to Python 2.3. 
Two new constants were added to the\n__builtin__\nmodule, True\nand False\n. (True\nand\nFalse\nconstants were added to the built-ins in Python 2.2.1, but the\n2.2.1 versions are simply set to integer values of 1 and 0 and aren\u2019t a\ndifferent type.)\nThe type object for this new type is named bool\n; the constructor for it\ntakes any Python value and converts it to True\nor False\n.\n>>> bool(1)\nTrue\n>>> bool(0)\nFalse\n>>> bool([])\nFalse\n>>> bool( (1,) )\nTrue\nMost of the standard library modules and built-in functions have been changed to return Booleans.\n>>> obj = []\n>>> hasattr(obj, 'append')\nTrue\n>>> isinstance(obj, list)\nTrue\n>>> isinstance(obj, tuple)\nFalse\nPython\u2019s Booleans were added with the primary goal of making code clearer. For\nexample, if you\u2019re reading a function and encounter the statement return 1\n,\nyou might wonder whether the 1\nrepresents a Boolean truth value, an index,\nor a coefficient that multiplies some other quantity. If the statement is\nreturn True\n, however, the meaning of the return value is quite clear.\nPython\u2019s Booleans were not added for the sake of strict type-checking. A very\nstrict language such as Pascal would also prevent you performing arithmetic with\nBooleans, and would require that the expression in an if\nstatement\nalways evaluate to a Boolean result. Python is not this strict and never will\nbe, as PEP 285 explicitly says. This means you can still use any expression\nin an if\nstatement, even ones that evaluate to a list or tuple or\nsome random object. 
The Boolean type is a subclass of the int\nclass so\nthat arithmetic using a Boolean still works.\n>>> True + 1\n2\n>>> False + 1\n1\n>>> False * 75\n0\n>>> True * 75\n75\nTo sum up True\nand False\nin a sentence: they\u2019re alternative\nways to spell the integer values 1 and 0, with the single difference that\nstr()\nand repr()\nreturn the strings 'True'\nand 'False'\ninstead of '1'\nand '0'\n.\nSee also\n- PEP 285 - Adding a bool type\nWritten and implemented by GvR.\nPEP 293: Codec Error Handling Callbacks\u00b6\nWhen encoding a Unicode string into a byte string, unencodable characters may be\nencountered. So far, Python has allowed specifying the error processing as\neither \u201cstrict\u201d (raising UnicodeError\n), \u201cignore\u201d (skipping the\ncharacter), or \u201creplace\u201d (using a question mark in the output string), with\n\u201cstrict\u201d being the default behavior. It may be desirable to specify alternative\nprocessing of such errors, such as inserting an XML character reference or HTML\nentity reference into the converted string.\nPython now has a flexible framework to add different processing strategies. New\nerror handlers can be added with codecs.register_error()\n, and codecs then\ncan access the error handler with codecs.lookup_error()\n. An equivalent C\nAPI has been added for codecs written in C. The error handler gets the necessary\nstate information such as the string being converted, the position in the string\nwhere the error was detected, and the target encoding. 
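This framework survives unchanged in Python 3. A minimal sketch that registers a custom handler and exercises the standard backslashreplace and xmlcharrefreplace handlers; the handler name hex_replace is invented for the example:

```python
import codecs

def hex_replace(exc):
    # The handler receives the UnicodeEncodeError and must return a
    # (replacement, resume_position) tuple.
    if not isinstance(exc, UnicodeEncodeError):
        raise exc
    bad = exc.object[exc.start:exc.end]
    replacement = "".join("\\x%02x" % b for b in bad.encode("utf-8"))
    return (replacement, exc.end)

codecs.register_error("hex_replace", hex_replace)

print("café".encode("ascii", "hex_replace"))        # b'caf\\xc3\\xa9'
print("café".encode("ascii", "backslashreplace"))   # b'caf\\xe9'
print("café".encode("ascii", "xmlcharrefreplace"))  # b'caf&#233;'
```

`codecs.lookup_error("hex_replace")` returns the registered callable, exactly as the PEP describes.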
The handler can then\neither raise an exception or return a replacement string.\nTwo additional error handlers have been implemented using this framework: \u201cbackslashreplace\u201d uses Python backslash quoting to represent unencodable characters and \u201cxmlcharrefreplace\u201d emits XML character references.\nSee also\n- PEP 293 - Codec Error Handling Callbacks\nWritten and implemented by Walter D\u00f6rwald.\nPEP 301: Package Index and Metadata for Distutils\u00b6\nSupport for the long-requested Python catalog makes its first appearance in 2.3.\nThe heart of the catalog is the new Distutils register command.\nRunning python setup.py register\nwill collect the metadata describing a\npackage, such as its name, version, maintainer, description, &c., and send it to\na central catalog server. The resulting catalog is available from\nhttps://pypi.org.\nTo make the catalog a bit more useful, a new optional classifiers keyword\nargument has been added to the Distutils setup()\nfunction. A list of\nTrove-style strings can be supplied to help\nclassify the software.\nHere\u2019s an example setup.py\nwith classifiers, written to be compatible\nwith older versions of the Distutils:\nfrom distutils import core\nkw = {'name': \"Quixote\",\n'version': \"0.5.1\",\n'description': \"A highly Pythonic Web application framework\",\n# ...\n}\nif (hasattr(core, 'setup_keywords') and\n'classifiers' in core.setup_keywords):\nkw['classifiers'] = \\\n['Topic :: Internet :: WWW/HTTP :: Dynamic Content',\n'Environment :: No Input/Output (Daemon)',\n'Intended Audience :: Developers'],\ncore.setup(**kw)\nThe full list of classifiers can be obtained by running python setup.py\nregister --list-classifiers\n.\nSee also\n- PEP 301 - Package Index and Metadata for Distutils\nWritten and implemented by Richard Jones.\nPEP 302: New Import Hooks\u00b6\nWhile it\u2019s been possible to write custom import hooks ever since the\nihooks\nmodule was introduced in Python 1.3, no one has ever been 
really happy with it because writing new import hooks is difficult and messy. There have been various proposed alternatives such as the imputil and iu modules, but none of them has ever gained much acceptance, and none of them were easily usable from C code.

PEP 302 borrows ideas from its predecessors, especially from Gordon McMillan's iu module. Three new items are added to the sys module:

- sys.path_hooks is a list of callable objects; most often they'll be classes. Each callable takes a string containing a path and either returns an importer object that will handle imports from this path or raises an ImportError exception if it can't handle this path.
- sys.path_importer_cache caches importer objects for each path, so sys.path_hooks will only need to be traversed once for each path.
- sys.meta_path is a list of importer objects that will be traversed before sys.path is checked. This list is initially empty, but user code can add objects to it. Additional built-in and frozen modules can be imported by an object added to this list.

Importer objects must have a single method, find_module(fullname, path=None). fullname will be a module or package name, e.g. string or distutils.core. find_module() must return a loader object that has a single method, load_module(fullname), that creates and returns the corresponding module object.

Pseudo-code for Python's new import logic, therefore, looks something like this (simplified a bit; see PEP 302 for the full details):

for mp in sys.meta_path:
    loader = mp(fullname)
    if loader is not None:
        module = loader.load_module(fullname)

for path in sys.path:
    for hook in sys.path_hooks:
        try:
            importer = hook(path)
        except ImportError:
            # ImportError, so try the other path hooks
            pass
        else:
            loader = importer.find_module(fullname)
            module = loader.load_module(fullname)

# Not found!
raise ImportError

See also

- PEP 302 - New Import Hooks
Written by Just van Rossum and Paul Moore.
Implemented by Just van Rossum.\nPEP 305: Comma-separated Files\u00b6\nComma-separated files are a format frequently used for exporting data from databases and spreadsheets. Python 2.3 adds a parser for comma-separated files.\nComma-separated format is deceptively simple at first glance:\nCosts,150,200,3.95\nRead a line and call line.split(',')\n: what could be simpler? But toss in\nstring data that can contain commas, and things get more complicated:\n\"Costs\",150,200,3.95,\"Includes taxes, shipping, and sundry items\"\nA big ugly regular expression can parse this, but using the new csv\npackage is much simpler:\nimport csv\ninput = open('datafile', 'rb')\nreader = csv.reader(input)\nfor line in reader:\nprint line\nThe reader()\nfunction takes a number of different options. The field\nseparator isn\u2019t limited to the comma and can be changed to any character, and so\ncan the quoting and line-ending characters.\nDifferent dialects of comma-separated files can be defined and registered;\ncurrently there are two dialects, both used by Microsoft Excel. A separate\ncsv.writer\nclass will generate comma-separated files from a succession\nof tuples or lists, quoting strings that contain the delimiter.\nSee also\n- PEP 305 - CSV File API\nWritten and implemented by Kevin Altis, Dave Cole, Andrew McNamara, Skip Montanaro, Cliff Wells.\nPEP 307: Pickle Enhancements\u00b6\nThe pickle\nand cPickle\nmodules received some attention during the\n2.3 development cycle. In 2.2, new-style classes could be pickled without\ndifficulty, but they weren\u2019t pickled very compactly; PEP 307 quotes a trivial\nexample where a new-style class results in a pickled string three times longer\nthan that for a classic class.\nThe solution was to invent a new pickle protocol. The pickle.dumps()\nfunction has supported a text-or-binary flag for a long time. 
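In modern Python that flag has become a numeric protocol argument, and pickle.HIGHEST_PROTOCOL still names the newest format; a minimal sketch (the sample dictionary is invented):

```python
import pickle

data = {"name": "example", "values": [1, 2, 3]}

# Protocol 0 is the original ASCII format; HIGHEST_PROTOCOL is the newest.
old_style = pickle.dumps(data, 0)
newest = pickle.dumps(data, pickle.HIGHEST_PROTOCOL)

# Both round-trip the same object, but the encodings differ.
print(pickle.loads(old_style) == data)   # True
print(pickle.loads(newest) == data)      # True
print(len(old_style), len(newest))       # sizes of the two encodings
```

The security note in the text still applies word for word: never unpickle data from an untrusted source, in any protocol or Python version.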
In 2.3, this\nflag is redefined from a Boolean to an integer: 0 is the old text-mode pickle\nformat, 1 is the old binary format, and now 2 is a new 2.3-specific format. A\nnew constant, pickle.HIGHEST_PROTOCOL\n, can be used to select the\nfanciest protocol available.\nUnpickling is no longer considered a safe operation. 2.2\u2019s pickle\nprovided hooks for trying to prevent unsafe classes from being unpickled\n(specifically, a __safe_for_unpickling__\nattribute), but none of this\ncode was ever audited and therefore it\u2019s all been ripped out in 2.3. You should\nnot unpickle untrusted data in any version of Python.\nTo reduce the pickling overhead for new-style classes, a new interface for\ncustomizing pickling was added using three special methods:\n__getstate__()\n, __setstate__()\n, and __getnewargs__()\n. Consult\nPEP 307 for the full semantics of these methods.\nAs a way to compress pickles yet further, it\u2019s now possible to use integer codes instead of long strings to identify pickled classes. The Python Software Foundation will maintain a list of standardized codes; there\u2019s also a range of codes for private use. Currently no codes have been specified.\nSee also\n- PEP 307 - Extensions to the pickle protocol\nWritten and implemented by Guido van Rossum and Tim Peters.\nExtended Slices\u00b6\nEver since Python 1.4, the slicing syntax has supported an optional third \u201cstep\u201d\nor \u201cstride\u201d argument. For example, these are all legal Python syntax:\nL[1:10:2]\n, L[:-1:1]\n, L[::-1]\n. This was added to Python at the\nrequest of the developers of Numerical Python, which uses the third argument\nextensively. 
However, Python's built-in list, tuple, and string sequence types have never supported this feature, raising a TypeError if you tried it. Michael Hudson contributed a patch to fix this shortcoming.

For example, you can now easily extract the elements of a list that have even indexes:

>>> L = range(10)
>>> L[::2]
[0, 2, 4, 6, 8]

Negative values also work to make a copy of the same list in reverse order:

>>> L[::-1]
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

This also works for tuples, arrays, and strings:

>>> s = 'abcd'
>>> s[::2]
'ac'
>>> s[::-1]
'dcba'

If you have a mutable sequence such as a list or an array you can assign to or delete an extended slice, but there are some differences between assignment to extended and regular slices. Assignment to a regular slice can be used to change the length of the sequence:

>>> a = range(3)
>>> a
[0, 1, 2]
>>> a[1:3] = [4, 5, 6]
>>> a
[0, 4, 5, 6]

Extended slices aren't this flexible. When assigning to an extended slice, the list on the right hand side of the statement must contain the same number of items as the slice it is replacing:

>>> a = range(4)
>>> a
[0, 1, 2, 3]
>>> a[::2]
[0, 2]
>>> a[::2] = [0, -1]
>>> a
[0, 1, -1, 3]
>>> a[::2] = [0, 1, 2]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ValueError: attempt to assign sequence of size 3 to extended slice of size 2

Deletion is more straightforward:

>>> a = range(4)
>>> a
[0, 1, 2, 3]
>>> a[::2]
[0, 2]
>>> del a[::2]
>>> a
[1, 3]

One can also now pass slice objects to the __getitem__() methods of the built-in sequences:

>>> range(10).__getitem__(slice(0, 5, 2))
[0, 2, 4]

Or use slice objects directly in subscripts:

>>> range(10)[slice(0, 5, 2)]
[0, 2, 4]

To simplify implementing sequences that support extended slicing, slice objects now have a method indices(length) which, given the length of a sequence, returns a (start, stop, step) tuple that can be passed directly to range().
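The indices() method is unchanged in modern Python; a small illustration:

```python
s = slice(None, None, 2)            # the slice object behind L[::2]
print(s.indices(10))                # (0, 10, 2)
print(list(range(*s.indices(10))))  # [0, 2, 4, 6, 8]

# Out-of-bounds and negative endpoints are clipped the same way
# ordinary slicing clips them.
print(slice(-100, 100).indices(5))       # (0, 5, 1)
print(slice(None, None, -1).indices(4))  # (3, -1, -1)
```

Unpacking the tuple straight into `range()` is exactly the pattern the FakeSeq example below relies on.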
indices()\nhandles omitted and out-of-bounds indices in a\nmanner consistent with regular slices (and this innocuous phrase hides a welter\nof confusing details!). The method is intended to be used like this:\nclass FakeSeq:\n...\ndef calc_item(self, i):\n...\ndef __getitem__(self, item):\nif isinstance(item, slice):\nindices = item.indices(len(self))\nreturn FakeSeq([self.calc_item(i) for i in range(*indices)])\nelse:\nreturn self.calc_item(i)\nFrom this example you can also see that the built-in slice\nobject is\nnow the type object for the slice type, and is no longer a function. This is\nconsistent with Python 2.2, where int\n, str\n, etc., underwent\nthe same change.\nOther Language Changes\u00b6\nHere are all of the changes that Python 2.3 makes to the core Python language.\nThe\nyield\nstatement is now always a keyword, as described in section PEP 255: Simple Generators of this document.A new built-in function\nenumerate()\nwas added, as described in section PEP 279: enumerate() of this document.Two new constants,\nTrue\nandFalse\nwere added along with the built-inbool\ntype, as described in section PEP 285: A Boolean Type of this document.The\nint()\ntype constructor will now return a long integer instead of raising anOverflowError\nwhen a string or floating-point number is too large to fit into an integer. This can lead to the paradoxical result thatisinstance(int(expression), int)\nis false, but that seems unlikely to cause problems in practice.Built-in types now support the extended slicing syntax, as described in section Extended Slices of this document.\nA new built-in function,\nsum(iterable, start=0)\n, adds up the numeric items in the iterable object and returns their sum.sum()\nonly accepts numbers, meaning that you can\u2019t use it to concatenate a bunch of strings. (Contributed by Alex Martelli.)list.insert(pos, value)\nused to insert value at the front of the list when pos was negative. 
The behaviour has now been changed to be consistent with slice indexing, so when pos is -1 the value will be inserted before the last element, and so forth.list.index(value)\n, which searches for value within the list and returns its index, now takes optional start and stop arguments to limit the search to only part of the list.Dictionaries have a new method,\npop(key[, *default*])\n, that returns the value corresponding to key and removes that key/value pair from the dictionary. If the requested key isn\u2019t present in the dictionary, default is returned if it\u2019s specified andKeyError\nraised if it isn\u2019t.>>> d = {1:2} >>> d {1: 2} >>> d.pop(4) Traceback (most recent call last): File \"stdin\", line 1, in ? KeyError: 4 >>> d.pop(1) 2 >>> d.pop(1) Traceback (most recent call last): File \"stdin\", line 1, in ? KeyError: 'pop(): dictionary is empty' >>> d {} >>>\nThere\u2019s also a new class method,\ndict.fromkeys(iterable, value)\n, that creates a dictionary with keys taken from the supplied iterator iterable and all values set to value, defaulting toNone\n.(Patches contributed by Raymond Hettinger.)\nAlso, the\ndict()\nconstructor now accepts keyword arguments to simplify creating small dictionaries:>>> dict(red=1, blue=2, green=3, black=4) {'blue': 2, 'black': 4, 'green': 3, 'red': 1}\n(Contributed by Just van Rossum.)\nThe\nassert\nstatement no longer checks the__debug__\nflag, so you can no longer disable assertions by assigning to__debug__\n. Running Python with the-O\nswitch will still generate code that doesn\u2019t execute any assertions.Most type objects are now callable, so you can use them to create new objects such as functions, classes, and modules. (This means that the\nnew\nmodule can be deprecated in a future Python version, because you can now use the type objects available in thetypes\nmodule.) 
For example, you can create a new module object with the following code:

>>> import types
>>> m = types.ModuleType('abc', 'docstring')
>>> m
<module 'abc' (built-in)>
>>> m.__doc__
'docstring'

- A new warning, PendingDeprecationWarning, was added to indicate features which are in the process of being deprecated. The warning will not be printed by default. To check for use of features that will be deprecated in the future, supply -Walways::PendingDeprecationWarning:: on the command line or use warnings.filterwarnings().
- The process of deprecating string-based exceptions, as in raise "Error occurred", has begun. Raising a string will now trigger PendingDeprecationWarning.
- Using None as a variable name will now result in a SyntaxWarning warning. In a future version of Python, None may finally become a keyword.
- The xreadlines() method of file objects, introduced in Python 2.1, is no longer necessary because files now behave as their own iterator. xreadlines() was originally introduced as a faster way to loop over all the lines in a file, but now you can simply write for line in file_obj. File objects also have a new read-only encoding attribute that gives the encoding used by the file; Unicode strings written to the file will be automatically converted to bytes using the given encoding.
- The method resolution order used by new-style classes has changed, though you'll only notice the difference if you have a really complicated inheritance hierarchy. Classic classes are unaffected by this change. Python 2.2 originally used a topological sort of a class's ancestors, but 2.3 now uses the C3 algorithm as described in the paper "A Monotonic Superclass Linearization for Dylan". To understand the motivation for this change, read Michele Simionato's article "The Python 2.3 Method Resolution Order", or read the thread on python-dev starting with the message at https://mail.python.org/pipermail/python-dev/2002-October/029035.html.
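The difference the C3 linearization makes is easiest to see on a small diamond hierarchy. A minimal sketch, written in present-day syntax (in 2.3, new-style classes had to inherit from object explicitly):

```python
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass

# C3 keeps both direct bases ahead of the shared ancestor A, preserving
# the left-to-right order of D's base list; a naive topological sort
# could have interleaved A earlier.
mro_names = [cls.__name__ for cls in D.__mro__]
print(mro_names)   # ['D', 'B', 'C', 'A', 'object']
```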
Samuele Pedroni first pointed out the problem and also implemented the fix by coding the C3 algorithm.
- Python runs multithreaded programs by switching between threads after executing N bytecodes. The default value for N has been increased from 10 to 100 bytecodes, speeding up single-threaded applications by reducing the switching overhead. Some multithreaded applications may suffer slower response time, but that's easily fixed by setting the limit back to a lower number using sys.setcheckinterval(N). The limit can be retrieved with the new sys.getcheckinterval() function.
- One minor but far-reaching change is that the names of extension types defined by the modules included with Python now contain the module name and a '.' in front of the type name. For example, in Python 2.2, if you created a socket and printed its __class__, you'd get this output:

>>> s = socket.socket()
>>> s.__class__
<type 'socket'>

In 2.3, you get this:

>>> s.__class__
<type '_socket.socket'>

- One of the noted incompatibilities between old- and new-style classes has been removed: you can now assign to the __name__ and __bases__ attributes of new-style classes. There are some restrictions on what can be assigned to __bases__, along the lines of those relating to assigning to an instance's __class__ attribute.

String Changes¶

- The in operator now works differently for strings. Previously, when evaluating X in Y where X and Y are strings, X could only be a single character. That's now changed; X can be a string of any length, and X in Y will return True if X is a substring of Y. If X is the empty string, the result is always True.

>>> 'ab' in 'abcd'
True
>>> 'ad' in 'abcd'
False
>>> '' in 'abcd'
True

Note that this doesn't tell you where the substring starts; if you need that information, use the find() string method.
- The strip(), lstrip(), and rstrip() string methods now have an optional argument for specifying the characters to strip.
The default is still to remove all whitespace characters:

>>> '    abc '.strip()
'abc'
>>> '><><abc<><><>'.strip('<>')
'abc'
>>> '><><abc<><><>\n'.strip('<>')
'abc<><><>\n'
>>> u'\u4000\u4001abc\u4000'.strip(u'\u4000')
u'\u4001abc'
>>>

(Suggested by Simon Brunning and implemented by Walter Dörwald.)
- The startswith() and endswith() string methods now accept negative numbers for the start and end parameters.
- Another new string method is zfill(), originally a function in the string module. zfill() pads a numeric string with zeros on the left until it's the specified width. Note that the % operator is still more flexible and powerful than zfill().

>>> '45'.zfill(4)
'0045'
>>> '12345'.zfill(4)
'12345'
>>> 'goofy'.zfill(6)
'0goofy'

(Contributed by Walter Dörwald.)
- A new type object, basestring, has been added. Both 8-bit strings and Unicode strings inherit from this type, so isinstance(obj, basestring) will return True for either kind of string. It's a completely abstract type, so you can't create basestring instances.
- Interned strings are no longer immortal and will now be garbage-collected in the usual way when the only reference to them is from the internal dictionary of interned strings. (Implemented by Oren Tirosh.)

Optimizations¶

- The creation of new-style class instances has been made much faster; they're now faster than classic classes!
- The sort() method of list objects has been extensively rewritten by Tim Peters, and the implementation is significantly faster.
- Multiplication of large long integers is now much faster thanks to an implementation of Karatsuba multiplication, an algorithm that scales better than the O(n**2) required for the grade-school multiplication algorithm. (Original patch by Christopher A. Craig, and significantly reworked by Tim Peters.)
- The SET_LINENO opcode is now gone. This may provide a small speed increase, depending on your compiler's idiosyncrasies.
See section Other Changes and Fixes for a longer explanation. (Removed by Michael Hudson.)
- xrange() objects now have their own iterator, making for i in xrange(n) slightly faster than for i in range(n). (Patch by Raymond Hettinger.)
- A number of small rearrangements have been made in various hotspots to improve performance, such as inlining a function or removing some code. (Implemented mostly by GvR, but lots of people have contributed single changes.)

The net result of the 2.3 optimizations is that Python 2.3 runs the pystone benchmark around 25% faster than Python 2.2.

New, Improved, and Deprecated Modules¶

As usual, Python's standard library received a number of enhancements and bug fixes. Here's a partial list of the most notable changes, sorted alphabetically by module name. Consult the Misc/NEWS file in the source tree for a more complete list of changes, or look through the CVS logs for all the details.

- The array module now supports arrays of Unicode characters using the 'u' format character. Arrays also now support using the += assignment operator to add another array's contents, and the *= assignment operator to repeat an array. (Contributed by Jason Orendorff.)
- The bsddb module has been replaced by version 4.1.6 of the PyBSDDB package, providing a more complete interface to the transactional features of the BerkeleyDB library.
The old version of the module has been renamed to bsddb185 and is no longer built automatically; you'll have to edit Modules/Setup to enable it. Note that the new bsddb package is intended to be compatible with the old module, so be sure to file bugs if you discover any incompatibilities. When upgrading to Python 2.3, if the new interpreter is compiled with a new version of the underlying BerkeleyDB library, you will almost certainly have to convert your database files to the new version.
You can do this fairly easily with the new scripts db2pickle.py and pickle2db.py, which you will find in the distribution's Tools/scripts directory. If you've already been using the PyBSDDB package and importing it as bsddb3, you will have to change your import statements to import it as bsddb.
- The new bz2 module is an interface to the bz2 data compression library. bz2-compressed data is usually smaller than corresponding zlib-compressed data. (Contributed by Gustavo Niemeyer.)
- A set of standard date/time types has been added in the new datetime module. See the following section for more details.
- The Distutils Extension class now supports an extra constructor argument named depends for listing additional source files that an extension depends on. This lets Distutils recompile the module if any of the dependency files are modified. For example, if sampmodule.c includes the header file sample.h, you would create the Extension object like this:

ext = Extension("samp",
                sources=["sampmodule.c"],
                depends=["sample.h"])

Modifying sample.h would then cause the module to be recompiled. (Contributed by Jeremy Hylton.)
- Other minor changes to Distutils: it now checks for the CC, CFLAGS, CPP, LDFLAGS, and CPPFLAGS environment variables, using them to override the settings in Python's configuration (contributed by Robert Weber).
- Previously the doctest module would only search the docstrings of public methods and functions for test cases, but it now examines private ones as well. The DocTestSuite() function creates a unittest.TestSuite object from a set of doctest tests.
- The new gc.get_referents(object) function returns a list of all the objects referenced by object.
- The getopt module gained a new function, gnu_getopt(), that supports the same arguments as the existing getopt() function but uses GNU-style scanning mode.
The existing getopt() stops processing options as soon as a non-option argument is encountered, but in GNU-style mode processing continues, meaning that options and arguments can be mixed. For example:

>>> getopt.getopt(['-f', 'filename', 'output', '-v'], 'f:v')
([('-f', 'filename')], ['output', '-v'])
>>> getopt.gnu_getopt(['-f', 'filename', 'output', '-v'], 'f:v')
([('-f', 'filename'), ('-v', '')], ['output'])

(Contributed by Peter Åstrand.)
- The grp, pwd, and resource modules now return enhanced tuples:

>>> import grp
>>> g = grp.getgrnam('amk')
>>> g.gr_name, g.gr_gid
('amk', 500)

- The gzip module can now handle files exceeding 2 GiB.
- The new heapq module contains an implementation of a heap queue algorithm. A heap is an array-like data structure that keeps items in a partially sorted order such that, for every index k, heap[k] <= heap[2*k+1] and heap[k] <= heap[2*k+2]. This makes it quick to remove the smallest item, and inserting a new item while maintaining the heap property is O(log n). (See https://xlinux.nist.gov/dads//HTML/priorityque.html for more information about the priority queue data structure.)
The heapq module provides heappush() and heappop() functions for adding and removing items while maintaining the heap property on top of some other mutable Python sequence type. Here's an example that uses a Python list:

>>> import heapq
>>> heap = []
>>> for item in [3, 7, 5, 11, 1]:
...     heapq.heappush(heap, item)
...
>>> heap
[1, 3, 5, 11, 7]
>>> heapq.heappop(heap)
1
>>> heapq.heappop(heap)
3
>>> heap
[5, 7, 11]

(Contributed by Kevin O'Connor.)
- The IDLE integrated development environment has been updated using the code from the IDLEfork project (https://idlefork.sourceforge.net). The most notable feature is that the code being developed is now executed in a subprocess, meaning that there's no longer any need for manual reload() operations.
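The heap invariant described in the heapq item above can be checked mechanically, and popping repeatedly yields the items in sorted order. A small sketch (heapq is unchanged in later Pythons):

```python
import heapq

heap = []
for item in [3, 7, 5, 11, 1]:
    heapq.heappush(heap, item)

# Every parent is <= both of its children (the heap invariant).
n = len(heap)
assert all(heap[k] <= heap[c]
           for k in range(n)
           for c in (2 * k + 1, 2 * k + 2) if c < n)

# Popping repeatedly is effectively a heapsort.
drained = [heapq.heappop(heap) for _ in range(n)]
print(drained)   # [1, 3, 5, 7, 11]
```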
IDLE\u2019s core code has been incorporated into the standard library as theidlelib\npackage.The\nimaplib\nmodule now supports IMAP over SSL. (Contributed by Piers Lauder and Tino Lange.)The\nitertools\ncontains a number of useful functions for use with iterators, inspired by various functions provided by the ML and Haskell languages. For example,itertools.ifilter(predicate, iterator)\nreturns all elements in the iterator for which the functionpredicate()\nreturnsTrue\n, anditertools.repeat(obj, N)\nreturnsobj\nN times. There are a number of other functions in the module; see the package\u2019s reference documentation for details. (Contributed by Raymond Hettinger.)Two new functions in the\nmath\nmodule,degrees(rads)\nandradians(degs)\n, convert between radians and degrees. Other functions in themath\nmodule such asmath.sin()\nandmath.cos()\nhave always required input values measured in radians. Also, an optional base argument was added tomath.log()\nto make it easier to compute logarithms for bases other thane\nand10\n. (Contributed by Raymond Hettinger.)Several new POSIX functions (\ngetpgid()\n,killpg()\n,lchown()\n,loadavg()\n,major()\n,makedev()\n,minor()\n, andmknod()\n) were added to theposix\nmodule that underlies theos\nmodule. (Contributed by Gustavo Niemeyer, Geert Jansen, and Denis S. Otkidach.)In the\nos\nmodule, the*stat()\nfamily of functions can now report fractions of a second in a timestamp. Such time stamps are represented as floats, similar to the value returned bytime.time()\n.During testing, it was found that some applications will break if time stamps are floats. For compatibility, when using the tuple interface of the\nstat_result\ntime stamps will be represented as integers. 
When using named fields (a feature first introduced in Python 2.2), time stamps are still represented as integers, unless os.stat_float_times() is invoked to enable float return values:

>>> os.stat("/tmp").st_mtime
1034791200
>>> os.stat_float_times(True)
>>> os.stat("/tmp").st_mtime
1034791200.6335014

In Python 2.4, the default will change to always returning floats.
Application developers should enable this feature only if all their libraries work properly when confronted with floating-point time stamps, or if they use the tuple API. If used, the feature should be activated on an application level instead of trying to enable it on a per-use basis.
- The optparse module contains a new parser for command-line arguments that can convert option values to a particular Python type and will automatically generate a usage message. See the following section for more details.
- The old and never-documented linuxaudiodev module has been deprecated, and a new version named ossaudiodev has been added. The module was renamed because the OSS sound drivers can be used on platforms other than Linux, and the interface has also been tidied and brought up to date in various ways. (Contributed by Greg Ward and Nicholas FitzRoy-Dale.)
- The new platform module contains a number of functions that try to determine various properties of the platform you're running on. There are functions for getting the architecture, CPU type, the Windows OS version, and even the Linux distribution version. (Contributed by Marc-André Lemburg.)
- The parser objects provided by the pyexpat module can now optionally buffer character data, resulting in fewer calls to your character data handler and therefore faster performance. Setting the parser object's buffer_text attribute to True will enable buffering.
- The sample(population, k) function was added to the random module.
population is a sequence or xrange object containing the elements of a population, and sample() chooses k elements from the population without replacing chosen elements. k can be any value up to len(population). For example:

>>> days = ['Mo', 'Tu', 'We', 'Th', 'Fr', 'St', 'Sn']
>>> random.sample(days, 3)      # Choose 3 elements
['St', 'Sn', 'Th']
>>> random.sample(days, 7)      # Choose 7 elements
['Tu', 'Th', 'Mo', 'We', 'St', 'Fr', 'Sn']
>>> random.sample(days, 7)      # Choose 7 again
['We', 'Mo', 'Sn', 'Fr', 'Tu', 'St', 'Th']
>>> random.sample(days, 8)      # Can't choose eight
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "random.py", line 414, in sample
    raise ValueError, "sample larger than population"
ValueError: sample larger than population
>>> random.sample(xrange(1,10000,2), 10)   # Choose ten odd nos. under 10000
[3407, 3805, 1505, 7023, 2401, 2267, 9733, 3151, 8083, 9195]

The random module now uses a new algorithm, the Mersenne Twister, implemented in C. It's faster and more extensively studied than the previous algorithm.
(All changes contributed by Raymond Hettinger.)
- The readline module also gained a number of new functions: get_history_item(), get_current_history_length(), and redisplay().
- The rexec and Bastion modules have been declared dead, and attempts to import them will fail with a RuntimeError. New-style classes provide new ways to break out of the restricted execution environment provided by rexec, and no one has the interest or the time to fix them. If you have applications using rexec, rewrite them to use something else.
(Sticking with Python 2.2 or 2.1 will not make your applications any safer because there are known bugs in the rexec module in those versions. To repeat: if you're using rexec, stop using it immediately.)
- The rotor module has been deprecated because the algorithm it uses for encryption is not believed to be secure.
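Because sample() draws without replacement, its guarantees can be asserted even though the output itself is random. A small sketch in modern syntax (range replaces 2.x's xrange):

```python
import random

days = ['Mo', 'Tu', 'We', 'Th', 'Fr', 'St', 'Sn']
chosen = random.sample(days, 3)

assert len(chosen) == 3
assert len(set(chosen)) == 3            # no element is repeated
assert all(d in days for d in chosen)   # every pick comes from the population

# Sampling the entire population yields a shuffled copy of it.
assert sorted(random.sample(days, 7)) == sorted(days)
```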
If you need encryption, use one of the several AES Python modules that are available separately.
- The shutil module gained a move(src, dest) function that recursively moves a file or directory to a new location.
- Support for more advanced POSIX signal handling was added to the signal module, but then removed again as it proved impossible to make it work reliably across platforms.
- The socket module now supports timeouts. You can call the settimeout(t) method on a socket object to set a timeout of t seconds. Subsequent socket operations that take longer than t seconds to complete will abort and raise a socket.timeout exception.
The original timeout implementation was by Tim O'Malley. Michael Gilfix integrated it into the Python socket module and shepherded it through a lengthy review. After the code was checked in, Guido van Rossum rewrote parts of it. (This is a good example of a collaborative development process in action.)
- On Windows, the socket module now ships with Secure Sockets Layer (SSL) support.
- The value of the C PYTHON_API_VERSION macro is now exposed at the Python level as sys.api_version. The current exception can be cleared by calling the new sys.exc_clear() function.
- The new tarfile module allows reading from and writing to tar-format archive files. (Contributed by Lars Gustäbel.)
- The new textwrap module contains functions for wrapping strings containing paragraphs of text. The wrap(text, width) function takes a string and returns a list containing the text split into lines of no more than the chosen width. The fill(text, width) function returns a single string, reformatted to fit into lines no longer than the chosen width. (As you can guess, fill() is built on top of wrap().) For example:

>>> import textwrap
>>> paragraph = "Not a whit, we defy augury: ... more text ..."
>>> textwrap.wrap(paragraph, 60)
["Not a whit, we defy augury: there's a special providence in",
 "the fall of a sparrow. If it be now, 'tis not to come; if it",
 ...]
>>> print textwrap.fill(paragraph, 35)
Not a whit, we defy augury: there's
a special providence in the fall of
a sparrow. If it be now, 'tis not
to come; if it be not to come, it
will be now; if it be not now, yet
it will come: the readiness is all.
>>>

The module also contains a TextWrapper class that actually implements the text wrapping strategy. Both the TextWrapper class and the wrap() and fill() functions support a number of additional keyword arguments for fine-tuning the formatting; consult the module's documentation for details. (Contributed by Greg Ward.)
- The thread and threading modules now have companion modules, dummy_thread and dummy_threading, that provide a do-nothing implementation of the thread module's interface for platforms where threads are not supported. The intention is to simplify thread-aware modules (ones that don't rely on threads to run) by putting the following code at the top:

try:
    import threading as _threading
except ImportError:
    import dummy_threading as _threading

In this example, _threading is used as the module name to make it clear that the module being used is not necessarily the actual threading module. Code can call functions and use classes in _threading whether or not threads are supported, avoiding an if statement and making the code slightly clearer. This module will not magically make multithreaded code run without threads; code that waits for another thread to return or to do something will simply hang forever.
- The time module's strptime() function has long been an annoyance because it uses the platform C library's strptime() implementation, and different platforms sometimes have odd bugs. Brett Cannon contributed a portable implementation that's written in pure Python and should behave identically on all platforms.
- The new timeit module helps measure how long snippets of Python code take to execute.
The timeit.py file can be run directly from the command line, or the module's Timer class can be imported and used directly. Here's a short example that figures out whether it's faster to convert an 8-bit string to Unicode by appending an empty Unicode string to it or by using the unicode() function:

import timeit

timer1 = timeit.Timer('unicode("abc")')
timer2 = timeit.Timer('"abc" + u""')

# Run three trials
print timer1.repeat(repeat=3, number=100000)
print timer2.repeat(repeat=3, number=100000)

# On my laptop this outputs:
# [0.36831796169281006, 0.37441694736480713, 0.35304892063140869]
# [0.17574405670166016, 0.18193507194519043, 0.17565798759460449]

- The Tix module has received various bug fixes and updates for the current version of the Tix package.
- The Tkinter module now works with a thread-enabled version of Tcl. Tcl's threading model requires that widgets only be accessed from the thread in which they're created; accesses from another thread can cause Tcl to panic. For certain Tcl interfaces, Tkinter will now automatically avoid this when a widget is accessed from a different thread by marshalling a command, passing it to the correct thread, and waiting for the results. Other interfaces can't be handled automatically but Tkinter will now raise an exception on such an access so that you can at least find out about the problem. See https://mail.python.org/pipermail/python-dev/2002-December/031107.html for a more detailed explanation of this change. (Implemented by Martin von Löwis.)
- Calling Tcl methods through _tkinter no longer returns only strings. Instead, if Tcl returns other objects those objects are converted to their Python equivalent, if one exists, or wrapped with a _tkinter.Tcl_Obj object if no Python equivalent exists.
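The Timer class shown in the timeit example above still works the same way in current Python; only the snippets being timed need updating, since unicode() and u"" no longer apply. A sketch comparing two other string-building approaches:

```python
import timeit

# Compare building a string with join() against repeated concatenation.
t_join = timeit.Timer("'-'.join([str(n) for n in range(100)])")
t_plus = timeit.Timer("s = ''\nfor n in range(100): s += str(n) + '-'")

# timeit() returns the total elapsed time in seconds as a float.
elapsed = t_join.timeit(number=1000)
assert isinstance(elapsed, float) and elapsed >= 0.0

# repeat() returns one measurement per trial, as in the example above.
trials = t_plus.repeat(repeat=3, number=1000)
assert len(trials) == 3
```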
This behavior can be controlled through the wantobjects() method of tkapp objects.
When using _tkinter through the Tkinter module (as most Tkinter applications will), this feature is always activated. It should not cause compatibility problems, since Tkinter would always convert string results to Python types where possible.
If any incompatibilities are found, the old behavior can be restored by setting the wantobjects variable in the Tkinter module to false before creating the first tkapp object:

import Tkinter
Tkinter.wantobjects = 0

Any breakage caused by this change should be reported as a bug.
- The UserDict module has a new DictMixin class which defines all dictionary methods for classes that already have a minimum mapping interface. This greatly simplifies writing classes that need to be substitutable for dictionaries, such as the classes in the shelve module.
Adding the mix-in as a superclass provides the full dictionary interface whenever the class defines __getitem__(), __setitem__(), __delitem__(), and keys(). For example:

>>> import UserDict
>>> class SeqDict(UserDict.DictMixin):
...     """Dictionary lookalike implemented with lists."""
...     def __init__(self):
...         self.keylist = []
...         self.valuelist = []
...     def __getitem__(self, key):
...         try:
...             i = self.keylist.index(key)
...         except ValueError:
...             raise KeyError
...         return self.valuelist[i]
...     def __setitem__(self, key, value):
...         try:
...             i = self.keylist.index(key)
...             self.valuelist[i] = value
...         except ValueError:
...             self.keylist.append(key)
...             self.valuelist.append(value)
...     def __delitem__(self, key):
...         try:
...             i = self.keylist.index(key)
...         except ValueError:
...             raise KeyError
...         self.keylist.pop(i)
...         self.valuelist.pop(i)
...     def keys(self):
...         return list(self.keylist)
...
>>> s = SeqDict()
>>> dir(s)      # See that other dictionary methods are implemented
['__cmp__', '__contains__', '__delitem__', '__doc__', '__getitem__',
 '__init__', '__iter__', '__len__', '__module__', '__repr__',
 '__setitem__', 'clear', 'get', 'has_key', 'items', 'iteritems',
 'iterkeys', 'itervalues', 'keylist', 'keys', 'pop', 'popitem',
 'setdefault', 'update', 'valuelist', 'values']

(Contributed by Raymond Hettinger.)
- The DOM implementation in xml.dom.minidom can now generate XML output in a particular encoding by providing an optional encoding argument to the toxml() and toprettyxml() methods of DOM nodes.
- The xmlrpclib module now supports an XML-RPC extension for handling nil data values such as Python's None. Nil values are always supported on unmarshalling an XML-RPC response. To generate requests containing None, you must supply a true value for the allow_none parameter when creating a Marshaller instance.
- The new DocXMLRPCServer module allows writing self-documenting XML-RPC servers. Run it in demo mode (as a program) to see it in action. Pointing the web browser to the RPC server produces pydoc-style documentation; pointing xmlrpclib to the server allows invoking the actual methods. (Contributed by Brian Quinlan.)
- Support for internationalized domain names (RFCs 3454, 3490, 3491, and 3492) has been added. The "idna" encoding can be used to convert between a Unicode domain name and the ASCII-compatible encoding (ACE) of that name.

>>> u"www.Alliancefrançaise.nu".encode("idna")
'www.xn--alliancefranaise-npb.nu'

The socket module has also been extended to transparently convert Unicode hostnames to the ACE version before passing them to the C library.
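The "idna" codec described above can be exercised directly. A sketch in modern Python, where encode() on a str returns bytes (in 2.3 it returned an 8-bit str); the round-trip is the important property:

```python
# Encoding a Unicode hostname produces its ASCII-compatible (ACE) form,
# label by label; decoding reverses it. Lowercasing happens as part of
# the nameprep step.
ace = 'bücher.example'.encode('idna')
assert ace == b'xn--bcher-kva.example'
assert ace.decode('idna') == 'bücher.example'
```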
Modules that deal with hostnames (such as httplib and ftplib) also support Unicode host names; httplib also sends HTTP Host headers using the ACE version of the domain name. urllib supports Unicode URLs with non-ASCII host names as long as the path part of the URL is ASCII only.
To implement this change, the stringprep module, the mkstringprep tool, and the punycode encoding have been added.

Date/Time Type¶

Date and time types suitable for expressing timestamps were added as the datetime module. The types don't support different calendars or many fancy features, and just stick to the basics of representing time.
The three primary types are: date, representing a day, month, and year; time, consisting of hour, minute, and second; and datetime, which contains all the attributes of both date and time.
There's also a timedelta class representing differences between two points in time, and time zone logic is implemented by classes inheriting from the abstract tzinfo class.
You can create instances of date and time by either supplying keyword arguments to the appropriate constructor, e.g. datetime.date(year=1972, month=10, day=15), or by using one of a number of class methods. For example, the today() class method returns the current local date.
Once created, instances of the date/time classes are all immutable.
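The constructors and the timedelta arithmetic described above can be sketched briefly (the datetime module is essentially unchanged in later Pythons):

```python
import datetime

d = datetime.date(year=1972, month=10, day=15)
assert d.isoformat() == '1972-10-15'

# replace() returns a new instance; the original is immutable.
assert d.replace(year=2001) == datetime.date(2001, 10, 15)

# date and datetime arithmetic goes through timedelta.
week = datetime.timedelta(days=7)
assert d + week == datetime.date(1972, 10, 22)
assert datetime.date(1972, 10, 22) - d == week
```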
There are a number of methods for producing formatted strings from objects:\n>>> import datetime\n>>> now = datetime.datetime.now()\n>>> now.isoformat()\n'2002-12-30T21:27:03.994956'\n>>> now.ctime() # Only available on date, datetime\n'Mon Dec 30 21:27:03 2002'\n>>> now.strftime('%Y %d %b')\n'2002 30 Dec'\nThe replace()\nmethod allows modifying one or more fields of a\ndate\nor datetime\ninstance, returning a new instance:\n>>> d = datetime.datetime.now()\n>>> d\ndatetime.datetime(2002, 12, 30, 22, 15, 38, 827738)\n>>> d.replace(year=2001, hour = 12)\ndatetime.datetime(2001, 12, 30, 12, 15, 38, 827738)\n>>>\nInstances can be compared, hashed, and converted to strings (the result is the\nsame as that of isoformat()\n). date\nand datetime\ninstances can be subtracted from each other, and added to timedelta\ninstances. The largest missing feature is that there\u2019s no standard library\nsupport for parsing strings and getting back a date\nor\ndatetime\n.\nFor more information, refer to the module\u2019s reference documentation. (Contributed by Tim Peters.)\nThe optparse Module\u00b6\nThe getopt\nmodule provides simple parsing of command-line arguments. 
The new optparse module (originally named Optik) provides more elaborate command-line parsing that follows the Unix conventions, automatically creates the output for --help, and can perform different actions for different options.
You start by creating an instance of OptionParser and telling it what your program's options are.

import sys
from optparse import OptionParser

op = OptionParser()
op.add_option('-i', '--input',
              action='store', type='string', dest='input',
              help='set input filename')
op.add_option('-l', '--length',
              action='store', type='int', dest='length',
              help='set maximum length of output')

Parsing a command line is then done by calling the parse_args() method.

options, args = op.parse_args(sys.argv[1:])
print options
print args

This returns an object containing all of the option values, and a list of strings containing the remaining arguments.
Invoking the script with the various arguments now works as you'd expect it to. Note that the length argument is automatically converted to an integer.

$ ./python opt.py -i data arg1

['arg1']
$ ./python opt.py --input=data --length=4

[]
$

The help message is automatically generated for you:

$ ./python opt.py --help
usage: opt.py [options]

options:
  -h, --help            show this help message and exit
  -iINPUT, --input=INPUT
                        set input filename
  -lLENGTH, --length=LENGTH
                        set maximum length of output
$

See the module's documentation for more details.
Optik was written by Greg Ward, with suggestions from the readers of the Getopt SIG.

Pymalloc: A Specialized Object Allocator¶

Pymalloc, a specialized object allocator written by Vladimir Marangozov, was a feature added to Python 2.1. Pymalloc is intended to be faster than the system malloc() and to have less memory overhead for allocation patterns typical of Python programs.
The allocator uses C's malloc() function to get large pools of memory and then fulfills smaller memory requests from these pools.

In 2.1 and 2.2, pymalloc was an experimental feature and wasn't enabled by default; you had to explicitly enable it when compiling Python by providing the --with-pymalloc option to the configure script. In 2.3, pymalloc has had further enhancements and is now enabled by default; you'll have to supply --without-pymalloc to disable it.

This change is transparent to code written in Python; however, pymalloc may expose bugs in C extensions. Authors of C extension modules should test their code with pymalloc enabled, because some incorrect code may cause core dumps at runtime.

There's one particularly common error that causes problems. There are a number of memory allocation functions in Python's C API that have previously just been aliases for the C library's malloc() and free(), meaning that if you accidentally called mismatched functions the error wouldn't be noticeable. When the object allocator is enabled, these functions aren't aliases of malloc() and free() any more, and calling the wrong function to free memory may get you a core dump. For example, if memory was allocated using PyObject_Malloc(), it has to be freed using PyObject_Free(), not free(). A few modules included with Python fell afoul of this and had to be fixed; doubtless there are more third-party modules that will have the same problem.

As part of this change, the confusing multiple interfaces for allocating memory have been consolidated down into two API families. Memory allocated with one family must not be manipulated with functions from the other family. There is one family for allocating chunks of memory and another family of functions specifically for allocating Python objects.

- To allocate and free an undistinguished chunk of memory, use the "raw memory" family: PyMem_Malloc(), PyMem_Realloc(), and PyMem_Free().
- The "object memory" family is the interface to the pymalloc facility described above and is biased towards a large number of "small" allocations: PyObject_Malloc(), PyObject_Realloc(), and PyObject_Free().
- To allocate and free Python objects, use the "object" family: PyObject_New, PyObject_NewVar, and PyObject_Del().

Thanks to lots of work by Tim Peters, pymalloc in 2.3 also provides debugging features to catch memory overwrites and doubled frees in both extension modules and in the interpreter itself. To enable this support, compile a debugging version of the Python interpreter by running configure with --with-pydebug.

To aid extension writers, a header file Misc/pymemcompat.h is distributed with the source to Python 2.3 that allows Python extensions to use the 2.3 interfaces to memory allocation while compiling against any version of Python since 1.5.2. You would copy the file from Python's source distribution and bundle it with the source of your extension.

See also
- https://hg.python.org/cpython/file/default/Objects/obmalloc.c
  For the full details of the pymalloc implementation, see the comments at the top of the file Objects/obmalloc.c in the Python source code. The above link points to the file within the python.org SVN browser.

Build and C API Changes

Changes to Python's build process and to the C API include:

The cycle detection implementation used by the garbage collection has proven to be stable, so it's now been made mandatory.
You can no longer compile Python without it, and the --with-cycle-gc switch to configure has been removed.

Python can now optionally be built as a shared library (libpython2.3.so) by supplying --enable-shared when running Python's configure script. (Contributed by Ondrej Palkovsky.)

The DL_EXPORT and DL_IMPORT macros are now deprecated. Initialization functions for Python extension modules should now be declared using the new macro PyMODINIT_FUNC, while the Python core will generally use the PyAPI_FUNC and PyAPI_DATA macros.

The interpreter can be compiled without any docstrings for the built-in functions and modules by supplying --without-doc-strings to the configure script. This makes the Python executable about 10% smaller, but will also mean that you can't get help for Python's built-ins. (Contributed by Gustavo Niemeyer.)

The PyArg_NoArgs() macro is now deprecated, and code that uses it should be changed. For Python 2.2 and later, the method definition table can specify the METH_NOARGS flag, signalling that there are no arguments, and the argument checking can then be removed. If compatibility with pre-2.2 versions of Python is important, the code could use PyArg_ParseTuple(args, "") instead, but this will be slower than using METH_NOARGS.

PyArg_ParseTuple() accepts new format characters for various sizes of unsigned integers: B for unsigned char, H for unsigned short int, I for unsigned int, and K for unsigned long long.

A new function, PyObject_DelItemString(mapping, char *key), was added as shorthand for PyObject_DelItem(mapping, PyString_New(key)).

File objects now manage their internal string buffer differently, increasing it exponentially when needed. This results in the benchmark tests in Lib/test/test_bufio.py speeding up considerably (from 57 seconds to 1.7 seconds, according to one measurement).

It's now possible to define class and static methods for a C extension type by setting either the METH_CLASS or METH_STATIC flags in a method's PyMethodDef structure.

Python now includes a copy of the Expat XML parser's source code, removing any dependence on a system version or local installation of Expat.

If you dynamically allocate type objects in your extension, you should be aware of a change in the rules relating to the __module__ and __name__ attributes. In summary, you will want to ensure the type's dictionary contains a '__module__' key; making the module name the part of the type name leading up to the final period will no longer have the desired effect. For more detail, read the API reference documentation or the source.

Port-Specific Changes

Support for a port to IBM's OS/2 using the EMX runtime environment was merged into the main Python source tree. EMX is a POSIX emulation layer over the OS/2 system APIs. The Python port for EMX tries to support all the POSIX-like capability exposed by the EMX runtime, and mostly succeeds; fork() and fcntl() are restricted by the limitations of the underlying emulation layer. The standard OS/2 port, which uses IBM's Visual Age compiler, also gained support for case-sensitive import semantics as part of the integration of the EMX port into CVS. (Contributed by Andrew MacIntyre.)

On MacOS, most toolbox modules have been weaklinked to improve backward compatibility. This means that modules will no longer fail to load if a single routine is missing on the current OS version. Instead, calling the missing routine will raise an exception. (Contributed by Jack Jansen.)

The RPM spec files, found in the Misc/RPM/ directory in the Python source distribution, were updated for 2.3.
(Contributed by Sean Reifschneider.)

Other new platforms now supported by Python include AtheOS (http://www.atheos.cx/), GNU/Hurd, and OpenVMS.

Other Changes and Fixes

As usual, there were a bunch of other improvements and bugfixes scattered throughout the source tree. A search through the CVS change logs finds there were 523 patches applied and 514 bugs fixed between Python 2.2 and 2.3. Both figures are likely to be underestimates.

Some of the more notable changes are:

If the PYTHONINSPECT environment variable is set, the Python interpreter will enter the interactive prompt after running a Python program, as if Python had been invoked with the -i option. The environment variable can be set before running the Python interpreter, or it can be set by the Python program as part of its execution.

The regrtest.py script now provides a way to allow "all resources except foo." A resource name passed to the -u option can now be prefixed with a hyphen ('-') to mean "remove this resource." For example, the option '-uall,-bsddb' could be used to enable the use of all resources except bsddb.

The tools used to build the documentation now work under Cygwin as well as Unix.

The SET_LINENO opcode has been removed. Back in the mists of time, this opcode was needed to produce line numbers in tracebacks and support trace functions (for, e.g., pdb). Since Python 1.5, the line numbers in tracebacks have been computed using a different mechanism that works with "python -O". For Python 2.3 Michael Hudson implemented a similar scheme to determine when to call the trace function, removing the need for SET_LINENO entirely. It would be difficult to detect any resulting difference from Python code, apart from a slight speedup when Python is run without -O. C extensions that access the f_lineno field of frame objects should instead call PyCode_Addr2Line(f->f_code, f->f_lasti). This will have the added effect of making the code work as desired under "python -O" in earlier versions of Python.

A nifty new feature is that trace functions can now assign to the f_lineno attribute of frame objects, changing the line that will be executed next. A jump command has been added to the pdb debugger taking advantage of this new feature. (Implemented by Richie Hindle.)

Porting to Python 2.3

This section lists previously described changes that may require changes to your code:

yield is now always a keyword; if it's used as a variable name in your code, a different name must be chosen.

For strings X and Y, X in Y now works if X is more than one character long.

The int() type constructor will now return a long integer instead of raising an OverflowError when a string or floating-point number is too large to fit into an integer.

If you have Unicode strings that contain 8-bit characters, you must declare the file's encoding (UTF-8, Latin-1, or whatever) by adding a comment to the top of the file. See section PEP 263: Source Code Encodings for more information.

Calling Tcl methods through _tkinter no longer returns only strings. Instead, if Tcl returns other objects, those objects are converted to their Python equivalent, if one exists, or wrapped with a _tkinter.Tcl_Obj object if no Python equivalent exists.

Large octal and hex literals such as 0xffffffff now trigger a FutureWarning. Currently they're stored as 32-bit numbers and result in a negative value, but in Python 2.4 they'll become positive long integers. There are a few ways to fix this warning. If you really need a positive number, just add an L to the end of the literal. If you're trying to get a 32-bit integer with low bits set and have previously used an expression such as ~(1 << 31), it's probably clearest to start with all bits set and clear the desired upper bits.
For example, to clear just the top bit (bit 31), you could write 0xffffffffL & ~(1L << 31).

You can no longer disable assertions by assigning to __debug__.

The Distutils setup() function has gained various new keyword arguments such as depends. Old versions of the Distutils will abort if passed unknown keywords. A solution is to check for the presence of the new get_distutil_options() function in your setup.py and only use the new keywords with a version of the Distutils that supports them:

    from distutils import core

    kw = {'sources': 'foo.c', ...}
    if hasattr(core, 'get_distutil_options'):
        kw['depends'] = ['foo.h']
    ext = Extension(**kw)

Using None as a variable name will now result in a SyntaxWarning.

Names of extension types defined by the modules included with Python now contain the module and a '.' in front of the type name.

Acknowledgements

The author would like to thank the following people for offering suggestions, corrections and assistance with various drafts of this article: Jeff Bauer, Simon Brunning, Brett Cannon, Michael Chermside, Andrew Dalke, Scott David Daniels, Fred L. Drake, Jr., David Fraser, Kelly Gerber, Raymond Hettinger, Michael Hudson, Chris Lambert, Detlef Lannert, Martin von Löwis, Andrew MacIntyre, Lalo Martins, Chad Netzer, Gustavo Niemeyer, Neal Norwitz, Hans Nowak, Chris Reedy, Francesco Ricciardi, Vinay Sajip, Neil Schemenauer, Roman Suzi, Jason Tishler, Just van Rossum.
" ", " ", "\n ", "\n ", " ", " ", "\n ", " ", "\n ", "\n ", "\n ", "\n ", " ", " ", "\n ", " ", " ", "\n\n", "\n", " ", "\n", "\n", "\n", "\n\n", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", "\n ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", "\n ", "\n ", " ", "\n ", "\n ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", " ", " ", " ", "\n ", "\n ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", "\n", "\n", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", "\n", "\n", "\n", " ", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n ", "\n ", "\n", " ", " ", " ", " ", "\n", "\n", " ", " ", " ", " ", "\n", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", "\n", " ", " ", "\n", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n File ", ", line ", ", in ", "\n File ", ", line ", ", in ", "\n", " ", " ", " ", "\n", ": ", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n ", "\n", " ", "\n ", "\n", "\n\n", " ", " ", "\n", " ", " ", "\n\n", "\n", " ", " ", "\n", " ", " ", "\n\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", " ", 
"\n", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", "\n", " ", "\n", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", " ", " ", "\n", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", " ", "\n", " ", "\n", " ", "\n", " ", " ", "\n", " ", "\n", " ", " ", " ", "\n", " ", " ", "\n", " ", " ", "\n", " ", "\n", " ", "\n", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", " ", "\n", "\n", "\n", "\n", " ", "\n\n", " ", " ", "\n", " ", "\n ", " ", " ", "\n ", "\n", " ", "\n ", " ", " ", "\n ", "\n", " ", " ", " ", "\n", " ", "\n", " ", "\n", " ", "\n\n", " ", " ", " ", " ", "\n", " ", " ", "\n ", " ", " ", "\n", " ", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 19560}
{"url": "https://docs.python.org/3/whatsnew/2.4.html", "title": "What\u2019s New in Python 2.4", "content": "What\u2019s New in Python 2.4\u00b6\n- Author:\nA.M. Kuchling\nThis article explains the new features in Python 2.4.1, released on March 30, 2005.\nPython 2.4 is a medium-sized release. It doesn\u2019t introduce as many changes as the radical Python 2.2, but introduces more features than the conservative 2.3 release. The most significant new language features are function decorators and generator expressions; most other changes are to the standard library.\nAccording to the CVS change logs, there were 481 patches applied and 502 bugs fixed between Python 2.3 and 2.4. Both figures are likely to be underestimates.\nThis article doesn\u2019t attempt to provide a complete specification of every single new feature, but instead provides a brief introduction to each feature. For full details, you should refer to the documentation for Python 2.4, such as the Python Library Reference and the Python Reference Manual. Often you will be referred to the PEP for a particular new feature for explanations of the implementation and design rationale.\nPEP 218: Built-In Set Objects\u00b6\nPython 2.3 introduced the sets\nmodule. C implementations of set data\ntypes have now been added to the Python core as two new built-in types,\nset(iterable)\nand frozenset(iterable)\n. 
They provide high-speed operations for membership testing, for eliminating duplicates from sequences, and for mathematical operations like unions, intersections, differences, and symmetric differences.

    >>> a = set('abracadabra')      # form a set from a string
    >>> 'z' in a                    # fast membership testing
    False
    >>> a                           # unique letters in a
    set(['a', 'r', 'b', 'c', 'd'])
    >>> ''.join(a)                  # convert back into a string
    'arbcd'

    >>> b = set('alacazam')         # form a second set
    >>> a - b                       # letters in a but not in b
    set(['r', 'd', 'b'])
    >>> a | b                       # letters in either a or b
    set(['a', 'c', 'r', 'd', 'b', 'm', 'z', 'l'])
    >>> a & b                       # letters in both a and b
    set(['a', 'c'])
    >>> a ^ b                       # letters in a or b but not both
    set(['r', 'd', 'b', 'm', 'z', 'l'])

    >>> a.add('z')                  # add a new element
    >>> a.update('wxy')             # add multiple new elements
    >>> a
    set(['a', 'c', 'b', 'd', 'r', 'w', 'y', 'x', 'z'])
    >>> a.remove('x')               # take one element out
    >>> a
    set(['a', 'c', 'b', 'd', 'r', 'w', 'y', 'z'])

The frozenset() type is an immutable version of set(). Since it is immutable and hashable, it may be used as a dictionary key or as a member of another set.

The sets module remains in the standard library, and may be useful if you wish to subclass the Set or ImmutableSet classes. There are currently no plans to deprecate the module.

See also
- PEP 218 - Adding a Built-In Set Object Type
  Originally proposed by Greg Wilson and ultimately implemented by Raymond Hettinger.

PEP 237: Unifying Long Integers and Integers

The lengthy transition process for this PEP, begun in Python 2.2, takes another step forward in Python 2.4. In 2.3, certain integer operations that would behave differently after int/long unification triggered FutureWarning warnings and returned values limited to 32 or 64 bits (depending on your platform).
In 2.4, these expressions no longer produce a warning and instead produce a different result that's usually a long integer.

The problematic expressions are primarily left shifts and lengthy hexadecimal and octal constants. For example, 2 << 32 results in a warning in 2.3, evaluating to 0 on 32-bit platforms. In Python 2.4, this expression now returns the correct answer, 8589934592.

See also
- PEP 237 - Unifying Long Integers and Integers
  Original PEP written by Moshe Zadka and GvR. The changes for 2.4 were implemented by Kalle Svensson.

PEP 289: Generator Expressions

The iterator feature introduced in Python 2.2 and the itertools module make it easier to write programs that loop through large data sets without having the entire data set in memory at one time. List comprehensions don't fit into this picture very well because they produce a Python list object containing all of the items. This unavoidably pulls all of the objects into memory, which can be a problem if your data set is very large. When trying to write a functionally styled program, it would be natural to write something like:

    links = [link for link in get_all_links() if not link.followed]
    for link in links:
        ...

instead of

    for link in get_all_links():
        if link.followed:
            continue
        ...

The first form is more concise and perhaps more readable, but if you're dealing with a large number of link objects you'd have to write the second form to avoid having all link objects in memory at the same time.

Generator expressions work similarly to list comprehensions but don't materialize the entire list; instead they create a generator that will return elements one by one. The above example could be written as:

    links = (link for link in get_all_links() if not link.followed)
    for link in links:
        ...

Generator expressions always have to be written inside parentheses, as in the above example.
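The memory contrast described above can be made concrete. This is a minimal sketch (it uses sys.getsizeof and modern function-call syntax, not the 2.4-era print statement; the variable names are illustrative):

```python
import sys

squares_list = [n * n for n in range(100000)]   # materializes every element immediately
squares_gen = (n * n for n in range(100000))    # lazy: nothing is computed yet

list_size = sys.getsizeof(squares_list)
gen_size = sys.getsizeof(squares_gen)           # the generator object itself stays tiny

total_from_gen = sum(squares_gen)               # elements are produced one at a time here
total_from_list = sum(squares_list)

assert gen_size < list_size
assert total_from_gen == total_from_list
```

Note that sys.getsizeof reports only the size of the generator object itself, not of the values it will eventually yield; that is precisely the point.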
The parentheses signalling a function call also count, so if you want to create an iterator that will be immediately passed to a function you could write:

    print sum(obj.count for obj in list_all_objects())

Generator expressions differ from list comprehensions in various small ways. Most notably, the loop variable (obj in the above example) is not accessible outside of the generator expression. List comprehensions leave the variable assigned to its last value; future versions of Python will change this, making list comprehensions match generator expressions in this respect.

See also
- PEP 289 - Generator Expressions
  Proposed by Raymond Hettinger and implemented by Jiwon Seo with early efforts steered by Hye-Shik Chang.

PEP 292: Simpler String Substitutions

Some new classes in the standard library provide an alternative mechanism for substituting variables into strings; this style of substitution may be better for applications where untrained users need to edit templates.

The usual way of substituting variables by name is the % operator:

    >>> '%(page)i: %(title)s' % {'page': 2, 'title': 'The Best of Times'}
    '2: The Best of Times'

When writing the template string, it can be easy to forget the i or s after the closing parenthesis. This isn't a big problem if the template is in a Python module, because you run the code, get an "Unsupported format character" ValueError, and fix the problem. However, consider an application such as Mailman where template strings or translations are being edited by users who aren't aware of the Python language.
The format string's syntax is complicated to explain to such users, and if they make a mistake, it's difficult to provide helpful feedback to them.

PEP 292 adds a Template class to the string module that uses $ to indicate a substitution:

    >>> import string
    >>> t = string.Template('$page: $title')
    >>> t.substitute({'page': 2, 'title': 'The Best of Times'})
    '2: The Best of Times'

If a key is missing from the dictionary, the substitute() method will raise a KeyError. There's also a safe_substitute() method that ignores missing keys:

    >>> t = string.Template('$page: $title')
    >>> t.safe_substitute({'page': 3})
    '3: $title'

See also
- PEP 292 - Simpler String Substitutions
  Written and implemented by Barry Warsaw.

PEP 318: Decorators for Functions and Methods

Python 2.2 extended Python's object model by adding static methods and class methods, but it didn't extend Python's syntax to provide any new way of defining static or class methods. Instead, you had to write a def statement in the usual way, and pass the resulting method to a staticmethod() or classmethod() function that would wrap up the function as a method of the new type. Your code would look like this:

    class C:
        def meth (cls):
            ...
        meth = classmethod(meth)   # Rebind name to wrapped-up class method

If the method was very long, it would be easy to miss or forget the classmethod() invocation after the function body.

The intention was always to add some syntax to make such definitions more readable, but at the time of 2.2's release a good syntax was not obvious. Today a good syntax still isn't obvious, but users are asking for easier access to the feature; a new syntactic feature has been added to meet this need.

The new feature is called "function decorators".
The name comes from the idea that classmethod(), staticmethod(), and friends are storing additional information on a function object; they're decorating functions with more details.

The notation borrows from Java and uses the '@' character as an indicator. Using the new syntax, the example above would be written:

    class C:
        @classmethod
        def meth (cls):
            ...

The @classmethod is shorthand for the meth = classmethod(meth) assignment. More generally, if you have the following:

    @A
    @B
    @C
    def f ():
        ...

it's equivalent to the following pre-decorator code:

    def f(): ...
    f = A(B(C(f)))

Decorators must come on the line before a function definition, one decorator per line, and can't be on the same line as the def statement, meaning that @A def f(): ... is illegal. You can only decorate function definitions, either at the module level or inside a class; you can't decorate class definitions.

A decorator is just a function that takes the function to be decorated as an argument and returns either the same function or some new object. The return value of the decorator need not be callable (though it typically is), unless further decorators will be applied to the result. It's easy to write your own decorators. The following simple example just sets an attribute on the function object:

    >>> def deco(func):
    ...     func.attr = 'decorated'
    ...     return func
    ...
    >>> @deco
    ... def f(): pass
    ...
    >>> f

    >>> f.attr
    'decorated'

As a slightly more realistic example, the following decorator checks that the supplied argument is an integer:

    def require_int (func):
        def wrapper (arg):
            assert isinstance(arg, int)
            return func(arg)
        return wrapper

    @require_int
    def p1 (arg):
        print arg

    @require_int
    def p2(arg):
        print arg*2

An example in PEP 318 contains a fancier version of this idea that lets you both specify the required type and check the returned type.

Decorator functions can take arguments.
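A runnable sketch of a decorator that takes arguments (the names repeat and times here are illustrative, not from the PEP): the outer function receives the decorator's arguments and returns the actual decorator, which in turn wraps the function.

```python
def repeat(times):
    # Decorator factory: called with the decorator's arguments,
    # it returns the real decorator.
    def decorator(func):
        def wrapper(arg):
            # Call the wrapped function `times` times, collecting results.
            return [func(arg) for _ in range(times)]
        return wrapper
    return decorator

@repeat(3)
def double(x):
    return x * 2

assert double(5) == [10, 10, 10]
```

The @repeat(3) line first evaluates repeat(3) to obtain a decorator, then applies that decorator to double, exactly as the rules below describe.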
If arguments are supplied, your decorator function is called with only those arguments and must return a new decorator function; this function must take a single function and return a function, as previously described. In other words, @A @B @C(args) becomes:

    def f(): ...
    _deco = C(args)
    f = A(B(_deco(f)))

Getting this right can be slightly brain-bending, but it's not too difficult.

A small related change makes the func_name attribute of functions writable. This attribute is used to display function names in tracebacks, so decorators should change the name of any new function that's constructed and returned.

See also
- PEP 318 - Decorators for Functions, Methods and Classes
  Written by Kevin D. Smith, Jim Jewett, and Skip Montanaro. Several people wrote patches implementing function decorators, but the one that was actually checked in was patch #979728, written by Mark Russell.
- https://wiki.python.org/moin/PythonDecoratorLibrary
  This Wiki page contains several examples of decorators.

PEP 322: Reverse Iteration

A new built-in function, reversed(seq), takes a sequence and returns an iterator that loops over the elements of the sequence in reverse order.

    >>> for i in reversed(xrange(1,4)):
    ...     print i
    ...
    3
    2
    1

Compared to extended slicing, such as range(1,4)[::-1], reversed() is easier to read, runs faster, and uses substantially less memory.

Note that reversed() only accepts sequences, not arbitrary iterators. If you want to reverse an iterator, first convert it to a list with list().

    >>> input = open('/etc/passwd', 'r')
    >>> for line in reversed(list(input)):
    ...     print line
    ...
    root:*:0:0:System Administrator:/var/root:/bin/tcsh
    ...

See also
- PEP 322 - Reverse Iteration
  Written and implemented by Raymond Hettinger.

PEP 324: New subprocess Module

The standard library provides a number of ways to execute a subprocess, offering different features and different levels of complexity. os.system(command) is easy to use, but slow (it runs a shell process which executes the command) and dangerous (you have to be careful about escaping the shell's metacharacters). The popen2 module offers classes that can capture standard output and standard error from the subprocess, but the naming is confusing. The subprocess module cleans this up, providing a unified interface that offers all the features you might need.

Instead of popen2's collection of classes, subprocess contains a single class called subprocess.Popen whose constructor supports a number of different keyword arguments.

    class Popen(args, bufsize=0, executable=None,
                stdin=None, stdout=None, stderr=None,
                preexec_fn=None, close_fds=False, shell=False,
                cwd=None, env=None, universal_newlines=False,
                startupinfo=None, creationflags=0):

args is commonly a sequence of strings that will be the arguments to the program executed as the subprocess. (If the shell argument is true, args can be a string which will then be passed on to the shell for interpretation, just as os.system() does.)

stdin, stdout, and stderr specify what the subprocess's input, output, and error streams will be.
You can provide a file object or a file descriptor, or you can use the constant subprocess.PIPE to create a pipe between the subprocess and the parent.

The constructor has a number of handy options:

- close_fds requests that all file descriptors be closed before running the subprocess.
- cwd specifies the working directory in which the subprocess will be executed (defaulting to whatever the parent's working directory is).
- env is a dictionary specifying environment variables.
- preexec_fn is a function that gets called before the child is started.
- universal_newlines opens the child's input and output using Python's universal newlines feature.

Once you've created the Popen instance, you can call its wait() method to pause until the subprocess has exited, poll() to check if it's exited without pausing, or communicate(data) to send the string data to the subprocess's standard input. communicate(data) then reads any data that the subprocess has sent to its standard output or standard error, returning a tuple (stdout_data, stderr_data).

call() is a shortcut that passes its arguments along to the Popen constructor, waits for the command to complete, and returns the status code of the subprocess. It can serve as a safer analog to os.system():

    sts = subprocess.call(['dpkg', '-i', '/tmp/new-package.deb'])
    if sts == 0:
        # Success
        ...
    else:
        # dpkg returned an error
        ...

The command is invoked without use of the shell. If you really do want to use the shell, you can add shell=True as a keyword argument and provide a string instead of a sequence:

    sts = subprocess.call('dpkg -i /tmp/new-package.deb', shell=True)

The PEP takes various examples of shell and Python code and shows how they'd be translated into Python code that uses subprocess.
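As a sketch of that kind of translation (hedged: this uses sys.executable as the child program so the example is self-contained, and modern function-call syntax), a shell command whose output you would once have captured with a temporary file can instead use PIPE and communicate():

```python
import subprocess
import sys

# Run a child Python process and capture its standard output through a pipe,
# rather than shelling out with os.system() and redirecting to a file.
proc = subprocess.Popen(
    [sys.executable, '-c', 'print("hello from the child")'],
    stdout=subprocess.PIPE,
)
stdout_data, stderr_data = proc.communicate()

assert proc.returncode == 0
assert b'hello from the child' in stdout_data
assert stderr_data is None   # stderr was not redirected, so nothing was captured
```

Because args is a sequence rather than a string handed to a shell, no metacharacter escaping is needed.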
Reading this section of the PEP is highly recommended.

See also
- PEP 324 - subprocess - New process module
  Written and implemented by Peter Åstrand, with assistance from Fredrik Lundh and others.

PEP 327: Decimal Data Type

Python has always supported floating-point (FP) numbers, based on the underlying C double type, as a data type. However, while most programming languages provide a floating-point type, many people (even programmers) are unaware that floating-point numbers don't represent certain decimal fractions accurately. The new Decimal type can represent these fractions accurately, up to a user-specified precision limit.

Why is Decimal needed?

The limitations arise from the representation used for floating-point numbers. FP numbers are made up of three components:

- The sign, which is positive or negative.
- The mantissa, which is a single-digit binary number followed by a fractional part. For example, 1.01 in base-2 notation is 1 + 0/2 + 1/4, or 1.25 in decimal notation.
- The exponent, which tells where the decimal point is located in the number represented.

For example, the number 1.25 has positive sign, a mantissa value of 1.01 (in binary), and an exponent of 0 (the decimal point doesn't need to be shifted). The number 5 has the same sign and mantissa, but the exponent is 2 because the mantissa is multiplied by 4 (2 to the power of the exponent 2); 1.25 * 4 equals 5.

Modern systems usually provide floating-point support that conforms to a standard called IEEE 754. C's double type is usually implemented as a 64-bit IEEE 754 number, which uses 52 bits of space for the mantissa. This means that numbers can only be specified to 52 bits of precision. If you're trying to represent numbers whose expansion repeats endlessly, the expansion is cut off after 52 bits.
Unfortunately, most software needs to produce output in base 10, and common fractions in base 10 are often repeating decimals in binary. For example, 1.1 decimal is binary 1.0001100110011...; .1 = 1/16 + 1/32 + 1/256 plus an infinite number of additional terms. IEEE 754 has to chop off that infinitely repeated decimal after 52 digits, so the representation is slightly inaccurate.

Sometimes you can see this inaccuracy when the number is printed:

    >>> 1.1
    1.1000000000000001

The inaccuracy isn't always visible when you print the number because the FP-to-decimal-string conversion is provided by the C library, and most C libraries try to produce sensible output. Even if it's not displayed, however, the inaccuracy is still there and subsequent operations can magnify the error.

For many applications this doesn't matter. If I'm plotting points and displaying them on my monitor, the difference between 1.1 and 1.1000000000000001 is too small to be visible. Reports often limit output to a certain number of decimal places, and if you round the number to two or three or even eight decimal places, the error is never apparent. However, for applications where it does matter, it's a lot of work to implement your own custom arithmetic routines.

Hence, the Decimal type was created.

The Decimal type¶

A new module, decimal, was added to Python's standard library. It contains two classes, Decimal and Context. Decimal instances represent numbers, and Context instances are used to wrap up various settings such as the precision and default rounding mode.

Decimal instances are immutable, like regular Python integers and FP numbers; once it's been created, you can't change the value an instance represents.
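The claim that subsequent operations can magnify the error is easy to demonstrate directly. This sketch uses modern Python syntax (note that float repr has changed since 2.4, so 1.1 now prints as 1.1, but the underlying inaccuracy remains):

```python
from decimal import Decimal

# Adding 0.1 ten times with binary floats does not give exactly 1.0,
# because 0.1 has no finite binary representation; the tiny per-value
# error accumulates across the additions.
float_total = sum([0.1] * 10)
print(float_total == 1.0)   # False
print(float_total)          # 0.9999999999999999

# The same computation with Decimal is exact.
decimal_total = sum([Decimal('0.1')] * 10)
print(decimal_total == Decimal('1.0'))  # True
```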
Decimal instances can be created from integers or strings:

    >>> import decimal
    >>> decimal.Decimal(1972)
    Decimal("1972")
    >>> decimal.Decimal("1.1")
    Decimal("1.1")

You can also provide tuples containing the sign, the mantissa represented as a tuple of decimal digits, and the exponent:

    >>> decimal.Decimal((1, (1, 4, 7, 5), -2))
    Decimal("-14.75")

Cautionary note: the sign bit is a Boolean value, so 0 is positive and 1 is negative.

Converting from floating-point numbers poses a bit of a problem: should the FP number representing 1.1 turn into the decimal number for exactly 1.1, or for 1.1 plus whatever inaccuracies are introduced? The decision was to dodge the issue and leave such a conversion out of the API. Instead, you should convert the floating-point number into a string using the desired precision and pass the string to the Decimal constructor:

    >>> f = 1.1
    >>> decimal.Decimal(str(f))
    Decimal("1.1")
    >>> decimal.Decimal('%.12f' % f)
    Decimal("1.100000000000")

Once you have Decimal instances, you can perform the usual mathematical operations on them.
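One common such operation is rounding a result to a fixed number of places with the quantize() method and an explicit rounding mode. A small sketch (the price and rate values below are made up for illustration):

```python
from decimal import Decimal, ROUND_HALF_EVEN

price = Decimal('35.72')
rate = Decimal('1.05')

# Multiplication is exact; the result carries all of the digits.
total = price * rate
print(total)    # 37.5060

# quantize() rounds to the exponent of its argument -- here, two
# decimal places -- using the given rounding mode.
rounded = total.quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN)
print(rounded)  # 37.51
```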
One limitation: exponentiation requires an integer exponent:

    >>> a = decimal.Decimal('35.72')
    >>> b = decimal.Decimal('1.73')
    >>> a+b
    Decimal("37.45")
    >>> a-b
    Decimal("33.99")
    >>> a*b
    Decimal("61.7956")
    >>> a/b
    Decimal("20.64739884393063583815028902")
    >>> a ** 2
    Decimal("1275.9184")
    >>> a**b
    Traceback (most recent call last):
    ...
    decimal.InvalidOperation: x ** (non-integer)

You can combine Decimal instances with integers, but not with floating-point numbers:

    >>> a + 4
    Decimal("39.72")
    >>> a + 4.5
    Traceback (most recent call last):
    ...
    TypeError: You can interact Decimal only with int, long or Decimal data types.
    >>>

Decimal numbers can be used with the math and cmath modules, but note that they'll be immediately converted to floating-point numbers before the operation is performed, resulting in a possible loss of precision and accuracy. You'll also get back a regular floating-point number and not a Decimal.

    >>> import math, cmath
    >>> d = decimal.Decimal('123456789012.345')
    >>> math.sqrt(d)
    351364.18288201344
    >>> cmath.sqrt(-d)
    351364.18288201344j

Decimal instances have a sqrt() method that returns a Decimal, but if you need other things such as trigonometric functions you'll have to implement them.

    >>> d.sqrt()
    Decimal("351364.1828820134592177245001")

The Context type¶

Instances of the Context class encapsulate several settings for decimal operations:

prec is the precision, the number of decimal places.
rounding specifies the rounding mode. The decimal module has constants for the various possibilities: ROUND_DOWN, ROUND_CEILING, ROUND_HALF_EVEN, and various others.
traps is a dictionary specifying what happens on encountering certain error conditions: either an exception is raised or a value is returned.
Some examples of error conditions are division by zero, loss of precision, and overflow.

There's a thread-local default context available by calling getcontext(); you can change the properties of this context to alter the default precision, rounding, or trap handling. The following example shows the effect of changing the precision of the default context:

    >>> decimal.getcontext().prec
    28
    >>> decimal.Decimal(1) / decimal.Decimal(7)
    Decimal("0.1428571428571428571428571429")
    >>> decimal.getcontext().prec = 9
    >>> decimal.Decimal(1) / decimal.Decimal(7)
    Decimal("0.142857143")

The default action for error conditions is selectable; the module can either return a special value such as infinity or not-a-number, or exceptions can be raised:

    >>> decimal.Decimal(1) / decimal.Decimal(0)
    Traceback (most recent call last):
    ...
    decimal.DivisionByZero: x / 0
    >>> decimal.getcontext().traps[decimal.DivisionByZero] = False
    >>> decimal.Decimal(1) / decimal.Decimal(0)
    Decimal("Infinity")
    >>>

The Context instance also has various methods for formatting numbers, such as to_eng_string() and to_sci_string().

For more information, see the documentation for the decimal module, which includes a quick-start tutorial and a reference.

See also
- PEP 327 - Decimal Data Type
  Written by Facundo Batista and implemented by Facundo Batista, Eric Price, Raymond Hettinger, Aahz, and Tim Peters.
- http://www.lahey.com/float.htm
  The article uses Fortran code to illustrate many of the problems that floating-point inaccuracy can cause.
- https://speleotrove.com/decimal/
  A description of a decimal-based representation. This representation is being proposed as a standard, and underlies the new Python decimal type. Much of this material was written by Mike Cowlishaw, designer of the Rexx language.

PEP 328: Multi-line Imports¶

One language change is a small syntactic tweak aimed at making it easier to import many names from a module.
In a from module import names statement, names is a sequence of names separated by commas. If the sequence is very long, you can either write multiple imports from the same module, or you can use backslashes to escape the line endings like this:

    from SimpleXMLRPCServer import SimpleXMLRPCServer,\
                SimpleXMLRPCRequestHandler,\
                CGIXMLRPCRequestHandler,\
                resolve_dotted_attribute

The syntactic change in Python 2.4 simply allows putting the names within parentheses. Python ignores newlines within a parenthesized expression, so the backslashes are no longer needed:

    from SimpleXMLRPCServer import (SimpleXMLRPCServer,
                SimpleXMLRPCRequestHandler,
                CGIXMLRPCRequestHandler,
                resolve_dotted_attribute)

The PEP also proposes that all import statements be absolute imports, with a leading . character to indicate a relative import. This part of the PEP was not implemented for Python 2.4, but was completed for Python 2.5.

See also
- PEP 328 - Imports: Multi-Line and Absolute/Relative
  Written by Aahz. Multi-line imports were implemented by Dima Dorfman.

PEP 331: Locale-Independent Float/String Conversions¶

The locale module lets Python software select various conversions and display conventions that are localized to a particular country or language. However, the module was careful not to change the numeric locale, because various functions in Python's implementation required that the numeric locale remain set to the 'C' locale. Often this was because the code was using the C library's atof() function.

Not setting the numeric locale caused trouble for extensions that used third-party C libraries, however, because they wouldn't have the correct locale set.
The motivating example was GTK+, whose user interface widgets weren't displaying numbers in the current locale.

The solution described in the PEP is to add three new functions to the Python API that perform ASCII-only conversions, ignoring the locale setting:

PyOS_ascii_strtod(str, ptr) and PyOS_ascii_atof(str, ptr) both convert a string to a C double.
PyOS_ascii_formatd(buffer, buf_len, format, d) converts a double to an ASCII string.

The code for these functions came from the GLib library (https://developer-old.gnome.org/glib/2.26/), whose developers kindly relicensed the relevant functions and donated them to the Python Software Foundation. The locale module can now change the numeric locale, letting extensions such as GTK+ produce the correct results.

See also
- PEP 331 - Locale-Independent Float/String Conversions
  Written by Christian R. Reis, and implemented by Gustavo Carneiro.

Other Language Changes¶

Here are all of the changes that Python 2.4 makes to the core Python language.

Decorators for functions and methods were added (PEP 318).

Built-in set() and frozenset() types were added (PEP 218). Other new built-ins include the reversed(seq) function (PEP 322).

Generator expressions were added (PEP 289).

Certain numeric expressions no longer return values restricted to 32 or 64 bits (PEP 237).

You can now put parentheses around the list of names in a from module import names statement (PEP 328).

The dict.update() method now accepts the same argument forms as the dict constructor. This includes any mapping, any iterable of key/value pairs, and keyword arguments. (Contributed by Raymond Hettinger.)

The string methods ljust(), rjust(), and center() now take an optional argument for specifying a fill character other than a space. (Contributed by Raymond Hettinger.)

Strings also gained an rsplit() method that works like the split() method but splits from the end of the string.
(Contributed by Sean Reifschneider.)

    >>> 'www.python.org'.split('.', 1)
    ['www', 'python.org']
    >>> 'www.python.org'.rsplit('.', 1)
    ['www.python', 'org']

Three keyword parameters, cmp, key, and reverse, were added to the sort() method of lists. These parameters make some common usages of sort() simpler. All of these parameters are optional.

For the cmp parameter, the value should be a comparison function that takes two parameters and returns -1, 0, or +1 depending on how the parameters compare. This function will then be used to sort the list. Previously this was the only parameter that could be provided to sort().

key should be a single-parameter function that takes a list element and returns a comparison key for the element. The list is then sorted using the comparison keys. The following example sorts a list case-insensitively:

    >>> L = ['A', 'b', 'c', 'D']
    >>> L.sort()                 # Case-sensitive sort
    >>> L
    ['A', 'D', 'b', 'c']
    >>> # Using 'key' parameter to sort list
    >>> L.sort(key=lambda x: x.lower())
    >>> L
    ['A', 'b', 'c', 'D']
    >>> # Old-fashioned way
    >>> L.sort(cmp=lambda x,y: cmp(x.lower(), y.lower()))
    >>> L
    ['A', 'b', 'c', 'D']

The last example, which uses the cmp parameter, is the old way to perform a case-insensitive sort. It works but is slower than using a key parameter. Using key calls the lower() method once for each element in the list, while using cmp calls it twice for each comparison, so using key saves on invocations of the lower() method.

For simple key functions and comparison functions, it is often possible to avoid a lambda expression by using an unbound method instead. For example, the above case-insensitive sort is best written as:

    >>> L.sort(key=str.lower)
    >>> L
    ['A', 'b', 'c', 'D']

Finally, the reverse parameter takes a Boolean value. If the value is true, the list will be sorted into reverse order. Instead of L.sort(); L.reverse(), you can now write L.sort(reverse=True).

The results of sorting are now guaranteed to be stable.
This means that two entries with equal keys will be returned in the same order as they were input. For example, you can sort a list of people by name, and then sort the list by age, resulting in a list sorted by age where people with the same age are in name-sorted order.

(All changes to sort() contributed by Raymond Hettinger.)

There is a new built-in function sorted(iterable) that works like the in-place list.sort() method but can be used in expressions. The differences are:

the input may be any iterable;
a newly formed copy is sorted, leaving the original intact; and
the expression returns the new sorted copy.

    >>> L = [9,7,8,3,2,4,1,6,5]
    >>> [10+i for i in sorted(L)]       # usable in a list comprehension
    [11, 12, 13, 14, 15, 16, 17, 18, 19]
    >>> L                               # original is left unchanged
    [9,7,8,3,2,4,1,6,5]
    >>> sorted('Monty Python')          # any iterable may be an input
    [' ', 'M', 'P', 'h', 'n', 'n', 'o', 'o', 't', 't', 'y', 'y']
    >>> # List the contents of a dict sorted by key values
    >>> colormap = dict(red=1, blue=2, green=3, black=4, yellow=5)
    >>> for k, v in sorted(colormap.iteritems()):
    ...     print k, v
    ...
    black 4
    blue 2
    green 3
    red 1
    yellow 5

(Contributed by Raymond Hettinger.)

Integer operations will no longer trigger an OverflowWarning. The OverflowWarning warning will disappear in Python 2.5.

The interpreter gained a new switch, -m, that takes a name, searches for the corresponding module on sys.path, and runs the module as a script. For example, you can now run the Python profiler with python -m profile. (Contributed by Nick Coghlan.)

The eval(expr, globals, locals) and execfile(filename, globals, locals) functions and the exec statement now accept any mapping type for the locals parameter. Previously this had to be a regular Python dictionary. (Contributed by Raymond Hettinger.)

The zip() built-in function and itertools.izip() now return an empty list if called with no arguments. Previously they raised a TypeError exception.
This makes them more suitable for use with variable-length argument lists:

    >>> def transpose(array):
    ...     return zip(*array)
    ...
    >>> transpose([(1,2,3), (4,5,6)])
    [(1, 4), (2, 5), (3, 6)]
    >>> transpose([])
    []

(Contributed by Raymond Hettinger.)

Encountering a failure while importing a module no longer leaves a partially initialized module object in sys.modules. The incomplete module object left behind would fool further imports of the same module into succeeding, leading to confusing errors. (Fixed by Tim Peters.)

None is now a constant; code that binds a new value to the name None is now a syntax error. (Contributed by Raymond Hettinger.)

Optimizations¶

The inner loops for list and tuple slicing were optimized and now run about one-third faster. The inner loops for dictionaries were also optimized, resulting in performance boosts for keys(), values(), items(), iterkeys(), itervalues(), and iteritems(). (Contributed by Raymond Hettinger.)

The machinery for growing and shrinking lists was optimized for speed and for space efficiency. Appending and popping from lists now runs faster due to more efficient code paths and less frequent use of the underlying system realloc(). List comprehensions also benefit. list.extend() was also optimized and no longer converts its argument into a temporary list before extending the base list. (Contributed by Raymond Hettinger.)

list(), tuple(), map(), filter(), and zip() now run several times faster with non-sequence arguments that supply a __len__() method. (Contributed by Raymond Hettinger.)

The methods list.__getitem__(), dict.__getitem__(), and dict.__contains__() are now implemented as method_descriptor objects rather than wrapper_descriptor objects. This form of access doubles their performance and makes them more suitable for use as arguments to functionals: map(mydict.__getitem__, keylist).
(Contributed by Raymond Hettinger.)

Added a new opcode, LIST_APPEND, that simplifies the generated bytecode for list comprehensions and speeds them up by about a third. (Contributed by Raymond Hettinger.)

The peephole bytecode optimizer has been improved to produce shorter, faster bytecode; remarkably, the resulting bytecode is more readable. (Enhanced by Raymond Hettinger.)

String concatenations in statements of the form s = s + "abc" and s += "abc" are now performed more efficiently in certain circumstances. This optimization won't be present in other Python implementations such as Jython, so you shouldn't rely on it; using the join() method of strings is still recommended when you want to efficiently glue a large number of strings together. (Contributed by Armin Rigo.)

The net result of the 2.4 optimizations is that Python 2.4 runs the pystone benchmark around 5% faster than Python 2.3 and 35% faster than Python 2.2. (pystone is not a particularly good benchmark, but it's the most commonly used measurement of Python's performance. Your own applications may show greater or smaller benefits from Python 2.4.)

New, Improved, and Deprecated Modules¶

As usual, Python's standard library received a number of enhancements and bug fixes. Here's a partial list of the most notable changes, sorted alphabetically by module name. Consult the Misc/NEWS file in the source tree for a more complete list of changes, or look through the CVS logs for all the details.

The asyncore module's loop() function now has a count parameter that lets you perform a limited number of passes through the polling loop. The default is still to loop forever.

The base64 module now has more complete RFC 3548 support for Base64, Base32, and Base16 encoding and decoding, including optional case folding and optional alternative alphabets.
(Contributed by Barry Warsaw.)

The bisect module now has an underlying C implementation for improved performance. (Contributed by Dmitry Vasiliev.)

The CJKCodecs collection of East Asian codecs, maintained by Hye-Shik Chang, was integrated into 2.4. The new encodings are:

Chinese (PRC): gb2312, gbk, gb18030, big5hkscs, hz
Chinese (ROC): big5, cp950
Japanese: cp932, euc-jis-2004, euc-jp, euc-jisx0213, iso-2022-jp, iso-2022-jp-1, iso-2022-jp-2, iso-2022-jp-3, iso-2022-jp-ext, iso-2022-jp-2004, shift-jis, shift-jisx0213, shift-jis-2004
Korean: cp949, euc-kr, johab, iso-2022-kr

Some other new encodings were added: HP Roman8, ISO_8859-11, ISO_8859-16, PCTP-154, and TIS-620.

The UTF-8 and UTF-16 codecs now cope better with receiving partial input. Previously the StreamReader class would try to read more data, making it impossible to resume decoding from the stream. The read() method will now return as much data as it can, and future calls will resume decoding where previous ones left off. (Implemented by Walter Dörwald.)

There is a new collections module for various specialized collection datatypes. Currently it contains just one type, deque, a double-ended queue that supports efficiently adding and removing elements from either end:

    >>> from collections import deque
    >>> d = deque('ghi')        # make a new deque with three items
    >>> d.append('j')           # add a new entry to the right side
    >>> d.appendleft('f')       # add a new entry to the left side
    >>> d                       # show the representation of the deque
    deque(['f', 'g', 'h', 'i', 'j'])
    >>> d.pop()                 # return and remove the rightmost item
    'j'
    >>> d.popleft()             # return and remove the leftmost item
    'f'
    >>> list(d)                 # list the contents of the deque
    ['g', 'h', 'i']
    >>> 'h' in d                # search the deque
    True

Several modules, such as the Queue and threading modules, now take advantage of collections.deque for improved performance. (Contributed by Raymond Hettinger.)

The ConfigParser classes have been enhanced slightly.
The read() method now returns a list of the files that were successfully parsed, and the set() method raises TypeError if passed a value argument that isn't a string. (Contributed by John Belmonte and David Goodger.)

The curses module now supports the ncurses extension use_default_colors(). On platforms where the terminal supports transparency, this makes it possible to use a transparent background. (Contributed by Jörg Lehmann.)

The difflib module now includes an HtmlDiff class that creates an HTML table showing a side-by-side comparison of two versions of a text. (Contributed by Dan Gass.)

The email package was updated to version 3.0, which dropped various deprecated APIs and removed support for Python versions earlier than 2.3. The 3.0 version of the package uses a new incremental parser for MIME messages, available in the email.FeedParser module. The new parser doesn't require reading the entire message into memory, and doesn't raise exceptions if a message is malformed; instead it records any problems in the defect attribute of the message. (Developed by Anthony Baxter, Barry Warsaw, Thomas Wouters, and others.)

The heapq module has been converted to C. The resulting tenfold improvement in speed makes the module suitable for handling high volumes of data. In addition, the module has two new functions, nlargest() and nsmallest(), that use heaps to find the N largest or smallest values in a dataset without the expense of a full sort. (Contributed by Raymond Hettinger.)

The httplib module now contains constants for HTTP status codes defined in various HTTP-related RFC documents. Constants have names such as OK, CREATED, CONTINUE, and MOVED_PERMANENTLY; use pydoc to get a full list.
(Contributed by Andrew Eland.)

The imaplib module now supports IMAP's THREAD command (contributed by Yves Dionne) and new deleteacl() and myrights() methods (contributed by Arnaud Mazin).

The itertools module gained a groupby(iterable[, func]) function. iterable is something that can be iterated over to return a stream of elements, and the optional func parameter is a function that takes an element and returns a key value; if omitted, the key is simply the element itself. groupby() then groups the elements into subsequences which have matching values of the key, and returns a series of 2-tuples containing the key value and an iterator over the subsequence.

Here's an example to make this clearer. The key function simply returns whether a number is even or odd, so the result of groupby() is to return consecutive runs of odd or even numbers.

    >>> import itertools
    >>> L = [2, 4, 6, 7, 8, 9, 11, 12, 14]
    >>> for key_val, it in itertools.groupby(L, lambda x: x % 2):
    ...     print key_val, list(it)
    ...
    0 [2, 4, 6]
    1 [7]
    0 [8]
    1 [9, 11]
    0 [12, 14]
    >>>

groupby() is typically used with sorted input. The logic for groupby() is similar to the Unix uniq filter, which makes it handy for eliminating, counting, or identifying duplicate elements:

    >>> word = 'abracadabra'
    >>> letters = sorted(word)   # Turn string into a sorted list of letters
    >>> letters
    ['a', 'a', 'a', 'a', 'a', 'b', 'b', 'c', 'd', 'r', 'r']
    >>> for k, g in itertools.groupby(letters):
    ...     print k, list(g)
    ...
    a ['a', 'a', 'a', 'a', 'a']
    b ['b', 'b']
    c ['c']
    d ['d']
    r ['r', 'r']
    >>> # List unique letters
    >>> [k for k, g in groupby(letters)]
    ['a', 'b', 'c', 'd', 'r']
    >>> # Count letter occurrences
    >>> [(k, len(list(g))) for k, g in groupby(letters)]
    [('a', 5), ('b', 2), ('c', 1), ('d', 1), ('r', 2)]

(Contributed by Hye-Shik Chang.)

itertools also gained a function named tee(iterator, N) that returns N independent iterators that replicate iterator.
If N is omitted, the default is 2.

    >>> L = [1,2,3]
    >>> i1, i2 = itertools.tee(L)
    >>> i1, i2
    (<itertools.tee object at 0x...>, <itertools.tee object at 0x...>)
    >>> list(i1)        # Run the first iterator to exhaustion
    [1, 2, 3]
    >>> list(i2)        # Run the second iterator to exhaustion
    [1, 2, 3]

Note that tee() has to keep copies of the values returned by the iterator; in the worst case, it may need to keep all of them. This should therefore be used carefully if the leading iterator can run far ahead of the trailing iterator in a long stream of inputs. If the separation is large, then you might as well use list() instead. When the iterators track closely with one another, tee() is ideal. Possible applications include bookmarking, windowing, or lookahead iterators. (Contributed by Raymond Hettinger.)

A number of functions were added to the locale module, such as bind_textdomain_codeset() to specify a particular encoding and a family of l*gettext() functions that return messages in the chosen encoding. (Contributed by Gustavo Niemeyer.)

Some keyword arguments were added to the logging package's basicConfig() function to simplify log configuration. The default behavior is to log messages to standard error, but various keyword arguments can be specified to log to a particular file, change the logging format, or set the logging level. For example:

    import logging
    logging.basicConfig(filename='/var/log/application.log',
                        level=0,    # Log all messages
                        format='%(levelname):%(process):%(thread):%(message)')

Other additions to the logging package include a log(level, msg) convenience method, as well as a TimedRotatingFileHandler class that rotates its log files at a timed interval. The module already had RotatingFileHandler, which rotated logs once the file exceeded a certain size. Both classes derive from a new BaseRotatingHandler class that can be used to implement other rotating handlers.

(Changes implemented by Vinay Sajip.)

The marshal module now shares interned strings on unpacking a data structure.
This may shrink the size of certain pickle strings, but the primary effect is to make .pyc files significantly smaller. (Contributed by Martin von Löwis.)

The nntplib module's NNTP class gained description() and descriptions() methods to retrieve newsgroup descriptions for a single group or for a range of groups. (Contributed by Jürgen A. Erhard.)

Two new functions were added to the operator module, attrgetter(attr) and itemgetter(index). Both functions return callables that take a single argument and return the corresponding attribute or item; these callables make excellent data extractors when used with map() or sorted(). For example:

    >>> L = [('c', 2), ('d', 1), ('a', 4), ('b', 3)]
    >>> map(operator.itemgetter(0), L)
    ['c', 'd', 'a', 'b']
    >>> map(operator.itemgetter(1), L)
    [2, 1, 4, 3]
    >>> sorted(L, key=operator.itemgetter(1))   # Sort list by second tuple item
    [('d', 1), ('c', 2), ('b', 3), ('a', 4)]

(Contributed by Raymond Hettinger.)

The optparse module was updated in various ways. The module now passes its messages through gettext.gettext(), making it possible to internationalize Optik's help and error messages. Help messages for options can now include the string '%default', which will be replaced by the option's default value. (Contributed by Greg Ward.)

The long-term plan is to deprecate the rfc822 module in some future Python release in favor of the email package. To this end, the email.Utils.formatdate() function has been changed to make it usable as a replacement for rfc822.formatdate(). You may want to write new e-mail processing code with this in mind. (Change implemented by Anthony Baxter.)

A new urandom(n) function was added to the os module, returning a string containing n bytes of random data. This function provides access to platform-specific sources of randomness, such as /dev/urandom on Linux or the Windows CryptoAPI.
(Contributed by Trevor Perrin.)

Another new function: os.path.lexists(path) returns true if the file specified by path exists, whether or not it's a symbolic link. This differs from the existing os.path.exists(path) function, which returns false if path is a symlink that points to a destination that doesn't exist. (Contributed by Beni Cherniavsky.)

A new getsid() function was added to the posix module that underlies the os module. (Contributed by J. Raynor.)

The poplib module now supports POP over SSL. (Contributed by Hector Urtubia.)

The profile module can now profile C extension functions. (Contributed by Nick Bastin.)

The random module has a new method called getrandbits(N) that returns a long integer N bits in length. The existing randrange() method now uses getrandbits() where appropriate, making generation of arbitrarily large random numbers more efficient. (Contributed by Raymond Hettinger.)

The regular expression language accepted by the re module was extended with simple conditional expressions, written as (?(group)A|B). group is either a numeric group ID or a group name defined with (?P<group>...) earlier in the expression. If the specified group matched, the regular expression pattern A will be tested against the string; if the group didn't match, the pattern B will be used instead. (Contributed by Gustavo Niemeyer.)

The re module is also no longer recursive, thanks to a massive amount of work by Gustavo Niemeyer. In a recursive regular expression engine, certain patterns result in a large amount of C stack space being consumed, and it was possible to overflow the stack. For example, if you matched a 30000-byte string of a characters against the expression (a|b)+, one stack frame was consumed per character. Python 2.3 tried to check for stack overflow and raise a RuntimeError exception, but certain patterns could sidestep the checking, and if you were unlucky Python could segfault.
Python 2.4\u2019s regular expression engine can match this pattern without problems.The\nsignal\nmodule now performs tighter error-checking on the parameters to thesignal.signal()\nfunction. For example, you can\u2019t set a handler on theSIGKILL\nsignal; previous versions of Python would quietly accept this, but 2.4 will raise aRuntimeError\nexception.Two new functions were added to the\nsocket\nmodule.socketpair()\nreturns a pair of connected sockets andgetservbyport(port)\nlooks up the service name for a given port number. (Contributed by Dave Cole and Barry Warsaw.)The\nsys.exitfunc()\nfunction has been deprecated. Code should be using the existingatexit\nmodule, which correctly handles calling multiple exit functions. Eventuallysys.exitfunc()\nwill become a purely internal interface, accessed only byatexit\n.The\ntarfile\nmodule now generates GNU-format tar files by default. (Contributed by Lars Gust\u00e4bel.)The\nthreading\nmodule now has an elegantly simple way to support thread-local data. The module contains alocal\nclass whose attribute values are local to different threads.import threading data = threading.local() data.number = 42 data.url = ('www.python.org', 80)\nOther threads can assign and retrieve their own values for the\nnumber\nandurl\nattributes. You can subclasslocal\nto initialize attributes or to add methods. (Contributed by Jim Fulton.)The\ntimeit\nmodule now automatically disables periodic garbage collection during the timing loop. This change makes consecutive timings more comparable. (Contributed by Raymond Hettinger.)The\nweakref\nmodule now supports a wider variety of objects including Python functions, class instances, sets, frozensets, deques, arrays, files, sockets, and regular expression pattern objects. (Contributed by Raymond Hettinger.)The\nxmlrpclib\nmodule now supports a multi-call extension for transmitting multiple XML-RPC calls in a single HTTP operation. 
(Contributed by Brian Quinlan.)

The mpz, rotor, and xreadlines modules have been removed.

doctest¶

The doctest module underwent considerable refactoring thanks to Edward Loper and Tim Peters. Testing can still be as simple as running doctest.testmod(), but the refactorings allow customizing the module's operation in various ways.

The new DocTestFinder class extracts the tests from a given object's docstrings:

    def f (x, y):
        """
        >>> f(2,2)
        4
        >>> f(3,2)
        6
        """
        return x*y

    finder = doctest.DocTestFinder()

    # Get list of DocTest instances
    tests = finder.find(f)

The new DocTestRunner class then runs individual tests and can produce a summary of the results:

    runner = doctest.DocTestRunner()
    for t in tests:
        tried, failed = runner.run(t)

    runner.summarize(verbose=1)

The above example produces the following output:

    1 items passed all tests:
       2 tests in f
    2 tests in 1 items.
    2 passed and 0 failed.
    Test passed.

DocTestRunner uses an instance of the OutputChecker class to compare the expected output with the actual output. This class takes a number of different flags that customize its behaviour; ambitious users can also write a completely new subclass of OutputChecker.

The default output checker provides a number of handy features. For example, with the doctest.ELLIPSIS option flag, an ellipsis (...) in the expected output matches any substring, making it easier to accommodate outputs that vary in minor ways:

    def o (n):
        """
        >>> o(1)
        <__main__.C instance at 0x...>
        >>>
        """

Another special string, <BLANKLINE>, matches a blank line:

    def p (n):
        """
        >>> p(1)
        <BLANKLINE>
        >>>
        """

Another new capability is producing a diff-style display of the output by specifying the doctest.REPORT_UDIFF (unified diffs), doctest.REPORT_CDIFF (context diffs), or doctest.REPORT_NDIFF (delta-style) option flags.
For example:\ndef g(n):\n    \"\"\"\n    >>> g(4)\n    here\n    is\n    a\n    lengthy\n    >>>\n    \"\"\"\n    L = 'here is a rather lengthy list of words'.split()\n    for word in L[:n]:\n        print word\nRunning the above function\u2019s tests with doctest.REPORT_UDIFF specified, you get the following output:\n**********************************************************************\nFile \"t.py\", line 15, in g\nFailed example:\n    g(4)\nDifferences (unified diff with -expected +actual):\n    @@ -2,3 +2,3 @@\n     is\n     a\n    -lengthy\n    +rather\n**********************************************************************\nBuild and C API Changes\u00b6\nSome of the changes to Python\u2019s build process and to the C API are:\nThree new convenience macros were added for common return values from extension functions: Py_RETURN_NONE, Py_RETURN_TRUE, and Py_RETURN_FALSE. (Contributed by Brett Cannon.)\nAnother new macro, Py_CLEAR, decreases the reference count of obj and sets obj to the null pointer. (Contributed by Jim Fulton.)\nA new function, PyTuple_Pack(N, obj1, obj2, ..., objN), constructs tuples from a variable length argument list of Python objects. (Contributed by Raymond Hettinger.)\nA new function, PyDict_Contains(d, k), implements fast dictionary lookups without masking exceptions raised during the look-up process. (Contributed by Raymond Hettinger.)\nThe Py_IS_NAN(X) macro returns 1 if its float or double argument X is a NaN. (Contributed by Tim Peters.)\nC code can avoid unnecessary locking by using the new PyEval_ThreadsInitialized() function to tell if any thread operations have been performed. If this function returns false, no lock operations are needed. (Contributed by Nick Coghlan.)\nA new function, PyArg_VaParseTupleAndKeywords(), is the same as PyArg_ParseTupleAndKeywords() but takes a va_list instead of a number of arguments. (Contributed by Greg Chapman.)\nA new method flag, METH_COEXIST, allows a function defined in slots to co-exist with a PyCFunction having the same name. 
This can halve the access time for a method such as set.__contains__(). (Contributed by Raymond Hettinger.)\nPython can now be built with additional profiling for the interpreter itself, intended as an aid to people developing the Python core. Providing --enable-profiling to the configure script will let you profile the interpreter with gprof, and providing the --with-tsc switch enables profiling using the Pentium\u2019s Time-Stamp-Counter register. Note that the --with-tsc switch is slightly misnamed, because the profiling feature also works on the PowerPC platform, though that processor architecture doesn\u2019t call that register \u201cthe TSC register\u201d. (Contributed by Jeremy Hylton.)\nThe tracebackobject type has been renamed to PyTracebackObject.\nPort-Specific Changes\u00b6\nThe Windows port now builds under MSVC++ 7.1 as well as version 6. (Contributed by Martin von L\u00f6wis.)\nPorting to Python 2.4\u00b6\nThis section lists previously described changes that may require changes to your code:\nLeft shifts and hexadecimal/octal constants that are too large no longer trigger a FutureWarning and return a value limited to 32 or 64 bits; instead they return a long integer.\nInteger operations will no longer trigger an OverflowWarning. The OverflowWarning warning will disappear in Python 2.5.\nThe zip() built-in function and itertools.izip() now return an empty list instead of raising a TypeError exception if called with no arguments.\nYou can no longer compare the date and datetime instances provided by the datetime module. Two instances of different classes will now always be unequal, and relative comparisons (<, >) will raise a TypeError.\ndircache.listdir() now passes exceptions to the caller instead of returning empty lists.\nLexicalHandler.startDTD() used to receive the public and system IDs in the wrong order. 
This has been corrected; applications relying on the wrong order need to be fixed.\nfcntl.ioctl() now warns if the mutate argument is omitted and relevant.\nThe tarfile module now generates GNU-format tar files by default.\nEncountering a failure while importing a module no longer leaves a partially initialized module object in sys.modules.\nNone is now a constant; code that binds a new value to the name None is now a syntax error.\nThe signal.signal() function now raises a RuntimeError exception for certain illegal values; previously these errors would pass silently. For example, you can no longer set a handler on the SIGKILL signal.\nAcknowledgements\u00b6\nThe author would like to thank the following people for offering suggestions, corrections and assistance with various drafts of this article: Koray Can, Hye-Shik Chang, Michael Dyck, Raymond Hettinger, Brian Hurt, Hamish Lawson, Fredrik Lundh, Sean Reifschneider, Sadruddin Rejeb.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 13504}
{"url": "https://docs.python.org/3/library/asyncio-stream.html", "title": "Streams", "content": "Streams\u00b6\nSource code: Lib/asyncio/streams.py\nStreams are high-level async/await-ready primitives to work with network connections. Streams allow sending and receiving data without using callbacks or low-level protocols and transports.\nHere is an example of a TCP echo client written using asyncio streams:\nimport asyncio\nasync def tcp_echo_client(message):\nreader, writer = await asyncio.open_connection(\n'127.0.0.1', 8888)\nprint(f'Send: {message!r}')\nwriter.write(message.encode())\nawait writer.drain()\ndata = await reader.read(100)\nprint(f'Received: {data.decode()!r}')\nprint('Close the connection')\nwriter.close()\nawait writer.wait_closed()\nasyncio.run(tcp_echo_client('Hello World!'))\nSee also the Examples section below.\nStream Functions\nThe following top-level asyncio functions can be used to create and work with streams:\n- async asyncio.open_connection(host=None, port=None, *, limit=None, ssl=None, family=0, proto=0, flags=0, sock=None, local_addr=None, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, happy_eyeballs_delay=None, interleave=None)\u00b6\nEstablish a network connection and return a pair of\n(reader, writer)\nobjects.The returned reader and writer objects are instances of\nStreamReader\nandStreamWriter\nclasses.limit determines the buffer size limit used by the returned\nStreamReader\ninstance. By default the limit is set to 64 KiB.The rest of the arguments are passed directly to\nloop.create_connection()\n.Note\nThe sock argument transfers ownership of the socket to the\nStreamWriter\ncreated. 
To close the socket, call its close() method.\nChanged in version 3.7: Added the ssl_handshake_timeout parameter.\nChanged in version 3.8: Added the happy_eyeballs_delay and interleave parameters.\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\n- async asyncio.start_server(client_connected_cb, host=None, port=None, *, limit=None, family=socket.AF_UNSPEC, flags=socket.AI_PASSIVE, sock=None, backlog=100, ssl=None, reuse_address=None, reuse_port=None, keep_alive=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, start_serving=True)\u00b6\nStart a socket server.\nThe client_connected_cb callback is called whenever a new client connection is established. It receives a (reader, writer) pair as two arguments, instances of the StreamReader and StreamWriter classes.\nclient_connected_cb can be a plain callable or a coroutine function; if it is a coroutine function, it will be automatically scheduled as a Task.\nlimit determines the buffer size limit used by the returned StreamReader instance. By default the limit is set to 64 KiB.\nThe rest of the arguments are passed directly to loop.create_server().\nNote\nThe sock argument transfers ownership of the socket to the server created. 
To close the socket, call the server\u2019s close() method.\nChanged in version 3.7: Added the ssl_handshake_timeout and start_serving parameters.\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\nChanged in version 3.13: Added the keep_alive parameter.\nUnix Sockets\n- async asyncio.open_unix_connection(path=None, *, limit=None, ssl=None, sock=None, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None)\u00b6\nEstablish a Unix socket connection and return a pair of (reader, writer).\nSimilar to open_connection() but operates on Unix sockets.\nSee also the documentation of loop.create_unix_connection().\nNote\nThe sock argument transfers ownership of the socket to the StreamWriter created. To close the socket, call its close() method.\nAvailability: Unix.\nChanged in version 3.7: Added the ssl_handshake_timeout parameter. The path parameter can now be a path-like object.\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\n- async asyncio.start_unix_server(client_connected_cb, path=None, *, limit=None, sock=None, backlog=100, ssl=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, start_serving=True, cleanup_socket=True)\u00b6\nStart a Unix socket server.\nSimilar to start_server() but works with Unix sockets.\nIf cleanup_socket is true then the Unix socket will automatically be removed from the filesystem when the server is closed, unless the socket has been replaced after the server has been created.\nSee also the documentation of loop.create_unix_server().\nNote\nThe sock argument transfers ownership of the socket to the server created. To close the socket, call the server\u2019s close() method.\nAvailability: Unix.\nChanged in version 3.7: Added the ssl_handshake_timeout and start_serving parameters. 
The path parameter can now be a path-like object.\nChanged in version 3.10: Removed the loop parameter.\nChanged in version 3.11: Added the ssl_shutdown_timeout parameter.\nChanged in version 3.13: Added the cleanup_socket parameter.\nStreamReader\u00b6\n- class asyncio.StreamReader\u00b6\nRepresents a reader object that provides APIs to read data from the IO stream. As an asynchronous iterable, the object supports the async for statement.\nIt is not recommended to instantiate StreamReader objects directly; use open_connection() and start_server() instead.\n- feed_eof()\u00b6\nAcknowledge the EOF.\n- async read(n=-1)\u00b6\nRead up to n bytes from the stream.\nIf n is not provided or set to -1, read until EOF, then return all read bytes. If EOF was received and the internal buffer is empty, return an empty bytes object.\nIf n is 0, return an empty bytes object immediately.\nIf n is positive, return at most n available bytes as soon as at least 1 byte is available in the internal buffer. If EOF is received before any byte is read, return an empty bytes object.\n- async readline()\u00b6\nRead one line, where \u201cline\u201d is a sequence of bytes ending with \\n.\nIf EOF is received and \\n was not found, the method returns partially read data.\nIf EOF is received and the internal buffer is empty, return an empty bytes object.\n- async readexactly(n)\u00b6\nRead exactly n bytes.\nRaise an IncompleteReadError if EOF is reached before n can be read. Use the IncompleteReadError.partial attribute to get the partially read data.\n- async readuntil(separator=b'\\n')\u00b6\nRead data from the stream until separator is found.\nOn success, the data and separator will be removed from the internal buffer (consumed). 
Returned data will include the separator at the end.\nIf the amount of data read exceeds the configured stream limit, a LimitOverrunError exception is raised, and the data is left in the internal buffer and can be read again.\nIf EOF is reached before the complete separator is found, an IncompleteReadError exception is raised, and the internal buffer is reset. The IncompleteReadError.partial attribute may contain a portion of the separator.\nThe separator may also be a tuple of separators. In this case the return value will be the shortest possible that has any separator as the suffix. For the purposes of LimitOverrunError, the shortest possible separator is considered to be the one that matched.\nAdded in version 3.5.2.\nChanged in version 3.13: The separator parameter may now be a tuple of separators.\n- at_eof()\u00b6\nReturn True if the buffer is empty and feed_eof() was called.\nStreamWriter\u00b6\n- class asyncio.StreamWriter\u00b6\nRepresents a writer object that provides APIs to write data to the IO stream.\nIt is not recommended to instantiate StreamWriter objects directly; use open_connection() and start_server() instead.\n- write(data)\u00b6\nThe method attempts to write the data to the underlying socket immediately. If that fails, the data is queued in an internal write buffer until it can be sent.\nThe data buffer should be a bytes, bytearray, or C-contiguous one-dimensional memoryview object.\nThe method should be used along with the drain() method:\nstream.write(data)\nawait stream.drain()\n- writelines(data)\u00b6\nThe method writes a list (or any iterable) of bytes to the underlying socket immediately. 
If that fails, the data is queued in an internal write buffer until it can be sent.\nThe method should be used along with the\ndrain()\nmethod:stream.writelines(lines) await stream.drain()\n- close()\u00b6\nThe method closes the stream and the underlying socket.\nThe method should be used, though not mandatory, along with the\nwait_closed()\nmethod:stream.close() await stream.wait_closed()\n- can_write_eof()\u00b6\nReturn\nTrue\nif the underlying transport supports thewrite_eof()\nmethod,False\notherwise.\n- write_eof()\u00b6\nClose the write end of the stream after the buffered write data is flushed.\n- transport\u00b6\nReturn the underlying asyncio transport.\n- get_extra_info(name, default=None)\u00b6\nAccess optional transport information; see\nBaseTransport.get_extra_info()\nfor details.\n- async drain()\u00b6\nWait until it is appropriate to resume writing to the stream. Example:\nwriter.write(data) await writer.drain()\nThis is a flow control method that interacts with the underlying IO write buffer. When the size of the buffer reaches the high watermark, drain() blocks until the size of the buffer is drained down to the low watermark and writing can be resumed. 
When there is nothing to wait for, the\ndrain()\nreturns immediately.\n- async start_tls(sslcontext, *, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None)\u00b6\nUpgrade an existing stream-based connection to TLS.\nParameters:\nsslcontext: a configured instance of\nSSLContext\n.server_hostname: sets or overrides the host name that the target server\u2019s certificate will be matched against.\nssl_handshake_timeout is the time in seconds to wait for the TLS handshake to complete before aborting the connection.\n60.0\nseconds ifNone\n(default).ssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection.\n30.0\nseconds ifNone\n(default).\nAdded in version 3.11.\nChanged in version 3.12: Added the ssl_shutdown_timeout parameter.\n- is_closing()\u00b6\nReturn\nTrue\nif the stream is closed or in the process of being closed.Added in version 3.7.\nExamples\u00b6\nTCP echo client using streams\u00b6\nTCP echo client using the asyncio.open_connection()\nfunction:\nimport asyncio\nasync def tcp_echo_client(message):\nreader, writer = await asyncio.open_connection(\n'127.0.0.1', 8888)\nprint(f'Send: {message!r}')\nwriter.write(message.encode())\nawait writer.drain()\ndata = await reader.read(100)\nprint(f'Received: {data.decode()!r}')\nprint('Close the connection')\nwriter.close()\nawait writer.wait_closed()\nasyncio.run(tcp_echo_client('Hello World!'))\nSee also\nThe TCP echo client protocol\nexample uses the low-level loop.create_connection()\nmethod.\nTCP echo server using streams\u00b6\nTCP echo server using the asyncio.start_server()\nfunction:\nimport asyncio\nasync def handle_echo(reader, writer):\ndata = await reader.read(100)\nmessage = data.decode()\naddr = writer.get_extra_info('peername')\nprint(f\"Received {message!r} from {addr!r}\")\nprint(f\"Send: {message!r}\")\nwriter.write(data)\nawait writer.drain()\nprint(\"Close the connection\")\nwriter.close()\nawait 
writer.wait_closed()\nasync def main():\nserver = await asyncio.start_server(\nhandle_echo, '127.0.0.1', 8888)\naddrs = ', '.join(str(sock.getsockname()) for sock in server.sockets)\nprint(f'Serving on {addrs}')\nasync with server:\nawait server.serve_forever()\nasyncio.run(main())\nSee also\nThe TCP echo server protocol\nexample uses the loop.create_server()\nmethod.\nGet HTTP headers\u00b6\nSimple example querying HTTP headers of the URL passed on the command line:\nimport asyncio\nimport urllib.parse\nimport sys\nasync def print_http_headers(url):\nurl = urllib.parse.urlsplit(url)\nif url.scheme == 'https':\nreader, writer = await asyncio.open_connection(\nurl.hostname, 443, ssl=True)\nelse:\nreader, writer = await asyncio.open_connection(\nurl.hostname, 80)\nquery = (\nf\"HEAD {url.path or '/'} HTTP/1.0\\r\\n\"\nf\"Host: {url.hostname}\\r\\n\"\nf\"\\r\\n\"\n)\nwriter.write(query.encode('latin-1'))\nwhile True:\nline = await reader.readline()\nif not line:\nbreak\nline = line.decode('latin1').rstrip()\nif line:\nprint(f'HTTP header> {line}')\n# Ignore the body, close the socket\nwriter.close()\nawait writer.wait_closed()\nurl = sys.argv[1]\nasyncio.run(print_http_headers(url))\nUsage:\npython example.py http://example.com/path/page.html\nor with HTTPS:\npython example.py https://example.com/path/page.html\nRegister an open socket to wait for data using streams\u00b6\nCoroutine waiting until a socket receives data using the\nopen_connection()\nfunction:\nimport asyncio\nimport socket\nasync def wait_for_data():\n# Get a reference to the current event loop because\n# we want to access low-level APIs.\nloop = asyncio.get_running_loop()\n# Create a pair of connected sockets.\nrsock, wsock = socket.socketpair()\n# Register the open socket to wait for data.\nreader, writer = await asyncio.open_connection(sock=rsock)\n# Simulate the reception of data from the network\nloop.call_soon(wsock.send, 'abc'.encode())\n# Wait for data\ndata = await reader.read(100)\n# Got 
data, we are done: close the socket\nprint(\"Received:\", data.decode())\nwriter.close()\nawait writer.wait_closed()\n# Close the second socket\nwsock.close()\nasyncio.run(wait_for_data())\nSee also\nThe register an open socket to wait for data using a protocol example uses a low-level protocol and\nthe loop.create_connection()\nmethod.\nThe watch a file descriptor for read events example uses the low-level\nloop.add_reader()\nmethod to watch a file descriptor.", "code_snippets": ["\n\n", " ", "\n ", " ", " ", " ", " ", "\n ", " ", "\n\n ", "\n ", "\n ", " ", "\n\n ", " ", " ", " ", "\n ", "\n\n ", "\n ", "\n ", " ", "\n\n", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n\n", " ", "\n ", " ", " ", " ", " ", "\n ", " ", "\n\n ", "\n ", "\n ", " ", "\n\n ", " ", " ", " ", "\n ", "\n\n ", "\n ", "\n ", " ", "\n\n", "\n", "\n\n", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", " ", " ", "\n\n ", "\n\n ", "\n ", "\n ", " ", "\n\n ", "\n ", "\n ", " ", "\n\n", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n\n ", " ", " ", " ", " ", " ", " ", "\n ", "\n\n ", " ", " ", "\n ", " ", "\n\n", "\n", "\n", "\n", "\n\n", " ", "\n ", " ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", "\n ", "\n ", " ", " ", " ", " ", "\n ", " ", "\n\n ", " ", " ", "\n ", "\n ", "\n ", "\n ", "\n\n ", "\n ", " ", "\n ", " ", " ", " ", "\n ", " ", " ", "\n ", "\n\n ", " ", " ", "\n ", " ", "\n ", "\n\n ", "\n ", "\n ", " ", "\n\n", " ", " ", "\n", "\n", " ", " ", "\n", " ", " ", "\n", "\n", "\n\n", " ", "\n ", "\n ", "\n ", " ", " ", "\n\n ", "\n ", " ", " ", " ", "\n\n ", "\n ", " ", " ", " ", " ", "\n\n ", "\n ", " ", "\n\n ", "\n ", " ", " ", " ", "\n\n ", "\n ", " ", "\n ", "\n ", " ", "\n\n ", "\n ", "\n\n", "\n"], "language": "Python", "source": "python.org", "token_count": 3318}
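The socketpair-based example in the record above can be condensed into a self-contained round trip that needs no open ports. `echo_roundtrip` is a hypothetical helper name, but the calls it makes (`asyncio.open_connection(sock=...)`, `reader.read()`, `writer.wait_closed()`) follow the stream API documented above:

```python
import asyncio
import socket

async def echo_roundtrip(message: bytes) -> bytes:
    # A connected socket pair stands in for a real network peer,
    # so the example runs without opening any ports.
    rsock, wsock = socket.socketpair()

    # Wrap one end in a (reader, writer) stream pair; ownership of
    # rsock transfers to the StreamWriter that is created.
    reader, writer = await asyncio.open_connection(sock=rsock)

    # Simulate the peer sending data, then read it back.
    loop = asyncio.get_running_loop()
    loop.call_soon(wsock.send, message)
    data = await reader.read(100)

    writer.close()
    await writer.wait_closed()
    wsock.close()
    return data

received = asyncio.run(echo_roundtrip(b'hello'))
print(received)  # b'hello'
```

Because `sock=` transfers ownership, only `wsock` needs an explicit `close()`; closing the writer closes `rsock` for us.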
{"url": "https://docs.python.org/3/library/asyncio-graph.html", "title": "Call Graph Introspection", "content": "Call Graph Introspection\u00b6\nSource code: Lib/asyncio/graph.py\nasyncio has powerful runtime call graph introspection utilities to trace the entire call graph of a running coroutine or task, or a suspended future. These utilities and the underlying machinery can be used from within a Python program or by external profilers and debuggers.\nAdded in version 3.14.\n- asyncio.print_call_graph(future=None, /, *, file=None, depth=1, limit=None)\u00b6\nPrint the async call graph for the current task or the provided\nTask\norFuture\n.This function prints entries starting from the top frame and going down towards the invocation point.\nThe function receives an optional future argument. If not passed, the current running task will be used.\nIf the function is called on the current task, the optional keyword-only depth argument can be used to skip the specified number of frames from top of the stack.\nIf the optional keyword-only limit argument is provided, each call stack in the resulting graph is truncated to include at most\nabs(limit)\nentries. If limit is positive, the entries left are the closest to the invocation point. If limit is negative, the topmost entries are left. If limit is omitted orNone\n, all entries are present. 
If limit is 0, the call stack is not printed at all, only \u201cawaited by\u201d information is printed.\nIf file is omitted or None, the function will print to sys.stdout.\nExample:\nThe following Python code:\nimport asyncio\n\nasync def test():\n    asyncio.print_call_graph()\n\nasync def main():\n    async with asyncio.TaskGroup() as g:\n        g.create_task(test(), name='test')\n\nasyncio.run(main())\nwill print:\n* Task(name='test', id=0x1039f0fe0)\n  + Call stack:\n  |   File 't2.py', line 4, in async test()\n  + Awaited by:\n    * Task(name='Task-1', id=0x103a5e060)\n      + Call stack:\n      |   File 'taskgroups.py', line 107, in async TaskGroup.__aexit__()\n      |   File 't2.py', line 7, in async main()\n- asyncio.format_call_graph(future=None, /, *, depth=1, limit=None)\u00b6\nLike print_call_graph(), but returns a string. If future is None and there\u2019s no current task, the function returns an empty string.\n- asyncio.capture_call_graph(future=None, /, *, depth=1, limit=None)\u00b6\nCapture the async call graph for the current task or the provided Task or Future.\nThe function receives an optional future argument. If not passed, the current running task will be used. 
If there\u2019s no current task, the function returns None.\nIf the function is called on the current task, the optional keyword-only depth argument can be used to skip the specified number of frames from top of the stack.\nReturns a FutureCallGraph data class object:\nFutureCallGraph(future, call_stack, awaited_by)\nFrameCallGraphEntry(frame)\nWhere frame is a frame object of a regular Python function in the call stack.\nLow level utility functions\u00b6\nTo introspect an async call graph asyncio requires cooperation from control flow structures, such as shield() or TaskGroup. Any time an intermediate Future object with low-level APIs like Future.add_done_callback() is involved, the following two functions should be used to inform asyncio about how exactly such intermediate future objects are connected with the tasks they wrap or control.\n- asyncio.future_add_to_awaited_by(future, waiter, /)\u00b6\nRecord that future is awaited on by waiter.\nBoth future and waiter must be instances of Future or Task or their subclasses, otherwise the call would have no effect.\nA call to future_add_to_awaited_by() must be followed by an eventual call to the future_discard_from_awaited_by() function with the same arguments.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 879}
{"url": "https://docs.python.org/3/library/email.charset.html", "title": ": Representing character sets", "content": "email.charset\n: Representing character sets\u00b6\nSource code: Lib/email/charset.py\nThis module is part of the legacy (Compat32\n) email API. In the new\nAPI only the aliases table is used.\nThe remaining text in this section is the original documentation of the module.\nThis module provides a class Charset\nfor representing character sets\nand character set conversions in email messages, as well as a character set\nregistry and several convenience methods for manipulating this registry.\nInstances of Charset\nare used in several other modules within the\nemail\npackage.\nImport this class from the email.charset\nmodule.\n- class email.charset.Charset(input_charset=DEFAULT_CHARSET)\u00b6\nMap character sets to their email properties.\nThis class provides information about the requirements imposed on email for a specific character set. It also provides convenience routines for converting between character sets, given the availability of the applicable codecs. Given a character set, it will do its best to provide information on how to use that character set in an email message in an RFC-compliant way.\nCertain character sets must be encoded with quoted-printable or base64 when used in email headers or bodies. Certain character sets must be converted outright, and are not allowed in email.\nOptional input_charset is as described below; it is always coerced to lower case. After being alias normalized it is also used as a lookup into the registry of character sets to find out the header encoding, body encoding, and output conversion codec to be used for the character set. For example, if input_charset is\niso-8859-1\n, then headers and bodies will be encoded using quoted-printable and no output conversion codec is necessary. 
If input_charset iseuc-jp\n, then headers will be encoded with base64, bodies will not be encoded, but output text will be converted from theeuc-jp\ncharacter set to theiso-2022-jp\ncharacter set.Charset\ninstances have the following data attributes:- input_charset\u00b6\nThe initial character set specified. Common aliases are converted to their official email names (e.g.\nlatin_1\nis converted toiso-8859-1\n). Defaults to 7-bitus-ascii\n.\n- header_encoding\u00b6\nIf the character set must be encoded before it can be used in an email header, this attribute will be set to\ncharset.QP\n(for quoted-printable),charset.BASE64\n(for base64 encoding), orcharset.SHORTEST\nfor the shortest of QP or BASE64 encoding. Otherwise, it will beNone\n.\n- body_encoding\u00b6\nSame as header_encoding, but describes the encoding for the mail message\u2019s body, which indeed may be different than the header encoding.\ncharset.SHORTEST\nis not allowed for body_encoding.\n- output_charset\u00b6\nSome character sets must be converted before they can be used in email headers or bodies. If the input_charset is one of them, this attribute will contain the name of the character set output will be converted to. Otherwise, it will be\nNone\n.\n- input_codec\u00b6\nThe name of the Python codec used to convert the input_charset to Unicode. If no conversion codec is necessary, this attribute will be\nNone\n.\n- output_codec\u00b6\nThe name of the Python codec used to convert Unicode to the output_charset. If no conversion codec is necessary, this attribute will have the same value as the input_codec.\nCharset\ninstances also have the following methods:- get_body_encoding()\u00b6\nReturn the content transfer encoding used for body encoding.\nThis is either the string\nquoted-printable\norbase64\ndepending on the encoding used, or it is a function, in which case you should call the function with a single argument, the Message object being encoded. 
The function should then set the Content-Transfer-Encoding header itself to whatever is appropriate.Returns the string\nquoted-printable\nif body_encoding isQP\n, returns the stringbase64\nif body_encoding isBASE64\n, and returns the string7bit\notherwise.\n- get_output_charset()\u00b6\nReturn the output character set.\nThis is the output_charset attribute if that is not\nNone\n, otherwise it is input_charset.\n- header_encode(string)\u00b6\nHeader-encode the string string.\nThe type of encoding (base64 or quoted-printable) will be based on the header_encoding attribute.\n- header_encode_lines(string, maxlengths)\u00b6\nHeader-encode a string by converting it first to bytes.\nThis is similar to\nheader_encode()\nexcept that the string is fit into maximum line lengths as given by the argument maxlengths, which must be an iterator: each element returned from this iterator will provide the next maximum line length.\n- body_encode(string)\u00b6\nBody-encode the string string.\nThe type of encoding (base64 or quoted-printable) will be based on the body_encoding attribute.\nThe\nCharset\nclass also provides a number of methods to support standard operations and built-in functions.- __str__()\u00b6\nReturns input_charset as a string coerced to lower case.\n__repr__()\nis an alias for__str__()\n.\nThe email.charset\nmodule also provides the following functions for adding\nnew entries to the global character set, alias, and codec registries:\n- email.charset.add_charset(charset, header_enc=None, body_enc=None, output_charset=None)\u00b6\nAdd character properties to the global registry.\ncharset is the input character set, and must be the canonical name of a character set.\nOptional header_enc and body_enc is either\ncharset.QP\nfor quoted-printable,charset.BASE64\nfor base64 encoding,charset.SHORTEST\nfor the shortest of quoted-printable or base64 encoding, orNone\nfor no encoding.SHORTEST\nis only valid for header_enc. 
The default is None, for no encoding.\nOptional output_charset is the character set that the output should be in. Conversions will proceed from input charset, to Unicode, to the output charset when the method Charset.convert() is called. The default is to output in the same character set as the input.\nBoth input_charset and output_charset must have Unicode codec entries in the module\u2019s character set-to-codec mapping; use add_codec() to add codecs the module does not know about. See the codecs module\u2019s documentation for more information.\nThe global character set registry is kept in the module global dictionary CHARSETS.\n- email.charset.add_alias(alias, canonical)\u00b6\nAdd a character set alias. alias is the alias name, e.g. latin-1. canonical is the character set\u2019s canonical name, e.g. iso-8859-1.\nThe global charset alias registry is kept in the module global dictionary ALIASES.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1586}
{"url": "https://docs.python.org/3/library/email.iterators.html", "title": ": Iterators", "content": "email.iterators\n: Iterators\u00b6\nSource code: Lib/email/iterators.py\nIterating over a message object tree is fairly easy with the Message.walk method. The email.iterators module provides some useful higher level iterations over message object trees.\n- email.iterators.body_line_iterator(msg, decode=False)\u00b6\nThis iterates over all the payloads in all the subparts of msg, returning the string payloads line-by-line. It skips over all the subpart headers, and it skips over any subpart with a payload that isn\u2019t a Python string. This is somewhat equivalent to reading the flat text representation of the message from a file using readline(), skipping over all the intervening headers.\nOptional decode is passed through to Message.get_payload.\n- email.iterators.typed_subpart_iterator(msg, maintype='text', subtype=None)\u00b6\nThis iterates over all the subparts of msg, returning only those subparts that match the MIME type specified by maintype and subtype.\nNote that subtype is optional; if omitted, then subpart MIME type matching is done only with the main type. maintype is optional too; it defaults to text.\nThus, by default typed_subpart_iterator() returns each subpart that has a MIME type of text/*.\nThe following function has been added as a useful debugging tool. It should not be considered part of the supported public interface for the package.\n- email.iterators._structure(msg, fp=None, level=0, include_default=False)\u00b6\nPrints an indented representation of the content types of the message object structure. For example:\n>>> msg = email.message_from_file(somefile)\n>>> _structure(msg)\nmultipart/mixed\n    text/plain\n    text/plain\n    multipart/digest\n        message/rfc822\n            text/plain\n        message/rfc822\n            text/plain\n        message/rfc822\n            text/plain\n        message/rfc822\n            text/plain\n        message/rfc822\n            text/plain\n    text/plain\nOptional fp is a file-like object to print the output to. 
It must be suitable for Python\u2019s\nprint()\nfunction. level is used internally. include_default, if true, prints the default type as well.", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 495}
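A short sketch of typed_subpart_iterator on a hand-written multipart message (the message text and boundary are made up for illustration):

```python
import email
from email.iterators import typed_subpart_iterator

raw = (
    'Content-Type: multipart/mixed; boundary="BOUND"\n'
    "MIME-Version: 1.0\n"
    "\n"
    "--BOUND\n"
    "Content-Type: text/plain\n"
    "\n"
    "hello\n"
    "--BOUND\n"
    "Content-Type: text/html\n"
    "\n"
    "<p>hello</p>\n"
    "--BOUND--\n"
)
msg = email.message_from_string(raw)

# The default maintype='text' with no subtype matches every text/* subpart.
print([p.get_content_type() for p in typed_subpart_iterator(msg)])
# ['text/plain', 'text/html']

# Supplying a subtype narrows the match.
print(len(list(typed_subpart_iterator(msg, "text", "plain"))))  # 1
```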
{"url": "https://docs.python.org/3/library/email.encoders.html", "title": ": Encoders", "content": "email.encoders\n: Encoders\u00b6\nSource code: Lib/email/encoders.py\nThis module is part of the legacy (Compat32\n) email API. In the\nnew API the functionality is provided by the cte parameter of\nthe set_content()\nmethod.\nThis module is deprecated in Python 3. The functions provided here\nshould not be called explicitly since the MIMEText\nclass sets the content type and CTE header using the _subtype and _charset\nvalues passed during the instantiation of that class.\nThe remaining text in this section is the original documentation of the module.\nWhen creating Message\nobjects from scratch, you often\nneed to encode the payloads for transport through compliant mail servers. This\nis especially true for image/* and text/* type messages\ncontaining binary data.\nThe email\npackage provides some convenient encoders in its\nencoders\nmodule. These encoders are actually used by the\nMIMEAudio\nand MIMEImage\nclass constructors to provide default encodings. All encoder functions take\nexactly one argument, the message object to encode. They usually extract the\npayload, encode it, and reset the payload to this newly encoded value. They\nshould also set the Content-Transfer-Encoding header as appropriate.\nNote that these functions are not meaningful for a multipart message. They\nmust be applied to individual subparts instead, and will raise a\nTypeError\nif passed a message whose type is multipart.\nHere are the encoding functions provided:\n- email.encoders.encode_quopri(msg)\u00b6\nEncodes the payload into quoted-printable form and sets the Content-Transfer-Encoding header to\nquoted-printable\n[1]. This is a good encoding to use when most of your payload is normal printable data, but contains a few unprintable characters.\n- email.encoders.encode_base64(msg)\u00b6\nEncodes the payload into base64 form and sets the Content-Transfer-Encoding header to\nbase64\n. 
This is a good encoding to use when most of your payload is unprintable data since it is a more compact form than quoted-printable. The drawback of base64 encoding is that it renders the text non-human readable.\n- email.encoders.encode_7or8bit(msg)\u00b6\nThis doesn\u2019t actually modify the message\u2019s payload, but it does set the Content-Transfer-Encoding header to either 7bit or 8bit as appropriate, based on the payload data.\n- email.encoders.encode_noop(msg)\u00b6\nThis does nothing; it doesn\u2019t even set the Content-Transfer-Encoding header.\nFootnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 596}
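A minimal sketch of the legacy usage pattern: apply an encoder to a single non-multipart part (the payload bytes here are arbitrary sample data):

```python
from email import encoders
from email.mime.base import MIMEBase

part = MIMEBase("application", "octet-stream")
part.set_payload(b"\x00\x01binary bytes\xff")

# encode_base64 extracts the payload, replaces it with its base64
# form, and sets the Content-Transfer-Encoding header.
encoders.encode_base64(part)

print(part["Content-Transfer-Encoding"])  # base64
# Decoding the transfer encoding round-trips to the original bytes:
print(part.get_payload(decode=True) == b"\x00\x01binary bytes\xff")  # True
```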
{"url": "https://docs.python.org/3/extending/embedding.html", "title": "Embedding Python in Another Application", "content": "1. Embedding Python in Another Application\u00b6\nThe previous chapters discussed how to extend Python, that is, how to extend the functionality of Python by attaching a library of C functions to it. It is also possible to do it the other way around: enrich your C/C++ application by embedding Python in it. Embedding provides your application with the ability to implement some of the functionality of your application in Python rather than C or C++. This can be used for many purposes; one example would be to allow users to tailor the application to their needs by writing some scripts in Python. You can also use it yourself if some of the functionality can be written in Python more easily.\nEmbedding Python is similar to extending it, but not quite. The difference is that when you extend Python, the main program of the application is still the Python interpreter, while if you embed Python, the main program may have nothing to do with Python \u2014 instead, some parts of the application occasionally call the Python interpreter to run some Python code.\nSo if you are embedding Python, you are providing your own main program. One of\nthe things this main program has to do is initialize the Python interpreter. At\nthe very least, you have to call the function Py_Initialize()\n. There are\noptional calls to pass command line arguments to Python. Then later you can\ncall the interpreter from any part of the application.\nThere are several different ways to call the interpreter: you can pass a string\ncontaining Python statements to PyRun_SimpleString()\n, or you can pass a\nstdio file pointer and a file name (for identification in error messages only)\nto PyRun_SimpleFile()\n. 
You can also call the lower-level operations\ndescribed in the previous chapters to construct and use Python objects.\nSee also\n- Python/C API Reference Manual\nThe details of Python\u2019s C interface are given in this manual. A great deal of necessary information can be found here.\n1.1. Very High Level Embedding\u00b6\nThe simplest form of embedding Python is the use of the very high level interface. This interface is intended to execute a Python script without needing to interact with the application directly. This can for example be used to perform some operation on a file.\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\nint\nmain(int argc, char *argv[])\n{\nPyStatus status;\nPyConfig config;\nPyConfig_InitPythonConfig(&config);\n/* optional but recommended */\nstatus = PyConfig_SetBytesString(&config, &config.program_name, argv[0]);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nstatus = Py_InitializeFromConfig(&config);\nif (PyStatus_Exception(status)) {\ngoto exception;\n}\nPyConfig_Clear(&config);\nPyRun_SimpleString(\"from time import time,ctime\\n\"\n\"print('Today is', ctime(time()))\\n\");\nif (Py_FinalizeEx() < 0) {\nexit(120);\n}\nreturn 0;\nexception:\nPyConfig_Clear(&config);\nPy_ExitStatusException(status);\n}\nNote\n#define PY_SSIZE_T_CLEAN\nwas used to indicate that Py_ssize_t\nshould be\nused in some APIs instead of int\n.\nIt is not necessary since Python 3.13, but we keep it here for backward compatibility.\nSee Strings and buffers for a description of this macro.\nSetting PyConfig.program_name\nshould be called before\nPy_InitializeFromConfig()\nto inform the interpreter about paths to Python run-time\nlibraries. Next, the Python interpreter is initialized with\nPy_Initialize()\n, followed by the execution of a hard-coded Python script\nthat prints the date and time. Afterwards, the Py_FinalizeEx()\ncall shuts\nthe interpreter down, followed by the end of the program. 
In a real program,\nyou may want to get the Python script from another source, perhaps a text-editor\nroutine, a file, or a database. Getting the Python code from a file can better\nbe done by using the PyRun_SimpleFile()\nfunction, which saves you the\ntrouble of allocating memory space and loading the file contents.\n1.2. Beyond Very High Level Embedding: An overview\u00b6\nThe high level interface gives you the ability to execute arbitrary pieces of Python code from your application, but exchanging data values is quite cumbersome to say the least. If you want that, you should use lower level calls. At the cost of having to write more C code, you can achieve almost anything.\nIt should be noted that extending Python and embedding Python is quite the same activity, despite the different intent. Most topics discussed in the previous chapters are still valid. To show this, consider what the extension code from Python to C really does:\nConvert data values from Python to C,\nPerform a function call to a C routine using the converted values, and\nConvert the data values from the call from C to Python.\nWhen embedding Python, the interface code does:\nConvert data values from C to Python,\nPerform a function call to a Python interface routine using the converted values, and\nConvert the data values from the call from Python to C.\nAs you can see, the data conversion steps are simply swapped to accommodate the different direction of the cross-language transfer. The only difference is the routine that you call between both data conversions. When extending, you call a C routine, when embedding, you call a Python routine.\nThis chapter will not discuss how to convert data from Python to C and vice versa. Also, proper use of references and dealing with errors is assumed to be understood. Since these aspects do not differ from extending the interpreter, you can refer to earlier chapters for the required information.\n1.3. 
Pure Embedding\u00b6\nThe first program aims to execute a function in a Python script. Like in the section about the very high level interface, the Python interpreter does not directly interact with the application (but that will change in the next section).\nThe code to run a function defined in a Python script is:\n#define PY_SSIZE_T_CLEAN\n#include <Python.h>\nint\nmain(int argc, char *argv[])\n{\nPyObject *pName, *pModule, *pFunc;\nPyObject *pArgs, *pValue;\nint i;\nif (argc < 3) {\nfprintf(stderr,\"Usage: call pythonfile funcname [args]\\n\");\nreturn 1;\n}\nPy_Initialize();\npName = PyUnicode_DecodeFSDefault(argv[1]);\n/* Error checking of pName left out */\npModule = PyImport_Import(pName);\nPy_DECREF(pName);\nif (pModule != NULL) {\npFunc = PyObject_GetAttrString(pModule, argv[2]);\n/* pFunc is a new reference */\nif (pFunc && PyCallable_Check(pFunc)) {\npArgs = PyTuple_New(argc - 3);\nfor (i = 0; i < argc - 3; ++i) {\npValue = PyLong_FromLong(atoi(argv[i + 3]));\nif (!pValue) {\nPy_DECREF(pArgs);\nPy_DECREF(pModule);\nfprintf(stderr, \"Cannot convert argument\\n\");\nreturn 1;\n}\n/* pValue reference stolen here: */\nPyTuple_SetItem(pArgs, i, pValue);\n}\npValue = PyObject_CallObject(pFunc, pArgs);\nPy_DECREF(pArgs);\nif (pValue != NULL) {\nprintf(\"Result of call: %ld\\n\", PyLong_AsLong(pValue));\nPy_DECREF(pValue);\n}\nelse {\nPy_DECREF(pFunc);\nPy_DECREF(pModule);\nPyErr_Print();\nfprintf(stderr,\"Call failed\\n\");\nreturn 1;\n}\n}\nelse {\nif (PyErr_Occurred())\nPyErr_Print();\nfprintf(stderr, \"Cannot find function \\\"%s\\\"\\n\", argv[2]);\n}\nPy_XDECREF(pFunc);\nPy_DECREF(pModule);\n}\nelse {\nPyErr_Print();\nfprintf(stderr, \"Failed to load \\\"%s\\\"\\n\", argv[1]);\nreturn 1;\n}\nif (Py_FinalizeEx() < 0) {\nreturn 120;\n}\nreturn 0;\n}\nThis code loads a Python script using argv[1]\n, and calls the function named\nin argv[2]\n. Its integer arguments are the other values of the argv\narray. 
If you compile and link this program (let\u2019s call\nthe finished executable call), and use it to execute a Python\nscript, such as:\ndef multiply(a, b):\n    print(\"Will compute\", a, \"times\", b)\n    c = 0\n    for i in range(0, a):\n        c = c + b\n    return c\nthen the result should be:\n$ call multiply multiply 3 2\nWill compute 3 times 2\nResult of call: 6\nAlthough the program is quite large for its functionality, most of the code is for data conversion between Python and C, and for error reporting. The interesting part with respect to embedding Python starts with\nPy_Initialize();\npName = PyUnicode_DecodeFSDefault(argv[1]);\n/* Error checking of pName left out */\npModule = PyImport_Import(pName);\nAfter initializing the interpreter, the script is loaded using\nPyImport_Import()\n. This routine needs a Python string as its argument,\nwhich is constructed using the PyUnicode_DecodeFSDefault()\ndata\nconversion routine.\npFunc = PyObject_GetAttrString(pModule, argv[2]);\n/* pFunc is a new reference */\nif (pFunc && PyCallable_Check(pFunc)) {\n...\n}\nPy_XDECREF(pFunc);\nOnce the script is loaded, the name we\u2019re looking for is retrieved using\nPyObject_GetAttrString()\n. If the name exists, and the object returned is\ncallable, you can safely assume that it is a function. The program then\nproceeds by constructing a tuple of arguments as normal. The call to the Python\nfunction is then made with:\npValue = PyObject_CallObject(pFunc, pArgs);\nUpon return of the function, pValue\nis either NULL\nor it contains a\nreference to the return value of the function. Be sure to release the reference\nafter examining the value.\n1.4. Extending Embedded Python\u00b6\nUntil now, the embedded Python interpreter had no access to functionality from the application itself. The Python API allows this by extending the embedded interpreter. That is, the embedded interpreter gets extended with routines provided by the application. While it sounds complex, it is not so bad. 
Simply forget for a while that the application starts the Python interpreter. Instead, consider the application to be a set of subroutines, and write some glue code that gives Python access to those routines, just like you would write a normal Python extension. For example:\nstatic int numargs=0;\n/* Return the number of arguments of the application command line */\nstatic PyObject*\nemb_numargs(PyObject *self, PyObject *args)\n{\nif(!PyArg_ParseTuple(args, \":numargs\"))\nreturn NULL;\nreturn PyLong_FromLong(numargs);\n}\nstatic PyMethodDef emb_module_methods[] = {\n{\"numargs\", emb_numargs, METH_VARARGS,\n\"Return the number of arguments received by the process.\"},\n{NULL, NULL, 0, NULL}\n};\nstatic struct PyModuleDef emb_module = {\n.m_base = PyModuleDef_HEAD_INIT,\n.m_name = \"emb\",\n.m_size = 0,\n.m_methods = emb_module_methods,\n};\nstatic PyObject*\nPyInit_emb(void)\n{\nreturn PyModuleDef_Init(&emb_module);\n}\nInsert the above code just above the main()\nfunction. Also, insert the\nfollowing two statements before the call to Py_Initialize()\n:\nnumargs = argc;\nPyImport_AppendInittab(\"emb\", &PyInit_emb);\nThese two lines initialize the numargs\nvariable, and make the\nemb.numargs()\nfunction accessible to the embedded Python interpreter.\nWith these extensions, the Python script can do things like\nimport emb\nprint(\"Number of arguments\", emb.numargs())\nIn a real application, the methods will expose an API of the application to Python.\n1.5. Embedding Python in C++\u00b6\nIt is also possible to embed Python in a C++ program; precisely how this is done will depend on the details of the C++ system used; in general you will need to write the main program in C++, and use the C++ compiler to compile and link your program. There is no need to recompile Python itself using C++.\n1.6. 
Compiling and Linking under Unix-like systems\u00b6\nIt is not necessarily trivial to find the right flags to pass to your\ncompiler (and linker) in order to embed the Python interpreter into your\napplication, particularly because Python needs to load library modules\nimplemented as C dynamic extensions (.so\nfiles) linked against\nit.\nTo find out the required compiler and linker flags, you can execute the\npythonX.Y-config\nscript which is generated as part of the\ninstallation process (a python3-config\nscript may also be\navailable). This script has several options, of which the following will\nbe directly useful to you:\npythonX.Y-config --cflags\nwill give you the recommended flags when compiling:$ /opt/bin/python3.11-config --cflags -I/opt/include/python3.11 -I/opt/include/python3.11 -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall\npythonX.Y-config --ldflags --embed\nwill give you the recommended flags when linking:$ /opt/bin/python3.11-config --ldflags --embed -L/opt/lib/python3.11/config-3.11-x86_64-linux-gnu -L/opt/lib -lpython3.11 -lpthread -ldl -lutil -lm\nNote\nTo avoid confusion between several Python installations (and especially\nbetween the system Python and your own compiled Python), it is recommended\nthat you use the absolute path to pythonX.Y-config\n, as in the above\nexample.\nIf this procedure doesn\u2019t work for you (it is not guaranteed to work for\nall Unix-like platforms; however, we welcome bug reports)\nyou will have to read your system\u2019s documentation about dynamic linking and/or\nexamine Python\u2019s Makefile\n(use sysconfig.get_makefile_filename()\nto find its location) and compilation\noptions. In this case, the sysconfig\nmodule is a useful tool to\nprogrammatically extract the configuration values that you will want to\ncombine together. 
For example:\n>>> import sysconfig\n>>> sysconfig.get_config_var('LIBS')\n'-lpthread -ldl -lutil'\n>>> sysconfig.get_config_var('LINKFORSHARED')\n'-Xlinker -export-dynamic'", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 3228}
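The sysconfig queries above generalize. A small sketch that assembles rough compile and link flags programmatically; this is only an approximation of what pythonX.Y-config reports (the real script also handles ABI suffixes and --embed), and the concrete values vary per installation:

```python
import sysconfig

# Header search path for compiling against the Python C API.
include_dir = sysconfig.get_path("include")
cflags = f"-I{include_dir}"

# Library flags: the interpreter library plus the extra system libs
# recorded at build time (LIBS may be empty or None on some platforms).
version = sysconfig.get_config_var("VERSION")
libs = sysconfig.get_config_var("LIBS") or ""
ldflags = f"-lpython{version} {libs}".strip()

print(cflags)   # e.g. -I/usr/include/python3.11
print(ldflags)  # e.g. -lpython3.11 -lpthread -ldl -lutil -lm
```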
{"url": "https://docs.python.org/3/library/email.message.html", "title": ": Representing an email message", "content": "email.message\n: Representing an email message\u00b6\nSource code: Lib/email/message.py\nAdded in version 3.6: [1]\nThe central class in the email\npackage is the EmailMessage\nclass, imported from the email.message\nmodule. It is the base class for\nthe email\nobject model. EmailMessage\nprovides the core\nfunctionality for setting and querying header fields, for accessing message\nbodies, and for creating or modifying structured messages.\nAn email message consists of headers and a payload (which is also referred to as the content). Headers are RFC 5322 or RFC 6532 style field names and values, where the field name and value are separated by a colon. The colon is not part of either the field name or the field value. The payload may be a simple text message, or a binary object, or a structured sequence of sub-messages each with their own set of headers and their own payload. The latter type of payload is indicated by the message having a MIME type such as multipart/* or message/rfc822.\nThe conceptual model provided by an EmailMessage\nobject is that of an\nordered dictionary of headers coupled with a payload that represents the\nRFC 5322 body of the message, which might be a list of sub-EmailMessage\nobjects. In addition to the normal dictionary methods for accessing the header\nnames and values, there are methods for accessing specialized information from\nthe headers (for example the MIME content type), for operating on the payload,\nfor generating a serialized version of the message, and for recursively walking\nover the object tree.\nThe EmailMessage\ndictionary-like interface is indexed by the header\nnames, which must be ASCII values. The values of the dictionary are strings\nwith some extra methods. Headers are stored and returned in case-preserving\nform, but field names are matched case-insensitively. 
The keys are ordered, but unlike a real dict, there can be duplicates. Additional methods are provided for working with headers that have duplicate keys.\nThe payload is either a string or bytes object, in the case of simple message objects, or a list of EmailMessage objects, for MIME container documents such as multipart/* and message/rfc822 message objects.\n- class email.message.EmailMessage(policy=default)\u00b6\nIf policy is specified, use the rules it specifies to update and serialize the representation of the message. If policy is not set, use the default policy, which follows the rules of the email RFCs except for line endings (instead of the RFC-mandated \\r\\n, it uses the Python standard \\n line endings). For more information see the policy documentation. [2]\n- as_string(unixfrom=False, maxheaderlen=None, policy=None)\u00b6\nReturn the entire message flattened as a string. When optional unixfrom is true, the envelope header is included in the returned string. unixfrom defaults to False. For backward compatibility with the base Message class, maxheaderlen is accepted, but defaults to None, which means that by default the line length is controlled by the max_line_length of the policy. The policy argument may be used to override the default policy obtained from the message instance. This can be used to control some of the formatting produced by the method, since the specified policy will be passed to the Generator.\nFlattening the message may trigger changes to the EmailMessage if defaults need to be filled in to complete the transformation to a string (for example, MIME boundaries may be generated or modified).\nNote that this method is provided as a convenience and may not be the most useful way to serialize messages in your application, especially if you are dealing with multiple messages. See email.generator.Generator for a more flexible API for serializing messages. 
Note also that this method is restricted to producing messages serialized as \u201c7 bit clean\u201d when utf8 is False, which is the default.\nChanged in version 3.6: the default behavior when maxheaderlen is not specified was changed from defaulting to 0 to defaulting to the value of max_line_length from the policy.\n- __str__()\u00b6\nEquivalent to as_string(policy=self.policy.clone(utf8=True)). Allows str(msg) to produce a string containing the serialized message in a readable format.\nChanged in version 3.4: the method was changed to use utf8=True, thus producing an RFC 6531-like message representation, instead of being a direct alias for as_string().\n- as_bytes(unixfrom=False, policy=None)\u00b6\nReturn the entire message flattened as a bytes object. When optional unixfrom is true, the envelope header is included in the returned string. unixfrom defaults to False. The policy argument may be used to override the default policy obtained from the message instance. This can be used to control some of the formatting produced by the method, since the specified policy will be passed to the BytesGenerator.\nFlattening the message may trigger changes to the EmailMessage if defaults need to be filled in to complete the transformation to a string (for example, MIME boundaries may be generated or modified).\nNote that this method is provided as a convenience and may not be the most useful way to serialize messages in your application, especially if you are dealing with multiple messages. See email.generator.BytesGenerator for a more flexible API for serializing messages.\n- __bytes__()\u00b6\nEquivalent to as_bytes(). Allows bytes(msg) to produce a bytes object containing the serialized message.\n- is_multipart()\u00b6\nReturn True if the message\u2019s payload is a list of sub-EmailMessage objects, otherwise return False. When is_multipart() returns False, the payload should be a string object (which might be a CTE encoded binary payload). 
Note that is_multipart() returning True does not necessarily mean that \u201cmsg.get_content_maintype() == \u2018multipart\u2019\u201d will return True. For example, is_multipart will return True when the EmailMessage is of type message/rfc822.\n- set_unixfrom(unixfrom)\u00b6\nSet the message\u2019s envelope header to unixfrom, which should be a string. (See mboxMessage for a brief description of this header.)\n- get_unixfrom()\u00b6\nReturn the message\u2019s envelope header. Defaults to None if the envelope header was never set.\nThe following methods implement the mapping-like interface for accessing the message\u2019s headers. Note that there are some semantic differences between these methods and a normal mapping (i.e. dictionary) interface. For example, in a dictionary there are no duplicate keys, but here there may be duplicate message headers. Also, in dictionaries there is no guaranteed order to the keys returned by keys(), but in an EmailMessage object, headers are always returned in the order they appeared in the original message, or in which they were added to the message later. Any header deleted and then re-added is always appended to the end of the header list.\nThese semantic differences are intentional and are biased toward convenience in the most common use cases.\nNote that in all cases, any envelope header present in the message is not included in the mapping interface.\n- __len__()\u00b6\nReturn the total number of headers, including duplicates.\n- __contains__(name)\u00b6\nReturn True if the message object has a field named name. Matching is done without regard to case and name does not include the trailing colon. Used for the in operator. For example:\nif 'message-id' in myMessage:\n    print('Message-ID:', myMessage['message-id'])\n- __getitem__(name)\u00b6\nReturn the value of the named header field. name does not include the colon field separator. 
If the header is missing, None is returned; a KeyError is never raised.\nNote that if the named field appears more than once in the message\u2019s headers, exactly which of those field values will be returned is undefined. Use the get_all() method to get the values of all the extant headers named name.\nUsing the standard (non-compat32) policies, the returned value is an instance of a subclass of email.headerregistry.BaseHeader.\n- __setitem__(name, val)\u00b6\nAdd a header to the message with field name name and value val. The field is appended to the end of the message\u2019s existing headers.\nNote that this does not overwrite or delete any existing header with the same name. If you want to ensure that the new header is the only one present in the message with field name name, delete the field first, e.g.:\ndel msg['subject']\nmsg['subject'] = 'Python roolz!'\nIf the policy defines certain headers to be unique (as the standard policies do), this method may raise a ValueError when an attempt is made to assign a value to such a header when one already exists. This behavior is intentional for consistency\u2019s sake, but do not depend on it as we may choose to make such assignments do an automatic deletion of the existing header in the future.\n- __delitem__(name)\u00b6\nDelete all occurrences of the field with name name from the message\u2019s headers. No exception is raised if the named field isn\u2019t present in the headers.\n- keys()\u00b6\nReturn a list of all the message\u2019s header field names.\n- values()\u00b6\nReturn a list of all the message\u2019s field values.\n- items()\u00b6\nReturn a list of 2-tuples containing all the message\u2019s field headers and values.\n- get(name, failobj=None)\u00b6\nReturn the value of the named header field. 
This is identical to __getitem__() except that optional failobj is returned if the named header is missing (failobj defaults to None).\nHere are some additional useful header related methods:\n- get_all(name, failobj=None)\u00b6\nReturn a list of all the values for the field named name. If there are no such named headers in the message, failobj is returned (defaults to None).\n- add_header(_name, _value, **_params)\u00b6\nExtended header setting. This method is similar to __setitem__() except that additional header parameters can be provided as keyword arguments. _name is the header field to add and _value is the primary value for the header.\nFor each item in the keyword argument dictionary _params, the key is taken as the parameter name, with underscores converted to dashes (since dashes are illegal in Python identifiers). Normally, the parameter will be added as key=\"value\" unless the value is None, in which case only the key will be added.\nIf the value contains non-ASCII characters, the charset and language may be explicitly controlled by specifying the value as a three tuple in the format (CHARSET, LANGUAGE, VALUE), where CHARSET is a string naming the charset to be used to encode the value, LANGUAGE can usually be set to None or the empty string (see RFC 2231 for other possibilities), and VALUE is the string value containing non-ASCII code points. If a three tuple is not passed and the value contains non-ASCII characters, it is automatically encoded in RFC 2231 format using a CHARSET of utf-8 and a LANGUAGE of None.\nHere is an example:\nmsg.add_header('Content-Disposition', 'attachment', filename='bud.gif')\nThis will add a header that looks like\nContent-Disposition: attachment; filename=\"bud.gif\"\nAn example of the extended interface with non-ASCII characters:\nmsg.add_header('Content-Disposition', 'attachment', filename=('iso-8859-1', '', 'Fu\u00dfballer.ppt'))\n- replace_header(_name, _value)\u00b6\nReplace a header. 
Replace the first header found in the message that matches _name, retaining header order and field name case of the original header. If no matching header is found, raise a KeyError.\n- get_content_type()\u00b6\nReturn the message\u2019s content type, coerced to lower case of the form maintype/subtype. If there is no Content-Type header in the message, return the value returned by get_default_type(). If the Content-Type header is invalid, return text/plain.\n(According to RFC 2045, messages always have a default type, so get_content_type() will always return a value. RFC 2045 defines a message\u2019s default type to be text/plain unless it appears inside a multipart/digest container, in which case it would be message/rfc822. If the Content-Type header has an invalid type specification, RFC 2045 mandates that the default type be text/plain.)\n- get_content_maintype()\u00b6\nReturn the message\u2019s main content type. This is the maintype part of the string returned by get_content_type().\n- get_content_subtype()\u00b6\nReturn the message\u2019s sub-content type. This is the subtype part of the string returned by get_content_type().\n- get_default_type()\u00b6\nReturn the default content type. Most messages have a default content type of text/plain, except for messages that are subparts of multipart/digest containers. Such subparts have a default content type of message/rfc822.\n- set_default_type(ctype)\u00b6\nSet the default content type. ctype should be either text/plain or message/rfc822, although this is not enforced. The default content type is not stored in the Content-Type header, so it only affects the return value of the get_content_type methods when no Content-Type header is present in the message.\n- set_param(param, value, header='Content-Type', requote=True, charset=None, language='', replace=False)\u00b6\nSet a parameter in the Content-Type header. If the parameter already exists in the header, replace its value with value. 
When header is Content-Type (the default) and the header does not yet exist in the message, add it, set its value to text/plain, and append the new parameter value. Optional header specifies an alternative header to Content-Type.
If the value contains non-ASCII characters, the charset and language may be explicitly specified using the optional charset and language parameters. Optional language specifies the RFC 2231 language, defaulting to the empty string. Both charset and language should be strings. The default is to use the utf-8 charset and None for the language.
If replace is False (the default) the header is moved to the end of the list of headers. If replace is True, the header will be updated in place.
Use of the requote parameter with EmailMessage objects is deprecated.
Note that existing parameter values of headers may be accessed through the params attribute of the header value (for example, msg['Content-Type'].params['charset']).
Changed in version 3.4: replace keyword was added.
- del_param(param, header='content-type', requote=True)¶
Remove the given parameter completely from the Content-Type header. The header will be re-written in place without the parameter or its value. Optional header specifies an alternative to Content-Type.
Use of the requote parameter with EmailMessage objects is deprecated.
- get_filename(failobj=None)¶
Return the value of the filename parameter of the Content-Disposition header of the message. If the header does not have a filename parameter, this method falls back to looking for the name parameter on the Content-Type header. If neither is found, or the header is missing, then failobj is returned. The returned string will always be unquoted as per email.utils.unquote().
- get_boundary(failobj=None)¶
Return the value of the boundary parameter of the Content-Type header of the message, or failobj if either the header is missing, or has no boundary parameter.
The returned string will always be unquoted as per email.utils.unquote().
- set_boundary(boundary)¶
Set the boundary parameter of the Content-Type header to boundary. set_boundary() will always quote boundary if necessary. A HeaderParseError is raised if the message object has no Content-Type header.
Note that using this method is subtly different from deleting the old Content-Type header and adding a new one with the new boundary via add_header(), because set_boundary() preserves the order of the Content-Type header in the list of headers.
- get_content_charset(failobj=None)¶
Return the charset parameter of the Content-Type header, coerced to lower case. If there is no Content-Type header, or if that header has no charset parameter, failobj is returned.
- get_charsets(failobj=None)¶
Return a list containing the character set names in the message. If the message is a multipart, then the list will contain one element for each subpart in the payload; otherwise, it will be a list of length 1.
Each item in the list will be a string which is the value of the charset parameter in the Content-Type header for the represented subpart. If the subpart has no Content-Type header, no charset parameter, or is not of the text main MIME type, then that item in the returned list will be failobj.
- is_attachment()¶
Return True if there is a Content-Disposition header and its (case insensitive) value is attachment, False otherwise.
Changed in version 3.4.2: is_attachment is now a method instead of a property, for consistency with is_multipart().
- get_content_disposition()¶
Return the lowercased value (without parameters) of the message's Content-Disposition header if it has one, or None.
The possible values for this method are inline, attachment or None if the message follows RFC 2183.
Added in version 3.5.
The following methods relate to interrogating and manipulating the content (payload) of the message.
- walk()¶
The walk() method is an all-purpose generator which can be used to iterate over all the parts and subparts of a message object tree, in depth-first traversal order. You will typically use walk() as the iterator in a for loop; each iteration returns the next subpart.
Here's an example that prints the MIME type of every part of a multipart message structure:

>>> for part in msg.walk():
...     print(part.get_content_type())
multipart/report
text/plain
message/delivery-status
text/plain
text/plain
message/rfc822
text/plain

walk iterates over the subparts of any part where is_multipart() returns True, even though msg.get_content_maintype() == 'multipart' may return False. We can see this in our example by making use of the _structure debug helper function:

>>> from email.iterators import _structure
>>> for part in msg.walk():
...     print(part.get_content_maintype() == 'multipart',
...           part.is_multipart())
True True
False False
False True
False False
False False
False True
False False
>>> _structure(msg)
multipart/report
    text/plain
    message/delivery-status
        text/plain
        text/plain
    message/rfc822
        text/plain

Here the message parts are not multiparts, but they do contain subparts. is_multipart() returns True and walk descends into the subparts.
- get_body(preferencelist=('related', 'html', 'plain'))¶
Return the MIME part that is the best candidate to be the "body" of the message.
preferencelist must be a sequence of strings from the set related, html, and plain, and indicates the order of preference for the content type of the part returned.
Start looking for candidate matches with the object on which the get_body method is called.
If related is not included in preferencelist, consider the root part (or subpart of the root part) of any related encountered as a candidate if the (sub-)part matches a preference.
When encountering a multipart/related, check the start parameter and if a part with a matching Content-ID is found, consider only it when looking for candidate matches. Otherwise consider only the first (default root) part of the multipart/related.
If a part has a Content-Disposition header, only consider the part a candidate match if the value of the header is inline.
If none of the candidates matches any of the preferences in preferencelist, return None.
Notes: (1) For most applications the only preferencelist combinations that really make sense are ('plain',), ('html', 'plain'), and the default ('related', 'html', 'plain'). (2) Because matching starts with the object on which get_body is called, calling get_body on a multipart/related will return the object itself unless preferencelist has a non-default value.
(3) Messages (or message parts) that do not specify a Content-Type or whose Content-Type header is invalid will be treated as if they are of type text/plain, which may occasionally cause get_body to return unexpected results.
- iter_attachments()¶
Return an iterator over all of the immediate sub-parts of the message that are not candidate "body" parts. That is, skip the first occurrence of each of text/plain, text/html, multipart/related, or multipart/alternative (unless they are explicitly marked as attachments via Content-Disposition: attachment), and return all remaining parts. When applied directly to a multipart/related, return an iterator over all the related parts except the root part (ie: the part pointed to by the start parameter, or the first part if there is no start parameter or the start parameter doesn't match the Content-ID of any of the parts). When applied directly to a multipart/alternative or a non-multipart, return an empty iterator.
- iter_parts()¶
Return an iterator over all of the immediate sub-parts of the message, which will be empty for a non-multipart. (See also walk().)
- get_content(*args, content_manager=None, **kw)¶
Call the get_content() method of the content_manager, passing self as the message object, and passing along any other arguments or keywords as additional arguments. If content_manager is not specified, use the content_manager specified by the current policy.
- set_content(*args, content_manager=None, **kw)¶
Call the set_content() method of the content_manager, passing self as the message object, and passing along any other arguments or keywords as additional arguments. If content_manager is not specified, use the content_manager specified by the current policy.
- make_related(boundary=None)¶
Convert a non-multipart message into a multipart/related message, moving any existing Content- headers and payload into a (new) first part of the multipart. If boundary is specified, use it as the boundary string in the multipart, otherwise leave the boundary to be automatically created when it is needed (for example, when the message is serialized).
- make_alternative(boundary=None)¶
Convert a non-multipart or a multipart/related into a multipart/alternative, moving any existing Content- headers and payload into a (new) first part of the multipart. If boundary is specified, use it as the boundary string in the multipart, otherwise leave the boundary to be automatically created when it is needed (for example, when the message is serialized).
- make_mixed(boundary=None)¶
Convert a non-multipart, a multipart/related, or a multipart/alternative into a multipart/mixed, moving any existing Content- headers and payload into a (new) first part of the multipart. If boundary is specified, use it as the boundary string in the multipart, otherwise leave the boundary to be automatically created when it is needed (for example, when the message is serialized).
- add_related(*args, content_manager=None, **kw)¶
If the message is a multipart/related, create a new message object, pass all of the arguments to its set_content() method, and attach() it to the multipart. If the message is a non-multipart, call make_related() and then proceed as above. If the message is any other type of multipart, raise a TypeError. If content_manager is not specified, use the content_manager specified by the current policy. If the added part has no Content-Disposition header, add one with the value inline.
- add_alternative(*args, content_manager=None, **kw)¶
If the message is a multipart/alternative, create a new message object, pass all of the arguments to its set_content() method, and attach() it to the multipart. If the message is a non-multipart or multipart/related, call make_alternative() and then proceed as above. If the message is any other type of multipart, raise a TypeError.
If content_manager is not specified, use the content_manager specified by the current policy.
- add_attachment(*args, content_manager=None, **kw)¶
If the message is a multipart/mixed, create a new message object, pass all of the arguments to its set_content() method, and attach() it to the multipart. If the message is a non-multipart, multipart/related, or multipart/alternative, call make_mixed() and then proceed as above. If content_manager is not specified, use the content_manager specified by the current policy. If the added part has no Content-Disposition header, add one with the value attachment. This method can be used both for explicit attachments (Content-Disposition: attachment) and inline attachments (Content-Disposition: inline), by passing appropriate options to the content_manager.
- clear()¶
Remove the payload and all of the headers.
- clear_content()¶
Remove the payload and all of the Content- headers, leaving all other headers intact and in their original order.
EmailMessage objects have the following instance attributes:
- preamble¶
The format of a MIME document allows for some text between the blank line following the headers, and the first multipart boundary string. Normally, this text is never visible in a MIME-aware mail reader because it falls outside the standard MIME armor. However, when viewing the raw text of the message, or when viewing the message in a non-MIME aware reader, this text can become visible.
The preamble attribute contains this leading extra-armor text for MIME documents. When the Parser discovers some text after the headers but before the first boundary string, it assigns this text to the message's preamble attribute. When the Generator is writing out the plain text representation of a MIME message, and it finds the message has a preamble attribute, it will write this text in the area between the headers and the first boundary. See email.parser and email.generator for details.
Note that if the message object has no preamble, the preamble attribute will be None.
- epilogue¶
The epilogue attribute acts the same way as the preamble attribute, except that it contains text that appears between the last boundary and the end of the message. As with the preamble, if there is no epilog text this attribute will be None.
- defects¶
The defects attribute contains a list of all the problems found when parsing this message. See email.errors for a detailed description of the possible parsing defects.
- class email.message.MIMEPart(policy=default)¶
This class represents a subpart of a MIME message. It is identical to EmailMessage, except that no MIME-Version headers are added when set_content() is called, since sub-parts do not need their own MIME-Version headers.
Footnotes", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 6487}
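The methods above can be seen working together in a short sketch. The header name X-Custom and the placeholder attachment bytes below are illustrative, not taken from the documentation:

```python
from email.message import EmailMessage

# A minimal sketch exercising the EmailMessage API described above.
msg = EmailMessage()
msg['Subject'] = 'Report'
msg.set_content('See the attached file.')   # creates a text/plain body

print(msg.get_content_type())       # text/plain
print(msg.get_content_maintype())   # text
print(msg.get_content_subtype())    # plain

# add_header() keyword arguments become MIME parameters; underscores
# in keyword names are converted to dashes.
msg.add_header('X-Custom', 'value', param_one='1')

# Adding an attachment converts the message to multipart/mixed.
msg.add_attachment(b'fake image bytes', maintype='image',
                   subtype='gif', filename='bud.gif')
print(msg.get_content_type())       # multipart/mixed

# walk() visits the container and each subpart in depth-first order.
for part in msg.walk():
    print(part.get_content_type())  # multipart/mixed, text/plain, image/gif
```

Note how add_attachment() implicitly calls make_mixed(): the original Content- headers and payload move into a new first subpart, while non-Content headers such as Subject stay on the container.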
{"url": "https://docs.python.org/3/library/asyncio-queue.html", "title": "Queues", "content": "Queues¶
Source code: Lib/asyncio/queues.py
asyncio queues are designed to be similar to classes of the queue module. Although asyncio queues are not thread-safe, they are designed to be used specifically in async/await code.
Note that methods of asyncio queues don't have a timeout parameter; use the asyncio.wait_for() function to do queue operations with a timeout.
See also the Examples section below.
Queue¶
- class asyncio.Queue(maxsize=0)¶
A first in, first out (FIFO) queue.
If maxsize is less than or equal to zero, the queue size is infinite. If it is an integer greater than 0, then await put() blocks when the queue reaches maxsize until an item is removed by get().
Unlike the standard library threading queue, the size of the queue is always known and can be returned by calling the qsize() method.
Changed in version 3.10: Removed the loop parameter.
This class is not thread safe.
- maxsize¶
Number of items allowed in the queue.
- empty()¶
Return True if the queue is empty, False otherwise.
- full()¶
Return True if there are maxsize items in the queue.
If the queue was initialized with maxsize=0 (the default), then full() never returns True.
- async get()¶
Remove and return an item from the queue. If queue is empty, wait until an item is available.
Raises QueueShutDown if the queue has been shut down and is empty, or if the queue has been shut down immediately.
- get_nowait()¶
Return an item if one is immediately available, else raise QueueEmpty.
- async join()¶
Block until all items in the queue have been received and processed.
The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer coroutine calls task_done() to indicate that the item was retrieved and all work on it is complete.
When the count of unfinished tasks drops to zero, join() unblocks.
- async put(item)¶
Put an item into the queue. If the queue is full, wait until a free slot is available before adding the item.
Raises QueueShutDown if the queue has been shut down.
- put_nowait(item)¶
Put an item into the queue without blocking.
If no free slot is immediately available, raise QueueFull.
- qsize()¶
Return the number of items in the queue.
- shutdown(immediate=False)¶
Put a Queue instance into a shutdown mode.
The queue can no longer grow. Future calls to put() raise QueueShutDown. Currently blocked callers of put() will be unblocked and will raise QueueShutDown in the formerly awaiting task.
If immediate is false (the default), the queue can be wound down normally with get() calls to extract tasks that have already been loaded. And if task_done() is called for each remaining task, a pending join() will be unblocked normally. Once the queue is empty, future calls to get() will raise QueueShutDown.
If immediate is true, the queue is terminated immediately. The queue is drained to be completely empty and the count of unfinished tasks is reduced by the number of tasks drained. If unfinished tasks is zero, callers of join() are unblocked. Also, blocked callers of get() are unblocked and will raise QueueShutDown because the queue is empty.
Use caution when using join() with immediate set to true. This unblocks the join even when no work has been done on the tasks, violating the usual invariant for joining a queue.
Added in version 3.13.
- task_done()¶
Indicate that a formerly enqueued work item is complete.
Used by queue consumers.
For each get() used to fetch a work item, a subsequent call to task_done() tells the queue that the processing on the work item is complete.
If a join() is currently blocking, it will resume when all items have been processed (meaning that a task_done() call was received for every item that had been put() into the queue).
Raises ValueError if called more times than there were items placed in the queue.
Priority Queue¶
LIFO Queue¶
Exceptions¶
- exception asyncio.QueueEmpty¶
This exception is raised when the get_nowait() method is called on an empty queue.
- exception asyncio.QueueFull¶
Exception raised when the put_nowait() method is called on a queue that has reached its maxsize.
Examples¶
Queues can be used to distribute workload between several concurrent tasks:

import asyncio
import random
import time

async def worker(name, queue):
    while True:
        # Get a "work item" out of the queue.
        sleep_for = await queue.get()
        # Sleep for the "sleep_for" seconds.
        await asyncio.sleep(sleep_for)
        # Notify the queue that the "work item" has been processed.
        queue.task_done()
        print(f'{name} has slept for {sleep_for:.2f} seconds')

async def main():
    # Create a queue that we will use to store our "workload".
    queue = asyncio.Queue()
    # Generate random timings and put them into the queue.
    total_sleep_time = 0
    for _ in range(20):
        sleep_for = random.uniform(0.05, 1.0)
        total_sleep_time += sleep_for
        queue.put_nowait(sleep_for)
    # Create three worker tasks to process the queue concurrently.
    tasks = []
    for i in range(3):
        task = asyncio.create_task(worker(f'worker-{i}', queue))
        tasks.append(task)
    # Wait until the queue is fully processed.
    started_at = time.monotonic()
    await queue.join()
    total_slept_for = time.monotonic() - started_at
    # Cancel our worker tasks.
    for task in tasks:
        task.cancel()
    # Wait until all worker tasks are cancelled.
    await asyncio.gather(*tasks, return_exceptions=True)
    print('====')
    print(f'3 workers slept in parallel for {total_slept_for:.2f} seconds')
    print(f'total expected sleep time: {total_sleep_time:.2f} seconds')

asyncio.run(main())", "code_snippets": [], "language": "Python", "source": "python.org", "token_count": 1362}
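The non-coroutine methods described above (put_nowait(), get_nowait(), qsize(), empty(), full()) can also be sketched without running an event loop, which makes the maxsize and FIFO behaviour easy to see in isolation:

```python
import asyncio

# Minimal sketch of the non-blocking Queue methods; a Queue can be
# created and filled with put_nowait() before any coroutine runs.
q = asyncio.Queue(maxsize=2)

q.put_nowait('a')
q.put_nowait('b')
print(q.qsize())       # 2
print(q.full())        # True: maxsize reached

try:
    q.put_nowait('c')  # no free slot, raises immediately
except asyncio.QueueFull:
    print('queue full')

print(q.get_nowait())  # 'a' -- FIFO order
print(q.empty())       # False: 'b' is still queued
```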
{"url": "https://docs.python.org/3/whatsnew/2.5.html", "title": "What's New in Python 2.5", "content": "What's New in Python 2.5¶
- Author: A.M. Kuchling
This article explains the new features in Python 2.5. The final release of Python 2.5 was scheduled for August 2006; PEP 356 describes the planned release schedule. Python 2.5 was released on September 19, 2006.
The changes in Python 2.5 are an interesting mix of language and library improvements. The library enhancements will be more important to Python's user community, I think, because several widely useful packages were added. New modules include ElementTree for XML processing (xml.etree), the SQLite database module (sqlite3), and the ctypes module for calling C functions.
The language changes are of middling significance. Some pleasant new features were added, but most of them aren't features that you'll use every day. Conditional expressions were finally added to the language using a novel syntax; see section PEP 308: Conditional Expressions. The new 'with' statement will make writing cleanup code easier (section PEP 343: The 'with' statement). Values can now be passed into generators (section PEP 342: New Generator Features). Imports are now visible as either absolute or relative (section PEP 328: Absolute and Relative Imports). Some corner cases of exception handling are handled better (section PEP 341: Unified try/except/finally). All these improvements are worthwhile, but they're improvements to one specific language feature or another; none of them are broad modifications to Python's semantics.
As well as the language and library additions, other improvements and bugfixes were made throughout the source tree. A search through the SVN change logs finds there were 353 patches applied and 458 bugs fixed between Python 2.4 and 2.5.
(Both figures are likely to be underestimates.)
This article doesn't try to be a complete specification of the new features; instead changes are briefly introduced using helpful examples. For full details, you should always refer to the documentation for Python 2.5 at https://docs.python.org. If you want to understand the complete implementation and design rationale, refer to the PEP for a particular new feature.
Comments, suggestions, and error reports for this document are welcome; please e-mail them to the author or open a bug in the Python bug tracker.
PEP 308: Conditional Expressions¶
For a long time, people have been requesting a way to write conditional expressions, which are expressions that return value A or value B depending on whether a Boolean value is true or false. A conditional expression lets you write a single assignment statement that has the same effect as the following:

if condition:
    x = true_value
else:
    x = false_value

There have been endless tedious discussions of syntax on both python-dev and comp.lang.python. A vote was even held that found the majority of voters wanted conditional expressions in some form, but there was no syntax that was preferred by a clear majority. Candidates included C's cond ? true_v : false_v, if cond then true_v else false_v, and 16 other variations.
Guido van Rossum eventually chose a surprising syntax:

x = true_value if condition else false_value

Evaluation is still lazy as in existing Boolean expressions, so the order of evaluation jumps around a bit. The condition expression in the middle is evaluated first, and the true_value expression is evaluated only if the condition was true. Similarly, the false_value expression is only evaluated when the condition is false.
This syntax may seem strange and backwards; why does the condition go in the middle of the expression, and not in the front as in C's c ? x : y?
The decision was checked by applying the new syntax to the modules in the standard library and seeing how the resulting code read. In many cases where a conditional expression is used, one value seems to be the 'common case' and one value is an 'exceptional case', used only on rarer occasions when the condition isn't met. The conditional syntax makes this pattern a bit more obvious:

contents = ((doc + '\n') if doc else '')

I read the above statement as meaning "here contents is usually assigned a value of doc+'\n'; sometimes doc is empty, in which special case an empty string is returned." I doubt I will use conditional expressions very often where there isn't a clear common and uncommon case.
There was some discussion of whether the language should require surrounding conditional expressions with parentheses. The decision was made to not require parentheses in the Python language's grammar, but as a matter of style I think you should always use them. Consider these two statements:

# First version -- no parens
level = 1 if logging else 0

# Second version -- with parens
level = (1 if logging else 0)

In the first version, I think a reader's eye might group the statement into 'level = 1', 'if logging', 'else 0', and think that the condition decides whether the assignment to level is performed. The second version reads better, in my opinion, because it makes it clear that the assignment is always performed and the choice is being made between two values.
Another reason for including the brackets: a few odd combinations of list comprehensions and lambdas could look like incorrect conditional expressions. See PEP 308 for some examples. If you put parentheses around your conditional expressions, you won't run into this case.
See also
- PEP 308 - Conditional Expressions
PEP written by Guido van Rossum and Raymond D.
Hettinger; implemented by Thomas Wouters.
PEP 309: Partial Function Application¶
The functools module is intended to contain tools for functional-style programming.
One useful tool in this module is the partial() function. For programs written in a functional style, you'll sometimes want to construct variants of existing functions that have some of the parameters filled in. Consider a Python function f(a, b, c); you could create a new function g(b, c) that was equivalent to f(1, b, c). This is called "partial function application".
partial() takes the arguments (function, arg1, arg2, ... kwarg1=value1, kwarg2=value2). The resulting object is callable, so you can just call it to invoke function with the filled-in arguments.
Here's a small but realistic example:

import functools

def log(message, subsystem):
    "Write the contents of 'message' to the specified subsystem."
    print '%s: %s' % (subsystem, message)
...

server_log = functools.partial(log, subsystem='server')
server_log('Unable to open socket')

Here's another example, from a program that uses PyGTK. Here a context-sensitive pop-up menu is being constructed dynamically. The callback provided for the menu option is a partially applied version of the open_item() method, where the first argument has been provided.

...
class Application:
    def open_item(self, path):
        ...
    def init(self):
        open_func = functools.partial(self.open_item, item_path)
        popup_menu.append(("Open", open_func, 1))

Another function in the functools module is the update_wrapper(wrapper, wrapped) function that helps you write well-behaved decorators. update_wrapper() copies the name, module, and docstring attribute to a wrapper function so that tracebacks inside the wrapped function are easier to understand.
For example, you might write:

def my_decorator(f):
    def wrapper(*args, **kwds):
        print 'Calling decorated function'
        return f(*args, **kwds)
    functools.update_wrapper(wrapper, f)
    return wrapper

wraps() is a decorator that can be used inside your own decorators to copy the wrapped function's information. An alternate version of the previous example would be:

def my_decorator(f):
    @functools.wraps(f)
    def wrapper(*args, **kwds):
        print 'Calling decorated function'
        return f(*args, **kwds)
    return wrapper

See also
- PEP 309 - Partial Function Application
PEP proposed and written by Peter Harris; implemented by Hye-Shik Chang and Nick Coghlan, with adaptations by Raymond Hettinger.
PEP 314: Metadata for Python Software Packages v1.1¶
Some simple dependency support was added to Distutils. The setup() function now has requires, provides, and obsoletes keyword parameters. When you build a source distribution using the sdist command, the dependency information will be recorded in the PKG-INFO file.
Another new keyword parameter is download_url, which should be set to a URL for the package's source code. This means it's now possible to look up an entry in the package index, determine the dependencies for a package, and download the required packages.

VERSION = '1.0'
setup(name='PyPackage',
      version=VERSION,
      requires=['numarray', 'zlib (>=1.1.4)'],
      obsoletes=['OldPackage'],
      download_url=('http://www.example.com/pypackage/dist/pkg-%s.tar.gz'
                    % VERSION),
      )

Another new enhancement to the Python package index at https://pypi.org is storing source and binary archives for a package. The new upload Distutils command will upload a package to the repository.
Before a package can be uploaded, you must be able to build a distribution using the sdist Distutils command. Once that works, you can run python setup.py upload to add your package to the PyPI archive.
Optionally you can GPG-sign the package by supplying the --sign and --identity options.
Package uploading was implemented by Martin von Löwis and Richard Jones.
See also
- PEP 314 - Metadata for Python Software Packages v1.1
PEP proposed and written by A.M. Kuchling, Richard Jones, and Fred Drake; implemented by Richard Jones and Fred Drake.
PEP 328: Absolute and Relative Imports¶
The simpler part of PEP 328 was implemented in Python 2.4: parentheses could now be used to enclose the names imported from a module using the from ... import ... statement, making it easier to import many different names.
The more complicated part has been implemented in Python 2.5: importing a module can be specified to use absolute or package-relative imports. The plan is to move toward making absolute imports the default in future versions of Python.
Let's say you have a package directory like this:

pkg/
pkg/__init__.py
pkg/main.py
pkg/string.py

This defines a package named pkg containing the pkg.main and pkg.string submodules.
Consider the code in the main.py module. What happens if it executes the statement import string? In Python 2.4 and earlier, it will first look in the package's directory to perform a relative import, finds pkg/string.py, imports the contents of that file as the pkg.string module, and that module is bound to the name string in the pkg.main module's namespace.
That's fine if pkg.string was what you wanted. But what if you wanted Python's standard string module? There's no clean way to ignore pkg.string and look for the standard module; generally you had to look at the contents of sys.modules, which is slightly unclean.
Holger Krekel's py.std package provides a tidier way to perform imports from the standard library, import py; py.std.string.join(), but that package isn't available on all Python installations.
Reading code which relies on relative imports is also less clear, because a reader may be confused about which module, string or pkg.string, is intended to be used. Python users soon learned not to duplicate the names of standard library modules in the names of their packages' submodules, but you can't protect against having your submodule's name being used for a new module added in a future version of Python.
In Python 2.5, you can switch import's behaviour to absolute imports using a from __future__ import absolute_import directive. This absolute-import behaviour will become the default in a future version (probably Python 2.7). Once absolute imports are the default, import string will always find the standard library's version. It's suggested that users should begin using absolute imports as much as possible, so it's preferable to begin writing from pkg import string in your code.
Relative imports are still possible by adding a leading period to the module name when using the from ... import form:

# Import names from pkg.string
from .string import name1, name2
# Import pkg.string
from . import string

This imports the string module relative to the current package, so in pkg.main this will import name1 and name2 from pkg.string.
Additional leading periods perform the relative import starting from the parent of the current package. For example, code in the A.B.C module can do:

from . import D     # Imports A.B.D
from .. import E    # Imports A.E
from ..F import G   # Imports A.F.G

Leading periods cannot be used with the import modname form of the import statement, only the from ...
import form.

See also

- PEP 328 - Imports: Multi-Line and Absolute/Relative
  PEP written by Aahz; implemented by Thomas Wouters.
- https://pylib.readthedocs.io/
  The py library by Holger Krekel, which contains the py.std package.

PEP 338: Executing Modules as Scripts¶

The -m switch added in Python 2.4 to execute a module as a script gained a few more abilities. Instead of being implemented in C code inside the Python interpreter, the switch now uses an implementation in a new module, runpy.

The runpy module implements a more sophisticated import mechanism so that it's now possible to run modules in a package such as pychecker.checker. The module also supports alternative import mechanisms such as the zipimport module. This means you can add a .zip archive's path to sys.path and then use the -m switch to execute code from the archive.

See also

- PEP 338 - Executing modules as scripts
  PEP written and implemented by Nick Coghlan.

PEP 341: Unified try/except/finally¶

Until Python 2.5, the try statement came in two flavours. You could use a finally block to ensure that code is always executed, or one or more except blocks to catch specific exceptions. You couldn't combine both except blocks and a finally block, because generating the right bytecode for the combined version was complicated and it wasn't clear what the semantics of the combined statement should be.

Guido van Rossum spent some time working with Java, which does support the equivalent of combining except blocks and a finally block, and this clarified what the statement should mean. In Python 2.5, you can now write:

try:
    block-1 ...
except Exception1:
    handler-1 ...
except Exception2:
    handler-2 ...
else:
    else-block
finally:
    final-block

The code in block-1 is executed.
If the code raises an exception, the various except blocks are tested: if the exception is of class Exception1, handler-1 is executed; otherwise if it's of class Exception2, handler-2 is executed, and so forth. If no exception is raised, the else-block is executed.

No matter what happened previously, the final-block is executed once the code block is complete and any raised exceptions handled. Even if there's an error in an exception handler or the else-block and a new exception is raised, the code in the final-block is still run.

See also

- PEP 341 - Unifying try-except and try-finally
  PEP written by Georg Brandl; implementation by Thomas Lee.

PEP 342: New Generator Features¶

Python 2.5 adds a simple way to pass values into a generator. As introduced in Python 2.3, generators only produce output; once a generator's code was invoked to create an iterator, there was no way to pass any new information into the function when its execution is resumed. Sometimes the ability to pass in some information would be useful. Hackish solutions to this include making the generator's code look at a global variable and then changing the global variable's value, or passing in some mutable object that callers then modify.

To refresh your memory of basic generators, here's a simple example:

def counter (maximum):
    i = 0
    while i < maximum:
        yield i
        i += 1

When you call counter(10), the result is an iterator that returns the values from 0 up to 9. On encountering the yield statement, the iterator returns the provided value and suspends the function's execution, preserving the local variables. Execution resumes on the following call to the iterator's next() method, picking up after the yield statement.

In Python 2.3, yield was a statement; it didn't return any value.
In 2.5, yield is now an expression, returning a value that can be assigned to a variable or otherwise operated on:

val = (yield i)

I recommend that you always put parentheses around a yield expression when you're doing something with the returned value, as in the above example. The parentheses aren't always necessary, but it's easier to always add them instead of having to remember when they're needed.

(PEP 342 explains the exact rules, which are that a yield-expression must always be parenthesized except when it occurs at the top-level expression on the right-hand side of an assignment. This means you can write val = yield i but have to use parentheses when there's an operation, as in val = (yield i) + 12.)

Values are sent into a generator by calling its send(value) method. The generator's code is then resumed and the yield expression returns the specified value. If the regular next() method is called, the yield returns None.

Here's the previous example, modified to allow changing the value of the internal counter.

def counter (maximum):
    i = 0
    while i < maximum:
        val = (yield i)
        # If value provided, change counter
        if val is not None:
            i = val
        else:
            i += 1

And here's an example of changing the counter:

>>> it = counter(10)
>>> print it.next()
0
>>> print it.next()
1
>>> print it.send(8)
8
>>> print it.next()
9
>>> print it.next()
Traceback (most recent call last):
  File "t.py", line 15, in ?
    print it.next()
StopIteration

yield will usually return None, so you should always check for this case.
Don't just use its value in expressions unless you're sure that the send() method will be the only method used to resume your generator function.

In addition to send(), there are two other new methods on generators:

- throw(type, value=None, traceback=None) is used to raise an exception inside the generator; the exception is raised by the yield expression where the generator's execution is paused.
- close() raises a new GeneratorExit exception inside the generator to terminate the iteration. On receiving this exception, the generator's code must either raise GeneratorExit or StopIteration. Catching the GeneratorExit exception and returning a value is illegal and will trigger a RuntimeError; if the function raises some other exception, that exception is propagated to the caller. close() will also be called by Python's garbage collector when the generator is garbage-collected.

  If you need to run cleanup code when a GeneratorExit occurs, I suggest using a try: ... finally: suite instead of catching GeneratorExit.

The cumulative effect of these changes is to turn generators from one-way producers of information into both producers and consumers.

Generators also become coroutines, a more generalized form of subroutines. Subroutines are entered at one point and exited at another point (the top of the function, and a return statement), but coroutines can be entered, exited, and resumed at many different points (the yield statements). We'll have to figure out patterns for using coroutines effectively in Python.

The addition of the close() method has one side effect that isn't obvious. close() is called when a generator is garbage-collected, so this means the generator's code gets one last chance to run before the generator is destroyed. This last chance means that try...finally statements in generators can now be guaranteed to work; the finally clause will now always get a chance to run.
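This guarantee is easy to observe by closing a generator whose body uses try...finally. Here is a minimal sketch (written in modern Python syntax, where next() is a built-in function rather than a method; the ticker generator and cleanup_log list are invented for this illustration):

```python
cleanup_log = []

def ticker():
    try:
        n = 0
        while True:
            yield n
            n += 1
    finally:
        # close() raises GeneratorExit at the paused yield,
        # so this clause is guaranteed to run.
        cleanup_log.append('cleaned up')

gen = ticker()
print(next(gen))    # 0
print(next(gen))    # 1
gen.close()         # triggers the finally clause
print(cleanup_log)  # ['cleaned up']
```

Because the generator doesn't catch GeneratorExit, close() simply unwinds it through the finally clause, which is the cleanup pattern suggested above.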
The syntactic restriction that you couldn't mix yield statements with a try...finally suite has therefore been removed. This seems like a minor bit of language trivia, but using generators and try...finally is actually necessary in order to implement the with statement described by PEP 343. I'll look at this new statement in the following section.

Another even more esoteric effect of this change: previously, the gi_frame attribute of a generator was always a frame object. It's now possible for gi_frame to be None once the generator has been exhausted.

See also

- PEP 342 - Coroutines via Enhanced Generators
  PEP written by Guido van Rossum and Phillip J. Eby; implemented by Phillip J. Eby. Includes examples of some fancier uses of generators as coroutines. Earlier versions of these features were proposed in PEP 288 by Raymond Hettinger and PEP 325 by Samuele Pedroni.
- https://en.wikipedia.org/wiki/Coroutine
  The Wikipedia entry for coroutines.
- https://web.archive.org/web/20160321211320/http://www.sidhe.org/~dan/blog/archives/000178.html
  An explanation of coroutines from a Perl point of view, written by Dan Sugalski.

PEP 343: The 'with' statement¶

The 'with' statement clarifies code that previously would use try...finally blocks to ensure that clean-up code is executed. In this section, I'll discuss the statement as it will commonly be used. In the next section, I'll examine the implementation details and show how to write objects for use with this statement.

The 'with' statement is a new control-flow structure whose basic structure is:

with expression [as variable]:
    with-block

The expression is evaluated, and it should result in an object that supports the context management protocol (that is, one that has __enter__() and __exit__() methods).

The object's __enter__() is called before with-block is executed and therefore can run set-up code.
It also may return a value that is bound to the name variable, if given. (Note carefully that variable is not assigned the result of expression.)

After execution of the with-block is finished, the object's __exit__() method is called, even if the block raised an exception, and can therefore run clean-up code.

To enable the statement in Python 2.5, you need to add the following directive to your module:

from __future__ import with_statement

The statement will always be enabled in Python 2.6.

Some standard Python objects now support the context management protocol and can be used with the 'with' statement. File objects are one example:

with open('/etc/passwd', 'r') as f:
    for line in f:
        print line
        ... more processing code ...

After this statement has executed, the file object in f will have been automatically closed, even if the for loop raised an exception part-way through the block.

Note

In this case, f is the same object created by open(), because __enter__() returns self.

The threading module's locks and condition variables also support the 'with' statement:

lock = threading.Lock()
with lock:
    # Critical section of code
    ...

The lock is acquired before the block is executed and always released once the block is complete.

The new localcontext() function in the decimal module makes it easy to save and restore the current decimal context, which encapsulates the desired precision and rounding characteristics for computations:

from decimal import Decimal, Context, localcontext

# Displays with default precision of 28 digits
v = Decimal('578')
print v.sqrt()

with localcontext(Context(prec=16)):
    # All code in this block uses a precision of 16 digits.
    # The original context is restored on exiting the block.
    print v.sqrt()

Writing Context Managers¶

Under the hood, the 'with' statement is fairly complicated.
Most people will only use 'with' in company with existing objects and don't need to know these details, so you can skip the rest of this section if you like. Authors of new objects will need to understand the details of the underlying implementation and should keep reading.

A high-level explanation of the context management protocol is:

- The expression is evaluated and should result in an object called a "context manager". The context manager must have __enter__() and __exit__() methods.
- The context manager's __enter__() method is called. The value returned is assigned to VAR. If no 'as VAR' clause is present, the value is simply discarded.
- The code in BLOCK is executed.
- If BLOCK raises an exception, __exit__(type, value, traceback) is called with the exception details, the same values returned by sys.exc_info(). The method's return value controls whether the exception is re-raised: any false value re-raises the exception, and True will result in suppressing it. You'll only rarely want to suppress the exception, because if you do the author of the code containing the 'with' statement will never realize anything went wrong.
- If BLOCK didn't raise an exception, the __exit__() method is still called, but type, value, and traceback are all None.

Let's think through an example. I won't present detailed code but will only sketch the methods necessary for a database that supports transactions.

(For people unfamiliar with database terminology: a set of changes to the database are grouped into a transaction. Transactions can be either committed, meaning that all the changes are written into the database, or rolled back, meaning that the changes are all discarded and the database is unchanged. See any database textbook for more information.)

Let's assume there's an object representing a database connection.
Our goal will be to let the user write code like this:

db_connection = DatabaseConnection()
with db_connection as cursor:
    cursor.execute('insert into ...')
    cursor.execute('delete from ...')
    # ... more operations ...

The transaction should be committed if the code in the block runs flawlessly or rolled back if there's an exception. Here's the basic interface for DatabaseConnection that I'll assume:

class DatabaseConnection:
    # Database interface
    def cursor (self):
        "Returns a cursor object and starts a new transaction"
    def commit (self):
        "Commits current transaction"
    def rollback (self):
        "Rolls back current transaction"

The __enter__() method is pretty easy, having only to start a new transaction. For this application the resulting cursor object would be a useful result, so the method will return it. The user can then add as cursor to their 'with' statement to bind the cursor to a variable name.

class DatabaseConnection:
    ...
    def __enter__ (self):
        # Code to start a new transaction
        cursor = self.cursor()
        return cursor

The __exit__() method is the most complicated because it's where most of the work has to be done. The method has to check if an exception occurred. If there was no exception, the transaction is committed. The transaction is rolled back if there was an exception.

In the code below, execution will just fall off the end of the function, returning the default value of None. None is false, so the exception will be re-raised automatically.
If you wished, you could be more explicit and add a return statement at the marked location.

class DatabaseConnection:
    ...
    def __exit__ (self, type, value, tb):
        if tb is None:
            # No exception, so commit
            self.commit()
        else:
            # Exception occurred, so rollback.
            self.rollback()
            # return False

The contextlib module¶

The new contextlib module provides some functions and a decorator that are useful for writing objects for use with the 'with' statement.

The decorator is called contextmanager(), and lets you write a single generator function instead of defining a new class. The generator should yield exactly one value. The code up to the yield will be executed as the __enter__() method, and the value yielded will be the method's return value that will get bound to the variable in the 'with' statement's as clause, if any. The code after the yield will be executed in the __exit__() method. Any exception raised in the block will be raised by the yield statement.

Our database example from the previous section could be written using this decorator as:

from contextlib import contextmanager

@contextmanager
def db_transaction (connection):
    cursor = connection.cursor()
    try:
        yield cursor
    except:
        connection.rollback()
        raise
    else:
        connection.commit()

db = DatabaseConnection()
with db_transaction(db) as cursor:
    ...

The contextlib module also has a nested(mgr1, mgr2, ...) function that combines a number of context managers so you don't need to write nested 'with' statements.
In this example, the single 'with' statement both starts a database transaction and acquires a thread lock:

lock = threading.Lock()
with nested (db_transaction(db), lock) as (cursor, locked):
    ...

Finally, the closing(object) function returns object so that it can be bound to a variable, and calls object.close at the end of the block.

import urllib, sys
from contextlib import closing

with closing(urllib.urlopen('http://www.yahoo.com')) as f:
    for line in f:
        sys.stdout.write(line)

See also

- PEP 343 - The "with" statement
  PEP written by Guido van Rossum and Nick Coghlan; implemented by Mike Bland, Guido van Rossum, and Neal Norwitz. The PEP shows the code generated for a 'with' statement, which can be helpful in learning how the statement works.
  The documentation for the contextlib module.

PEP 352: Exceptions as New-Style Classes¶

Exception classes can now be new-style classes, not just classic classes, and the built-in Exception class and all the standard built-in exceptions (NameError, ValueError, etc.) are now new-style classes.

The inheritance hierarchy for exceptions has been rearranged a bit. In 2.5, the inheritance relationships are:

BaseException       # New in Python 2.5
|- KeyboardInterrupt
|- SystemExit
|- Exception
   |- (all other current built-in exceptions)

This rearrangement was done because people often want to catch all exceptions that indicate program errors. KeyboardInterrupt and SystemExit aren't errors, though, and usually represent an explicit action such as the user hitting Control-C or code calling sys.exit(). A bare except: will catch all exceptions, so you commonly need to list KeyboardInterrupt and SystemExit in order to re-raise them.
The usual pattern is:

try:
    ...
except (KeyboardInterrupt, SystemExit):
    raise
except:
    # Log error...
    # Continue running program...

In Python 2.5, you can now write except Exception to achieve the same result, catching all the exceptions that usually indicate errors but leaving KeyboardInterrupt and SystemExit alone. As in previous versions, a bare except: still catches all exceptions.

The goal for Python 3.0 is to require any class raised as an exception to derive from BaseException or some descendant of BaseException, and future releases in the Python 2.x series may begin to enforce this constraint. Therefore, I suggest you begin making all your exception classes derive from Exception now. It's been suggested that the bare except: form should be removed in Python 3.0, but Guido van Rossum hasn't decided whether to do this or not.

Raising of strings as exceptions, as in the statement raise "Error occurred", is deprecated in Python 2.5 and will trigger a warning. The aim is to be able to remove the string-exception feature in a few releases.

See also

- PEP 352 - Required Superclass for Exceptions
  PEP written by Brett Cannon and Guido van Rossum; implemented by Brett Cannon.

PEP 353: Using ssize_t as the index type¶

A wide-ranging change to Python's C API, using a new Py_ssize_t type definition instead of int, will permit the interpreter to handle more data on 64-bit platforms. This change doesn't affect Python's capacity on 32-bit platforms.

Various pieces of the Python interpreter used C's int type to store sizes or counts; for example, the number of items in a list or tuple was stored in an int. The C compilers for most 64-bit platforms still define int as a 32-bit type, so that meant that lists could only hold up to 2**31 - 1 = 2147483647 items.
(There are actually a few different programming models that 64-bit C compilers can use – see https://unix.org/version2/whatsnew/lp64_wp.html for a discussion – but the most commonly available model leaves int as 32 bits.)

A limit of 2147483647 items doesn't really matter on a 32-bit platform because you'll run out of memory before hitting the length limit. Each list item requires space for a pointer, which is 4 bytes, plus space for a PyObject representing the item. 2147483647*4 is already more bytes than a 32-bit address space can contain.

It's possible to address that much memory on a 64-bit platform, however. The pointers for a list that size would only require 16 GiB of space, so it's not unreasonable that Python programmers might construct lists that large. Therefore, the Python interpreter had to be changed to use some type other than int, and this will be a 64-bit type on 64-bit platforms. The change will cause incompatibilities on 64-bit machines, so it was deemed worth making the transition now, while the number of 64-bit users is still relatively small. (In 5 or 10 years, we may all be on 64-bit machines, and the transition would be more painful then.)

This change most strongly affects authors of C extension modules. Python strings and container types such as lists and tuples now use Py_ssize_t to store their size. Functions such as PyList_Size() now return Py_ssize_t. Code in extension modules may therefore need to have some variables changed to Py_ssize_t.

The PyArg_ParseTuple() and Py_BuildValue() functions have a new conversion code, n, for Py_ssize_t.
PyArg_ParseTuple()'s s# and t# still output int by default, but you can define the macro PY_SSIZE_T_CLEAN before including Python.h to make them return Py_ssize_t.

PEP 353 has a section on conversion guidelines that extension authors should read to learn about supporting 64-bit platforms.

See also

- PEP 353 - Using ssize_t as the index type
  PEP written and implemented by Martin von Löwis.

PEP 357: The '__index__' method¶

The NumPy developers had a problem that could only be solved by adding a new special method, __index__(). When using slice notation, as in [start:stop:step], the values of the start, stop, and step indexes must all be either integers or long integers. NumPy defines a variety of specialized integer types corresponding to unsigned and signed integers of 8, 16, 32, and 64 bits, but there was no way to signal that these types could be used as slice indexes.

Slicing can't just use the existing __int__() method because that method is also used to implement coercion to integers. If slicing used __int__(), floating-point numbers would also become legal slice indexes and that's clearly an undesirable behaviour.

Instead, a new special method called __index__() was added. It takes no arguments and returns an integer giving the slice index to use. For example:

class C:
    def __index__ (self):
        return self.value

The return value must be either a Python integer or long integer.
The interpreter will check that the type returned is correct, and raises a TypeError if this requirement isn't met.

A corresponding nb_index slot was added to the C-level PyNumberMethods structure to let C extensions implement this protocol. PyNumber_Index(obj) can be used in extension code to call the __index__() function and retrieve its result.

See also

- PEP 357 - Allowing Any Object to be Used for Slicing
  PEP written and implemented by Travis Oliphant.

Other Language Changes¶

Here are all of the changes that Python 2.5 makes to the core Python language.

- The dict type has a new hook for letting subclasses provide a default value when a key isn't contained in the dictionary. When a key isn't found, the dictionary's __missing__(key) method will be called. This hook is used to implement the new defaultdict class in the collections module. The following example defines a dictionary that returns zero for any missing key:

  class zerodict (dict):
      def __missing__ (self, key):
          return 0

  d = zerodict({1:1, 2:2})
  print d[1], d[2]   # Prints 1, 2
  print d[3], d[4]   # Prints 0, 0

- Both 8-bit and Unicode strings have new partition(sep) and rpartition(sep) methods that simplify a common use case.

  The find(S) method is often used to get an index which is then used to slice the string and obtain the pieces that are before and after the separator. partition(sep) condenses this pattern into a single method call that returns a 3-tuple containing the substring before the separator, the separator itself, and the substring after the separator.
If the separator isn't found, the first element of the tuple is the entire string and the other two elements are empty. rpartition(sep) also returns a 3-tuple but starts searching from the end of the string; the r stands for 'reverse'.

  Some examples:

  >>> ('http://www.python.org').partition('://')
  ('http', '://', 'www.python.org')
  >>> ('file:/usr/share/doc/index.html').partition('://')
  ('file:/usr/share/doc/index.html', '', '')
  >>> (u'Subject: a quick question').partition(':')
  (u'Subject', u':', u' a quick question')
  >>> 'www.python.org'.rpartition('.')
  ('www.python', '.', 'org')
  >>> 'www.python.org'.rpartition(':')
  ('', '', 'www.python.org')

  (Implemented by Fredrik Lundh following a suggestion by Raymond Hettinger.)

- The startswith() and endswith() methods of string types now accept tuples of strings to check for.

  def is_image_file (filename):
      return filename.endswith(('.gif', '.jpg', '.tiff'))

  (Implemented by Georg Brandl following a suggestion by Tom Lynn.)

- The min() and max() built-in functions gained a key keyword parameter analogous to the key argument for sort(). This parameter supplies a function that takes a single argument and is called for every value in the list; min()/max() will return the element with the smallest/largest return value from this function. For example, to find the longest string in a list, you can do:

  L = ['medium', 'longest', 'short']
  # Prints 'longest'
  print max(L, key=len)
  # Prints 'short', because lexicographically 'short' has the largest value
  print max(L)

  (Contributed by Steven Bethard and Raymond Hettinger.)

- Two new built-in functions, any() and all(), evaluate whether an iterator contains any true or false values. any() returns True if any value returned by the iterator is true; otherwise it will return False. all() returns True only if all of the values returned by the iterator evaluate as true.
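A few illustrative calls (the input values here are made up for this sketch; note the conventional results for empty iterables, where any() finds no true value and all() finds no false one):

```python
values = [0, '', 5]
print(any(values))   # True: 5 is a true value
print(all(values))   # False: 0 and '' are false values

# Empty iterables: any() returns False, all() returns True (vacuously).
print(any([]))       # False
print(all([]))       # True
```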
(Suggested by Guido van Rossum, and implemented by Raymond Hettinger.)

- The result of a class's __hash__() method can now be either a long integer or a regular integer. If a long integer is returned, the hash of that value is taken. In earlier versions the hash value was required to be a regular integer, but in 2.5 the id() built-in was changed to always return non-negative numbers, and users often seem to use id(self) in __hash__() methods (though this is discouraged).

- ASCII is now the default encoding for modules. It's now a syntax error if a module contains string literals with 8-bit characters but doesn't have an encoding declaration. In Python 2.4 this triggered a warning, not a syntax error. See PEP 263 for how to declare a module's encoding; for example, you might add a line like this near the top of the source file:

  # -*- coding: latin1 -*-

- A new warning, UnicodeWarning, is triggered when you attempt to compare a Unicode string and an 8-bit string that can't be converted to Unicode using the default ASCII encoding. The result of the comparison is false:

  >>> chr(128) == unichr(128)  # Can't convert chr(128) to Unicode
  __main__:1: UnicodeWarning: Unicode equal comparison failed
    to convert both arguments to Unicode - interpreting them
    as being unequal
  False
  >>> chr(127) == unichr(127)  # chr(127) can be converted
  True

  Previously this would raise a UnicodeDecodeError exception, but in 2.5 this could result in puzzling problems when accessing a dictionary. If you looked up unichr(128) and chr(128) was being used as a key, you'd get a UnicodeDecodeError exception.
Other changes in 2.5 resulted in this exception being raised instead of suppressed by the code in dictobject.c that implements dictionaries.

  Raising an exception for such a comparison is strictly correct, but the change might have broken code, so instead UnicodeWarning was introduced.

  (Implemented by Marc-André Lemburg.)

- One error that Python programmers sometimes make is forgetting to include an __init__.py module in a package directory. Debugging this mistake can be confusing, and usually requires running Python with the -v switch to log all the paths searched. In Python 2.5, a new ImportWarning warning is triggered when an import would have picked up a directory as a package but no __init__.py was found. This warning is silently ignored by default; provide the -Wd option when running the Python executable to display the warning message. (Implemented by Thomas Wouters.)

- The list of base classes in a class definition can now be empty. As an example, this is now legal:

  class C():
      pass

  (Implemented by Brett Cannon.)

Interactive Interpreter Changes¶

In the interactive interpreter, quit and exit have long been strings so that new users get a somewhat helpful message when they try to quit:

>>> quit
'Use Ctrl-D (i.e. EOF) to exit.'

In Python 2.5, quit and exit are now objects that still produce string representations of themselves, but are also callable. Newbies who try quit() or exit() will now exit the interpreter as they expect. (Implemented by Georg Brandl.)

The Python executable now accepts the standard long options --help and --version; on Windows, it also accepts the /? option for displaying a help message. (Implemented by Georg Brandl.)

Optimizations¶

Several of the optimizations were developed at the NeedForSpeed sprint, an event held in Reykjavik, Iceland, from May 21–28 2006. The sprint focused on speed enhancements to the CPython implementation and was funded by EWT LLC with local support from CCP Games.
Those optimizations added at this sprint are specially marked in the following list.

- When they were introduced in Python 2.4, the built-in set and frozenset types were built on top of Python's dictionary type. In 2.5 the internal data structure has been customized for implementing sets, and as a result sets will use a third less memory and are somewhat faster. (Implemented by Raymond Hettinger.)

- The speed of some Unicode operations, such as finding substrings, string splitting, and character map encoding and decoding, has been improved. (Substring search and splitting improvements were added by Fredrik Lundh and Andrew Dalke at the NeedForSpeed sprint. Character maps were improved by Walter Dörwald and Martin von Löwis.)

- The long(str, base) function is now faster on long digit strings because fewer intermediate results are calculated. The peak is for strings of around 800–1000 digits, where the function is 6 times faster. (Contributed by Alan McIntyre and committed at the NeedForSpeed sprint.)

- It's now illegal to mix iterating over a file with for line in file and calling the file object's read()/readline()/readlines() methods. Iteration uses an internal buffer and the read*() methods don't use that buffer. Instead they would return the data following the buffer, causing the data to appear out of order. Mixing iteration and these methods will now trigger a ValueError from the read*() method. (Implemented by Thomas Wouters.)

- The struct module now compiles structure format strings into an internal representation and caches this representation, yielding a 20% speedup. (Contributed by Bob Ippolito at the NeedForSpeed sprint.)

- The re module got a 1 or 2% speedup by switching to Python's allocator functions instead of the system's malloc() and free(). (Contributed by Jack Diederich at the NeedForSpeed sprint.)

- The code generator's peephole optimizer now performs simple constant folding in expressions.
If you write something like a = 2+3, the code generator will do the arithmetic and produce code corresponding to a = 5. (Proposed and implemented by Raymond Hettinger.)

Function calls are now faster because code objects now keep the most recently finished frame (a "zombie frame") in an internal field of the code object, reusing it the next time the code object is invoked. (Original patch by Michael Hudson, modified by Armin Rigo and Richard Jones; committed at the NeedForSpeed sprint.) Frame objects are also slightly smaller, which may improve cache locality and reduce memory usage a bit. (Contributed by Neal Norwitz.)

Python's built-in exceptions are now new-style classes, a change that speeds up instantiation considerably. Exception handling in Python 2.5 is therefore about 30% faster than in 2.4. (Contributed by Richard Jones, Georg Brandl and Sean Reifschneider at the NeedForSpeed sprint.)

Importing now caches the paths tried, recording whether they exist or not so that the interpreter makes fewer open() and stat() calls on startup. (Contributed by Martin von Löwis and Georg Brandl.)

New, Improved, and Removed Modules¶

The standard library received many enhancements and bug fixes in Python 2.5. Here's a partial list of the most notable changes, sorted alphabetically by module name. Consult the Misc/NEWS file in the source tree for a more complete list of changes, or look through the SVN logs for all the details.

The audioop module now supports the a-LAW encoding, and the code for u-LAW encoding has been improved. (Contributed by Lars Immisch.)

The codecs module gained support for incremental codecs. The codec.lookup() function now returns a CodecInfo instance instead of a tuple. CodecInfo instances behave like a 4-tuple to preserve backward compatibility but also have the attributes encode, decode, incrementalencoder, incrementaldecoder, streamwriter, and streamreader.
Incremental codecs can receive input and produce output in multiple chunks; the output is the same as if the entire input was fed to the non-incremental codec. See the codecs module documentation for details. (Designed and implemented by Walter Dörwald.)

The collections module gained a new type, defaultdict, that subclasses the standard dict type. The new type mostly behaves like a dictionary but constructs a default value when a key isn't present, automatically adding it to the dictionary for the requested key value.

The first argument to defaultdict's constructor is a factory function that gets called whenever a key is requested but not found. This factory function receives no arguments, so you can use built-in type constructors such as list() or int(). For example, you can make an index of words based on their initial letter like this:

words = """Nel mezzo del cammin di nostra vita
mi ritrovai per una selva oscura
che la diritta via era smarrita""".lower().split()

index = defaultdict(list)

for w in words:
    init_letter = w[0]
    index[init_letter].append(w)

Printing index results in the following output:

defaultdict(<type 'list'>, {'c': ['cammin', 'che'], 'e': ['era'],
'd': ['del', 'di', 'diritta'], 'm': ['mezzo', 'mi'], 'l': ['la'],
'o': ['oscura'], 'n': ['nel', 'nostra'], 'p': ['per'],
's': ['selva', 'smarrita'], 'r': ['ritrovai'], 'u': ['una'],
'v': ['vita', 'via']})

(Contributed by Guido van Rossum.)

The deque double-ended queue type supplied by the collections module now has a remove(value) method that removes the first occurrence of value in the queue, raising ValueError if the value isn't found. (Contributed by Raymond Hettinger.)

New module: The contextlib module contains helper functions for use with the new 'with' statement. See section The contextlib module for more about this module.

New module: The cProfile module is a C implementation of the existing profile module that has much lower overhead.
The module's interface is the same as profile: you run cProfile.run('main()') to profile a function, can save profile data to a file, etc. It's not yet known if the Hotshot profiler, which is also written in C but doesn't match the profile module's interface, will continue to be maintained in future versions of Python. (Contributed by Armin Rigo.)

Also, the pstats module for analyzing the data measured by the profiler now supports directing the output to any file object by supplying a stream argument to the Stats constructor. (Contributed by Skip Montanaro.)

The csv module, which parses files in comma-separated value format, received several enhancements and a number of bugfixes. You can now set the maximum size in bytes of a field by calling the csv.field_size_limit(new_limit) function; omitting the new_limit argument will return the currently set limit. The reader class now has a line_num attribute that counts the number of physical lines read from the source; records can span multiple physical lines, so line_num is not the same as the number of records read.

The CSV parser is now stricter about multi-line quoted fields. Previously, if a line ended within a quoted field without a terminating newline character, a newline would be inserted into the returned field. This behavior caused problems when reading files that contained carriage return characters within fields, so the code was changed to return the field without inserting newlines. As a consequence, if newlines embedded within fields are important, the input should be split into lines in a manner that preserves the newline characters.

(Contributed by Skip Montanaro and Andrew McNamara.)

The datetime class in the datetime module now has a strptime(string, format) method for parsing date strings, contributed by Josh Spoerri.
It uses the same format characters as time.strptime() and time.strftime():

from datetime import datetime
ts = datetime.strptime('10:13:15 2006-03-07',
                       '%H:%M:%S %Y-%m-%d')

The SequenceMatcher.get_matching_blocks() method in the difflib module now guarantees to return a minimal list of blocks describing matching subsequences. Previously, the algorithm would occasionally break a block of matching elements into two list entries. (Enhancement by Tim Peters.)

The doctest module gained a SKIP option that keeps an example from being executed at all. This is intended for code snippets that are usage examples intended for the reader and aren't actually test cases.

An encoding parameter was added to the testfile() function and the DocFileSuite class to specify the file's encoding. This makes it easier to use non-ASCII characters in tests contained within a docstring. (Contributed by Bjorn Tillenius.)

The email package has been updated to version 4.0. (Contributed by Barry Warsaw.)

The fileinput module was made more flexible. Unicode filenames are now supported, and a mode parameter that defaults to "r" was added to the input() function to allow opening files in binary or universal newlines mode. Another new parameter, openhook, lets you use a function other than open() to open the input files. Once you're iterating over the set of files, the FileInput object's new fileno() returns the file descriptor for the currently opened file. (Contributed by Georg Brandl.)

In the gc module, the new get_count() function returns a 3-tuple containing the current collection counts for the three GC generations. This is accounting information for the garbage collector; when these counts reach a specified threshold, a garbage collection sweep will be made. The existing gc.collect() function now takes an optional generation argument of 0, 1, or 2 to specify which generation to collect.
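Both additions are easy to try at the prompt; this small sketch, which assumes the API is unchanged in current CPython, reads the counters and then collects only the youngest generation:

```python
import gc

# get_count() returns one counter per generation: (gen0, gen1, gen2).
counts = gc.get_count()
print(len(counts))  # 3

# collect() now accepts a generation number; 0 sweeps only the youngest.
freed = gc.collect(0)
print(freed >= 0)
```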
(Contributed by Barry Warsaw.)

The nsmallest() and nlargest() functions in the heapq module now support a key keyword parameter similar to the one provided by the min()/max() functions and the sort() methods. For example:

>>> import heapq
>>> L = ["short", 'medium', 'longest', 'longer still']
>>> heapq.nsmallest(2, L)  # Return two lowest elements, lexicographically
['longer still', 'longest']
>>> heapq.nsmallest(2, L, key=len)  # Return two shortest elements
['short', 'medium']

(Contributed by Raymond Hettinger.)

The itertools.islice() function now accepts None for the start and step arguments. This makes it more compatible with the attributes of slice objects, so that you can now write the following:

s = slice(5)  # Create slice object
itertools.islice(iterable, s.start, s.stop, s.step)

(Contributed by Raymond Hettinger.)

The format() function in the locale module has been modified and two new functions were added, format_string() and currency().

The format() function's val parameter could previously be a string as long as no more than one %char specifier appeared; now the parameter must be exactly one %char specifier with no surrounding text. An optional monetary parameter was also added which, if True, will use the locale's rules for formatting currency in placing a separator between groups of three digits.

To format strings with multiple %char specifiers, use the new format_string() function that works like format() but also supports mixing %char specifiers with arbitrary text.

A new currency() function was also added that formats a number according to the current locale's settings.

(Contributed by Georg Brandl.)

The mailbox module underwent a massive rewrite to add the capability to modify mailboxes in addition to reading them.
A new set of classes that include mbox, MH, and Maildir are used to read mailboxes, and have an add(message) method to add messages, remove(key) to remove messages, and lock()/unlock() to lock/unlock the mailbox. The following example converts a maildir-format mailbox into an mbox-format one:

import mailbox

# 'factory=None' uses email.Message.Message as the class representing
# individual messages.
src = mailbox.Maildir('maildir', factory=None)
dest = mailbox.mbox('/tmp/mbox')
for msg in src:
    dest.add(msg)

(Contributed by Gregory K. Johnson. Funding was provided by Google's 2005 Summer of Code.)

New module: the msilib module allows creating Microsoft Installer .msi files and CAB files. Some support for reading the .msi database is also included. (Contributed by Martin von Löwis.)

The nis module now supports accessing domains other than the system default domain by supplying a domain argument to the nis.match() and nis.maps() functions. (Contributed by Ben Bell.)

The operator module's itemgetter() and attrgetter() functions now support multiple fields. A call such as operator.attrgetter('a', 'b') will return a function that retrieves the a and b attributes. Combining this new feature with the sort() method's key parameter lets you easily sort lists using multiple fields. (Contributed by Raymond Hettinger.)

The optparse module was updated to version 1.5.1 of the Optik library. The OptionParser class gained an epilog attribute, a string that will be printed after the help message, and a destroy() method to break reference cycles created by the object. (Contributed by Greg Ward.)

The os module underwent several changes. The stat_float_times variable now defaults to true, meaning that os.stat() will now return time values as floats.
(This doesn't necessarily mean that os.stat() will return times that are precise to fractions of a second; not all systems support such precision.)

Constants named os.SEEK_SET, os.SEEK_CUR, and os.SEEK_END have been added; these are the parameters to the os.lseek() function. Two new constants for locking are os.O_SHLOCK and os.O_EXLOCK.

Two new functions, wait3() and wait4(), were added. They're similar to the waitpid() function which waits for a child process to exit and returns a tuple of the process ID and its exit status, but wait3() and wait4() return additional information. wait3() doesn't take a process ID as input, so it waits for any child process to exit and returns a 3-tuple of process-id, exit-status, resource-usage as returned from the resource.getrusage() function. wait4(pid) does take a process ID. (Contributed by Chad J. Schroeder.)

On FreeBSD, the os.stat() function now returns times with nanosecond resolution, and the returned object now has st_gen and st_birthtime attributes. The st_flags attribute is also available, if the platform supports it. (Contributed by Antti Louko and Diego Pettenò.)

The Python debugger provided by the pdb module can now store lists of commands to execute when a breakpoint is reached and execution stops. Once breakpoint #1 has been created, enter commands 1 and enter a series of commands to be executed, finishing the list with end. The command list can include commands that resume execution, such as continue or next. (Contributed by Grégoire Dooms.)

The pickle and cPickle modules no longer accept a return value of None from the __reduce__() method; the method must return a tuple of arguments instead. The ability to return None was deprecated in Python 2.4, so this completes the removal of the feature.

The pkgutil module, containing various utility functions for finding packages, was enhanced to support PEP 302's import hooks and now also works for packages stored in ZIP-format archives.
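As an illustration of this PEP 302-aware package discovery, a directory on a search path can be scanned like this; the sketch uses pkgutil.iter_modules(), a helper from slightly later Python versions, and a hypothetical module name:

```python
import os
import pkgutil
import tempfile

# Hypothetical layout: a scratch directory holding one plain module file.
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "demo_mod.py"), "w") as f:
        f.write("x = 1\n")

    # iter_modules() runs each path entry through the PEP 302 finder
    # machinery, so plain directories and ZIP archives are handled alike.
    found = [name for _, name, ispkg in pkgutil.iter_modules([tmp])]
    print(found)  # ['demo_mod']
```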
(Contributed by Phillip J. Eby.)

The pybench benchmark suite by Marc-André Lemburg is now included in the Tools/pybench directory. The pybench suite is an improvement on the commonly used pystone.py program because pybench provides a more detailed measurement of the interpreter's speed. It times particular operations such as function calls, tuple slicing, method lookups, and numeric operations, instead of performing many different operations and reducing the result to a single number as pystone.py does.

The pyexpat module now uses version 2.0 of the Expat parser. (Contributed by Trent Mick.)

The Queue class provided by the Queue module gained two new methods. join() blocks until all items in the queue have been retrieved and all processing work on the items has been completed. Worker threads call the other new method, task_done(), to signal that processing for an item has been completed. (Contributed by Raymond Hettinger.)

The old regex and regsub modules, which have been deprecated ever since Python 2.0, have finally been deleted. Other deleted modules: statcache, tzparse, whrandom.

Also deleted: the lib-old directory, which included ancient modules such as dircmp and ni. lib-old wasn't on the default sys.path, so unless your programs explicitly added the directory to sys.path, this removal shouldn't affect your code.

The rlcompleter module is no longer dependent on importing the readline module and therefore now works on non-Unix platforms. (Patch from Robert Kiendl.)

The SimpleXMLRPCServer and DocXMLRPCServer classes now have a rpc_paths attribute that constrains XML-RPC operations to a limited set of URL paths; the default is to allow only '/' and '/RPC2'. Setting rpc_paths to None or an empty tuple disables this path checking.

The socket module now supports AF_NETLINK sockets on Linux, thanks to a patch from Philippe Biondi.
Netlink sockets are a Linux-specific mechanism for communications between a user-space process and kernel code; an introductory article about them is at https://www.linuxjournal.com/article/7356. In Python code, netlink addresses are represented as a tuple of 2 integers, (pid, group_mask).

Two new methods on socket objects, recv_into(buffer) and recvfrom_into(buffer), store the received data in an object that supports the buffer protocol instead of returning the data as a string. This means you can put the data directly into an array or a memory-mapped file.

Socket objects also gained getfamily(), gettype(), and getproto() accessor methods to retrieve the family, type, and protocol values for the socket.

New module: the spwd module provides functions for accessing the shadow password database on systems that support shadow passwords.

The struct module is now faster because it compiles format strings into Struct objects with pack() and unpack() methods. This is similar to how the re module lets you create compiled regular expression objects. You can still use the module-level pack() and unpack() functions; they'll create Struct objects and cache them. Or you can use Struct instances directly:

s = struct.Struct('ih3s')

data = s.pack(1972, 187, 'abc')
year, number, name = s.unpack(data)

You can also pack and unpack data to and from buffer objects directly using the pack_into(buffer, offset, v1, v2, ...) and unpack_from(buffer, offset) methods. This lets you store data directly into an array or a memory-mapped file.

(Struct objects were implemented by Bob Ippolito at the NeedForSpeed sprint. Support for buffer objects was added by Martin Blais, also at the NeedForSpeed sprint.)

The Python developers switched from CVS to Subversion during the 2.5 development process. Information about the exact build version is available as the sys.subversion variable, a 3-tuple of (interpreter-name, branch-name, revision-range).
For example, at the time of writing my copy of 2.5 was reporting ('CPython', 'trunk', '45313:45315').

This information is also available to C extensions via the Py_GetBuildInfo() function that returns a string of build information like this: "trunk:45355:45356M, Apr 13 2006, 07:42:19". (Contributed by Barry Warsaw.)

Another new function, sys._current_frames(), returns the current stack frames for all running threads as a dictionary mapping thread identifiers to the topmost stack frame currently active in that thread at the time the function is called. (Contributed by Tim Peters.)

The TarFile class in the tarfile module now has an extractall() method that extracts all members from the archive into the current working directory. It's also possible to set a different directory as the extraction target, and to unpack only a subset of the archive's members.

The compression used for a tarfile opened in stream mode can now be autodetected using the mode 'r|*'. (Contributed by Lars Gustäbel.)

The threading module now lets you set the stack size used when new threads are created. The stack_size([size]) function returns the currently configured stack size, and supplying the optional size parameter sets a new value. Not all platforms support changing the stack size, but Windows, POSIX threading, and OS/2 all do. (Contributed by Andrew MacIntyre.)

The unicodedata module has been updated to use version 4.1.0 of the Unicode character database. Version 3.2.0 is required by some specifications, so it's still available as unicodedata.ucd_3_2_0.

New module: the uuid module generates universally unique identifiers (UUIDs) according to RFC 4122. The RFC defines several different UUID versions that are generated from a starting string, from system properties, or purely randomly. This module contains a UUID class and functions named uuid1(), uuid3(), uuid4(), and uuid5() to generate different versions of UUID.
(Version 2 UUIDs are not specified in RFC 4122 and are not supported by this module.)

>>> import uuid
>>> # make a UUID based on the host ID and current time
>>> uuid.uuid1()
UUID('a8098c1a-f86e-11da-bd1a-00112444be1e')
>>> # make a UUID using an MD5 hash of a namespace UUID and a name
>>> uuid.uuid3(uuid.NAMESPACE_DNS, 'python.org')
UUID('6fa459ea-ee8a-3ca4-894e-db77e160355e')
>>> # make a random UUID
>>> uuid.uuid4()
UUID('16fd2706-8baf-433b-82eb-8c7fada847da')
>>> # make a UUID using a SHA-1 hash of a namespace UUID and a name
>>> uuid.uuid5(uuid.NAMESPACE_DNS, 'python.org')
UUID('886313e1-3b8a-5372-9b90-0c9aee199e5d')

(Contributed by Ka-Ping Yee.)

The weakref module's WeakKeyDictionary and WeakValueDictionary types gained new methods for iterating over the weak references contained in the dictionary. iterkeyrefs() and keyrefs() methods were added to WeakKeyDictionary, and itervaluerefs() and valuerefs() were added to WeakValueDictionary. (Contributed by Fred L. Drake, Jr.)

The webbrowser module received a number of enhancements. It's now usable as a script with python -m webbrowser, taking a URL as the argument; there are a number of switches to control the behaviour (-n for a new browser window, -t for a new tab). New module-level functions, open_new() and open_new_tab(), were added to support this. The module's open() function supports an additional feature, an autoraise parameter that signals whether to raise the open window when possible. A number of additional browsers were added to the supported list such as Firefox, Opera, Konqueror, and elinks. (Contributed by Oleg Broytmann and Georg Brandl.)

The xmlrpclib module now supports returning datetime objects for the XML-RPC date type. Supply use_datetime=True to the loads() function or the Unmarshaller class to enable this feature.
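A round trip makes the option concrete; this sketch assumes Python 3's xmlrpc.client naming, where xmlrpclib now lives:

```python
import datetime
import xmlrpc.client  # the Python 3 home of the old xmlrpclib module

# Marshal a datetime into XML-RPC's <dateTime.iso8601> type...
payload = xmlrpc.client.dumps((datetime.datetime(2006, 3, 7, 10, 13, 15),))

# ...and unmarshal it. Without use_datetime a DateTime wrapper object is
# returned; with use_datetime=True a real datetime.datetime comes back.
(value,), _ = xmlrpc.client.loads(payload, use_datetime=True)
print(type(value).__name__)  # datetime
```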
(Contributed by Skip Montanaro.)

The zipfile module now supports the ZIP64 version of the format, meaning that a .zip archive can now be larger than 4 GiB and can contain individual files larger than 4 GiB. (Contributed by Ronald Oussoren.)

The zlib module's Compress and Decompress objects now support a copy() method that makes a copy of the object's internal state and returns a new Compress or Decompress object. (Contributed by Chris AtLee.)

The ctypes package¶

The ctypes package, written by Thomas Heller, has been added to the standard library. ctypes lets you call arbitrary functions in shared libraries or DLLs. Long-time users may remember the dl module, which provides functions for loading shared libraries and calling functions in them. The ctypes package is much fancier.

To load a shared library or DLL, you must create an instance of the CDLL class and provide the name or path of the shared library or DLL. Once that's done, you can call arbitrary functions by accessing them as attributes of the CDLL object.

import ctypes

libc = ctypes.CDLL('libc.so.6')
result = libc.printf("Line of output\n")

Type constructors for the various C types are provided: c_int(), c_float(), c_double(), c_char_p() (equivalent to char *), and so forth. Unlike Python's types, the C versions are all mutable; you can assign to their value attribute to change the wrapped value. Python integers and strings will be automatically converted to the corresponding C types, but for other types you must call the correct type constructor. (And I mean must; getting it wrong will often result in the interpreter crashing with a segmentation fault.)

You shouldn't use c_char_p() with a Python string when the C function will be modifying the memory area, because Python strings are supposed to be immutable; breaking this rule will cause puzzling bugs.
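The mutability of these wrapped C values can be seen without loading any shared library at all; a minimal illustration:

```python
import ctypes

i = ctypes.c_int(10)
d = ctypes.c_double(2.5)

# Unlike Python ints and floats, these wrappers are mutable: assigning
# to .value changes the wrapped C datum in place.
i.value = 99
d.value = 3.25

print(i.value, d.value)  # 99 3.25
```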
When you need a modifiable memory area, use create_string_buffer():

s = "this is a string"
buf = ctypes.create_string_buffer(s)
libc.strfry(buf)

C functions are assumed to return integers, but you can set the restype attribute of the function object to change this:

>>> libc.atof('2.71828')
-1783957616
>>> libc.atof.restype = ctypes.c_double
>>> libc.atof('2.71828')
2.71828

ctypes also provides a wrapper for Python's C API as the ctypes.pythonapi object. This object does not release the global interpreter lock before calling a function, because the lock must be held when calling into the interpreter's code. There's a py_object type constructor that will create a PyObject * pointer. A simple usage:

import ctypes

d = {}
ctypes.pythonapi.PyObject_SetItem(ctypes.py_object(d),
                                  ctypes.py_object("abc"),
                                  ctypes.py_object(1))
# d is now {'abc': 1}.

Don't forget to use py_object(); if it's omitted you end up with a segmentation fault.

ctypes has been around for a while, but people still write and distribute hand-coded extension modules because you can't rely on ctypes being present. Perhaps developers will begin to write Python wrappers atop a library accessed through ctypes instead of extension modules, now that ctypes is included with core Python.

See also

https://web.archive.org/web/20180410025338/http://starship.python.net/crew/theller/ctypes/
The pre-stdlib ctypes web page, with a tutorial, reference, and FAQ.

The documentation for the ctypes module.

The ElementTree package¶

A subset of Fredrik Lundh's ElementTree library for processing XML has been added to the standard library as xml.etree. The available modules are ElementTree, ElementPath, and ElementInclude from ElementTree 1.2.6. The cElementTree accelerator module is also included.

The rest of this section will provide a brief overview of using ElementTree.
Full documentation for ElementTree is available at https://web.archive.org/web/20201124024954/http://effbot.org/zone/element-index.htm.

ElementTree represents an XML document as a tree of element nodes. The text content of the document is stored as the text and tail attributes of these element nodes. (This is one of the major differences between ElementTree and the Document Object Model; in the DOM there are many different types of node, including TextNode.)

The most commonly used parsing function is parse(), that takes either a string (assumed to contain a filename) or a file-like object and returns an ElementTree instance:

from xml.etree import ElementTree as ET

tree = ET.parse('ex-1.xml')

feed = urllib.urlopen(
         'http://planet.python.org/rss10.xml')
tree = ET.parse(feed)

Once you have an ElementTree instance, you can call its getroot() method to get the root Element node.

There's also an XML() function that takes a string literal and returns an Element node (not an ElementTree). This function provides a tidy way to incorporate XML fragments, approaching the convenience of an XML literal:

svg = ET.XML("""<svg width="10px" version="1.0">
             </svg>""")
svg.set('height', '320px')
svg.append(elem1)

Each XML element supports some dictionary-like and some list-like access methods. Dictionary-like operations are used to access attribute values, and list-like operations are used to access child nodes.

Operation              | Result
elem[n]                | Returns n'th child element.
elem[m:n]              | Returns list of m'th through n'th child elements.
len(elem)              | Returns number of child elements.
list(elem)             | Returns list of child elements.
elem.append(elem2)     | Adds elem2 as a child.
elem.insert(index, elem2) | Inserts elem2 at the specified location.
del elem[n]            | Deletes n'th child element.
elem.keys()            | Returns list of attribute names.
elem.get(name)         | Returns value of attribute name.
elem.set(name, value)  | Sets new value for attribute name.
elem.attrib            | Retrieves the dictionary containing attributes.
del elem.attrib[name]  | Deletes attribute name.
Comments and processing instructions are also represented as Element nodes. To check if a node is a comment or processing instructions:

if elem.tag is ET.Comment:
    ...
elif elem.tag is ET.ProcessingInstruction:
    ...

To generate XML output, you should call the ElementTree.write() method. Like parse(), it can take either a string or a file-like object:

# Encoding is US-ASCII
tree.write('output.xml')

# Encoding is UTF-8
f = open('output.xml', 'w')
tree.write(f, encoding='utf-8')

(Caution: the default encoding used for output is ASCII. For general XML work, where an element's name may contain arbitrary Unicode characters, ASCII isn't a very useful encoding because it will raise an exception if an element's name contains any characters with values greater than 127. Therefore, it's best to specify a different encoding such as UTF-8 that can handle any Unicode character.)

This section is only a partial description of the ElementTree interfaces. Please read the package's official documentation for more details.

See also

https://web.archive.org/web/20201124024954/http://effbot.org/zone/element-index.htm
Official documentation for ElementTree.

The hashlib package¶

A new hashlib module, written by Gregory P. Smith, has been added to replace the md5 and sha modules. hashlib adds support for additional secure hashes (SHA-224, SHA-256, SHA-384, and SHA-512). When available, the module uses OpenSSL for fast platform optimized implementations of algorithms.

The old md5 and sha modules still exist as wrappers around hashlib to preserve backwards compatibility. The new module's interface is very close to that of the old modules, but not identical.
The most significant difference is that the constructor functions for creating new hashing objects are named differently.

# Old versions
h = md5.md5()
h = md5.new()

# New version
h = hashlib.md5()

# Old versions
h = sha.sha()
h = sha.new()

# New version
h = hashlib.sha1()

# Hashes that weren't previously available
h = hashlib.sha224()
h = hashlib.sha256()
h = hashlib.sha384()
h = hashlib.sha512()

# Alternative form
h = hashlib.new('md5')  # Provide algorithm as a string

Once a hash object has been created, its methods are the same as before: update(string) hashes the specified string into the current digest state, digest() and hexdigest() return the digest value as a binary string or a string of hex digits, and copy() returns a new hashing object with the same digest state.

See also

The documentation for the hashlib module.

The sqlite3 package¶

The pysqlite module (https://www.pysqlite.org), a wrapper for the SQLite embedded database, has been added to the standard library under the package name sqlite3.

SQLite is a C library that provides a lightweight disk-based database that doesn't require a separate server process and allows accessing the database using a nonstandard variant of the SQL query language. Some applications can use SQLite for internal data storage. It's also possible to prototype an application using SQLite and then port the code to a larger database such as PostgreSQL or Oracle.

pysqlite was written by Gerhard Häring and provides a SQL interface compliant with the DB-API 2.0 specification described by PEP 249.

If you're compiling the Python source yourself, note that the source tree doesn't include the SQLite code, only the wrapper module.
You\u2019ll need to have the SQLite libraries and headers installed before compiling Python, and the build process will compile the module when the necessary headers are available.\nTo use the module, you must first create a Connection\nobject that\nrepresents the database. Here the data will be stored in the\n/tmp/example\nfile:\nconn = sqlite3.connect('/tmp/example')\nYou can also supply the special name :memory:\nto create a database in RAM.\nOnce you have a Connection\n, you can create a Cursor\nobject\nand call its execute()\nmethod to perform SQL commands:\nc = conn.cursor()\n# Create table\nc.execute('''create table stocks\n(date text, trans text, symbol text,\nqty real, price real)''')\n# Insert a row of data\nc.execute(\"\"\"insert into stocks\nvalues ('2006-01-05','BUY','RHAT',100,35.14)\"\"\")\nUsually your SQL operations will need to use values from Python variables. You shouldn\u2019t assemble your query using Python\u2019s string operations because doing so is insecure; it makes your program vulnerable to an SQL injection attack.\nInstead, use the DB-API\u2019s parameter substitution. Put ?\nas a placeholder\nwherever you want to use a value, and then provide a tuple of values as the\nsecond argument to the cursor\u2019s execute()\nmethod. (Other database modules\nmay use a different placeholder, such as %s\nor :1\n.) For example:\n# Never do this -- insecure!\nsymbol = 'IBM'\nc.execute(\"... 
where symbol = '%s'\" % symbol)\n# Do this instead\nt = (symbol,)\nc.execute('select * from stocks where symbol=?', t)\n# Larger example\nfor t in (('2006-03-28', 'BUY', 'IBM', 1000, 45.00),\n('2006-04-05', 'BUY', 'MSOFT', 1000, 72.00),\n('2006-04-06', 'SELL', 'IBM', 500, 53.00),\n):\nc.execute('insert into stocks values (?,?,?,?,?)', t)\nTo retrieve data after executing a SELECT statement, you can either treat the\ncursor as an iterator, call the cursor\u2019s fetchone()\nmethod to retrieve a\nsingle matching row, or call fetchall()\nto get a list of the matching\nrows.\nThis example uses the iterator form:\n>>> c = conn.cursor()\n>>> c.execute('select * from stocks order by price')\n>>> for row in c:\n... print row\n...\n(u'2006-01-05', u'BUY', u'RHAT', 100, 35.140000000000001)\n(u'2006-03-28', u'BUY', u'IBM', 1000, 45.0)\n(u'2006-04-06', u'SELL', u'IBM', 500, 53.0)\n(u'2006-04-05', u'BUY', u'MSOFT', 1000, 72.0)\n>>>\nFor more information about the SQL dialect supported by SQLite, see https://www.sqlite.org.\nSee also\n- https://www.pysqlite.org\nThe pysqlite web page.\n- https://www.sqlite.org\nThe SQLite web page; the documentation describes the syntax and the available data types for the supported SQL dialect.\nThe documentation for the sqlite3\nmodule.\n- PEP 249 - Database API Specification 2.0\nPEP written by Marc-Andr\u00e9 Lemburg.\nThe wsgiref package\u00b6\nThe Web Server Gateway Interface (WSGI) v1.0 defines a standard interface\nbetween web servers and Python web applications and is described in PEP 333.\nThe wsgiref\npackage is a reference implementation of the WSGI\nspecification.\nThe package includes a basic HTTP server that will run a WSGI application; this server is useful for debugging but isn\u2019t intended for production use. 
Setting up a server takes only a few lines of code:

    from wsgiref import simple_server

    wsgi_app = ...

    host = ''
    port = 8000
    httpd = simple_server.make_server(host, port, wsgi_app)
    httpd.serve_forever()

See also

- https://web.archive.org/web/20160331090247/http://wsgi.readthedocs.org/en/latest/
  A central web site for WSGI-related resources.
- PEP 333 - Python Web Server Gateway Interface v1.0
  PEP written by Phillip J. Eby.

Build and C API Changes¶

Changes to Python's build process and to the C API include:

- The Python source tree was converted from CVS to Subversion, in a complex migration procedure that was supervised and flawlessly carried out by Martin von Löwis. The procedure was developed as PEP 347.
- Coverity, a company that markets a source code analysis tool called Prevent, provided the results of their examination of the Python source code. The analysis found about 60 bugs that were quickly fixed. Many of the bugs were refcounting problems, often occurring in error-handling code. See https://scan.coverity.com for the statistics.
- The largest change to the C API came from PEP 353, which modifies the interpreter to use a Py_ssize_t type definition instead of int. See the earlier section PEP 353: Using ssize_t as the index type for a discussion of this change.
- The design of the bytecode compiler has changed a great deal, no longer generating bytecode by traversing the parse tree.
Instead the parse tree is converted to an abstract syntax tree (or AST), and it is the abstract syntax tree that's traversed to produce the bytecode.

It's possible for Python code to obtain AST objects by using the compile() built-in and specifying _ast.PyCF_ONLY_AST as the value of the flags parameter:

    from _ast import PyCF_ONLY_AST
    ast = compile("""a=0
    for i in range(10):
        a += i
    """, "<string>", 'exec', PyCF_ONLY_AST)

    assignment = ast.body[0]
    for_loop = ast.body[1]

No official documentation has been written for the AST code yet, but PEP 339 discusses the design. To start learning about the code, read the definition of the various AST nodes in Parser/Python.asdl. A Python script reads this file and generates a set of C structure definitions in Include/Python-ast.h. The PyParser_ASTFromString() and PyParser_ASTFromFile() functions, defined in Include/pythonrun.h, take Python source as input and return the root of an AST representing the contents. This AST can then be turned into a code object by PyAST_Compile(). For more information, read the source code, and then ask questions on python-dev.

The AST code was developed under Jeremy Hylton's management, and implemented by (in alphabetical order) Brett Cannon, Nick Coghlan, Grant Edwards, John Ehresman, Kurt Kaiser, Neal Norwitz, Tim Peters, Armin Rigo, and Neil Schemenauer, plus the participants in a number of AST sprints at conferences such as PyCon.

Evan Jones's patch to obmalloc, first described in a talk at PyCon DC 2005, was applied. Python 2.4 allocated small objects in 256K-sized arenas, but never freed arenas. With this patch, Python will free arenas when they're empty. The net effect is that on some platforms, when you allocate many objects, Python's memory usage may actually drop when you delete them and the memory may be returned to the operating system.
(Implemented by Evan Jones, and reworked by Tim Peters.)

Note that this change means extension modules must be more careful when allocating memory. Python's API has many different functions for allocating memory that are grouped into families. For example, PyMem_Malloc(), PyMem_Realloc(), and PyMem_Free() are one family that allocates raw memory, while PyObject_Malloc(), PyObject_Realloc(), and PyObject_Free() are another family that's supposed to be used for creating Python objects.

Previously these different families all reduced to the platform's malloc() and free() functions. This meant it didn't matter if you got things wrong and allocated memory with the PyMem function but freed it with the PyObject function. With 2.5's changes to obmalloc, these families now do different things and mismatches will probably result in a segfault. You should carefully test your C extension modules with Python 2.5.

- The built-in set types now have an official C API. Call PySet_New() and PyFrozenSet_New() to create a new set, PySet_Add() and PySet_Discard() to add and remove elements, and PySet_Contains() and PySet_Size() to examine the set's state. (Contributed by Raymond Hettinger.)
- C code can now obtain information about the exact revision of the Python interpreter by calling the Py_GetBuildInfo() function that returns a string of build information like this: "trunk:45355:45356M, Apr 13 2006, 07:42:19". (Contributed by Barry Warsaw.)
- Two new macros can be used to indicate C functions that are local to the current file so that a faster calling convention can be used. Py_LOCAL declares the function as returning a value of the specified type and uses a fast-calling qualifier. Py_LOCAL_INLINE does the same thing and also requests the function be inlined.
If the macro PY_LOCAL_AGGRESSIVE is defined before python.h is included, a set of more aggressive optimizations are enabled for the module; you should benchmark the results to find out if these optimizations actually make the code faster. (Contributed by Fredrik Lundh at the NeedForSpeed sprint.)

- PyErr_NewException(name, base, dict) can now accept a tuple of base classes as its base argument. (Contributed by Georg Brandl.)
- The PyErr_Warn() function for issuing warnings is now deprecated in favour of PyErr_WarnEx(category, message, stacklevel), which lets you specify the number of stack frames separating this function and the caller. A stacklevel of 1 is the function calling PyErr_WarnEx(), 2 is the function above that, and so forth. (Added by Neal Norwitz.)
- The CPython interpreter is still written in C, but the code can now be compiled with a C++ compiler without errors. (Implemented by Anthony Baxter, Martin von Löwis, Skip Montanaro.)
- The PyRange_New() function was removed. It was never documented, never used in the core code, and had dangerously lax error checking. In the unlikely case that your extensions were using it, you can replace it by something like the following:

      range = PyObject_CallFunction((PyObject*) &PyRange_Type,
                                    "lll", start, stop, step);

Port-Specific Changes¶

- MacOS X (10.3 and higher): dynamic loading of modules now uses the dlopen() function instead of MacOS-specific functions.
- MacOS X: an --enable-universalsdk switch was added to the configure script that compiles the interpreter as a universal binary able to run on both PowerPC and Intel processors. (Contributed by Ronald Oussoren; bpo-2573.)
- Windows: .dll is no longer supported as a filename extension for extension modules. .pyd is now the only filename extension that will be searched for.

Porting to Python 2.5¶

This section lists previously described changes that may require changes to your code:

- ASCII is now the default encoding for modules.
It's now a syntax error if a module contains string literals with 8-bit characters but doesn't have an encoding declaration. In Python 2.4 this triggered a warning, not a syntax error.

- Previously, the gi_frame attribute of a generator was always a frame object. Because of the PEP 342 changes described in section PEP 342: New Generator Features, it's now possible for gi_frame to be None.
- A new warning, UnicodeWarning, is triggered when you attempt to compare a Unicode string and an 8-bit string that can't be converted to Unicode using the default ASCII encoding. Previously such comparisons would raise a UnicodeDecodeError exception.
- Library: the csv module is now stricter about multi-line quoted fields. If your files contain newlines embedded within fields, the input should be split into lines in a manner which preserves the newline characters.
- Library: the locale module's format() function would previously accept any string as long as no more than one %char specifier appeared. In Python 2.5, the argument must be exactly one %char specifier with no surrounding text.
- Library: The pickle and cPickle modules no longer accept a return value of None from the __reduce__() method; the method must return a tuple of arguments instead. The modules also no longer accept the deprecated bin keyword parameter.
- Library: The SimpleXMLRPCServer and DocXMLRPCServer classes now have a rpc_paths attribute that constrains XML-RPC operations to a limited set of URL paths; the default is to allow only '/' and '/RPC2'. Setting rpc_paths to None or an empty tuple disables this path checking.
- C API: Many functions now use Py_ssize_t instead of int to allow processing more data on 64-bit machines. Extension code may need to make the same change to avoid warnings and to support 64-bit machines.
See the earlier section PEP 353: Using ssize_t as the index type for a discussion of this change.

- C API: The obmalloc changes mean that you must be careful to not mix usage of the PyMem_* and PyObject_* families of functions. Memory allocated with one family's *_Malloc must be freed with the corresponding family's *_Free function.

Acknowledgements¶

The author would like to thank the following people for offering suggestions, corrections and assistance with various drafts of this article: Georg Brandl, Nick Coghlan, Phillip J. Eby, Lars Gustäbel, Raymond Hettinger, Ralf W. Grosse-Kunstleve, Kent Johnson, Iain Lowe, Martin von Löwis, Fredrik Lundh, Andrew McNamara, Skip Montanaro, Gustavo Niemeyer, Paul Prescod, James Pryor, Mike Rovner, Scott Weikart, Barry Warsaw, Thomas Wouters.
What's New in Python 2.6¶

Author: A.M. Kuchling (amk at amk.ca)

This article explains the new features in Python 2.6, released on October 1, 2008. The release schedule is described in PEP 361.

The major theme of Python 2.6 is preparing the migration path to Python 3.0, a major redesign of the language. Whenever possible, Python 2.6 incorporates new features and syntax from 3.0 while remaining compatible with existing code by not removing older features or syntax. When it's not possible to do that, Python 2.6 tries to do what it can, adding compatibility functions in a future_builtins module and a -3 switch to warn about usages that will become unsupported in 3.0.

Some significant new packages have been added to the standard library, such as the multiprocessing and json modules, but there aren't many new features that aren't related to Python 3.0 in some way.

Python 2.6 also sees a number of improvements and bugfixes throughout the source. A search through the change logs finds there were 259 patches applied and 612 bugs fixed between Python 2.5 and 2.6. Both figures are likely to be underestimates.

This article doesn't attempt to provide a complete specification of the new features, but instead provides a convenient overview. For full details, you should refer to the documentation for Python 2.6. If you want to understand the rationale for the design and implementation, refer to the PEP for a particular new feature. Whenever possible, "What's New in Python" links to the bug/patch item for each change.

Python 3.0¶

The development cycle for Python versions 2.6 and 3.0 was synchronized, with the alpha and beta releases for both versions being made on the same days.
The development of 3.0 has influenced many features in 2.6.

Python 3.0 is a far-ranging redesign of Python that breaks compatibility with the 2.x series. This means that existing Python code will need some conversion in order to run on Python 3.0. However, not all the changes in 3.0 necessarily break compatibility. In cases where new features won't cause existing code to break, they've been backported to 2.6 and are described in this document in the appropriate place. Some of the 3.0-derived features are:

- A __complex__() method for converting objects to a complex number.
- Alternate syntax for catching exceptions: except TypeError as exc.
- The addition of functools.reduce() as a synonym for the built-in reduce() function.

Python 3.0 adds several new built-in functions and changes the semantics of some existing builtins. Functions that are new in 3.0 such as bin() have simply been added to Python 2.6, but existing builtins haven't been changed; instead, the future_builtins module has versions with the new 3.0 semantics. Code written to be compatible with 3.0 can do from future_builtins import hex, map as necessary.

A new command-line switch, -3, enables warnings about features that will be removed in Python 3.0. You can run code with this switch to see how much work will be necessary to port code to 3.0. The value of this switch is available to Python code as the boolean variable sys.py3kwarning, and to C extension code as Py_Py3kWarningFlag.

Changes to the Development Process¶

While 2.6 was being developed, the Python development process underwent two significant changes: we switched from SourceForge's issue tracker to a customized Roundup installation, and the documentation was converted from LaTeX to reStructuredText.

New Issue Tracker: Roundup¶

For a long time, the Python developers had been growing increasingly annoyed by SourceForge's bug tracker.
SourceForge\u2019s hosted solution doesn\u2019t permit much customization; for example, it wasn\u2019t possible to customize the life cycle of issues.\nThe infrastructure committee of the Python Software Foundation therefore posted a call for issue trackers, asking volunteers to set up different products and import some of the bugs and patches from SourceForge. Four different trackers were examined: Jira, Launchpad, Roundup, and Trac. The committee eventually settled on Jira and Roundup as the two candidates. Jira is a commercial product that offers no-cost hosted instances to free-software projects; Roundup is an open-source project that requires volunteers to administer it and a server to host it.\nAfter posting a call for volunteers, a new Roundup installation was set up at https://bugs.python.org. One installation of Roundup can host multiple trackers, and this server now also hosts issue trackers for Jython and for the Python web site. It will surely find other uses in the future. Where possible, this edition of \u201cWhat\u2019s New in Python\u201d links to the bug/patch item for each change.\nHosting of the Python bug tracker is kindly provided by\nUpfront Systems\nof Stellenbosch, South Africa. Martin von L\u00f6wis put a\nlot of effort into importing existing bugs and patches from\nSourceForge; his scripts for this import operation are at\nhttps://svn.python.org/view/tracker/importer/\nand may be useful to\nother projects wishing to move from SourceForge to Roundup.\nSee also\n- https://bugs.python.org\nThe Python bug tracker.\n- https://bugs.jython.org:\nThe Jython bug tracker.\n- https://roundup.sourceforge.io/\nRoundup downloads and documentation.\n- https://svn.python.org/view/tracker/importer/\nMartin von L\u00f6wis\u2019s conversion scripts.\nNew Documentation Format: reStructuredText Using Sphinx\u00b6\nThe Python documentation was written using LaTeX since the project started around 1989. 
In the 1980s and early 1990s, most documentation was printed out for later study, not viewed online. LaTeX was widely used because it provided attractive printed output while remaining straightforward to write once the basic rules of the markup were learned.\nToday LaTeX is still used for writing publications destined for printing, but the landscape for programming tools has shifted. We no longer print out reams of documentation; instead, we browse through it online and HTML has become the most important format to support. Unfortunately, converting LaTeX to HTML is fairly complicated and Fred L. Drake Jr., the long-time Python documentation editor, spent a lot of time maintaining the conversion process. Occasionally people would suggest converting the documentation into SGML and later XML, but performing a good conversion is a major task and no one ever committed the time required to finish the job.\nDuring the 2.6 development cycle, Georg Brandl put a lot of effort into building a new toolchain for processing the documentation. The resulting package is called Sphinx, and is available from https://www.sphinx-doc.org/.\nSphinx concentrates on HTML output, producing attractively styled and modern HTML; printed output is still supported through conversion to LaTeX. 
The input format is reStructuredText, a markup syntax supporting custom extensions and directives that is commonly used in the Python community.

Sphinx is a standalone package that can be used for writing, and almost two dozen other projects (listed on the Sphinx web site) have adopted Sphinx as their documentation tool.

See also

- Documenting Python
  Describes how to write for Python's documentation.
- Sphinx
  Documentation and code for the Sphinx toolchain.
- Docutils
  The underlying reStructuredText parser and toolset.

PEP 343: The 'with' statement¶

The previous version, Python 2.5, added the 'with' statement as an optional feature, to be enabled by a from __future__ import with_statement directive. In 2.6 the statement no longer needs to be specially enabled; this means that with is now always a keyword. The rest of this section is a copy of the corresponding section from the "What's New in Python 2.5" document; if you're familiar with the 'with' statement from Python 2.5, you can skip this section.

The 'with' statement clarifies code that previously would use try...finally blocks to ensure that clean-up code is executed. In this section, I'll discuss the statement as it will commonly be used. In the next section, I'll examine the implementation details and show how to write objects for use with this statement.

The 'with' statement is a control-flow structure whose basic structure is:

    with expression [as variable]:
        with-block

The expression is evaluated, and it should result in an object that supports the context management protocol (that is, has __enter__() and __exit__() methods).

The object's __enter__() is called before with-block is executed and therefore can run set-up code. It also may return a value that is bound to the name variable, if given.
(Note carefully that variable is not assigned the result of expression.)

After execution of the with-block is finished, the object's __exit__() method is called, even if the block raised an exception, and can therefore run clean-up code.

Some standard Python objects now support the context management protocol and can be used with the 'with' statement. File objects are one example:

    with open('/etc/passwd', 'r') as f:
        for line in f:
            print line
            ... more processing code ...

After this statement has executed, the file object in f will have been automatically closed, even if the for loop raised an exception part-way through the block.

Note: In this case, f is the same object created by open(), because __enter__() returns self.

The threading module's locks and condition variables also support the 'with' statement:

    lock = threading.Lock()
    with lock:
        # Critical section of code
        ...

The lock is acquired before the block is executed and always released once the block is complete.

The localcontext() function in the decimal module makes it easy to save and restore the current decimal context, which encapsulates the desired precision and rounding characteristics for computations:

    from decimal import Decimal, Context, localcontext

    # Displays with default precision of 28 digits
    v = Decimal('578')
    print v.sqrt()

    with localcontext(Context(prec=16)):
        # All code in this block uses a precision of 16 digits.
        # The original context is restored on exiting the block.
        print v.sqrt()

Writing Context Managers¶

Under the hood, the 'with' statement is fairly complicated. Most people will only use 'with' in company with existing objects and don't need to know these details, so you can skip the rest of this section if you like.
Authors of new objects will need to understand the details of the underlying implementation and should keep reading.

A high-level explanation of the context management protocol is:

- The expression is evaluated and should result in an object called a "context manager". The context manager must have __enter__() and __exit__() methods.
- The context manager's __enter__() method is called. The value returned is assigned to VAR. If no as VAR clause is present, the value is simply discarded.
- The code in BLOCK is executed.
- If BLOCK raises an exception, the context manager's __exit__() method is called with three arguments, the exception details (type, value, traceback, the same values returned by sys.exc_info(), which can also be None if no exception occurred). The method's return value controls whether an exception is re-raised: any false value re-raises the exception, and True will result in suppressing it. You'll only rarely want to suppress the exception, because if you do the author of the code containing the 'with' statement will never realize anything went wrong.
- If BLOCK didn't raise an exception, the __exit__() method is still called, but type, value, and traceback are all None.

Let's think through an example. I won't present detailed code but will only sketch the methods necessary for a database that supports transactions.

(For people unfamiliar with database terminology: a set of changes to the database are grouped into a transaction. Transactions can be either committed, meaning that all the changes are written into the database, or rolled back, meaning that the changes are all discarded and the database is unchanged. See any database textbook for more information.)

Let's assume there's an object representing a database connection.
Our goal will be to let the user write code like this:

    db_connection = DatabaseConnection()
    with db_connection as cursor:
        cursor.execute('insert into ...')
        cursor.execute('delete from ...')
        # ... more operations ...

The transaction should be committed if the code in the block runs flawlessly or rolled back if there's an exception. Here's the basic interface for DatabaseConnection that I'll assume:

    class DatabaseConnection:
        # Database interface
        def cursor(self):
            "Returns a cursor object and starts a new transaction"
        def commit(self):
            "Commits current transaction"
        def rollback(self):
            "Rolls back current transaction"

The __enter__() method is pretty easy, having only to start a new transaction. For this application the resulting cursor object would be a useful result, so the method will return it. The user can then add as cursor to their 'with' statement to bind the cursor to a variable name.

    class DatabaseConnection:
        ...
        def __enter__(self):
            # Code to start a new transaction
            cursor = self.cursor()
            return cursor

The __exit__() method is the most complicated because it's where most of the work has to be done. The method has to check if an exception occurred. If there was no exception, the transaction is committed. The transaction is rolled back if there was an exception.

In the code below, execution will just fall off the end of the function, returning the default value of None. None is false, so the exception will be re-raised automatically.
If you wished, you could be more explicit and add a return statement at the marked location.

    class DatabaseConnection:
        ...
        def __exit__(self, type, value, tb):
            if tb is None:
                # No exception, so commit
                self.commit()
            else:
                # Exception occurred, so rollback.
                self.rollback()
                # return False

The contextlib module¶

The contextlib module provides some functions and a decorator that are useful when writing objects for use with the 'with' statement.

The decorator is called contextmanager(), and lets you write a single generator function instead of defining a new class. The generator should yield exactly one value. The code up to the yield will be executed as the __enter__() method, and the value yielded will be the method's return value that will get bound to the variable in the 'with' statement's as clause, if any. The code after the yield will be executed in the __exit__() method. Any exception raised in the block will be raised by the yield statement.

Using this decorator, our database example from the previous section could be written as:

    from contextlib import contextmanager

    @contextmanager
    def db_transaction(connection):
        cursor = connection.cursor()
        try:
            yield cursor
        except:
            connection.rollback()
            raise
        else:
            connection.commit()

    db = DatabaseConnection()
    with db_transaction(db) as cursor:
        ...

The contextlib module also has a nested(mgr1, mgr2, ...) function that combines a number of context managers so you don't need to write nested 'with' statements.
In this example, the single 'with' statement both starts a database transaction and acquires a thread lock:

    lock = threading.Lock()
    with nested(db_transaction(db), lock) as (cursor, locked):
        ...

Finally, the closing() function returns its argument so that it can be bound to a variable, and calls the argument's .close() method at the end of the block.

    import urllib, sys
    from contextlib import closing

    with closing(urllib.urlopen('http://www.yahoo.com')) as f:
        for line in f:
            sys.stdout.write(line)

See also

- PEP 343 - The "with" statement
  PEP written by Guido van Rossum and Nick Coghlan; implemented by Mike Bland, Guido van Rossum, and Neal Norwitz. The PEP shows the code generated for a 'with' statement, which can be helpful in learning how the statement works.
- The documentation for the contextlib module.

PEP 366: Explicit Relative Imports From a Main Module¶

Python's -m switch allows running a module as a script. When you ran a module that was located inside a package, relative imports didn't work correctly.

The fix for Python 2.6 adds a module.__package__ attribute. When this attribute is present, relative imports will be relative to the value of this attribute instead of the __name__ attribute. PEP 302-style importers can then set __package__ as necessary. The runpy module that implements the -m switch now does this, so relative imports will now work correctly in scripts running from inside a package.

PEP 370: Per-user site-packages Directory¶

When you run Python, the module search path sys.path usually includes a directory whose path ends in "site-packages". This directory is intended to hold locally installed packages available to all users using a machine or a particular site installation.

Python 2.6 introduces a convention for user-specific site directories.
The directory varies depending on the platform:\nUnix and Mac OS X:\n~/.local/\nWindows:\n%APPDATA%/Python\nWithin this directory, there will be version-specific subdirectories,\nsuch as lib/python2.6/site-packages\non Unix/Mac OS and\nPython26/site-packages\non Windows.\nIf you don\u2019t like the default directory, it can be overridden by an\nenvironment variable. PYTHONUSERBASE\nsets the root\ndirectory used for all Python versions supporting this feature. On\nWindows, the directory for application-specific data can be changed by\nsetting the APPDATA\nenvironment variable. You can also\nmodify the site.py\nfile for your Python installation.\nThe feature can be disabled entirely by running Python with the\n-s\noption or setting the PYTHONNOUSERSITE\nenvironment variable.\nSee also\n- PEP 370 - Per-user\nsite-packages\nDirectory PEP written and implemented by Christian Heimes.\nPEP 371: The multiprocessing\nPackage\u00b6\nThe new multiprocessing\npackage lets Python programs create new\nprocesses that will perform a computation and return a result to the\nparent. The parent and child processes can communicate using queues\nand pipes, synchronize their operations using locks and semaphores,\nand can share simple arrays of data.\nThe multiprocessing\nmodule started out as an exact emulation of\nthe threading\nmodule using processes instead of threads. That\ngoal was discarded along the path to Python 2.6, but the general\napproach of the module is still similar. The fundamental class\nis the Process\n, which is passed a callable object and\na collection of arguments. The start()\nmethod\nsets the callable running in a subprocess, after which you can call\nthe is_alive()\nmethod to check whether the\nsubprocess is still running and the join()\nmethod to wait for the process to exit.\nHere\u2019s a simple example where the subprocess will calculate a factorial. 
The function doing the calculation is written strangely so that it takes significantly longer when the input argument is a multiple of 4.\nimport time\nfrom multiprocessing import Process, Queue\ndef factorial(queue, N):\n\"Compute a factorial.\"\n# If N is a multiple of 4, this function will take much longer.\nif (N % 4) == 0:\ntime.sleep(.05 * N/4)\n# Calculate the result\nfact = 1L\nfor i in range(1, N+1):\nfact = fact * i\n# Put the result on the queue\nqueue.put(fact)\nif __name__ == '__main__':\nqueue = Queue()\nN = 5\np = Process(target=factorial, args=(queue, N))\np.start()\np.join()\nresult = queue.get()\nprint 'Factorial', N, '=', result\nA Queue\nis used to communicate the result of the factorial.\nThe Queue\nobject is stored in a global variable.\nThe child process will use the value of the variable when the child\nwas created; because it\u2019s a Queue\n, parent and child can use\nthe object to communicate. (If the parent were to change the value of\nthe global variable, the child\u2019s value would be unaffected, and vice\nversa.)\nTwo other classes, Pool\nand\nManager\n, provide higher-level interfaces.\nPool\nwill create a fixed number of worker\nprocesses, and requests can then be distributed to the workers by calling\napply()\nor\napply_async()\nto add a single request, and\nmap()\nor\nmap_async()\nto add a number of\nrequests. The following code uses a Pool\nto\nspread requests across 5 worker processes and retrieve a list of results:\nfrom multiprocessing import Pool\ndef factorial(N, dictionary):\n\"Compute a factorial.\"\n...\np = Pool(5)\nresult = p.map(factorial, range(1, 1000, 10))\nfor v in result:\nprint v\nThis produces the following output:\n1\n39916800\n51090942171709440000\n8222838654177922817725562880000000\n33452526613163807108170062053440751665152000000000\n...\nThe other high-level interface, the Manager\nclass,\ncreates a separate server process that can hold master copies of Python data\nstructures. 
Other processes can then access and modify these data\nstructures using proxy objects. The following example creates a\nshared dictionary by calling the dict()\nmethod; the worker\nprocesses then insert values into the dictionary. (Locking is not\ndone for you automatically, which doesn\u2019t matter in this example.\nManager\n\u2019s methods also include\nLock()\n,\nRLock()\n,\nand Semaphore()\nto create\nshared locks.)\nimport time\nfrom multiprocessing import Pool, Manager\ndef factorial(N, dictionary):\n\"Compute a factorial.\"\n# Calculate the result\nfact = 1L\nfor i in range(1, N+1):\nfact = fact * i\n# Store result in dictionary\ndictionary[N] = fact\nif __name__ == '__main__':\np = Pool(5)\nmgr = Manager()\nd = mgr.dict() # Create shared dictionary\n# Run tasks using the pool\nfor N in range(1, 1000, 10):\np.apply_async(factorial, (N, d))\n# Mark pool as closed -- no more tasks can be added.\np.close()\n# Wait for tasks to exit\np.join()\n# Output results\nfor k, v in sorted(d.items()):\nprint k, v\nThis will produce the output:\n1 1\n11 39916800\n21 51090942171709440000\n31 8222838654177922817725562880000000\n41 33452526613163807108170062053440751665152000000000\n51 15511187532873822802242430164693032110632597200169861120000...\nSee also\nThe documentation for the multiprocessing\nmodule.\n- PEP 371 - Addition of the multiprocessing package\nPEP written by Jesse Noller and Richard Oudkerk; implemented by Richard Oudkerk and Jesse Noller.\nPEP 3101: Advanced String Formatting\u00b6\nIn Python 3.0, the %\noperator is supplemented by a more powerful string\nformatting method, format()\n. 
Support for the str.format()\nmethod\nhas been backported to Python 2.6.\nIn 2.6, both 8-bit and Unicode strings have a .format()\nmethod that\ntreats the string as a template and takes the arguments to be formatted.\nThe formatting template uses curly brackets ({\n, }\n) as special characters:\n>>> # Substitute positional argument 0 into the string.\n>>> \"User ID: {0}\".format(\"root\")\n'User ID: root'\n>>> # Use the named keyword arguments\n>>> \"User ID: {uid} Last seen: {last_login}\".format(\n... uid=\"root\",\n... last_login = \"5 Mar 2008 07:20\")\n'User ID: root Last seen: 5 Mar 2008 07:20'\nCurly brackets can be escaped by doubling them:\n>>> \"Empty dict: {{}}\".format()\n\"Empty dict: {}\"\nField names can be integers indicating positional arguments, such as\n{0}\n, {1}\n, etc. or names of keyword arguments. You can also\nsupply compound field names that read attributes or access dictionary keys:\n>>> import sys\n>>> print 'Platform: {0.platform}\\nPython version: {0.version}'.format(sys)\nPlatform: darwin\nPython version: 2.6a1+ (trunk:61261M, Mar 5 2008, 20:29:41)\n[GCC 4.0.1 (Apple Computer, Inc. build 5367)]'\n>>> import mimetypes\n>>> 'Content-type: {0[.mp4]}'.format(mimetypes.types_map)\n'Content-type: video/mp4'\nNote that when using dictionary-style notation such as [.mp4]\n, you\ndon\u2019t need to put any quotation marks around the string; it will look\nup the value using .mp4\nas the key. Strings beginning with a\nnumber will be converted to an integer. You can\u2019t write more\ncomplicated expressions inside a format string.\nSo far we\u2019ve shown how to specify which field to substitute into the resulting string. The precise formatting used is also controllable by adding a colon followed by a format specifier. 
For example:

>>> # Field 0: left justify, pad to 15 characters
>>> # Field 1: right justify, pad to 6 characters
>>> fmt = '{0:15} ${1:>6}'
>>> fmt.format('Registration', 35)
'Registration    $    35'
>>> fmt.format('Tutorial', 50)
'Tutorial        $    50'
>>> fmt.format('Banquet', 125)
'Banquet         $   125'

Format specifiers can reference other fields through nesting:

>>> fmt = '{0:{1}}'
>>> width = 15
>>> fmt.format('Invoice #1234', width)
'Invoice #1234  '
>>> width = 35
>>> fmt.format('Invoice #1234', width)
'Invoice #1234                      '

The alignment of a field within the desired width can be specified:

| Character | Effect |
|---|---|
| < (default) | Left-align |
| > | Right-align |
| ^ | Center |
| = | (For numeric types only) Pad after the sign. |

Format specifiers can also include a presentation type, which controls how the value is formatted. For example, floating-point numbers can be formatted as a general number or in exponential notation:

>>> '{0:g}'.format(3.75)
'3.75'
>>> '{0:e}'.format(3.75)
'3.750000e+00'

A variety of presentation types are available. Consult the 2.6 documentation for a complete list; here's a sample:

| Type | Meaning |
|---|---|
| 'b' | Binary. Outputs the number in base 2. |
| 'c' | Character. Converts the integer to the corresponding Unicode character before printing. |
| 'd' | Decimal Integer. Outputs the number in base 10. |
| 'o' | Octal format. Outputs the number in base 8. |
| 'x' | Hex format. Outputs the number in base 16, using lower-case letters for the digits above 9. |
| 'e' | Exponent notation. Prints the number in scientific notation using the letter 'e' to indicate the exponent. |
| 'g' | General format. This prints the number as a fixed-point number, unless the number is too large, in which case it switches to 'e' exponent notation. |
| 'n' | Number. This is the same as 'g' (for floats) or 'd' (for integers), except that it uses the current locale setting to insert the appropriate number separator characters. |
| '%' | Percentage. Multiplies the number by 100 and displays in fixed ('f') format, followed by a percent sign. |

Classes and types can define a __format__() method to control how they're formatted. It receives a single argument, the format specifier:

def __format__(self, format_spec):
    if isinstance(format_spec, unicode):
        return unicode(str(self))
    else:
        return str(self)

There's also a format() builtin that will format a single value. It calls the type's __format__() method with the provided specifier:

>>> format(75.6564, '.2f')
'75.66'

See also
- Format String Syntax
The reference documentation for format fields.
- PEP 3101 - Advanced String Formatting
PEP written by Talin. Implemented by Eric Smith.

PEP 3105: print As a Function¶

The print statement becomes the print() function in Python 3.0.
Making print() a function makes it possible to replace the function
by doing def print(...) or importing a new function from somewhere else.
Python 2.6 has a __future__ import that removes print as language
syntax, letting you use the functional form instead.
For example:\n>>> from __future__ import print_function\n>>> print('# of entries', len(dictionary), file=sys.stderr)\nThe signature of the new function is:\ndef print(*args, sep=' ', end='\\n', file=None)\nThe parameters are:\nargs: positional arguments whose values will be printed out.\nsep: the separator, which will be printed between arguments.\nend: the ending text, which will be printed after all of the arguments have been output.\nfile: the file object to which the output will be sent.\nSee also\n- PEP 3105 - Make print a function\nPEP written by Georg Brandl.\nPEP 3110: Exception-Handling Changes\u00b6\nOne error that Python programmers occasionally make is writing the following code:\ntry:\n...\nexcept TypeError, ValueError: # Wrong!\n...\nThe author is probably trying to catch both TypeError\nand\nValueError\nexceptions, but this code actually does something\ndifferent: it will catch TypeError\nand bind the resulting\nexception object to the local name \"ValueError\"\n. The\nValueError\nexception will not be caught at all. The correct\ncode specifies a tuple of exceptions:\ntry:\n...\nexcept (TypeError, ValueError):\n...\nThis error happens because the use of the comma here is ambiguous: does it indicate two different nodes in the parse tree, or a single node that\u2019s a tuple?\nPython 3.0 makes this unambiguous by replacing the comma with the word\n\u201cas\u201d. To catch an exception and store the exception object in the\nvariable exc\n, you must write:\ntry:\n...\nexcept TypeError as exc:\n...\nPython 3.0 will only support the use of \u201cas\u201d, and therefore interprets the first example as catching two different exceptions. Python 2.6 supports both the comma and \u201cas\u201d, so existing code will continue to work. 
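The two correct forms, the exception tuple and the "as" binding, can be combined in one handler chain. This sketch (the function and its return values are illustrative) runs unchanged on both 2.6 and 3.x:

```python
def describe(value):
    try:
        return 100 // int(value)
    except (TypeError, ValueError):    # a tuple catches several exception types
        return "not a number"
    except ZeroDivisionError as exc:   # 'as' binds the exception object
        return "error: " + type(exc).__name__

print(describe("4"))     # 25
print(describe("oops"))  # not a number
print(describe("0"))     # error: ZeroDivisionError
```

Note that the old comma form would have silently rebound the second name instead of catching it, which is exactly the bug described above.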
We therefore suggest using \u201cas\u201d when writing new Python code that will only be executed with 2.6.\nSee also\n- PEP 3110 - Catching Exceptions in Python 3000\nPEP written and implemented by Collin Winter.\nPEP 3112: Byte Literals\u00b6\nPython 3.0 adopts Unicode as the language\u2019s fundamental string type and\ndenotes 8-bit literals differently, either as b'string'\nor using a bytes\nconstructor. For future compatibility,\nPython 2.6 adds bytes\nas a synonym for the str\ntype,\nand it also supports the b''\nnotation.\nThe 2.6 str\ndiffers from 3.0\u2019s bytes\ntype in various\nways; most notably, the constructor is completely different. In 3.0,\nbytes([65, 66, 67])\nis 3 elements long, containing the bytes\nrepresenting ABC\n; in 2.6, bytes([65, 66, 67])\nreturns the\n12-byte string representing the str()\nof the list.\nThe primary use of bytes\nin 2.6 will be to write tests of\nobject type such as isinstance(x, bytes)\n. This will help the 2to3\nconverter, which can\u2019t tell whether 2.x code intends strings to\ncontain either characters or 8-bit bytes; you can now\nuse either bytes\nor str\nto represent your intention\nexactly, and the resulting code will also be correct in Python 3.0.\nThere\u2019s also a __future__\nimport that causes all string literals\nto become Unicode strings. This means that \\u\nescape sequences\ncan be used to include Unicode characters:\nfrom __future__ import unicode_literals\ns = ('\\u751f\\u3080\\u304e\\u3000\\u751f\\u3054'\n'\\u3081\\u3000\\u751f\\u305f\\u307e\\u3054')\nprint len(s) # 12 Unicode characters\nAt the C level, Python 3.0 will rename the existing 8-bit\nstring type, called PyStringObject\nin Python 2.x,\nto PyBytesObject\n. Python 2.6 uses #define\nto support using the names PyBytesObject()\n,\nPyBytes_Check()\n, PyBytes_FromStringAndSize()\n,\nand all the other functions and macros used with strings.\nInstances of the bytes\ntype are immutable just\nas strings are. 
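The constructor difference described above is easy to see by running the same expression under a 3.x interpreter, where bytes is a genuine distinct type rather than 2.6's synonym for str (this sketch assumes Python 3):

```python
data = bytes([65, 66, 67])
len(data)       # 3 in Python 3; under 2.6, bytes is str, so len('[65, 66, 67]') == 12
data[0]         # 65: indexing a 3.x bytes object yields an integer
data == b"ABC"  # True in Python 3
```

Code written with the isinstance(x, bytes) tests recommended above works in both versions; only construction and indexing behave differently.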
A new bytearray type stores a mutable sequence of bytes:

>>> bytearray([65, 66, 67])
bytearray(b'ABC')
>>> b = bytearray(u'\u21ef\u3244', 'utf-8')
>>> b
bytearray(b'\xe2\x87\xaf\xe3\x89\x84')
>>> b[0] = '\xe3'
>>> b
bytearray(b'\xe3\x87\xaf\xe3\x89\x84')
>>> unicode(str(b), 'utf-8')
u'\u31ef\u3244'

Byte arrays support most of the methods of string types, such as startswith()/endswith(), find()/rfind(), and some of the methods of lists, such as append(), pop(), and reverse().

>>> b = bytearray('ABC')
>>> b.append('d')
>>> b.append(ord('e'))
>>> b
bytearray(b'ABCde')

There's also a corresponding C API, with PyByteArray_FromObject(), PyByteArray_FromStringAndSize(), and various other functions.

See also
- PEP 3112 - Bytes literals in Python 3000
PEP written by Jason Orendorff; backported to 2.6 by Christian Heimes.

PEP 3116: New I/O Library¶

Python's built-in file objects support a number of methods, but file-like objects don't necessarily support all of them. Objects that imitate files usually support read() and write(), but they may not support readline(), for example. Python 3.0 introduces a layered I/O library in the io module that separates buffering and text-handling features from the fundamental read and write operations.

There are three levels of abstract base classes provided by the io module:

- RawIOBase defines raw I/O operations: read(), readinto(), write(), seek(), tell(), truncate(), and close(). Most of the methods of this class will often map to a single system call.
There are also readable(), writable(), and seekable() methods for determining what operations a given object will allow. Python 3.0 has concrete implementations of this class for files and sockets, but Python 2.6 hasn't restructured its file and socket objects in this way.

- BufferedIOBase is an abstract base class that buffers data in memory to reduce the number of system calls used, making I/O processing more efficient. It supports all of the methods of RawIOBase, and adds a raw attribute holding the underlying raw object. There are five concrete classes implementing this ABC. BufferedWriter and BufferedReader are for objects that support write-only or read-only usage that have a seek() method for random access. BufferedRandom objects support read and write access upon the same underlying stream, and BufferedRWPair is for objects such as TTYs that have both read and write operations acting upon unconnected streams of data. The BytesIO class supports reading, writing, and seeking over an in-memory buffer.

- TextIOBase: Provides functions for reading and writing strings (remember, strings will be Unicode in Python 3.0), and supporting universal newlines. TextIOBase defines the readline() method and supports iteration upon objects. There are two concrete implementations. TextIOWrapper wraps a buffered I/O object, supporting all of the methods for text I/O and adding a buffer attribute for access to the underlying object. StringIO simply buffers everything in memory without ever writing anything to disk.

(In Python 2.6, io.StringIO is implemented in pure Python, so it's pretty slow. You should therefore stick with the existing StringIO module or cStringIO for now. At some point Python 3.0's io module will be rewritten into C for speed, and perhaps the C implementation will be backported to the 2.x releases.)

In Python 2.6, the underlying implementations haven't been restructured to build on top of the io module's classes.
The\nmodule is being provided to make it easier to write code that\u2019s\nforward-compatible with 3.0, and to save developers the effort of writing\ntheir own implementations of buffering and text I/O.\nSee also\n- PEP 3116 - New I/O\nPEP written by Daniel Stutzbach, Mike Verdone, and Guido van Rossum. Code by Guido van Rossum, Georg Brandl, Walter Doerwald, Jeremy Hylton, Martin von L\u00f6wis, Tony Lownds, and others.\nPEP 3118: Revised Buffer Protocol\u00b6\nThe buffer protocol is a C-level API that lets Python types\nexchange pointers into their internal representations. A\nmemory-mapped file can be viewed as a buffer of characters, for\nexample, and this lets another module such as re\ntreat memory-mapped files as a string of characters to be searched.\nThe primary users of the buffer protocol are numeric-processing packages such as NumPy, which expose the internal representation of arrays so that callers can write data directly into an array instead of going through a slower API. This PEP updates the buffer protocol in light of experience from NumPy development, adding a number of new features such as indicating the shape of an array or locking a memory region.\nThe most important new C API function is\nPyObject_GetBuffer(PyObject *obj, Py_buffer *view, int flags)\n, which\ntakes an object and a set of flags, and fills in the\nPy_buffer\nstructure with information\nabout the object\u2019s memory representation. Objects\ncan use this operation to lock memory in place\nwhile an external caller could be modifying the contents,\nso there\u2019s a corresponding PyBuffer_Release(Py_buffer *view)\nto\nindicate that the external caller is done.\nThe flags argument to PyObject_GetBuffer()\nspecifies\nconstraints upon the memory returned. 
Some examples are:

- PyBUF_WRITABLE indicates that the memory must be writable.
- PyBUF_LOCK requests a read-only or exclusive lock on the memory.
- PyBUF_C_CONTIGUOUS and PyBUF_F_CONTIGUOUS request a C-contiguous (last dimension varies the fastest) or Fortran-contiguous (first dimension varies the fastest) array layout.

Two new argument codes for PyArg_ParseTuple(), s* and z*, return locked buffer objects for a parameter.

See also
- PEP 3118 - Revising the buffer protocol
PEP written by Travis Oliphant and Carl Banks; implemented by Travis Oliphant.

PEP 3119: Abstract Base Classes¶

Some object-oriented languages such as Java support interfaces, declaring that a class has a given set of methods or supports a given access protocol. Abstract Base Classes (or ABCs) are an equivalent feature for Python. The ABC support consists of an abc module containing a metaclass called ABCMeta, special handling of this metaclass by the isinstance() and issubclass() builtins, and a collection of basic ABCs that the Python developers think will be widely useful. Future versions of Python will probably add more ABCs.

Let's say you have a particular class and wish to know whether it supports dictionary-style access. The phrase "dictionary-style" is vague, however. It probably means that accessing items with obj[1] works. Does it imply that setting items with obj[2] = value works? Or that the object will have keys(), values(), and items() methods? What about the iterative variants such as iterkeys()? What about copy() and update()? Iterating over the object with iter()?

The Python 2.6 collections module includes a number of different ABCs that represent these distinctions. Iterable indicates that a class defines __iter__(), and Container means the class defines a __contains__() method and therefore supports x in y expressions.
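These checks are structural: a class is recognized purely by defining the special methods, with no explicit registration. A minimal sketch (Bag is an illustrative class; modern Python keeps these ABCs in collections.abc, while in 2.6 they lived directly in collections):

```python
from collections.abc import Iterable, Container

class Bag:
    def __init__(self, items):
        self._items = list(items)
    def __iter__(self):           # defining __iter__ makes Bag an Iterable
        return iter(self._items)
    def __contains__(self, x):    # defining __contains__ makes Bag a Container
        return x in self._items

b = Bag([1, 2, 3])
isinstance(b, Iterable)   # True, although Bag never subclasses Iterable
isinstance(b, Container)  # True
2 in b                    # True, dispatched to __contains__
```

The ABCs use a subclass hook that looks for the required special methods, which is why no inheritance or register() call is needed here.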
The basic\ndictionary interface of getting items, setting items, and\nkeys()\n, values()\n, and items()\n, is defined by the\nMutableMapping\nABC.\nYou can derive your own classes from a particular ABC to indicate they support that ABC\u2019s interface:\nimport collections\nclass Storage(collections.MutableMapping):\n...\nAlternatively, you could write the class without deriving from\nthe desired ABC and instead register the class by\ncalling the ABC\u2019s register()\nmethod:\nimport collections\nclass Storage:\n...\ncollections.MutableMapping.register(Storage)\nFor classes that you write, deriving from the ABC is probably clearer.\nThe register()\nmethod is useful when you\u2019ve written a new\nABC that can describe an existing type or class, or if you want\nto declare that some third-party class implements an ABC.\nFor example, if you defined a PrintableType\nABC,\nit\u2019s legal to do:\n# Register Python's types\nPrintableType.register(int)\nPrintableType.register(float)\nPrintableType.register(str)\nClasses should obey the semantics specified by an ABC, but Python can\u2019t check this; it\u2019s up to the class author to understand the ABC\u2019s requirements and to implement the code accordingly.\nTo check whether an object supports a particular interface, you can now write:\ndef func(d):\nif not isinstance(d, collections.MutableMapping):\nraise ValueError(\"Mapping object expected, not %r\" % d)\nDon\u2019t feel that you must now begin writing lots of checks as in the above example. Python has a strong tradition of duck-typing, where explicit type-checking is never done and code simply calls methods on an object, trusting that those methods will be there and raising an exception if they aren\u2019t. 
Be judicious in checking for ABCs and only do it where it's absolutely necessary.

You can write your own ABCs by using abc.ABCMeta as the metaclass in a class definition:

from abc import ABCMeta, abstractmethod

class Drawable():
    __metaclass__ = ABCMeta

    @abstractmethod
    def draw(self, x, y, scale=1.0):
        pass

    def draw_doubled(self, x, y):
        self.draw(x, y, scale=2.0)

class Square(Drawable):
    def draw(self, x, y, scale):
        ...

In the Drawable ABC above, the draw_doubled() method renders the object at twice its size and can be implemented in terms of other methods described in Drawable. Classes implementing this ABC therefore don't need to provide their own implementation of draw_doubled(), though they can do so. An implementation of draw() is necessary, though; the ABC can't provide a useful generic implementation.

You can apply the @abstractmethod decorator to methods such as draw() that must be implemented; Python will then raise an exception for classes that don't define the method. Note that the exception is only raised when you actually try to create an instance of a subclass lacking the method:

>>> class Circle(Drawable):
...     pass
...
>>> c = Circle()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Can't instantiate abstract class Circle with abstract methods draw
>>>

Abstract data attributes can be declared using the @abstractproperty decorator:

from abc import abstractproperty
...

@abstractproperty
def readonly(self):
    return self._x

Subclasses must then define a readonly property.

See also
- PEP 3119 - Introducing Abstract Base Classes
PEP written by Guido van Rossum and Talin. Implemented by Guido van Rossum.
Backported to 2.6 by Benjamin Aranguren, with Alex Martelli.\nPEP 3127: Integer Literal Support and Syntax\u00b6\nPython 3.0 changes the syntax for octal (base-8) integer literals, prefixing them with \u201c0o\u201d or \u201c0O\u201d instead of a leading zero, and adds support for binary (base-2) integer literals, signalled by a \u201c0b\u201d or \u201c0B\u201d prefix.\nPython 2.6 doesn\u2019t drop support for a leading 0 signalling an octal number, but it does add support for \u201c0o\u201d and \u201c0b\u201d:\n>>> 0o21, 2*8 + 1\n(17, 17)\n>>> 0b101111\n47\nThe oct()\nbuiltin still returns numbers\nprefixed with a leading zero, and a new bin()\nbuiltin returns the binary representation for a number:\n>>> oct(42)\n'052'\n>>> future_builtins.oct(42)\n'0o52'\n>>> bin(173)\n'0b10101101'\nThe int()\nand long()\nbuiltins will now accept the \u201c0o\u201d\nand \u201c0b\u201d prefixes when base-8 or base-2 are requested, or when the\nbase argument is zero (signalling that the base used should be\ndetermined from the string):\n>>> int ('0o52', 0)\n42\n>>> int('1101', 2)\n13\n>>> int('0b1101', 2)\n13\n>>> int('0b1101', 0)\n13\nSee also\n- PEP 3127 - Integer Literal Support and Syntax\nPEP written by Patrick Maupin; backported to 2.6 by Eric Smith.\nPEP 3129: Class Decorators\u00b6\nDecorators have been extended from functions to classes. It\u2019s now legal to write:\n@foo\n@bar\nclass A:\npass\nThis is equivalent to:\nclass A:\npass\nA = foo(bar(A))\nSee also\n- PEP 3129 - Class Decorators\nPEP written by Collin Winter.\nPEP 3141: A Type Hierarchy for Numbers\u00b6\nPython 3.0 adds several abstract base classes for numeric types\ninspired by Scheme\u2019s numeric tower. These classes were backported to\n2.6 as the numbers\nmodule.\nThe most general ABC is Number\n. It defines no operations at\nall, and only exists to allow checking if an object is a number by\ndoing isinstance(obj, Number)\n.\nComplex\nis a subclass of Number\n. 
Complex numbers\ncan undergo the basic operations of addition, subtraction,\nmultiplication, division, and exponentiation, and you can retrieve the\nreal and imaginary parts and obtain a number\u2019s conjugate. Python\u2019s built-in\ncomplex type is an implementation of Complex\n.\nReal\nfurther derives from Complex\n, and adds\noperations that only work on real numbers: floor()\n, trunc()\n,\nrounding, taking the remainder mod N, floor division,\nand comparisons.\nRational\nnumbers derive from Real\n, have\nnumerator\nand denominator\nproperties, and can be\nconverted to floats. Python 2.6 adds a simple rational-number class,\nFraction\n, in the fractions\nmodule. (It\u2019s called\nFraction\ninstead of Rational\nto avoid\na name clash with numbers.Rational\n.)\nIntegral\nnumbers derive from Rational\n, and\ncan be shifted left and right with <<\nand >>\n,\ncombined using bitwise operations such as &\nand |\n,\nand can be used as array indexes and slice boundaries.\nIn Python 3.0, the PEP slightly redefines the existing builtins\nround()\n, math.floor()\n, math.ceil()\n, and adds a new\none, math.trunc()\n, that\u2019s been backported to Python 2.6.\nmath.trunc()\nrounds toward zero, returning the closest\nIntegral\nthat\u2019s between the function\u2019s argument and zero.\nSee also\n- PEP 3141 - A Type Hierarchy for Numbers\nPEP written by Jeffrey Yasskin.\nScheme\u2019s numerical tower, from the Guile manual.\nScheme\u2019s number datatypes from the R5RS Scheme specification.\nThe fractions\nModule\u00b6\nTo fill out the hierarchy of numeric types, the fractions\nmodule provides a rational-number class. 
Rational numbers store their values as a numerator and denominator forming a fraction, and can exactly represent numbers such as 2/3 that floating-point numbers can only approximate.

The Fraction constructor takes two Integral values that will be the numerator and denominator of the resulting fraction.

>>> from fractions import Fraction
>>> a = Fraction(2, 3)
>>> b = Fraction(2, 5)
>>> float(a), float(b)
(0.66666666666666663, 0.40000000000000002)
>>> a+b
Fraction(16, 15)
>>> a/b
Fraction(5, 3)

For converting floating-point numbers to rationals, the float type now has an as_integer_ratio() method that returns the numerator and denominator for a fraction that evaluates to the same floating-point value:

>>> (2.5) .as_integer_ratio()
(5, 2)
>>> (3.1415) .as_integer_ratio()
(7074029114692207L, 2251799813685248L)
>>> (1./3) .as_integer_ratio()
(6004799503160661L, 18014398509481984L)

Note that values that can only be approximated by floating-point numbers, such as 1./3, are not simplified to the number being approximated; the fraction attempts to match the floating-point value exactly.

The fractions module is based upon an implementation by Sjoerd Mullender that was in Python's Demo/classes/ directory for a long time. This implementation was significantly updated by Jeffrey Yasskin.

Other Language Changes¶

Some smaller changes made to the core Python language are:

- Directories and zip archives containing a __main__.py file can now be executed directly by passing their name to the interpreter. The directory or zip archive is automatically inserted as the first entry in sys.path. (Suggestion and initial patch by Andy Chu, subsequently revised by Phillip J. Eby and Nick Coghlan; bpo-1739468.)

- The hasattr() function was catching and ignoring all errors, under the assumption that they meant a __getattr__() method was failing somehow and the return value of hasattr() would therefore be False.
This logic shouldn't be applied to KeyboardInterrupt and SystemExit, however; Python 2.6 will no longer discard such exceptions when hasattr() encounters them. (Fixed by Benjamin Peterson; bpo-2196.)

- When calling a function using the ** syntax to provide keyword arguments, you are no longer required to use a Python dictionary; any mapping will now work:

>>> import UserDict
>>> def f(**kw):
...     print sorted(kw)
...
>>> ud = UserDict.UserDict()
>>> ud['a'] = 1
>>> ud['b'] = 'string'
>>> f(**ud)
['a', 'b']

(Contributed by Alexander Belopolsky; bpo-1686487.)

- It's also become legal to provide keyword arguments after a *args argument to a function call.

>>> def f(*args, **kw):
...     print args, kw
...
>>> f(1,2,3, *(4,5,6), keyword=13)
(1, 2, 3, 4, 5, 6) {'keyword': 13}

Previously this would have been a syntax error. (Contributed by Amaury Forgeot d'Arc; bpo-3473.)

- A new builtin, next(iterator, [default]) returns the next item from the specified iterator. If the default argument is supplied, it will be returned if iterator has been exhausted; otherwise, the StopIteration exception will be raised. (Backported in bpo-2719.)

- Tuples now have index() and count() methods matching the list type's index() and count() methods:

>>> t = (0,1,2,3,4,0,1,2)
>>> t.index(3)
3
>>> t.count(0)
2

(Contributed by Raymond Hettinger.)

- The built-in types now have improved support for extended slicing syntax, accepting various combinations of (start, stop, step). Previously, the support was partial and certain corner cases wouldn't work. (Implemented by Thomas Wouters.)

- Properties now have three attributes, getter, setter and deleter, that are decorators providing useful shortcuts for adding a getter, setter or deleter function to an existing property.
You would use them like this:

    class C(object):
        @property
        def x(self):
            return self._x

        @x.setter
        def x(self, value):
            self._x = value

        @x.deleter
        def x(self):
            del self._x

    class D(C):
        @C.x.getter
        def x(self):
            return self._x * 2

        @x.setter
        def x(self, value):
            self._x = value / 2

Several methods of the built-in set types now accept multiple iterables: intersection(), intersection_update(), union(), update(), difference() and difference_update().

    >>> s = set('1234567890')
    >>> s.intersection('abc123', 'cdf246')  # Intersection between all inputs
    set(['2'])
    >>> s.difference('246', '789')
    set(['1', '0', '3', '5'])

(Contributed by Raymond Hettinger.)

Many floating-point features were added. The float() function will now turn the string nan into an IEEE 754 Not A Number value, and +inf and -inf into positive or negative infinity. This works on any platform with IEEE 754 semantics. (Contributed by Christian Heimes; bpo-1635.)

Other functions in the math module, isinf() and isnan(), return true if their floating-point argument is infinite or Not A Number. (bpo-1640)

Conversion functions were added to convert floating-point numbers into hexadecimal strings (bpo-3008). These functions convert floats to and from a string representation without introducing rounding errors from the conversion between decimal and binary. Floats have a hex() method that returns a string representation, and the float.fromhex() method converts a string back into a number:

    >>> a = 3.75
    >>> a.hex()
    '0x1.e000000000000p+1'
    >>> float.fromhex('0x1.e000000000000p+1')
    3.75
    >>> b = 1./3
    >>> b.hex()
    '0x1.5555555555555p-2'

A numerical nicety: when creating a complex number from two floats on systems that support signed zeros (-0 and +0), the complex() constructor will now preserve the sign of the zero. (Fixed by Mark T. Dickinson; bpo-1507.)

Classes that inherit a __hash__() method from a parent class can set __hash__ = None to indicate that the class isn't hashable.
This will make hash(obj) raise a TypeError, and the class will not be indicated as implementing the Hashable ABC.

You should do this when you've defined a __cmp__() or __eq__() method that compares objects by their value rather than by identity. All objects have a default hash method that uses id(obj) as the hash value. There's no tidy way to remove the __hash__() method inherited from a parent class, so assigning None was implemented as an override. At the C level, extensions can set tp_hash to PyObject_HashNotImplemented(). (Fixed by Nick Coghlan and Amaury Forgeot d'Arc; bpo-2235.)

The GeneratorExit exception now subclasses BaseException instead of Exception. This means that an exception handler that does except Exception: will not inadvertently catch GeneratorExit. (Contributed by Chad Austin; bpo-1537.)

Generator objects now have a gi_code attribute that refers to the original code object backing the generator. (Contributed by Collin Winter; bpo-1473257.)

The compile() built-in function now accepts keyword arguments as well as positional parameters. (Contributed by Thomas Wouters; bpo-1444529.)

The complex() constructor now accepts strings containing parenthesized complex numbers, meaning that complex(repr(cplx)) will now round-trip values. For example, complex('(3+4j)') now returns the value (3+4j). (bpo-1491866)

The string translate() method now accepts None as the translation table parameter, which is treated as the identity transformation. This makes it easier to carry out operations that only delete characters. (Contributed by Bengt Richter and implemented by Raymond Hettinger; bpo-1193128.)

The built-in dir() function now checks for a __dir__() method on the objects it receives. This method must return a list of strings containing the names of valid attributes for the object, and lets the object control the value that dir() produces.
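A minimal sketch of the __dir__() hook (the class and attribute names here are hypothetical, and the same protocol still works in Python 3, whose print syntax is used):

```python
class Proxy(object):
    # Hypothetical dynamic attributes served via __getattr__()
    _extras = {'alpha': 1, 'beta': 2}

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails
        try:
            return self._extras[name]
        except KeyError:
            raise AttributeError(name)

    def __dir__(self):
        # dir() sorts and returns exactly this list of names
        return list(self._extras)

p = Proxy()
print(dir(p))   # ['alpha', 'beta']
print(p.alpha)  # 1
```

This lets introspection tools (and interactive tab completion built on dir()) see the pseudo-attributes the object will actually honor.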
Objects that have __getattr__() or __getattribute__() methods can use this to advertise pseudo-attributes they will honor. (bpo-1591665)

Instance method objects have new attributes for the object and function comprising the method; the new synonym for im_self is __self__, and im_func is also available as __func__. The old names are still supported in Python 2.6, but are gone in 3.0.

An obscure change: when you use the locals() function inside a class statement, the resulting dictionary no longer returns free variables. (Free variables, in this case, are variables referenced in the class statement that aren't attributes of the class.)

Optimizations

The warnings module has been rewritten in C. This makes it possible to invoke warnings from the parser, and may also make the interpreter's startup faster. (Contributed by Neal Norwitz and Brett Cannon; bpo-1631171.)

Type objects now have a cache of methods that can reduce the work required to find the correct method implementation for a particular class; once cached, the interpreter doesn't need to traverse base classes to figure out the right method to call. The cache is cleared if a base class or the class itself is modified, so the cache should remain correct even in the face of Python's dynamic nature. (Original optimization implemented by Armin Rigo, updated for Python 2.6 by Kevin Jacobs; bpo-1700288.)

By default, this change is only applied to types that are included with the Python core. Extension modules may not necessarily be compatible with this cache, so they must explicitly add Py_TPFLAGS_HAVE_VERSION_TAG to the module's tp_flags field to enable the method cache. (To be compatible with the method cache, the extension module's code must not directly access and modify the tp_dict member of any of the types it implements. Most modules don't do this, but it's impossible for the Python interpreter to determine that.
See bpo-1878 for some discussion.)

Function calls that use keyword arguments are significantly faster by doing a quick pointer comparison, usually saving the time of a full string comparison. (Contributed by Raymond Hettinger, after an initial implementation by Antoine Pitrou; bpo-1819.)

All of the functions in the struct module have been rewritten in C, thanks to work at the Need For Speed sprint. (Contributed by Raymond Hettinger.)

Some of the standard built-in types now set a bit in their type objects. This speeds up checking whether an object is a subclass of one of these types. (Contributed by Neal Norwitz.)

Unicode strings now use faster code for detecting whitespace and line breaks; this speeds up the split() method by about 25% and splitlines() by 35%. (Contributed by Antoine Pitrou.) Memory usage is reduced by using pymalloc for the Unicode string's data.

The with statement now stores the __exit__() method on the stack, producing a small speedup. (Implemented by Jeffrey Yasskin.)

To reduce memory usage, the garbage collector will now clear internal free lists when garbage-collecting the highest generation of objects. This may return memory to the operating system sooner.

Interpreter Changes

Two command-line options have been reserved for use by other Python implementations. The -J switch has been reserved for use by Jython for Jython-specific options, such as switches that are passed to the underlying JVM. -X has been reserved for options specific to a particular implementation of Python such as CPython, Jython, or IronPython. If either option is used with Python 2.6, the interpreter will report that the option isn't currently used.

Python can now be prevented from writing .pyc or .pyo files by supplying the -B switch to the Python interpreter, or by setting the PYTHONDONTWRITEBYTECODE environment variable before running the interpreter.
This setting is available to Python programs as the sys.dont_write_bytecode variable, and Python code can change the value to modify the interpreter's behaviour. (Contributed by Neal Norwitz and Georg Brandl.)

The encoding used for standard input, output, and standard error can be specified by setting the PYTHONIOENCODING environment variable before running the interpreter. The value should be a string in the form encodingname or encodingname:errorhandler. The encoding part specifies the encoding's name, e.g. utf-8 or latin-1; the optional errorhandler part specifies what to do with characters that can't be handled by the encoding, and should be one of "error", "ignore", or "replace". (Contributed by Martin von Löwis.)

New and Improved Modules

As in every release, Python's standard library received a number of enhancements and bug fixes. Here's a partial list of the most notable changes, sorted alphabetically by module name. Consult the Misc/NEWS file in the source tree for a more complete list of changes, or look through the Subversion logs for all the details.

The asyncore and asynchat modules are being actively maintained again, and a number of patches and bugfixes were applied. (Maintained by Josiah Carlson; see bpo-1736190 for one patch.)

The bsddb module also has a new maintainer, Jesús Cea Avión, and the package is now available as a standalone package. The web page for the package is www.jcea.es/programacion/pybsddb.htm. The plan is to remove the package from the standard library in Python 3.0, because its pace of releases is much more frequent than Python's.

The bsddb.dbshelve module now uses the highest pickling protocol available, instead of restricting itself to protocol 1. (Contributed by W. Barnes.)

The cgi module will now read variables from the query string of an HTTP POST request.
This makes it possible to use form actions with URLs that include query strings such as "/cgi-bin/add.py?category=1". (Contributed by Alexandre Fiori and Nubis; bpo-1817.)

The parse_qs() and parse_qsl() functions have been relocated from the cgi module to the urlparse module. The versions still available in the cgi module will trigger PendingDeprecationWarning messages in 2.6 (bpo-600362).

The cmath module underwent extensive revision, contributed by Mark Dickinson and Christian Heimes. Five new functions were added:

polar() converts a complex number to polar form, returning the modulus and argument of the complex number.

rect() does the opposite, turning a modulus, argument pair back into the corresponding complex number.

phase() returns the argument (also called the angle) of a complex number.

isnan() returns True if either the real or imaginary part of its argument is a NaN.

isinf() returns True if either the real or imaginary part of its argument is infinite.

The revisions also improved the numerical soundness of the cmath module. For all functions, the real and imaginary parts of the results are accurate to within a few units of least precision (ulps) whenever possible. See bpo-1381 for the details. The branch cuts for asinh(), atanh() and atan() have also been corrected.

The tests for the module have been greatly expanded; nearly 2000 new test cases exercise the algebraic functions.

On IEEE 754 platforms, the cmath module now handles IEEE 754 special values and floating-point exceptions in a manner consistent with Annex 'G' of the C99 standard.

A new data type in the collections module: namedtuple(typename, fieldnames) is a factory function that creates subclasses of the standard tuple whose fields are accessible by name as well as index. For example:

    >>> var_type = collections.namedtuple('variable',
    ...     'id name type size')
    >>> # Names are separated by spaces or commas.
    >>> # 'id, name, type, size' would also work.
    >>> var_type._fields
    ('id', 'name', 'type', 'size')
    >>> var = var_type(1, 'frequency', 'int', 4)
    >>> print var[0], var.id    # Equivalent
    1 1
    >>> print var[2], var.type  # Equivalent
    int int
    >>> var._asdict()
    {'size': 4, 'type': 'int', 'id': 1, 'name': 'frequency'}
    >>> v2 = var._replace(name='amplitude')
    >>> v2
    variable(id=1, name='amplitude', type='int', size=4)

Several places in the standard library that returned tuples have been modified to return namedtuple() instances. For example, the Decimal.as_tuple() method now returns a named tuple with sign, digits, and exponent fields.

(Contributed by Raymond Hettinger.)

Another change to the collections module is that the deque type now supports an optional maxlen parameter; if supplied, the deque's size will be restricted to no more than maxlen items. Adding more items to a full deque causes old items to be discarded.

    >>> from collections import deque
    >>> dq = deque(maxlen=3)
    >>> dq
    deque([], maxlen=3)
    >>> dq.append(1); dq.append(2); dq.append(3)
    >>> dq
    deque([1, 2, 3], maxlen=3)
    >>> dq.append(4)
    >>> dq
    deque([2, 3, 4], maxlen=3)

(Contributed by Raymond Hettinger.)

The Cookie module's Morsel objects now support an httponly attribute. In some browsers, cookies with this attribute set cannot be accessed or manipulated by JavaScript code. (Contributed by Arvin Schnell; bpo-1638033.)

A new window method in the curses module, chgat(), changes the display attributes for a certain number of characters on a single line. (Contributed by Fabian Kreutz.)

    # Boldface text starting at y=0,x=21
    # and affecting the rest of the line.
    stdscr.chgat(0, 21, curses.A_BOLD)

The Textbox class in the curses.textpad module now supports editing in insert mode as well as overwrite mode.
Insert mode is enabled by supplying a true value for the insert_mode parameter when creating the Textbox instance.

The datetime module's strftime() methods now support a %f format code that expands to the number of microseconds in the object, zero-padded on the left to six places. (Contributed by Skip Montanaro; bpo-1158.)

The decimal module was updated to version 1.66 of the General Decimal Specification. New features include methods for some basic mathematical functions such as exp() and log10():

    >>> Decimal(1).exp()
    Decimal("2.718281828459045235360287471")
    >>> Decimal("2.7182818").ln()
    Decimal("0.9999999895305022877376682436")
    >>> Decimal(1000).log10()
    Decimal("3")

The as_tuple() method of Decimal objects now returns a named tuple with sign, digits, and exponent fields.

(Implemented by Facundo Batista and Mark Dickinson. Named tuple support added by Raymond Hettinger.)

The difflib module's SequenceMatcher class now returns named tuples representing matches, with a, b, and size attributes. (Contributed by Raymond Hettinger.)

An optional timeout parameter, specifying a timeout measured in seconds, was added to the ftplib.FTP class constructor as well as the connect() method. (Added by Facundo Batista.) Also, the FTP class's storbinary() and storlines() now take an optional callback parameter that will be called with each block of data after the data has been sent. (Contributed by Phil Schwartz; bpo-1221598.)

The reduce() built-in function is also available in the functools module. In Python 3.0, the builtin has been dropped and reduce() is only available from functools; currently there are no plans to drop the builtin in the 2.x series. (Patched by Christian Heimes; bpo-1739906.)

When possible, the getpass module will now use /dev/tty to print a prompt message and read the password, falling back to standard error and standard input.
If the password may be echoed to the terminal, a warning is printed before the prompt is displayed. (Contributed by Gregory P. Smith.)

The glob.glob() function can now return Unicode filenames if a Unicode path was used and Unicode filenames are matched within the directory. (bpo-1001604)

A new function in the heapq module, merge(iter1, iter2, ...), takes any number of iterables returning data in sorted order, and returns a new generator that returns the contents of all the iterators, also in sorted order. For example:

    >>> list(heapq.merge([1, 3, 5, 9], [2, 8, 16]))
    [1, 2, 3, 5, 8, 9, 16]

Another new function, heappushpop(heap, item), pushes item onto heap, then pops off and returns the smallest item. This is more efficient than making a call to heappush() and then heappop().

heapq is now implemented to only use less-than comparison, instead of the less-than-or-equal comparison it previously used. This makes heapq's usage of a type match the list.sort() method. (Contributed by Raymond Hettinger.)

An optional timeout parameter, specifying a timeout measured in seconds, was added to the httplib.HTTPConnection and HTTPSConnection class constructors. (Added by Facundo Batista.)

Most of the inspect module's functions, such as getmoduleinfo() and getargs(), now return named tuples. In addition to behaving like tuples, the elements of the return value can also be accessed as attributes. (Contributed by Raymond Hettinger.)

Some new functions in the module include isgenerator(), isgeneratorfunction(), and isabstract().

The itertools module gained several new functions.

izip_longest(iter1, iter2, ...[, fillvalue]) makes tuples from each of the elements; if some of the iterables are shorter than others, the missing values are set to fillvalue.
For example:

    >>> tuple(itertools.izip_longest([1,2,3], [1,2,3,4,5]))
    ((1, 1), (2, 2), (3, 3), (None, 4), (None, 5))

product(iter1, iter2, ..., [repeat=N]) returns the Cartesian product of the supplied iterables, a set of tuples containing every possible combination of the elements returned from each iterable.

    >>> list(itertools.product([1,2,3], [4,5,6]))
    [(1, 4), (1, 5), (1, 6),
     (2, 4), (2, 5), (2, 6),
     (3, 4), (3, 5), (3, 6)]

The optional repeat keyword argument is used for taking the product of an iterable or a set of iterables with themselves, repeated N times. With a single iterable argument, N-tuples are returned:

    >>> list(itertools.product([1,2], repeat=3))
    [(1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2),
     (2, 1, 1), (2, 1, 2), (2, 2, 1), (2, 2, 2)]

With two iterables, tuples of length 2*N are returned.

    >>> list(itertools.product([1,2], [3,4], repeat=2))
    [(1, 3, 1, 3), (1, 3, 1, 4), (1, 3, 2, 3), (1, 3, 2, 4),
     (1, 4, 1, 3), (1, 4, 1, 4), (1, 4, 2, 3), (1, 4, 2, 4),
     (2, 3, 1, 3), (2, 3, 1, 4), (2, 3, 2, 3), (2, 3, 2, 4),
     (2, 4, 1, 3), (2, 4, 1, 4), (2, 4, 2, 3), (2, 4, 2, 4)]

combinations(iterable, r) returns sub-sequences of length r from the elements of iterable.

    >>> list(itertools.combinations('123', 2))
    [('1', '2'), ('1', '3'), ('2', '3')]
    >>> list(itertools.combinations('123', 3))
    [('1', '2', '3')]
    >>> list(itertools.combinations('1234', 3))
    [('1', '2', '3'), ('1', '2', '4'),
     ('1', '3', '4'), ('2', '3', '4')]

permutations(iter[, r]) returns all the permutations of length r of the iterable's elements.
If r is not specified, it will default to the number of elements produced by the iterable.

    >>> list(itertools.permutations([1,2,3,4], 2))
    [(1, 2), (1, 3), (1, 4),
     (2, 1), (2, 3), (2, 4),
     (3, 1), (3, 2), (3, 4),
     (4, 1), (4, 2), (4, 3)]

itertools.chain(*iterables) is an existing function in itertools that gained a new constructor in Python 2.6. itertools.chain.from_iterable(iterable) takes a single iterable that should return other iterables. chain() will then return all the elements of the first iterable, then all the elements of the second, and so on.

    >>> list(itertools.chain.from_iterable([[1,2,3], [4,5,6]]))
    [1, 2, 3, 4, 5, 6]

(All contributed by Raymond Hettinger.)

The logging module's FileHandler class and its subclasses WatchedFileHandler, RotatingFileHandler, and TimedRotatingFileHandler now have an optional delay parameter to their constructors. If delay is true, opening of the log file is deferred until the first emit() call is made. (Contributed by Vinay Sajip.)

TimedRotatingFileHandler also has a utc constructor parameter. If the argument is true, UTC time will be used in determining when midnight occurs and in generating filenames; otherwise local time will be used.

Several new functions were added to the math module:

isinf() and isnan() determine whether a given float is a (positive or negative) infinity or a NaN (Not a Number), respectively.

copysign() copies the sign bit of an IEEE 754 number, returning the absolute value of x combined with the sign bit of y. For example, math.copysign(1, -0.0) returns -1.0. (Contributed by Christian Heimes.)

factorial() computes the factorial of a number. (Contributed by Raymond Hettinger; bpo-2138.)

fsum() adds up the stream of numbers from an iterable, and is careful to avoid loss of precision through using partial sums.
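The precision benefit of fsum() is easy to demonstrate (shown here with Python 3 print syntax; the function is unchanged in Python 3): summing 0.1 ten times with the builtin sum() accumulates rounding error, while fsum() returns the correctly rounded result.

```python
import math

values = [0.1] * 10
print(sum(values))        # 0.9999999999999999 -- naive left-to-right addition
print(math.fsum(values))  # 1.0 -- partial sums track the lost low-order bits
```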
(Contributed by Jean Brouwers, Raymond Hettinger, and Mark Dickinson; bpo-2819.)

acosh(), asinh() and atanh() compute the inverse hyperbolic functions.

log1p() returns the natural logarithm of 1+x (base e).

trunc() rounds a number toward zero, returning the closest Integral that's between the function's argument and zero. Added as part of the backport of PEP 3141's type hierarchy for numbers.

The math module has been improved to give more consistent behaviour across platforms, especially with respect to handling of floating-point exceptions and IEEE 754 special values.

Whenever possible, the module follows the recommendations of the C99 standard about 754's special values. For example, sqrt(-1.) should now give a ValueError across almost all platforms, while sqrt(float('NaN')) should return a NaN on all IEEE 754 platforms. Where Annex 'F' of the C99 standard recommends signaling 'divide-by-zero' or 'invalid', Python will raise ValueError. Where Annex 'F' of the C99 standard recommends signaling 'overflow', Python will raise OverflowError. (See bpo-711019 and bpo-1640.)

(Contributed by Christian Heimes and Mark Dickinson.)

mmap objects now have a rfind() method that searches for a substring beginning at the end of the string and searching backwards. The find() method also gained an end parameter giving an index at which to stop searching. (Contributed by John Lenton.)

The operator module gained a methodcaller() function that takes a name and an optional set of arguments, returning a callable that will call the named function on any arguments passed to it.
For example:

    >>> # Equivalent to lambda s: s.replace('old', 'new')
    >>> replacer = operator.methodcaller('replace', 'old', 'new')
    >>> replacer('old wine in old bottles')
    'new wine in new bottles'

(Contributed by Georg Brandl, after a suggestion by Gregory Petrosyan.)

The attrgetter() function now accepts dotted names and performs the corresponding attribute lookups:

    >>> inst_name = operator.attrgetter(
    ...     '__class__.__name__')
    >>> inst_name('')
    'str'
    >>> inst_name(help)
    '_Helper'

(Contributed by Georg Brandl, after a suggestion by Barry Warsaw.)

The os module now wraps several new system calls. fchmod(fd, mode) and fchown(fd, uid, gid) change the mode and ownership of an opened file, and lchmod(path, mode) changes the mode of a symlink. (Contributed by Georg Brandl and Christian Heimes.)

chflags() and lchflags() are wrappers for the corresponding system calls (where they're available), changing the flags set on a file. Constants for the flag values are defined in the stat module; some possible values include UF_IMMUTABLE to signal the file may not be changed and UF_APPEND to indicate that data can only be appended to the file. (Contributed by M. Levinson.)

os.closerange(low, high) efficiently closes all file descriptors from low to high, ignoring any errors and not including high itself. This function is now used by the subprocess module to make starting processes faster. (Contributed by Georg Brandl; bpo-1663329.)

The os.environ object's clear() method will now unset the environment variables using os.unsetenv() in addition to clearing the object's keys. (Contributed by Martin Horcicka; bpo-1181.)

The os.walk() function now has a followlinks parameter. If set to True, it will follow symlinks pointing to directories and visit the directory's contents. For backward compatibility, the parameter's default value is false.
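The effect of followlinks can be sketched as follows (a POSIX-only example, since it creates a symlink in a scratch directory; the directory names are arbitrary, and the parameter works the same way in Python 3):

```python
import os
import tempfile

# Lay out root/real/ plus a symlink root/link -> root/real
root = tempfile.mkdtemp()
real = os.path.join(root, 'real')
os.mkdir(real)
os.symlink(real, os.path.join(root, 'link'))

def count_dirs(follow):
    # Count the directories os.walk() actually enters
    return sum(1 for _ in os.walk(root, followlinks=follow))

print(count_dirs(False))  # 2: root and real; the symlink is listed but not entered
print(count_dirs(True))   # 3: the walk also descends into the symlinked directory
```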
Note that the function can fall into an infinite recursion if there's a symlink that points to a parent directory. (bpo-1273829)

In the os.path module, the splitext() function has been changed to not split on leading period characters. This produces better results when operating on Unix's dot-files. For example, os.path.splitext('.ipython') now returns ('.ipython', '') instead of ('', '.ipython'). (bpo-1115886)

A new function, os.path.relpath(path, start='.'), returns a relative path from the start path, if it's supplied, or from the current working directory to the destination path. (Contributed by Richard Barran; bpo-1339796.)

On Windows, os.path.expandvars() will now expand environment variables given in the form "%var%", and "~user" will be expanded into the user's home directory path. (Contributed by Josiah Carlson; bpo-957650.)

The Python debugger provided by the pdb module gained a new command: "run" restarts the Python program being debugged and can optionally take new command-line arguments for the program. (Contributed by Rocky Bernstein; bpo-1393667.)

The pdb.post_mortem() function, used to begin debugging a traceback, will now use the traceback returned by sys.exc_info() if no traceback is supplied. (Contributed by Facundo Batista; bpo-1106316.)

The pickletools module now has an optimize() function that takes a string containing a pickle and removes some unused opcodes, returning a shorter pickle that contains the same data structure. (Contributed by Raymond Hettinger.)

A get_data() function was added to the pkgutil module that returns the contents of resource files included with an installed Python package.
For example:

    >>> import pkgutil
    >>> print pkgutil.get_data('test', 'exception_hierarchy.txt')
    BaseException
     +-- SystemExit
     +-- KeyboardInterrupt
     +-- GeneratorExit
     +-- Exception
          +-- StopIteration
          +-- StandardError
    ...

(Contributed by Paul Moore; bpo-2439.)

The pyexpat module's Parser objects now allow setting their buffer_size attribute to change the size of the buffer used to hold character data. (Contributed by Achim Gaedke; bpo-1137.)

The Queue module now provides queue variants that retrieve entries in different orders. The PriorityQueue class stores queued items in a heap and retrieves them in priority order, and LifoQueue retrieves the most recently added entries first, meaning that it behaves like a stack. (Contributed by Raymond Hettinger.)

The random module's Random objects can now be pickled on a 32-bit system and unpickled on a 64-bit system, and vice versa. Unfortunately, this change also means that Python 2.6's Random objects can't be unpickled correctly on earlier versions of Python. (Contributed by Shawn Ligocki; bpo-1727780.)

The new triangular(low, high, mode) function returns random numbers following a triangular distribution. The returned values are between low and high, not including high itself, and with mode as the most frequently occurring value in the distribution. (Contributed by Wladmir van der Laan and Raymond Hettinger; bpo-1681432.)

Long regular expression searches carried out by the re module will check for signals being delivered, so time-consuming searches can now be interrupted. (Contributed by Josh Hoyt and Ralf Schmitt; bpo-846388.)

The regular expression module is implemented by compiling bytecodes for a tiny regex-specific virtual machine. Untrusted code could create malicious strings of bytecode directly and cause crashes, so Python 2.6 includes a verifier for the regex bytecode.
(Contributed by Guido van Rossum from work for Google App Engine; bpo-3487.)

The rlcompleter module's Completer.complete() method will now ignore exceptions triggered while evaluating a name. (Fixed by Lorenz Quack; bpo-2250.)

The sched module's scheduler instances now have a read-only queue attribute that returns the contents of the scheduler's queue, represented as a list of named tuples with the fields (time, priority, action, argument). (Contributed by Raymond Hettinger; bpo-1861.)

The select module now has wrapper functions for the Linux epoll() and BSD kqueue() system calls. A modify() method was added to the existing poll objects; pollobj.modify(fd, eventmask) takes a file descriptor or file object and an event mask, modifying the recorded event mask for that file. (Contributed by Christian Heimes; bpo-1657.)

The shutil.copytree() function now has an optional ignore argument that takes a callable object. This callable will receive each directory path and a list of the directory's contents, and returns a list of names that will be ignored, not copied.

The shutil module also provides an ignore_patterns() function for use with this new parameter. ignore_patterns() takes an arbitrary number of glob-style patterns and returns a callable that will ignore any files and directories that match any of these patterns. The following example copies a directory tree, but skips both .svn directories and Emacs backup files, which have names ending with '~':

    shutil.copytree('Doc/library', '/tmp/library',
                    ignore=shutil.ignore_patterns('*~', '.svn'))

(Contributed by Tarek Ziadé; bpo-2663.)

Integrating signal handling with GUI handling event loops like those used by Tkinter or GTk+ has long been a problem; most software ends up polling, waking up every fraction of a second to check if any GUI events have occurred. The signal module can now make this more efficient.
Calling signal.set_wakeup_fd(fd) sets a file descriptor to be used; when a signal is received, a byte is written to that file descriptor. There's also a C-level function, PySignal_SetWakeupFd(), for setting the descriptor.

Event loops will use this by opening a pipe to create two descriptors, one for reading and one for writing. The writable descriptor will be passed to set_wakeup_fd(), and the readable descriptor will be added to the list of descriptors monitored by the event loop via select() or poll(). On receiving a signal, a byte will be written and the main event loop will be woken up, avoiding the need to poll.

(Contributed by Adam Olsen; bpo-1583.)

The siginterrupt() function is now available from Python code, and allows changing whether signals can interrupt system calls or not. (Contributed by Ralf Schmitt.)

The setitimer() and getitimer() functions have also been added (where they're available). setitimer() allows setting interval timers that will cause a signal to be delivered to the process after a specified time, measured in wall-clock time, consumed process time, or combined process+system time. (Contributed by Guilherme Polo; bpo-2240.)

The smtplib module now supports SMTP over SSL thanks to the addition of the SMTP_SSL class. This class supports an interface identical to the existing SMTP class. (Contributed by Monty Taylor.) Both class constructors also have an optional timeout parameter that specifies a timeout for the initial connection attempt, measured in seconds. (Contributed by Facundo Batista.)

An implementation of the LMTP protocol (RFC 2033) was also added to the module. LMTP is used in place of SMTP when transferring e-mail between agents that don't manage a mail queue. (LMTP implemented by Leif Hedstrom; bpo-957003.)

SMTP.starttls() now complies with RFC 3207 and forgets any knowledge obtained from the server not obtained from the TLS negotiation itself.
(Patch contributed by Bill Fenner; bpo-829951.)

The socket module now supports TIPC (https://tipc.sourceforge.net/), a high-performance non-IP-based protocol designed for use in clustered environments. TIPC addresses are 4- or 5-tuples. (Contributed by Alberto Bertogli; bpo-1646.)

A new function, create_connection(), takes an address and connects to it using an optional timeout value, returning the connected socket object. This function also looks up the address's type and connects to it using IPv4 or IPv6 as appropriate. Changing your code to use create_connection() instead of socket(socket.AF_INET, ...) may be all that's required to make your code work with IPv6.

The base classes in the SocketServer module now support calling a handle_timeout() method after a span of inactivity specified by the server's timeout attribute. (Contributed by Michael Pomraning.) The serve_forever() method now takes an optional poll interval measured in seconds, controlling how often the server will check for a shutdown request. (Contributed by Pedro Werneck and Jeffrey Yasskin; bpo-742598, bpo-1193577.)

The sqlite3 module, maintained by Gerhard Häring, has been updated from version 2.3.2 in Python 2.5 to version 2.4.1.

The struct module now supports the C99 _Bool type, using the format character '?'. (Contributed by David Remahl.)

The Popen objects provided by the subprocess module now have terminate(), kill(), and send_signal() methods. On Windows, send_signal() only supports the SIGTERM signal, and all these methods are aliases for the Win32 API function TerminateProcess(). (Contributed by Christian Heimes.)

A new variable in the sys module, float_info, is an object containing information derived from the float.h file about the platform's floating-point support. Attributes of this object include mant_dig (number of digits in the mantissa), epsilon (smallest difference between 1.0 and the next largest value representable), and several others.
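For instance (shown with Python 3 print syntax; sys.float_info survives unchanged in Python 3, and the exact values depend on the platform's C double, though IEEE 754 doubles are near-universal):

```python
import sys

print(sys.float_info.mant_dig)  # 53 on IEEE 754 doubles
print(sys.float_info.epsilon)   # 2.220446049250313e-16 on IEEE 754 doubles

# epsilon is the gap between 1.0 and the next representable float,
# so adding it to 1.0 produces a strictly larger value
assert 1.0 + sys.float_info.epsilon > 1.0
```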
(Contributed by Christian Heimes; bpo-1534.)

Another new variable, dont_write_bytecode, controls whether Python writes any .pyc or .pyo files on importing a module. If this variable is true, the compiled files are not written. The variable is initially set on start-up by supplying the -B switch to the Python interpreter, or by setting the PYTHONDONTWRITEBYTECODE environment variable before running the interpreter. Python code can subsequently change the value of this variable to control whether bytecode files are written or not. (Contributed by Neal Norwitz and Georg Brandl.)

Information about the command-line arguments supplied to the Python interpreter is available by reading attributes of a named tuple available as sys.flags. For example, the verbose attribute is true if Python was executed in verbose mode, debug is true in debugging mode, etc. These attributes are all read-only. (Contributed by Christian Heimes.)

A new function, getsizeof(), takes a Python object and returns the amount of memory used by the object, measured in bytes. Built-in objects return correct results; third-party extensions may not, but can define a __sizeof__() method to return the object's size. (Contributed by Robert Schuppenies; bpo-2898.)

It's now possible to determine the current profiler and tracer functions by calling sys.getprofile() and sys.gettrace(). (Contributed by Georg Brandl; bpo-1648.)

The tarfile module now supports POSIX.1-2001 (pax) tarfiles in addition to the POSIX.1-1988 (ustar) and GNU tar formats that were already supported.
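The sys.flags and sys.getsizeof() additions above are easy to poke at interactively; a small sketch:

```python
import sys

# sys.flags is a read-only named tuple of interpreter options.
verbose_mode = sys.flags.verbose      # 0 unless -v was given

# sys.getsizeof() reports an object's memory footprint in bytes; results
# for built-in types are exact, third-party types may need __sizeof__().
empty = sys.getsizeof([])
bigger = sys.getsizeof([None] * 100)  # a longer list occupies more memory
```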
The default format is GNU tar; specify the format parameter to open a file using a different format:

tar = tarfile.open("output.tar", "w", format=tarfile.PAX_FORMAT)

The new encoding and errors parameters specify an encoding and an error handling scheme for character conversions. 'strict', 'ignore', and 'replace' are the three standard ways Python can handle errors; 'utf-8' is a special value that replaces bad characters with their UTF-8 representation. (Character conversions occur because the PAX format supports Unicode filenames, defaulting to UTF-8 encoding.)

The TarFile.add() method now accepts an exclude argument that's a function that can be used to exclude certain filenames from an archive. The function must take a filename and return true if the file should be excluded or false if it should be archived. The function is applied to both the name initially passed to add() and to the names of files in recursively added directories. (All changes contributed by Lars Gustäbel.)

An optional timeout parameter was added to the telnetlib.Telnet class constructor, specifying a timeout measured in seconds. (Added by Facundo Batista.)

The tempfile.NamedTemporaryFile class usually deletes the temporary file it created when the file is closed. This behaviour can now be changed by passing delete=False to the constructor. (Contributed by Damien Miller; bpo-1537850.)

A new class, SpooledTemporaryFile, behaves like a temporary file but stores its data in memory until a maximum size is exceeded. On reaching that limit, the contents will be written to an on-disk temporary file. (Contributed by Dustin J. Mitchell.)

The NamedTemporaryFile and SpooledTemporaryFile classes both work as context managers, so you can write with tempfile.NamedTemporaryFile() as tmp: ....
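The delete=False behaviour combines naturally with the context-manager support; a minimal sketch:

```python
import os
import tempfile

# With delete=False the file survives close(), so it can be reopened
# (or handed to another process) and removed explicitly later.
with tempfile.NamedTemporaryFile(mode='w', delete=False) as tmp:
    tmp.write('persisted')
    path = tmp.name

still_there = os.path.exists(path)   # True: close() did not delete it
with open(path) as f:
    content = f.read()
os.unlink(path)                      # clean up by hand
```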
(Contributed by Alexander Belopolsky; bpo-2021.)

The test.test_support module gained a number of context managers useful for writing tests. EnvironmentVarGuard() is a context manager that temporarily changes environment variables and automatically restores them to their old values.

Another context manager, TransientResource, can surround calls to resources that may or may not be available; it will catch and ignore a specified list of exceptions. For example, a network test may ignore certain failures when connecting to an external web site:

with test_support.TransientResource(IOError, errno=errno.ETIMEDOUT):
    f = urllib.urlopen('https://sf.net')
    ...

Finally, check_warnings() resets the warnings module's warning filters and returns an object that will record all warning messages triggered (bpo-3781):

with test_support.check_warnings() as wrec:
    warnings.simplefilter("always")
    # ... code that triggers a warning ...
    assert str(wrec.message) == "function is outdated"
    assert len(wrec.warnings) == 1, "Multiple warnings raised"

(Contributed by Brett Cannon.)

The textwrap module can now preserve existing whitespace at the beginnings and ends of the newly created lines by specifying drop_whitespace=False as an argument:

>>> S = """This sentence has a bunch of
...   extra whitespace."""
>>> print textwrap.fill(S, width=15)
This sentence
has a bunch of
extra
whitespace.
>>> print textwrap.fill(S, drop_whitespace=False, width=15)
This sentence
has a bunch of
   extra
whitespace.
>>>

(Contributed by Dwayne Bailey; bpo-1581073.)

The threading module API is being changed to use properties such as daemon instead of setDaemon() and isDaemon() methods, and some methods have been renamed to use underscores instead of camel-case; for example, the activeCount() method is renamed to active_count(). Both the 2.6 and 3.0 versions of the module support the same properties and renamed methods, but don't remove the old methods.
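The renamed threading API looks like this (a minimal sketch; the old camel-case names still work in 2.6):

```python
import threading

t = threading.Thread(target=lambda: None)
t.daemon = False          # property assignment replaces setDaemon()
t.start()
t.join()

count = threading.active_count()   # renamed from activeCount()
is_daemon = t.daemon               # property replaces isDaemon()
```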
No date has been set for the deprecation of the old APIs in Python 3.x; the old APIs won't be removed in any 2.x version. (Carried out by several people, most notably Benjamin Peterson.)

The threading module's Thread objects gained an ident property that returns the thread's identifier, a nonzero integer. (Contributed by Gregory P. Smith; bpo-2871.)

The timeit module now accepts callables as well as strings for the statement being timed and for the setup code. Two convenience functions were added for creating Timer instances: repeat(stmt, setup, time, repeat, number) and timeit(stmt, setup, time, number) create an instance and call the corresponding method. (Contributed by Erik Demaine; bpo-1533909.)

The Tkinter module now accepts lists and tuples for options, separating the elements by spaces before passing the resulting value to Tcl/Tk. (Contributed by Guilherme Polo; bpo-2906.)

The turtle module for turtle graphics was greatly enhanced by Gregor Lingl. New features in the module include:

Better animation of turtle movement and rotation.

Control over turtle movement using the new delay(), tracer(), and speed() methods.

The ability to set new shapes for the turtle, and to define a new coordinate system.

Turtles now have an undo() method that can roll back actions.

Simple support for reacting to input events such as mouse and keyboard activity, making it possible to write simple games.

A turtle.cfg file can be used to customize the starting appearance of the turtle's screen.

The module's docstrings can be replaced by new docstrings that have been translated into another language.

An optional timeout parameter was added to the urllib.urlopen function and the urllib.ftpwrapper class constructor, as well as the urllib2.urlopen function. The parameter specifies a timeout measured in seconds. For example:

>>> u = urllib2.urlopen("http://slow.example.com", timeout=3)
Traceback (most recent call last):
...
urllib2.URLError:
>>>

(Added by Facundo Batista.)

The Unicode database provided by the unicodedata module has been updated to version 5.1.0. (Updated by Martin von Löwis; bpo-3811.)

The warnings module's formatwarning() and showwarning() gained an optional line argument that can be used to supply the line of source code. (Added as part of bpo-1631171, which re-implemented part of the warnings module in C code.)

A new function, catch_warnings(), is a context manager intended for testing purposes that lets you temporarily modify the warning filters and then restore their original values (bpo-3781).

The XML-RPC SimpleXMLRPCServer and DocXMLRPCServer classes can now be prevented from immediately opening and binding to their socket by passing False as the bind_and_activate constructor parameter. This can be used to modify the instance's allow_reuse_address attribute before calling the server_bind() and server_activate() methods to open the socket and begin listening for connections. (Contributed by Peter Parente; bpo-1599845.)

SimpleXMLRPCServer also has a _send_traceback_header attribute; if true, the exception and formatted traceback are returned as HTTP headers "X-Exception" and "X-Traceback". This feature is for debugging purposes only and should not be used on production servers because the tracebacks might reveal passwords or other sensitive information. (Contributed by Alan McIntyre as part of his project for Google's Summer of Code 2007.)

The xmlrpclib module no longer automatically converts datetime.date and datetime.time to the xmlrpclib.DateTime type; the conversion semantics were not necessarily correct for all applications. Code using xmlrpclib should convert date and time instances.
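The catch_warnings() context manager mentioned above also accepts a record=True argument that collects the triggered warnings, which is how test helpers such as check_warnings() are built; a sketch:

```python
import warnings

# record=True returns a list that accumulates the warnings raised while
# the filters are temporarily replaced; everything is restored on exit.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warnings.warn("function is outdated", DeprecationWarning)

message = str(caught[0].message)
category = caught[0].category
```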
(bpo-1330538) The code can also handle dates before 1900 (contributed by Ralf Schmitt; bpo-2014) and 64-bit integers represented by using <i8> in XML-RPC responses (contributed by Riku Lindblad; bpo-2985).

The zipfile module's ZipFile class now has extract() and extractall() methods that will unpack a single file or all the files in the archive to the current directory, or to a specified directory:

z = zipfile.ZipFile('python-251.zip')

# Unpack a single file, writing it relative
# to the /tmp directory.
z.extract('Python/sysmodule.c', '/tmp')

# Unpack all the files in the archive.
z.extractall()

(Contributed by Alan McIntyre; bpo-467924.)

The open(), read() and extract() methods can now take either a filename or a ZipInfo object. This is useful when an archive accidentally contains a duplicated filename. (Contributed by Graham Horler; bpo-1775025.)

Finally, zipfile now supports using Unicode filenames for archived files. (Contributed by Alexey Borzenkov; bpo-1734346.)

The ast module

The ast module provides an Abstract Syntax Tree representation of Python code, and Armin Ronacher contributed a set of helper functions that perform a variety of common tasks.
These will be useful for HTML templating packages, code analyzers, and similar tools that process Python code.

The parse() function takes an expression and returns an AST. The dump() function outputs a representation of a tree, suitable for debugging:

import ast

t = ast.parse("""
d = {}
for i in 'abcdefghijklm':
    d[i + i] = ord(i) - ord('a') + 1
print d
""")
print ast.dump(t)

This outputs a deeply nested tree:

Module(body=[
  Assign(targets=[
    Name(id='d', ctx=Store())
  ], value=Dict(keys=[], values=[])),
  For(target=Name(id='i', ctx=Store()),
      iter=Str(s='abcdefghijklm'), body=[
    Assign(targets=[
      Subscript(value=
        Name(id='d', ctx=Load()),
        slice=
        Index(value=
          BinOp(left=Name(id='i', ctx=Load()), op=Add(),
                right=Name(id='i', ctx=Load()))), ctx=Store())
    ], value=
      BinOp(left=
        BinOp(left=
          Call(func=
            Name(id='ord', ctx=Load()), args=[
            Name(id='i', ctx=Load())
          ], keywords=[], starargs=None, kwargs=None),
          op=Sub(), right=Call(func=
            Name(id='ord', ctx=Load()), args=[
            Str(s='a')
          ], keywords=[], starargs=None, kwargs=None)),
        op=Add(), right=Num(n=1)))
  ], orelse=[]),
  Print(dest=None, values=[
    Name(id='d', ctx=Load())
  ], nl=True)
])

The literal_eval() method takes a string or an AST representing a literal expression, parses and evaluates it, and returns the resulting value. A literal expression is a Python expression containing only strings, numbers, dictionaries, etc. but no statements or function calls.
If you need to evaluate an expression but cannot accept the security risk of using an eval() call, literal_eval() will handle it safely:

>>> literal = '("a", "b", {2:4, 3:8, 1:2})'
>>> print ast.literal_eval(literal)
('a', 'b', {1: 2, 2: 4, 3: 8})
>>> print ast.literal_eval('"a" + "b"')
Traceback (most recent call last):
...
ValueError: malformed string

The module also includes NodeVisitor and NodeTransformer classes for traversing and modifying an AST, and functions for common transformations such as changing line numbers.

The future_builtins module

Python 3.0 makes many changes to the repertoire of built-in functions, and most of the changes can't be introduced in the Python 2.x series because they would break compatibility. The future_builtins module provides versions of these built-in functions that can be imported when writing 3.0-compatible code.

The functions in this module currently include:

ascii(obj): equivalent to repr(). In Python 3.0, repr() will return a Unicode string, while ascii() will return a pure ASCII bytestring.

filter(predicate, iterable), map(func, iterable1, ...): the 3.0 versions return iterators, unlike the 2.x builtins which return lists.

hex(value), oct(value): instead of calling the __hex__() or __oct__() methods, these versions will call the __index__() method and convert the result to hexadecimal or octal. oct() will use the new 0o notation for its result.

The json module: JavaScript Object Notation

The new json module supports the encoding and decoding of Python types in JSON (JavaScript Object Notation). JSON is a lightweight interchange format often used in web applications. For more information about JSON, see http://www.json.org.

json comes with support for decoding and encoding most built-in Python types.
The following example encodes and decodes a dictionary:

>>> import json
>>> data = {"spam": "foo", "parrot": 42}
>>> in_json = json.dumps(data)  # Encode the data
>>> in_json
'{"parrot": 42, "spam": "foo"}'
>>> json.loads(in_json)  # Decode into a Python object
{"spam": "foo", "parrot": 42}

It's also possible to write your own decoders and encoders to support more types. Pretty-printing of the JSON strings is also supported.

json (originally called simplejson) was written by Bob Ippolito.

The plistlib module: A Property-List Parser

The .plist format is commonly used on Mac OS X to store basic data types (numbers, strings, lists, and dictionaries) by serializing them into an XML-based format. It resembles the XML-RPC serialization of data types.

Despite being primarily used on Mac OS X, the format has nothing Mac-specific about it and the Python implementation works on any platform that Python supports, so the plistlib module has been promoted to the standard library.

Using the module is simple:

import sys
import plistlib
import datetime

# Create data structure
data_struct = dict(lastAccessed=datetime.datetime.now(),
                   version=1,
                   categories=('Personal', 'Shared', 'Private'))

# Create string containing XML.
plist_str = plistlib.writePlistToString(data_struct)
new_struct = plistlib.readPlistFromString(plist_str)
print data_struct
print new_struct

# Write data structure to a file and read it back.
plistlib.writePlist(data_struct, '/tmp/customizations.plist')
new_struct = plistlib.readPlist('/tmp/customizations.plist')

# read/writePlist accepts file-like objects as well as paths.
plistlib.writePlist(data_struct, sys.stdout)

ctypes Enhancements

Thomas Heller continued to maintain and enhance the ctypes module.

ctypes now supports a c_bool datatype that represents the C99 bool type.
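A quick sketch of the new c_bool type, together with the from_buffer()/from_buffer_copy() sharing-versus-copying distinction covered below:

```python
import ctypes

# c_bool round-trips Python booleans through the C99 _Bool type.
flag = ctypes.c_bool(True)

# from_buffer() aliases a writable buffer's memory, while
# from_buffer_copy() takes an independent snapshot.
buf = bytearray(b'abcd')
Shared = ctypes.c_char * 4
shared = Shared.from_buffer(buf)
snapshot = Shared.from_buffer_copy(buf)

buf[0:1] = b'Z'              # mutate the underlying buffer in place

shared_view = shared.raw     # sees the mutation
copied_view = snapshot.raw   # unaffected
```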
(Contributed by David Remahl; bpo-1649190.)

The ctypes string, buffer and array types have improved support for extended slicing syntax, where various combinations of (start, stop, step) are supplied. (Implemented by Thomas Wouters.)

All ctypes data types now support from_buffer() and from_buffer_copy() methods that create a ctypes instance based on a provided buffer object. from_buffer_copy() copies the contents of the object, while from_buffer() will share the same memory area.

A new calling convention tells ctypes to clear the errno or Win32 LastError variables at the outset of each wrapped call. (Implemented by Thomas Heller; bpo-1798.)

You can now retrieve the Unix errno variable after a function call. When creating a wrapped function, you can supply use_errno=True as a keyword parameter to the DLL() function and then call the module-level methods set_errno() and get_errno() to set and retrieve the error value.

The Win32 LastError variable is similarly supported by the DLL(), OleDLL(), and WinDLL() functions. You supply use_last_error=True as a keyword parameter and then call the module-level methods set_last_error() and get_last_error().

The byref() function, used to retrieve a pointer to a ctypes instance, now has an optional offset parameter that is a byte count that will be added to the returned pointer.

Improved SSL Support

Bill Janssen made extensive improvements to Python 2.6's support for the Secure Sockets Layer by adding a new module, ssl, that's built atop the OpenSSL library. This new module provides more control over the protocol negotiated, the X.509 certificates used, and has better support for writing SSL servers (as opposed to clients) in Python.
The existing SSL support in the socket module hasn't been removed and continues to work, though it will be removed in Python 3.0.

To use the new module, you must first create a TCP connection in the usual way and then pass it to the ssl.wrap_socket() function. It's possible to specify whether a certificate is required, and to obtain certificate info by calling the getpeercert() method.

See also

The documentation for the ssl module.

Deprecations and Removals

String exceptions have been removed. Attempting to use them raises a TypeError.

Changes to the Exception interface as dictated by PEP 352 continue to be made. For 2.6, the message attribute is being deprecated in favor of the args attribute.

(3.0-warning mode) Python 3.0 will feature a reorganized standard library that will drop many outdated modules and rename others. Python 2.6 running in 3.0-warning mode will warn about these modules when they are imported.

The list of deprecated modules is: audiodev, bgenlocations, buildtools, bundlebuilder, Canvas, compiler, dircache, dl, fpformat, gensuitemodule, ihooks, imageop, imgfile, linuxaudiodev, mhlib, mimetools, multifile, new, pure, statvfs, sunaudiodev, test.testall, and toaiff.

The gopherlib module has been removed.

The MimeWriter module and mimify module have been deprecated; use the email package instead.

The md5 module has been deprecated; use the hashlib module instead.

The posixfile module has been deprecated; fcntl.lockf() provides better locking.

The popen2 module has been deprecated; use the subprocess module.

The rgbimg module has been removed.

The sets module has been deprecated; it's better to use the built-in set and frozenset types.

The sha module has been deprecated; use the hashlib module instead.

Build and C API Changes

Changes to Python's build process and to the C API include:

Python now must be compiled with C89 compilers (after 19 years!).
This means that the Python source tree has dropped its own implementations of memmove() and strerror(), which are in the C89 standard library.

Python 2.6 can be built with Microsoft Visual Studio 2008 (version 9.0), and this is the new default compiler. See the PCbuild directory for the build files. (Implemented by Christian Heimes.)

On Mac OS X, Python 2.6 can be compiled as a 4-way universal build. The configure script can take a --with-universal-archs=[32-bit|64-bit|all] switch, controlling whether the binaries are built for 32-bit architectures (x86, PowerPC), 64-bit (x86-64 and PPC-64), or both. (Contributed by Ronald Oussoren.)

A new function added in Python 2.6.6, PySys_SetArgvEx(), sets the value of sys.argv and can optionally update sys.path to include the directory containing the script named by sys.argv[0], depending on the value of an updatepath parameter.

This function was added to close a security hole for applications that embed Python. The old function, PySys_SetArgv(), would always update sys.path, and sometimes it would add the current directory. This meant that, if you ran an application embedding Python in a directory controlled by someone else, attackers could put a Trojan-horse module in the directory (say, a file named os.py) that your application would then import and run.

If you maintain a C/C++ application that embeds Python, check whether you're calling PySys_SetArgv() and carefully consider whether the application should be using PySys_SetArgvEx() with updatepath set to false.
Note that using this function will break compatibility with Python versions 2.6.5 and earlier; if you have to continue working with earlier versions, you can leave the call to PySys_SetArgv() alone and call PyRun_SimpleString("sys.path.pop(0)\n") afterwards to discard the first sys.path component.

Security issue reported as CVE-2008-5983; discussed in gh-50003, and fixed by Antoine Pitrou.

The BerkeleyDB module now has a C API object, available as bsddb.db.api. This object can be used by other C extensions that wish to use the bsddb module for their own purposes. (Contributed by Duncan Grisby.)

The new buffer interface, previously described in the PEP 3118 section, adds PyObject_GetBuffer() and PyBuffer_Release(), as well as a few other functions.

Python's use of the C stdio library is now thread-safe, or at least as thread-safe as the underlying library is. A long-standing potential bug occurred if one thread closed a file object while another thread was reading from or writing to the object. In 2.6 file objects have a reference count, manipulated by the PyFile_IncUseCount() and PyFile_DecUseCount() functions. File objects can't be closed unless the reference count is zero. PyFile_IncUseCount() should be called while the GIL is still held, before carrying out an I/O operation using the FILE * pointer, and PyFile_DecUseCount() should be called immediately after the GIL is re-acquired. (Contributed by Antoine Pitrou and Gregory P. Smith.)

Importing modules simultaneously in two different threads no longer deadlocks; it will now raise an ImportError. A new API function, PyImport_ImportModuleNoBlock(), will look for a module in sys.modules first, then try to import it after acquiring an import lock. If the import lock is held by another thread, an ImportError is raised.
(Contributed by Christian Heimes.)

Several functions return information about the platform's floating-point support. PyFloat_GetMax() returns the maximum representable floating-point value, and PyFloat_GetMin() returns the minimum positive value. PyFloat_GetInfo() returns an object containing more information from the float.h file, such as "mant_dig" (number of digits in the mantissa), "epsilon" (smallest difference between 1.0 and the next largest value representable), and several others. (Contributed by Christian Heimes; bpo-1534.)

C functions and methods that use PyComplex_AsCComplex() will now accept arguments that have a __complex__() method. In particular, the functions in the cmath module will now accept objects with this method. This is a backport of a Python 3.0 change. (Contributed by Mark Dickinson; bpo-1675423.)

Python's C API now includes two functions for case-insensitive string comparisons, PyOS_stricmp(char*, char*) and PyOS_strnicmp(char*, char*, Py_ssize_t). (Contributed by Christian Heimes; bpo-1635.)

Many C extensions define their own little macro for adding integers and strings to the module's dictionary in the init* function. Python 2.6 finally defines standard macros for adding values to a module, PyModule_AddStringMacro and PyModule_AddIntMacro(). (Contributed by Christian Heimes.)

Some macros were renamed in both 3.0 and 2.6 to make it clearer that they are macros, not functions. Py_Size() became Py_SIZE(), Py_Type() became Py_TYPE(), and Py_Refcnt() became Py_REFCNT(). The mixed-case macros are still available in Python 2.6 for backward compatibility. (bpo-1629)

Distutils now places C extensions it builds in a different directory when running on a debug version of Python. (Contributed by Collin Winter; bpo-1530959.)

Several basic data types, such as integers and strings, maintain internal free lists of objects that can be re-used.
The data structures for these free lists now follow a naming convention: the variable is always named free_list, the counter is always named numfree, and a macro Py_MAXFREELIST is always defined.

A new Makefile target, "make patchcheck", prepares the Python source tree for making a patch: it fixes trailing whitespace in all modified .py files, checks whether the documentation has been changed, and reports whether the Misc/ACKS and Misc/NEWS files have been updated. (Contributed by Brett Cannon.)

Another new target, "make profile-opt", compiles a Python binary using GCC's profile-guided optimization. It compiles Python with profiling enabled, runs the test suite to obtain a set of profiling results, and then compiles using these results for optimization. (Contributed by Gregory P. Smith.)

Port-Specific Changes: Windows

The support for Windows 95, 98, ME and NT4 has been dropped. Python 2.6 requires at least Windows 2000 SP4.

The new default compiler on Windows is Visual Studio 2008 (version 9.0). The build directories for Visual Studio 2003 (version 7.1) and 2005 (version 8.0) were moved into the PC/ directory. The new PCbuild directory supports cross compilation for X64, debug builds and Profile Guided Optimization (PGO). PGO builds are roughly 10% faster than normal builds. (Contributed by Christian Heimes with help from Amaury Forgeot d'Arc and Martin von Löwis.)

The msvcrt module now supports both the normal and wide char variants of the console I/O API. The getwch() function reads a keypress and returns a Unicode value, as does the getwche() function. The putwch() function takes a Unicode character and writes it to the console. (Contributed by Christian Heimes.)

os.path.expandvars() will now expand environment variables in the form "%var%", and "~user" will be expanded into the user's home directory path.
(Contributed by Josiah Carlson; bpo-957650.)

The socket module's socket objects now have an ioctl() method that provides a limited interface to the WSAIoctl() system interface.

The _winreg module now has a function, ExpandEnvironmentStrings(), that expands environment variable references such as %NAME% in an input string. The handle objects provided by this module now support the context protocol, so they can be used in with statements. (Contributed by Christian Heimes.)

_winreg also has better support for x64 systems, exposing the DisableReflectionKey(), EnableReflectionKey(), and QueryReflectionKey() functions, which enable and disable registry reflection for 32-bit processes running on 64-bit systems. (bpo-1753245)

The msilib module's Record object gained GetInteger() and GetString() methods that return field values as an integer or a string. (Contributed by Floris Bruynooghe; bpo-2125.)

Port-Specific Changes: Mac OS X

When compiling a framework build of Python, you can now specify the framework name to be used by providing the --with-framework-name= option to the configure script.

The macfs module has been removed. This in turn required the macostools.touched() function to be removed because it depended on the macfs module.
(bpo-1490190)

Many other Mac OS modules have been deprecated and will be removed in Python 3.0: _builtinSuites, aepack, aetools, aetypes, applesingle, appletrawmain, appletrunner, argvemulator, Audio_mac, autoGIL, Carbon, cfmfile, CodeWarrior, ColorPicker, EasyDialogs, Explorer, Finder, FrameWork, findertools, ic, icglue, icopen, macerrors, MacOS, macfs, macostools, macresource, MiniAEFrame, Nav, Netscape, OSATerminology, pimp, PixMapWrapper, StdSuites, SystemEvents, Terminal, and terminalcommand.

Port-Specific Changes: IRIX

A number of old IRIX-specific modules were deprecated and will be removed in Python 3.0: al and AL, cd, cddb, cdplayer, CL and cl, DEVICE, ERRNO, FILE, FL and fl, flp, fm, GET, GLWS, GL and gl, IN, IOCTL, jpeg, panelparser, readcd, SV and sv, torgb, videoreader, and WAIT.

Porting to Python 2.6

This section lists previously described changes and other bugfixes that may require changes to your code:

Classes that aren't supposed to be hashable should set __hash__ = None in their definitions to indicate the fact.

String exceptions have been removed. Attempting to use them raises a TypeError.

The __init__() method of collections.deque now clears any existing contents of the deque before adding elements from the iterable. This change makes the behavior match list.__init__().

object.__init__() previously accepted arbitrary arguments and keyword arguments, ignoring them. In Python 2.6, this is no longer allowed and will result in a TypeError. This will affect __init__() methods that end up calling the corresponding method on object (perhaps through using super()). See bpo-1683368 for discussion.

The Decimal constructor now accepts leading and trailing whitespace when passed a string. Previously it would raise an InvalidOperation exception.
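A quick sketch of the two Decimal behaviours, using the modern decimal module (the ConversionSyntax condition surfaces as InvalidOperation, which is the exception you actually catch):

```python
from decimal import Decimal, Context, InvalidOperation

# The Decimal constructor tolerates surrounding whitespace...
value = Decimal('  3.14  ')

# ...while Context.create_decimal() rejects it.
try:
    Context().create_decimal('  3.14  ')
    rejected = False
except InvalidOperation:
    rejected = True
```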
On the other hand, the create_decimal() method of Context objects now explicitly disallows extra whitespace, raising a ConversionSyntax exception.

Due to an implementation accident, if you passed a file path to the built-in __import__() function, it would actually import the specified file. This was never intended to work, however, and the implementation now explicitly checks for this case and raises an ImportError.

C API: the PyImport_Import() and PyImport_ImportModule() functions now default to absolute imports, not relative imports. This will affect C extensions that import other modules.

C API: extension data types that shouldn't be hashable should define their tp_hash slot to PyObject_HashNotImplemented().

The socket module exception socket.error now inherits from IOError. Previously it wasn't a subclass of StandardError but now it is, through IOError. (Implemented by Gregory P. Smith; bpo-1706815.)

The xmlrpclib module no longer automatically converts datetime.date and datetime.time to the xmlrpclib.DateTime type; the conversion semantics were not necessarily correct for all applications. Code using xmlrpclib should convert date and time instances. (bpo-1330538)

(3.0-warning mode) The Exception class now warns when accessed using slicing or index access; having Exception behave like a tuple is being phased out.

(3.0-warning mode) Inequality comparisons between two dictionaries or two objects that don't implement comparison methods are reported as warnings. dict1 == dict2 still works, but dict1 < dict2 is being phased out.

Comparisons between cells, which are an implementation detail of Python's scoping rules, also cause warnings because such comparisons are forbidden entirely in 3.0.

For applications that embed Python:

The PySys_SetArgvEx() function was added in Python 2.6.6, letting applications close a security hole when the existing PySys_SetArgv() function was used.
Check whether you're calling PySys_SetArgv() and carefully consider whether the application should be using PySys_SetArgvEx() with updatepath set to false.

Acknowledgements

The author would like to thank the following people for offering suggestions, corrections and assistance with various drafts of this article: Georg Brandl, Steve Brown, Nick Coghlan, Ralph Corderoy, Jim Jewett, Kent Johnson, Chris Lambacher, Martin Michlmayr, Antoine Pitrou, Brian Warner.
", "\n\n ", "\n ", " ", " ", " ", " ", " ", "\n ", " ", " ", "\n\n ", "\n ", "\n\n ", "\n ", "\n\n ", "\n ", " ", " ", " ", " ", "\n ", " ", " ", "\n", " ", "\n", " ", "\n", " ", "\n", " ", "\n", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", "\n", "\n\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", " ", " ", "\n", " ", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n ", " ", " ", "\n ", " ", "\n ", "\n ", " ", "\n", " ", "\n", "\n", " ", "\n", " ", " ", "\n", " ", " ", " ", "\n", "\n ", "\n", " ", " ", " ", "\n ", "\n", "\n ", "\n", " ", " ", "\n ", "\n", "\n ", "\n", " ", " ", " ", "\n ", "\n", " ", "\n\n", " ", " ", "\n ", "\n\n", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n\n", "\n ", "\n", "\n\n", "\n ", "\n\n", "\n", "\n", "\n", "\n", "\n", "\n ", " ", " ", " ", "\n ", " ", " ", " ", "\n", " ", " ", "\n\n", "\n ", " ", " ", "\n\n ", "\n ", " ", " ", " ", "\n ", "\n\n ", " ", " ", "\n ", " ", " ", "\n\n\n", "\n ", " ", " ", " ", "\n ", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", "\n File ", ", line ", ", in ", "\n", ": ", "\n", "\n", " ", "\n", "\n\n", "\n", "\n ", " ", "\n", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", "\n", "\n", "\n ", "\n", "\n ", "\n\n", " ", " ", "\n", " ", "\n", " ", " ", " ", "\n", " ", " ", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", " ", "\n", " ", " ", "\n", "\n", "\n", " ", "\n", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n ", "\n ", "\n ", " ", "\n\n ", "\n ", " ", "\n ", " ", " ", "\n\n ", "\n ", 
"\n ", " ", "\n\n", "\n ", "\n ", "\n ", " ", " ", " ", "\n\n ", "\n ", " ", "\n ", " ", " ", " ", " ", "\n", "\n", " ", " ", "\n", "\n", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", "\n", " ", " ", " ", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", " ", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", " ", " ", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", "\n", "\n", " ", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", " ", "\n", "\n", "\n", " ", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n", "\n", " ", " ", " ", " ", "\n", "\n", "\n", " ", " ", "\n", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n ", " ", "\n", " ", " ", " ", "\n ", "\n", " ", "\n ", "\n ", " ", " ", "\n ", "\n", " ", " ", " ", "\n ", "\n ", "\n ", " ", " ", " ", "\n ", " ", " ", " ", " ", "\n", " ", " ", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", "\n", " ", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", " ", "\n", "\n", "\n", "\n", ": ", "\n", "\n", " ", " ", "\n\n", "\n", "\n", " ", "\n\n", "\n", "\n", "\n\n", " ", " ", "\n", "\n", "\n", "\n", "\n", "\n", " ", "\n", "\n ", "\n ", " ", "\n ", " ", " ", "\n ", " ", "\n ", " ", "\n ", "\n ", "\n ", " ", "\n ", "\n ", "\n ", " ", " ", "\n ", " ", " ", "\n ", " ", "\n ", "\n ", "\n ", "\n ", " ", " ", "\n ", " ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", " ", "\n ", "\n ", " ", " ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", " ", "\n ", "\n", " ", " ", "\n", " ", "\n", "\n", " ", "\n", "\n", "\n", ": ", "\n", "\n", " ", " ", " ", " ", " ", "\n", " ", " ", " ", "\n", "\n", "\n", " ", "\n", "\n", "\n", "\n", "\n\n", "\n", " ", " ", "\n ", "\n ", "\n\n", "\n", " ", " ", 
"\n", " ", " ", "\n", " ", "\n", " ", "\n\n", "\n", " ", "\n", " ", " ", "\n\n", "\n", " ", "\n"], "language": "Python", "source": "python.org", "token_count": 27955}
{"url": "https://docs.python.org/3/whatsnew/2.7.html", "title": "What\u2019s New in Python 2.7", "content": "What\u2019s New in Python 2.7\u00b6\n- Author:\nA.M. Kuchling (amk at amk.ca)\nThis article explains the new features in Python 2.7. Python 2.7 was released on July 3, 2010.\nNumeric handling has been improved in many ways, for both\nfloating-point numbers and for the Decimal\nclass.\nThere are some useful additions to the standard library, such as a\ngreatly enhanced unittest\nmodule, the argparse\nmodule\nfor parsing command-line options, convenient OrderedDict\nand Counter\nclasses in the collections\nmodule,\nand many other improvements.\nPython 2.7 is planned to be the last of the 2.x releases, so we worked on making it a good release for the long term. To help with porting to Python 3, several new features from the Python 3.x series have been included in 2.7.\nThis article doesn\u2019t attempt to provide a complete specification of the new features, but instead provides a convenient overview. For full details, you should refer to the documentation for Python 2.7 at https://docs.python.org. If you want to understand the rationale for the design and implementation, refer to the PEP for a particular new feature or the issue on https://bugs.python.org in which a change was discussed. Whenever possible, \u201cWhat\u2019s New in Python\u201d links to the bug/patch item for each change.\nThe Future for Python 2.x\u00b6\nPython 2.7 is the last major release in the 2.x series, as the Python maintainers have shifted the focus of their new feature development efforts to the Python 3.x series. 
This means that while Python 2 continues to receive bug fixes, and to be updated to build correctly on new hardware and versions of supported operating systems, there will be no new full feature releases for the language or standard library.

However, while there is a large common subset between Python 2.7 and Python 3, and many of the changes involved in migrating to that common subset, or directly to Python 3, can be safely automated, some other changes (notably those associated with Unicode handling) may require careful consideration, and preferably robust automated regression test suites, to migrate effectively.

This means that Python 2.7 will remain in place for a long time, providing a stable and supported base platform for production systems that have not yet been ported to Python 3. The full expected lifecycle of the Python 2.7 series is detailed in PEP 373.

Some key consequences of the long-term significance of 2.7 are:

As noted above, the 2.7 release has a much longer period of maintenance when compared to earlier 2.x versions. Python 2.7 is currently expected to remain supported by the core development team (receiving security updates and other bug fixes) until at least 2020 (10 years after its initial release, compared to the more typical support period of 18–24 months).

As the Python 2.7 standard library ages, making effective use of the Python Package Index (either directly or via a redistributor) becomes more important for Python 2 users. In addition to a wide variety of third party packages for various tasks, the available packages include backports of new modules and features from the Python 3 standard library that are compatible with Python 2, as well as various tools and libraries that can make it easier to migrate to Python 3.
The Python Packaging User Guide provides guidance on downloading and installing software from the Python Package Index.

While the preferred approach to enhancing Python 2 is now the publication of new packages on the Python Package Index, this approach doesn't necessarily work in all cases, especially those related to network security. In exceptional cases that cannot be handled adequately by publishing new or updated packages on PyPI, the Python Enhancement Proposal process may be used to make the case for adding new features directly to the Python 2 standard library. Any such additions, and the maintenance releases where they were added, will be noted in the New Features Added to Python 2.7 Maintenance Releases section below.

For projects wishing to migrate from Python 2 to Python 3, or for library and framework developers wishing to support users on both Python 2 and Python 3, there are a variety of tools and guides available to help decide on a suitable approach and manage some of the technical details involved. The recommended starting point is the How to port Python 2 Code to Python 3 HOWTO guide.

Changes to the Handling of Deprecation Warnings

For Python 2.7, a policy decision was made to silence warnings only of interest to developers by default. DeprecationWarning and its descendants are now ignored unless otherwise requested, preventing users from seeing warnings triggered by an application. This change was also made in the branch that became Python 3.2. (Discussed on stdlib-sig and carried out in bpo-7319.)

In previous releases, DeprecationWarning messages were enabled by default, providing Python developers with a clear indication of where their code may break in a future major version of Python.

However, there are increasingly many users of Python-based applications who are not directly involved in the development of those applications.
DeprecationWarning messages are irrelevant to such users, making them worry about an application that's actually working correctly and burdening application developers with responding to these concerns.

You can re-enable display of DeprecationWarning messages by running Python with the -Wdefault (short form: -Wd) switch, or by setting the PYTHONWARNINGS environment variable to "default" (or "d") before running Python. Python code can also re-enable them by calling warnings.simplefilter('default'). The unittest module also automatically re-enables deprecation warnings when running tests.

Python 3.1 Features

Much as Python 2.6 incorporated features from Python 3.0, version 2.7 incorporates some of the new features in Python 3.1. The 2.x series continues to provide tools for migrating to the 3.x series.

A partial list of 3.1 features that were backported to 2.7:

- The syntax for set literals ({1,2,3} is a mutable set).
- Dictionary and set comprehensions ({i: i*2 for i in range(3)}).
- Multiple context managers in a single with statement.
- A new version of the io library, rewritten in C for performance.
- The ordered-dictionary type described in PEP 372: Adding an Ordered Dictionary to collections.
- The new "," format specifier described in PEP 378: Format Specifier for Thousands Separator.
- The memoryview object.
- A small subset of the importlib module, described below.
- The repr() of a float x is shorter in many cases: it's now based on the shortest decimal string that's guaranteed to round back to x. As in previous versions of Python, it's guaranteed that float(repr(x)) recovers x.
- Float-to-string and string-to-float conversions are correctly rounded.
- The round() function is also now correctly rounded.
- The PyCapsule type, used to provide a C API for extension modules.
- The PyLong_AsLongAndOverflow() C API function.

Other new Python3-mode warnings include:

- operator.isCallable() and operator.sequenceIncludes(), which are not supported in 3.x, now trigger warnings.
- The -3 switch now automatically enables the -Qwarn switch that causes warnings about using classic division with integers and long integers.

PEP 372: Adding an Ordered Dictionary to collections

Regular Python dictionaries iterate over key/value pairs in arbitrary order. Over the years, a number of authors have written alternative implementations that remember the order that the keys were originally inserted. Based on the experiences from those implementations, 2.7 introduces a new OrderedDict class in the collections module.

The OrderedDict API provides the same interface as regular dictionaries but iterates over keys and values in a guaranteed order depending on when a key was first inserted:

>>> from collections import OrderedDict
>>> d = OrderedDict([('first', 1),
...                  ('second', 2),
...                  ('third', 3)])
>>> d.items()
[('first', 1), ('second', 2), ('third', 3)]

If a new entry overwrites an existing entry, the original insertion position is left unchanged:

>>> d['second'] = 4
>>> d.items()
[('first', 1), ('second', 4), ('third', 3)]

Deleting an entry and reinserting it will move it to the end:

>>> del d['second']
>>> d['second'] = 5
>>> d.items()
[('first', 1), ('third', 3), ('second', 5)]

The popitem() method has an optional last argument that defaults to True.
If last is true, the most recently added key is returned and removed; if it's false, the oldest key is selected:

>>> od = OrderedDict([(x, 0) for x in range(20)])
>>> od.popitem()
(19, 0)
>>> od.popitem()
(18, 0)
>>> od.popitem(last=False)
(0, 0)
>>> od.popitem(last=False)
(1, 0)

Comparing two ordered dictionaries checks both the keys and values, and requires that the insertion order was the same:

>>> od1 = OrderedDict([('first', 1),
...                    ('second', 2),
...                    ('third', 3)])
>>> od2 = OrderedDict([('third', 3),
...                    ('first', 1),
...                    ('second', 2)])
>>> od1 == od2
False
>>> # Move 'third' key to the end
>>> del od2['third']; od2['third'] = 3
>>> od1 == od2
True

Comparing an OrderedDict with a regular dictionary ignores the insertion order and just compares the keys and values.

How does the OrderedDict work? It maintains a doubly linked list of keys, appending new keys to the list as they're inserted. A secondary dictionary maps keys to their corresponding list node, so deletion doesn't have to traverse the entire linked list and therefore remains O(1).

The standard library now supports use of ordered dictionaries in several modules.

- The ConfigParser module uses them by default, meaning that configuration files can now be read, modified, and then written back in their original order.
- The _asdict() method for collections.namedtuple() now returns an ordered dictionary with the values appearing in the same order as the underlying tuple indices.
- The json module's JSONDecoder class constructor was extended with an object_pairs_hook parameter to allow OrderedDict instances to be built by the decoder.
Support was also added for third-party tools like PyYAML.

See also

PEP 372 - Adding an ordered dictionary to collections
    PEP written by Armin Ronacher and Raymond Hettinger; implemented by Raymond Hettinger.

PEP 378: Format Specifier for Thousands Separator

To make program output more readable, it can be useful to add separators to large numbers, rendering them as 18,446,744,073,709,551,616 instead of 18446744073709551616.

The fully general solution for doing this is the locale module, which can use different separators ("," in North America, "." in Europe) and different grouping sizes, but locale is complicated to use and unsuitable for multi-threaded applications where different threads are producing output for different locales.

Therefore, a simple comma-grouping mechanism has been added to the mini-language used by the str.format() method. When formatting a floating-point number, simply include a comma between the width and the precision:

>>> '{:20,.2f}'.format(18446744073709551616.0)
'18,446,744,073,709,551,616.00'

When formatting an integer, include the comma after the width:

>>> '{:20,d}'.format(18446744073709551616)
'18,446,744,073,709,551,616'

This mechanism is not adaptable at all; commas are always used as the separator and the grouping is always into three-digit groups. The comma-formatting mechanism isn't as general as the locale module, but it's easier to use.

See also

PEP 378 - Format Specifier for Thousands Separator
    PEP written by Raymond Hettinger; implemented by Eric Smith.

PEP 389: The argparse Module for Parsing Command Lines

The argparse module for parsing command-line arguments was added as a more powerful replacement for the optparse module.

This means Python now supports three different modules for parsing command-line arguments: getopt, optparse, and argparse.
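For comparison, the oldest of the three, getopt, is never demonstrated in this article, so here is a minimal sketch of its style; the -v and -o switches and the filename arguments are invented for illustration:

```python
import getopt

# Parse "-v -o out.txt file1 file2": short options are given as a spec
# string where a trailing colon means the option takes a value.
argv = ['-v', '-o', 'out.txt', 'file1', 'file2']
opts, remaining = getopt.getopt(argv, 'vo:')

print(opts)       # [('-v', ''), ('-o', 'out.txt')]
print(remaining)  # ['file1', 'file2']
```

Unlike argparse, getopt only tokenizes the argument list; validation, defaults, and help text are left entirely to the caller, which is part of why the article recommends argparse for new scripts.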
The getopt module closely resembles the C library's getopt() function, so it remains useful if you're writing a Python prototype that will eventually be rewritten in C. optparse becomes redundant, but there are no plans to remove it because there are many scripts still using it, and there's no automated way to update these scripts. (Making the argparse API consistent with optparse's interface was discussed but rejected as too messy and difficult.)

In short, if you're writing a new script and don't need to worry about compatibility with earlier versions of Python, use argparse instead of optparse.

Here's an example:

import argparse

parser = argparse.ArgumentParser(description='Command-line example.')

# Add optional switches
parser.add_argument('-v', action='store_true', dest='is_verbose',
                    help='produce verbose output')
parser.add_argument('-o', action='store', dest='output',
                    metavar='FILE',
                    help='direct output to FILE instead of stdout')
parser.add_argument('-C', action='store', type=int, dest='context',
                    metavar='NUM', default=0,
                    help='display NUM lines of added context')

# Allow any number of additional arguments.
parser.add_argument(nargs='*', action='store', dest='inputs',
                    help='input filenames (default is stdin)')

args = parser.parse_args()
print args.__dict__

Unless you override it, -h and --help switches are automatically added, and produce neatly formatted output:

-> ./python.exe argparse-example.py --help
usage: argparse-example.py [-h] [-v] [-o FILE] [-C NUM] [inputs [inputs ...]]

Command-line example.

positional arguments:
  inputs      input filenames (default is stdin)

optional arguments:
  -h, --help  show this help message and exit
  -v          produce verbose output
  -o FILE     direct output to FILE instead of stdout
  -C NUM      display NUM lines of added context

As with optparse, the command-line switches and arguments are returned as an object with attributes named by the dest
parameters:

-> ./python.exe argparse-example.py -v
{'output': None,
 'is_verbose': True,
 'context': 0,
 'inputs': []}

-> ./python.exe argparse-example.py -v -o /tmp/output -C 4 file1 file2
{'output': '/tmp/output',
 'is_verbose': True,
 'context': 4,
 'inputs': ['file1', 'file2']}

argparse has much fancier validation than optparse; you can specify an exact number of arguments as an integer, 0 or more arguments by passing '*', 1 or more by passing '+', or an optional argument with '?'. A top-level parser can contain sub-parsers to define subcommands that have different sets of switches, as in svn commit, svn checkout, etc. You can specify an argument's type as FileType, which will automatically open files for you and understands that '-' means standard input or output.

See also

argparse documentation
    The documentation page of the argparse module.

Migrating optparse code to argparse
    Part of the Python documentation, describing how to convert code that uses optparse.

PEP 389 - argparse - New Command Line Parsing Module
    PEP written and implemented by Steven Bethard.

PEP 391: Dictionary-Based Configuration For Logging

The logging module is very flexible; applications can define a tree of logging subsystems, and each logger in this tree can filter out certain messages, format them differently, and direct messages to a varying number of handlers.

All this flexibility can require a lot of configuration. You can write Python statements to create objects and set their properties, but a complex set-up requires verbose but boring code. logging also supports a fileConfig() function that parses a file, but the file format doesn't support configuring filters, and it's messier to generate programmatically.

Python 2.7 adds a dictConfig() function that uses a dictionary to configure logging.
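Before looking at a fuller configuration, here is a self-contained minimal sketch of the same idea; the 'app' logger name and the single console handler are invented for illustration, and the 'version' key must be 1:

```python
import logging
import logging.config

# Minimal dictConfig: one console handler attached to the root logger.
logging.config.dictConfig({
    'version': 1,  # configuration schema version; must be 1
    'formatters': {
        'brief': {'format': '%(levelname)s %(name)s: %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'brief',
            'level': 'INFO',
        },
    },
    'root': {'handlers': ['console'], 'level': 'INFO'},
})

# Writes "INFO app: configured via a dictionary" to stderr.
logging.getLogger('app').info('configured via a dictionary')
```

Because the whole configuration is a plain dictionary, it can just as easily be loaded from JSON or YAML as written inline.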
There are many ways to produce a dictionary from different sources: construct one with code; parse a file containing JSON; or use a YAML parsing library if one is installed. For more information see Configuration functions.

The following example configures two loggers, the root logger and a logger named "network". Messages sent to the root logger will be sent to the system log using the syslog protocol, and messages to the "network" logger will be written to a network.log file that will be rotated once the log reaches 1MB.

import logging
import logging.config

configdict = {
    'version': 1,    # Configuration schema in use; must be 1 for now
    'formatters': {
        'standard': {
            'format': ('%(asctime)s %(name)-15s '
                       '%(levelname)-8s %(message)s')}},

    'handlers': {'netlog': {'backupCount': 10,
                            'class': 'logging.handlers.RotatingFileHandler',
                            'filename': '/logs/network.log',
                            'formatter': 'standard',
                            'level': 'INFO',
                            'maxBytes': 1000000},
                 'syslog': {'class': 'logging.handlers.SysLogHandler',
                            'formatter': 'standard',
                            'level': 'ERROR'}},

    # Specify all the subordinate loggers
    'loggers': {
        'network': {
            'handlers': ['netlog']
        }
    },
    # Specify properties of the root logger
    'root': {
        'handlers': ['syslog']
    },
}

# Set up configuration
logging.config.dictConfig(configdict)

# As an example, log two error messages
logger = logging.getLogger('/')
logger.error('Database not found')

netlogger = logging.getLogger('network')
netlogger.error('Connection failed')

Three smaller enhancements to the logging module, all implemented by Vinay Sajip, are:

- The SysLogHandler class now supports syslogging over TCP. The constructor has a socktype parameter giving the type of socket to use, either socket.SOCK_DGRAM for UDP or socket.SOCK_STREAM for TCP. The default protocol remains UDP.
- Logger instances gained a getChild() method that retrieves a descendant logger using a relative path.
For example, once you retrieve a logger by doing log = getLogger('app'), calling log.getChild('network.listen') is equivalent to getLogger('app.network.listen').
- The LoggerAdapter class gained an isEnabledFor() method that takes a level and returns whether the underlying logger would process a message of that level of importance.

See also

PEP 391 - Dictionary-Based Configuration For Logging
    PEP written and implemented by Vinay Sajip.

PEP 3106: Dictionary Views

The dictionary methods keys(), values(), and items() are different in Python 3.x. They return an object called a view instead of a fully materialized list.

It's not possible to change the return values of keys(), values(), and items() in Python 2.7 because too much code would break. Instead the 3.x versions were added under the new names viewkeys(), viewvalues(), and viewitems().

>>> d = dict((i*10, chr(65+i)) for i in range(26))
>>> d
{0: 'A', 130: 'N', 10: 'B', 140: 'O', 20: ..., 250: 'Z'}
>>> d.viewkeys()
dict_keys([0, 130, 10, 140, 20, 150, 30, ..., 250])

Views can be iterated over, but the key and item views also behave like sets. The & operator performs intersection, and | performs a union:

>>> d1 = dict((i*10, chr(65+i)) for i in range(26))
>>> d2 = dict((i**.5, i) for i in range(1000))
>>> d1.viewkeys() & d2.viewkeys()
set([0.0, 10.0, 20.0, 30.0])
>>> d1.viewkeys() | range(0, 30)
set([0, 1, 130, 3, 4, 5, 6, ..., 120, 250])

The view keeps track of the dictionary and its contents change as the dictionary is modified:

>>> vk = d.viewkeys()
>>> vk
dict_keys([0, 130, 10, ..., 250])
>>> d[260] = '&'
>>> vk
dict_keys([0, 130, 260, 10, ..., 250])

However, note that you can't add or remove keys while you're iterating over the view:

>>> for k in vk:
...
...     d[k*2] = k
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: dictionary changed size during iteration

You can use the view methods in Python 2.x code, and the 2to3 converter will change them to the standard keys(), values(), and items() methods.

PEP 3137: The memoryview Object

The memoryview object provides a view of another object's memory content that matches the bytes type's interface.

>>> import string
>>> m = memoryview(string.letters)
>>> m